Provisioning environments in Titan

Titan Tutorial #5: Defining deployments straight from a Jupyter Notebook

One of our most important objectives at Akoios is to make daily life easier for Data Scientists.

To this end, Titan has been designed to enable Data Scientists to perform as many tasks as possible straight from the tools they use every day (e.g. Jupyter Notebook).

In this new installment of our tutorial series, we will see how to easily define the environment in which our models will run once they have been deployed.

Since Titan version 0.5, it is possible to easily define the environment we want our service to run on top of. Our approach has been to keep the configuration as simple and expressive as possible in order to reduce the hassle for Data Scientists.

As already mentioned, all this information can be defined directly in a Jupyter Notebook. To provision an environment, you just need to specify the following information in YAML format in a Markdown cell. A simple example is shown below:

titan: v1
image: scipy
cpu: 2
memory: 1024MB
command:
- pip install -r requirements.txt

The information in the cell specifies the following:

  • We will be using v1 of Titan.
  • We want the model to run in a scipy runtime.
  • Regarding hardware, we are requesting 2 cores and 1024MB of RAM memory.
  • Finally, we provide (if needed) a command to install the dependencies listed in our model's requirements.txt file.
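To make the mechanics concrete, here is a minimal sketch (not Titan's actual implementation) of how such a key/value configuration cell could be parsed in Python. The function name is purely illustrative:

```python
def parse_titan_config(cell_text):
    """Parse the simple key/value + list format used in the config cell."""
    config = {}
    last_key = None
    for line in cell_text.strip().splitlines():
        line = line.rstrip()
        if line.startswith("- ") and last_key:
            # A "- item" line belongs to the list started by the previous key.
            config.setdefault(last_key, []).append(line[2:])
        elif ":" in line:
            key, _, value = line.partition(":")
            last_key = key.strip()
            # An empty value (e.g. "command:") starts a list of items.
            config[last_key] = value.strip() or []
    return config

cell = """\
titan: v1
image: scipy
cpu: 2
memory: 1024MB
command:
- pip install -r requirements.txt
"""
print(parse_titan_config(cell))
```

Running this on the example cell yields a plain dictionary with the runtime, hardware, and command information ready to be consumed by a deployment tool.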

These are the available options for each of the parameters:

  • image: Preferred runtime for the model (scipy, tensorflow, datascience…).
  • cpu: Preferred number of cores.
  • memory: Desired memory in MB (NNNNMB). E.g. 1024MB, 2048MB…
  • command: Any arbitrary command.
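For reference, a configuration cell combining all four parameters might look like this (the image choice and values are purely illustrative):

```yaml
titan: v1
image: tensorflow
cpu: 4
memory: 2048MB
command:
- pip install -r requirements.txt
```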

If no YAML configuration is provided, a default hardware configuration will be provisioned (please note that this can be customized for each Titan installation):

  • Default hardware provisioning: 2 Cores — 2GB RAM

In the same manner, limits can be established for the hardware. E.g.:

  • Max hardware provisioning: 4 Cores — 8GB RAM
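The interplay between defaults and limits can be sketched in a few lines of Python. This is an illustration only, assuming the example values above (2 cores/2 GB default, 4 cores/8 GB maximum); the function and variable names are not Titan's real API:

```python
# Assumed per-installation policy (values from the example above).
DEFAULTS = {"cpu": 2, "memory_mb": 2048}   # default provisioning
LIMITS = {"cpu": 4, "memory_mb": 8192}     # max provisioning

def resolve_resources(requested=None):
    """Apply defaults, then reject any request above the installation limits."""
    resources = dict(DEFAULTS)
    resources.update(requested or {})
    for key, maximum in LIMITS.items():
        if resources[key] > maximum:
            raise ValueError(
                f"Requested {key}={resources[key]} exceeds limit {maximum}"
            )
    return resources

# A valid request within limits:
print(resolve_resources({"cpu": 2, "memory_mb": 1024}))
# No request at all falls back to the defaults:
print(resolve_resources())
```

A request exceeding the limits (e.g. 8 cores) would simply be rejected instead of silently capped; a real installation could of course choose a different policy.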

To illustrate this, here is the content of a sample Jupyter Notebook:

After the model has been deployed using the command

$ titan deploy

the service will be shown in the dashboard:

As can be seen in the highlighted area, the model has been correctly provisioned with the specified runtime (scipy) and hardware (2 cores and 1 GB of RAM).


In this post, we have seen how Titan makes hardware and runtime provisioning as easy as possible. It is also worth noting that arbitrary hardware limits and policies can be defined for every Titan installation to fit each company's needs.

Next Tutorial

Don’t miss our next tutorial, where we will see how to manage service versioning and deployments with Titan.


Titan can help you radically reduce and simplify the effort required to put AI/ML models into production, enabling Data Science teams to be more agile, more productive, and closer to the business impact of their work.

If you want to know more about getting started with Titan or request a free demo, please visit our website or drop us a line.

If you prefer, you can schedule a meeting with us here.

Akoios: Frictionless solutions for modern data science.




