A first approach to MLOps using Titan

Titan Tutorial #9: Integrating and consuming services in a healthcare use case

Machine learning techniques are increasingly attracting interest from the healthcare sector due to their multiple applications in this field.

From oncology screening to drug synthesis and voice assistants, Machine Learning is expected to play an important role in the coming years in the transformation and improvement of health systems.

In this tutorial we will illustrate how, starting from a ML model, we can build a basic MLOps system to manage the deployment and operation of a Breast Cancer Classification model.

ML has proven especially useful in the diagnosis of several types of cancer, as shown in recent news such as Google's approach to breast cancer detection.

For this tutorial, and in order to illustrate our first approach to MLOps with Titan, we will be using a simplified version of this interesting and very well explained model.

The aim of this model is to classify breast tumors into two categories, Benign (non-cancerous) and Malignant (cancerous), based on a popular labeled breast cancer dataset from the University of Wisconsin.

In order to perform this classification, we will use a Support Vector Machine (SVM), a binary linear classification technique able to draw a decision boundary that minimizes the generalization error.

SVM Visualization

If you are interested in knowing more about the technical details of SVMs, I recommend checking Andrew Ng’s lecture notes and videos about SVMs.
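To make the idea concrete, here is a minimal, self-contained sketch (not part of the tutorial model) of fitting a linear SVM on a toy 2-D dataset with scikit-learn:

```python
# Minimal SVM sketch on a toy, linearly separable 2-D dataset
from sklearn.svm import SVC

# Two clusters of points: class 0 (bottom-left) and class 1 (top-right)
X = [[0, 0], [1, 0], [0, 1], [3, 3], [4, 3], [3, 4]]
y = [0, 0, 0, 1, 1, 1]

# A linear kernel draws a straight decision boundary between the clusters
clf = SVC(kernel="linear", C=1.0)
clf.fit(X, y)

print(clf.predict([[0.5, 0.5], [3.5, 3.5]]))  # → [0 1]
```

A point near each cluster falls on the corresponding side of the learned boundary.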

Regarding the features, we need to define “y” (the feature we are trying to predict) and “X” (the predictor features). In our case, we have:

  • y: Our target feature will be whether the tumor is Benign or Malignant.
  • X: We will use the rest of the columns to make our prediction.

More specifically, the columns of the dataset (with id and diagnosis excluded from the predictors) are the following:

  • id: ID number
  • diagnosis: The diagnosis of breast tissues (M = malignant, B = benign)
  • radius_mean: mean of distances from center to points on the perimeter
  • texture_mean: standard deviation of gray-scale values
  • perimeter_mean: mean size of the core tumor
  • area_mean
  • smoothness_mean: mean of local variation in radius lengths
  • compactness_mean: mean of perimeter^2 / area - 1.0
  • concavity_mean: mean of severity of concave portions of the contour
  • concave points_mean: mean for number of concave portions of the contour
  • symmetry_mean
  • fractal_dimension_mean: mean for "coastline approximation" - 1
  • radius_se: standard error for the mean of distances from center to points on the perimeter
  • texture_se: standard error for standard deviation of gray-scale values
  • perimeter_se
  • area_se
  • smoothness_se: standard error for local variation in radius lengths
  • compactness_se: standard error for perimeter^2 / area - 1.0
  • concavity_se: standard error for severity of concave portions of the contour
  • concave points_se: standard error for number of concave portions of the contour
  • symmetry_se
  • fractal_dimension_se: standard error for "coastline approximation" - 1
  • radius_worst: "worst" or largest mean value for mean of distances from center to points on the perimeter
  • texture_worst: "worst" or largest mean value for standard deviation of gray-scale values
  • perimeter_worst
  • area_worst
  • smoothness_worst: "worst" or largest mean value for local variation in radius lengths
  • compactness_worst: "worst" or largest mean value for perimeter^2 / area - 1.0
  • concavity_worst: "worst" or largest mean value for severity of concave portions of the contour
  • concave points_worst: "worst" or largest mean value for number of concave portions of the contour
  • symmetry_worst
  • fractal_dimension_worst: "worst" or largest mean value for "coastline approximation" - 1

For the training, we will allow the model to be trained using three different approaches:

  • Basic SVM fitting
  • SVM fitting with normalized data
  • GridSearch fitting

Let’s now dive into the interesting part of the tutorial. Our idea is to build a basic MLOps system around our SVM classification model.

Imagining a real-world use case for a model like this, we could need the following services and features:

  • A prediction service: To estimate a diagnostic for new input data.
  • An updating service: To retrieve new samples from a dynamic data source.
  • A training service: To re-train the model with the new data.
  • A performance tracking service: To track the performance of the model after having it trained with new data.
  • A notification service: To warn the user if the performance of the model goes below a certain threshold.

Let’s see now how to put all these pieces together to build our system. The first thing is to build our model in a Jupyter Notebook so we can define the services to be provided with Titan.

The required services are depicted in the figure below:

The different endpoints which will be defined

Let’s now see the functions which implement each of these endpoints:


This endpoint allows us to update the dataset by reading it again from the data source and preparing it for the SVM training:

import pandas as pd
from sklearn.model_selection import train_test_split

def update_data():
    # Make the prepared data available to the other endpoints
    global X_train, X_test, y_train, y_test, X_train_scaled, X_test_scaled
    # Read and prepare data
    df_cancer = pd.read_csv("https://storage.googleapis.com/tutorial-datasets/data.csv")
    df_cancer = df_cancer.drop(['id'], axis=1)
    df_cancer.drop("Unnamed: 32", axis=1, inplace=True)
    # Define X and y
    X = df_cancer.drop(['diagnosis'], axis=1)
    pd.set_option('display.max_columns', None)
    df_cancer['diagnosis'] = df_cancer['diagnosis'].replace({'M': 0.0, 'B': 1.0})
    y = df_cancer['diagnosis']
    # Split dataset
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=20)
    # Scale data to the [0, 1] range (min-max scaling)
    X_train_min = X_train.min()
    X_train_max = X_train.max()
    X_train_range = X_train_max - X_train_min
    X_train_scaled = (X_train - X_train_min) / X_train_range
    X_test_min = X_test.min()
    X_test_range = (X_test - X_test_min).max()
    X_test_scaled = (X_test - X_test_min) / X_test_range
    return "Dataset successfully updated"
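As a quick sanity check of the min-max scaling used above, here is a tiny standalone example (the column name and values are illustrative only):

```python
import pandas as pd

# Toy column standing in for one of the predictor features
X_train = pd.DataFrame({"radius_mean": [10.0, 15.0, 20.0]})

# Same min-max scaling as in update_data: maps each column to [0, 1]
X_train_min = X_train.min()
X_train_range = X_train.max() - X_train_min
X_train_scaled = (X_train - X_train_min) / X_train_range

print(X_train_scaled["radius_mean"].tolist())  # → [0.0, 0.5, 1.0]
```

The minimum of each column lands on 0, the maximum on 1, and everything else in between.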


This endpoint allows the model to be retrained using one of the three available training options:

  • Basic SVM fitting
  • SVM fitting with normalized data
  • GridSearch fitting
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

def train_svm(args, body):
    global y_predict, svc_model
    # Training mode passed as a query parameter; defaults to "basic"
    mode = args.get('param', ['basic'])

    if mode[0] == "basic":
        svc_model.fit(X_train, y_train)
        y_predict = svc_model.predict(X_test)

    if mode[0] == "normalized":
        svc_model.fit(X_train_scaled, y_train)
        y_predict = svc_model.predict(X_test_scaled)

    if mode[0] == "gridsearch":
        param_grid = {'C': [0.1, 1, 10, 100], 'gamma': [1, 0.1, 0.01, 0.001], 'kernel': ['rbf']}
        grid = GridSearchCV(SVC(), param_grid, refit=True, verbose=4)
        grid.fit(X_train_scaled, y_train)  # fit the grid before predicting
        y_predict = grid.predict(X_test_scaled)
        svc_model = grid.best_estimator_  # keep the best model for the other endpoints

    return y_predict, svc_model
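If you want to try the gridsearch option in isolation, here is a self-contained sketch that uses scikit-learn's bundled copy of the same Wisconsin dataset (load_breast_cancer) instead of the tutorial's CSV:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.preprocessing import MinMaxScaler
from sklearn.svm import SVC

# scikit-learn's bundled copy of the Wisconsin breast cancer dataset
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=20)

# Min-max scale, fitting the scaler on the training split only
scaler = MinMaxScaler().fit(X_train)
X_train_scaled = scaler.transform(X_train)
X_test_scaled = scaler.transform(X_test)

# Same search space as the train_svm endpoint
param_grid = {'C': [0.1, 1, 10, 100], 'gamma': [1, 0.1, 0.01, 0.001], 'kernel': ['rbf']}
grid = GridSearchCV(SVC(), param_grid, refit=True, verbose=0)
grid.fit(X_train_scaled, y_train)

print(grid.best_params_)
print("test accuracy:", grid.score(X_test_scaled, y_test))
```

With refit=True, the grid object is itself a trained model using the best parameter combination found.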

Please note that we are using the @endpoint decorator (as shown in the previous tutorial) to process the request data containing the training mode passed as a parameter.


This endpoint returns the estimated classification of a tumor (‘Benign’ or ‘Malign’) based on the input data passed as a parameter.

def prediction(args, body):
    # Predict once and map the 0.0/1.0 encoding back to a label
    result = svc_model.predict(body.get('data', []))[0]
    if result == 1.0:
        return "Benign"
    elif result == 0.0:
        return "Malign"
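The 0.0/1.0 encoding mirrors the replace({'M': 0.0, 'B': 1.0}) step of the data preparation. A tiny standalone restatement of the mapping logic (label_for is a hypothetical helper, not part of the tutorial code):

```python
def label_for(prediction):
    # 1.0 → Benign, 0.0 → Malign, mirroring the encoding used in update_data
    return "Benign" if prediction == 1.0 else "Malign"

print(label_for(1.0))  # → Benign
print(label_for(0.0))  # → Malign
```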


This endpoint returns a global report of the most relevant performance metrics of the model in the following format:

              precision    recall  f1-score   support

         0.0       1.00      0.98      0.99        48
         1.0       0.99      1.00      0.99        66

   micro avg       0.99      0.99      0.99       114
   macro avg       0.99      0.99      0.99       114
weighted avg       0.99      0.99      0.99       114

The code of the function for this endpoint is pretty straightforward:

from sklearn.metrics import classification_report

def show_performance():
    if y_predict is not None:
        return classification_report(y_test, y_predict)
    return "Model not trained yet, make a call to /train_svm to train it"


Due to the nature of the model, we have decided to track the recall value as the main performance metric to trigger the alerts.

If we look at a confusion matrix:

Confusion Matrix

And at the definitions of precision and recall:

Precision and Recall

In our case, it makes sense to focus on recall, since it is important to track and minimize the rate of False Negatives (Malignant tumors classified as Benign).
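A small sketch with toy labels (not the tutorial's data) shows how recall, unlike precision, directly penalizes those False Negatives:

```python
from sklearn.metrics import confusion_matrix, precision_score, recall_score

# Toy labels: 0.0 = Malign, 1.0 = Benign; two malignant tumors are missed
y_true = [0.0] * 10 + [1.0] * 10
y_pred = [0.0] * 8 + [1.0] * 2 + [1.0] * 10

# Rows: true class, columns: predicted class
print(confusion_matrix(y_true, y_pred))

# Precision for the Malign class is perfect (no false alarms)...
print(precision_score(y_true, y_pred, pos_label=0.0))  # → 1.0
# ...but recall exposes the two missed malignant tumors
print(recall_score(y_true, y_pred, pos_label=0.0))     # → 0.8
```

Here precision stays at 1.0 while recall drops to 0.8, which is exactly the kind of degradation we want the alerts to catch.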

You can see the whole code here:

As usual, once the model has been coded and the endpoints have been defined, the service can be easily deployed with Titan using a single command:

$ titan deploy

As it has been already stated, the aim of this tutorial is to build a basic MLOps system based on the set of services we have deployed with Titan.

Since all the services of our model are easily accessible through a REST API, it is quite easy to build pipelines to perform the processes we intend to.

For our system, we need to recurrently (e.g. on a daily basis):

  1. Update our dataset from the data source
  2. Re-train the model with the new data
  3. Evaluate the recall of the model and send an alert email in case the recall goes below a threshold (e.g. 90%)
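Before automating these steps, the daily routine can be sketched end to end in Python. This is only a sketch: call_endpoint and notify are stand-ins for the real HTTP calls and the alert email:

```python
# Sketch of the daily pipeline; call_endpoint stands in for an HTTP
# GET/POST against the deployed Titan services, notify for the email alert.
RECALL_THRESHOLD = 0.9

def run_daily_pipeline(call_endpoint, notify):
    call_endpoint("/update_data")                    # 1. refresh the dataset
    call_endpoint("/train_svm", param="normalized")  # 2. re-train the model
    recall = float(call_endpoint("/get_recall"))     # 3. evaluate the recall
    if recall < RECALL_THRESHOLD:
        notify(f"Model recall dropped to {recall:.2f}")
    return recall

# Stubbed run: pretend the service reports a recall of 0.85
alerts = []
fake_service = lambda endpoint, **params: "0.85" if endpoint == "/get_recall" else "ok"
recall = run_daily_pipeline(fake_service, alerts.append)
print(recall, alerts)  # → 0.85 ['Model recall dropped to 0.85']
```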

We will use the simplest approach to automate these tasks, combining bash scripting and cron.

Let’s see how we could carry out each of the actions:

Update our dataset from the data source

To achieve this, we need to make recurrent calls to the /update_data endpoint. For that, we just need to create a very simple bash script with a curl command to make the API call.

#!/bin/bash
# Simple script to programmatically update the model dataset
curl https://services.demo.akoios.com/bcc/update_data

To have this script automatically executed, we can use a line like this in the crontab file of our computer or server:

0 8 * * * sh /route/to/script/update_data.sh

This will make the script run every day at 8 am, just as we needed!

Re-train the model with the new data

After retrieving the daily data, we need to re-train the model with these new inputs. To this end, we will take an approach similar to the one we have just seen:

#!/bin/bash
# Simple script to programmatically retrain the model
curl -X POST "https://services.demo.akoios.com/bcc/train_svm?param=$1" -H "accept: text/plain" -H "Content-Type: text/plain" -d "{}"

Note that this script takes one input parameter to train the SVM using one of the three options: basic, normalized and gridsearch.

Our cron line can be as follows:

5 8 * * * sh /route/to/script/train.sh normalized

This line will execute the script every day at 8:05 am.

Evaluate the recall and send an email if it falls below the threshold

In this last script, we will define the logic to:

  • Track the recall of the model
  • Send an email if it goes below 90%

A script to implement this could be as follows:

#!/bin/bash
# Simple script to track the recall and send alerts
recall=$(curl -X GET "https://services.demo.akoios.com/bcc/get_recall" -H "accept: application/json")
threshold=0.9
if (( $(echo "$recall < $threshold" | bc -l) )); then
  curl --url 'smtps://smtp.gmail.com:465' --ssl-reqd --mail-from 'mail@mail.com' --mail-rcpt 'mail@mail.com' --user 'mail@mail.com:Password' -T "mailcontent.txt"
fi

As for the rest of the cases, we can set a cron task to run this script:

6 8 * * * sh /route/to/script/recall_tracking.sh

With this cron entry, we will track the recall value right after the new training has finished.

The following figure summarizes the basic architecture of our system:

Our system architecture

You can check all the code from the Jupyter Notebook and the scripts in this GitHub repository.


In this post we have built a basic system around a classification model to illustrate how Titan can be useful to automate and improve all kinds of processes.

Using basic tools such as bash scripts and cron, we have been able to articulate and coordinate the different services to obtain the required features.

We hope you enjoyed this tutorial! Thanks for reading!


Titan can help you to radically reduce and simplify the effort required to put AI/ML models into production, enabling Data Science teams to be agile, more productive and closer to the business impact of their developments.

If you want to know more about how to start using Titan or getting a free demo, please visit our website or drop us a line at info@akoios.com.

If you prefer, you can schedule a meeting with us here.

Akoios: Frictionless solutions for modern data science.




