Deploying our first Machine Learning model using Titan

Akoios
6 min read · Oct 22, 2019

Titan Tutorial #2: Getting started with ML

In our previous post, we talked about how to deploy a very simple “Hello world” service, using Titan to transform a piece of Python code into an API endpoint ready to be consumed.

Now, we are ready to start working with more complex programs like a Python ML model. Ready? Let’s get started!

In this post, we’ll implement and deploy a multiple linear regression model to illustrate how we can transform an ML model into a ready-to-use service.

We will use two very common libraries in Data Science:

  • Pandas (an open-source library for data manipulation and analysis)
  • LinearRegression from scikit-learn (a widely used machine learning library)
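
If you do not have these libraries installed yet, they can be installed with pip (assuming a standard Python environment):

$ pip install pandas scikit-learn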

Let’s dive now into the code of our Notebook.

First of all, we import the aforementioned libraries:

import pandas as pd
from sklearn.linear_model import LinearRegression
import json

The next step is to read the data we will be using in our model. In this case, we will be using an Advertising Dataset containing information about the relationship between different advertising channels (TV, Radio & Newspaper) and the corresponding sales.

In our model, we will read the CSV file containing the data from a public GitLab repository and store it in a Pandas DataFrame:

# Reading the dataset from a Gitlab repo
url = "https://gitlab.com/jfuentesibanez/datasets/raw/master/regression_tutorial/advertising.csv"
df = pd.read_csv(url)

It is convenient to briefly explore the DataFrame to check its contents:

# Data exploration
df.head()

This cell will return the first 5 rows of our dataset:

+-------+-------+-----------+-------+
|  TV   | Radio | Newspaper | Sales |
+-------+-------+-----------+-------+
| 230.1 | 37.8  | 69.2      | 22.1  |
| 44.5  | 39.3  | 45.1      | 10.4  |
| 17.2  | 45.9  | 69.3      | 12.0  |
| 151.5 | 41.3  | 58.5      | 16.5  |
| 180.8 | 10.8  | 58.4      | 17.9  |
+-------+-------+-----------+-------+

This dataset represents the relation between the investment in different advertising channels and the corresponding sales.
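
If you want to go a bit further than head(), a couple of optional checks can help confirm that the data loaded as expected. This is just a quick sketch, not part of the original notebook:

# Optional quick checks on the loaded DataFrame
print(df.shape)       # Number of rows and columns
print(df.describe())  # Basic summary statistics for each column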

Now, it is time to split our dataset to prepare it for the training of our Linear Regression model. We split the data into predictors (the first three columns) and the output (the last column):

# In order to build the LR model we will use the first three columns as predictors (TV, Radio & Newspaper)
predictors = ['TV', 'Radio', 'Newspaper']
X = df[predictors]
y = df['Sales']

Once the data has been split, we can easily train our model using the LinearRegression() class from sklearn, which implements ordinary least squares linear regression (you can find more information in the documentation).

# Model fitting and initialization
lm = LinearRegression()
model = lm.fit(X, y)

The prediction formula can be written as:

Sales = α + β₁*TV + β₂*Radio + β₃*Newspaper

In order to see the calculated coefficients for the model, we can run the following:

# Now we can see the coefficients of our model 
print(f'alpha = {model.intercept_}')
print(f'betas = {model.coef_}')
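
As a quick sanity check (a sketch that is not part of the original notebook), we can rebuild a prediction by hand from the intercept and coefficients and compare it with the output of model.predict:

# Sanity check (illustrative): recompute a prediction manually from the fitted coefficients
sample = [200, 100, 100]  # TV, Radio, Newspaper
manual_prediction = model.intercept_ + sum(
    coef * value for coef, value in zip(model.coef_, sample)
)
# This should match model.predict([[200, 100, 100]])
print(manual_prediction)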

After that, with the model already trained, we can easily make predictions based on arbitrary input data as shown below:

# And make predictions based on arbitrary data 
input_params = [[200, 100, 100]]
print(model.predict(input_params))

In this piece of code, we are requesting a prediction for an investment of:

  • 200 monetary units in TV advertising
  • 100 monetary units in Radio advertising
  • 100 monetary units in Newspaper advertising

Now we have a fully working model!

The full code of the notebook can be found here:

If you prefer, you can clone this GitHub repository with the code.

Imagine now that you want to integrate this model with another corporate application in your company.

In order to do that, we need to transform this model into a service, and that is what we are going to do next using Titan, the product we have built at Akoios for this precise purpose.

Before being able to deploy the model, we need to make a few adjustments to our Python code so Titan can understand how to deploy it.

As we saw in our previous post, it is now time to “instrumentalize” our model. In this case, we need to send data (the input values) to the service in order to get a response (the expected sales), so we will be using a POST method.

# POST /prediction
body = json.loads(REQUEST)['body']

# Predict the output for a new sample. Function to be exposed through Titan
input_params = body['data']
print(model.predict(input_params))

What’s going on in the code above? Well, first of all, we have defined the HTTP method (POST in this case) and the name of the endpoint (prediction). With this information Titan will:

  • Execute all the Python code in this cell every time a POST request is made to the /prediction endpoint.
  • Return the result of the last print statement to the caller.
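
Just as an illustration, once the service is deployed, a request to this endpoint could look like the sketch below. The base URL is only a placeholder and the exact path will depend on your Titan deployment:

$ curl -X POST "https://your-domain-name/regression/prediction" \
       -H "Content-Type: application/json" \
       -d '{"data": [[200, 100, 100]]}'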

You might be wondering how you could test your model and its results prior to deployment. If you want to do some testing before deploying, you can create a mock data structure in a cell to replicate the information that would be received in a POST request:

# Mock request object for local API testing
headers = {
    'content-type': 'application/json'
}
body = {
    'data': [[200, 100, 100]]
}
REQUEST = json.dumps({'headers': headers, 'body': body})

After running this cell, you will be able to test the cell that was instrumentalized before.

Now we have it all ready for the deployment of the model. As in the previous post, deploying the model is as easy as running:

$ titan deploy
3,2,1… Deploy!

After the deployment, we will have our endpoint ready to use at a URL like this:

https://your-domain-name/regression

You can explore the API using the Swagger UI automatically created at the URL above.

From this moment on, it is possible to consume the sales prediction service we have developed by just making a POST request from Postman* or any other application. As easy as that!

Testing the service
  • If you find problems getting the results from Postman, try to change the request type from raw JSON to raw Text.
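
If you prefer to test the service programmatically, for instance to integrate it with another application as discussed above, a minimal Python sketch using the requests library could look like this. The URL is only the placeholder shown earlier; adjust it to your actual deployment:

import requests

# Placeholder URL: replace with the actual endpoint of your Titan deployment
url = "https://your-domain-name/regression/prediction"
payload = {"data": [[200, 100, 100]]}

response = requests.post(url, json=payload)

# The response body contains the predicted sales for the given investments
print(response.status_code)
print(response.text)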

Finally, remember that you can get the code of this tutorial here.

Wrap-up

In this second post of our tutorial series, we have seen how to create and deploy a simple ML model directly from a Jupyter Notebook using our product, Titan.

Next Tutorial

In our next tutorial, we will show how to manage all the deployed services. Don’t miss it!

About Titan

Titan can help you to radically reduce and simplify the effort required to put AI/ML models into production, enabling Data Science teams to be agile, more productive and closer to the business impact of their developments.

If you want to know more about how to start using Titan or to get a free demo, please visit our website or drop us a line at info@akoios.com.

If you prefer, you can schedule a meeting with us here.

Akoios: Frictionless solutions for modern data science.
