Creating a scalable object detection service using Titan

Akoios
4 min read · Feb 13, 2020


Titan Tutorial #4: Deploying an object detection model based on YOLO

In this new tutorial we will see how to develop and deploy a more complex model: an object detection model. For those new to the topic, object detection is an umbrella concept that encompasses the computer vision and image processing technologies dealing with the identification of objects of a certain class (humans, animals, vehicles…).

More specifically, detection means classification (what the object is) and localization (where the object is). The typical output of a detection model is shown below:

Example of object detection
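
To make this more concrete, a detection result can be thought of as a list of labeled bounding boxes. The snippet below is a purely illustrative sketch of such an output; the field names and values are made up for this example:

# Illustrative sketch of a detection output: each entry combines a class label
# (classification) with a confidence score and a bounding box (localization)
detections = [
    {'class': 'person', 'confidence': 0.92, 'bbox': [34, 50, 210, 380]},   # [x_min, y_min, x_max, y_max]
    {'class': 'dog', 'confidence': 0.87, 'bbox': [220, 140, 470, 360]},
]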

For our example, we will use YOLO (You Only Look Once), a well-known, state-of-the-art object detection system designed for real-time processing. From a data science perspective, the most interesting thing about YOLO is that it applies a single neural network to the full image, hence the name “You Only Look Once”.

If you are interested in the scientific details of YOLO you can take a look at this paper where the system is thoroughly explained.

Let’s get to work after this brief introduction. As previously stated, we want to develop a simple object detection system and then deploy it with Titan, so that it can be used from any other application with a simple API call.

As usual when we work with Titan, we will use a Jupyter Notebook to do the coding. Since we will be using several external packages, the first thing to do is to install the required dependencies for the model:

!pip install -r requirements.txt

In our case, the requirements.txt file contains the following dependencies:

Cython
numpy
opencv-python
tqdm
requests
torchvision
torch>=1.3
matplotlib
pycocotools
Pillow==6.1

After that, we can specify the required imports in the next notebook cell as follows:

import time
import glob
import torch
import os
import requests
import json
import sys
import uuid
import subprocess
from os.path import exists, join, basename, splitext
from IPython.display import Image, clear_output

The next step is to download YOLO and COCO, the large-scale object detection, segmentation, and captioning dataset that we will be using for this example. This dataset contains more than 200k labeled images.

!git clone https://github.com/ultralytics/yolov3  # clone
!bash yolov3/data/get_coco_dataset_gdrive.sh # copy COCO2014 dataset (19GB)
%cd yolov3

Now we are ready to define a pair of functions to manage the retrieval of the input image and the execution of the YOLO algorithm over it.
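
The exact code of these helpers is not reproduced here, but a minimal sketch could look like the one below. It relies on the imports from the previous cell; the function names (download_image, process), the use of the repository’s detect.py script with its --source flag, and the output folder are assumptions made for illustration:

def download_image(url):
    # Download the input image to a uniquely named local file and return its path
    filename = 'input_{}.jpg'.format(uuid.uuid4().hex)
    response = requests.get(url, timeout=30)
    response.raise_for_status()
    with open(filename, 'wb') as f:
        f.write(response.content)
    return filename

def process(request):
    # Parse the incoming request, fetch the image and run YOLO over it
    args = json.loads(request)['args']
    image_path = download_image(args['url'])
    # Run the detection script shipped with the cloned repository
    subprocess.check_call(['python', 'detect.py', '--source', image_path])
    # The annotated image is written to the script's output folder
    return join('output', basename(image_path))

How the resulting image is returned to the caller (as a path, raw bytes or an encoded payload) depends on how the endpoint cell handles the result; we keep it as simple as possible here.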

In order to check that everything’s going fine, we can create a mock request object for local API testing as follows:

# Mock request object for local API testing
args = {
'url': 'https://i.postimg.cc/rF3W27kn/https-bucketeer-e05bbc84-baa3-437e-9518-adb32be77984-s3-amazon.png'
}
REQUEST = json.dumps({ 'args': args })

And finally, we can define and expose the cell we want to be run when calling the endpoint we are about to deploy using Titan:

# GET /detect
status = 200
content_type = 'image/jpeg'
try:
    result = process(REQUEST)
    print(result)
except Exception as err:
    status = 500
    content_type = 'application/json'
    print(json.dumps({'error': 'Cannot process image due to an error: {}'.format(err)}))

As can be seen in the last piece of code, we have instrumented the cell for Titan by simply adding # GET /detect at the top. This tells Titan to execute this cell when a GET call is received at the /detect endpoint.

Now we have it all ready for the deployment of the model. As you probably already know, deploying the model is as easy as running:

$ titan deploy

Once the deployment has finished (it may take some time), we will be able to start using our brand-new object detection service. You can check and download all the code here.

The main advantage of Titan is its ability to transform models into services.

In order to make the request, you can use the form in the Swagger interface that will be available after the deployment.

Specifically, you can define the URL of the image you want to process in the param field, as shown in the picture:
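
If you prefer to call the service programmatically instead of using the Swagger form, a request along these lines would do the job (the service URL below is a placeholder; use the one shown after running titan deploy):

import requests

# Placeholder URL: replace it with the address of your deployed service
SERVICE_URL = 'https://<your-titan-service>/detect'

response = requests.get(SERVICE_URL, params={'url': 'https://example.com/my-image.jpg'})
with open('detections.jpg', 'wb') as f:
    f.write(response.content)  # save the result returned by the endpoint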

In case you want to try it yourself, you can get the whole code here.

Happy detection!

Wrap-up

In this fourth post of our series of tutorials we have seen how to create and deploy a slightly more complex ML model based on YOLO and the COCO dataset. Moreover, we have seen how, by using Titan, Data Scientists can forget about all the infrastructure complexity required to get a model into production.

Next Tutorial

In the next tutorial, we will explain how to provision the runtime environment and hardware for the deployments.

A final word

Titan can help you to radically reduce and simplify the effort required to put AI/ML models into production, enabling Data Science teams to be agile, more productive and closer to the business impact of their developments.

If you want to know more about how to start using Titan or to get a free demo, please visit our website or drop us a line at info@akoios.com.

If you prefer, you can schedule a meeting with us here.

Akoios: Frictionless solutions for modern data science.
