Kubernetes Tutorial: Part 2 — Streamlit based object detection application

Parthasarathy Subburaj
3 min read · Nov 6, 2020


“There’s no place like 127.0.0.1”

Logos of Kubernetes and Streamlit

This is the second part of the five-part series — “From Sandbox to K8S: Deploying a Streamlit-based object detection application using Minikube.”

Streamlit

Streamlit is an open-source Python library that aids in the rapid development and deployment of web-based applications. Of late it has gathered significant attention in the data science community because of the simplicity it brings to developing attractive user interfaces without having to know the well-known front-end languages like HTML, CSS, JavaScript, etc. Readers who want to explore Streamlit further can refer to this YouTube tutorial series, which covers the basics as well as some advanced concepts. Having said this, the more mature web development frameworks in Python, like Flask and Django, are more robust and offer developers greater flexibility. But for this tutorial, let’s stick to Streamlit for developing our application.

A simple comparison between different frameworks for Web Development in Python

Object Detection

In the field of computer vision, object detection is a well-defined and mature area; it is a family of algorithms that help us identify and locate objects of interest in an image or a video. I don’t want to rant much about object detection here, since there is plenty of good-quality material available online to understand this technology. In this tutorial series, we will be deploying two object detection models, namely SSD (Single Shot Detector) and YOLO-v3 (You Only Look Once), built using PyTorch, that can detect common objects of interest. Also, we will be leveraging pre-trained models instead of training them from scratch, thus saving a lot of time and compute resources. The code and the models for the object detection pipeline that we use here are heavily inspired by here and here. Also, we will only be using CPUs to run our models.

Putting them together

Since the primary goal of this tutorial is to understand how one can deploy a machine learning model using K8S, we will restrict ourselves to building a simple application that lets users select the algorithm they want to run (SSD or YOLO), upload an image for inference, and view the predictions of the chosen algorithm.

This is what the welcome page looks like:

Streamlit application welcome page

And this is how it looks when a user runs inference on an image:

Streamlit application after running inference

As of now, this is run directly on my machine, and that’s evident from the localhost:8501 appearing at the top of the web page. Also, when building such an application, we need to ensure that the model weights are loaded into memory once, when the web page first loads; this is essential because it prevents the model from being reloaded every time a user uploads an image for inference, which greatly enhances the user experience. We achieve this using Streamlit’s built-in caching mechanism, the @st.cache decorator.
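Conceptually, @st.cache works much like Python’s standard functools.lru_cache: the decorated function’s body runs only on the first call, and later calls with the same arguments return the stored result. A pure-Python sketch of the idea (load_weights is a hypothetical stand-in for the real model loader):

```python
from functools import lru_cache

CALLS = 0  # count how many times the expensive body actually runs

@lru_cache(maxsize=None)
def load_weights(model_name: str) -> str:
    """Pretend to load model weights from disk (the expensive step)."""
    global CALLS
    CALLS += 1
    return f"weights-for-{model_name}"

load_weights("SSD")      # first call: body executes
load_weights("SSD")      # same argument: cached, body is skipped
load_weights("YOLO-v3")  # new argument: body executes again
```

With @st.cache the same effect applies across Streamlit’s script re-runs, so the weights survive each user interaction instead of being loaded anew.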

Now we are all set to take this application running in a sandbox environment and deploy it in Minikube after containerizing it.

Part 3: Familiarizing the environment
