Kubernetes Tutorial: Part 5 - A Little Bit of Polishing

Parthasarathy Subburaj
5 min read · Nov 6, 2020

“It’s not a bug — it’s an undocumented feature.”

Logos of Kubernetes and Streamlit

This is the fifth part of the five-part series — “From Sandbox to K8S: Deploying a Streamlit-based object detection application using Minikube.”

Ingress

In the last article, we successfully deployed our application using Minikube and accessed it via a NodePort Service object with the URL convention <minikube-ip>:<port-number>. But a production-grade application might have multiple services, each running on a different port, and things can get hazy quickly because we just can't keep track of all the services and their endpoints. Wouldn’t it be better to have a centralized system that manages external access to these services in the cluster, takes care of load balancing and SSL termination of HTTPS traffic, and lets us reach the services through fancy name-based virtual hosts? Thanks to Ingress, all these pain points can now be managed easily!

Though Ingress is a vast and fairly involved topic, here we will use it only to set up Name-Based Virtual Hosting, so that we can access our service at a pretty URL instead of the one we currently have in place.

For us to use Ingress, the cluster must have an Ingress controller running, and one is not enabled by default. Luckily, Minikube ships with an NGINX Ingress controller, which can be enabled by executing:

minikube addons enable ingress
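Once the addon is enabled, we can check that the controller Pod is up and running (depending on the Minikube version, it lives in either the kube-system or the ingress-nginx namespace):

kubectl get pods --all-namespaces | grep ingress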

We can use the following YAML file to create an Ingress:

YAML for Ingress creation
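A minimal sketch of what this manifest can look like is below; the Service name and the path come from the earlier parts of this series, while the Ingress name and the Service port (8501, Streamlit's default) are placeholders that should match your actual Service definition:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: object-detection-ingress    # placeholder name
spec:
  rules:
  - host: streamlit.exercise.com
    http:
      paths:
      - path: /kubernetes-object-detection
        pathType: Prefix
        backend:
          service:
            name: object-detection-service
            port:
              number: 8501          # assumed; use your Service's port

(On clusters older than v1.19, the networking.k8s.io/v1beta1 API with its older serviceName/servicePort backend syntax would be used instead.)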

And the same can be created by executing:

kubectl create -f ingress.yaml
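We can confirm that the Ingress was created and has picked up our host rule with:

kubectl get ingress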

The created Ingress object maps the service object-detection-service to the host streamlit.exercise.com (you can think of something fancier 😛). Since we cannot modify the external DNS server, we have to map Minikube’s IP to the virtual host we have created, which can be done by modifying /etc/hosts. The modified file looks like the one shown below:

partha@jarvis:~$ cat /etc/hosts
127.0.0.1 localhost
<minikube-ip> streamlit.exercise.com

where <minikube-ip> is the IP address of the Minikube VM, which can be obtained by executing minikube ip.
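Instead of editing the file by hand, the mapping can be appended in a single command:

echo "$(minikube ip) streamlit.exercise.com" | sudo tee -a /etc/hosts

And now our service is accessible at: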

http://streamlit.exercise.com/kubernetes-object-detection/

The /kubernetes-object-detection suffix comes from the base_url variable that we used in the deployment.
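For reference, Streamlit exposes this as its server.baseUrlPath option, which can be passed on the command line when launching the app (app.py here is a placeholder for the actual entry point):

streamlit run app.py --server.baseUrlPath kubernetes-object-detection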

Health Check Probes

When working with a production-grade application, we typically deal with a large number of containers that come together to power the application. It is very much possible that one or more containers could go down as time progresses and become unresponsive due to CPU overload, an application deadlock, etc., and a simple restart is all that's needed to fix them. Manually checking the status of all the Pods in our application, and restarting them as and when needed, would be a laborious and time-consuming task. Kubernetes has built-in functionality, namely Liveness and Readiness Probes, to automate this process for us.

A Liveness Probe has a simple checking mechanism: it can execute a command inside the container, make an HTTP request, or open a TCP socket on a specified port. The check is performed at a regular interval; if it succeeds without an error, the container is marked healthy, otherwise it is marked unhealthy and scheduled for a restart.

Readiness Probes, on the other hand, help us determine when a container is ready to accept traffic. Sometimes we may have multiple containers running inside a Pod, and for the Pod to start serving incoming requests, all the containers in it should be up and running. We would want to exclude the Pod from the Service load balancer until then, and a Readiness Probe under these circumstances fulfils exactly that purpose.
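For illustration, a Readiness Probe for a web application like ours could simply poll an HTTP endpoint; the sketch below assumes Streamlit's /healthz health endpoint and port 8501, both of which should be verified against your actual setup:

readinessProbe:
  httpGet:
    path: /healthz        # Streamlit's health endpoint (assumed)
    port: 8501            # container port (assumed)
  initialDelaySeconds: 5
  periodSeconds: 10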

Here we will create a Liveness Probe for our Pod using a liveness command that checks the health of our Pod on a regular basis. Configurations for Liveness/Readiness Probes are set when creating the Deployment, so we can reuse the same YAML file we used earlier and just add the configuration for the probe, as shown below:

YAML snippet for creating Liveness Probes in Deployment Object
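A sketch of the relevant part of the container spec is shown below; the image and the application entry point (app.py) are placeholders, while the probe settings match the description that follows:

containers:
- name: object-detection
  image: <your-image>               # placeholder; the image built earlier
  # create the probe file, remove it after 40 seconds, then start the app
  command:
  - /bin/sh
  - -c
  - |
    touch /streamlit_app/liveness.txt
    (sleep 40; rm -f /streamlit_app/liveness.txt) &
    exec streamlit run app.py --server.baseUrlPath kubernetes-object-detection
  livenessProbe:
    exec:
      command:
      - cat
      - /streamlit_app/liveness.txt
    initialDelaySeconds: 10         # slack for the app to start
    periodSeconds: 20               # probe interval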

Here we are creating a probe that tries to open the file /streamlit_app/liveness.txt every 20 seconds, with about 10 seconds of slack at the beginning for the application to start. We also modify the container's startup command a little, just to see our probe in action: we create the liveness.txt file and delete it after 40 seconds, so the first few probes should pass and the subsequent ones should fail, triggering a restart of the container.
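We can watch the probe do its job; the RESTARTS counter of the Pod increments every time the liveness check fails:

kubectl get pods -w

# the failed probes also show up in the Pod's event log
kubectl describe pod <pod-name>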

Seamless updates and rollbacks

The Deployment object ships with another cool feature that helps us update an existing application in production with minimal downtime. We can also easily roll back an update and restore the application to its previous state in case the new version has a serious bug that affects its functionality. In this section, we will roll out a new update (a buggy release 😜) for our existing application and see rollouts and rollbacks in action; the accompanying video demonstrates the full flow.
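The key commands behind the demo are worth noting; the Deployment name object-detection-deployment and the container name are placeholders that should match the ones created earlier in the series:

# roll out a new image for the container
kubectl set image deployment/object-detection-deployment object-detection=<new-image>

# watch the rollout progress and inspect the revision history
kubectl rollout status deployment/object-detection-deployment
kubectl rollout history deployment/object-detection-deployment

# the release is buggy? roll back to the previous revision
kubectl rollout undo deployment/object-detection-deployment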

And that concludes our series on deploying a Streamlit-based object detection application using Minikube.
