Kubernetes Tutorial: Part 3 — Familiarizing the environment
“If the code doesn’t bother you, don’t bother it.”
This is the third part of the five-part series — “From Sandbox to K8S: Deploying a Streamlit-based object detection application using Minikube.”
- Part 0: Prologue
- Part 1: What the heck is Kubernetes?
- Part 2: Streamlit based object detection application
- Part 3: Familiarizing the environment
- Part 4: The Crux
- Part 5: A Little Bit of Polishing
Everything in K8S is an object that can be created, modified, or deleted by interacting with the kube-apiserver. There are multiple ways to interact with the kube-apiserver: the command-line tool kubectl, client libraries, or the Web UI (Dashboard). We will stick to kubectl for this tutorial series. Instructions for creating objects are mostly written in YAML files. These files are passed as parameters to the kubectl command, which converts them to JSON before making the API request to the kube-apiserver for execution. In this article, we will slowly familiarize ourselves with K8S by creating a few objects and exploring the environment to some extent.
Once Minikube and kubectl are installed, you can start the cluster by executing minikube start from your terminal with administrator privileges. To make sure that your cluster is running fine and that you can connect to it, execute kubectl cluster-info, which prints out the details of the running cluster, as shown below:
partha@jarvis:~/Documents/Projects$ kubectl cluster-info
Kubernetes master is running at https://172.17.0.2:8443
KubeDNS is running at https://172.17.0.2:8443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
Most of the YAML files created for K8S object creation contain the following fields:
- apiVersion — The version of the Kubernetes API to use for creating the object
- kind — Type of the object to be created
- metadata — Data that helps in uniquely identifying the object in the cluster
- spec — Desired state of the object in the cluster
Namespaces can be seen as different accounts created in the same physical cluster, just like how multiple users can access a single remote machine with different accounts. They provide scope for names, i.e. all objects need to have a unique name within a namespace, but not across namespaces. Each Kubernetes object can only reside in one namespace. Namespaces also help the cluster administrator set resource quotas for each user, preventing any particular user from exploiting the cluster’s resources. When no namespace is specified while creating an object, it is created in the ‘default’ namespace.
Now let’s create a namespace for our tutorial and use that namespace for our deployment. As discussed earlier, we just need to create a YAML file containing the details of the object. For a namespace creation, we can use the one below:
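A minimal namespace manifest matching this description might look like the following sketch, using only the four fields discussed above:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  # the unique name of the namespace within the cluster
  name: partha
```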
This will create a namespace called ‘partha’. To actually create the namespace, we just need to execute:
kubectl create -f namespace.yaml
The -f flag indicates that we are passing a file for creation. Before executing the command, make sure that the file namespace.yaml is in the directory from which the kubectl command is executed. We can now check whether our namespace was created by executing:
partha@jarvis:~/Documents/Projects$ kubectl get namespaces
NAME STATUS AGE
default Active 42d
kube-node-lease Active 42d
kube-public Active 42d
kube-system Active 42d
kubernetes-dashboard Active 42d
partha Active 31d
You can see that our namespace ‘partha’ appears at the bottom of the list. You may also notice other namespaces in this list that we never created explicitly; these are created by K8S internally to run the cluster. For more details on what these namespaces do, refer to the Kubernetes documentation.
kubectl get is a handy command that we can use to list any type of resource in our cluster; we will be using it a lot in this tutorial.
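As a quick sketch (assuming the ‘partha’ namespace created above), the same verb works across resource types and output formats:

```shell
# list pods in the 'partha' namespace
kubectl get pods -n partha

# list all namespaces in the cluster
kubectl get namespaces

# ask for the full YAML of the resources instead of the default table
kubectl get pods -n partha -o yaml
```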
Secrets are objects that help us store and manage sensitive pieces of information like passwords, SSH keys, etc. in our cluster. They can later be safely referenced within the pods without having to put the values verbatim in a Pod definition file, which would be a security concern. There are different types of secrets in K8S, each having its own purpose. In this tutorial, we will be focusing on docker-registry secrets, which help us pull images from a docker registry inside a pod. People who are familiar with Docker must be aware of Docker Hub, the cloud repository for storing and sharing container images. Here we will be containerizing our application (in the next section) and pushing it to Docker Hub for use in our cluster. To authenticate our cluster for pulling images from Docker Hub, we need to create a secret object that K8S can use to pull the image for us.
Before creating a docker-registry secret, make sure that you have your account created in the Docker Hub. Creating a docker-registry secret can be done directly from the command line by executing the command:
kubectl create secret docker-registry docker-registry-creds \
  --docker-server=https://index.docker.io/v1/ \
  --docker-username=<user-name> \
  --docker-password=<docker-hub-password> \
  --docker-email=<registered-email> \
  -n partha
The above command creates a docker-registry secret called docker-registry-creds (you can name it anything you want) in the namespace ‘partha’. If you want to connect to some other docker server, you can specify it in the --docker-server argument. Please note that we could also create the same secret via a YAML file, but we resorted to the command-line route just to have some variety.
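For completeness, the YAML route might look like the sketch below; the .dockerconfigjson value is the base64-encoded contents of your Docker config file, shown here only as a placeholder:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: docker-registry-creds
  namespace: partha
type: kubernetes.io/dockerconfigjson
data:
  # base64-encoded ~/.docker/config.json (placeholder, fill in your own)
  .dockerconfigjson: <base64-encoded-docker-config>
```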
Again, you can verify whether your secret has been created using the kubectl get command, as shown below:
partha@jarvis:~$ kubectl get secrets -n partha
NAME TYPE DATA AGE
default-token-s2vnk kubernetes.io/service-account-token 3 36d
docker-registry-creds kubernetes.io/dockerconfigjson 1 36d
Pods are the smallest deployable units of computing that one can create and manage in Kubernetes. They can be seen as a group of one or more containers, with shared storage and network resources, and a specification for how to run the containers. Pods in Kubernetes can run a single container, in which case they can be seen as just a wrapper around a single container, or they can run multiple containers that need to work hand-in-hand to create a service (discussed in the next section). For example, you could have a container that runs an application fetching relevant information from a datastore residing in a shared volume based on the user’s query, and an additional container that keeps updating this datastore in real time; in scenarios like this, it is better to group the two containers into a single pod.
Again, we will create the pod via a YAML file containing the four fields that we discussed earlier. To begin with, let’s just spin up a pod that runs the ubuntu:18.04 image.
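Piecing together the details described in this section, the pod manifest would look roughly like this (the echo-and-sleep command is a sketch of the behaviour described):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: ubuntu-base
  namespace: partha
spec:
  containers:
  - name: ubuntu-base-container
    image: ubuntu:18.04
    # print a greeting, then keep the pod alive for an hour
    command: ["sh", "-c", "echo 'Hello, Kubernetes' && sleep 3600"]
  restartPolicy: Never
```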
This YAML file creates a Pod named ‘ubuntu-base’ in the namespace ‘partha’, as specified in the ‘metadata’ section. In the ‘spec’ section, we specify the name of our container, ‘ubuntu-base-container’, and the image that should be used for creating the container, in this case ‘ubuntu:18.04’. Please note that since we are pulling the image directly from Docker Hub — the default docker repository — we don’t need to specify the docker server address before the image name; we can get away with specifying the image name alone. But this may not always be the case: oftentimes, when you are working in an organization, it will have its own internal docker repository, in which case you need to prefix the image name with the docker server address, since our cluster should know where it needs to pull the image from. So the ‘image’ field looks something like this:
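With a private registry, the field would be along these lines (the registry address shown is purely hypothetical):

```yaml
# <registry-address>/<image-name>:<tag> — registry address is a made-up example
image: registry.internal.example.com:5000/ubuntu:18.04
```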
We should also specify, in the ‘command’ field, the command to be run inside the container, which starts the application the container is spun up for. Here we just call sh to print out the line “Hello, Kubernetes” and then sleep for 3600 seconds. It is important to note that the Pod stays alive only as long as the command it executes is still running; once the command ends, the Pod exits. In our case, the Pod lives only for 3600 seconds and then dies and never restarts again (as per our restartPolicy), so don’t be surprised if the pod disappears after that 😛
Before we move on to the next section, let’s look at two useful commands that will be handy for debugging Pods in case they misbehave: kubectl logs and kubectl exec.
kubectl logs: As the name suggests, it prints out the logs of the pod we are interested in:
partha@jarvis:~/Documents/Projects$ kubectl logs pod/ubuntu-base -n partha
kubectl exec: This command will allow us to run any command in an already running container; we can leverage this to get access to the terminal of our running container as follows:
partha@jarvis:~/Documents/Projects$ kubectl exec -it pod/ubuntu-base bash -n partha
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
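Heeding that deprecation warning, the same command in the newer form separates the command to run with a -- marker:

```shell
# run bash inside the ubuntu-base pod; everything after -- is the command
kubectl exec -it pod/ubuntu-base -n partha -- bash
```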
In the next part, we will be deploying our object detection application in K8S for real 😅