Kubernetes

Kubernetes Readiness Probes

Kubernetes is a fantastic framework for deploying microservices and apps. When pods don’t perform properly, they are restarted or removed from a service, which is a wonderful feature. However, Kubernetes needs our help to determine whether a pod is operational. Container probes are used to set this up. In this article, we will try to understand what Kubernetes readiness probes are and how they work.

What Are Readiness Probes?

Kubernetes uses readiness probes to figure out when it’s safe to send traffic to a pod, in other words, when the pod can be moved to the Ready state.

A readiness probe evaluates whether a specific pod will accept traffic when used as a backend endpoint for a Service.

The readiness probe runs for the remainder of the pod’s life; that is, it keeps running even after the pod has reached the Ready state. Our application can also make itself temporarily unavailable for maintenance or background work by deliberately failing the probe.
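As a sketch of that idea, the following standalone Python example (the `/healthz` path and the maintenance flag are illustrative, not part of any Kubernetes API) shows an app that answers its readiness endpoint with 200 normally and 503 while a maintenance flag is set, which is exactly how a pod would take itself out of rotation:

```python
# Sketch: an app failing its own readiness probe on purpose during maintenance.
# The /healthz path and MAINTENANCE flag are illustrative assumptions.
import threading
import urllib.error
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

MAINTENANCE = threading.Event()  # set -> report NotReady

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/healthz":
            # 503 while in maintenance so Kubernetes would mark the pod NotReady
            self.send_response(503 if MAINTENANCE.is_set() else 200)
            self.end_headers()
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, fmt, *args):  # silence per-request logging
        pass

server = HTTPServer(("127.0.0.1", 0), Handler)  # port 0 = pick a free port
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

def probe():
    """Return the HTTP status code of the readiness endpoint."""
    try:
        with urllib.request.urlopen(f"http://127.0.0.1:{port}/healthz") as resp:
            return resp.getcode()
    except urllib.error.HTTPError as err:
        return err.code

codes = [probe()]          # healthy -> 200
MAINTENANCE.set()
codes.append(probe())      # maintenance -> 503
server.shutdown()
print(codes)
```

With a `readinessProbe` pointed at `/healthz`, the 503 responses would cause the endpoints controller to remove the pod from the Service until the flag is cleared.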

It indicates whether or not the container is ready to accept requests. If the readiness probe fails for any reason, the endpoints controller removes the pod’s IP address from the endpoints of all Services that match the pod. Before the initial delay, the default readiness condition is Failure.

When Should You Use a Readiness Probe?

The readiness probe may look just like the liveness probe (which determines when a container should be restarted). But the presence of a readiness probe in the spec means that the pod will start without accepting any traffic and will only accept traffic once the probe starts to succeed.

You can use both a liveness and a readiness probe if your app is heavily reliant on backend services. The liveness probe passes when the app itself is healthy, while the readiness probe additionally checks that each essential backend service is available. This prevents traffic from being sent to pods that can only respond with error messages.
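As a sketch, both probes can be declared side by side on a container; the image, ports, and the `/healthz` and `/ready` paths below are illustrative assumptions, not fixed names:

```yaml
    spec:
      containers:
      - name: app
        image: example/app:1.0      # illustrative image
        livenessProbe:
          httpGet:
            path: /healthz          # app process is healthy -> no restart
            port: 8080
          periodSeconds: 10
        readinessProbe:
          httpGet:
            path: /ready            # also verifies backend dependencies
            port: 8080
          periodSeconds: 5
```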

A startup probe can help if your container needs to load a large amount of data, configuration files, or migrations during startup. A readiness probe is quite useful if you want to differentiate between an app that has failed and one that is still processing its initial data.

Prerequisite

A few prerequisites must be met before using Kubernetes readiness probes in practice. First, the Ubuntu 20.04 Linux operating system must be installed. Install Minikube as well, since we will use it to run a local Kubernetes cluster on Linux.

Before moving to the command line, we must first start the already-installed Ubuntu 20.04 system. To quickly launch the terminal, type “Terminal” into the Ubuntu 20.04 system’s search box.

After that, Minikube should be started. To do so, run the terminal command “minikube start.” This command launches the Kubernetes cluster and creates a virtual machine capable of running the cluster. The output of the “minikube start” command is depicted below:

Example of Kubernetes Readiness Probes

To understand how readiness probes work, we can configure an example app, in this case a simple NGINX web server. We have developed a basic deployment configuration here. Each part of the configuration file is presented in the attached screenshots:
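A deployment along these lines would do; this is a sketch, and the image tag, probe path, and timing values are illustrative choices rather than values taken from the screenshots:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.21
        ports:
        - containerPort: 80
        readinessProbe:
          httpGet:                 # HTTP check against the NGINX default page
            path: /
            port: 80
          initialDelaySeconds: 5   # wait before the first probe
          periodSeconds: 5         # then probe every 5 seconds
```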

This configuration should be saved to a file called readiness.yaml.

After that, use kubectl apply -f readiness.yaml to apply it. The instruction and its output can be seen in the following screenshot:

Next, we create a Service that uses these pods as its backend to complete the example.
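A minimal Service sketch follows; the name and selector labels are assumed to match the deployment’s pod labels:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  selector:
    app: nginx        # matches the pods created by the deployment
  ports:
  - port: 80
    targetPort: 80
```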

Save this configuration to the svc.yaml file.

After that, use kubectl apply -f svc.yaml to apply it. The instruction and its output can be seen in the following screenshot:

Although there is no dedicated endpoint for readiness probes, we can obtain information about their current condition by running the kubectl describe pod <Name of the pod> command. Run the kubectl get pods command to check the status of the pods and other details.

Pods will be displayed, along with their status and ready states. As you can see, our pod is running as planned. The instruction and its output can be seen in the screenshot provided below:

The result of the “kubectl describe pod” command is attached below. The instruction and its output can be seen in the following screenshot:

The Events section is displayed at the bottom of the output of this command:

With the kubectl get endpoints command, we can examine the endpoints. As can be seen, the NGINX Service has an endpoint. The instruction and its output can be seen in the following screenshot:

We may use the kubectl describe endpoints nginx command to see more information. The instruction and its output can be seen in the following screenshot:

Suppose we set the port parameter for the readiness probe to 81 and save the configuration. First, verify the pod’s status directly. The state is “Running”, as you can see below. The instruction and its output can be seen in the following screenshot:

Because we haven’t applied the port 81 change yet, the pod’s Ready condition is “true”, as shown in the screenshot below. If the change to port 81 is applied successfully, the condition becomes “false”, and the NGINX Service has no endpoints because the container isn’t ready to receive traffic. The instruction and its output can be seen in the screenshot below.

Conclusion

In this article, we observed the readiness probe’s effects and the parameters that can be configured. Although we focused on the HTTP check, the same techniques apply to the other probe types as well. To configure and operate readiness probes, you must first understand the architecture and dependencies of your application. We hope you found this article helpful. Check the other Linux Hint articles for more tips and articles.

About the author

Kalsoom Bibi

Hello, I am a freelance writer and usually write Linux and other technology-related content.