
Kubectl Autoscale Command

Kubernetes provides autoscaling to manage resources automatically, without human intervention. The autoscale function changes the number of running pods when required, which saves resources. In this article, we learn how to use the “kubectl autoscale” command and the “HorizontalPodAutoscaler”. Let’s first discuss what kubectl autoscale is and then walk through the process step by step. This article is especially useful if you are new to Kubernetes autoscaling.

What Is Kubectl Autoscale?

Autoscaling is a core feature of a Kubernetes cluster: it updates resources automatically instead of requiring you to do it by hand. Adjusting resources manually to match demand wastes both time and capacity, so Kubernetes autoscaling optimizes resource usage for you.

An autoscaler can add and remove pod replicas as required, which reduces wasted resources. The “kubectl autoscale” command attaches this behavior to a workload that is already running inside the Kubernetes cluster.

There are two types of scaling: (1) horizontal scaling with the HorizontalPodAutoscaler and (2) vertical scaling. The HorizontalPodAutoscaler increases or decreases the number of pods when needed. Vertical scaling, in contrast, adjusts the resources assigned to the existing pods, such as CPU and memory.
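To make the difference concrete, here is a minimal sketch of the two values the scalers act on. The names in this snippet are only illustrative: horizontal scaling changes the “replicas” count of a Deployment, while vertical scaling changes the CPU and memory requests and limits of its containers.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app       # hypothetical name, for illustration only
spec:
  replicas: 3             # horizontal scaling changes this number of pods
  selector:
    matchLabels:
      run: example-app
  template:
    metadata:
      labels:
        run: example-app
    spec:
      containers:
      - name: app
        image: registry.k8s.io/hpa-example
        resources:
          requests:
            cpu: 100m     # vertical scaling changes these values instead
          limits:
            cpu: 200m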

Here are all the steps that you can follow on your own system, observing the output at each step for a better understanding.

Step 1: Starting a Minikube Cluster

In the first step, start minikube to run a local Kubernetes cluster so that we can execute the “kubectl autoscale” command. Minikube lets you set up nodes, pods, and a complete cluster in a local Kubernetes environment. Use the following command to bring minikube into active mode:

~$ minikube start

As shown in the following output screenshot, this command starts the minikube cluster and makes the Kubernetes environment usable.
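Because the HorizontalPodAutoscaler needs CPU metrics to make scaling decisions, it is also worth confirming that the cluster is healthy and that a metrics source is available. The following commands are an optional check, assuming you use minikube’s bundled metrics-server addon:

~$ minikube status
~$ minikube addons enable metrics-server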

Step 2: Get the Pod Details

At this point, the Kubernetes cluster is running successfully. Now, we get the details of the pods in the cluster. A pod in Kubernetes is a group of one or more containers that share resources. Run the following command in your minikube cluster:

~$ kubectl get pods

The “kubectl get pods” command lists all the pods that run in the current namespace of the Kubernetes cluster.

After executing the “get pods” command, we obtain the following output:

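If you want more detail than the default listing provides, kubectl accepts a few widely used flags. These are optional variations, not required for this tutorial:

~$ kubectl get pods -o wide   # adds the node and pod IP columns
~$ kubectl get pods -A        # lists pods from all namespaces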

Step 3: Get the Deployments of Pod

The previous “kubectl get pods” command gave us the details of the pods. Now, we use the “get deployments” command to obtain the list of created deployments. The following command is executed for this purpose:

~$ kubectl get deployments

After executing the command, the following screenshot shows the output:
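To inspect a single deployment in more depth before autoscaling it, you can describe it by name. The name below is the “nginx1-deployment1” deployment used in the next step; substitute whatever name your cluster reports:

~$ kubectl describe deployment nginx1-deployment1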

Step 4: Autoscale Deployment

The autoscale command automates the scaling of the pods that run in the cluster. Once autoscaling is attached to a deployment, pods are added and terminated automatically as demand changes. The following command is executed in the minikube cluster; it specifies the deployment name along with the minimum and maximum number of pods, so the replica count stays between 2 and 10:

~$ kubectl autoscale deployment nginx1-deployment1 --min=2 --max=10

After executing the command, the following output is generated:
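Behind the scenes, this command creates a HorizontalPodAutoscaler object, which normally takes the same name as the deployment. As an optional check, you can list it and, if it was only created for testing, delete it again:

~$ kubectl get hpa nginx1-deployment1
~$ kubectl delete hpa nginx1-deployment1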

Step 5: Create a Kubernetes YAML File

In this step, you will learn to create a YAML file for the cluster. A YAML file is useful for defining deployments and for application testing. There are various ways to create and edit such a file.

In this article, we use the “nano” command to create the YAML file because it is the easiest way and the best choice for beginners.

Follow the given steps here to create a YAML file using nano:

  • To create a new file or change an existing one, navigate to the desired directory location.
  • Type “nano” followed by the name of the file. For example, to create a new file called “deploo.yaml”, type “nano deploo.yaml”.

Run the following command to create a YAML file in the project directory:

~$ nano deploo.yaml

After creating the “deploo.yaml” file, the next step is to configure the YAML file. We explain it in the following step.

Step 6: Content of YAML File

In this step, we configure an Apache server with a PHP application. Before we use the HorizontalPodAutoscaler, we must define the workload that it will monitor. The following manifest defines a Deployment whose container listens on port 90 with a CPU limit of 200m, plus a Service that exposes it.

You can see the complete “deploo.yaml” file information here:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: php
spec:
  selector:
    matchLabels:
      run: php-apache
  template:
    metadata:
      labels:
        run: php-apache
    spec:
      containers:
      - name: php
        image: registry.k8s.io/hpa-example
        ports:
        - containerPort: 90
        resources:
          limits:
            cpu: 200m
          requests:
            cpu: 100m
---
apiVersion: v1
kind: Service
metadata:
  name: php
  labels:
    run: php-apache
spec:
  ports:
  - port: 70
  selector:
    run: php-apache
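Before creating anything, you can optionally ask kubectl to parse and validate the manifest without submitting it, which catches basic syntax mistakes early:

~$ kubectl apply -f deploo.yaml --dry-run=client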

Step 7: Create the Deployment

In this step, we create the resources defined in the “deploo.yaml” file. The following command is executed in the minikube cluster:

~$ kubectl create -f deploo.yaml

The output of this command, shown in the screenshot that follows, indicates that the deployment and the service defined in the YAML file have been created.
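You can confirm that the new pods come up by watching the rollout and by filtering on the label defined in the manifest. These checks are optional:

~$ kubectl rollout status deployment/php
~$ kubectl get pods -l run=php-apache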

Step 8: Create the HorizontalPodAutoscaler

In this step, we show the command to create the HorizontalPodAutoscaler. With it, pods are added and terminated automatically depending on demand. This is distinct from vertical scaling, which adjusts the CPU and memory assigned to the pods instead. The following command is executed in the minikube cluster:

~$ kubectl autoscale deployment php --cpu-percent=50 --min=10 --max=20

Here, we set the minimum and maximum number of replicas to 10 and 20, and the target CPU utilization to 50 percent.

Attached is the output of the previous command:
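The same autoscaler can also be written declaratively and applied with “kubectl apply” instead of the imperative command. The following manifest is a sketch of an equivalent object using the autoscaling/v1 API and the limits set above:

apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: php
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: php
  minReplicas: 10
  maxReplicas: 20
  targetCPUUtilizationPercentage: 50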

Step 9: Check the HorizontalPodAutoscaler

In this step, we check the current status of the newly created HorizontalPodAutoscaler. The following command is executed:

~$ kubectl get hpa
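For more detail than the summary table, you can describe the autoscaler or watch its metrics update over time; “php” is the name used in the earlier steps:

~$ kubectl describe hpa php
~$ kubectl get hpa php --watch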

Conclusion

One of the most useful features of Kubernetes is “kubectl autoscale”, which provides automatic resource scaling in the Kubernetes cluster. The autoscaler helps when a cluster needs to increase or decrease the number of pods. In this article, we learned two ways to autoscale: the basic “kubectl autoscale” command and the HorizontalPodAutoscaler.

First, we deployed and inspected the pods. Then, we autoscaled an existing deployment, created a YAML file for the Apache and PHP workload that the autoscaler monitors, and attached a HorizontalPodAutoscaler to it. This article focused on the detailed steps of creating, configuring, and deploying Kubernetes autoscaling.
