Prerequisite
To use Kubernetes services, you’ll need a Minikube cluster running on your system. To set one up, open a command-line terminal. You can do this in two ways: look for “Terminal” in your system’s application search, or use the keyboard shortcut Ctrl+Alt+T.
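With the terminal open, a minimal way to start a local cluster (assuming Minikube is already installed) is:

```
minikube start
```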
At least 300 MiB of memory must be available on each node in your cluster. Some of the tasks on this page also require the metrics-server service to be running in your cluster; if metrics-server is already running, you can skip those steps. To enable it, type the following command.
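On Minikube, metrics-server is available as an addon; a likely form of the command is:

```
minikube addons enable metrics-server
```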
Now use the following command to check whether the resource metrics API is available.
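A plausible way to perform this check is to list the registered API services and look for the metrics API:

```
kubectl get apiservices
```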
If the resource metrics API is accessible, the response includes a reference to metrics.k8s.io.
Steps to Create a Namespace
Make a namespace for the resources you’ll create here to separate them from the rest of your cluster.
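A minimal sketch of this step (the namespace name mem-example is an assumption carried through the rest of the examples):

```
kubectl create namespace mem-example
```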
The new namespace is created.
To define a memory request for a container, include the resources:requests field in its manifest; to set a memory limit, include resources:limits. Here you will create a Pod with a single container that has a memory request of 100 MiB and a memory limit of 200 MiB. The Pod’s configuration file is as follows:
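A sketch of such a configuration file is shown below; the Pod name, container name, image, and file name are assumptions for illustration, while the request, limit, and args values come from the text:

```yaml
# memory-request-limit.yaml (file name assumed)
apiVersion: v1
kind: Pod
metadata:
  name: memory-demo            # name assumed for illustration
  namespace: mem-example       # namespace assumed from the earlier step
spec:
  containers:
  - name: memory-demo-ctr
    image: polinux/stress      # a stress-testing image; assumed here
    resources:
      requests:
        memory: "100Mi"        # memory request from the text
      limits:
        memory: "200Mi"        # memory limit from the text
    command: ["stress"]
    args: ["--vm", "1", "--vm-bytes", "150M", "--vm-hang", "1"]  # allocate 150 MiB
```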
When the container starts, the args section of the configuration file supplies its arguments. The “--vm-bytes” and “150M” options instruct the container to attempt to allocate 150 MiB of memory.
Below you can see that we have created the Pod:
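A likely form of the command (the file name and namespace follow the assumptions above):

```
kubectl apply -f memory-request-limit.yaml --namespace=mem-example
```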
This command will check to see if the Pod Container is up and running:
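Assuming the Pod is named memory-demo, the check and a more detailed view might look like this:

```
kubectl get pod memory-demo --namespace=mem-example
kubectl get pod memory-demo --output=yaml --namespace=mem-example
```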
According to the result, the Pod’s single container has a memory request of 100 MiB and a memory limit of 200 MiB.
To get the pod’s metrics, run the kubectl top command.
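A plausible form of the command, using the assumed names:

```
kubectl top pod memory-demo --namespace=mem-example
```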
What Happens if a Container Exceeds Its Memory Limit?
A container can exceed its memory request if the Node has memory available. However, a container is not allowed to use more memory than its limit. If a container allocates more memory than its limit, it becomes a candidate for termination; if it continues to consume memory beyond its limit, it is terminated. If a terminated container can be restarted, the kubelet restarts it, as with any other type of runtime failure.
Here we’ll create a Pod whose container tries to allocate more memory than its limit allows.
The configuration file for a Pod with one container and a memory request of 50 MiB and a memory limit of 100 MiB is as follows:
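A sketch of this configuration file, under the same naming assumptions as before (Pod name, container name, image, and file name are placeholders):

```yaml
# memory-request-limit-2.yaml (file name assumed)
apiVersion: v1
kind: Pod
metadata:
  name: memory-demo-2          # name assumed for illustration
  namespace: mem-example
spec:
  containers:
  - name: memory-demo-2-ctr
    image: polinux/stress      # stress-testing image; assumed
    resources:
      requests:
        memory: "50Mi"         # memory request from the text
      limits:
        memory: "100Mi"        # memory limit from the text
    command: ["stress"]
    args: ["--vm", "1", "--vm-bytes", "250M", "--vm-hang", "1"]  # try to allocate 250 MiB
```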
According to the args section of the configuration file, the container will attempt to allocate 250 MiB of RAM, significantly above the 100 MiB limit.
Again, create the Pod:
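Under the same assumptions:

```
kubectl apply -f memory-request-limit-2.yaml --namespace=mem-example
```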
Here you can view detailed information about the Pod. At this point, the container may or may not be running. Repeat the following command until the container is killed:
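A likely form of the command:

```
kubectl get pod memory-demo-2 --namespace=mem-example
```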
Get a more in-depth look at the container’s status. According to the output, the container was killed because it ran out of memory (OOMKilled).
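One way to see this, assuming the names above, is to print the Pod in YAML form and inspect the container’s last state:

```
kubectl get pod memory-demo-2 --output=yaml --namespace=mem-example
```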
In this example, the kubelet restarts the container because it can be restarted. Run the previous command several times and you will see that the container is repeatedly killed and restarted: killed, restarted, killed again, started again, and so on.
The following command lets you view comprehensive information related to the history of the Pod.
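A plausible form of this command:

```
kubectl describe pod memory-demo-2 --namespace=mem-example
```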
The result reveals that the container constantly starts and stops:
Here you can view the detailed information about your cluster’s Nodes:
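For example:

```
kubectl describe nodes
```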
A record of the container being killed due to an out-of-memory issue is included in the output:
This command deletes the pod, as you can see below.
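Assuming the Pod name used above:

```
kubectl delete pod memory-demo-2 --namespace=mem-example
```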
What Happens if You Specify a Memory Request That Is Too Large for Your Nodes?
Memory requests and limits are usually associated with containers, but it’s also useful to think of a Pod as having a memory request and limit. The Pod’s memory request is the sum of the memory requests of all containers in the Pod, and likewise its memory limit is the sum of their limits.
Pod scheduling is based on requests: a Pod is scheduled to run on a Node only if the Node has enough available memory to satisfy the Pod’s memory request.
Here we’ll build a Pod with a memory request larger than any Node in your cluster can satisfy.
Here’s the configuration file for a Pod with one container and a 1000 GiB memory request, which is probably more than any Node in your cluster can provide.
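A sketch under the same naming assumptions (the matching 1000 GiB limit is also an assumption):

```yaml
# memory-request-limit-3.yaml (file name assumed)
apiVersion: v1
kind: Pod
metadata:
  name: memory-demo-3          # name assumed for illustration
  namespace: mem-example
spec:
  containers:
  - name: memory-demo-3-ctr
    image: polinux/stress      # stress-testing image; assumed
    resources:
      requests:
        memory: "1000Gi"       # request far larger than any Node can satisfy
      limits:
        memory: "1000Gi"       # limit value assumed for illustration
    command: ["stress"]
    args: ["--vm", "1", "--vm-bytes", "150M", "--vm-hang", "1"]
```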
The following apply command creates the Pod:
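Under the assumptions above:

```
kubectl apply -f memory-request-limit-3.yaml --namespace=mem-example
```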
Now use the ‘get pod’ command. The output shows that the Pod’s status is PENDING; it is not scheduled to run on any Node, and it will remain in the PENDING state indefinitely.
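For example:

```
kubectl get pod memory-demo-3 --namespace=mem-example
```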
This command will help you view more details about the Pod, including its events:
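A likely form of the command:

```
kubectl describe pod memory-demo-3 --namespace=mem-example
```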
The output shows that the Pod can’t be scheduled because no Node has enough memory:
What Happens if You Do Not Specify a Memory Limit?
One of the following scenarios occurs if you don’t define a memory limit for a container:
- The container has no upper bound on the amount of memory it can use. The OOM Killer could be triggered if the container consumes all available memory on the Node where it is running, and a container with no resource constraints has a higher chance of being killed in the event of an OOM kill.
- The container runs in a namespace with a default memory limit, and that default limit is applied to the container automatically. Cluster administrators can use a LimitRange to set a default value for the memory limit, as in the sketch after this list.
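A minimal sketch of such a LimitRange (the object name and values are assumptions for illustration):

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: mem-limit-range        # name assumed
  namespace: mem-example
spec:
  limits:
  - type: Container
    default:
      memory: 512Mi            # default memory limit applied to containers without one
    defaultRequest:
      memory: 256Mi            # default memory request applied to containers without one
```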
Conclusion:
We took a closer look at the Kubernetes OOMKilled error in this article. Memory requests and limits help Kubernetes manage memory when scheduling Pods and decide which Pods to kill when resources become scarce.