Kubernetes provides various ways to control how many resources individual workloads can consume. One important mechanism is the event rate limit, which caps the rate at which events are accepted by the cluster and thereby protects it from being overwhelmed. This article describes event rate limits and how to use this feature to keep resource consumption under control for your applications running on Kubernetes.
What is the Event Rate Limit in Kubernetes?
Event rate limits control the rate at which the Kubernetes API server accepts Event objects, which in turn protects the CPU and memory of the cluster and of your applications from being consumed by event floods. They work like any other rate limiter: if requests arrive faster than the configured threshold (say, ten requests per second against a limit of three per second), the excess requests are rejected until the rate drops back under the limit. This allows your applications to run smoothly even when multiple instances of the same application are running simultaneously, without any one of them consuming excessive amounts of resources from the cluster. You can configure rate limits per server, per namespace, per user, and per source+object combination.
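As a rough illustration, the EventRateLimit admission plugin is driven by a small configuration resource like the one below; the qps and burst numbers here are placeholder values rather than recommendations, and User and SourceAndObject entries follow the same pattern:

apiVersion: eventratelimit.admission.k8s.io/v1alpha1
kind: Configuration
limits:
- type: Server        # one shared limit for all event requests hitting the API server
  qps: 5000
  burst: 20000
- type: Namespace     # a separate limit tracked for each namespace
  qps: 50
  burst: 100
  cacheSize: 2000     # how many namespaces are tracked at once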
Why Should You Use the Event Rate Limit?
Here are the reasons why it’s better to use an event rate limit:
Controls the Rate at Which Events are Emitted from your Nodes
This is important for controlling the rate at which events are emitted from your nodes. The rate at which events are sent to the Kubernetes API server varies with the workload you place on your cluster. An abnormal flood of events can put unexpected load on the underlying infrastructure components and increase CPU utilization on the master nodes. For example, if a node is experiencing high load due to an unexpected spike in traffic, it could produce an excessive number of events that degrade the cluster’s performance. Therefore, it is important to configure a threshold on the rate of events the cluster can process to prevent overload.
Consider the following scenario: you have a fleet of fifty pods running in your cluster, and each one emits roughly one event per second on average. In this scenario, it would be advisable to configure an event rate limit of fewer than one thousand events per minute to prevent the cluster from overloading and becoming unresponsive, as sketched below.
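A limit of roughly one thousand events per minute works out to about 17 events per second sustained. A minimal sketch for that scenario, using a Server-type limit (the qps and burst values are illustrative, not tuned recommendations):

apiVersion: eventratelimit.admission.k8s.io/v1alpha1
kind: Configuration
limits:
- type: Server
  qps: 17     # roughly 1,000 events per minute sustained
  burst: 50   # tolerate short spikes above the sustained rate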
You Will Have Control Over the Number of Pods that Can be Created
You may want to control the number of pods that can be created or running at any given time. This helps you manage workload across your cluster and avoid overload and resource contention issues; a sketch of how to cap pod counts follows below.
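Note that pod counts themselves are capped with a ResourceQuota, a separate mechanism that complements event rate limiting. A minimal sketch, assuming a namespace called team-a (a hypothetical name) and an illustrative cap of 50 pods:

apiVersion: v1
kind: ResourceQuota
metadata:
  name: pod-quota
  namespace: team-a
spec:
  hard:
    pods: "50"   # no more than 50 pods may exist in this namespace at once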
It Prevents the Resources Available to an Application from Being Overwhelmed
You may want to limit the rate of events coming from a single application to prevent the resources available to that application from being overwhelmed. For example, suppose a streaming application generates many events every second. That could exhaust the resources allocated to it and cause the system to slow down or perform worse than it otherwise would. In particular, rate limits help ensure that critical resources such as CPU and memory are not drained by a burst of events in a short period of time; one way to set this up is sketched below.
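One way to throttle a single noisy application is a Namespace-type limit applied to the namespace the application runs in; a minimal sketch with illustrative numbers:

apiVersion: eventratelimit.admission.k8s.io/v1alpha1
kind: Configuration
limits:
- type: Namespace
  qps: 20         # each namespace may emit 20 events per second sustained
  burst: 100      # short bursts of up to 100 events are tolerated
  cacheSize: 2000 # number of namespaces tracked by the limiter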
It Ensures that an Application Meets its Expected Performance Requirements
You may want to guarantee a minimum amount of resources to a specific application so that it meets its expected performance requirements at all times. For example, suppose an application needs a specific CPU and RAM allocation to function correctly. In that case, you should make sure those resources are reserved for it and that it does not attempt to allocate more than it has available; one way to express this is sketched below.
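Per-application CPU and memory guarantees are expressed with resource requests and limits on the pod spec, a mechanism that works alongside the event rate limit rather than through it. A minimal sketch with hypothetical names and illustrative values:

apiVersion: v1
kind: Pod
metadata:
  name: streaming-app                          # hypothetical name
spec:
  containers:
  - name: app
    image: registry.example.com/streaming-app  # hypothetical image
    resources:
      requests:
        cpu: "500m"      # guaranteed minimum CPU
        memory: "256Mi"  # guaranteed minimum memory
      limits:
        cpu: "1"         # hard ceiling on CPU
        memory: "512Mi"  # hard ceiling on memory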
Unnecessary Notifications Can be Avoided
Administrators can avoid flooding their infrastructure with unnecessary notifications by limiting the number of events generated.
It Will Help you Protect your Production Environment from Excessive Network Congestion
Enabling event rate limiting will help protect your production environment from excessive network congestion and prevent your users from experiencing unexpected downtime due to overloaded nodes or malfunctioning components. It will also allow you to quickly identify bottlenecks and performance issues so that you can troubleshoot them before they cause serious damage to your system. For organizations with compliance requirements such as PCI-DSS, enabling event rate limiting is an absolute must if you want to ensure that your application data is secure at all times.
How to Configure the Event Rate Limit?
There are a few ways you can enable the event rate limit in Kubernetes. The simplest way is to use the limits configuration described below.
Start by creating a new configuration file called limits.yaml (or any name you prefer) in your cluster’s directory, and add the following contents:
apiVersion: v1
kind: LimitRange
metadata:
  name: limits
spec:
  limits:
  - type: Pod
    min:
      cpu: "1"
    max:
      cpu: "3"
This defines a minimum and a maximum for the CPU a single pod may use: “min” is set to 1 CPU and “max” is set to 3 CPUs.
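Assuming the file is saved as limits.yaml (a hypothetical name), apply it like any other manifest:

kubectl apply -f limits.yaml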
Once applied, the limits take effect. You can also enable the event rate limit itself by adding the following to your cluster configuration file under the kube-api service:
kube-api:
  event_rate_limit:
    enabled: true
You can see in the above snippet that the option “enabled” is set to true.
You can also check the default values in /etc/kubernetes/[configuration_file_name].yaml after the event rate limit is enabled:
plugins:
- name: EventRateLimit
  configuration:
    apiVersion: eventratelimit.admission.k8s.io/v1alpha1
    kind: Configuration
    limits:
    - type: Server
      qps: 5000
      burst: 20000
...
If you want to change the event rate limit, you must provide the whole configuration resource under the configuration directive. Here, qps is the sustained number of event requests allowed per second, and burst is the number of additional requests that can be accepted in a short spike above that rate:
kube-api:
  event_rate_limit:
    enabled: true
    configuration:
      apiVersion: eventratelimit.admission.k8s.io/v1alpha1
      kind: Configuration
      limits:
      - type: Server
        qps: 8000
        burst: 40000
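If you manage the API server directly rather than through a cluster configuration file like the one above, the same plugin is enabled with kube-apiserver flags. A sketch, assuming the admission configuration shown earlier is saved at /etc/kubernetes/admission.yaml (a hypothetical path) and that NodeRestriction stands in for whatever other admission plugins you already enable:

kube-apiserver \
  --enable-admission-plugins=NodeRestriction,EventRateLimit \
  --admission-control-config-file=/etc/kubernetes/admission.yaml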
Conclusion
The event rate limit is a powerful tool that Kubernetes administrators can use to limit the volume of events produced by their nodes and workloads. By limiting the number of events that can be created, you also limit the number of requests an outside user can make against the cluster. This article covered the main benefits of Kubernetes event rate limits, why you should enable them, and how to enable them.