Redis

Redis Cache Eviction

Generally, a cache is a software or hardware component that holds data from past client requests so that the same data can be retrieved and served faster for future requests. In most cases, the backend systems are not as fast as you’d like them to be. The cache sits between your client application and your backend database layer, providing quick data retrieval and fast response times.

When the requested data is available in the Redis cache, it can be served directly from the cache without forwarding the request to the primary database; this is called a cache hit. On the other hand, if the data is not available in the Redis cache, the request must reach the backend database to fetch the data as usual; this is a cache miss. Once fetched, the data is stored in the cache, so subsequent requests are served from it.
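The hit/miss flow above can be sketched in a few lines of Python. A plain dict stands in for the Redis client and another for the primary database, so the names here are illustrative rather than real redis-py calls:

```python
cache = {}                        # stands in for the Redis cache
database = {"user:1": "Alice"}    # stands in for the primary database

def fetch(key):
    value = cache.get(key)
    if value is not None:
        return value, "hit"       # cache hit: served straight from the cache
    value = database[key]         # cache miss: go to the backend database
    cache[key] = value            # populate the cache for subsequent requests
    return value, "miss"

print(fetch("user:1"))            # first request misses and fills the cache
print(fetch("user:1"))            # subsequent requests are cache hits
```

The first lookup pays the cost of the backend round trip; every later lookup for the same key is answered from the cache.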

With this type of behavior, Redis caching is an excellent choice for many use cases.

  • Machine learning inferences
  • Full-text searches
  • Complex relational database queries
  • Third-party API results

Redis caches bring your application several benefits while introducing one major challenge: cache staleness. At some point, your cache might be full of stale data that is no longer valid. Hence, there should be a mechanism to evict old or invalid data as you continue to fill your Redis cache with new data.

Redis Cache Eviction

When Redis caching was first introduced, developers set a TTL (Time-To-Live) value on every key to maintain a relatively constant amount of memory by matching the velocity of data going into and out of the system. This was a lot of manual work for developers, and it is the kind of housekeeping the Redis server should be handling itself.
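As an illustration, the bookkeeping that developers had to do by hand can be sketched as follows. A plain dict stands in for the key space and the helper names are hypothetical; in Redis itself the TTL would simply be set with, for example, SET key value EX 60:

```python
import time

# Manual TTL bookkeeping: every key carries its own expiry timestamp,
# and stale keys are purged by the application, not by the server.
store = {}  # key -> (value, expires_at)

def set_with_ttl(key, value, ttl_seconds):
    store[key] = (value, time.time() + ttl_seconds)

def get(key):
    entry = store.get(key)
    if entry is None:
        return None
    value, expires_at = entry
    if time.time() >= expires_at:   # stale: evict it on access
        del store[key]
        return None
    return value

set_with_ttl("session:1", "data", ttl_seconds=60)
print(get("session:1"))
```

Keeping this constant churn of expiring keys in sync with the application's memory budget is exactly the burden the maxmemory mechanism removes.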

So, the ideal approach is to automatically expel old data as the maximum cache memory limit is reached, which is also the process followed by the Memcached system. Redis uses the maxmemory directive to define that maximum memory limit.

Maxmemory Configuration Directive

Redis uses the maxmemory configuration directive to set the maximum amount of memory available to the cache. There are two ways to set the maxmemory directive value.

  • Using the redis.conf file
  • CONFIG SET command at runtime

The following is an example of a redis.conf file that shows how you can specify the maximum cache memory for the data set. In this case, the maxmemory directive has been set to 1000 bytes.
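A minimal redis.conf fragment matching that description might read:

```
maxmemory 1000
```

In practice, Redis also accepts human-readable size suffixes here, such as 100mb or 2gb.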

Similarly, the Redis CONFIG SET command can be used to set the maxmemory directive at runtime, as shown in the following.

config set maxmemory 100

Furthermore, setting the maxmemory directive to 0 means no memory limit is specified for the Redis cache; this is the default on 64-bit systems. On 32-bit systems, an implicit 3GB maximum memory limit is used.

When the Redis cache hits the limit specified by the maxmemory directive, it starts removing keys according to the selected eviction policy. If no key meets the policy criteria, nothing is evicted. Instead, the Redis server replies with errors to commands such as LPUSH and SET that consume more memory, while continuing to respond to read-only commands like GET.

In the following section, we discuss the Redis eviction policies and how they determine which keys should be evicted.

Eviction Policies

An eviction policy determines the behavior of the Redis cache when the maxmemory value is reached. Redis removes keys that meet the given policy's criteria until enough memory is freed. Hence, it is important to make sure your memory limit is large enough to keep the desired keys. The following sections describe the different Redis eviction policies available for use.
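The policy itself is chosen with the maxmemory-policy directive, which, like maxmemory, can be set in redis.conf or at runtime with CONFIG SET. For example, to select the allkeys-lru policy described later:

```
config set maxmemory-policy allkeys-lru
```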

noeviction

Several eviction policies are available, and noeviction (also the default) is one of the most widely used when you don't want any old keys removed once the maximum memory limit is reached. However, it returns errors on all Redis write operations to notify the user that the cache is full and that some space must be freed. In short, no new data is saved to the cache until you free up some space manually.

volatile-ttl

In the volatile-ttl policy, Redis samples keys whose expire field is set to true and evicts the one with the smallest TTL value. If there are no keys with the expire field set to true, no eviction happens, similar to the noeviction policy.

volatile-random

This is a variant of the volatile-ttl policy in which Redis randomly removes keys whose expire field is set to true. Again, it considers only the keys that have a TTL value, not the whole key space.

allkeys-random

This is similar to the volatile-random policy, but with allkeys-random, the whole key space is considered, not only the keys with a TTL value. In this case, Redis evicts keys at random to make room for new data.

Both of the above policies use a random algorithm to evict keys from the cache, which is risky and may not be the optimal way to do key eviction. So, Redis introduced a better alternative: the LRU (Least Recently Used) algorithm.

The LRU (Least Recently Used) Algorithm

The LRU algorithm is based on the assumption that if a given key has been accessed recently, there is a higher probability of accessing it again in the near future. On the other hand, if we have not used a given key for a long time, there is a higher chance that the key will not be used soon or ever again. With that assumption, the LRU algorithm tries to evict the least recently used keys from the Redis cache, which is a more optimal approach than the random key eviction algorithms.

To implement this policy, each Redis object carries a 24-bit lru field that keeps track of when the key was last accessed. With the LRU algorithm in place, Redis takes a specified number of random keys and evicts the one with the highest idle time, that is, the oldest access time. The maxmemory-samples directive specifies the number of keys per sample and can be set using the CONFIG SET command, as shown in the following.

config set maxmemory-samples 5

It is important to keep in mind that Redis runs an approximation of the LRU algorithm to keep things simple and save computing resources like CPU and memory.
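The sampling behavior described above can be sketched roughly as follows. The function and variable names are made up for illustration; real Redis tracks access times in the 24-bit lru field rather than a Python dict:

```python
import random

# Approximated LRU: rather than maintaining a full ordering of all
# keys, take a small random sample and evict the sampled key with
# the oldest access time. last_used maps each key to the timestamp
# of its most recent access.
def evict_one_lru(last_used, sample_size=5):
    sample = random.sample(list(last_used), min(sample_size, len(last_used)))
    victim = min(sample, key=lambda k: last_used[k])  # oldest access loses
    del last_used[victim]
    return victim

last_used = {"a": 100.0, "b": 200.0, "c": 300.0}
print(evict_one_lru(last_used, sample_size=3))  # sample covers all keys, so "a" is evicted
```

Larger samples approximate true LRU more closely at the cost of more work per eviction, which is the trade-off maxmemory-samples controls.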

There are two flavors of the LRU eviction policy.

volatile-lru

The least recently used keys with the expire field set to true will be removed. The keys that are not associated with a TTL value will not be evicted under this policy because the sample is taken only from the keys whose expire field is set to true.

allkeys-lru

All the keys with the highest idle time will be removed. In this case, Redis will keep the most recently used keys.

The LFU (Least Frequently Used) Algorithm

Redis introduced the LFU (Least Frequently Used) algorithm in version 4.0 to identify rarely used keys and remove them from the cache when the maximum memory limit is reached. With the LFU policy, keys that are accessed often remain in the cache.

volatile-lfu

Among the sampled keys whose expire field is set to true, the least frequently used are evicted.

allkeys-lfu

It searches the entire key space for rarely used keys and deletes them while keeping the frequently used keys.
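A rough sketch of the idea behind allkeys-lfu, with hypothetical names; real Redis uses a probabilistic logarithmic counter with decay (tunable via lfu-log-factor and lfu-decay-time) rather than an exact hit count:

```python
import random

# LFU-style eviction: each key carries an access counter, and the
# sampled key with the smallest counter is evicted.
access_counts = {}

def touch(key):
    access_counts[key] = access_counts.get(key, 0) + 1

def evict_one_lfu(sample_size=5):
    sample = random.sample(list(access_counts), min(sample_size, len(access_counts)))
    victim = min(sample, key=lambda k: access_counts[k])  # rarest access loses
    del access_counts[victim]
    return victim

for key, hits in [("hot", 10), ("warm", 3), ("cold", 1)]:
    for _ in range(hits):
        touch(key)
print(evict_one_lfu(sample_size=3))  # "cold" has the fewest accesses
```

Note the contrast with LRU: a key accessed heavily in the past but idle right now survives under LFU, while LRU would consider it a prime eviction candidate.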

Conclusion

To conclude, Redis uses the maxmemory directive to specify the maximum memory limit of the Redis cache. As discussed, when the cache reaches its maximum memory limit, the configured eviction policy fires and removes keys. Redis uses approximations of the LRU and LFU algorithms to find the best candidates for eviction.

About the author

Nimesha Jinarajadasa

I have been a full-stack senior software engineer for more than five years. I love technology, as it has the power to solve many of our problems within just a minute. I try to keep learning and to create more opportunities in this new world.