This article was first published on the WeChat public account "Mushroom can't sleep".
Previous articles:
- "Understanding Redis Persistence at the Source Level"
- "A Look at Redis's Expired Key Deletion Strategies"
- "Redis Data Structures Explained in Detail"
- "The Underlying Implementation of Redis's Five Data Structures, in Detail"
In this issue, let's take a look at Redis's memory eviction policies~
Why is there a memory eviction mechanism
As we all know, keys in Redis can be given an expiration time, and when that time arrives the corresponding keys are removed according to a certain strategy. But Redis's memory also has an upper limit, and once that limit is reached, some key-value pairs must be evicted according to a certain policy as well.
Redis memory limit
The maxmemory configuration option sets the maximum amount of memory Redis may use to store data. It can be set in the configuration file redis.conf, or at runtime with the CONFIG SET command. For example, to configure a Redis cache with a 100 MB memory limit, we can add the following to redis.conf:
maxmemory 100mb
Setting maxmemory to 0 means there is no memory limit. On 64-bit systems the default is 0 (no limit), while on 32-bit systems there is an implicit limit of 3 GB.
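The limit can also be changed on a running instance without a restart. A minimal redis-cli sketch (the 100mb value here is only an example):
CONFIG SET maxmemory 100mb
CONFIG GET maxmemory
Note that CONFIG SET only changes the running configuration; to make the change survive a restart, edit redis.conf or run CONFIG REWRITE.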
When the stored data reaches the limit, Redis behaves according to the configured policy: it either returns errors for commands that would use more memory, or it evicts some old data to reclaim memory for the new data.
Redis memory eviction policies
- noeviction: does not evict anything; returns an error on write commands that would use more memory (DEL and a few other commands are still allowed)
- allkeys-lru: evicts the least recently used keys out of all keys, making room for the new data
- volatile-lru: evicts the least recently used keys out of the keys that have an expiration time set
- allkeys-random: evicts random keys out of all keys
- volatile-random: evicts random keys out of the keys that have an expiration time set
- volatile-ttl: evicts the keys closest to expiring out of the keys that have an expiration time set
- volatile-lfu: evicts the least frequently used keys out of the keys that have an expiration time set
- allkeys-lfu: evicts the least frequently used keys out of all keys
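The policy itself is chosen with the maxmemory-policy option; a minimal redis.conf sketch (allkeys-lru is just an example choice, not a recommendation for every workload):
maxmemory-policy allkeys-lru
It can also be switched at runtime with CONFIG SET maxmemory-policy allkeys-lru.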
How the eviction process works
It is important to understand how eviction works. The process is as follows:
- A client runs a new command and new data is added.
- Redis checks memory usage; if it exceeds the maxmemory limit, keys are evicted according to the policy.
- A new command is executed, and so on.
In other words, memory usage continuously crosses the boundary of the limit: a write pushes it over, and eviction brings it back under the limit.
If a single command results in a lot of memory being used (for example, a big set stored into a new key), the memory limit can be exceeded by a noticeable amount for some time.
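If you want to watch this happening on your own instance, the INFO command exposes the relevant counters; a minimal redis-cli sketch:
INFO memory
INFO stats
In the memory section, used_memory and maxmemory show current usage against the limit; in the stats section, evicted_keys counts how many keys have been evicted so far.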
Approximate LRU algorithm
Redis's LRU algorithm is not a strict LRU implementation. This means Redis does not always select the best candidate for eviction, i.e. the key that has gone unaccessed the longest. Instead, Redis runs an approximation of LRU: it samples a small number of keys and evicts the best candidate among them (the one with the longest idle time).
Since Redis 3.0 the algorithm has been improved to also maintain a pool of good eviction candidates. This makes it more accurate and closer to the behavior of a true LRU. An important aspect of Redis's LRU algorithm is that you can tune its precision by changing the number of keys sampled on each eviction.
This parameter can be configured as follows:
maxmemory-samples 5
The reason Redis does not use a true LRU implementation is that it would cost more memory. For most applications using Redis, however, the approximation is virtually equivalent.
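The sample size can also be adjusted on a running instance; a minimal redis-cli sketch (10 is just an example value; larger samples are more accurate but cost more CPU per eviction):
CONFIG SET maxmemory-samples 10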
LFU
LFU (Least Frequently Used) evicts the keys that are used least often, whereas LRU (Least Recently Used) evicts the keys that have gone unused for the longest time.
Starting from Redis 4.0, LFU eviction policies are available. In some cases this mode works better (it gives a better hit/miss ratio), because with LFU Redis tries to track how often items are accessed, so rarely used items are evicted while frequently used items have a better chance of staying in memory.
So why do we need LFU at all? Consider the following access pattern:
A - A - A - - - A - A -A - - -
B - - - - B - - B - - - - - - B
Under LRU, A would be evicted, because B was used most recently; but A is clearly accessed far more frequently, so A is the one that should stay. This is exactly the case LFU was introduced to handle: it evicts the least frequently used key.
For LFU, Redis splits the 24-bit internal clock field of the key object into two parts: the first 16 bits still act as a clock, and the last 8 bits are a counter, known as a Morris counter, which represents the key's access frequency. Since 8 bits can only count to 255, Redis does not increment the counter linearly; instead it uses a probabilistic formula whose growth speed is tuned through two configuration parameters.
The table below shows the number of hits from left to right and the log factor from top to bottom. With a factor of 100, the 8-bit counter only reaches 255 after about 10M hits.
| factor | 100 hits | 1000 hits | 100K hits | 1M hits | 10M hits |
| --- | --- | --- | --- | --- | --- |
| 0 | 104 | 255 | 255 | 255 | 255 |
| 1 | 18 | 49 | 255 | 255 | 255 |
| 10 | 10 | 18 | 142 | 255 | 255 |
| 100 | 8 | 11 | 49 | 143 | 255 |
This factor can be configured as follows:
lfu-log-factor 10
That covers how the counter increases; under what circumstances does it decrease?
By default, for every minute in which a key is not accessed, its Morris counter is decremented. The decay time can also be configured:
lfu-decay-time 1
So what about newly created keys? Would they be evicted as soon as they appear, since their counter starts low?
To avoid this problem, Redis initializes the 8-bit counter of a newly created key to 5, so that the key is not evicted right away just because its access frequency is still low.
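If you want to inspect these counters yourself, Redis exposes them through the OBJECT FREQ command, which only works while an LFU policy is active; a minimal redis-cli sketch, where mykey is just a placeholder name:
CONFIG SET maxmemory-policy allkeys-lfu
SET mykey "hello"
GET mykey
OBJECT FREQ mykey
A freshly created key should report a value around the initial 5 described above, and the counter then rises slowly with further accesses according to lfu-log-factor.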
Summary
To keep memory from exceeding its limit, Redis frees memory according to a configured eviction policy. The core ideas are the LRU algorithm and, since Redis 4.0, the LFU algorithm: LRU evicts the least recently used keys, while LFU evicts the least frequently used keys.
For more content, search for "Mushroom can't sleep".
If this article helped you, please give it a like; your support is the biggest motivation for my writing.
See you in the next issue~