Last time, in "Will Redis data be deleted immediately when it expires?", we discussed what happens when there is too much expired data: periodic deletion cannot clean it all up (after each round, expired keys may still exceed 25%), and if those keys are never requested by clients again, lazy deletion never gets a chance to remove them either. So what happens when memory fills up?
The answer is the memory eviction mechanism.
The story begins with the official posts of the Three Excellencies and Nine Ministers of the Redis Empire...
In the Redis Empire, the national law, family law, and military law of the entire empire are recorded in redis.conf, which governs how the whole empire runs.
The limit on how much of the empire's land (memory) its officials may occupy is set by a decree named `maxmemory`. There are two ways to set it:
- Use `CONFIG SET maxmemory 4gb` at runtime to cap the land (memory) the empire's officials may occupy at 4GB;
- Write the decree `maxmemory 4gb` into the redis.conf "Code of Law" and have the empire run according to that Code from startup.
Note that if `maxmemory` is 0, there is no limit on 64-bit systems, while 32-bit systems have an implicit limit of 3GB.
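As a quick illustration, here is a minimal redis-cli sketch of checking and changing the limit; the 4gb value is only an example, and the exact output may differ slightly between Redis versions:

```
127.0.0.1:6379> CONFIG GET maxmemory
1) "maxmemory"
2) "0"                       # 0 means no limit (on 64-bit builds)
127.0.0.1:6379> CONFIG SET maxmemory 4gb
OK
127.0.0.1:6379> CONFIG GET maxmemory
1) "maxmemory"
2) "4294967296"              # reported back in bytes
```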
Redis memory eviction policies
The land the officials may occupy is limited, yet new recruits are selected every year, so sooner or later there is no room left. What then? How do we pick the officials to evict?
Before Redis 4.0 there were 6 eviction policies in total; 2 more were added later (the LFU variants), making 8.
Broadly, they fall into two categories according to whether eviction happens at all:
- noeviction: do not evict anything;
- the other 7 policies, which evict data according to different rules.
noeviction policy
This is the default: once memory usage exceeds maxmemory, nothing is evicted and no newcomers are admitted. The existing keys are all well connected: royal relatives, permanent VIPs.
Since nothing is ever evicted, memory fills up sooner or later as officials keep joining. Once it is full and a "newcomer" tries to get in, Redis simply returns an error and goes on strike.
Really capricious.
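Roughly, the "strike" looks like this; a hedged redis-cli sketch assuming maxmemory is already exhausted and the policy is noeviction (the key name is made up):

```
127.0.0.1:6379> SET newcomer "hello"
(error) OOM command not allowed when used memory > 'maxmemory'.
```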
The other eviction policies
The remaining 7 policies can be divided into two groups according to the pool of candidates they evict from:
Evict only keys that have an expiration time set; keys without an expiration time are never evicted. The policies are:
- volatile-lru: evict the officials who have least recently worked on the front line (least recently used);
- volatile-lfu: added in 4.0, evicts the officials who work the least on the front line (least frequently used);
- volatile-random: evict at random, freeing up slots for newcomers;
- volatile-ttl: among officials whose term (expiration time) is set, the one closest to the end of the term is evicted first.
Evict from all keys, whether they are permanent-VIP royal relatives or keys with an expiration time set:
- allkeys-lru: evict the officials who have least recently worked on the front line (least recently used);
- allkeys-lfu: evict the officials who work the least on the front line (least frequently used);
- allkeys-random: evict at random to make room for new recruits.
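The policy itself is just another decree in the Code. Here is a minimal sketch of choosing one, either at runtime or in redis.conf; allkeys-lru is only an example choice:

```
# at runtime
127.0.0.1:6379> CONFIG SET maxmemory-policy allkeys-lru
OK
127.0.0.1:6379> CONFIG GET maxmemory-policy
1) "maxmemory-policy"
2) "allkeys-lru"

# or in redis.conf
maxmemory-policy allkeys-lru
```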
The story pauses here. Next, "Code Brother" will share how to choose an appropriate eviction policy and how to set a sensible cache size in real-world Redis use.
The eviction process is shown in the figure below:
- The client sends a new command to the server;
- On receiving the command, Redis checks memory usage; if it exceeds the `maxmemory` limit, data is evicted according to the configured policy;
- The new command is then executed.
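To see this in action, here is a hedged redis-cli sketch: shrink maxmemory, pick a policy, write enough data to go over the limit, and watch the evicted_keys counter climb (the exact numbers are made up):

```
127.0.0.1:6379> CONFIG SET maxmemory 1mb
OK
127.0.0.1:6379> CONFIG SET maxmemory-policy allkeys-random
OK
# ... write enough keys to exceed 1mb ...
127.0.0.1:6379> INFO stats
# look for the evicted_keys field; it increases every time a key is evicted
evicted_keys:42
```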
allkeys-lru usage scenarios
If your application shows a clear split between hot and cold data, experience suggests using this policy: let the LRU algorithm keep the most recently accessed data in the limited memory and improve access performance.
allkeys-random usage scenarios
If there is no obvious hot/cold split and queries are spread fairly evenly across all of the data, use the allkeys-random policy and let Redis pick keys to evict at random.
volatile-lru usage scenarios
In some business scenarios there is data that must never be deleted, such as pinned news stories or top videos. Don't set an expiration time on such keys; with volatile-lru they will never be evicted, while among the keys that do have an expiration time, the least recently accessed ones are evicted first.
One thing to note: setting an expiration time on a key consumes extra memory, so when you don't actually need TTLs, allkeys-lru makes more efficient use of memory.
A better approach is to use separate Redis instances (or clusters) for the business data that must be retained and the data that can all be evicted.
The instance holding data that must not be deleted can use the volatile-lru policy, while the other instance can use allkeys-lru or allkeys-random.
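As a rough sketch, the split might look like this in the two instances' configuration files (the ports, sizes, and layout are hypothetical):

```
# instance A (e.g. port 6379): mixes must-keep data with expirable data,
# so only keys with a TTL may be evicted
maxmemory 2gb
maxmemory-policy volatile-lru

# instance B (e.g. port 6380): pure cache, every key is a candidate
maxmemory 4gb
maxmemory-policy allkeys-lru
```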
What is an appropriate Redis cache size?
A bigger cache is not always better; the boss wants the greatest return at the lowest cost.
Data access has locality. By the 80/20 rule, roughly 20% of the data typically serves 80% of the requests.
So can we set the cache size to 20% of the total data volume?
Of course it isn't that absolute; that is the ideal case. There may be personalized needs, and the data different users access can vary a lot, so the 80/20 rule doesn't apply perfectly.
We should make a comprehensive assessment based on actual access patterns and cost. As a rule of thumb, set the cache capacity to 15%–30% of the total data volume.
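For example, a rough sizing sketch, assuming a hypothetical 50GB dataset:

```
# 50GB x 15% ≈ 7.5GB   (lower bound)
# 50GB x 30% = 15GB    (upper bound)
# then cap the instance somewhere in that range, e.g.:
maxmemory 10gb
```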
Code Brother, the other eviction rules are fairly simple, but volatile-lru and volatile-lfu look more complicated. What algorithms do they use?
volatile-lru uses the LRU algorithm to evict the least recently used data, while volatile-lfu uses the LFU algorithm, which builds on LRU by also taking data recency and access frequency into account, so the least frequently accessed keys are evicted.
As for the algorithm details, we'll break them down next time; biting off too much at once makes it easy to choke in the ocean of knowledge.
Article views have been dropping lately.
Could you call me a handsome guy in the comments?
If you'd rather not, how about a like and a "Looking"?
References
1. https://redis.io/docs/manual/eviction/
2. Redis Core Technology and Practice (Redis 核心技术与实战)