
Cache avalanche + breakdown + penetration

Cache avalanche

Currently, e-commerce homepages and hotspot data are cached. Generally, the cache is refreshed by a scheduled task, or updated after a cache miss. But scheduled-task refreshing has a problem: cache avalanche.

Cache avalanche refers to a large number of cached keys expiring (or being updated) at the same moment under high QPS, so that a flood of requests hits the DB directly; the DB cannot handle the load and goes down.


Solution

  1. Stagger the expiration times of keys by adding a random offset:

     setRedis(key, value, time + Math.random() * 10000);

  2. Never expire hot data; instead, update it in step with the database.
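The staggering idea in step 1 can be sketched as follows. The base TTL, jitter window, and the `jittered_ttl` helper are illustrative names, not part of the original; the computed TTL would be passed to your Redis client's set-with-expiry call.

```python
import random

BASE_TTL = 3600   # nominal cache lifetime, in seconds (assumed value)
JITTER = 300      # spread expirations over a 5-minute window (assumed value)

def jittered_ttl(base=BASE_TTL, jitter=JITTER):
    """Return a TTL in [base, base + jitter) so that keys written
    at the same moment do not all expire at the same moment."""
    return base + random.uniform(0, jitter)

# Usage sketch with a hypothetical Redis client:
#   redis_client.setex(key, int(jittered_ttl()), value)
```

Because each key gets a slightly different lifetime, a batch of keys written together expires gradually instead of all at once, so the DB sees a trickle of refreshes rather than a spike.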

Cache penetration

Cache penetration refers to queries for data that exists in neither the cache nor the database, so every such request still falls through to the database. For example, if the database ids auto-increment from 1, a request for id = -1, or for an id far larger than any that exists, will always miss both layers.


Solution

  1. Add validation at the interface layer: user authentication, parameter checks, and returning an error code for illegal parameters outright; for example, do basic checks on the id and intercept id <= 0 directly.
  2. If a key misses both the cache and the database, write a null value into the cache for that key so that repeated lookups no longer hit the database. Set a short expiration time for such entries, such as 30 seconds (setting it too long would prevent the key from being used once valid data appears).
  3. Depending on the scenario, the cache key can be built from the URI combined with the request parameters.
  4. Limit high-frequency access from the same IP at the nginx proxy layer.

Cache breakdown

Cache breakdown is similar to cache avalanche, but the former is a single point while the latter is a whole surface. Cache breakdown refers to one extremely hot key under sustained heavy concurrency: the moment that key expires, the concurrent requests break through the cache and hit the database directly, like punching a hole in an otherwise intact barrel.

Solution

  1. Hotspot data never expires, keeping its updates consistent with the database.
  2. Add a mutex: when the cache needs rebuilding, only the request that acquires the lock reloads the value from the database; concurrent requests that fail to acquire the lock sleep briefly and then retry the cache.



Author: 菜问 — 10 years of back-end development; commonly used languages: PHP, Java, Golang, Python.