
Caching is a common technique when writing code: it trades space for time. Take the HTTP protocol we use every day, which offers two caching methods: strong caching and negotiated caching. When we open a website, the browser inspects the response headers of each request and, based on fields such as Cache-Control , Last-Modified , and ETag , decides whether to reuse a previously downloaded resource instead of downloading it from the server again.

For example, when visiting Baidu, some resources hit the negotiated cache (the server returns a 304 status code), while others hit the strong cache and are read directly from the local copy.

However, a cache is not unlimited; there is always a size limit. Whether it is a cookie (limits differ between browsers, generally around 4KB) or localStorage (likewise browser-dependent: 5MB in some browsers, 10MB in others), size restrictions apply.

At this point an algorithm is needed to evict entries once the cache exceeds its size limit. A common rule is to evict the entries that have not been accessed recently, which brings us to today's protagonist: LRU ( Least Recently Used ). Besides LRU, other common cache eviction strategies are FIFO ( First In, First Out ) and LFU ( Least Frequently Used ).

What is LRU?

When the cache is full, the LRU ( Least Recently Used ) algorithm uses the access history of all entries to evict the data least likely to be accessed in the future. In other words, the algorithm assumes that recently accessed data has the highest probability of being accessed again.

To make the whole process of the LRU algorithm easier to follow, here is a simple step-by-step walkthrough (originally illustrated with a diagram):

  1. Suppose we have a piece of memory that can store 5 data blocks in total;
  2. A, B, C, D, and E are stored in memory one after another, at which point the memory is full;
  3. When new data is inserted, data A, which has been in memory the longest, is evicted;
  4. When data B is read again from outside, B, which was near the tail, is marked as active and moved to the head; data C then becomes the entry that has been stored the longest;
  5. When new data G is inserted, data C, which has now been stored the longest, is evicted;
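The steps above can be sketched with a plain JavaScript Map, which iterates keys in insertion order. The capacity of 5, the keys F and G, and the helper names `insert` and `touch` are illustrative choices, not part of the class built later in this article:

```javascript
const capacity = 5
const memory = new Map()

function insert (key, value) {
  if (memory.size >= capacity) {
    // Evict the key that has been stored the longest (the first one).
    const oldest = memory.keys().next().value
    memory.delete(oldest)
  }
  memory.set(key, value)
}

function touch (key) {
  // Re-insert the key so it moves to the "most recent" end.
  const value = memory.get(key)
  memory.delete(key)
  memory.set(key, value)
}

'ABCDE'.split('').forEach(k => insert(k, k)) // memory is now full
insert('F', 'F')  // A, the oldest entry, is evicted
touch('B')        // B becomes the most recently used
insert('G', 'G')  // C, now the oldest entry, is evicted
console.log([...memory.keys()]) // ['D', 'E', 'F', 'B', 'G']
```

This mirrors the diagram exactly: A falls out first, reading B saves it from eviction, and C is the next victim.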

Algorithm implementation

The following is a simple implementation of this logic.

class LRUCache {
    list = [] // tracks the order in which keys were used
    cache = {} // stores all cached data
    capacity = 0 // maximum capacity of the cache
    constructor (capacity) {
        // store the maximum capacity the LRU cache can hold
        this.capacity = capacity
    }
}

The basic structure is shown above; the LRU cache needs to implement two methods: get and put .

class LRUCache {
    // retrieve data
    get (key) { }
    // store data
    put (key, value) { }
}

Let's now see how to store the data:

class LRUCache {
    // store data
    put (key, value) {
        // before storing, check whether the cache has reached capacity
        if (this.list.length >= this.capacity) {
            // every stored key is pushed onto the end of list,
            // so take the first key and delete its entry from cache.
            const oldest = this.list.shift()
            delete this.cache[oldest]
        }
        // write to the cache
        this.cache[key] = value
        // after writing, push the key onto the end of list
        this.list.push(key)
    }
}
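As a quick sanity check, here is the class as written so far (constructor plus put only), exercised until it overflows. This is just a snapshot for observing the eviction branch, not the finished implementation; the capacity of 2 and the keys are arbitrary:

```javascript
class LRUCache {
  list = []     // tracks the order in which keys were used
  cache = {}    // stores all cached data
  capacity = 0  // maximum capacity of the cache
  constructor (capacity) {
    this.capacity = capacity
  }
  put (key, value) {
    // Evict the oldest key once the capacity is reached.
    if (this.list.length >= this.capacity) {
      const oldest = this.list.shift()
      delete this.cache[oldest]
    }
    this.cache[key] = value
    this.list.push(key)
  }
}

const lru = new LRUCache(2)
lru.put('a', 1)
lru.put('b', 2)
lru.put('c', 3)          // capacity exceeded: 'a' is evicted
console.log(lru.list)    // ['b', 'c']
console.log(lru.cache.a) // undefined
```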

Then, every time data is retrieved, list needs to be updated by moving the retrieved key to the end of list .

class LRUCache {
    // retrieve data
    get (key) {
        if (this.cache[key] !== undefined) {
            // the key exists in the cache;
            // before returning it, re-activate the key
            this.active(key)
            return this.cache[key]
        }
        return undefined
    }
    // re-activate a key by moving it to the end of list
    active (key) {
        // first remove the key from list
        const idx = this.list.indexOf(key)
        if (idx !== -1) {
            this.list.splice(idx, 1)
        }
        // then push the key onto the end of list
        this.list.push(key)
    }
}
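The list manipulation inside active can be seen in isolation: delete the key from its current position, then push it onto the tail. The array contents below are just example keys:

```javascript
const list = ['C', 'D', 'E', 'B']
const key = 'D'

// Remove the key from wherever it currently sits...
const idx = list.indexOf(key)
if (idx !== -1) {
  list.splice(idx, 1) // list is now ['C', 'E', 'B']
}
// ...then append it, making it the most recently used entry.
list.push(key)

console.log(list) // ['C', 'E', 'B', 'D']
```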

The implementation is still not complete: besides the get operation, the put operation also needs to re-activate the corresponding key when that key already exists.

class LRUCache {
    // store data
    put (key, value) {
        // note: check against undefined (rather than truthiness),
        // otherwise falsy cached values like 0 or '' would not be re-activated
        if (this.cache[key] !== undefined) {
            // the key already exists, so re-activate it
            this.active(key)
            this.cache[key] = value
            // the cache length does not change in this case,
            // so the capacity check below can be skipped entirely
            return
        }

        // before storing, check whether the cache has reached capacity
        if (this.list.length >= this.capacity) {
            // every stored key is pushed onto the end of list,
            // so take the first key and delete its entry from cache.
            const oldest = this.list.shift()
            delete this.cache[oldest]
        }
        // write to the cache
        this.cache[key] = value
        // after writing, push the key onto the end of list
        this.list.push(key)
    }
}
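Putting all the pieces together, a short run-through confirms both behaviors: get re-activates a key, and an evicted key is really gone. The capacity of 2 and the keys used are arbitrary:

```javascript
class LRUCache {
  list = []     // tracks the order in which keys were used, oldest first
  cache = {}    // stores all cached data
  capacity = 0  // maximum capacity of the cache
  constructor (capacity) {
    this.capacity = capacity
  }
  get (key) {
    if (this.cache[key] !== undefined) {
      this.active(key) // a read counts as a use
      return this.cache[key]
    }
    return undefined
  }
  put (key, value) {
    if (this.cache[key] !== undefined) {
      // Overwriting an existing key: length is unchanged,
      // so only re-activation is needed.
      this.active(key)
      this.cache[key] = value
      return
    }
    if (this.list.length >= this.capacity) {
      const oldest = this.list.shift()
      delete this.cache[oldest]
    }
    this.cache[key] = value
    this.list.push(key)
  }
  active (key) {
    // Move the key to the end of list, marking it most recently used.
    const idx = this.list.indexOf(key)
    if (idx !== -1) {
      this.list.splice(idx, 1)
    }
    this.list.push(key)
  }
}

const lru = new LRUCache(2)
lru.put('a', 1)
lru.put('b', 2)
lru.get('a')              // 'a' becomes the most recently used
lru.put('c', 3)           // 'b', the least recently used, is evicted
console.log(lru.get('b')) // undefined
console.log(lru.get('a')) // 1
console.log(lru.get('c')) // 3
```

Note that a production implementation would typically use a Map (or a hash map plus doubly linked list) so that eviction and re-activation are O(1) instead of the O(n) indexOf / splice / shift used here for clarity.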

Some may think this algorithm has no application scenarios on the front end, but the LRU algorithm is in fact used by Vue's built-in component keep-alive : when its max prop is set, the least recently accessed component instance is evicted first.

In a follow-up, we will introduce the LFU algorithm, so stay tuned...


Shenfq