
Business scenario

I finished a project a while ago, so let me first describe the business scenario, which differs from typical front-end projects. In this project the front end calls the APIs of a third-party service directly; our own server only handles authentication and pass-through. For the sake of flexibility, the third party split its APIs very finely, so the project feels like being handed a pile of LEGO bricks to assemble into a robot. From this we can identify some characteristics of how the project makes requests, and then apply targeted optimizations:

  1. Far more API requests than usual: a list of n items that might normally be fetched with one API call can now take n*10 calls to assemble the complete data, and some functional modules may need to request thousands of APIs;
  2. Basically all GET requests: read-only, no writes;
  3. A high repetition rate of API calls: because the APIs are so fragmented, some commonly used ones have to be called over and over;
  4. Low real-time requirements for the returned data: the third-party data is not updated in real time, perhaps only once a day or once a week, but the third party stipulates that their data must not be stored on our own servers in any form.

So, to sum up, a front-end cache becomes a highly feasible optimization.

Solution

The front end uses Axios for HTTP requests, so we can use Axios interceptors to manage the cache. The logic:

  1. Create a cache object;
  2. Before a request is sent, check whether it hits the cache:

    1. If it does, return the cached content directly;
    2. If not, send the request and store the result in the cache once the request succeeds (see the sketch after this list).
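Here is a minimal sketch of that flow, using a plain Map as a stand-in for the cache and a hypothetical makeKey helper; the real implementation below swaps in an LRU cache and an md5-based key:

import axios from 'axios';

const cache = new Map(); // placeholder store; the real one is an LRU cache

// Hypothetical key builder; the real genHashByConfig appears later
function makeKey({ method, url, params, data }) {
    return JSON.stringify([method, url, params, data]);
}

async function cachedRequest(config) {
    const key = makeKey(config);
    if (cache.has(key)) {
        // Hit: return the cached content directly
        return cache.get(key);
    }
    // Miss: send the request and cache the result once it succeeds
    const response = await axios(config);
    cache.set(key, response.data);
    return response.data;
}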

As the title says, the caching strategy here is LRU (Least Recently Used), because the cache cannot grow without limit: an overly large cache can degrade page performance and even cause memory leaks. Once the cache reaches its maximum capacity, LRU evicts the least recently used entry, so there is no need to worry about unbounded growth. So how do we implement an LRU caching strategy? There are ready-made implementations on GitHub, but for the sake of deeper learning, let's implement one by hand.

Implementing LRU

An LRU cache has two main operations, write (set) and read (get). The logic:

  1. Write (set):

    1. If the cache is full, evict the least recently used entry, then store the new entry in the most recently used position;
    2. Otherwise, store the entry directly in the most recently used position.
  2. Read (get):

    1. If the key exists, return the cached content and move it to the most recently used position;
    2. If not, return -1.

Here we can see that cache entries have a priority order. How do we represent that priority? With an array we could keep the less recently used entries at the head and the more recently used ones at the tail, but inserting and deleting at the head of an array is inefficient. Instead we use a Map object as the cache container: a Map iterates its keys in insertion order, so the first key is always the least recently used, and deleting a key and re-inserting it moves it to the tail.
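A quick illustration of the Map behavior we rely on (a standalone snippet, not part of the cache itself):

const m = new Map([['a', 1], ['b', 2], ['c', 3]]);
m.delete('a'); // remove 'a'...
m.set('a', 1); // ...and re-insert it, which moves it to the tail
console.log([...m.keys()]); // ['b', 'c', 'a']
console.log(m.keys().next().value); // 'b', the oldest (least recently used) key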

The code is as follows:

class LRUCache {
    constructor(capacity) {
        if (typeof capacity !== 'number' || capacity < 0) {
            throw new TypeError('capacity must be a non-negative number');
        }
        this.capacity = capacity;
        this.cache = new Map();
    }

    get(key) {
        if (!this.cache.has(key)) {
            return -1;
        }
        let tmp = this.cache.get(key);
        // Move this entry to the most recently used position
        this.cache.delete(key);
        this.cache.set(key, tmp);
        return tmp;
    }

    set(key, value) {
        if (this.cache.has(key)) {
            // If the key already exists, delete it so it is re-inserted
            // at the most recently used position
            this.cache.delete(key);
        } else if (this.cache.size >= this.capacity) {
            // If the cache is full, evict the least recently used entry,
            // i.e. the first key in the Map's insertion order
            this.cache.delete(this.cache.keys().next().value);
        }
        this.cache.set(key, value);
    }
}
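A quick sanity check of the class above, assuming a capacity of 2:

const lru = new LRUCache(2);
lru.set('a', 1);
lru.set('b', 2);
lru.get('a'); // 1, and 'a' becomes the most recently used entry
lru.set('c', 3); // capacity exceeded: 'b', the least recently used, is evicted
console.log(lru.get('b')); // -1, a miss
console.log(lru.get('a')); // 1
console.log(lru.get('c')); // 3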

Combining with Axios to implement request caching

The general logic: each request generates a hash from its method, url, and parameters, and the cache stores hash -> response. If a subsequent request has the same method, url, and parameters, it is considered a cache hit.

The code is as follows:

import axios from 'axios';
import md5 from 'md5';
import LRUCache from './LRU.js';

const cache = new LRUCache(100);

const _axios = axios.create();

// Sort the request parameters so that the same parameters always
// produce the same hash, regardless of key order
function sortObject(obj = {}) {
    let result = {};
    Object.keys(obj)
        .sort()
        .forEach((key) => {
            result[key] = obj[key];
        });
    return result;
}

// Generate a cache key from the request method, url, and data/params
function genHashByConfig(config) {
    const target = {
        method: config.method,
        url: config.url,
        params: config.method === 'get' ? sortObject(config.params) : null,
        data: config.method === 'post' ? sortObject(config.data) : null,
    };
    return md5(JSON.stringify(target));
}

_axios.interceptors.response.use(
    function(response) {
        // Store the response data in the cache
        const hashKey = genHashByConfig(response.config);
        cache.set(hashKey, response.data);
        return response.data;
    },
    function(error) {
        return Promise.reject(error);
    }
);

// Wrap the axios request: on a cache hit, return the cached content
// directly instead of sending an HTTP request
export default function request({
    method,
    url,
    params = null,
    data = null,
    ...res
}) {
    const hashKey = genHashByConfig({ method, url, params, data });
    const result = cache.get(hashKey);
    // get() returns -1 on a miss, so ~result is 0 (falsy) only on a miss
    if (~result) {
        console.log('cache hit');
        return Promise.resolve(result);
    }
    return _axios({ method, url, params, data, ...res });
}

Wrapping the API requests:

import request from './axios.js';

export function getApi(params) {
    return request({
        method: 'get',
        url: '/list',
        params,
    });
}

export function postApi(data) {
    return request({
        method: 'post',
        url: '/list',
        data,
    });
}

One thing to note here: I hash the request method, url, and parameters together, and to prevent a different parameter order from producing a different hash for the same request, the parameters are sorted before hashing. In actual development the parameters are not necessarily plain objects, so modify this according to your own needs.
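For example, if your parameters can contain nested objects, a recursive variant of sortObject could normalize them too. The sketch below is an illustrative extension, not part of the original code:

// Hypothetical deep version of sortObject: sorts keys at every level
// so that nested objects also hash consistently. Array order is kept,
// since it is usually significant.
function sortObjectDeep(value) {
    if (Array.isArray(value)) {
        return value.map(sortObjectDeep);
    }
    if (value !== null && typeof value === 'object') {
        const result = {};
        Object.keys(value)
            .sort()
            .forEach((key) => {
                result[key] = sortObjectDeep(value[key]);
            });
        return result;
    }
    // Primitives pass through unchanged
    return value;
}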

After the above transformation, once a request has completed, triggering the same request again no longer sends an HTTP request; the result is returned directly from the cache. Really fast and economical~

Reference: Implementing an LRU algorithm in JS
