
How to use Redis for rate limiting

First published on Dale's blog

Background

At work we often run into situations where an interface or a particular call needs to be rate-limited. Rate limiting usually means touching data in Redis, and in distributed scenarios the operations on that data must be atomic.

Rate limiting algorithms

The mainstream rate limiting algorithms are:

  1. Counter (fixed window)
  2. Sliding window (split counter)
  3. Leaky bucket algorithm
  4. Token bucket algorithm

There are plenty of good articles online explaining these algorithms, so I will not repeat the details of all four rate limiting algorithms here.

In this article we discuss the first two: the counter and the sliding window.

Business context

Rate limiting is a scenario frequently encountered in business, for example rate-limiting an interface or a particular call.

Taking interface rate limiting as an example, the flow is as follows:

(Figure: rate limiting flow)

After a request reaches the server, we need to determine whether the current interface has reached its threshold:

  1. If the threshold has been reached, the request ends there.
  2. Otherwise, count++ and proceed to the next step.
There are many ways to implement rate limiting. If it is only a small single-instance application, you can count in memory (a minimal sketch follows this paragraph). For a complex project deployed in a distributed fashion, consider counting in Redis. The limiting logic is also not restricted to Java code: it can be written in Lua and run inside nginx, as the well-known OpenResty does, and other service gateways can implement it the same way.
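
For the single-instance case, a minimal fixed-window counter might look like the sketch below. This is only an illustration under my own naming, not code from the original project; the window is hard-coded to one second.

public class InMemoryFixedWindowLimiter {

    private final long limit;   // max requests allowed per one-second window
    private long windowStart;   // the second the current window belongs to
    private long count;         // requests counted so far in the current window

    public InMemoryFixedWindowLimiter(long limit) {
        this.limit = limit;
        this.windowStart = System.currentTimeMillis() / 1000;
    }

    // Returns true if the request is allowed, false once the threshold is reached.
    public synchronized boolean tryAcquire() {
        long nowSecond = System.currentTimeMillis() / 1000;
        if (nowSecond != windowStart) {
            // a new second has started: reset the window
            windowStart = nowSecond;
            count = 0;
        }
        if (count >= limit) {
            return false;
        }
        count++;
        return true;
    }
}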

Rate limiting in distributed services

First, analyze the business scenario. For a distributed API deployment, the following points need attention:

  1. With a gateway load-balancing the APIs, processes deployed on different servers cannot easily share memory.
  2. The business goal is to rate-limit a certain interface, or certain interfaces of the entire system, so the count must be readable by different processes.
  3. Counting is triggered after a request reaches the server, so atomicity must be considered: at any given moment, only one request may update the count. This places high concurrency demands on the counting service.

Analyze the feasibility of nginx + lua

nginx is often used at the entry point for requests. With its load balancing, requests can be distributed to different services, and using lua to operate on its memory seems able to satisfy the requirements above (feasibility to be verified).

However, in practice a system rarely deploys just one nginx instance at the entrance: a single machine is a risk in itself, and clients in different geographic locations may see very different network latency to the same machine. So people tend to put another load-balancing layer, such as DNS, in front of several nginx instances. Once there are multiple nginx instances, each with its own memory, nginx + lua cannot meet our needs.

Analyze the feasibility of redis

redis is an in-memory, non-relational database whose concurrency has stood the test of production. It also lets different processes read and modify the same data.

As for atomicity, individual redis commands are inherently atomic, and the string type's INCR (atomic increment) operation fits rate limiting very well.
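
As a quick illustration (assuming Spring Data Redis with a configured StringRedisTemplate bean; the class and key names here are made up):

import org.springframework.data.redis.core.StringRedisTemplate;
import org.springframework.stereotype.Component;

@Component
public class IncrDemo {

    private final StringRedisTemplate redisTemplate;

    public IncrDemo(StringRedisTemplate redisTemplate) {
        this.redisTemplate = redisTemplate;
    }

    public long incrementAndGet(String key) {
        // INCR executes atomically on the Redis server and returns the value
        // after the increment, so no separate read is needed.
        Long value = redisTemplate.opsForValue().increment(key);
        return value == null ? 0 : value;
    }
}

The fact that INCR returns the post-increment value is exactly what makes the simplification in the next section possible.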

Implementing rate limiting with Redis

Going back to the flow at the beginning, the counting operations for rate limiting are:

  1. Query the current count
  2. Increment the current count

In a distributed system, atomicity must always be kept in mind. Within a single process, we keep data thread-safe by locking: whether with a ReentrantLock or synchronized, the semantics are to tell other threads that this data (code block) is currently occupied and they should come back later. In a distributed system, the natural counterpart is a distributed lock.

The pseudo code is as follows:

Lock lock = getDistributedLock();

try {
    lock.lock();
    // read the current count from redis
    Integer count = getCountFromRedis();

    if (count >= limit) {
        // threshold reached: reject the call
        return false;
    }
    // below the threshold: allow the call and increment the count
    incrRedisCount();
    return true;
} catch (Exception e) {
    ...
} finally {
    lock.unlock();
}

At first glance this logic looks fine, but it actually has real problems:

  1. Using a distributed lock will obviously slow down the entire system and waste a lot of resources.
  2. The redis INCR operation returns the value after incrementing, so the separate query is unnecessary.

The pseudo code is as follows:

Integer count = incrRedisCount();
// reject once the count goes past the limit, so exactly `limit` calls pass
if (count > limit) {
    return false;
}
return true;

Much simpler, right? But other problems follow. Most businesses do not ask us to cap a total number of calls; they ask us to limit the number of requests to an interface within a period of time, that is, a sliding window.

Implementation of the sliding window

As the name implies, a sliding window is a fixed window that slides forward over time. For rate limiting, it means counting within a period of time and starting a fresh count as soon as that period has passed.
How do we implement the "within a period of time" logic?
It is actually very simple: we can use a timestamp.

// second-level timestamp
long timestamp = System.currentTimeMillis() / 1000;
Long aLong = redisTemplate.opsForValue().increment(RedisKeyEnum.SYSTEM_FLOW_LIMIT.getKey() + timestamp);
return aLong;

At this point there is a problem: the code above creates a new key every second, so sooner or later Redis will run out of memory. We need a strategy for deleting these keys.
A naive approach is to record the keys and delete them asynchronously. A better way is to set an expiration slightly larger than the window when the key is first created. The code then becomes:

    /**
     * Count the number of messages sent, per second.
     *
     * @return the count after incrementing
     */
    public Long getSystemMessageCountAtomic() {
        // second-level timestamp
        long timestamp = System.currentTimeMillis() / 1000;
        Long aLong = redisTemplate.opsForValue().increment(RedisKeyEnum.SYSTEM_FLOW_LIMIT.getKey() + timestamp);
        if (aLong != null && aLong == 1) {
            // first increment for this second: set an expiry slightly larger than the window
            redisTemplate.expire(RedisKeyEnum.SYSTEM_FLOW_LIMIT.getKey() + timestamp, 2, TimeUnit.SECONDS);
        }
        return aLong;
    }

The expire command only runs on the first increment for a given second. Why does the expiration need to be slightly larger than the window?
Imagine setting it equal to the window: key A is created at time a with a one-second expiry. At time b, within the same second, another request targets the same key A, but due to network issues or other reasons the command reaches the Redis server more than a second later. Because the expiry equals one second, the old key A has already expired by then, so a brand-new key A is created at time b and its count starts over from 1, letting through more requests than the limit allows. An expiry slightly larger than the window keeps the original key alive long enough for such stragglers.

At this point another question needs considering: how does the code above behave once the limit is exceeded?

Assume only 100 requests are allowed in one second. For the 101st request the incr command is still executed in Redis, and so on for every request after it. These executions are meaningless: by the 101st request, the limit for this second has already been reached. So we need another store to record that the limit has been hit.

I chose AtomicLong to record the window that has already hit the limit. Let's analyze whether this is feasible:

  1. AtomicLong belongs to the java.util.concurrent.atomic package and uses CAS plus volatile to guarantee thread safety.
  2. For this requirement we only need to record the flag on each machine individually, so the distributed case does not need to be considered.

The analysis holds, and the code is shown below.

private final AtomicLong flag = new AtomicLong();

/**
 * System-wide flow limit
 */
public void systemFlowLimit() {
    // check whether flag already equals the current second
    if (flag.get() != System.currentTimeMillis() / 1000) {
        // The operations between flag.get() and flag.set() are not atomic as a whole,
        // so a number of threads (fewer than the total thread count) can still get in here.
        // That is, after the first thread sets flag to the current second-level timestamp,
        // some threads will already have passed the flag.get() check;
        // those threads will still perform the redis operation and the logging.
        Long count = systemLimitService.getSystemMessageCountAtomic();
        // reject once the count goes past the limit, so exactly `limit` calls pass
        if (count > systemProperties.getFlowLimit()) {
            // once over the limit, record the current second in flag
            flag.set(System.currentTimeMillis() / 1000);
            LOGGER.warn("system flow now is out of system flow limit,at:{}", System.currentTimeMillis() / 1000);
            throw new BusinessException(...);
        }
    } else {
        throw new BusinessException(...);
    }
}
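
For completeness, a call site might look like the following. This is a hypothetical example: FlowLimiter stands in for whatever component holds systemFlowLimit(), and the interceptor wiring is mine, not the original project's.

import org.springframework.stereotype.Component;
import org.springframework.web.servlet.HandlerInterceptor;

import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

@Component
public class FlowLimitInterceptor implements HandlerInterceptor {

    private final FlowLimiter flowLimiter;

    public FlowLimitInterceptor(FlowLimiter flowLimiter) {
        this.flowLimiter = flowLimiter;
    }

    @Override
    public boolean preHandle(HttpServletRequest request, HttpServletResponse response, Object handler) {
        // throws BusinessException when the per-second limit is exceeded;
        // a global exception handler can translate that into an HTTP 429
        flowLimiter.systemFlowLimit();
        return true;
    }
}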

Summary

The above has sorted out several ways of using Redis for rate limiting. The sliding window is the most frequently used algorithm, which is why so much ink went into explaining its implementation.

Of course, we can also run Lua scripts against Redis to implement rate limiting and other Redis operations.

One scenario I often run into is rate-limiting writes to a Redis queue: when the traffic limit is hit, part of the content in the queue needs to be deleted. A Lua script handles this elegantly, preserving the atomicity of the multiple Redis operations while also reducing network overhead.
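
A minimal sketch of that idea, assuming Spring Data Redis; the script body, key name, and limit are illustrative, not taken from a real project:

import org.springframework.data.redis.core.StringRedisTemplate;
import org.springframework.data.redis.core.script.DefaultRedisScript;

import java.util.Collections;

public class QueueTrimDemo {

    // KEYS[1] = queue key, ARGV[1] = max length; the script runs atomically in Redis
    private static final String SCRIPT =
            "local len = redis.call('LLEN', KEYS[1]) " +
            "if len > tonumber(ARGV[1]) then " +
            "  redis.call('LTRIM', KEYS[1], 0, tonumber(ARGV[1]) - 1) " +
            "  return 1 " +
            "end " +
            "return 0";

    private final StringRedisTemplate redisTemplate;

    public QueueTrimDemo(StringRedisTemplate redisTemplate) {
        this.redisTemplate = redisTemplate;
    }

    public boolean trimIfOverLimit(String queueKey, long maxLength) {
        DefaultRedisScript<Long> script = new DefaultRedisScript<>(SCRIPT, Long.class);
        Long result = redisTemplate.execute(script,
                Collections.singletonList(queueKey), String.valueOf(maxLength));
        return result != null && result == 1;
    }
}

Whatever the combination of commands, the point is the same: the whole script executes as one atomic unit on the server, with a single network round trip.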

