The previous article in this series: [Code Appreciation] Simple and Elegant JavaScript Code Snippets (1): Asynchronous Control

Flow control (also known as rate limiting; it controls the frequency of calls)

To keep the system running stably, the backend often limits how frequently an API may be called (for example, no more than 10 calls per user per second). To avoid wasting resources or being throttled by the backend, the front end should also actively limit how often it calls these APIs.

Rate limiting is especially necessary when the front end pulls lists in large batches, or has to call an API to query the details of every list item.

Here is a flow control utility function, wrapFlowControl . Its benefits are:

  • Simple to use and transparent to the caller: just wrap your original asynchronous function to get a rate-limited version that is used in exactly the same way as the original. const apiWithFlowControl = wrapFlowControl(callAPI, 2);
  • No call is ever dropped (unlike debounce or throttle ). Every call is executed and yields its own result; some calls are merely delayed to keep the frequency within the limit.

Example of use:

// Create a scheduling queue
const apiWithFlowControl = wrapFlowControl(callAPI, 2);

// ......

<button
  onClick={() => {
    const count = ++countRef.current;
    // Ask the scheduling queue to schedule one function call
    apiWithFlowControl(count).then((result) => {
      // do something with api result
    });
  }}
>
  Call apiWithFlowControl
</button>

codesandbox online example

The essence of this solution is to first create a scheduling queue via wrapFlowControl , and then, each time apiWithFlowControl is called, ask that queue to schedule one function call.

Code

Code implementation of wrapFlowControl :

const ONE_SECOND_MS = 1000;

/**
 * Controls how frequently a function is called: within any 1-second window,
 * fn is invoked at most maxExecPerSec times. If calls arrive faster than the
 * limit allows, some of them are delayed so that the actual call frequency
 * satisfies the constraint above.
 */
export function wrapFlowControl<Args extends any[], Ret>(
  fn: (...args: Args) => Promise<Ret>,
  maxExecPerSec: number
) {
  if (maxExecPerSec < 1) throw new Error(`invalid maxExecPerSec`);
  // Scheduling queue: records the tasks to be executed
  const queue: QueueItem[] = [];
  // Executions within the last second, used to check whether the frequency limit is exceeded
  const executed: ExecutedItem[] = [];

  return function wrapped(...args: Args): Promise<Ret> {
    return enqueue(args);
  };

  function enqueue(args: Args): Promise<Ret> {
    return new Promise((resolve, reject) => {
      queue.push({ args, resolve, reject });
      scheduleCheckQueue();
    });
  }

  function scheduleCheckQueue() {
    const nextTask = queue[0];
    // scheduleCheckQueue stops recursing only when the queue is empty
    if (!nextTask) return;
    cleanExecuted();
    if (executed.length < maxExecPerSec) {
      // The next task may execute only if fewer than maxExecPerSec calls ran in the last second
      queue.shift();
      execute(nextTask);
      scheduleCheckQueue();
    } else {
      // Schedule again after a while
      const earliestExecuted = executed[0];
      const now = new Date().valueOf();
      const waitTime = earliestExecuted.timestamp + ONE_SECOND_MS - now;
      setTimeout(() => {
        // By now earliestExecuted can be cleaned up, freeing quota for the next task
        scheduleCheckQueue();
      }, waitTime);
    }
  }

  function cleanExecuted() {
    const now = new Date().valueOf();
    const oneSecondAgo = now - ONE_SECOND_MS;
    while (executed[0]?.timestamp <= oneSecondAgo) {
      executed.shift();
    }
  }

  function execute({ args, resolve, reject }: QueueItem) {
    const timestamp = new Date().valueOf();
    fn(...args).then(resolve, reject);
    executed.push({ timestamp });
  }

  type QueueItem = {
    args: Args;
    resolve: (ret: Ret) => void;
    reject: (error: any) => void;
  };

  type ExecutedItem = {
    timestamp: number;
  };
}

Deciding the Function Logic Lazily

As the example above shows, when using wrapFlowControl you must define the logic of the asynchronous function ( callAPI ) up front in order to obtain the rate-limited function.

But in some special scenarios, we want to decide what the asynchronous function should do only at the moment it is called. In other words, we defer the decision from definition time to call time. For this we implement another utility function, createFlowControlScheduler .

Similar to the usage example above, DemoWrapFlowControl demonstrates this: whether to call API1 or API2 is decided only when the user clicks the button.

// Create a scheduling queue
const scheduleCallWithFlowControl = createFlowControlScheduler(2);

// ......

<button
  onClick={() => {
    const count = ++countRef.current;
    // The asynchronous operation to execute is decided only at call time,
    // then added to the scheduling queue.
    // The two asynchronous operations share a single flow control quota.
    if (count % 2 === 1) {
      scheduleCallWithFlowControl(() => callAPI1(count)).then(
        (result) => {
          // do something with api1 result
        }
      );
    } else {
      scheduleCallWithFlowControl(() => callAPI2(count)).then(
        (result) => {
          // do something with api2 result
        }
      );
    }
  }}
>
  Call scheduleCallWithFlowControl
</button>

codesandbox online example

The essence of this scheme is to first create a scheduling queue via createFlowControlScheduler , and then add each asynchronous task that scheduleCallWithFlowControl receives to that queue. The scheduling queue guarantees that every asynchronous task is called (in the order in which it was added) and that tasks are not executed more frequently than the specified limit.

The implementation of createFlowControlScheduler is very simple, building on the wrapFlowControl implementation above:

/**
 * Similar to wrapFlowControl, except that the task definition is deferred until
 * the wrapper is called, instead of being provided when the flowControl wrapper is created
 */
export function createFlowControlScheduler(maxExecPerSec: number) {
  return wrapFlowControl(async <T>(task: () => Promise<T>) => {
    return task();
  }, maxExecPerSec);
}

Further Thoughts

How could we modify our utility function to support a limit of "no more than n times per minute"? Or a limit of "no more than n tasks in progress at once"?
How could we make it support "no more than n times per second" and "no more than m times per minute" at the same time? How could we implement a more flexible scheduling queue, into which different scheduling restrictions can be plugged?

For example, combine "no more than 10 times per second" with "no more than 30 times per minute". The point of this is to allow short bursts of high-frequency calls (by relaxing the second-level limit), while preventing high-frequency calls from lasting too long (through the minute-level limit).
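On the multi-window question, here is a minimal sketch (my own illustration, not from the article; all names are made up) of a check that several frequency windows must pass at once. The scheduling loop in wrapFlowControl could use it in place of the single executed.length < maxExecPerSec test:

```typescript
// Sketch: check multiple rate-limit windows at once (names are illustrative).
type WindowLimit = { windowMs: number; maxExec: number };

// Returns true if starting one more task now would keep every window
// within its limit, given the timestamps of past executions.
function withinAllLimits(
  executedTimestamps: number[],
  limits: WindowLimit[],
  now: number
): boolean {
  return limits.every(({ windowMs, maxExec }) => {
    // Count executions that fall inside this window
    const countInWindow = executedTimestamps.filter(
      (t) => t > now - windowMs
    ).length;
    return countInWindow < maxExec;
  });
}
```

The cleanup step would also change: executed records may only be dropped once they fall outside the *longest* window, since the minute-level check still needs them.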
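For the "no more than n tasks in progress" variant, here is a sketch in the same wrapper style (again my own illustration; wrapConcurrencyControl is a hypothetical name). It limits concurrency rather than frequency: a queued task starts only while fewer than maxConcurrent wrapped calls are in flight, and each completed task frees a slot for the next one.

```typescript
// Sketch: limit the number of in-flight tasks instead of the call frequency.
export function wrapConcurrencyControl<Args extends any[], Ret>(
  fn: (...args: Args) => Promise<Ret>,
  maxConcurrent: number
) {
  if (maxConcurrent < 1) throw new Error(`invalid maxConcurrent`);
  type QueueItem = {
    args: Args;
    resolve: (ret: Ret) => void;
    reject: (error: any) => void;
  };
  const queue: QueueItem[] = [];
  let running = 0;

  return function wrapped(...args: Args): Promise<Ret> {
    return new Promise<Ret>((resolve, reject) => {
      queue.push({ args, resolve, reject });
      checkQueue();
    });
  };

  function checkQueue() {
    // Start queued tasks until the concurrency quota is used up
    while (running < maxConcurrent && queue.length > 0) {
      const { args, resolve, reject } = queue.shift()!;
      running++;
      // Promise.resolve().then(...) also catches synchronous throws from fn
      Promise.resolve()
        .then(() => fn(...args))
        .then(resolve, reject)
        .finally(() => {
          // A finished task frees one slot for the next queued task
          running--;
          checkQueue();
        });
    }
  }
}
```

Note that, compared with wrapFlowControl, no timestamps are needed here: the quota is returned by task completion rather than by the passage of time.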

Retry

We now have a way to limit call frequency on the front end. However, even with front-end flow control in place, calls may still fail:

  1. Front-end flow control cannot fully satisfy the back-end limits. The backend may impose an overall limit on the sum of all users' calls; for example, the total call frequency across all users may not exceed 10,000 per second, a limit the front end cannot align with on its own.
  2. Errors unrelated to flow control. For example, back-end service or network instability may make the API temporarily unavailable.

Since the front end cannot fully avoid these errors, it needs to retry in order to obtain the results. Here is a retry utility function, wrapRetry . Its benefits are:

  • Simple to use and transparent to the caller: just like the flow control utility above, you only need to wrap your original asynchronous function to get an automatically retrying version that is used in the same way as the original.
  • Supports customizing which errors are retried, the number of retries, and the retry wait time.

How to use:

const apiWithRetry = wrapRetry(
  callAPI,
  (error, retryCount) => error.type === "throttle" && retryCount <= 5
);

It is used in a similar way to wrapFlowControl .

Code

Code implementation of wrapRetry :

/**
 * Retries after catching specific failures. Suitable for operations without
 * side effects. For example, if a data request may be rejected by rate
 * limiting, this can be used to retry it automatically.
 */
export function wrapRetry<Args extends any[], Ret>(
  fn: (...args: Args) => Promise<Ret>,
  shouldRetry: (error: any, retryCount: number) => boolean,
  startRetryWait: number = 1000
) {
  return async function wrapped(...args: Args): Promise<Ret> {
    return callFn(args, startRetryWait, 0);
  };

  async function callFn(
    args: Args,
    wait: number,
    retryCount: number
  ): Promise<Ret> {
    try {
      return await fn(...args);
    } catch (error) {
      if (shouldRetry(error, retryCount)) {
        if (wait > 0) await timeout(wait);
        // nextWait is 1x to 2x of wait
        // if startRetryWait is 0, wait is always 0
        const nextWait = wait * (Math.random() + 1);
        return callFn(args, nextWait, retryCount + 1);
      } else {
        throw error;
      }
    }
  }
}

function timeout(wait: number) {
  return new Promise((res) => {
    setTimeout(() => {
      res(null);
    }, wait);
  });
}

Here we added an optimization: the retry wait time gradually increases. For example, the wait before the second retry is 1 to 2 times the wait before the first retry. This reduces the number of calls as much as possible and avoids putting more pressure on a backend that is already unstable.

We deliberately did not choose a fixed 2x growth factor, to keep the retry wait from becoming so long that it hurts the user experience.
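To make this growth concrete, here is a small sketch (not from the article) of the bounds implied by the nextWait = wait * (Math.random() + 1) rule: each retry multiplies the wait by a factor in [1, 2), so the wait before the n-th retry lies between startRetryWait and startRetryWait * 2^(n-1).

```typescript
// Illustrative helper: bounds on the wait before retry number n (1-based),
// under the nextWait = wait * (Math.random() + 1) rule of wrapRetry.
function retryWaitBounds(startRetryWait: number, n: number) {
  return {
    min: startRetryWait, // every factor is >= 1, so the wait never shrinks
    max: startRetryWait * 2 ** (n - 1), // every factor is < 2 (exclusive bound)
  };
}
```

So with the default startRetryWait of 1000 ms, the third retry waits somewhere between 1 and 4 seconds, rather than the fixed 4 seconds a strict doubling would give.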

Composability

It is worth mentioning that automatic retry can be combined with the flow control utility above (because both are transparent to the caller and do not change how the function is used):

const apiWithFlowControl = wrapFlowControl(callAPI, 2);
const apiWithRetry = wrapRetry(
  apiWithFlowControl,
  (error, retryCount) => error.type === "throttle" && retryCount <= 5
);

Note that flow control is wrapped on the inside and retry on the outside, which ensures that requests initiated by retries are also governed by flow control.


csRyan