
Preface

Do you find writing services in C++ too tiring, yet you're hooked on C++'s raw performance and can't let it go? As a veteran C++ programmer (you can find my C++ projects from more than ten years ago on GitHub: https://github.com/kevwan ), I recently heard a friend describe the C++ framework he wrote, claiming that all kinds of C++ services can be built with minimal code. That made me curious, so I went and dug into it, and it really is interesting!

Hands-on (the good stuff)

Without further ado, let's see how to implement a high-performance HTTP server in 10 lines of C++ code that easily reaches hundreds of thousands of QPS. As Linus said: talk is cheap, show me the code ↓

#include <stdio.h>
#include "workflow/WFHttpServer.h"

int main() {
    // The process function: fill in the response for each incoming request.
    WFHttpServer server([](WFHttpTask *task) {
        task->get_resp()->append_output_body("Hello World!");
    });
    if (server.start(8888) == 0) { // start() is non-blocking
        getchar(); // press "Enter" to end.
        server.stop();
    }
    return 0;
}

This server is built on workflow. Installing and compiling it is very simple. Taking Linux as an example, after pulling down the code, a single make completes the build:

➜ git clone https://github.com/sogou/workflow
➜ cd workflow
➜ make
➜ cd tutorial
➜ make
➜ ./helloworld

The code lives in the tutorial directory. The compiled helloworld binary can be run directly; it listens on port 8888 and can be accessed with curl:

➜ curl -i http://localhost:8888
HTTP/1.1 200 OK
Content-Length: 25
Connection: Keep-Alive

Hello World!

Now let's walk through the 10 lines of code above in detail:

  1. We chose the HTTP protocol, so a WFHttpServer is constructed;
  2. Each network interaction is a task; since the protocol is HTTP, it is a WFHttpTask;
  3. For a server, the task is to fill in the response after receiving the request; the request and response can be obtained via task->get_req() and task->get_resp() (a small sketch follows this list);
  4. The processing logic lives in a function (the lambda above) that describes what to do once a message is received; here it simply replies "Hello World!";
  5. Starting and stopping the server are two simple APIs, start() and stop(). The getchar() keeps main() from returning, because start() is non-blocking: workflow is a pure asynchronous framework.
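As a small illustration of points 2 and 3, here is a sketch of a slightly richer process function that reads the request before filling in the response. It assumes the protocol::HttpRequest / HttpResponse accessors get_method(), get_request_uri() and add_header_pair() behave as in the workflow tutorials; treat it as a sketch rather than canonical usage.

#include <stdio.h>
#include <string>
#include "workflow/WFHttpServer.h"

int main() {
    WFHttpServer server([](WFHttpTask *task) {
        protocol::HttpRequest *req = task->get_req();
        protocol::HttpResponse *resp = task->get_resp();

        // Echo the method and URI of the incoming request back to the client.
        std::string body = std::string(req->get_method()) + " " +
                           req->get_request_uri() + "\n";

        resp->add_header_pair("Content-Type", "text/plain");
        resp->append_output_body(body.c_str(), body.size());
    });

    if (server.start(8888) == 0) {
        getchar();
        server.stop();
    }
    return 0;
}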

Being purely asynchronous is what gives this HTTP server its high performance:

  • First, it is a multi-threaded service

    If the process function did anything blocking after receiving a request (waiting on locks, issuing I/O requests, heavy computation, and so on) and there were only a single thread, there would be no thread left to handle the next user's request; hence requests are handled on multiple threads.

  • Second, network threads and execution threads have a well-designed scheduling strategy

    No matter how many handler threads are occupied, any time-consuming work the server function wants to do must not tie up the network threads; a sketch of how to offload such work follows this list.

  • Third, taking Linux as an example, the encapsulation of epoll is efficient and easy to use

    If a service only needs to support 10,000 QPS, the underlying layer is quite simple to implement; but to reach 100,000 or even close to a million QPS, the demands on the underlying I/O model for sending and receiving are very high.
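To make the second point concrete: time-consuming work should be pushed off the handler path instead of blocking it. A common workflow pattern is to wrap the computation in a "go" task and chain it onto the current task's series, so the reply goes out only after the computation finishes. The sketch below assumes WFTaskFactory::create_go_task() and series_of() behave as described in the workflow tutorials; the workload itself is made up for illustration.

#include <stdio.h>
#include "workflow/WFHttpServer.h"
#include "workflow/WFTaskFactory.h"

// A deliberately CPU-heavy function that we do not want to run on a network thread.
static void heavy_work(WFHttpTask *task) {
    volatile unsigned long long sum = 0;
    for (unsigned long long i = 0; i < 100000000ULL; i++)
        sum += i;
    task->get_resp()->append_output_body("done\n");
}

int main() {
    WFHttpServer server([](WFHttpTask *task) {
        // Hand the heavy computation to a go task and push it into this
        // task's series; the response is sent when the series finishes.
        WFGoTask *go = WFTaskFactory::create_go_task("compute", heavy_work, task);
        series_of(task)->push_back(go);
    });

    if (server.start(8888) == 0) {
        getchar();
        server.stop();
    }
    return 0;
}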

Let's take a look at how workflow achieves these high concurrency capabilities:

Based on the architecture above, workflow easily reaches hundreds of thousands of QPS with high throughput, low cost, and fast development, and it supports all of Sogou's back-end online services. For the detailed implementation, please refer to the workflow source code. Now let's speak with data. Comparing against the world-renowned high-performance HTTP server nginx and the domestic open-source framework pioneer brpc, here is the relationship between QPS and concurrency at a fixed data length:

The above is a wrk stress test run on the same machine with the same variables; the machine configuration, parameters and stress-test tool code are available on GitHub. With the data length held constant, QPS rises as concurrency increases and then levels off. Throughout this range, workflow holds a clear advantage over both nginx and brpc. In particular, for the two curves with data lengths of 64 and 512, once concurrency is sufficient workflow sustains around 500,000 QPS.
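If you want to try a stress test of your own, the number of network (poller) threads and handler threads can be tuned before the server starts. The sketch below assumes the WFGlobalSettings / WORKFLOW_library_init interface with its poller_threads and handler_threads fields; the numbers are placeholders for illustration, not the settings used in the benchmark above.

#include <stdio.h>
#include "workflow/WFGlobal.h"
#include "workflow/WFHttpServer.h"

int main() {
    // Start from the library defaults and adjust only the thread counts.
    struct WFGlobalSettings settings = GLOBAL_SETTINGS_DEFAULT;
    settings.poller_threads = 8;    // network (epoll) threads
    settings.handler_threads = 32;  // threads that run the process function
    WORKFLOW_library_init(&settings); // call before creating any server or task

    WFHttpServer server([](WFHttpTask *task) {
        task->get_resp()->append_output_body("Hello World!");
    });

    if (server.start(8888) == 0) {
        getchar();
        server.stop();
    }
    return 0;
}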

To sum up

workflow has earned 4k stars on GitHub in just half a year of being open source. Besides being simple and high-performance, it has many other features. If you are curious about other usage scenarios, or want to try a stress test yourself and feel the heart-rate acceleration that high QPS brings, head over to the workflow GitHub repo to find more features and ideas.

Project address

https://github.com/sogou/workflow

You are welcome to use workflow, and a star would be much appreciated.

WeChat exchange

Follow the "Microservice Practice" WeChat official account and reply to it to obtain the QR code for the microservices community group.

For the go-zero series of articles, see the "Microservice Practice" official account.
