
Hi everyone, this is Zhang Jintao.

In this article, I will introduce a tool called k6, which has no direct relationship with K8s. It is an open source load testing tool.

The story behind k6

In August 2016, k6 released its first version on GitHub, and an excellent open source load testing tool entered people's field of vision.

June 2021 was a big day for both Grafana and k6: Grafana Labs acquired k6.

In fact, the connection between Grafana and k6 goes back two years earlier.

In 2019, while load testing the short-lived token refresh behavior in Grafana 6.0, Grafana Labs went through a round of technical selection.

Since most of Grafana Labs' back-end software is implemented in Go, k6 happened to meet the requirements of being open source and written in Go, while its load tests are written in JS (which Grafana's front-end framework and UI already use). As a result, starting from Grafana 6.0, k6 has been helping Grafana developers and testers track down bugs.


Figure 1: k6 joined Grafana Labs

Various load testing tools

A handy automated load testing tool can greatly improve developers' code quality and efficiency.

The following figure shows some of the more common load testing tools. Looking at them on GitHub, the most frequently updated and active projects at the moment are Gatling, JMeter and k6.


Figure 2: Load testing tools

How to choose among them is essentially a comparison of tool efficiency, mainly considered from the following two aspects:

  • Tool performance
  • Tool experience

The figure below provides a simple comparison of the above tools.


Here I mainly compare three of the more active projects.

  • JMeter - Friends who are familiar with Java probably know this tool best. Because it has been around the longest, JMeter has the most comprehensive feature set of the three, and its integrations and add-on components are better. Most people have also heard of BlazeMeter, a SaaS service built on top of it. This also brings a big problem: it is complex to use and not lightweight enough;
  • Gatling - Gatling also has a SaaS product, Gatling Frontline. As far as the barrier to entry is concerned, though, JS is much lower than Scala;
  • k6 - k6 was originally developed and maintained by several employees of the SaaS service Load Impact. A low barrier to entry (JS), simpler parameterization, and the concept of "load testing as code" also keep its maintenance cost low. Its future looks promising.


Figure 3: Comparison of the 3 popular tools

Execution output

(screenshots of k6's execution output)

Install k6

k6 is developed in Go, so installation is very simple: just download the binary directly from its GitHub Releases page. For example:

(MoeLove) ➜ wget -q https://github.com/grafana/k6/releases/download/v0.35.0/k6-v0.35.0-linux-amd64.tar.gz 
(MoeLove) ➜ tar -xzf k6-v0.35.0-linux-amd64.tar.gz 
(MoeLove) ➜ ls
k6-v0.35.0-linux-amd64  k6-v0.35.0-linux-amd64.tar.gz
(MoeLove) ➜ mv ./k6-v0.35.0-linux-amd64/k6 ~/bin/k6
(MoeLove) ➜ k6 version
k6 v0.35.0 (2021-11-17T09:53:18+0000/1c44b2d, go1.17.3, linux/amd64)

Or you can use its Docker image directly:

➜  ~ docker run  --rm loadimpact/k6  version   
k6 v0.35.0 (2021-11-17T09:53:03+0000/1c44b2d, go1.17.3, linux/amd64)
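
To actually run a script with the Docker image, one option is to pipe the script in over stdin, a pattern the official image supports. A minimal sketch, where script.js is just a placeholder for your own test script:

➜  ~ docker run --rm -i loadimpact/k6 run - <script.js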

Core concepts

There are not many concepts in k6. The most important one is virtual users (VUs), which are what actually execute the test; in essence, a VU is a concurrent task.

When running a test with k6, you can specify the number of VUs with --vus or -u; the default is 1.
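
Besides the command-line flags, the number of VUs (and the test duration) can also be declared inside the script through the exported options object. Below is a minimal sketch; the 10 VUs and 30s duration are arbitrary values chosen only for illustration:

import http from "k6/http";

// Declaring options in the script is equivalent to passing
// --vus/-u and --duration/-d on the command line; CLI flags,
// if given, take precedence over these values.
export const options = {
  vus: 10,
  duration: "30s",
};

export default function () {
  http.get("https://test-api.k6.io/public/crocodiles/");
}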

Hands-on practice

Personally, I feel k6 offers one of the better user experiences among the current mainstream load testing tools. It uses JS (ES6) as its configuration language, which is quite convenient. Let's work through some examples.

Simple request

For HTTP requests, we only need to import http from k6/http.

Note that a k6 script must export a default function, which serves as the entry point, similar to the familiar main function.

import http from "k6/http";

export default function(){
  http.get("https://test-api.k6.io/public/crocodiles/")
}

The output after execution is as follows:

(MoeLove) ➜ k6 run simple_http_get.js 

          /\      |‾‾| /‾‾/   /‾‾/   
     /\  /  \     |  |/  /   /  /    
    /  \/    \    |     (   /   ‾‾\  
   /          \   |  |\  \ |  (‾)  | 
  / __________ \  |__| \__\ \_____/ .io

  execution: local
     script: simple_http_get.js
     output: -

  scenarios: (100.00%) 1 scenario, 1 max VUs, 10m30s max duration (incl. graceful stop):
           * default: 1 iterations for each of 1 VUs (maxDuration: 10m0s, gracefulStop: 30s)


running (00m01.1s), 0/1 VUs, 1 complete and 0 interrupted iterations
default ✓ [======================================] 1 VUs  00m01.1s/10m0s  1/1 iters, 1 per VU

     data_received..................: 6.3 kB 5.7 kB/s
     data_sent......................: 634 B  578 B/s
     http_req_blocked...............: avg=848.34ms min=848.34ms med=848.34ms max=848.34ms p(90)=848.34ms p(95)=848.34ms
     http_req_connecting............: avg=75.59µs  min=75.59µs  med=75.59µs  max=75.59µs  p(90)=75.59µs  p(95)=75.59µs 
     http_req_duration..............: avg=247.46ms min=247.46ms med=247.46ms max=247.46ms p(90)=247.46ms p(95)=247.46ms
       { expected_response:true }...: avg=247.46ms min=247.46ms med=247.46ms max=247.46ms p(90)=247.46ms p(95)=247.46ms
     http_req_failed................: 0.00%  ✓ 0        ✗ 1  
     http_req_receiving.............: avg=455.24µs min=455.24µs med=455.24µs max=455.24µs p(90)=455.24µs p(95)=455.24µs
     http_req_sending...............: avg=103.77µs min=103.77µs med=103.77µs max=103.77µs p(90)=103.77µs p(95)=103.77µs
     http_req_tls_handshaking.......: avg=848.07ms min=848.07ms med=848.07ms max=848.07ms p(90)=848.07ms p(95)=848.07ms
     http_req_waiting...............: avg=246.9ms  min=246.9ms  med=246.9ms  max=246.9ms  p(90)=246.9ms  p(95)=246.9ms 
     http_reqs......................: 1      0.911502/s
     iteration_duration.............: avg=1.09s    min=1.09s    med=1.09s    max=1.09s    p(90)=1.09s    p(95)=1.09s   
     iterations.....................: 1      0.911502/s
     vus............................: 1      min=1      max=1
     vus_max........................: 1      min=1      max=1

By default, k6 outputs the execution results to the terminal, together with a set of built-in metrics.

These metrics are mostly self-explanatory; their meaning can be understood from the name, so I won't go through them one by one here.

Requests with checks

We can also add some checks to a request to determine whether the response from the interface meets our expectations, as follows:

import http from "k6/http";
import { check, group } from "k6";

export default function() {

    group("GET", function() {
        let res = http.get("http://httpbin.org/get?verb=get");
        check(res, {
            "status is 200": (r) => r.status === 200,
            "is verb correct": (r) => r.json().args.verb === "get",
        });
    });
}

The check function is imported to perform the assertion logic. The => used above is just ES6 arrow-function shorthand, and it can be expanded into a normal function. For example:

import http from "k6/http";
import { check, group } from "k6";

export default function() {

    group("GET", function() {
        let res = http.get("http://httpbin.org/get?verb=get");
        check(res, {
            "status is 200": function(r) {
                return r.status === 200;
            },
            "is verb correct": (r) => r.json().args.verb === "get",
        });
    });
}

After executing this script with k6, the output includes the following in addition to what we saw before:

     █ GET

       ✓ status is 200
       ✓ is verb correct

     checks.........................: 100.00% ✓ 2        ✗ 0

From this, we can see whether the checks against the interface under test have passed (which can also be used to determine whether the interface is currently serving requests normally).

Custom metrics

Next, let's try defining some custom metrics during the load test. We just need to import the metric types we want from k6/metrics; they are basically the same as the metric types in Prometheus.

Here I have added two metrics: a testCounter used to count the total number of tests performed, and a passedRate used to calculate the pass rate.

import http from "k6/http";
import { Counter, Rate } from "k6/metrics";
import { check, group } from "k6";


let testCounter = new Counter("test_counter");
let passedRate = new Rate("passed_rate");

export default function() {

    group("GET", function() {
        let res = http.get("http://httpbin.org/get?verb=get");
        let passed = check(res, {
            "status is 200": (r) => r.status === 200,
            "is verb correct": (r) => r.json().args.verb === "get",
        });

        testCounter.add(1);
        passedRate.add(passed);
    });
}

Here we set 2 VUs and a duration of 10s. The output after execution is as follows:

(MoeLove) ➜ k6 run -u 2 -d 10s  simple_custom_metrics.js
...
  execution: local
     script: simple_custom_metrics.js
     output: -

  scenarios: (100.00%) 1 scenario, 2 max VUs, 40s max duration (incl. graceful stop):
           * default: 2 looping VUs for 10s (gracefulStop: 30s)


running (10.4s), 0/2 VUs, 36 complete and 0 interrupted iterations
default ✓ [======================================] 2 VUs  10s

     █ GET

       ✓ status is 200
       ✓ is verb correct

     checks.........................: 100.00% ✓ 72       ✗ 0  
     data_received..................: 18 kB   1.7 kB/s
     data_sent......................: 3.9 kB  372 B/s
     group_duration.................: avg=567.35ms min=440.56ms med=600.52ms max=738.73ms p(90)=620.88ms p(95)=655.17ms
     http_req_blocked...............: avg=266.72µs min=72.33µs  med=135.14µs max=776.66µs p(90)=644.4µs  p(95)=719.96µs
     http_req_connecting............: avg=170.04µs min=45.51µs  med=79.9µs   max=520.69µs p(90)=399.41µs p(95)=463.55µs
     http_req_duration..............: avg=566.82ms min=439.69ms med=600.31ms max=738.16ms p(90)=620.52ms p(95)=654.61ms
       { expected_response:true }...: avg=566.82ms min=439.69ms med=600.31ms max=738.16ms p(90)=620.52ms p(95)=654.61ms
     http_req_failed................: 0.00%   ✓ 0        ✗ 36 
     http_req_receiving.............: avg=309.13µs min=122.4µs  med=231.72µs max=755.3µs  p(90)=597.95µs p(95)=641.92µs
     http_req_sending...............: avg=80.69µs  min=20.47µs  med=38.91µs  max=235.1µs  p(90)=197.87µs p(95)=214.79µs
     http_req_tls_handshaking.......: avg=0s       min=0s       med=0s       max=0s       p(90)=0s       p(95)=0s      
     http_req_waiting...............: avg=566.43ms min=439.31ms med=600.16ms max=737.8ms  p(90)=620.19ms p(95)=654.18ms
     http_reqs......................: 36      3.472534/s
     iteration_duration.............: avg=567.38ms min=440.62ms med=600.53ms max=738.75ms p(90)=620.89ms p(95)=655.2ms 
     iterations.....................: 36      3.472534/s
     passed_rate....................: 100.00% ✓ 36       ✗ 0  
     test_counter...................: 36      3.472534/s
     vus............................: 2       min=2      max=2
     vus_max........................: 2       min=2      max=2

You can see that there are two more lines in the output:

     passed_rate....................: 100.00% ✓ 36       ✗ 0  
     test_counter...................: 36      3.472534/s

This is in line with our expectations.

But this still isn't very intuitive, so we can try using k6 Cloud to visualize the results. After logging in, just pass -o cloud when running k6 and you can see all the metrics in the cloud.
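
For reference, streaming results to k6 Cloud might look something like this. This is only a sketch and assumes you already have a k6 Cloud account token (the placeholder below is something you would replace with your own):

(MoeLove) ➜ k6 login cloud --token <YOUR_K6_CLOUD_TOKEN>
(MoeLove) ➜ k6 run -o cloud simple_custom_metrics.js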


Summary

This article has mainly introduced k6, a modern load testing tool with a relatively good user experience. I am currently planning to introduce it into our project's CI, so that we can understand how each change to a core component affects the project's performance.

If that goes smoothly, I will also share how to apply k6 in a CI environment, so stay tuned.


Welcome to subscribe to my WeChat official account 【MoeLove】.

