Preface
The previous section gave an introduction to https://github.com/houbb/junitperf .
In this section, we analyze its implementation from the perspective of source code.
How is it done?
Junit Rules
If you use Junit4, the framework itself is certainly no stranger to you, but have you ever heard of junit rules?
The key to implementing a performance testing framework on top of Junit4 is understanding Junit Rules.
Official document: https://github.com/junit-team/junit4/wiki/Rules
The role of Rules
Rules make it possible to flexibly add to or redefine the behavior of each test method in a test class.
Testers can reuse or extend the rules JUnit provides, or write their own.
Custom rule
ps: The following content is from an official example.
Most custom rules can be implemented as extensions of ExternalResource rules.
However, if you need more information about the test class or method in question, you need to implement the TestRule interface.
import org.junit.rules.TestRule;
import org.junit.runner.Description;
import org.junit.runners.model.Statement;

public class IdentityRule implements TestRule {

    @Override
    public Statement apply(final Statement base, final Description description) {
        return base;
    }

}
Of course, the real power of implementing TestRule comes from combining a custom constructor, methods added to the rule class for use in tests, and wrapping the provided Statement in a new Statement.
For example, consider the following test rule that provides a named logger for each test:
package org.example.junit;

import java.util.logging.Logger;

import org.junit.rules.TestRule;
import org.junit.runner.Description;
import org.junit.runners.model.Statement;

public class TestLogger implements TestRule {

    private Logger logger;

    public Logger getLogger() {
        return this.logger;
    }

    @Override
    public Statement apply(final Statement base, final Description description) {
        return new Statement() {
            @Override
            public void evaluate() throws Throwable {
                logger = Logger.getLogger(description.getTestClass().getName() + '.' + description.getDisplayName());
                base.evaluate();
            }
        };
    }

}
Then this rule can be used in the following way:
import java.util.logging.Logger;

import org.example.junit.TestLogger;
import org.junit.Rule;
import org.junit.Test;

public class MyLoggerTest {

    @Rule
    public final TestLogger logger = new TestLogger();

    @Test
    public void checkOutMyLogger() {
        final Logger log = logger.getLogger();
        log.warning("Your test is showing!");
    }

}
Definition and use
Looking at the example above, we can see that custom rules in junit4 are fairly simple.
Definition: implement the TestRule interface.
Usage: declare the rule as a public field and annotate it with @Rule.
Simple, isn't it?
Well, now that you have mastered 1+1=2, let's move on to the Taylor expansion.
Performance test algorithm flow
How do we measure the execution time of a method?
This should look familiar: record a timestamp before the method runs and another after it returns; the difference is the elapsed time.
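A minimal, self-contained sketch of this idea (demo code, not the framework's source):

public class TimeCostDemo {
    public static void main(String[] args) throws InterruptedException {
        long startNs = System.nanoTime();
        Thread.sleep(20); // stands in for the method under test
        long costNs = System.nanoTime() - startNs;
        System.out.println("cost: " + costNs / 1_000_000 + " ms");
    }
}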
How do we simulate calls from multiple threads?
By spawning Java threads that each invoke the method repeatedly.
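A minimal sketch, assuming a simple deadline-based window (the framework below uses a volatile flag instead):

public class ConcurrentCallDemo {
    public static void main(String[] args) {
        final long deadline = System.currentTimeMillis() + 1000; // 1s measurement window
        for (int i = 0; i < 4; i++) { // 4 simulated callers
            new Thread(() -> {
                // each thread keeps calling the method under test until the window ends
                while (System.currentTimeMillis() < deadline) {
                    System.out.println("hello world"); // stands in for the method under test
                }
            }).start();
        }
    }
}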
How do we generate the report file?
Aggregate the statistics above across their various dimensions and render them into HTML or other formats.
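For illustration only (the real framework renders a much richer template), a hypothetical reporter might dump the aggregated numbers into an HTML file like this:

import java.io.FileWriter;
import java.io.IOException;

public class HtmlReportDemo {
    public static void main(String[] args) throws IOException {
        // hypothetical sample statistics
        long evaluationCount = 48;
        long errorCount = 0;
        double avgLatencyMs = 20.8;
        try (FileWriter writer = new FileWriter("report.html")) {
            writer.write("<html><body><h1>Performance Report</h1>"
                    + "<p>executions: " + evaluationCount + "</p>"
                    + "<p>errors: " + errorCount + "</p>"
                    + "<p>avg latency: " + avgLatencyMs + " ms</p>"
                    + "</body></html>");
        }
    }
}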
What we need to do is combine the points above and wire them into Junit4 Rules.
It doesn't sound difficult, does it?
Next, let's take a look at the implementation source code.
Getting started with Rule
Getting started example
Let's first look at an introductory example based on junit4:
public class HelloWorldTest {

    @Rule
    public JunitPerfRule junitPerfRule = new JunitPerfRule();

    /**
     * Single thread, run for 1000ms; results are output as HTML by default.
     * @throws InterruptedException if any
     */
    @Test
    @JunitPerfConfig(duration = 1000)
    public void helloWorldTest() throws InterruptedException {
        System.out.println("hello world");
        Thread.sleep(20);
    }

}
JunitPerfRule is the custom rule we mentioned earlier.
JunitPerfRule
The implementation is as follows:
public class JunitPerfRule implements TestRule {

    //region private fields
    // internal fields omitted
    //endregion

    @Override
    public Statement apply(Statement statement, Description description) {
        Statement activeStatement = statement;
        JunitPerfConfig junitPerfConfig = description.getAnnotation(JunitPerfConfig.class);
        JunitPerfRequire junitPerfRequire = description.getAnnotation(JunitPerfRequire.class);

        if (ObjectUtil.isNotNull(junitPerfConfig)) {
            // Group test contexts by test class
            ACTIVE_CONTEXTS.putIfAbsent(description.getTestClass(), new HashSet<EvaluationContext>());

            EvaluationContext evaluationContext = new EvaluationContext(description.getMethodName(), DateUtil.getSimpleDateStr());
            evaluationContext.loadConfig(junitPerfConfig);
            evaluationContext.loadRequire(junitPerfRequire);
            ACTIVE_CONTEXTS.get(description.getTestClass()).add(evaluationContext);

            activeStatement = new PerformanceEvaluationStatement(evaluationContext,
                    statement,
                    statisticsCalculator,
                    reporterSet,
                    ACTIVE_CONTEXTS.get(description.getTestClass()),
                    description.getTestClass()
            );
        }
        return activeStatement;
    }

}
The main flow: when a test method is about to execute, the rule first reads the @JunitPerfConfig and @JunitPerfRequire annotations on the method, and then runs it with the corresponding statistics collection.
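Judging from the getters used in the statement below (getConfigThreads, getConfigWarmUp, getConfigDuration), the annotation presumably exposes members along the lines of threads/warmUp/duration; the exact member names are in the repo. A hedged usage sketch:

// inside a test class that declares the @Rule field shown earlier
@Test
@JunitPerfConfig(threads = 4, warmUp = 1000, duration = 2000) // presumed member names
@JunitPerfRequire(average = 30) // hypothetical requirement: average latency <= 30ms
public void helloWorldPerfTest() throws InterruptedException {
    Thread.sleep(20);
}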
Statement
Statement is the core execution object in junit4.
As the code above shows, when the annotation is present, the original statement is wrapped into a PerformanceEvaluationStatement built from the annotation information.
The core implementation of PerformanceEvaluationStatement is as follows:
/**
 * Performance test statement.
 * @author 老马啸西风
 * @see com.github.houbb.junitperf.core.rule.JunitPerfRule used by this rule
 */
public class PerformanceEvaluationStatement extends Statement {

    // internal fields omitted

    @Override
    public void evaluate() throws Throwable {
        List<PerformanceEvaluationTask> taskList = new LinkedList<>();
        try {
            EvaluationConfig evaluationConfig = evaluationContext.getEvaluationConfig();

            // create the number of worker threads configured in the annotation
            for (int i = 0; i < evaluationConfig.getConfigThreads(); i++) {
                // initialize the evaluation task
                PerformanceEvaluationTask task = new PerformanceEvaluationTask(evaluationConfig.getConfigWarmUp(),
                        statement, statisticsCalculator);
                Thread t = FACTORY.newThread(task);
                taskList.add(task);
                // run the task in a child thread
                t.start();
            }

            // the main thread sleeps for the configured duration
            Thread.sleep(evaluationConfig.getConfigDuration());
        } finally {
            // note: a stopped task may already be mid-execution (not yet finished),
            // so the main thread can move on while that task is still completing its current iteration
            for (PerformanceEvaluationTask task : taskList) {
                task.setContinue(false); // stop the running tasks
            }
        }

        // update statistics
        evaluationContext.setStatisticsCalculator(statisticsCalculator);
        evaluationContext.runValidation();

        generateReportor();
    }

    /**
     * Generate reports.
     */
    private synchronized void generateReportor() {
        for (Reporter reporter : reporterSet) {
            reporter.report(testClass, evaluationContextSet);
        }
    }

}
This is the core of the implementation; the main process is as follows:
(1) Create the corresponding task sub-thread according to the configuration
(2) According to the configuration, initialize the subtasks and execute them
(3) The main thread sleeps for the configured duration
(4) When the main thread wakes up, it flips each task's flag to stop the sub-threads and updates the statistics
(5) Generate corresponding test report files based on statistical information
PerformanceEvaluationTask
The implementation of the subtask is also worth a look. Its core is as follows:
public class PerformanceEvaluationTask implements Runnable {

    /**
     * Warm-up time (ns)
     */
    private long warmUpNs;

    /**
     * junit statement
     */
    private final Statement statement;

    /**
     * Statistics calculator
     */
    private StatisticsCalculator statisticsCalculator;

    /**
     * Flag: whether to keep running
     */
    private volatile boolean isContinue;

    public PerformanceEvaluationTask(long warmUpNs, Statement statement, StatisticsCalculator statisticsCalculator) {
        this.warmUpNs = warmUpNs;
        this.statement = statement;
        this.statisticsCalculator = statisticsCalculator;
        this.isContinue = true; // keep running by default when created
    }

    @Override
    public void run() {
        long startTimeNs = System.nanoTime();
        long startMeasurements = startTimeNs + warmUpNs;
        while (isContinue) {
            evaluateStatement(startMeasurements);
        }
    }

    /**
     * Run one evaluation.
     * @param startMeasurements the time when measurement should start
     */
    private void evaluateStatement(long startMeasurements) {
        //0. if the continue flag is false, stop.
        if (!isContinue) {
            return;
        }

        //1. warm-up phase
        if (nanoTime() < startMeasurements) {
            try {
                statement.evaluate();
            } catch (Throwable throwable) {
                // IGNORE
            }
        } else {
            long startTimeNs = nanoTime();
            try {
                statement.evaluate();
                statisticsCalculator.addLatencyMeasurement(getCostTimeNs(startTimeNs));
                statisticsCalculator.incrementEvaluationCount();
            } catch (InterruptedException e) { // NOSONAR
                // IGNORE - no metrics
            } catch (Throwable throwable) {
                statisticsCalculator.incrementEvaluationCount();
                statisticsCalculator.incrementErrorCount();
                statisticsCalculator.addLatencyMeasurement(getCostTimeNs(startTimeNs));
            }
        }
    }

    /**
     * Get the elapsed time (in nanoseconds).
     * @param startTimeNs start time (ns)
     * @return elapsed time (ns)
     */
    private long getCostTimeNs(long startTimeNs) {
        long currentTimeNs = System.nanoTime();
        return currentTimeNs - startTimeNs;
    }

    //region getter & setter
    public boolean isContinue() {
        return isContinue;
    }

    public void setContinue(boolean aContinue) {
        isContinue = aContinue;
    }
    //endregion

}
This task is responsible for the actual measurement: it records the latency of each invocation and counts the number of successes, errors, and so on.
The isContinue flag is declared volatile, which makes it easy to end the loop once the main thread's sleep is over.
ps: One problem can still be found here. Once statement.evaluate() has started executing, it cannot be stopped mid-call; flipping the flag only takes effect on the next loop iteration. This is an area that could be improved.
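One possible improvement, sketched here as demo code rather than the library's implementation: keep references to the worker threads and interrupt them in addition to flipping the flag, so a call blocked in sleep or wait stops promptly.

import java.util.ArrayList;
import java.util.List;

public class InterruptibleStopDemo {
    public static void main(String[] args) throws InterruptedException {
        List<Thread> threads = new ArrayList<>();
        for (int i = 0; i < 4; i++) {
            Thread t = new Thread(() -> {
                while (!Thread.currentThread().isInterrupted()) {
                    try {
                        Thread.sleep(20); // stands in for statement.evaluate()
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt(); // restore the flag and exit the loop
                    }
                }
            });
            threads.add(t);
            t.start();
        }
        Thread.sleep(1000); // the measurement window
        for (Thread t : threads) {
            t.interrupt(); // stops even a thread blocked mid-sleep, unlike a plain volatile flag
        }
    }
}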
Summary
This article started from junit rules and analyzed the implementation principle of the whole performance testing tool.
Overall, the idea is not hard to realize; every complex application is built from simple parts.
In order to make it easier for everyone to understand, a lot of simplifications have been made in the source code part.
If you want to get the complete source code, please go to the open source address: https://github.com/houbb/junitperf .
I am Lao Ma, and I look forward to seeing you again next time.
Of course, you may find that this approach is not elegant enough. Junit5 provides us with more powerful facilities; we will explain the junit5-based implementation in the next section.