Guide
In "Single Test from Head to Toe: Talking about Effective Unit Testing (Part 1)", we covered the test pyramid model, why we write unit tests, and the stages and metrics of unit testing. In this part we cover mocks, how to avoid abusing them, strategies for writing use cases, and more. Let's dive in.

7. Mocks: an unavoidable topic
test doubles
The concept of test doubles was first introduced in the book "xUnit Test Patterns". The "mock" we casually talk about is just one kind of double, and the one most easily confused with the stub. In the introduction to gomonkey in the previous section, you may have noticed that I never used mocks; everything was a stub. That's right: gomonkey is not a mock tool but an advanced stubbing (monkey-patching) tool, and it fits most of our usage scenarios.
There are five types of test doubles:
·Dummy Object
Passed to the caller but never actually used; dummies usually just fill parameter lists.
·Test Stub
Stubs provide canned answers to the calls made during a test, and usually do not respond to anything beyond what was programmed in for the test. Stubs may also record calls: an email gateway stub that remembers the messages it "sent", or simply how many it sent, is a good example. In short, a stub is generally an encapsulation standing in for a real object.
·Test Spy
A Test Spy acts like a spy: installed inside the SUT, it is responsible for passing the SUT's indirect outputs to the outside. Its defining trait is returning internal indirect outputs to the test case so the test case can verify them. A Test Spy only captures and reports internal information; it does not verify that information's correctness.
·Mock Object
An object pre-programmed with the calls it expects to receive and the responses it should return; the test then verifies that those expected calls actually happened.
·Fake Object
Fake objects have working implementations, but take shortcuts just so that other code can run normally. An in-memory database is a good example.
Stub and mock
Stubbing and mocking are the easiest to confuse, and out of habit we use "mock" to describe any ability to simulate a return value, so "mock" is the word everyone reaches for.
As I understand it, a stub can be seen as a subset of a mock, with the mock being the more powerful of the two:

· A mock can verify the execution process: whether a function was called, and how many times
· A mock can take effect conditionally, e.g. only when specific arguments are passed in
· A mock can specify the return result
· A mock that returns a fixed result for any arguments is equivalent to a stub
However, Go's mock tool gomock works only against interfaces, which does not fit our news and Penguin projects well, while gomonkey's stubbing covers most of our usage scenarios.

8. Don't abuse mocks
I put this part in its own chapter to underline its importance. It is worth reading Xiao Peng's "Seven Deadly Sins of Mock" on GitChat.

Two schools
Between 2004 and 2005, two big schools formed: the classic test-driven development school and the mockists (the mock extremists).

Mockists first: they advocate mocking every external function called by the function under test. In other words, focus only on the lines of the function under test; any call to another function is mocked and fed fake data.

The classic test-driven development school, by contrast, advocates not abusing mocks: avoid mocking whenever possible. The unit under test is not necessarily a single function; it may be several functions chained together, with mocks introduced only where necessary.

The two schools have argued for years; each theory has its strengths and weaknesses, and both coexist today. Whatever survives has its reasons. For example, mockists use so many mocks that the interfaces between functions go uncovered, and that part is very error-prone; the classic school chains so much together that its tests get questioned as integration tests.

In practice we don't have to force ourselves into either camp. Combine them: mock when needed, keep mocks to a minimum, and don't agonize over it.

When is mocking appropriate

An object is a good candidate for mocking if it has any of the following characteristics:
· The object produces non-deterministic results (such as the current time or current temperature)
· Some of the object's states are hard to create or reproduce (such as network errors or file I/O errors)
· The object's methods are too slow to execute (such as initializing a database before the test starts)
· The object does not exist yet, or its behavior may change (for example, driving the creation of new classes in test-driven development)
· The object must contain data or methods prepared specially for testing (the latter does not suit statically typed languages; popular mock frameworks cannot add new methods to an object, though stubbing can)
Therefore, don't abuse mocks (or stubs). When the method under test calls other methods, the first instinct should be to step into them and test them chained together, not to mock them out at the root.
9. Use case design methods
Recommended reading: "Think Like a Machine".

The article describes the fundamental idea behind program design: thinking in terms of inputs and outputs. When designing cases, if we want the most comprehensive design, we are essentially considering the full combination of all inputs and all outputs. Of course, that is far too time-consuming and usually impossible to implement, and it is not what we actually want either; we have to weigh cost against benefit. This is where theory meets practice: theory guides practice, and practice refines theory.

Let's talk about the theory first

  1. Following the article above, to consider inputs and outputs we must first know which things count as inputs and which as outputs.
  2. White-box & black-box design
    White-box methods:
    ·Logic coverage (statement, branch, condition, condition combination, etc.)
    ·Paths (all paths, minimal set of linearly independent paths)
    ·Loops: combine 5 scenarios (skip the loop, loop once, loop the maximum number of times, loop m times and hit, loop m times and miss)

    Black-box methods:
    ·Equivalence classes: valid and invalid (legal, illegal)
    ·Boundary method: [1, 10] ==> 0, 1, 2, 9, 10, 11 (a valuable supplement to equivalence classes)



  3. Combined application
    Full input/output coverage is hard to implement. Instead we lean on the white-box and black-box design methods worked out by the pioneers of the field; with some careful thought, you can see these methods as the practical, methodological embodiment of full input/output coverage.

So for the white-box & black-box use case design methods, I have personally practiced each one and weighed its pros and cons. Ranked by design coverage strength: condition combination > minimal linearly independent paths > condition > branch > statement.

The picture below is an early exercise from when I was first thinking about use case design; looking back now, it was over-designed.

In reality we worry about over-design, yet we still cannot give a definitive answer to "which method guarantees a foolproof design".
·Over-design also makes cases fragile
·With limited time, we seek to maximize the return

  1. Small and important functions (calculations, object manipulation): aim for comprehensive coverage
  2. Heavy logic, many lines of code: branch and statement coverage + loops + typical boundary handling (see the example GetUserGiftList)
  3. This leads to "implementation-based" vs "intent-based" design: the more the function under test relies on stubbed calls, the closer the test drifts toward "implementation-based" (the second mention of "intent-based")

10. Intent-based vs implementation-based
This topic matters a great deal.
Intent-based: think about what the function is ultimately supposed to do. Treat the function under test as a black box, consider its output, and pay no attention to how the inside is implemented, what temporary variables appear, how many loops run, or what conditions are checked.
Implementation-based: also considers the inputs and outputs, but additionally how the middle is implemented. Mocks are a good example: when writing a case we might use a mock to verify whether an external method was called inside the function, how many times, and in what order statements executed. But programs change faster than requirements, refactoring happens all the time, and the slightest change makes a large batch of cases fail. This is one of the situations called out in the "Seven Deadly Sins of Mock".
What we want is intent-based, far away from the implementation.
Combining hands-on experience, I summarize as follows:

  1. "Either write it well or don't write it." Cases are code too: they must be maintained and they cost effort, so write them properly rather than write too many. A pile of useless cases still has to be maintained; better to delete them.
  2. When you pick up a function, first ask yourself what the function is meant to achieve and what its final output is. Then ask what the risks are: which part of the logic are you least confident about, and where are errors most likely (calculations, complex conditions, the fate of some exceptional branch)? These are the points our cases should cover.
  3. Inline functions, plain getters/setters, and short functions with no logic: as long as you judge them risk-free, skip the case.
  4. Once you decide a case is worth writing, design the concrete use cases around the core aspects, branch/condition combinations and boundaries, and write them.
    You can study this in detail together with the case review records from several of our news project reviews.
    Let's look at a concrete case:
  5. Given this function, as a tester I first learn its purpose from the developer: add user gifts that match the format and the time window.
    Then read the code, understand its flow and the exceptional branches, and do a code review first.
    Then design cases to cover the necessary exceptional branches.
  6. The normal business flow is designed from the function's intent as described by the developer. The case is as follows:
    Function under test

    Unit test case for the normal path

func TestNum_CorrectRet(t *testing.T) {
	giftRecord := map[string]string{
		"1:1000": "10",
		"1:2001": "100",
		"1:999":  "20",
		"2":      "200",
		"a":      "30",
		"2:1001": "20",
		"2:999":  "200",
	}

	expectRet := map[int]int{
		1: 110,
		2: 20,
	}

	var s *redis.xxx
	patches := gomonkey.ApplyMethod(reflect.TypeOf(s), "Getxxx", func(_ *redis.xxx, _ string) (map[string]string, error) {
		return giftRecord, nil
	})
	defer patches.Reset()

	p := &StarData{xxx}
	userStarNum, err := p.GetNum(10000)

	assert.Nil(t, err)
	assert.JSONEq(t, Calorie.StructToString(expectRet), Calorie.StructToString(userStarNum))
}
Some students will ask: in the end, don't you still read the code? Don't you look at how the correct logic is handled before designing the case and constructing the data? And without reading the code, how would you know which exceptional branches to cover?

Answer: 1. As a tester writing cases for developers' code, I do need to know which exceptions are handled, but the cases are not limited to the few in the code; they should also include the exceptions as I understand them, and those should be reflected in the cases too. Our cases are emphatically not there to prove how the code is implemented! Unit testing often finds bugs precisely this way. In the future, developers will write their own unit tests, and they must know which exceptional branches of their own design need covering.

  2. Yes, I need to look at the normal flow of the code, but that does not mean copying the code down into a case. The case is actually designed through communication with the developer: understanding the structure of the input data, the output format, and the data validation and calculation process, then designing inputs and outputs from that.

11. Strategies for writing use cases

As for the order in which to write unit tests, we experimented extensively; there are basically three approaches:
·Independent atoms: the mockist approach, which we rejected above. Of course, a bottom-level function may have no external dependencies at all, in which case testing it on its own is enough.
·Top-down (the red line): test downward from the entry function. In practice I found this hard to execute, because from the entry point you have to work out what data and format every call along the way must return; stringing a case together is very difficult.
·Bottom-up (the yellow line): we found the entry function often has no logic of its own; it calls another function and passes the response back, so the entry function may not even need a case. Looking further down the call chain, and reviewing past online and offline bugs at each call, we found the problematic code usually sits at the bottom of the chain, especially where calculations and complex branches and loops are involved. Moreover, bottom-level functions tend to be more testable.

Therefore, considering both aspects, we choose bottom-up design when picking functions to write cases for:
1. Bottom-level functions are usually highly testable

  2. They contain much of the core logic, especially calculations, string assembly, and branching

12. Solving the testability problem: refactoring
An important reason unit tests can't be written is that the code is not testable. A function of eighty or ninety lines, or two or three hundred, is basically untestable, or at least "hard to test": there is too much logic inside. From the first line to the last it passes through all sorts of external dependencies via function calls, all sorts of if/for constructs, and all sorts of exception branch handling; the case might take several times as many lines of code as the original function.
So to promote unit testing, refactoring for testability is a must. As a bonus, refactoring makes the code structure clearer, more readable and maintainable, and problems become easier to find and locate.
Common problems: duplicated code, magic numbers, arrow-shaped code, and so on.
Recommended theory: the second edition of "Refactoring: Improving the Design of Existing Code" and "Clean Code".
I have written a separate article about refactoring.
We use the cyclomatic complexity and function length metrics from CodeCC (Tencent's code inspection center) to evaluate code structure quality; we study and practice together with the developers and keep producing results.
For arrow-shaped code, consider the following steps:
1. Prefer guard clauses: check the exceptional conditions first and return early
2. Decompose the conditional expressions
3. Extract the core part into its own function

13. Use case maintenance: readability, maintainability, and reliability
Use case design elements
·Test internal logic separately from external requests
·Strictly verify the inputs and outputs at service boundaries (interfaces)
·Use assertions instead of raw error-reporting functions
·Avoid random results
·Avoid asserting on time-dependent results
·Use setup and teardown at the right moments
·Keep test cases isolated from one another so they cannot affect each other
·Atomicity: every test has exactly two outcomes, success or failure
·Avoid logic in tests: no if, switch, for, while, and so on
·Don't wrap tests protectively in try...catch
·Test only one focus per use case
·Sleep as little as possible; padding out test duration is unhealthy
·The 3A strategy: Arrange, Act, Assert
Use case readability
·The title should state the intent clearly, e.g. Test + function name + condition + expected result. When a case fails, you can tell from the name which scenario failed instead of reading the code line by line. Someone else may maintain this test code later; we should make it easy for them to read
·The body of the test should be clear, following the 3A principle: Arrange, Act, and Assert in three parts. If the arrange (data preparation) part runs long, consider extracting it
·Make the assertion's intent obvious; consider turning magic numbers into variables with self-explanatory names
·Don't put too many asserts in one case; be specific
·Hold test code to the same readability standard as business code
Use case maintainability
·Watch for duplication: literal string duplication, structural duplication, semantic duplication
·Reject hard-coding
·Design by intent, so that one refactor of the business code does not break a whole batch of cases
·Watch for the various code smells; see the second edition of "Refactoring"
Use case trustworthiness
Unit tests are small and fast. The point is not to find one bug today, but to sit on the pipeline and catch whether each MR introduces a bug. When a unit test fails, the only reason should be a bug, not flaky external dependencies or implementation-based coupling. Tests that fail chronically lose their warning value; the story of "the boy who cried wolf" is a painful lesson. Watch out for:
·Cases that fail randomly rather than from defects in the code under test
·Cases that never fail
·Cases with no assert
·Misleadingly named cases
14. How unit testing was rolled out on the news team
We mentioned that our unit testing practice is divided into 4 stages, each with its own goals.
Stage 1: everyone writes tests; they don't have to be good yet
·Top-down promotion, from director to team leads, with strong and unhesitating support to get team members motivated
·Quickly settle on a unit test framework and become fluent with it
·Based on development needs, document how to use the framework in various scenarios, including assert, mock, table-driven tests, and so on
·Encapsulate http2WebContext to make it easy to construct context objects
·Run multiple training sessions covering unit test theory and framework usage
·Each team (client, access layer) appoints a unit test point of contact who "tastes the crab first": the person most familiar with the framework, who wrote the most cases early on
·Once the framework integration has been broken in, kick off with a pilot group of students for two consecutive iterations, each producing cases
·Add unit test data to each iteration's summary: team leads and directors watch these numbers closely, and growth in case count and lines of test code is explicitly encouraged

Stage 2: everyone writes tests, and writes them well
·Testers explore correct mock usage and sound case design ideas, share them with the team, and reach consensus through discussion
·Pair programming: pair with 2-3 developers each iteration, write cases together, and improve together.
The pairing here is flexible. For some developers, half a day explaining the framework and practicing together is enough to get them started. Some developers are assigned to the same requirements as a tester; after the tester writes the cases, the developer reviews and studies them and tries writing a first case. Some developers resist at first, then after watching for a while discover that writing unit tests is not that hard for others and benefits the team; they may even seek out a tester to teach them to write a case.
·Testers review cases submitted by developers and follow up on revisions and re-MRs
·For two consecutive iterations we invited senior colleagues to run case reviews, with very good results
·Analyze each iteration's unit test data, focusing on requirement coverage, personnel coverage, and case increments
·Leadership keeps encouraging and supporting unit testing
·A "Unit Test" field is added to each iteration's requirements, set after evaluation by the team lead. MRs without unit tests do not pass, and the tests themselves are reviewed

Stage 3: improving testability
·Testers and developers study the second edition of "Refactoring" together, with weekly sharing sessions
·A few key students refactor their own code first
·Testers enforce the discipline strictly: first make sure unit tests exist, then refactor in small steps, with tests guarding every step
·The pipeline's CodeCC scan requires cyclomatic complexity and function length to meet the standard, with no manual overrides allowed
Stage 4: TDD
·We don't expect every developer to reach TDD; the bar is still quite high, and it needs offline practice before being applied to business development
·Gradually move toward writing business code and test code in step, instead of finishing the business code and then backfilling cases
·Testers lead the way into TDD
15. Pipeline

Unit tests must run on the pipeline. Both the client and the backend have pipelines configured so that every push and MR runs the tests once and sends a report.
For Go unit tests, the news access-layer modules are built via a Makefile. Because some environment variables need to be exported, I integrated go test into the Makefile; running make test executes all test cases in the module.
GO = go

CGO_LDFLAGS = xxx
CGO_LDFLAGS += xxx
CGO_LDFLAGS += xxx
CGO_LDFLAGS += xxx

TARGET = aaa

export CGO_LDFLAGS

all: $(TARGET)

$(TARGET): main.go
	$(GO) build -o $@ $^

# Note: each recipe line runs in its own shell, so CFLAGS is set on the
# same line as the go test invocation rather than exported separately.
test:
	CFLAGS=-g $(GO) test $(M) -v -gcflags=all=-l -coverpkg=./... -coverprofile=test.out ./...

clean:
	rm -f $(TARGET)
Note: the method above only reports coverage for packages that already contain test files; packages with no tests get no coverage figure at all. You can create an empty test file in the root directory of each such package to work around this and obtain full code coverage.
//main_test.go
package main
 
import (
        "fmt"
        "testing"
)
 
func TestNothing(t *testing.T) {
        fmt.Println("ok")
}
Adding the pipeline step

cd ${WORKSPACE} takes you into the current workspace directory

export GOPATH=${WORKSPACE}/xxx
pwd
 
echo "====================work space"
echo ${WORKSPACE}
cd ${GOPATH}/src
for file in *
do
    if [ -d $file ]
    then
        if [[ "$file" == "a" ]] || [[ "$file" == "b" ]]  || [[ "$file" == "c" ]] || [[ "$file" == "d" ]]
        then
            echo $file
            echo ${GOPATH}"/src/"$file
            cp -r ${GOPATH}/src/tools/qatesting/main_test.go ${GOPATH}/src/$file"/."
            cd ${GOPATH}/src/$file
            make test
            cd ..
        fi
    fi
done
Appendix: References
·"Test-Driven Development: By Example"
·"The Art of Unit Testing"
·"Effective Unit Testing"
·"Refactoring: Improving the Design of Existing Code"
·"Working Effectively with Legacy Code"
·"The Three Practices of Test-Driven Development"
·"xUnit Test Patterns"
·"The Seven Deadly Sins of Mock"


Tencent WeTest

WeTest is a one-stop testing service platform officially produced by Tencent Games.