Author: Cheng Shengcong, CODING Product Manager
Automated testing is the foundation of continuous testing
In the high-frequency delivery scenarios of DevOps, teams easily fall into a dilemma of choosing between speed and quality: to embrace changing requirements, a shorter delivery cycle is adopted; frequent changes then introduce more problems, development and testing are delayed, and in the end test time is compressed and adequate testing becomes difficult. Faced with this situation, how can a team improve the efficiency of test execution? The first answer that comes to mind is automated testing: replacing repetitive manual testing with automated tests that execute faster and save testing time. In addition, because each automated run takes a relatively fixed amount of time and the program's preset test behavior brings high consistency, the stability and repeatability of testing reach a very high standard, making it possible to reproduce software defects quickly.
If, in the traditional waterfall model with relatively sufficient testing time, the greatest value of automated testing lay in saving labor costs in regression testing, then in the era of agile and DevOps its greater value lies in frequent verification and fast feedback. It is fair to say that automated testing is the foundation of continuous testing practice: only when the degree of automation is high enough can a team meet the high-frequency release requirements of continuous delivery.
Automated testing strategy
Automated testing is very valuable, but that does not mean we should invest in every type of automated testing without limit. Automated testing verifies whether established logic meets expectations, and in scenarios where requirements change frequently the maintenance cost of automation code is huge. We therefore need a suitable strategy to guide our automation investment: the pyramid model.
from "The Test Automation Pyramid: Your essential guide for test automation"
The test automation pyramid was first proposed by Mike Cohn in "Succeeding with Agile: Software Development Using Scrum" in 2009. At that time its three layers, from top to bottom, were UI, Service, and Unit. Later, as agile testing practices took hold, the industry gradually converged on user interface tests (UI Tests), interface/integration tests (API Tests), and unit tests (Unit Tests); with manual exploratory testing added at the top, this was further enriched into a test pyramid covering both automated and manual testing. This triangle, narrow at the top and wide at the bottom, provides a visual guide for how much automation to invest in each layer: unit tests at the bottom are the most numerous, interface tests sit in the middle, and UI tests are the fewest.
As the pyramid model shows, the unit tests and interface tests at the lower levels have advantages over the UI tests above them: because they are closer to the production code, they are easier to write and make it easier to locate code defects; because the test objects are smaller-grained and have fewer dependencies, execution is more efficient; and because the test objects are more stable, maintenance costs are lower. The advantage of tests closer to the top, of course, is that they better reflect business requirements, so their value is easier to see. In the DevOps era, balancing speed and quality, the interface/integration tests of the middle layer keep maintenance costs relatively low while still reflecting the value of business logic, so they should be our key area of investment, especially when automation is still in its infancy.
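To make the two lower layers of the pyramid concrete, here is a minimal sketch in Python using pytest (and the requests library for the API layer). The function `calculate_discount` and the endpoint URL are hypothetical stand-ins for illustration, not part of any project mentioned in this article.

```python
# A minimal sketch of the two lower pyramid layers, using pytest.
# `calculate_discount` and the API URL are hypothetical examples.

import requests


def calculate_discount(price: float, is_vip: bool) -> float:
    """Hypothetical production function under test."""
    return price * 0.8 if is_vip else price


# Unit test layer: small granularity, no external dependencies, runs in milliseconds.
def test_vip_discount():
    assert calculate_discount(100.0, is_vip=True) == 80.0


def test_regular_price_unchanged():
    assert calculate_discount(100.0, is_vip=False) == 100.0


# API test layer: verifies business logic through the service interface;
# slower than unit tests, but still far more stable than UI automation.
def test_order_api_returns_discounted_total():
    resp = requests.post(
        "https://example.com/api/orders",  # hypothetical endpoint
        json={"price": 100.0, "is_vip": True},
        timeout=5,
    )
    assert resp.status_code == 200
    assert resp.json()["total"] == 80.0
```

The unit tests run instantly and pinpoint defects in the code itself, while the API test exercises the same business rule through the service boundary, which is exactly the trade-off the pyramid illustrates.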
The test pyramid originated from agile practice. Using it as a reference to continuously adjust automated testing investment, a team's test cases and execution will gradually reach a good balance.
The value of precision testing
Industry survey reports in recent years show that, along with the adoption of DevOps, enterprises have continued to increase their investment in automated testing, and the direct result is more and more automated test code. But with this rapidly growing amount of automation code, does automation actually achieve the desired effect?
Judging from practical results, companies have not obtained the expected value from increased automated test coverage, because executing automation code is not as "free" as we imagined, usually for two reasons:
- Teams generally treat automated execution as part of CI and use it only for regression, but a time-consuming full regression limits how frequently it can be run;
- Building the pipeline and wiring up the matching tools still involves technical barriers, and the workload of these operations themselves makes it difficult for everyone to run automated tests at any time.
As a result, as automation coverage grows, regression testing takes longer and longer to execute, so the execution frequency of automated testing can only be reduced, and in the end the value of the automation code is questioned. In fact, besides improving automation coverage, we also need to abandon the idea that "the more cases each test run covers, the better": we should not make the test set excessively redundant just to feel "reassured", but should optimize test coverage based on business risk, aiming for a high input-output ratio within a limited scope and achieving precise testing.
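As an illustration of selecting a test subset by business risk rather than always running the full regression, the following sketch uses pytest markers to tag tests by business module. The marker names ("payment", "smoke") and the placeholder assertions are assumptions made up for this example.

```python
# Sketch: tag tests by business module so CI can run only the subset related
# to the changed, higher-risk area instead of the full regression.
# Marker names are illustrative; register them in pytest.ini to avoid warnings.

import pytest


@pytest.mark.payment
def test_refund_amount_matches_order():
    assert 100.0 - 100.0 == 0.0  # placeholder assertion for the example


@pytest.mark.smoke
def test_service_health_endpoint():
    assert True  # placeholder assertion for the example


# Run only the payment-related subset when verifying a payment change:
#   pytest -m payment
# Run the lightweight smoke subset on every commit:
#   pytest -m smoke
```

Selecting by marker (or by directory, keyword, or recently changed module) keeps each run short and tied to the risk being verified, which is the essence of precise testing.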
CODING makes test execution more "free"
To make test execution more "free", CODING has built the ability to execute automated tests in the cloud, hoping to solve the "last mile" problem of automated testing, so that:
- Each execution is free to flexibly choose its set of test cases;
- Everyone is free to execute tests.
Next, let's take a look at how tests can be performed "freely" in CODING.
- First, register the automation code in the CODING automated test case library: confirm that the automation code already exists in code hosting, register the existing automation code repository, and set the relevant language/framework.
- Analyze the list of test functions in the automation code repository, and establish the matching relationship between functional test cases and automation functions in case management to obtain the automation coverage rate. Matching automation code to functional cases helps us build an intuitive sense of the value the automation code produces, so the team knows exactly where it stands (a rough sketch of this coverage calculation appears after this list).
- Create a test plan whose execution method is automated execution, and select the test cases to include.
Choosing a suitable subset of automated tests requires business and testing knowledge: a fixed-scope full regression takes too long, while repeated mechanical smoke testing fails to reflect the testing of new requirements in time. This is exactly why increased automated test coverage often still fails to deliver the expected value. In CODING, you can flexibly select test case subsets to create test plans, precisely execute the corresponding subset of automation code, and get fast feedback on the results, removing concerns about how long the automation takes to run and maximizing the value of the automation code the team has produced.
- When the test plan is executed, the matched automated cases run in the background and the execution results of the corresponding functional cases are updated. After automated execution completes, you can manually verify the unexecuted or failed cases and update their status.
- Click to generate a test report, and the system automatically performs quality analysis and evaluation based on the collected data. The traceability of the testing makes the report more convincing and helps the team keep risk under control.
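To make the matching-and-coverage step above concrete, here is a rough sketch of how an automation coverage rate can be derived from the matching relationship between functional cases and automation functions. The case names and data structure are made up for illustration; they are not CODING's actual data model or API, and CODING computes this rate for you in case management.

```python
# Rough sketch of computing an automation coverage rate from the matching
# between functional test cases and automation functions.
# The data below is illustrative only.

functional_cases = ["login", "logout", "create_order", "refund", "export_report"]

# functional case -> matched automation test function (None = not yet automated)
matched = {
    "login": "test_login",
    "logout": "test_logout",
    "create_order": "test_create_order",
    "refund": None,
    "export_report": None,
}

automated = [case for case in functional_cases if matched.get(case)]
coverage_rate = len(automated) / len(functional_cases)

print(f"Automation coverage: {coverage_rate:.0%}")  # -> Automation coverage: 60%
```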