Introduction
During project testing, the Tencent WeTest platform offers a great deal of convenience to enterprises and developers, helping them discover potential product problems. This article shares some small tips for using WeTest to improve test efficiency, which I hope will be helpful. The author is Lian Linggan, a test development engineer in Tencent's IEG Growth Collaboration Department.
1. Automated Compatibility Testing
UI automation is an important means of improving test efficiency. Poco and Appium are commonly used automation frameworks with plenty of material available elsewhere, so I will not cover them here; instead, I will share some problems you may run into in practice.
1.1 True Pass and False Pass
After each automated compatibility test, the WeTest platform returns a test report covering detailed information from the run, such as device logs, screenshots, and performance data, for further analysis of the results. The overview covers results at the device level, including the number of devices that passed, the number that failed, and so on.
A failure at the device level is classified as a compatibility problem, such as a Crash or an ANR. Sometimes, however, the script does not finish executing and the run is still judged as passed; this is a false pass, and it distorts the final statistics.
Each compatibility test submission covers dozens or even hundreds of device models, and opening and verifying every model that passed takes far too much time. So what can help us quickly pick out the devices that did not actually finish executing the script?
We can start from a somewhat loose premise: for the same UI automation script, the total execution time should in theory be similar across models (operations such as wait_until_something_appear will vary by device), and WeTest takes screenshots at a roughly fixed interval. So we can roughly assume that the number of screenshots generated by a complete run fluctuates little from model to model, and we only need to check the passed cases whose screenshot counts deviate significantly.
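For example, here is a minimal sketch of this heuristic, assuming the per-model screenshot counts have already been extracted from the WeTest report (the input data, the find_suspect_passes helper, and the 0.3 tolerance are all illustrative assumptions, not WeTest-provided values):

import statistics

def find_suspect_passes(screenshot_counts, tolerance=0.3):
    # Flag "passed" models whose screenshot count deviates from the
    # median by more than the tolerance: likely incomplete runs.
    median = statistics.median(screenshot_counts.values())
    return [(model, count) for model, count in screenshot_counts.items()
            if abs(count - median) > median * tolerance]

counts = {"Xiaomi 12": 58, "HUAWEI P40": 61, "OPPO Reno8": 23, "vivo X80": 60}
print(find_suspect_passes(counts))  # [('OPPO Reno8', 23)] -> verify manually

The flagged devices are the ones worth opening and checking by hand.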
1.2 Airtest can locate a node, but script execution reports that Poco cannot find the UI control node
When using Poco for automated testing, you may sometimes find that a node Airtest can locate still triggers a node-not-found error during script execution. This is because Poco's UI tree has not been refreshed in time. It is recommended to increase the sleep interval appropriately and wait for the UI node tree to refresh.
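A minimal sketch of this retry-and-sleep pattern, assuming an Android device and Poco's standard exists()/click() calls (the safe_click helper and the "btn_start" node name are illustrative):

import time
from poco.drivers.android.uiautomation import AndroidUiautomationPoco

poco = AndroidUiautomationPoco()

def safe_click(node_name, retries=3, interval=2.0):
    # Retry a few times, sleeping between attempts so the Poco UI tree
    # has time to refresh before we give up on the node.
    for _ in range(retries):
        node = poco(node_name)
        if node.exists():
            node.click()
            return True
        time.sleep(interval)
    return False

safe_click("btn_start")

Alternatively, Poco's built-in wait_for_appearance(timeout=...) achieves a similar effect for a single node.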
1.3 Permission pop-up issues on some models
The pop-up window of some models may also cause some false pass problems, so if you submit a specific model test and the test app has permission to apply, you should pay attention to whether there will be a permission pop-up window. Since the pop-up window has no id for quick positioning, the polling node method is used here. Before starting the test case, the corresponding element is found and clicked by polling the ui node and the keyword matching "allow" or "deny".
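A minimal sketch of this polling approach, assuming Poco's textMatches regex selector (the keyword pattern, round count, and interval are illustrative and should be tuned per product):

import time

ALLOW_PATTERN = u".*(允许|始终允许|allow|Allow).*"

def dismiss_permission_popups(poco, rounds=5, interval=1.0):
    # Before the test case starts, poll the UI tree and click any
    # element whose text matches the "allow" keywords.
    for _ in range(rounds):
        button = poco(textMatches=ALLOW_PATTERN)
        if button.exists():
            button.click()
        time.sleep(interval)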
2. Log Test Automation
Log reporting is the cornerstone of product data analysis. While a product is running, information is reported at a large number of points, and checking and confirming each item one by one, whether by inspecting the reporting link or querying the data in the database, is a time-consuming, labor-intensive manual job that is prone to omissions. We therefore combined UI automation functional testing with log testing and built a log automation test module based on the BlueShield pipeline and WeTest.
Because the volume of reported logs is large, we need to precisely capture the logs generated by each operation and search the database using them as keywords. Here we write to a local file to save the state of the execution process and the keywords used to locate each search, for subsequent operation verification and lookup.
For passing data from the automated execution process to the verification module, we considered the following solutions:
1. Transmit data to the query/verification module through an MQ (Redis, Kafka, etc.);
2. Start a separate service to receive the data, passing it along through interface calls;
3. Couple the verification module into the log test script;
4. Record logs locally and pass information through the log file.
Considering decoupling and maintainability across functional modules, as well as development cost, we finally chose the most primitive form, file storage, which also makes it convenient to retrieve key information from WeTest when verifying problems. WeTest compatibility testing supports copying files back to the development machine; just add the following to the endTest.sh file:
cp XXX.log $UPLOADDIR/
This can then work together with the pipeline.
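As a minimal sketch of this file-based handoff (the file name, record fields, and helper names are illustrative assumptions): the test script appends one JSON line per reporting action, endTest.sh copies the file back as above, and the verification module replays the records to search the database:

import json
import time

STATE_FILE = "log_check_state.log"

def record_action(case_name, keyword, status="executed"):
    # Called from the UI automation script right after a reporting action;
    # the keyword is later used to locate this operation's logs in the db.
    record = {"case": case_name, "keyword": keyword,
              "status": status, "ts": int(time.time())}
    with open(STATE_FILE, "a") as f:
        f.write(json.dumps(record, ensure_ascii=False) + "\n")

def load_actions(path=STATE_FILE):
    # Called by the verification module on the development machine.
    with open(path) as f:
        return [json.loads(line) for line in f if line.strip()]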
3. Data-Driven Coverage Improvement
Submitting the full device list for every test leads to a long task cycle, inevitable queuing time for public cloud devices, and possibly extra testing costs for the product with each full submission.
We need to ask: how much traffic coverage does each additional test model bring us? Which models account for a larger share and therefore matter more? Does the existing WeTest device library cover the models our own product needs?
So, can we use the model distribution of users on the live network, overlaid with model and system distribution data, to run a more precisely targeted compatibility test?
First of all, the compatible models on the Tencent WeTest platform cover the mainstream top models on the market, but the long-tail effect among Android models is especially pronounced, and a product's user population determines which models are actually in use. From reported data, we obtained the user model traffic distribution of our product and compared it with the existing top-500 benchmark models on the public network.
Using the above data, we intersected the product's traffic top 50, 100, 300, and 500 models with WeTest's device list and obtained the proportion chart below.
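The intersection statistics themselves are straightforward to compute; a minimal sketch, assuming the ranked traffic list and the WeTest device list have been exported beforehand (names here are placeholders, not our real data):

def coverage_by_top_n(traffic_ranked_models, wetest_models,
                      tops=(50, 100, 300, 500)):
    # For each top-N slice of product traffic, compute the share of
    # models that the WeTest device library already covers.
    wetest = set(wetest_models)
    result = {}
    for n in tops:
        top_n = traffic_ranked_models[:n]
        hits = sum(1 for m in top_n if m in wetest)
        result[n] = hits / float(len(top_n))
    return result

The models falling outside the intersection are exactly the gaps addressed in points 1 and 3 below.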
With the above model data, more detailed test verification can be carried out during testing:
1. In compatibility testing, supplement the existing WeTest models with those from the product's traffic to improve compatibility test coverage.
2. For different test scenarios, select different groups of test machines to narrow the test scope and get results faster.
3. Purchase test machines in a targeted manner to make up for the few models unavailable on the WeTest platform, improving coverage of user models during testing.
The above is a small set of practices for using WeTest to improve test efficiency and coverage in product testing. Discussion is welcome!
If you have business needs, please contact customer service for details.
Customer Service Tel: 0755-86013388-22126
Customer Service QQ: 2746728701
Working hours: Monday to Friday, 9:30-18:30
About Tencent WeTest
Tencent WeTest is a one-stop quality open platform officially launched by Tencent. Drawing on more than ten years of quality management experience, it is committed to building quality standards and improving product quality. Tencent WeTest provides mobile developers with first-rate R&D tools such as compatibility testing, cloud real devices, performance testing, and security protection, offers solutions for more than 100 industries, covers the testing needs of products at every stage of R&D and operation, and has been proven across thousands of products. Its gold-medal expert team safeguards your product quality from 360 degrees through 5 major dimensions and 41 indicators.
Follow Tencent WeTest to learn more about popular test products.