
Background

Buried points (tracking events) have always played an important guiding role in the business development of e-commerce apps, but their complex data composition makes their stability hard to guarantee. A refactoring of business logic will often cause some buried point attributes, or even entire buried points, to be lost.

Precisely because a buried point has multiple data sources, conventional automated verification can only check whether the buried point exists and cannot match it against the business scenario. Manual inspection, while it does solve the problem of matching the scenario, struggles with complex checks such as associations between attributes of different buried points, or between attributes and interface fields.

This article introduces how we achieved automatic regression and multi-dimensional verification of buried points through teslaLab plus the buried point verification platform, and how this uncovered dozens of buried point problems over the last three versions.

Pain points

The data in a buried point mainly consists of three parts:

  • Data delivered by the interface (API responses)
  • User behavior
  • Locally running code

To verify whether a buried point conforms to its expected design, the key difficulty lies in fixing these three data sources so that the results generated on each run are consistent, or at least conform to predetermined rules. We therefore adopted the following schemes to pin down each data source at the moment a buried point is generated:

  • Interface data: interface mocking enables recording & playback of interface data
  • User behavior data: recording & playback of user behavior through Google's uiautomator
  • Local code: integrate with the AB platform and, by dynamically adjusting the AB configuration delivery interface, fix the client code paths related to AB experiments as far as possible

Overview

Glossary

  • teslaLab: a wireless testing tool (Mac/Windows application) that provides out-of-the-box special-purpose tools for wireless performance, experience, UI automation and other testing.
  • ubt-verification: the SDK that implements buried point data recording and interface mocking on the Android side.
  • Test scenario: a verification rule group corresponding to one test case, containing all the buried point verification rules for that case.
  • Mock record: the data of all interfaces requested while a case runs, used for interface mocking during automatic regression.
  • Test record: the product of an automated regression run, i.e. all the buried point data recorded during it.
  • Verification report: the product of combining a test record with a test scenario, containing the specific exception information for every abnormal buried point.
  • Task group: a teslaLab concept, i.e. a collection of test records, generally used to aggregate the test records of a business line.
  • Pass rate: the percentage of buried points (de-duplicated) in a test record that have no abnormality, out of the total number of buried points (de-duplicated).
  • Task group pass rate: the percentage of abnormality-free buried points (de-duplicated) across all test records in the task group, out of the total number of buried points (de-duplicated).
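
As a worked example with made-up numbers: if a test record contains 40 de-duplicated buried points and 30 of them hit no abnormal rule, that record's pass rate is 30 / 40 = 75%; if the records in a task group together contain 200 de-duplicated buried points of which 160 are abnormality-free, the task group pass rate is 160 / 200 = 80%.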

System Architecture Diagram

The buried point automatic verification platform mainly consists of three parts:

  • Automation tool: teslaLab
  • Mobile SDK: ubt-verification for Android and Kylin for iOS
  • Verification platform: the buried point verification module of the wireless R&D platform

Flow chart

The following figure shows the process of verifying a single test case for the first time:

It can be divided into three main stages:

  1. Preparation stage

  • Record or write automation scripts with teslaLab
  • Manually execute the script on the script editing page with the Record Mock function selected, which records the interface data and creates a Mock record
  • Create a new running task and select the previously created script and Mock record. Executing the task yields a set of buried point data (i.e. a test record).
  • Depending on how many buried points need to be verified, either configure the corresponding basic rules (i.e. the test scenario) manually (not recommended, too laborious) or generate them automatically (recommended) from the test record obtained in the previous step.
  • At this point, the three prerequisites for automatic buried point verification (automation script, Mock record, test scenario) have all been collected.

  2. Run stage

  • The new automated task from the preparation stage can be executed manually, or periodically by configuring a Cron expression; each run produces a test record and a verification report.

  3. Acceptance stage

  • The verification rules generated automatically from the predetermined strategy and the test record may not match expectations, so the problems listed in the generated verification report need to be checked manually for validity.

    • If a problem is invalid, the rule can be corrected quickly via the repair button on the report details page, or corrected manually on the test scenario details page.
    • If it is valid, the affected buried points can be selected and a fault report submitted to the buried point management platform. After a successful submission, Feishu notifies the developer and tester most recently responsible for the buried point; once the problem is fixed, tested, and accepted, the fault report can be closed.
  • At this point, the complete verification process for one test case ends.

Details

Next, the detailed processes of these three modules will be introduced one by one in the order of the verification process.

Automation module

Buried point automation relies on the automation module of "Tesla-lab": the recording of Mock data and of UI automation scripts relies on the "Tesla-lab" local editor, the timed execution of tasks and task groups relies on the "Tesla-lab" task scheduling module, and the actual execution of tasks relies on the "Tesla-lab" task executor.

"Tesla-lab" automation flow chart

"Tesla-lab" automation: overall architecture and implementation

The implementation is divided into three modules:

  • Client side

    • Editor side, responsible for script editing and data recording
    • The TeslaLab client, responsible for debugging/executing local scripts and tasks and for deploying scheduled tasks

  • Core: local Java agent

    • Local core service encapsulating Quartz-based tasks/task groups
    • Local device management

      • iOS based on tidevice/WDA
      • Android based on uiautomator2

    • The actual executor of automation tasks

      • pytest-based script executor

  • TeslaLab services

    • Script management
    • Task/task group management
    • Remote device management and synchronization

"Tesla-lab" automated recording editor

Editor/Recorder (a web page embedded in the client)

  • Script recording

    • Supports writing scripts by hand or generating the corresponding code by clicking on the page, with run/pause for debugging

  • Data recording: communicates with the mobile SDK over a local socket to trigger the SDK's interface/buried point recording and playback flows

    • Mock interface data recording

"Tesla-lab" automated task executor

  • Playback task creation

Creating a playback task creates a new local task through the local socket port xxxx and also creates a remote task, so that tasks can be automatically synchronized to any machine with the Lab installed. TeslaLab's core local tasks and data interact with the server through the agent module.

  • Playback task run

A task supports immediate execution or timed execution via a Cron expression. Tasks have a protection mechanism: on an application crash or ANR (the script stalling beyond a certain time) it tries to restart the task, and if the total time spent on a single task execution exceeds a certain limit it calls the end-recording API early and skips the task.

  • Playback task results

After execution completes, you can enable Feishu notifications in the task configuration to receive the execution result of the task group or task, including execution details. You can also expand a task directly on the task list page to view its historical execution records, including screenshots and logcat logs captured during execution.

  1. "Best Practice 1": associate a script with a task. Each script is associated with one Mock record and one test scenario.

Associate Mock records and verification rules; fuzzy search is supported.

  2. "Best Practice 2": make reasonable use of task groups, which can be created per product line or business line.

  • The API entry point is unified through the TeslaLabEngine singleton. TeslaLabEngine can also be decoupled as an independent Jar package with its own interface and front end, and used on its own.
  3. "Best Practice 3": make reasonable use of Feishu notifications when debugging tasks/task groups.

Example report:

  4. "Best Practice 4": use Cron expressions to execute task groups in the early morning every day, so as to continuously track the stability of automated runs. At present, all tasks of an application are placed in the same task group, which runs at 0:00 every day; the task group pass rate is that day's pass rate indicator and is used to monitor the stability of the automated runs.
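
For example, assuming Quartz-style Cron expressions (the scheduling module described earlier is Quartz-based), an expression such as `0 0 0 * * ?` fires the task group at 00:00 every day.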

"Tesla-lab" evolution direction

In the future, we plan to rely on the cloud real-device platform to provide local debugging together with cloud deployment and execution, making the process more automated and intelligent, and to combine this with the client's rich exception collection strategies so that buried point defects and client defects are discovered as early as possible.

Data Collection SDK

The mobile data collection SDKs are invoked entirely by teslaLab through socket communication and have no interactive UI.

On the Android side it is an independent SDK, ubt-verification; on the iOS side it is implemented inside the offline performance testing tool Kylin.

Overall architecture diagram

Flow chart

The mobile SDK's functionality mainly covers two flows: recording Mock data and recording buried point data.

Preparation stage
  • A service is started on the device through the third-party tool NanoHTTPD, and the mock recording flow begins after the call from teslaLab is received
  • Parameters such as the record id are generated locally, cached together with the teslaLab parameters, and the application is then restarted

Recording Mock data
  • After the restart, the automation script is executed, and the request & response details of every network request captured by the interceptor are reported to the backend
  • After the script finishes, teslaLab calls the relevant interface to stop the recording flow on the device, and the SDK calls the Mock record creation interface

Recording buried point data (i.e. automatic playback)
  • After the restart, the automation script is executed; for every network request, once the response is received, the backend interface is called to obtain the mock data matching the request and substitute it
  • All monitored buried point data is stored asynchronously in the local database, while a periodically executed task reads the buried point data from the database in batches, consolidates it, and reports it to the backend
  • After the script finishes, teslaLab calls the relevant interface to stop the recording flow on the device, and the SDK calls the test record creation interface
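
To make the batched reporting step above concrete, here is a minimal sketch of the idea, assuming a hypothetical BuriedPointDao over the local sqlite table and a hypothetical BatchUploader for the backend call (neither name is the SDK's actual API):

```kotlin
import java.util.concurrent.Executors
import java.util.concurrent.TimeUnit

// Illustrative event record persisted in the local sqlite table.
data class BuriedPointRecord(val id: Long, val eventName: String, val propertiesJson: String)

// Hypothetical DAO over the local sqlite store; names are illustrative only.
interface BuriedPointDao {
    fun loadOldest(limit: Int): List<BuriedPointRecord>
    fun delete(records: List<BuriedPointRecord>)
}

// Hypothetical uploader; in the real SDK this would POST to the verification backend.
interface BatchUploader {
    fun upload(batch: List<BuriedPointRecord>): Boolean   // returns true on a 2xx response
}

// Periodically drains buried point records from the local database and
// reports them to the backend in batches, as described above.
class BatchReporter(private val dao: BuriedPointDao, private val uploader: BatchUploader) {
    private val scheduler = Executors.newSingleThreadScheduledExecutor()

    fun start(intervalSeconds: Long = 5, batchSize: Int = 50) {
        scheduler.scheduleWithFixedDelay({
            val batch = dao.loadOldest(batchSize)
            if (batch.isNotEmpty() && uploader.upload(batch)) {
                dao.delete(batch)   // only remove records after a successful upload
            }
        }, intervalSeconds, intervalSeconds, TimeUnit.SECONDS)
    }

    fun stop() = scheduler.shutdown()
}
```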

Buried point data acquisition module

Android implementation

It mainly builds a proxy around the Sensors (Shence) SDK through reflection and implements a callback in the proxy that fires when a buried point is reported. In the callback, the buried point data is copied via EventBus to the manager instance responsible for buried point reporting and then stored in sqlite. In the later reporting stage, the buried point data is extracted from sqlite in batches and reported to the backend through the interface.
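
As an illustration of this hook-and-forward pattern (not the SDK's actual implementation; a hypothetical ITracker interface stands in for the Sensors SDK's internals), a dynamic proxy can forward every track call unchanged while broadcasting a copy of the event over EventBus:

```kotlin
import java.lang.reflect.InvocationHandler
import java.lang.reflect.Method
import java.lang.reflect.Proxy
import org.greenrobot.eventbus.EventBus
import org.json.JSONObject

// Hypothetical tracking interface standing in for the analytics SDK's internal tracker.
interface ITracker {
    fun track(eventName: String, properties: JSONObject)
}

// Event carried over EventBus to the component that persists buried points into sqlite.
data class TrackedEvent(val eventName: String, val properties: JSONObject)

// Wraps the real tracker with a dynamic proxy: every track() call is forwarded
// unchanged, and a copy of the event is broadcast for recording.
fun hookTracker(real: ITracker): ITracker {
    val handler = InvocationHandler { _, method: Method, args: Array<Any?>? ->
        if (method.name == "track" && args != null && args.size == 2) {
            val name = args[0] as String
            val props = args[1] as JSONObject
            // Copy the event so the recording path cannot mutate the original properties.
            EventBus.getDefault().post(TrackedEvent(name, JSONObject(props.toString())))
        }
        method.invoke(real, *(args ?: emptyArray()))   // always delegate to the real SDK
    }
    return Proxy.newProxyInstance(
        ITracker::class.java.classLoader, arrayOf(ITracker::class.java), handler
    ) as ITracker
}
```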

iOS implementation

The Sensors SDK posts a notification whenever it tracks an event, so on iOS the tracked data can be obtained by listening for the Sensors track notification.

Interface Mock Module

Android implementation

By instrumenting the build method of the OkHttpClient Builder and adding a custom interceptor via addInterceptor, the SDK records interface data and mocks interfaces. We decide whether the current OkHttpClient is the target client to instrument by checking whether it contains a business-defined interceptor.

The interface data can be recorded or mocked in the interceptor.
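
A minimal sketch of what such a record/replay interceptor could look like, assuming OkHttp 3.x-style APIs and a hypothetical MockStore for the recorded responses (the real ubt-verification implementation is not shown here):

```kotlin
import okhttp3.Interceptor
import okhttp3.MediaType
import okhttp3.Protocol
import okhttp3.Response
import okhttp3.ResponseBody

// Hypothetical store that keeps recorded responses keyed by URL (and, for GET, query params).
interface MockStore {
    fun record(url: String, body: String)
    fun lookup(url: String): String?          // null when no mock data matches
}

enum class Mode { RECORD, REPLAY }

// Interceptor added via OkHttpClient.Builder.addInterceptor(); in RECORD mode it copies
// the real response body to the store, in REPLAY mode it returns the recorded body instead.
class RecordReplayInterceptor(private val store: MockStore, private val mode: Mode) : Interceptor {
    override fun intercept(chain: Interceptor.Chain): Response {
        val request = chain.request()
        val url = request.url().toString()

        if (mode == Mode.REPLAY) {
            store.lookup(url)?.let { mocked ->
                // Build a synthetic 200 response carrying the recorded body.
                return Response.Builder()
                    .request(request)
                    .protocol(Protocol.HTTP_1_1)
                    .code(200)
                    .message("OK (mocked)")
                    .body(ResponseBody.create(MediaType.parse("application/json"), mocked))
                    .build()
            }
        }

        val response = chain.proceed(request)
        if (mode == Mode.RECORD) {
            // peekBody() copies up to the given byte count without consuming the real body.
            store.record(url, response.peekBody(Long.MAX_VALUE).string())
        }
        return response
    }
}
```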

iOS implementation

The network is intercepted through NSURLProtocol. When a request is initiated, the mock service interface is queried to check whether mock data exists; if mock data is returned, a response is constructed from it and returned directly, otherwise the request proceeds normally.

Socket communication module

Android implementation

By introducing the third-party tool NanoHTTPD and starting a service on the device, the Mac client can directly call GET/POST interfaces implemented on the device via IP + port, thereby interacting with the automation tool teslaLab.
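
For illustration, a minimal on-device control service built on NanoHTTPD might look like the following; the port and endpoint paths are made up, not the SDK's actual ones:

```kotlin
import fi.iki.elonen.NanoHTTPD

// Minimal on-device control service: teslaLab on the desktop calls these endpoints
// over IP + port to start/stop recording. Paths and port are illustrative only.
class ControlServer(port: Int = 8123) : NanoHTTPD(port) {

    override fun serve(session: IHTTPSession): Response {
        return when (session.uri) {
            "/startRecord" ->
                // start the mock/buried point recording flow here
                NanoHTTPD.newFixedLengthResponse(Response.Status.OK, "application/json", """{"code":0}""")
            "/stopRecord" ->
                // stop recording and trigger creation of the mock/test record
                NanoHTTPD.newFixedLengthResponse(Response.Status.OK, "application/json", """{"code":0}""")
            else ->
                NanoHTTPD.newFixedLengthResponse(Response.Status.NOT_FOUND, "text/plain", "unknown endpoint")
        }
    }
}

// Usage: ControlServer().start(NanoHTTPD.SOCKET_READ_TIMEOUT, false)
```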

iOS implementation

The service is started on the iOS client by introducing the third-party tool GCDAsync; the interaction logic is the same as on Android.

Stability Monitoring Module

  • The total number of events recorded locally, whether the mock was successfully enabled, the md5 value of the package / package id, and so on are uploaded to the backend along with the buried points; they are used to monitor the stability of automated regression runs and to troubleshoot once problems are found.

Challenges encountered

After running stably for one version, a large-scale script execution failure suddenly occurred on the Android side. Screenshots of the runs showed that the data structure of the response of the business details interface had changed, so the Mock data recorded in the earlier stage could no longer be parsed; the client automatically downgraded and switched to CDN data, leaving page elements that the scripts could not recognize.

We had planned for this situation early on and had implemented automatic detection of interface data structure changes with repair reminders, but the business details interface here did not trigger any reminder. The investigation eventually found that the Mock data we had recorded was the response encrypted by the risk-control SDK:

At the beginning, in order to record the complete request and response data, we placed our interceptor last in the interceptor list. In actual operation, however, the application layer does not need the extra parameters added by those interceptors, so we moved our interceptor to the first position and re-recorded the Mock records involving the affected interfaces. After this fix the scripts ran normally again.

Verification platform

Overall architecture diagram

Wireless R&D Platform:

  • Test scenario module
  • Mock record module
  • Test record module
  • Verification report module

Buried point management platform

  • Buried point fault reporting module

Flow chart

Buried point verification & acceptance process
  1. A test case usually contains dozens of buried points, so when building a scenario it is recommended to generate the default rules directly from the test record.
  2. The pass rate of the verification report produced from the generated rules is generally 50~70%. This is because whether some buried point attributes are mandatory is not fixed and usually differs between scenarios, whereas the generated rules are based on the tracking information returned by the buried point management platform. This part of the rules therefore needs to be corrected, either with the one-click repair shortcut on the verification report details page or by manual adjustment.
  3. After correction and re-verification, the pass rate of the resulting report usually rises to about 75%; the remaining anomalies are usually caused by random factors (such as timestamps or page browsing duration). Few problems remain at this point, so you can simply try to reproduce them manually; once confirmed, select the buried points and submit a fault report to the buried point management platform.
  4. The buried point management platform sends a Feishu notification to the developer most recently responsible for the buried point. After the problem is fixed, Feishu notifies the most recently responsible tester. Once the tester and the fault report initiator accept the fix, the fault reporting process ends. If a problem stays in the fix or acceptance stage beyond a certain time, it is escalated and Feishu notifies the leader.
Best Practices

Based on user feedback and summaries in recent versions, the current best practices are:

  1. Basic rules are automatically generated from test records and used to generate verification reports.
  2. Based on the report results, use the shortcuts on the report details page to quickly correct the mandatory-attribute rules, then trigger verification again to get a new report.
  3. Try to manually reproduce the remaining problems in the report, and submit a fault report directly to the buried point management platform after confirmation.

Test scenario module

In order to verify buried points comprehensively and accurately, we divide the verification rules into a buried point dimension and an attribute dimension according to the content of a buried point:

  • Buried point dimension

    • The number of buried points
    • Timing relationship between buried points
    • Whether it is reported repeatedly (reported multiple times within 1s)

  • Attribute dimension

    • Whether it is mandatory
    • Whether it is nullable
    • Value range

      • Enum type
      • JSON arrays with multiple levels of nesting
      • Numeric interval

    • Mapping between attribute values (e.g. tabId and tabName)
    • Regular expression check
    • Mapping to interface data in the Mock record (for example, the values of some attributes of the diamond-position buried point come from data delivered by the interface)

Each rule can be set to strong or weak verification; the only difference is that weak verification does not affect the final pass rate indicator.
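
To make the two rule dimensions concrete, the following sketch shows one possible way to model a rule group; the field names are illustrative and not the platform's actual schema:

```kotlin
// Illustrative model of a verification rule group for one test scenario.
data class ScenarioRules(
    val caseName: String,
    val eventRules: List<EventRule>
)

// Buried point dimension: count, ordering, duplicate reporting.
data class EventRule(
    val eventName: String,
    val expectedCount: Int?,                 // null = do not check the count
    val mustFollowEvent: String?,            // timing relationship with another buried point
    val duplicateWindowMs: Long = 1_000,     // reports within this window count as duplicates
    val attributeRules: List<AttributeRule>,
    val strongCheck: Boolean = true          // weak checks do not affect the pass rate
)

// Attribute dimension: required/nullable, value range, mappings, regex.
data class AttributeRule(
    val name: String,
    val required: Boolean,
    val nullable: Boolean,
    val enumValues: List<String>? = null,               // enum-type value range
    val numericRange: ClosedRange<Double>? = null,      // numeric interval
    val regex: String? = null,                          // regular expression check
    val mappedAttribute: Pair<String, String>? = null,  // e.g. tabId -> tabName
    val mockFieldPath: String? = null                   // mapping to a field in the Mock record
)
```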

Challenges encountered
  1. Take a test case that runs for 1 minute: assume about 100 buried points are recorded, about 40 after de-duplication, and each buried point has on average about 7 attributes to verify. That means each case would need roughly 40 × 7 × 3 × 3 = 2,520 manual clicks to finish configuring the basic rules, which is clearly unacceptable. We therefore implemented automatic generation of default rules from the recorded buried point data according to a predetermined strategy, so that as long as the recorded buried point data is correct, the most basic rules (the number of buried points, whether an attribute is mandatory, whether it is nullable, and its value range) can be built automatically.

  2. Some exposure buried points on the home page are reported in most cases, and their rules are basically identical. To avoid wasting effort on repeated configuration, we implemented a global scenario that is merged with per-case rules according to configuration priority, together with the general interface mapping rule configuration already used for the home page exposure buried points.

  3. The values of some buried point attributes are the result of directly merging data issued by the algorithm with interface data. This type of data is generally a JSON array with deep nesting, and since the elements in the array are unordered, an enum type cannot express its value range. We therefore recursively reduced the problem to comparing JSON objects to find differences, and implemented rule extraction, rule verification, and rule repair for such deeply nested JSON arrays (a minimal sketch of this kind of recursive comparison follows this list).
  4. The detailed buried point data all comes from the buried point management platform, and the automatically generated rules follow that platform's annotation of whether each buried point attribute is mandatory. However, for most buried point attributes, being mandatory is not absolute: depending on the application scenario an attribute may or may not need to be reported, which introduced a lot of noise into our verification reports. We therefore implemented quick rule repair and interface mapping rules in the report. Besides quickly eliminating noise, this also established the mapping between buried point data and interface data, verifying the buried point data against its source. Although the overall report pass rate did not improve greatly after this went live, because of the business details interface problem, dozens of buried point problems were found in a single version afterwards.
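
A minimal sketch of the recursive comparison mentioned in point 3, treating arrays as unordered and reducing the problem to object-level diffs (illustrative only, not the platform's actual diff implementation):

```kotlin
import org.json.JSONArray
import org.json.JSONObject

// Recursively collects the differences between an expected and an actual JSON value.
// Arrays are treated as unordered: each expected element must match some actual element.
fun diffJson(expected: Any?, actual: Any?, path: String = "$", out: MutableList<String> = mutableListOf()): List<String> {
    when {
        expected is JSONObject && actual is JSONObject -> {
            for (key in expected.keys()) {
                if (!actual.has(key)) out.add("$path.$key missing")
                else diffJson(expected.get(key), actual.get(key), "$path.$key", out)
            }
        }
        expected is JSONArray && actual is JSONArray -> {
            // Unordered match: every expected element must be equal (empty deep diff)
            // to at least one element of the actual array.
            for (i in 0 until expected.length()) {
                val candidate = expected.get(i)
                val matched = (0 until actual.length()).any { j ->
                    diffJson(candidate, actual.get(j), "$path[$i]", mutableListOf()).isEmpty()
                }
                if (!matched) out.add("$path[$i] has no matching element")
            }
        }
        expected != actual -> out.add("$path expected=$expected actual=$actual")
    }
    return out
}
```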

Mock record module

  • The Mock record is created by the buried point data collection SDK. At creation time it checks, as a gating condition, whether the required interfaces (the AB configuration delivery interface and some home page interfaces) were recorded, so that later reports are not produced from dirty data.

  • To control the actual mocked data flexibly, every network request is recorded and per-request switches are supported. For GET requests, the corresponding response can also be matched automatically according to requestParams (see the matching sketch after this list).

  • To help the automation tool deal with pop-up windows, we also implemented a global mock that uniformly mocks the red envelope pop-up interface and returns an empty result.

  • As an important part of buried point data, interface data often determines the stability of automated regression. We therefore also implemented automatic detection of interface changes: by comparing the keys and values of the real interface data with the mock data, differences in data structure can be found, a change record is generated, and a revision is suggested.

Applying and withdrawing such changes is also supported.
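
As an illustration of the GET-request matching mentioned above, a simple strategy is to score recorded calls by how many query parameters agree with the live request; the types and scoring here are assumptions, not the SDK's actual logic:

```kotlin
// Illustrative GET-request matcher: picks the recorded response whose query parameters
// best overlap the live request's parameters.
data class RecordedCall(val path: String, val queryParams: Map<String, String>, val responseBody: String)

fun matchRecordedResponse(path: String, liveParams: Map<String, String>, records: List<RecordedCall>): String? {
    return records
        .filter { it.path == path }
        .maxByOrNull { record ->
            // score = number of query parameters that agree between the live and recorded request
            record.queryParams.count { (k, v) -> liveParams[k] == v }
        }
        ?.responseBody
}
```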

Challenges encountered
  1. In the early stage of recording Mock data, we found that some scripts containing product search operations would fail to execute. The investigation showed that the experiment group code on the device had been adjusted while the AB configuration delivery interface was fixed by the mock, so the originally recorded scripts could not adapt to the code of the new experiment group. To address this, we introduced event change monitoring: all change events are filtered out of the AB management system, and the AB experiment id in the event details is then used to fetch the latest configuration from the AB management system and automatically generate suggested batch updates to the mock data.

Test record module

  • Test record list page

The test record is created by the mobile data collection SDK; the monitoring information uploaded from the device is compared against it, and Feishu raises an alarm if there is an abnormality.

A red background indicates that the record is abnormal and completely unreliable:

  • Mock not enabled: the interface for obtaining mock data did not return 200, meaning the mock was not enabled or was invalid.
  • Inconsistent buried point counts: the number of buried points counted by the backend through the reporting interface differs from the number counted at the Sensors SDK callback on the device, meaning the SDK missed reports or the reporting interface was abnormal.

A light yellow background indicates that the record may be abnormal and needs to be checked:

  • The ratio of the number of buried points in the test record to the recording duration is abnormal. The normal ratio is greater than 1:1; if the number of buried points is smaller than the recording duration in seconds, it is usually a script abnormality and needs manual inspection.

  • Test record details page

Verification report module

  • The report overview shows the basic information of the report:

  • The report details are aggregated by buried point, showing each buried point's abnormality information together with its original data.

  • After manually reproducing a buried point abnormality from the report, you can select the abnormal buried points and submit a fault report to the buried point management platform. The fault reporting flow is the same as the online one, including acceptance and related functions, and it automatically escalates and sends a Feishu notification to the leader after a timeout.

Summary

The pain points encountered by the buried point verification platform at each stage are really a microcosm of the problem of "how to fix the buried point data sources". Rapid business iteration makes the assets accumulated during verification, the Mock records and verification rules, very fragile; even though we update these data manually or automatically through various means, they will inevitably become invalid in the end. Therefore, in the future, while distilling assets that remain valid in the long term, we will also focus on making the construction of verification rules and Mock data more automated, to reduce the labor cost of rebuilding the data after it becomes invalid.

Text / ZHOUXIAOFENG

Follow Dewu Technology and be the most fashionable person in tech!

