Preface

Recently I noticed that the interface testing feature of Alibaba Cloud Performance Testing Service (PTS) has entered a free public beta. To share how to achieve efficient interface testing, this article collects some of my methods and experience in the field of interface testing, which I hope we can discuss and build on together. The content includes, but is not limited to:

  • Introduction to Server Interface Testing
  • Introduction to Interface Test Automation
  • Interface Test Automation Practice
  • Thinking and Summary on Interface Test Automation

Introduction to Server Interface Testing

What is server side?

Generally speaking, the server side refers to everything that provides data services behind the Internet-facing functions users touch in an app or on a PC. Taking the product link of the Tmall Genie smart speaker series as an example, the server side is everything behind the gateway, gateway included.

[Figure: the request path of the Tmall Genie smart speaker product; the server side is everything from the gateway onward]

What is an interface?

Formally, an interface is a shared boundary across which two independent components of a computer system exchange information. In layman's terms, it is the most common way for a server to provide data services to the outside world. The server providing those services can be a large or small organization that does many things internally, but ultimately it exists to serve an app or other callers, so it sends out several "representatives": for example, API 1 provides user information, API 2 provides device information, API 3 provides playback audio information, and so on. At the same time, the server stipulates the "passwords" for talking to each representative: param1, param2, ... for API 1, and param3, param4, ... for API 2. These params are the interface parameters, which tell the server what service you want and what your requirements are. An interface generally consists of three parts: protocol, address and parameters (for example, a hypothetical device-info interface might be HTTPS as the protocol, /device/info as the address, and a uuid as the parameter).

What is interface testing?

Generally speaking, interface testing is functional testing of a given interface: when different parameters are passed in, is the return value of the interface correct? The figure below is the classic test pyramid model.

[Figure: the test pyramid]

In this model, the lower the layer, the larger its share: in testing a product, unit tests account for the highest proportion, followed by interface tests and UI automation tests, with manual testing at the top. Server interface testing sits in the middle, linking the layers above and below, which shows its importance.

Why do interface testing?

There are generally the following reasons for interface testing:

  • The interface is the most common way for the server to provide data services to the outside world, and most of what an interface carries is data. By comparing data we can infer the system's logic; testing the interface is really testing that logic.
  • Interface tests are relatively easy to automate and to integrate continuously, and they are more stable than UI automation. This reduces the labor and time of manual regression testing, shortens the test cycle, and supports rapid back-end releases.

How to do interface testing?

As mentioned earlier, an interface is composed of an interface address, a request protocol, request parameters and an expected result. The general steps for testing an interface are: send the request -> parse the result -> verify the result.

Simply put, interface testing means consulting the interface document, calling the interface, and checking whether the returned result is consistent with the document; in addition, it covers the interface's handling of abnormal logic, such as illegal parameters or boundary values.
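
As a minimal sketch of the send -> parse -> verify loop (assuming Java 11+, TestNG on the classpath, and using the public echo service httpbin.org as a stand-in for a real interface under test):

 import static org.testng.Assert.assertEquals;
import static org.testng.Assert.assertTrue;

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

import org.testng.annotations.Test;

public class EchoInterfaceTest {

    @Test
    public void sendParseVerify() throws Exception {
        // 1. Send the request (protocol + address + parameters)
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(
            URI.create("https://httpbin.org/get?foo=bar")).GET().build();
        HttpResponse<String> response =
            client.send(request, HttpResponse.BodyHandlers.ofString());

        // 2. Parse the result (a real test would parse the JSON body)
        int status = response.statusCode();
        String body = response.body();

        // 3. Verify the result against the interface document
        assertEquals(status, 200);
        assertTrue(body.contains("foo"));  // the echoed parameter appears in the body
    }
}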

Going deeper, the focus of interface testing is:

1. Whether the data logic of the interface is correct. We need to fully understand what the interface does, what data logic sits inside it, and what information or resources it exchanges with its upstream and downstream, not just the parameter call and the data the program visibly returns. In plain terms: know what this interface is for, where it is used and what should happen on every call, and then check whether that change actually occurred.

2. The interface's handling of abnormal parameters, and its fault tolerance toward upstream and downstream services. As shown in the figure below, the interface under test, A, depends on upstream service A, so whether it remains fault tolerant when service A is abnormal is very important; otherwise the service may hang or crash. In addition, as a provider, interface B should be fully compatible with different usage scenarios and with callers of different versions: it must not, in meeting the needs of service E, become unusable for its other consumers. The general principle is: "assume the upstream is unreliable, stay compatible with the downstream".

[Figure: the interface under test and its upstream/downstream service dependencies]
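
To make "unreliable upstream" concrete, here is a minimal, hypothetical sketch in which the interface under test bounds its upstream call and degrades gracefully instead of hanging; all class and method names are invented for illustration:

 /**
 * Minimal fault-tolerance sketch: call upstream service A with a bounded
 * timeout and return a degraded default when the upstream is abnormal.
 * All names here are invented for illustration.
 */
public class DeviceInfoService {

    /** Hypothetical client for upstream service A. */
    interface UpstreamClient {
        String getDeviceInfo(String uuid, long timeoutMillis) throws Exception;
    }

    private final UpstreamClient upstream;

    public DeviceInfoService(UpstreamClient upstream) {
        this.upstream = upstream;
    }

    public String queryDeviceInfo(String uuid) {
        try {
            // Bound the call so a hung upstream cannot hang this interface
            return upstream.getDeviceInfo(uuid, 500L);
        } catch (Exception e) {
            // Upstream abnormal: log and degrade instead of crashing
            System.err.println("upstream unavailable, degrading: " + e.getMessage());
            return "{\"uuid\":\"" + uuid + "\",\"status\":\"unknown\"}";
        }
    }
}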

Introduction to Interface Test Automation

What is interface test automation?

Interface test automation, in simple terms, means turning functional test cases into scripts, executing the scripts, and generating a visual test report.

Why do interface test automation?

No matter the testing method, the goal is to verify functionality and find bugs. So why automate interface testing? In a word: to save labor. Specifically, this includes the following points:

  • Reduce the workload and free testers from tedious, repetitive manual testing;
  • Assist manual testing with tasks that are difficult or impossible to simulate by hand;
  • Improve work efficiency, such as automated compilation, packaging, deployment, continuous integration and even continuous delivery of test environments;
  • Assist in locating problems: if a problem is found at the interface layer, the attached traceID can be used to locate the error in the logs or the offending line of code;
  • Find bugs early and notify testers automatically: once a problem is found, testers are alerted immediately, quickly and efficiently.

Specification of Interface Test Automation

Based on my experience with interface testing, here is a summary of some specifications for interface test automation.

  • Document preparation

As the saying goes, sharpening the axe does not delay the cutting of firewood: preparing detailed interface-related documents helps the subsequent interface test automation proceed efficiently. Relevant documents include, but are not limited to, the following:

1. "Requirements Document" , which clearly defines: the business scenario behind the interface, that is, what the interface is used for, where it is used, what will happen each time it is called, etc.;

2. "Interface Document" , which clearly defines: interface name, each input parameter value, each return value, and other related information;

3. "UI Interaction Diagram" , which clearly defines: the data to be displayed on each single page; the interaction between pages, etc.;

4. "Data Table Design Document" , which clearly defines: table field rules, table N-to-N relationship (one-to-one, one-to-many, many-to-many), etc.;

Be sure to confirm with the relevant stakeholders that the information in the documents is reliable and up to date. Only with reliable documents can you design correct, detailed interface use cases and obtain correct results.

  • Identify the functions required for interface test automation

1. Check (assert)

A test assertion is the pass condition of an automated test; it determines whether a test case meets expectations. Support for validating return values is therefore a must.

2. Data isolation

Data isolation means keeping the specific request interfaces, parameters, verification data and so on separate from the code, which makes them easy to maintain: when an interface use case needs to be adjusted or added, the place to change is quickly found. Another benefit of isolation is reusability: the framework can be promoted to other teams, who can use the same code and only fill in their own use cases as required.

3. Data transfer

Once data is isolated and maintainable, data transfer is the next important requirement. When testing interfaces, we first decouple and test single interfaces, then combine multiple interfaces according to business scenarios. Data transfer is the precondition for combining interfaces: it lets one interface use case pass parameters down to the next. For example, we query the device information of the current Tmall Genie speaker through the device information query interface, which returns a UUID; next we query the user information bound to the device through the user information query interface, whose request data must be extracted from the first use case's response.
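
A minimal TestNG sketch of this kind of chaining (the response JSON here is a hard-coded stand-in; in a real test it would come from the actual interface call):

 import static org.testng.Assert.assertNotNull;

import org.testng.annotations.Test;

public class DeviceUserChainTest {

    /** UUID extracted from the first interface's response, passed to the second. */
    private String uuid;

    @Test
    public void queryDeviceInfo() {
        // Stand-in for the device-info interface's JSON response
        String responseJson = "{\"uuid\":\"demo-uuid-123\"}";
        // Extract the UUID (a real test would use a JSON parser)
        uuid = responseJson.replaceAll(".*\"uuid\":\"([^\"]+)\".*", "$1");
        assertNotNull(uuid);
    }

    @Test(dependsOnMethods = "queryDeviceInfo")
    public void queryBoundUser() {
        // The second interface's request reuses the UUID extracted above
        System.out.println("GET /user/info?uuid=" + uuid);
        assertNotNull(uuid);
    }
}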

4. Helper functions

Real business-scenario tests need various auxiliary capabilities, such as randomly generated timestamps, request IDs, random mobile phone numbers or location information. The code must therefore recognize the corresponding keywords in the use-case data and fill them in by executing the corresponding helper function.
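
A sketch of what such helper functions might look like (the keyword-to-function mapping itself is omitted; the phone-number prefix is illustrative):

 import java.util.UUID;
import java.util.concurrent.ThreadLocalRandom;

/** Helper functions used to fill keyword placeholders in use-case data. */
public class TestFunctions {

    /** Current timestamp in milliseconds, e.g. for a ${timestamp} keyword. */
    public static String timestamp() {
        return String.valueOf(System.currentTimeMillis());
    }

    /** Random request ID, e.g. for a ${requestId} keyword. */
    public static String requestId() {
        return UUID.randomUUID().toString();
    }

    /** Random mobile number with an illustrative 138 prefix. */
    public static String randomPhone() {
        return "138" + String.format("%08d", ThreadLocalRandom.current().nextInt(100_000_000));
    }
}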

5. Configurable

At present, test environments include, but are not limited to, daily, pre-release 1, pre-release 2, online and so on, so a use case must not be runnable in only one environment: the same interface use case should execute in daily, pre-release, online or any other environment. The framework therefore needs to be configurable so that switching is easy, and loading a different configuration file runs the tests in a different environment.

6. Logging

The log records key information such as the exact interface executed, the request method, request parameters, return value, verification interface, request time and time consumed. Its first benefit is quick problem localization when a new use case misbehaves, showing where the filled-in data is wrong; its second is providing data when reporting a bug to developers, who can then quickly locate the problem from the trigger time, parameters and other information.

7. Visual reporting

After the use cases are executed, it is time to show the results to the team. A visual report helps team members understand the success and failure data of each automated interface use case run.

8. Continuous integration

For interfaces that already have test cases and have been tested, we want to turn them into regression use cases: before the next iteration or release, run the existing use cases as a regression test to ensure the newly launched functions do not affect existing ones. This requires interface test automation to be continuously integrated rather than one-off.

  • Interface test automation framework selection

Combining our requirements for an interface test automation framework with the characteristics of the many testing tools currently on the market, the comparison is summarized in the table below:

[Table: comparison of common interface testing tools]

Here is a brief list:

1. Fiddler

Fiddler is an HTTP debugging proxy used for web and mobile testing, and it supports interface testing as well. It can record and inspect all HTTP traffic between your computer and the Internet, set breakpoints, and view all data going "in and out" of Fiddler (cookies, HTML, JS, CSS and other files).

2. Postman

Postman started as an extension installed in the Chrome browser and is now also a standalone application. It supports different kinds of interface test requests, test suite management and automated runs. Its weakness is that its automated assertion capability is limited, and it is hard to hook into continuous integration with Jenkins and a code repository.

3. Wireshark

Wireshark is a packet capture tool that supports TCP, UDP, HTTP and other protocols. It is generally needed for low-level network data testing, but it is a bit unfriendly for interface testing: the captured data refreshes so quickly that it is hard to match each operation to its corresponding interface.

4. SoapUI

SoapUI is an open-source testing tool that inspects and invokes Web Services and performs functional/load/compliance testing over SOAP/HTTP. It can be used as standalone software or integrated via plug-ins into Eclipse, Maven 2.x, NetBeans and IntelliJ. One or more test suites (TestSuite) are organized into a project; each suite contains one or more test cases (TestCase), and each test case contains one or more test steps, such as sending requests, receiving responses, analyzing results and changing the execution flow. SoapUI supports interface test automation and interface performance testing, as well as continuous integration with Jenkins.

5. Java code for interface testing

Why use code for interface test automation? Tools have limited functionality, and many companies need specific capabilities that tools do not support, so these have to be developed in code. Java is commonly used: the httpclient.jar package sends the requests, unit testing tools such as JUnit or TestNG drive the test cases, and a job on Jenkins (or our Aone) runs them for continuous integration.

6. Python code for interface testing

As with Java, interface testing in Python can use the powerful third-party Requests library, which makes it easy to create automated interface use cases. The usual unit testing framework under Python is unittest, and HTMLTestRunner.py is a common choice for generating test reports. Likewise, continuous integration can be done with Jenkins.

Interface Test Automation Practice

TestNG vs. JUnit

  • Comprehensive comparison

In my daily testing work, I do interface testing with Java code more often than with off-the-shelf tools. Here is my comparison of the unit testing tools TestNG and JUnit, starting with a table summarizing their characteristics.

[Table: TestNG vs. JUnit feature comparison]

The similarities between TestNG and JUnit are as follows:

1. Both use annotations, and most of the annotations are the same;

2. Both can carry out unit testing;

3. Both are tools for testing Java;

The differences between TestNG and JUnit are as follows:

1. TestNG supports richer annotations, such as @ExpectedExceptions, @DataProvider, etc.;

2. JUnit 4 requires @BeforeClass and @AfterClass methods to be declared static, which restricts those methods to using static variables. In TestNG, a method annotated with @BeforeClass can be written exactly like an ordinary method;

3. JUnit is typically run from an IDE, whereas TestNG can be run from the command line, from ant, or from an IDE;

4. JUnit 4 handles dependencies poorly: there is a strict order between test cases, and if an earlier test fails, all subsequent tests that depend on it fail as well. TestNG uses the dependsOnMethods attribute of @Test to handle test dependencies: if a method's dependency fails, the method is skipped rather than marked as failed.

5. For a test with n different parameter combinations, JUnit 4 needs n test cases, each performing essentially the same task with different method parameters. TestNG's parameterized test needs only one test case; the required parameters are added to TestNG's XML configuration file or injected with @DataProvider. The benefit is that parameters are separated from the test code, so non-programmers can modify parameters without recompiling the test code.

6. JUnit 4's results appear only in the green/red bar. TestNG's results additionally appear in the console window and the test-output folder, and the more detailed description of results makes it easier to locate errors.

  • Detailed feature comparison

Below is a detailed comparison of TestNG and JUnit features:

1. Framework integration:

Spring + TestNG + Maven integration:

  • Add testng dependencies to pom.xml:
 <dependency>
  <groupId>org.testng</groupId>
  <artifactId>testng</artifactId>
  <version>6.8.8</version>
  <scope>test</scope>
</dependency>
  • Add the annotation @ContextConfiguration(locations = "classpath:applicationContext.xml") to the test class and extend AbstractTestNGSpringContextTests; for example:
 @ContextConfiguration(locations = "classpath:applicationContext.xml")
public class BaseTest extends AbstractTestNGSpringContextTests {
    @Test
    public void testMethods() {
        // ...
    }
}

Spring + JUnit + Maven integration:

  • Add junit dependencies to pom.xml:
 <!-- JUnit version -->
<dependency>
  <groupId>junit</groupId>
  <artifactId>junit</artifactId>
  <version>4.4</version>
  <scope>test</scope>
</dependency>
  • Add two annotations to the test class, as follows:

@RunWith(SpringJUnit4ClassRunner.class)

@ContextConfiguration(locations = "classpath:applicationContext.xml")

 @RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(locations = "classpath:applicationContext.xml")
public class BaseTest {
    @Test
    public void testMethods() {
        // ...
    }
}

2. Annotation support

[Table: TestNG and JUnit annotation comparison]

The main differences are the following two points:

1. In JUnit 4, "@BeforeClass" and "@AfterClass" methods must be declared static. TestNG is more flexible in method declaration and has no such constraint.

2. JUnit 4's annotation naming is a bit confusing, e.g. "Before", "After" and "Expected": we do not really know before or after what, or what is "expected" in the method under test. TestNG is easier to understand, using names like "BeforeMethod", "AfterMethod" and "ExpectedException".

3. Exception testing

"Exception testing" verifies the exception thrown from a unit test; the feature is available in both JUnit 4 and TestNG.

JUnit 4

 @Test(expected = ArithmeticException.class)
public void divisionWithException() {
    int i = 1 / 0;
}

TestNG

 @Test(expectedExceptions = ArithmeticException.class)
public void divisionWithException() {
    int i = 1 / 0;
}

4. Ignoring tests

"Ignoring a test" means marking what should be skipped during unit testing; this feature is implemented in both frameworks.

JUnit 4

 @Ignore("Not Ready to Run")  @Test public void divisionWithException() {      System.out.println("Method is not ready yet"); }

TestNG

 @Test(enabled = false)
public void divisionWithException() {
    System.out.println("Method is not ready yet");
}

5. Timeout testing

"Timeout testing" means that if a unit test runs longer than a specified number of milliseconds, it is terminated and marked as failed; this feature is implemented in both frameworks.

JUnit 4

 @Test(timeout = 1000)
public void infinity() {
    while (true);
}

TestNG

 @Test(timeOut = 1000)
public void infinity() {
    while (true);
}

6. Suite testing

"Suite testing" means bundling several unit tests and running them together. This functionality is available in both JUnit 4 and TestNG. However, both use very different methods to achieve it.

JUnit 4

"@RunWith" and "@Suite" are used to run suite tests. The following class code indicates that after JunitTest3 is executed, the unit tests "JunitTest1" and "JunitTest2" are run together. All declarations are defined within the class.

 @RunWith(Suite.class)
@Suite.SuiteClasses({
    JunitTest1.class,
    JunitTest2.class
})
public class JunitTest3 {
}

TestNG

TestNG uses an XML file to run suite tests. The XML below runs the unit tests "TestNGTest1" and "TestNGTest2" together.

 <suite name="My test suite">   
  <test name="testing">
    <classes>
      <class name="com.fsecure.demo.testng.TestNGTest1" />
      <class name="com.fsecure.demo.testng.TestNGTest2" />
    </classes>   
  </test>
</suite>

TestNG can bundle tests by class as well as by method. With TestNG's unique concept of "grouping", each method can be assigned to one or more groups, so tests are classified (grouped) by functionality. For example:

Below is a class with four methods in three groups (method1, method2 and method4):

 @Test(groups="method1") public void testingMethod1() {   System.out.println("Method - testingMethod1()"); } 
@Test(groups="method2") public void testingMethod2() {   System.out.println("Method - testingMethod2()"); } 
@Test(groups="method1") public void testingMethod1_1() { System.out.println("Method - testingMethod1_1()"); } 
@Test(groups="method4") public void testingMethod4() { System.out.println("Method - testingMethod4()"); }

With the following XML file, only the tests in group "method1" are executed.

 <suite name="My test suite">   
  <test name="testing">       
    <groups>       
      <run>         
        <include name="method1"/>       
      </run>     
    </groups>     
    <classes>        
      <class name="com.fsecure.demo.testng.TestNGTest" /></classes>   
  </test> 
</suite>

7. Parameterized testing

A "parameterized test" varies the parameter values of a unit test. Both JUnit 4 and TestNG implement this functionality, but in very different ways.

JUnit 4 parameterized test:

  • Proceed as follows:

1. Mark a static parameter-building method with @Parameters

2. Pass the parameters in through the test class constructor

3. Use the parameters in the test method

 @RunWith(value = Parameterized.class)
public class JunitTest {

    private int number;

    public JunitTest(int number) {
        this.number = number;
    }

    @Parameters
    public static Collection<Object[]> data() {
        Object[][] data = new Object[][] { { 1 }, { 2 }, { 3 }, { 4 } };
        return Arrays.asList(data);
    }

    @Test
    public void pushTest() {
        System.out.println("Parameterized Number is : " + number);
    }
}
  • Shortcomings:
  1. A test class can have only one static parameter-building method;
  2. The test class must use @RunWith(Parameterized.class), which conflicts with the spring-test runner @RunWith(SpringJUnit4ClassRunner.class), so the services under test cannot be injected through annotations;
  3. A constructor must be added to the test class (a redundant design).

TestNG parameterized tests:

  • Proceed as follows:

1. Mark the parameter-building method with the @DataProvider annotation

2. Point the test method at the parameter-building method through the dataProvider attribute of @Test; the parameters can then be used in the test method

 @Test(dataProvider = "Data-Provider-Function")     
public void parameterIntTest(Class clzz, String[] number) {       
    System.out.println("Parameterized Number is : " + number[0]);       
    System.out.println("Parameterized Number is : " + number[1]);     
}
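
The @Test above references a data provider named "Data-Provider-Function", which is not shown; a minimal sketch of what it might look like (the rows are illustrative):

 @DataProvider(name = "Data-Provider-Function")
public Object[][] parameterTestProvider() {
    // Each row is one invocation of parameterIntTest(clzz, number)
    return new Object[][] {
        { Integer.class, new String[] { "1", "2" } },
        { Long.class, new String[] { "3", "4" } },
    };
}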

In addition, TestNG also supports constructing parameters through testng.xml:

 public class TestNGTest {
    @Test
    @Parameters(value = "number")
    public void parameterIntTest(int number) {
        System.out.println("Parameterized Number is : " + number);
    }
}

The content of the XML file is as follows:

 <suite name="My test suite">   
  <test name="testing">     
    <parameter name="number" value="2"/>     
    <classes>        
      <class name="com.fsecure.demo.testng.TestNGTest" />     
    </classes>   
  </test> 
</suite>

8. Dependency testing

A "parameterized test" means that the method is a dependency test, which will be executed before the required method. If the dependent method fails, all subsequent tests will be skipped and not marked as failed.

JUnit 4

The JUnit framework focuses on test isolation and currently does not support this feature.

TestNG

It uses "dependOnMethods" to implement dependency tests as follows

 @Test
public void method1() {
    System.out.println("This is method 1");
}

@Test(dependsOnMethods = { "method1" })
public void method2() {
    System.out.println("This is method 2");
}

TestNG Interface Automation Practice

  • Parameterized Test Example

Taking DeviceStatusHSFService as an example, the test class is as follows:

 public class DeviceStatusHSFServiceTest {

    private DeviceStatusHSFService deviceStatusHSFService;
    @BeforeTest(alwaysRun = true)
    public void beforeTest() {
        String envName = System.getProperty("maven.env");  // runtime environment is configurable
        SwitchENV switchEnv = new SwitchENV(envName);       // runtime environment is configurable
        deviceStatusHSFService = HsfRepository.getConsumer(DeviceStatusHSFService.class, switchEnv.getEnv(),
            "HSF", switchEnv.getHsfVersion(), "aicloud-device-center", switchEnv.getTargetIp()).getTarget();
    }

    @Test(dataProvider = "updateDeviceStatus", dataProviderClass = DeviceStatusHSFServiceTestDataProvider.class)
    public void updateDeviceStatusTest(Long userId, String uuid, DeviceStatus deviceStatus){
        Result<Boolean> result = deviceStatusHSFService.updateDeviceStatus(userId, uuid, deviceStatus);
        System.out.println("traceId:"+EagleEye.getTraceId()+result.toString());
        Boolean res = result.getResult();
        assertTrue(res);
    }
}

The runtime environment is configurable through the SwitchENV class:

 /**
 * Custom environment configuration
 */
public class SwitchENV {

    /**
     * Runtime environment
     */
    private Env env;

    /**
     * HSF version
     */
    private String hsfVersion;

    /**
     * Target machine
     */
    private String targetIp;

    /**
     * Environment name
     */
    private String envName;

    public SwitchENV(String envName) {

        Properties prop = new Properties();

        // TODO: only for switching environments in local automated testing
        if (envName == null) {
            envName = "pre1";
        }

        switch (envName) {

            case "online": {
                InputStream in = SwitchENV.class.getClassLoader().getResourceAsStream(
                    "config/application-online.properties");
                try {
                    prop.load(in);
                } catch (IOException e) {
                    e.printStackTrace();
                }
                env = Env.ONLINE;
                break;
            }
            case "pre1": {
                InputStream in = SwitchENV.class.getClassLoader().getResourceAsStream(
                    "config/application-pre1.properties");
                try {
                    prop.load(in);
                } catch (IOException e) {
                    e.printStackTrace();
                }
                env = Env.PREPARE;
                break;
            }
            case "pre2": {
                InputStream in = SwitchENV.class.getClassLoader().getResourceAsStream(
                    "config/application-pre2.properties");
                try {
                    prop.load(in);
                } catch (IOException e) {
                    e.printStackTrace();
                }
                env = Env.PREPARE;
                break;
            }
            case "pre3": {
                InputStream in = SwitchENV.class.getClassLoader().getResourceAsStream(
                    "config/application-pre3.properties");
                try {
                    prop.load(in);
                } catch (IOException e) {
                    e.printStackTrace();
                }
                env = Env.PREPARE;
                break;
            }
            default:
                try {
                    throw new Exception("环境变量输入错误!");
                } catch (Exception e) {
                    e.printStackTrace();
                }
                break;
        }
        hsfVersion = prop.getProperty("hsfVersion").trim();
        targetIp= prop.getProperty("targetIp").trim();
        this.envName = envName;
    }

    public Env getEnv() {
        return env;
    }

    public String getHsfVersion() {
        return hsfVersion;
    }

    public String getTargetIp() {
        return targetIp;
    }

    public String getEnvName() {
        return envName;
    }

}

All test parameters are placed in the DeviceStatusHSFServiceTestDataProvider class, so the specific request interfaces, parameters, verification data and other details stay isolated from the code.
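
A minimal sketch of what such a provider class might look like; the user IDs, UUIDs and DeviceStatus values are illustrative placeholders, not the real test data (note that a provider referenced via dataProviderClass must be static, or its class must have a no-arg constructor):

 import org.testng.annotations.DataProvider;

public class DeviceStatusHSFServiceTestDataProvider {

    /** Each row is one invocation of updateDeviceStatusTest(userId, uuid, deviceStatus). */
    @DataProvider(name = "updateDeviceStatus")
    public static Object[][] updateDeviceStatus() {
        return new Object[][] {
            // userId, uuid, deviceStatus -- illustrative values only
            { 12345L, "uuid-online-demo", DeviceStatus.ONLINE },
            { 12345L, "uuid-offline-demo", DeviceStatus.OFFLINE },
        };
    }
}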


Thinking and Summarizing

For interface test automation, from use case design to test script implementation, we need to bring the following three mindsets:

  • Modular thinking
  • Data-driven thinking
  • Keyword-driven thinking

Modular thinking

For an interface test automation project, we need to be able to create small, self-contained, describable modules, snippets and scripts of the application under test. These tree-structured scripts can then be combined into scripts for specific test cases.

Data-driven thinking

In short, separate the test script from the test data. Let the test data exist independently of the test script and remove the tight coupling between them. The script no longer manages test data; in data-driven testing the data lives in files or a database. On each run, the script mechanically reads the test data from the file or database, and different data drives different test paths. Throughout the test the script is immutable, mechanically executing its own code, while the data set is alive: we steer the code in the script through different data. This keeps test data out of the test script and makes the data easy to extend. Furthermore, to keep regression testing stable and consistent, changes to test scripts should be avoided as much as possible, a principle that mixing data into scripts violates. As a project deepens, the number of test scripts keeps growing; if data and scripts are mixed together, maintenance becomes dreadful and mistakes are inevitable. So keep data and scripts separate, stick to "dead code, live data", and most maintenance work will be data-only.
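
A minimal sketch of this separation, assuming the use cases live in a CSV file (path and format are illustrative):

 import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.List;

import org.testng.annotations.DataProvider;

/** Test data lives in a file, not in the script: "dead code, live data". */
public class CsvDataProvider {

    /** cases.csv holds one use case per line: userId,uuid,expected */
    @DataProvider(name = "csvCases")
    public static Object[][] csvCases() throws Exception {
        List<String> lines = Files.readAllLines(Paths.get("src/test/resources/cases.csv"));
        return lines.stream()
                .map(line -> (Object[]) line.split(","))
                .toArray(Object[][]::new);
    }
}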

Keyword-driven thinking

This is a more advanced form of data-driven testing. The core idea is to encapsulate each step of a test case as a function, use the function name as a keyword, and write the function names and parameters into a file, one step per line. By parsing each line, assembling its content into a function call, and invoking the encapsulated step function, the test case is executed step by step. In keyword-driven testing, the functionality of the application under test and the execution steps of each test are written together into a table. The aim is to generate a large number of test cases with very little code: the same code is reused while the data table produces the various cases.
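
A minimal sketch of the keyword-to-function dispatch (keywords, step bodies and the line format are all illustrative):

 import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;
import java.util.function.Consumer;

/** Each step line ("keyword|arg1|arg2...") is parsed and dispatched to an encapsulated step function. */
public class KeywordRunner {

    private static final Map<String, Consumer<String[]>> KEYWORDS = new HashMap<>();

    static {
        // Keyword -> encapsulated step function (bodies are illustrative stubs)
        KEYWORDS.put("sendRequest", args -> System.out.println("POST " + args[0]));
        KEYWORDS.put("assertField", args -> System.out.println("assert " + args[0] + " == " + args[1]));
    }

    public static void runLine(String line) {
        String[] parts = line.split("\\|");
        String[] args = Arrays.copyOfRange(parts, 1, parts.length);
        KEYWORDS.get(parts[0]).accept(args);   // assemble and invoke the step call
    }

    public static void main(String[] args) {
        runLine("sendRequest|/device/info?uuid=demo");
        runLine("assertField|status|200");
    }
}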

The closer our testing approach gets to these three ideas, the more automated interface testing becomes. As artificial intelligence develops, more automated testing tools will emerge under the AI wave, for example using adaptive algorithms to iterate our test cases and generate test scripts. Future testers will therefore focus on designing more reliable and efficient tools for automated use case generation, script construction and test execution, letting intelligent machines do the work that used to be manual testing.

Finally

The PTS interface test feature is in free public beta, and everyone is welcome to apply; click "Read the original text" to visit PTS.

For further discussion, welcome to join the PTS user DingTalk group: 11774967.

In addition, PTS has recently upgraded its pricing: the basic edition is now 50% cheaper, and 50,000 concurrent users costs only ¥199, sparing you the trouble of operating and maintaining your own stress testing platform. There is also a ¥0.99 trial edition for new users and a VPC-exclusive stress testing edition; welcome to click here to buy!


