JMeter core principle
Working at the protocol level, JMeter simulates real user scenarios and reproduces user-initiated requests through multi-threading.
- Protocol-based: the object of performance testing is software built on a distributed network architecture, and the core of such an architecture is the network protocol
- Multi-threading: a human operator is effectively single-threaded, while a CPU can run many threads; performance testing uses multi-threading to simulate multi-user load
- Realistic scenarios: users' access times and frequencies are not fixed, so the test must reproduce that variability
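A minimal sketch of that idea in plain Java (assuming Java 11+ and a hypothetical endpoint https://example.com/login): each thread plays one virtual user and sends protocol-level HTTP requests, roughly what a JMeter thread group does under the hood.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class SimpleLoadSketch {
    public static void main(String[] args) throws InterruptedException {
        int virtualUsers = 10;                          // one thread per simulated user
        String targetUrl = "https://example.com/login"; // hypothetical endpoint
        HttpClient client = HttpClient.newHttpClient();
        ExecutorService pool = Executors.newFixedThreadPool(virtualUsers);

        for (int i = 0; i < virtualUsers; i++) {
            pool.submit(() -> {
                try {
                    // Protocol-based: the load is just an HTTP request, the same thing a browser sends.
                    HttpRequest request = HttpRequest.newBuilder(URI.create(targetUrl)).GET().build();
                    long start = System.currentTimeMillis();
                    HttpResponse<String> response =
                            client.send(request, HttpResponse.BodyHandlers.ofString());
                    long elapsed = System.currentTimeMillis() - start;
                    System.out.printf("status=%d elapsed=%dms%n", response.statusCode(), elapsed);
                } catch (Exception e) {
                    System.err.println("request failed: " + e.getMessage());
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
    }
}
```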
Performance testing theory
Performance test
- Basic goal: verify whether system performance meets the required standard; use technical means to simulate concurrent user requests, measure the system's maximum processing capacity and its ability to run stably, find performance bottlenecks, and improve the system's overall processing capacity
- Basic methods: benchmark testing, load testing, stress testing, and so on
Core principle
Based on the protocol, JMeter simulates user concurrency through multi-threading and applies load to the server under different scenarios.
- Protocol-based: requests are sent over protocols such as HTTP, HTTPS, TCP, UDP, sockets, and WebSocket
- Multi-threading: concurrent users are simulated with threads, which apply load to the server
- Scenario design: JMeter components are used to model how users interact with the system, including correlation, think time, rendezvous points, and result assertions (see the sketch below)
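Think time and assertions can be sketched the same way; the pause range (1-3 s) and the endpoint below are illustrative assumptions, playing the roles of a JMeter timer and a response assertion.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.concurrent.ThreadLocalRandom;

public class ThinkTimeAndAssertionSketch {
    public static void main(String[] args) throws Exception {
        String targetUrl = "https://example.com/search?q=jmeter"; // hypothetical endpoint
        HttpClient client = HttpClient.newHttpClient();

        for (int iteration = 0; iteration < 5; iteration++) {
            // Think time: real users do not click at a fixed rate, so pause 1-3 s at random.
            Thread.sleep(ThreadLocalRandom.current().nextLong(1000, 3000));

            HttpRequest request = HttpRequest.newBuilder(URI.create(targetUrl)).GET().build();
            HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());

            // Assertion: the sample only passes if the response code is 200.
            boolean passed = response.statusCode() == 200;
            System.out.printf("iteration=%d status=%d assertion=%s%n",
                    iteration, response.statusCode(), passed ? "PASS" : "FAIL");
        }
    }
}
```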
Application fields
Capability verification: whether the system delivers its declared capability under fixed conditions
For example, in a project delivered by Party B to Party A, it is stated that the system can support 5,000 users logging in at the same time with a response time of no more than 3 s; Party B must run performance tests to obtain the results and provide Party A with an acceptance report
Bottleneck discovery: find bottlenecks and defects when there are no reference performance indicators or targets
A series of performance testing methods is used to uncover performance bottlenecks and defects
Performance tuning: tune the system's performance
Tuning targets the performance bottlenecks that have been found:
- TPS bottleneck
- Service resource bottleneck
- Response time bottleneck
- SQL bottleneck
Capacity planning: can the system support user growth in the future?
The current system may only support 5,000 concurrent users;
It is expected that the number of concurrent users will reach 50,000 or even 500,000 in the future;
In view of a possible explosion in business volume, take the expected concurrent user volume as the baseline, run the corresponding performance tests, and upgrade the hardware in advance
Test ideas
- What to test?
Front end: web and app; from the user's perspective, the focus is on page load time and response time
Server:
- Tool level: focus on error rate and throughput
- Server level: CPU, memory, IO, JVM
Database: including slow SQL, deadlocks, and so on
- How to test?
Requirement analysis - test plan - test scheme - test environment setup - use-case design - data preparation - scenario design - script development - data monitoring - result analysis - performance tuning - report submission
- Pass criteria for test results:
With explicit requirements: the test results meet expectations
Without explicit requirements: for example, measuring the maximum concurrency a page can support
Performance indicators
- Front-end performance indicators
Response time: the most important indicator from the user's perspective
The 2-5-8 principle:
- Within 2 s: very satisfied
- Within 5 s: acceptable
- More than 8 s: unacceptable
Front-end response time:
- Time to load and render front-end resources
- Front-end interaction time
- Time for the front end to display data returned by the back end on the page
Network connection time:
- Connect time (connection time): the network time from when the request is sent until the server receives it
- Latency: network connection time + server processing and return time
- Server-side processing time = latency - connect time (see the sketch below)
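As a small illustration of that formula, the sketch below reads a JMeter CSV result file (JTL) and derives the server-side processing time per sample. It assumes the file is named results.jtl, that the Latency and Connect columns are enabled in the result saver, and that no field contains embedded commas.

```java
import java.io.BufferedReader;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Arrays;
import java.util.List;

public class ServerProcessingTimeSketch {
    public static void main(String[] args) throws Exception {
        Path jtl = Path.of("results.jtl"); // hypothetical JMeter result file
        try (BufferedReader reader = Files.newBufferedReader(jtl)) {
            // Locate the Latency and Connect columns from the CSV header.
            List<String> header = Arrays.asList(reader.readLine().split(","));
            int latencyCol = header.indexOf("Latency");
            int connectCol = header.indexOf("Connect");

            String line;
            while ((line = reader.readLine()) != null) {
                String[] fields = line.split(",");
                long latency = Long.parseLong(fields[latencyCol]);
                long connect = Long.parseLong(fields[connectCol]);
                // Server-side processing time = latency - connect time
                System.out.printf("latency=%dms connect=%dms serverProcessing=%dms%n",
                        latency, connect, latency - connect);
            }
        }
    }
}
```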
Error rate
Click rate (HPS, hits per second): requests triggered by user clicks per unit time
TPS (transactions per second):
- Single-interface service: the number of requests completed per unit time
- Multi-interface business: the number of transactions completed per unit time
RPS (requests per second): measures the load directly from the server's perspective
- The number of requests initiated per unit time
TPS measures the performance of the server/system; RPS measures the load applied to it
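To make the distinction concrete, here is a tiny arithmetic sketch with made-up numbers: RPS counts what the load generator initiates, TPS counts what the server completes in the same window.

```java
public class ThroughputSketch {
    public static void main(String[] args) {
        double windowSeconds = 60.0;       // hypothetical measurement window

        long requestsSent = 3600;          // initiated by the load generator
        long transactionsCompleted = 3000; // finished by the server in the same window

        double rps = requestsSent / windowSeconds;          // pressure applied to the server
        double tps = transactionsCompleted / windowSeconds; // processing capacity of the server

        System.out.printf("RPS (load applied)   = %.1f req/s%n", rps);
        System.out.printf("TPS (work completed) = %.1f tx/s%n", tps);
    }
}
```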
- Server performance indicators
CPU
RAM
Disk IO
JVM
Middleware: Tomcat, Redis, Nginx
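For the JVM indicator in particular, the standard java.lang.management API gives a quick in-process view of heap usage and system load; a minimal sketch (output format and units are arbitrary):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryUsage;

public class JvmMetricsSketch {
    public static void main(String[] args) {
        // JVM heap usage, one of the server-level indicators listed above.
        MemoryUsage heap = ManagementFactory.getMemoryMXBean().getHeapMemoryUsage();
        System.out.printf("heap used=%dMB committed=%dMB max=%dMB%n",
                heap.getUsed() >> 20, heap.getCommitted() >> 20, heap.getMax() >> 20);

        // System load average over the last minute (-1.0 if the platform does not report it).
        double load = ManagementFactory.getOperatingSystemMXBean().getSystemLoadAverage();
        System.out.printf("system load average (1 min): %.2f%n", load);

        // CPU cores visible to the JVM.
        System.out.printf("available processors: %d%n", Runtime.getRuntime().availableProcessors());
    }
}
```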
Test perspective
User perspective
- Response time
- System stability
Operations perspective
- Whether the hardware needs to be upgraded or replaced
Whether resource utilization meets the standard
- Utilization too high
- Utilization too low
- System capacity
Development perspective
- Does the code need to be optimized
- SQL optimization
- System architecture optimization
Performance test engineer's perspective
Types of tests
Benchmark test: every version iteration needs a benchmark test; the purpose is to compare against previous results and provide a basis for tuning
Load test: continuously increase the load while keeping it applied without interruption, in order to find the system's bottleneck (see the sketch after this list)
- Concurrent-user model: keep increasing the number of concurrent users
- RPS model: keep increasing the number of requests per second
Stress test: with resources saturated or the load at its peak, keep the service running to examine the system's stability
- Stability test: run continuously for a period of time at about 80% of the maximum load
- Destructive test: keep increasing the load beyond the maximum, with the goal of making the system crash and report errors
Failure recovery test: whether the system can recover promptly after an abnormality occurs
Capacity test: examine how many users the system can support in a future period, and what hardware the system requires at large capacity
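A minimal sketch of the concurrent-user load model mentioned above (assuming Java 11+ and a hypothetical endpoint): virtual users are added in steps so the load keeps rising until response times or errors reveal a bottleneck; the step size, hold time, and ceiling are arbitrary.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.concurrent.atomic.AtomicBoolean;

public class SteppingLoadSketch {
    public static void main(String[] args) throws Exception {
        String targetUrl = "https://example.com/api/orders"; // hypothetical endpoint
        HttpClient client = HttpClient.newHttpClient();
        AtomicBoolean running = new AtomicBoolean(true);

        int step = 5, maxUsers = 50; // add 5 virtual users per step, up to 50
        for (int users = 0; users < maxUsers; users += step) {
            for (int i = 0; i < step; i++) {
                Thread worker = new Thread(() -> {
                    while (running.get()) {
                        try {
                            HttpRequest request = HttpRequest.newBuilder(URI.create(targetUrl)).GET().build();
                            client.send(request, HttpResponse.BodyHandlers.discarding());
                            // In a real test, record elapsed time and errors here to locate the bottleneck.
                        } catch (Exception e) {
                            System.err.println("request failed: " + e.getMessage());
                        }
                    }
                });
                worker.setDaemon(true);
                worker.start();
            }
            System.out.println("active virtual users: " + (users + step));
            Thread.sleep(30_000); // hold each load step for 30 s before adding more users
        }
        running.set(false);
    }
}
```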