After an enterprise-level JavaScript application is deployed and running on a production system, finding the root cause of a performance problem is often harder than finding the root cause of a runtime failure or exception. Performance problems in JavaScript applications typically show up as increased response times for user requests and as shrinking, or even exhausted, system resources.

This article uses Spartacus, an open-source enterprise e-commerce Storefront application, as an example to show how developers can analyze the possible causes when a JavaScript application runs into performance problems in a production system.
When a user requests the Storefront in the browser, the request is processed by the following components, so in theory any of them could be the source of a performance problem:

- Load balancing server
- Web server (e.g. Nginx, Apache) serving the JavaScript Storefront
- server.js: the JavaScript Storefront running in server-side rendering mode
- CDN
- API server
- Database
We can use Dynatrace, a powerful platform, to analyze the execution performance data of JavaScript applications. The Services menu in Dynatrace is the recommended starting point for profiling, because performance data for all of the components above is visible there.

A recommended way to visualize all components and their response-time contributions is to start from the outermost layer of the call chain, which is the Apache service serving the JavaScript Storefront website, in this case www.demo.com:443.
Click the ... icon on the right and select Service Flow. On the page that follows, it is best to narrow the analysis down to the time window in which the significant performance issues occurred. Another option is to add a response-time filter to focus only on the requests with the slowest response times.
For example, setting the filter to response time >= 6s visualizes only the request hotspots with response times of 6 seconds or more.
It should be emphasized that each node displayed in the Service Flow chart depends on the previous node. Therefore, when analyzing performance data over a period of time, the response-time percentage of several adjacent nodes may all be high up to the service that actually has the performance issue. For example, if the performance bottleneck is the database, every layer in front of it will also appear to contribute heavily to the overall response time, even if those layers are simply waiting for the database operation to finish.
For example, in the scenario shown above, the bottleneck appears to be the Node.js application, because the sharp drop in response-time percentage occurs after this Node.js program.
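The cumulative effect described above can be sketched with hypothetical per-tier timings: each tier's observed response time includes everything downstream of it, so every tier in front of a slow database also looks slow. The tier names and millisecond values below are invented for illustration.

```javascript
// Hypothetical "own work" time per tier, in milliseconds.
const ownTimeMs = {
  apache: 20,
  nodejs: 150,
  api: 80,
  database: 6000, // the actual bottleneck
};

// The total time observed at a tier is its own work plus the
// total time of the tier below it, so walk the chain bottom-up.
const tiers = ["apache", "nodejs", "api", "database"];
const totalTimeMs = {};
let downstream = 0;
for (const tier of [...tiers].reverse()) {
  downstream += ownTimeMs[tier];
  totalTimeMs[tier] = downstream;
}

console.log(totalTimeMs);
// Apache's observed total is 6250 ms, dominated by the database's
// 6000 ms, even though Apache itself only does 20 ms of work.
```

This is why the drop in response-time percentage, rather than the absolute response time, points to the real bottleneck.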
The individual contributors at each tier can be viewed by clicking on any tier, using the ... button, and selecting the Response Time Hotspot option. Starting at www.demo.com:443, for example, we found that one particular request contributed the most to the overall response time.
Since we suspect that the performance problem is caused by the Node.js application, we can drill further into the performance data of jsapps; as shown in the figure below, its Code Execution took 6.19 seconds.
Click Code Execution in the image above to open the details page, then keep clicking ... until you find the function call that contributes the most to the 6.19 s response time.
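Once Dynatrace has pointed to a suspect function, a quick way to corroborate the finding locally is to time that code path with `process.hrtime.bigint()`. In this sketch, `renderProductPage` is a hypothetical stand-in for whatever function shows up under Code Execution; the deliberately quadratic loop simulates the hotspot.

```javascript
// Hypothetical stand-in for the hotspot flagged by Dynatrace.
function renderProductPage(items) {
  let out = 0;
  // Deliberately quadratic work to simulate a slow code path.
  for (const a of items) for (const b of items) out += a * b;
  return out;
}

// Generic helper: measure how long a function call takes.
function timeIt(label, fn) {
  const start = process.hrtime.bigint();
  const result = fn();
  const elapsedMs = Number(process.hrtime.bigint() - start) / 1e6;
  console.log(`${label}: ${elapsedMs.toFixed(1)} ms`);
  return { result, elapsedMs };
}

const items = Array.from({ length: 2000 }, (_, i) => i);
const { elapsedMs } = timeIt("renderProductPage", () =>
  renderProductPage(items)
);
```

If the locally measured time scales the same way as the production numbers, the hotspot is in the code itself; if not, the 6.19 s is more likely spent waiting on a downstream service.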