
Performance optimization is a double-edged sword. The good side is that it can improve a website's performance; the bad side is that it is troublesome to configure and there are many rules to follow. Moreover, some performance optimization rules do not apply to every scenario and should be used with caution. Readers are invited to read this article with a critical eye.

1. Reduce HTTP requests

A complete HTTP request goes through DNS lookup, the TCP handshake, the browser sending the HTTP request, the server receiving the request, the server processing the request and sending back a response, and the browser receiving the response. Let's look at a specific example to help understand HTTP:

[Image]

This is an HTTP request and the requested file size is 28.4KB.

Glossary:

  • Queueing: Time in the request queue.
  • Stalled: The time difference between when the TCP connection is established and when data can actually be transmitted. This time includes proxy negotiation time.
  • Proxy negotiation: The time it takes to negotiate with the proxy server.
  • DNS Lookup: The time it takes to perform a DNS lookup. A DNS lookup is required for each different domain on the page.
  • Initial Connection / Connecting: The time it takes to establish a connection, including TCP handshake/retry and SSL negotiation.
  • SSL: The time it takes to complete the SSL handshake.
  • Request sent: The time spent sending the network request, usually around a millisecond.
  • Waiting (TTFB): TTFB (time to first byte) is the time from when the page request is sent until the first byte of the response data is received.
  • Content Download: The time it takes to receive the response data.

From this example, it can be seen that the proportion of time actually spent downloading data is 13.05 / 204.16 = 6.39%. The smaller the file, the smaller this proportion; the larger the file, the higher the proportion. This is why it is recommended to merge multiple small files into one larger file, thereby reducing the number of HTTP requests.

2. Use HTTP2

Compared with HTTP1.1, HTTP2 has the following advantages:

Faster parsing

When a server parses an HTTP1.1 request, it must keep reading bytes until it encounters the CRLF separator. Parsing HTTP2 requests is not nearly as troublesome, because HTTP2 is a frame-based protocol and every frame has a field indicating its length.

Multiplexing

With HTTP1.1, if you want to initiate multiple requests at the same time, you must establish multiple TCP connections, because a TCP connection can only handle one HTTP1.1 request at a time.

With HTTP2, multiple requests can share a single TCP connection; this is called multiplexing. Each request and response is represented by a stream with a unique stream ID. Multiple requests and responses can be sent over the TCP connection out of order and then reassembled at the destination using the stream ID.

Header compression

HTTP2 provides header compression.

For example, there are the following two requests:

:authority: unpkg.zhimg.com
:method: GET
:path: /za-js-sdk@2.16.0/dist/zap.js
:scheme: https
accept: */*
accept-encoding: gzip, deflate, br
accept-language: zh-CN,zh;q=0.9
cache-control: no-cache
pragma: no-cache
referer: https://www.zhihu.com/
sec-fetch-dest: script
sec-fetch-mode: no-cors
sec-fetch-site: cross-site
user-agent: Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.122 Safari/537.36
:authority: zz.bdstatic.com
:method: GET
:path: /linksubmit/push.js
:scheme: https
accept: */*
accept-encoding: gzip, deflate, br
accept-language: zh-CN,zh;q=0.9
cache-control: no-cache
pragma: no-cache
referer: https://www.zhihu.com/
sec-fetch-dest: script
sec-fetch-mode: no-cors
sec-fetch-site: cross-site
user-agent: Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.122 Safari/537.36

As can be seen from the above two requests, a lot of the data is duplicated. If the identical headers could be stored and only the differences sent between them, a lot of traffic could be saved and requests would complete faster.

HTTP/2 uses "header tables" on the client and server to track and store previously sent key-value pairs; identical data is no longer re-sent with every request and response.

Let's look at a simplified example again, assuming that the client sends the following request headers in order:

Header1:foo
Header2:bar
Header3:bat

When the client sends a request, it creates a table based on the header value:

Index    Header name    Value
62       Header1        foo
63       Header2        bar
64       Header3        bat

When the server receives the request, it creates the same table. When the client sends the next request, if the headers are the same, it can simply send a header block like this:

62 63 64

The server will look up the previously created table and restore these numbers to the complete header corresponding to the index.

Priority

HTTP2 can set a higher priority for more urgent requests, and the server can give priority to processing after receiving such requests.

Flow control

Since the bandwidth of a TCP connection (determined by the network bandwidth between client and server) is fixed, when there are multiple concurrent requests, one request taking up more of the bandwidth leaves less for the others. Flow control allows the traffic of each individual stream to be controlled precisely.

Server push

A powerful new feature of HTTP2 is that the server can send multiple responses to a client request. In other words, in addition to the response to the initial request, the server can additionally push resources to the client without the client explicitly requesting it.

For example, when a browser requests a website, in addition to returning an HTML page, the server can also push resources in advance according to the URL of the resource in the HTML page.

Many websites have already started to use HTTP2, such as Zhihu:

[Image]

Among them, h2 refers to the HTTP2 protocol, and http/1.1 refers to the HTTP1.1 protocol.

3. Use server-side rendering

Client-side rendering: the browser fetches the HTML file, downloads JavaScript files as needed, runs them, generates the DOM, and then renders.

Server-side rendering: The server-side returns HTML files, and the client-side only needs to parse the HTML.

  • Advantages: First screen rendering is fast and SEO is good.
  • Disadvantages: Configuration is troublesome, which increases the computing pressure of the server.

Below I use Vue SSR as an example to briefly describe the SSR process.

Client rendering process

  1. Visit the website rendered by the client.
  2. The server returns an HTML file containing the resource import statements and <div id="app"></div>.
  3. The client requests the resources from the server via HTTP. When all necessary resources are loaded, new Vue() is executed to instantiate and render the page.

Server-side rendering process

  1. Visit the website rendered by the server.
  2. The server checks which resource files the current routing component needs, fills the content of these files into the HTML file, executes any ajax requests to prefetch data and fills that in as well, and finally returns this HTML page.
  3. When the client receives this HTML page, it can start rendering the page immediately. At the same time, the page will load resources. When all necessary resources are loaded, new Vue() will be executed to instantiate and take over the page.

As can be seen from the above two processes, the difference lies in the second step. A client-rendered website returns the bare HTML file directly, while a server-rendered website renders the page into the HTML file before returning it.

What is the benefit of doing this? A faster time-to-content.

Suppose your website needs to load four files, a, b, c, and d, before rendering, and each file is 1 MB.

Calculated this way: a client-rendered website needs to load the HTML file plus the four files to finish rendering the home page, a total of 4 MB (ignoring the HTML file size). A server-rendered website only needs to load a single rendered HTML file to finish rendering the home page, and such a file is usually not very big, typically a few hundred KB (the HTML file loaded by my personal blog, which uses SSR, is 400 KB). This is why server-side rendering is faster.
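
To make the flow above concrete, here is a minimal sketch of the server side, assuming Vue 2 with the vue-server-renderer package and an Express server (the route and template are illustrative):

// server.js - a minimal Vue 2 SSR sketch
const express = require('express')
const Vue = require('vue')
const { createRenderer } = require('vue-server-renderer')

const app = express()
const renderer = createRenderer()

app.get('*', (req, res) => {
  // In a real app this would be the matched route component with prefetched data.
  const vm = new Vue({
    data: { url: req.url },
    template: '<div id="app">visited: {{ url }}</div>'
  })

  renderer.renderToString(vm, (err, html) => {
    if (err) return res.status(500).end('Internal Server Error')
    // The client receives HTML it can render immediately, then the JS bundle
    // loads, new Vue() runs, and the page is "hydrated" (taken over).
    res.end(`<!DOCTYPE html><html><body>${html}</body></html>`)
  })
})

app.listen(8080)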

4. Use CDN for static resources

A content delivery network (CDN) is a set of web servers distributed across many different geographical locations. We all know that the farther the server is from the user, the higher the latency. A CDN solves this problem by deploying servers in multiple locations, bringing users closer to a server and thereby shortening request times.

CDN principle

When a user visits a website, if there is no CDN, the process is like this:

  1. The browser needs to resolve the domain name to an IP address, so it needs to send a request to the local DNS.
  2. The local DNS sends requests to the root server, top-level domain server, and authoritative server in turn to obtain the IP address of the website server.
  3. The local DNS sends the IP address back to the browser, and the browser sends a request to the website server IP address and gets the resource.

[Image]

If a CDN is deployed on the website that the user visits, the process is as follows:

  1. The browser needs to resolve the domain name to an IP address, so it needs to send a request to the local DNS.
  2. The local DNS sends requests to the root server, top-level domain server, and authoritative server in turn to obtain the IP address of the global server load balancing system (GSLB).
  3. The local DNS then sends a request to the GSLB. The main job of the GSLB is to determine the user's location from the IP address of the local DNS, select the local load balancing system (SLB) closest to the user, and return the SLB's IP address to the local DNS.
  4. The local DNS sends the IP address of the SLB back to the browser, and the browser sends a request to the SLB.
  5. Based on the resource and address requested by the browser, the SLB selects the optimal cache server and sends its address back to the browser.
  6. The browser then redirects to the cache server according to the address sent back by the SLB.
  7. If the cache server has the resource the browser needs, it sends the resource back to the browser. If not, it requests the resource from the origin server, sends it to the browser, and caches it locally.

[Image]

5. Put CSS in the head and JavaScript files at the bottom

CSS and JS files placed in the head tag block rendering (CSS does not block DOM parsing, but it does block rendering). If these CSS and JS files take a long time to load and parse, the page stays blank. So JS files should be placed at the bottom, so that they are loaded and executed only after the HTML has been parsed.

So why should the CSS file be placed in the head?

Because loading the HTML first and the CSS afterwards means that the first thing the user sees is an unstyled, "ugly" page. To avoid this, the CSS file should be placed in the head.

In addition, it is not impossible to put JS files in the head: as long as the defer attribute is added to the script tag, the file is downloaded asynchronously and its execution is deferred until the HTML has been parsed.
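
For example, a minimal sketch (file names are illustrative):

<head>
  <link rel="stylesheet" href="style.css">
  <!-- Downloaded in parallel with HTML parsing, executed only after parsing finishes -->
  <script defer src="app.js"></script>
</head>
<body>
  <!-- page content -->
  <!-- Alternatively, an ordinary script can simply be placed at the bottom of the body -->
  <script src="other.js"></script>
</body>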

6. Use icon fonts (iconfont) instead of image icons

A font icon turns icons into a font. When used, it behaves just like text: you can set properties such as font-size and color, which is very convenient. A font icon is also a vector graphic, so it does not get distorted. Another advantage is that the generated files are extremely small.

Compress font files

Use fontmin-webpack plugin to compress font files.
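
A minimal sketch of the webpack configuration, assuming the fontmin-webpack plugin is used (the glyph list is illustrative):

const FontminPlugin = require('fontmin-webpack')

module.exports = {
  plugins: [
    // Strip unused glyphs from the icon font so the generated font files stay small.
    new FontminPlugin({
      autodetect: true,   // pick up the glyphs referenced in the bundled CSS
      glyphs: ['\uf0c8'], // extra glyphs to keep (illustrative)
    }),
  ],
}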

[Image]

7. Make good use of caching and don’t load the same resources repeatedly

To avoid users having to request files every time they visit the website, we can control this behavior with the Expires or Cache-Control: max-age headers. Expires sets an absolute time; as long as that time has not passed, the browser does not request the file again and uses the cache directly. max-age is a relative time, and it is recommended over Expires.

But this will cause a problem, what should I do when the file is updated? How to notify the browser to request the file again?

You can update the resource link address quoted in the page to allow the browser to actively abandon the cache and load new resources.

The specific approach is to tie the resource URL to the file content, so that the URL changes only when the file content changes; this achieves precise cache control at the level of a single file. What is tied to the file content? A message digest (hash) of the file naturally comes to mind: the digest corresponds one-to-one with the file content, giving a basis for cache control that is accurate down to individual files.
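
As a hedged sketch of the server side, assuming an Express server that serves content-hashed files from a dist directory (paths and durations are illustrative):

const express = require('express')
const app = express()

// Files in dist/ carry a content hash in their names (e.g. app.3f6a9b.js),
// so they can be cached for a long time; a changed file gets a new URL anyway.
app.use(express.static('dist', { maxAge: '1y', immutable: true }))

// The HTML itself is not cached for long, so updated resource URLs are picked up.
app.get('/', (req, res) => {
  res.set('Cache-Control', 'no-cache')
  res.sendFile(`${__dirname}/dist/index.html`)
})

app.listen(3000)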

8. Compress files

Compressing files can reduce file download time and make user experience better.

Thanks to the development of webpack and node, it is now very convenient to compress files.

The following plugins can be used for compression in webpack:

  • JavaScript: UglifyPlugin
  • CSS: MiniCssExtractPlugin
  • HTML: HtmlWebpackPlugin

In fact, we can do even better by using gzip compression. The browser indicates support by including gzip in the Accept-Encoding request header; of course, the server must also support this feature.

gzip is currently the most popular and effective compression method. For example, the size of the app.js file I generated after building the project I developed with Vue is 1.4MB, and compressed with gzip is only 573KB, which is a reduction of nearly 60% in size.

Attached is how to use webpack and node to configure gzip.

download plugin

npm install compression-webpack-plugin --save-dev
npm install compression

webpack configuration

const CompressionPlugin = require('compression-webpack-plugin');

module.exports = {
  plugins: [new CompressionPlugin()],
}

node configuration

const compression = require('compression')
// use before other middleware
app.use(compression())

9. Picture optimization

(1). Image lazy loading

Do not set the real path for images up front; only load the real image when it enters the visible area of the browser. This is lazy loading. For websites with a lot of images, loading them all at once has a big impact on the user experience, so image lazy loading is necessary.

First, set up the image like this; it will not be loaded while it is not visible:

<img data-src="https://avatars0.githubusercontent.com/u/22117876?s=460&u=7bd8f32788df6988833da6bd155c3cfbebc68006&v=4">

When the image becomes visible, use JS to load it:

const img = document.querySelector('img')
img.src = img.dataset.src

Then the picture is loaded.
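
A minimal sketch that uses IntersectionObserver to do the swap automatically once an image scrolls into the viewport:

const observer = new IntersectionObserver((entries) => {
  entries.forEach((entry) => {
    if (entry.isIntersecting) {
      const img = entry.target
      img.src = img.dataset.src   // load the real image
      observer.unobserve(img)     // each image only needs to be handled once
    }
  })
})

document.querySelectorAll('img[data-src]').forEach((img) => observer.observe(img))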

(2). Responsive pictures

The advantage of responsive images is that the browser can automatically load the appropriate image according to the screen size.

Implemented with <picture>:

<picture>
 <source srcset="banner_w1000.jpg" media="(min-width: 801px)">
 <source srcset="banner_w800.jpg" media="(max-width: 800px)">
 <img src="banner_w800.jpg" alt="">
</picture>

Implemented with @media:

@media (min-width: 769px) {
 .bg {
  background-image: url(bg1080.jpg);
 }
}
@media (max-width: 768px) {
 .bg {
  background-image: url(bg768.jpg);
 }
}

(3). Resize the picture

For example, you have a 1920 * 1080 size picture, which is displayed to the user as a thumbnail, and the full picture is displayed when the user mouses over it. If the user never actually hovered the mouse over the thumbnail, then the time to download the picture was wasted.

Therefore, we can optimize with two images. Initially only the thumbnail is loaded, and the large image is loaded when the user hovers over it. Another option is to delay the large image: after all elements have loaded, manually set the src of the large image so that it downloads in the background.
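
A minimal sketch of the hover approach, assuming the thumbnail keeps the full-size URL in a hypothetical data-full attribute:

const thumb = document.querySelector('.thumb')

thumb.addEventListener('mouseenter', () => {
  const full = new Image()
  full.src = thumb.dataset.full                 // start downloading the large image
  full.onload = () => { thumb.src = full.src }  // swap it in once it has loaded
}, { once: true })                              // only do this on the first hover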

(4). Reduce picture quality

For example, for JPG images, 100% quality and 90% quality are usually indistinguishable, especially for background images. When I export background images from Photoshop, I export them as JPG and compress them to 60% quality, and there is basically no visible difference.

There are two compression methods: one is through the webpack plugin image-webpack-loader, and the other is through an online website.

The usage of webpack plugin image-webpack-loader

npm i -D image-webpack-loader

webpack configuration

{
  test: /\.(png|jpe?g|gif|svg)(\?.*)?$/,
  use: [
    {
      loader: 'url-loader',
      options: {
        limit: 10000, /* images below the limit (10000 bytes) are inlined as base64 references */
        name: utils.assetsPath('img/[name].[hash:7].[ext]')
      }
    },
    /* compress the images */
    {
      loader: 'image-webpack-loader',
      options: {
        bypassOnDebug: true,
      }
    }
  ]
}

(5). Use CSS3 effects instead of pictures as much as possible

Many images can be drawn with CSS effects (gradients, shadows, and so on). In these cases, CSS3 is the better choice, because the code size is usually only a fraction, or even a small percentage, of the image size.

(6). Use webp format pictures

The advantage of WebP is its better image compression algorithm, which yields smaller files with image quality that is indistinguishable to the naked eye. It also supports both lossless and lossy compression, alpha transparency, and animation, and its conversion results from JPEG and PNG are excellent, stable, and consistent.

Reference materials:

  • What are the advantages of WebP over PNG and JPG?

https://www.zhihu.com/question/27201061

10. Load code on demand through webpack, extract third-party library code, and reduce redundant code when transpiling from ES6 to ES5

Lazy loading, or on-demand loading, is a good way to optimize a web page or application. It splits your code at logical breakpoints, and then loads new code blocks immediately, or on demand, once certain operations in other code blocks have completed. This speeds up the initial load of the application and reduces its overall weight, because some code blocks may never be loaded at all.

Generate file names based on file content, and combine this with dynamic import() of components to achieve on-demand loading

This requirement can be achieved by configuring the filename attribute of output. There is a [contenthash] in the value option of the filename attribute, which will create a unique hash based on the content of the file. When the content of the file changes, [contenthash] also changes.

output: {
  filename: '[name].[contenthash].js',
  chunkFilename: '[name].[contenthash].js',
  path: path.resolve(__dirname, '../dist'),
},
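
For the on-demand loading part, a minimal sketch using dynamic import() with Vue Router (the paths are illustrative); webpack turns each dynamically imported component into its own chunk, fetched only when the route is visited:

const routes = [
  // Each dynamically imported component becomes a separate chunk,
  // named and hashed according to the output configuration above.
  { path: '/home', component: () => import('./views/Home.vue') },
  { path: '/user', component: () => import('./views/User.vue') },
]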

Extract third-party libraries

Since third-party libraries are generally stable and do not change often, extracting them into a separate chunk that can be cached long-term is a good choice. This uses the cacheGroups option of webpack 4's splitChunks.

optimization: {
  runtimeChunk: {
    name: 'manifest' // split webpack's runtime code into a separate chunk
  },
  splitChunks: {
    cacheGroups: {
      vendor: {
        name: 'chunk-vendors',
        test: /[\\/]node_modules[\\/]/,
        priority: -10,
        chunks: 'initial'
      },
      common: {
        name: 'chunk-common',
        minChunks: 2,
        priority: -20,
        chunks: 'initial',
        reuseExistingChunk: true
      }
    }
  }
},

  • test: controls which modules are matched by this cache group. If omitted, all modules are selected by default. It can be a RegExp, a String, or a Function.
  • priority: the extraction weight; the larger the number, the higher the priority. Since a module may satisfy the conditions of multiple cacheGroups, the one with the highest priority wins.
  • reuseExistingChunk: whether to reuse an existing chunk. If true, and the modules contained in the current chunk have already been extracted, a new chunk is not generated.
  • minChunks (default 1): the minimum number of chunks that must reference a module before it is split out.
  • chunks (default async): one of initial, async, or all.
  • name (the name of the resulting chunk): a string or a function (a function can generate a custom name based on conditions).

Reduce redundant code from ES6 to ES5

Babel's transformed code needs some helper functions to achieve the same functions as the original code, such as:

class Person {}

Will be converted to:

"use strict";

function _classCallCheck(instance, Constructor) {
  if (!(instance instanceof Constructor)) {
    throw new TypeError("Cannot call a class as a function");
  }
}

var Person = function Person() {
  _classCallCheck(this, Person);
};

Here _classCallCheck is a helper function. If classes are declared in many files, then there will be many such helper functions.

The @babel/runtime package declares all the helper functions that are needed, and @babel/plugin-transform-runtime imports the helper functions from the @babel/runtime package instead of injecting them into every file:

"use strict";

var _classCallCheck2 = require("@babel/runtime/helpers/classCallCheck");

var _classCallCheck3 = _interopRequireDefault(_classCallCheck2);

function _interopRequireDefault(obj) {
  return obj && obj.__esModule ? obj : { default: obj };
}

var Person = function Person() {
  (0, _classCallCheck3.default)(this, Person);
};

Here the helper function classCallCheck is no longer compiled into the file; instead, helpers/classCallCheck from @babel/runtime is referenced directly.

install

npm i -D @babel/plugin-transform-runtime @babel/runtime

Use in the .babelrc file

"plugins": [
        "@babel/plugin-transform-runtime"
]


11. Reduce redrawing and rearrangement

browser rendering process

  1. Parse HTML to generate DOM tree.
  2. Parse the CSS to generate a CSSOM rule tree.
  3. Parse JS, manipulate DOM tree and CSSOM rule tree.
  4. Combine the DOM tree with the CSSOM rule tree to generate a rendering tree.
  5. Traverse the rendering tree to start the layout, and calculate the position and size information of each node.
  6. The browser sends the data of all layers to the GPU, and the GPU composites the layers and displays them on the screen.

[Image]

Rearrangement (reflow)

Changing the position or size of a DOM element causes the browser to regenerate the render tree; this process is called rearrangement.

Redraw (repaint)

After the render tree is regenerated, each node of the render tree is drawn to the screen again; this process is called redrawing. Not all changes cause a rearrangement; for example, changing the font color only causes a redraw. Remember: a rearrangement causes a redraw, but a redraw does not cause a rearrangement.

Both rearrangement and redrawing operations are very expensive, because the JavaScript engine thread and the GUI rendering thread are mutually exclusive, and only one of them can work at the same time.

What operation will cause the rearrangement?

  • Add or remove visible DOM elements
  • Element position change
  • Element size change
  • Content change
  • Browser window size changed

How to reduce rearrangement and redrawing?

  • When using JavaScript to modify the style, it is best not to write the style directly, but to change the style by replacing the class.
  • If you want to perform a series of operations on a DOM element, you can take the element out of the document flow and bring it back once the modifications are complete. Hidden elements (display: none) or document fragments (DocumentFragment) both work well for this; see the sketch below.
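
A minimal sketch of the document-fragment approach: the list items are built off-document and inserted in one go, so the document is only reflowed once:

const fragment = document.createDocumentFragment()

for (let i = 0; i < 100; i++) {
  const li = document.createElement('li')
  li.textContent = `item ${i}`
  fragment.appendChild(li)   // no reflow: the fragment is not in the document
}

document.querySelector('ul').appendChild(fragment) // a single insertion, one reflow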

12. Use event delegation

Event delegation makes use of event bubbling: by specifying a single event handler, you can manage all events of a certain type. Most mouse events and keyboard events are suitable for the event delegation technique, and using event delegation saves memory.

<ul>
  <li>Apple</li>
  <li>Banana</li>
  <li>Pineapple</li>
</ul>

// good
document.querySelector('ul').onclick = (event) => {
  const target = event.target
  if (target.nodeName === 'LI') {
    console.log(target.innerHTML)
  }
}

// bad
document.querySelectorAll('li').forEach((e) => {
  e.onclick = function() {
    console.log(this.innerHTML)
  }
}) 

13. Pay attention to the locality of the program

A well-written computer program often has good locality: it tends to reference data items that are near recently referenced data items, or the recently referenced data items themselves. This tendency is called the principle of locality. Programs with good locality run faster than programs with poor locality.

Locality usually takes two different forms:

  • Time locality: In a program with good time locality, a memory location that has been referenced once is likely to be referenced multiple times in the near future.
  • Spatial locality: In a program with good spatial locality, if a memory location is referenced once, the program is likely to reference a nearby memory location in the near future.

Time locality example

function sum(arry) {
 let i, sum = 0
 let len = arry.length

 for (i = 0; i < len; i++) {
  sum += arry[i]
 }

 return sum
}

In this example, the variable sum is referenced in each loop iteration, so it has good temporal locality.

Example of spatial locality

Program with good spatial locality

// two-dimensional array
function sum1(arry, rows, cols) {
 let i, j, sum = 0

 for (i = 0; i < rows; i++) {
  for (j = 0; j < cols; j++) {
   sum += arry[i][j]
  }
 }
 return sum
}

Program with poor spatial locality

// two-dimensional array
function sum2(arry, rows, cols) {
 let i, j, sum = 0

 for (j = 0; j < cols; j++) {
  for (i = 0; i < rows; i++) {
   sum += arry[i][j]
  }
 }
 return sum
}

Take a look at the two spatial locality examples above. The way in the example to access each element of the array in order from each row is called a reference pattern with a step of 1. If you access every k elements in the array, it is called a reference pattern with a step length of k. Generally speaking, as the step size increases, the spatial locality decreases.

What is the difference between these two examples? The first example scans the array row by row: after finishing one row, it moves on to the next. The second example scans the array column by column: after reading one element in a row, it immediately reads the element in the same column of the next row.

The array is stored in memory in row-major order. As a result, the example that scans the array row by row gets a reference pattern with a step size of 1 and has good spatial locality, while the other example has a step size equal to the row length and very poor spatial locality.

Performance Testing

Operating environment:

  • cpu: i5-7400
  • Browser: chrome 70.0.3538.110

Each of the two spatial locality examples was run 10 times on a two-dimensional array of length 9000 (each sub-array also has length 9000) and the times (in milliseconds) were averaged. The results are as follows:

The examples used are the above two spatial locality examples

Step size is 1    Step size is 9000
124               2316

From the above test results, the execution time of an array with a step size of 1 is an order of magnitude faster than an array with a step size of 9000.

To sum up:

  • Programs that repeatedly refer to the same variable have good time locality
  • For a program with a reference pattern with a step size of k, the smaller the step size, the better the spatial locality; while the program that jumps around in memory with large steps will have poor spatial locality

14. If-else vs. switch

As the number of conditions increases, switch becomes preferable to if-else.

if (color == 'blue') {

} else if (color == 'yellow') {

} else if (color == 'white') {

} else if (color == 'black') {

} else if (color == 'green') {

} else if (color == 'orange') {

} else if (color == 'pink') {

}

switch (color) {
    case 'blue':

        break
    case 'yellow':

        break
    case 'white':

        break
    case 'black':

        break
    case 'green':

        break
    case 'orange':

        break
    case 'pink':

        break
}

In the case of the above, it is best to use a switch. Assuming that the color value is pink, the if-else statement needs to be judged 7 times, and the switch only needs to be judged once. In terms of readability, the switch statement is also better.

In terms of when to use which: when there are more than two condition values, switch is the better choice. However, if-else can do things that switch cannot; for example, when there are multiple compound judgment conditions, switch cannot be used.

15. Lookup Table

When there are too many conditional statements, using switch and if-else is not the best choice. At this time, try a lookup table. Lookup tables can be constructed using arrays and objects.

switch (index) {
    case '0':
        return result0
    case '1':
        return result1
    case '2':
        return result2
    case '3':
        return result3
    case '4':
        return result4
    case '5':
        return result5
    case '6':
        return result6
    case '7':
        return result7
    case '8':
        return result8
    case '9':
        return result9
    case '10':
        return result10
    case '11':
        return result11
}

You can convert this switch statement into a lookup table

const results = [result0,result1,result2,result3,result4,result5,result6,result7,result8,result9,result10,result11]

return results[index]

If the conditional statement is not a numeric value but a string, you can use the object to build a lookup table

const map = {
  red: result0,
  green: result1,
}

return map[color]

16. Avoid page lag

60fps and device refresh rate

The current screen refresh rate of most devices is 60 times per second. Therefore, if there is an animation or gradient effect on the page, or the user is scrolling the page, the rate at which the browser renders the animation or each frame of the page also needs to be consistent with the refresh rate of the device screen.

The budget for each frame is just over 16 milliseconds (1 second / 60 = 16.66 milliseconds). But in reality, the browser has housekeeping work to do, so all of your work needs to be completed within about 10 milliseconds. If this budget cannot be met, the frame rate drops and content judders on screen. This phenomenon is commonly called jank, and it has a negative impact on the user experience.

[Image]

Suppose you modify the DOM with JavaScript, trigger a style modification, go through rearrangement and redraw and finally draw on the screen. If the execution time of any one of these is too long, it will take too long to render this frame, and the average frame rate will drop. Assuming that this frame took 50 ms, then the frame rate at this time is 1s / 50ms = 20fps, and the page looks like a freeze.

For some long-running JavaScript, we can use timers to split and delay execution.

for (let i = 0, len = arry.length; i < len; i++) {
 process(arry[i])
}

If the above loop takes too long to run, because process() is expensive or the array has too many elements (or both), you can try splitting the work:

const todo = arry.concat()
setTimeout(function processChunk() {
  // process one item per timer tick, then re-schedule until the queue is empty
  process(todo.shift())
  if (todo.length) {
    setTimeout(processChunk, 25)
  } else {
    callback(arry)
  }
}, 25)

If you are interested in learning more, you can check Chapter 6 of High-Performance JavaScript and Chapter 3 of Efficient Front End: Web Efficient Programming and Optimization Practice.

17. Use requestAnimationFrame to achieve visual changes

From point 16, we know that the screen refresh rate of most devices is 60 times per second, which means the average time per frame is 16.66 milliseconds. When implementing animation with JavaScript, the best case is for the code to execute at the beginning of each frame, and the only way to guarantee that is to use requestAnimationFrame.

/**
 * If run as a requestAnimationFrame callback, this
 * will be run at the start of the frame.
 */
function updateScreen(time) {
  // Make visual updates here.
}

requestAnimationFrame(updateScreen);

If you use setTimeout or setInterval to implement animation, the callback function runs at some arbitrary point within the frame, possibly right at the end, which often causes dropped frames and stuttering.

[Image]

18. Use Web Workers

A Web Worker runs in a worker thread independent of the main thread, so it can perform tasks without interfering with the user interface. A worker can send messages to the JavaScript code that created it by posting them to an event handler specified by that code (and vice versa).

Web Workers are suitable for long-running scripts that process pure data or have nothing to do with the browser UI.

Creating a new worker is simple: specify the URI of a script for the worker thread to execute (main.js):

var myWorker = new Worker('worker.js');
// You can send messages to the worker with the postMessage() method and receive them via the onmessage event.
first.onchange = function() {
  myWorker.postMessage([first.value,second.value]);
  console.log('Message posted to worker');
}

second.onchange = function() {
  myWorker.postMessage([first.value,second.value]);
  console.log('Message posted to worker');
}

After receiving the message in the worker, we can write an event handling function code as a response (worker.js):

onmessage = function(e) {
  console.log('Message received from main script');
  var workerResult = 'Result: ' + (e.data[0] * e.data[1]);
  console.log('Posting message back to main script');
  postMessage(workerResult);
}

The onmessage processing function is executed immediately after receiving the message, and the message itself in the code is used as the data attribute of the event. Here we simply multiply these two numbers and use the postMessage() method again to pass the result back to the main thread.

Back to the main thread, we use onmessage again to respond to the message returned by the worker:

myWorker.onmessage = function(e) {
  result.textContent = e.data;
  console.log('Message received from worker');
}

Here we get the data of the message event and set it to the textContent of the result, so the user can directly see the result of the operation.

However, within a worker you cannot directly manipulate DOM nodes, nor can you use the default methods and properties of the window object. Still, many things under the window object are available, including data storage mechanisms such as WebSockets and IndexedDB, and the Data Store API specific to Firefox OS.

19. Using bit manipulation

The numbers in JavaScript are stored in 64-bit format using the IEEE-754 standard. But in bit operations, the number is converted to a signed 32-bit format. Even if conversion is required, bit operations are much faster than other mathematical operations and Boolean operations.

Modulo

Since the lowest bit of an even number is 0 and an odd number is 1, the modulo operation can be replaced by bit operations.

if (value % 2) {
 // odd
} else {
 // even
}

// bit operation
if (value & 1) {
 // odd
} else {
 // even
}

Rounding

~~10.12 // 10
~~10 // 10
~~'1.5' // 1
~~undefined // 0
~~null // 0

Bit mask

const a = 1
const b = 2
const c = 4
const options = a | b | c

By defining these options, you can use bitwise AND operation to determine whether a/b/c is in the options.

// is option b among the options?
if (b & options) {
 ...
}

20. Don't overwrite native methods

No matter how optimized your JavaScript code is, it can't compare to native methods. Because the native method is written in a low-level language (C/C++), and it is compiled into machine code and becomes part of the browser. When native methods are available, try to use them, especially mathematical operations and DOM manipulation.

21. Reduce the complexity of CSS selectors

(1). The browser reads the selector, and the principle is to read from the right to the left of the selector.

See an example

#block .text p {
 color: red;
}

  1. Find all p elements.
  2. Check whether the elements found in step 1 have a parent element with the class name text.
  3. Check whether the elements found in step 2 have a parent element whose id is block.

(2). CSS selector priority

Inline > ID selector > Class selector > Tag selector

A conclusion can be drawn based on the above two information.

  1. The shorter the selector, the better.
  2. Try to use high-priority selectors, such as ID and class selectors.
  3. Avoid using the wildcard character *.

Finally, based on the information I found, there is little need to optimize CSS selectors, because the performance difference between the fastest and the slowest selectors is very small.

22. Use flexbox instead of the earlier layout model

With early CSS layout, we could position elements absolutely, relatively, or with floats. Now we have a new layout method, flexbox, which has an advantage over the earlier layout methods: better performance.

The screenshot below shows the layout overhead of using floats on 1300 boxes:

[Image]

Then we use flexbox to reproduce this example:

[Image]

Now, for the same number of elements and the same visual result, the layout takes much less time (3.5 milliseconds for flexbox versus 14 milliseconds for floats in this example).

However, flexbox compatibility is still a bit problematic, not all browsers support it, so use it with caution.

Compatibility of various browsers:

  • Chrome 29+
  • Firefox 28+
  • Internet Explorer 11
  • Opera 17+
  • Safari 6.1+ (prefixed with -webkit-)
  • Android 4.4+
  • iOS 7.1+ (prefixed with -webkit-)

23. Use transform and opacity property changes to achieve animation

In CSS, changes to the transform and opacity properties do not trigger rearrangement or redrawing; they are properties that can be handled by the compositor alone.

[Image]
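
A minimal sketch (the class names are illustrative): the animation only changes transform and opacity, so it can be handled by the compositor:

.toast {
  transition: transform 0.3s ease, opacity 0.3s ease;
}

.toast.is-hidden {
  /* moving with transform instead of top/left avoids triggering layout */
  transform: translateY(20px);
  opacity: 0;
}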

Reference materials:

  • Use transform and opacity property changes to implement animation

24. Reasonable use of rules to avoid over-optimization

Performance optimization is mainly divided into two categories:

  1. Load-time optimization
  2. Runtime optimization

Among the above 23 suggestions, the first 10 suggestions belong to load-time optimization, and the last 13 suggestions belong to runtime optimization. Generally speaking, it is not necessary to use all of the 23 performance optimization rules. It is best to make targeted adjustments according to the user group of the website, which saves energy and time.

Before solving the problem, you must first find out the problem, otherwise there is no way to start. So before doing performance optimization, it is best to investigate the loading performance and running performance of the website.

Check loading performance

The loading performance of a website mainly depends on the white screen time and the first screen time.

  • White screen time: the time from entering the URL until the page starts to display content.
  • First screen time: the time from entering the URL until the page is completely rendered.

Place the following script just before </head> to get the white screen time.

<script>
  // White screen time: measured once the head has been parsed and the page is about to display content
  console.log('white screen time:', new Date() - performance.timing.navigationStart)
</script>

In the window.onload event, execute new Date() - performance.timing.navigationStart to get the first screen time.
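
For example:

window.onload = () => {
  // First screen time: measured once the page and its resources have finished loading
  console.log('first screen time:', new Date() - performance.timing.navigationStart)
}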

Check running performance

With chrome's developer tools, we can check the performance of the website at runtime.

