Contents
- Performance optimization index
  - 1. Overall operating performance
  - 2. Website accessibility
  - 3. Does the website apply best practice strategies?
    - 1) `<a>` links using target="_blank" without rel="noopener noreferrer" are a security risk
    - 2) Check the browser console for warnings and errors
    - 3) Do not mix http and https protocol addresses
    - 4) Avoid using AppCache
    - 5) Avoid using document.write()
    - 6) Avoid using mutation events
    - 7) Avoid using Web SQL
    - 8) Avoid loading an oversized DOM tree
    - 9) Allow users to paste passwords
  - 4. Website search engine optimization SEO
- Performance test tool lighthouse
- Website optimization based on the hexo framework
  - 1. Optimize resource loading time
    - ➣ Use defer/async to download script resources asynchronously
    - ➣ Use the async function to asynchronously load external resources
    - ➣ Use the browser onfocus event to delay external resource loading
    - ➣ Use preload/prefetch/preconnect for preload optimization
    - ➣ Use hexo plug-ins to compress code files and image files
    - ➣ Write the hexo-img-lazyload plug-in to add image lazy loading
    - ➣ Use the IntersectionObserver API to lazily load other resources
    - ➣ Use a CDN to load external dependency scripts
    - ➣ Use APlayer instead of an iframe to load NetEase Cloud Music
- Reference
- Conclusion
Preface
1. What is hexo?
Hexo is a static site generator: it lets a website serve and query its data without relying on a back-end service. For comparison, I once used Node.js as the backend for a blog site. If you implement the backend yourself, you have to deal with database storage, front-end interaction and back-end server deployment; the whole process is complicated, although it can be a good learning exercise for a front-end developer early on. Hexo simplifies this by compiling both data storage and data retrieval ahead of time into local front-end static resources. For example, a blog usually needs pagination to fetch posts. Under the traditional development model, fetching the next page is a request initiated by the front-end script and handled by the back-end server; with hexo, the whole process is completed locally at build time, because hexo indexes all static resources locally.
Using hexo, writers usually only need to focus on writing articles in markdown format. The rest of the process of compiling, building and publishing the website is handled by the framework, and all website content is packaged into static html/css/js files. Hexo supports custom plug-ins and has a plug-in community; writers who also have front-end skills can publish their own plug-ins to the community as open source.
2. What is optimization?
The performance we usually talk about tends to focus on evaluating the running speed of the website: how quickly static resources and scripts are fetched, and whether the website UI runs smoothly. In fact, optimization in a broad sense should include website performance optimization, website accessibility optimization, website SEO optimization, website best practices, and so on.
Performance optimization index
1. Overall operating performance
- FCP (First Contentful Paint): The time from when the user initiates a website request to when the browser first starts to render website data. The data counted for the first render includes the page's text, images, HTML DOM structure, etc., but does not include content inside iframes. This indicator is usually used to measure how quickly the client and the server establish network communication for the first time.
- TTI (Time To Interactive): The time it takes for the user to start navigating to the website to the page becoming fully interactive. The measurement criteria for website interaction are: the website displays the actual available content, the webpage events of the visible elements on the interface have been successfully bound (such as clicks, drags, etc.), and the feedback time of the interaction between the user and the page is less than 50 ms.
- SI (Speed Index): Measures the speed of visual display of content during page loading. Generally speaking, it is the drawing and rendering speed of website interface elements. If you use the lighthouse measurement tool, it will capture multiple picture frames of the loading page in the browser, and then calculate the visual rendering progress between frames.
- TBT (Total Blocking Time): Measures the time from when the page is first rendered (FCP) to when the page is actually interactive (TTI). Usually when we visit a website, after the website is presented as a whole, there is a short period of time that we cannot interact with the interface, such as mouse clicks, keyboard keys, etc. During this time, the browser is loading and executing scripts and styles.
- LCP (Largest Contentful Paint): Measures the time required for the largest content element in the viewport to be drawn to the screen, usually including the entire process of downloading, parsing and rendering that element.
- CLS (Cumulative Layout Shift): A numerical indicator that measures the overall layout jitter when a website is loaded. If the user interface of a website jitters and flickers many times during the loading process, it may cause mild discomfort to the user. Therefore, the number of reflows and redraws of the website should be minimized.
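Besides running Lighthouse, most of these metrics can also be sampled at runtime with the standard PerformanceObserver API. A minimal sketch, assuming a browser that supports the largest-contentful-paint and layout-shift entry types:
```js
// Observe LCP candidates: the last entry reported is the current LCP element.
new PerformanceObserver(function (list) {
  var entries = list.getEntries();
  var last = entries[entries.length - 1];
  console.log('LCP:', last.startTime, last.element);
}).observe({ type: 'largest-contentful-paint', buffered: true });

// Accumulate CLS: sum layout-shift values not caused by recent user input.
var clsValue = 0;
new PerformanceObserver(function (list) {
  list.getEntries().forEach(function (entry) {
    if (!entry.hadRecentInput) clsValue += entry.value;
  });
  console.log('CLS so far:', clsValue);
}).observe({ type: 'layout-shift', buffered: true });
```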
2. Website accessibility
- The contrast between the background color of the web page and the text foreground of the website should not be too low, otherwise it will affect the user's reading.
- The link tag `<a>` should preferably contain descriptive text for the link, for example: `<a href="https://github.com/nojsja">nojsja's github personal page</a>`.
- The `lang` attribute of the html element specifies the current locale.
- Correct html semantic tags allow the keyboard and screen readers to work properly. The structure of a web page can usually be described with semantic tags as:
<html lang="en">
<head>
<title>Document title</title>
<meta charset="utf-8">
</head>
<body>
<a class="skip-link" href="#maincontent">Skip to main</a>
<h1>Page title</h1>
<nav>
<ul>
<li>
<a href="https://google.com">Nav link</a>
</li>
</ul>
</nav>
<header>header</header>
<main id="maincontent">
<section>
<h2>Section heading</h2>
<p>text info</p>
<h3>Sub-section heading</h3>
<p>text info</p>
</section>
</main>
<footer>footer</footer>
</body>
</html>
- Interface element ids should be unique.
- Declare the `alt` attribute on img tags. It specifies the alternative text shown in place of the image when the image cannot be displayed or the user has disabled image display.
- Declare a `label` tag inside form elements so that screen readers work correctly.
- Declare the `title` attribute on iframe elements to describe their content and help screen readers.
- Use aria accessibility attributes and tags, related reference >> aria reference.
- Add an `alt` attribute declaration to input[type=image] and object tags:
<input type="image" alt="Sign in" src="./sign-in-button.png">
<object alt="report.pdf type="application/pdf" data="/report.pdf">
Annual report.
</object>
- Elements that need to be focusable with the tab key can declare the `tabindex` attribute, and focus switches between them in turn as tab is pressed. In keyboard navigation order, elements with a value of 0, an invalid value, or no tabindex at all come after elements with a positive tabindex value (see the sketch after this item):
1) A negative tabindex (usually tabindex="-1") means the element is focusable but cannot be reached through keyboard navigation; this is very useful when implementing keyboard navigation inside a page widget with JS.
2) tabindex="0" means the element is focusable and can be reached through keyboard navigation; its relative order is determined by its position in the current DOM structure.
3) A positive tabindex means the element is focusable and can be reached through keyboard navigation; elements receive focus later as their tabindex value increases. If several elements share the same tabindex, their relative order follows their order in the current DOM.
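A minimal sketch of the JS widget case described in 1), assuming a list-like widget whose items carry tabindex="-1" (the ids and key handling are illustrative):
```js
// Roving focus: the tab key skips tabindex="-1" items, and arrow keys move
// focus between them programmatically with element.focus().
var widget = document.querySelector('#menu'); // hypothetical widget container
var items = widget.querySelectorAll('[tabindex="-1"]');
var current = 0;

widget.addEventListener('keydown', function (e) {
  if (e.key !== 'ArrowDown' && e.key !== 'ArrowUp') return;
  e.preventDefault();
  current = e.key === 'ArrowDown'
    ? (current + 1) % items.length
    : (current - 1 + items.length) % items.length;
  items[current].focus();
});
```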
- Use `th` and `scope` in table elements so that row headers and column headers correspond one-to-one with their data fields:
<table>
<caption>My marathon training log</caption>
<thead>
<tr>
<th scope="col">Week</th>
<th scope="col">Total miles</th>
<th scope="col">Longest run</th>
</tr>
</thead>
<tbody>
<tr>
<th scope="row">1</th>
<td>14</td>
<td>5</td>
</tr>
<tr>
<th scope="row">2</th>
<td>16</td>
<td>6</td>
</tr>
</tbody>
</table>
- The video element should specify `track` subtitle resources for the convenience of hearing-impaired users (subtitle resource files are required):
<video width="300" height="200">
<source src="marathonFinishLine.mp4" type="video/mp4">
<track src="captions_en.vtt" kind="captions" srclang="en" label="english_captions">
<track src="audio_desc_en.vtt" kind="descriptions" srclang="en" label="english_description">
<track src="captions_es.vtt" kind="captions" srclang="es" label="spanish_captions">
<track src="audio_desc_es.vtt" kind="descriptions" srclang="es" label="spanish_description">
</video>
- li list tags should be used inside a `ul` or `ol` container.
- Heading tags should be declared strictly in ascending order, together with `section` or other elements (such as p tags), to correctly reflect the structure of the page content:
<h1>Page title</h1>
<section>
<h2>Section Heading</h2>
…
<h3>Sub-section Heading</h3>
</section>
- Use `<meta charset="UTF-8">` to specify the website character encoding.
- The aspect ratio of the image resource referenced by an img element should match the aspect ratio at which it is displayed, otherwise the image may appear distorted.
- Add `<!DOCTYPE html>` to prevent the browser from rendering the page abnormally.
3. Does the website apply best practice strategies?
> 1) `<a>` links that use target="_blank" without declaring rel="noopener noreferrer" are a security risk.
When a page links to another page using target="_blank", the new page runs in the same process as the old page. If the new page is executing expensive JavaScript, the performance of the old page may suffer. The new page can also access the old page through window.opener; for example, it can use window.opener.location = url to navigate the old page to a different URL.
> 2) Check whether there are warnings and errors in the browser console, and use them to locate and solve problems.
> 3) Do not mix http and https protocol addresses
Browsers have gradually begun to prohibit mixed-protocol resources, for example a page served over https loading resources whose addresses start with http. Depending on the browser, one of the following may happen:
- The mixed content is loaded, but a warning appears;
- The mixed content is not loaded, and a blank area is displayed;
- Before the mixed content is loaded, a prompt appears asking whether to "display" or "block" it because of the security risk.
The following optimization methods can be considered:
- Host the external resources that use a different protocol on your own server;
- For websites deployed over https, declare `<meta http-equiv="Content-Security-Policy" content="upgrade-insecure-requests">` on the page to upgrade http requests to https;
- For websites deployed over https, adding the response header `header("Content-Security-Policy: upgrade-insecure-requests")` on the server side achieves the same effect;
- For websites that support both http and https access, consider using a protocol-relative URL instead of spelling out the protocol: `<script src="//path/to/js">`. The browser switches dynamically according to the current page protocol when sending the request;
- Similar to protocol-relative URLs, relative URLs also work, but they increase the coupling between resources: `<script src="./path/to/js"></script>`
> 4) Avoid using AppCache
AppCache has been deprecated. Consider using service worker's Cache API.
> 5) Avoid using document.write()
For users with slower Internet speeds (2G, 3G or slower WLAN), dynamic injection of external scripts through document.write() will delay the display of page content for tens of seconds.
> 6) Avoid using mutation events
The following mutation events can harm performance and have been deprecated in the DOM event specification:
- DOMAttrModified
- DOMAttributeNameChanged
- DOMCharacterDataModified
- DOMElementNameChanged
- DOMNodeInserted
- DOMNodeInsertedIntoDocument
- DOMNodeRemoved
- DOMNodeRemovedFromDocument
- DOMSubtreeModified
It is recommended to use MutationObserver instead
> 7) Avoid using Web SQL
It is recommended to replace with IndexedDB
> 8) Avoid loading too large DOM tree
Large DOM trees can reduce page performance in several ways:
- Network efficiency and load performance. If your server sends a large DOM tree, you may ship a lot of unnecessary bytes. This may also slow down the page load time, because the browser may parse many nodes that are not displayed on the screen.
- Runtime performance. When the user and the script interact with the page, the browser must constantly recalculate the position and style of the node. The combination of a large DOM tree and complex style rules can severely slow down rendering.
- Memory performance. If you use a general query selector such as document.querySelectorAll('li') and keep references to a large number of nodes, you may overwhelm the memory capabilities of the user's device.
An optimal DOM tree:
- There are less than 1500 nodes in total.
- The maximum depth is 32 nodes.
- There is no parent node with more than 60 child nodes.
- Generally speaking, you only need to find a way to create a DOM node when you need it, and destroy it when you no longer need it.
> 9) Allow users to paste passwords
Password pasting improves security because it enables users to use a password manager. Password managers usually generate strong passwords for users, store the passwords securely, and then automatically paste them into the password field when the user needs to log in.
Delete the code that prevents users from pasting into the password field. Setting an event listener breakpoint on Clipboard > paste in DevTools is a quick way to find the offending code. For example, the following code prevents pasting into the password field:
var input = document.querySelector('input');
input.addEventListener('paste', (e) => {
e.preventDefault();
});
4. Website search engine optimization SEO
- Add a viewport declaration `<meta name="viewport">` and specify width=device-width to optimize display on mobile devices.
- Add a `title` element to the document so that screen readers and search engines can correctly identify the content of the website.
- Add a `meta` description tag to describe the website content: `<meta name="description" content="Put your description here.">`.
- Give link tags descriptive text, such as `<a href="videos.html">basketball videos</a>`, to clearly convey what the hyperlink points to.
- Use a real href link address so that search engines can follow the actual URL. The following is a counter-example: `<a onclick="goto('https://example.com')">`.
- Do not use the meta tag to forbid search engine crawlers from indexing your pages. The following is a counter-example: `<meta name="robots" content="noindex"/>`. On the other hand, specific crawlers can be excluded individually: `<meta name="AdsBot-Google" content="noindex"/>`.
- img elements should use alt text with a clear intent and meaning to describe the image, avoiding non-specific words such as "chart", "image", "diagram".
- Do not use plug-ins to display your content, that is, avoid introducing resources through `embed`, `object` or `applet`.
- Configure `robots.txt` correctly in the root directory of the website. It describes which content of the site should not be fetched by search engines and which can be. Usually background files (such as css, js) and user privacy files do not need to be crawled, and some files are crawled frequently by spiders even though they are unimportant, so robots.txt can be used to block them.
An example:
User-agent: *
Allow: /
Disallow: /wp-admin/
Disallow: /wp-includes/
Disallow: /wp-content/plugins/
Disallow: /?r=*
- Submit the sitemap sitemap.xml to search engines. XML is a format designed to be read by machines, and sitemap.xml follows a specification that search engines understand: the website owner uses it to create a directory file containing all the pages of the site and provides it to search engine crawlers. It works like a map, letting search engines know what pages exist on the site.
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
<url>
<loc>https://nojsja.github.io/blogs/2019/10/26/63567fa4.html/</loc>
<lastmod>2021-04-29T10:02:04.853Z</lastmod>
</url>
<url>
<loc>https://nojsja.github.io/blogs/2020/03/26/7507699.html/</loc>
<lastmod>2021-04-29T10:01:30.661Z</lastmod>
</url>
</urlset>
Performance testing tool lighthouse
The performance indicators mentioned above can be analyzed and reported automatically by the lighthouse performance monitoring tool. Newer versions of chrome and the chromium-based edge browser ship with it built in; on older versions of chrome it can be installed by searching for the plugin in the store. After installation, press F12 to open the developer tools and switch to the lighthouse panel to use it directly:
As shown in the figure, you can choose the test items and the target platform (desktop/mobile), then click Generate Report to start the automated test; an analysis result page opens after the test is complete:
Result interface:
So far we can make an overall assessment of the website. The Performance, Accessibility, Best Practices and SEO scores in the figure above correspond to the overall performance, accessibility, best practices and search engine optimization indicators discussed earlier. We can click each item to view the optimization suggestions given by the tool:
The test result largely depends on the loading speed of your http static resource server. For example, without a proxy, hosting static resources on github pages is slightly slower than the domestic gitee pages service, and some resources may fail to load. Users in China can therefore use gitee pages instead of github pages, but gitee's non-paying users do not get automatic build and deployment; they need the github action mentioned below to log in automatically and trigger the build and deployment.
Note: plug-ins installed in some browsers can affect the test results, because they may initiate requests that load extra scripts. In this case you can install lighthouse with the npm package manager (npm install -g lighthouse) and start the test from the command line: lighthouse https://nojsja.gitee.io/blogs --view --preset=desktop --output-path='/home/nojsja/Desktop/lighthouse.html'. A lighthouse.html report that can be opened directly in the browser is generated at the specified path for performance troubleshooting.
Website optimization based on the hexo framework
A previously written article, "Detailed Explanation of Front-end Performance Optimization Techniques (1)", already covers many aspects of front-end performance optimization in detail. This article will not list every point; it only explains the optimizations actually applied in the blog, roughly divided into these aspects:
- Optimize resource loading time
- Optimize interface performance
- Website best practices
- Website SEO optimization
1. Optimize resource loading time
Common load time optimization methods include the following:
- Improve the ability of parallel loading of web resources
- Lazy loading of unnecessary external resources
- Reduce fragmented external resource requests and merge small files
- Increase website bandwidth
➣ Use defer/async to download script resources asynchronously
When the HTML parser encounters a declared <script> during parsing, the script is downloaded and executed immediately, which often delays the parsing of the rest of the page and results in a white screen. An older optimization is to put scripts at the end of the HTML document; this solves the white-screen problem, but when the DOM structure is complex and lengthy it still delays the download and execution of interface scripts. The newer script tag attributes async and defer solve these problems:
- defer scripts: an external <script> that declares the defer attribute does not block HTML parsing and rendering while it downloads, and starts executing once HTML parsing is complete and the document can actually be operated on. Deferred scripts execute in the order in which they are declared, and the page's DOMContentLoaded event fires after they have finished executing.
- async scripts: an external <script> that declares the async attribute does not block HTML parsing and rendering while it downloads. The download and execution of each script is completely independent, and execution starts as soon as the download finishes, so the execution order is not fixed and has no relationship with the DOMContentLoaded event.
My blog has external dependencies such as Bootstrap that need to stay at the front in their declared order, because inline <script> code that executes synchronously depends on them; once asynchronous loading is used, that ordering can no longer be guaranteed, so the defer/async attributes cannot be used to optimize these scripts.
Usually async/defer is used to optimize the loading of scripts that independent sub-components rely on, such as the script used for in-page navigation in a blog post. Its execution order is not constrained by anything else, so it can be loaded independently with the async attribute. However, the navigation bar DOM element used by the script must be declared before the position where the script is introduced, otherwise the DOM element may not be rendered yet when the script executes, causing script errors.
➣ Use async function to load external resources asynchronously
The async function below creates a link/script tag from the given url and appends it to the <head> tag to load an external resource dynamically; when a callback is provided, it listens for the resource to finish loading and then executes the callback. Note that, unlike a directly declared <script>, this method does not block the interface: both the download and the execution of the script are asynchronous.
The blog often uses it to load external dependency libraries programmatically in special situations and initialize them in the callback. For example, my blog has a music player component: when the un-initialized DOM element containing the component scrolls into the visible area of the page, async requests the player's js script from the server, and the music player is initialized in the callback after loading.
/**
 * async [asynchronous resource loading]
 * @author nojsja
 * @param {String} u [resource url]
 * @param {Function} c [callback function]
 * @param {String} tag [resource type to load: link | script]
 * @param {Boolean|String} async [whether to declare the async/defer attribute when loading a script resource]
 */
function async(u, c, tag, async) {
var head = document.head ||
document.getElementsByTagName('head')[0] ||
document.documentElement;
var d = document, t = tag || 'script',
o = d.createElement(t),
s = d.getElementsByTagName(t)[0];
async = ['async', 'defer'].includes(async) ? async : !!async;
switch(t) {
case 'script':
o.src = u;
if (async) o[async === true ? 'async' : async] = true; // set the async or defer attribute
break;
case 'link':
o.type = "text/css";
o.href = u;
o.rel = "stylesheet";
break;
default:
o.src = u;
break;
}
/* callback */
if (c) {
if (o.readyState) {//IE
o.onreadystatechange = function (e) {
if (o.readyState == "loaded"
|| o.readyState == "complete") {
o.onreadystatechange = null;
c(null, e);
}
};
} else {// other browsers
o.onload = function (e) {
c(null, e);
};
}
}
s.parentNode.insertBefore(o, head.firstChild);
}
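A usage sketch of this helper, roughly how the blog lazily loads the player script (the CDN url and init function below are placeholders for illustration):
```js
// Download the player script asynchronously, then initialize it in the callback.
async('https://cdn.example.com/APlayer.min.js', function (err, e) {
  initMusicPlayer(); // hypothetical function that creates the APlayer instance
}, 'script', 'async');
```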
➣ Use browser onfocus event to delay external resource loading
Some external resources are only needed after a certain user interaction has happened; triggering the load at that point is another form of lazy-loading optimization.
For example, the navigation bar on the right side of my blog contains a search box for searching blog posts. The search works by querying a static XML resource index generated ahead of time. As articles and content accumulate, this XML file grows large; downloading it when the page first loads would occupy bandwidth and add a network request. The asynchronous download of the XML is therefore triggered only when the user clicks into the search box and focuses it. To avoid hurting the user experience, a loading indicator can be shown while the index downloads, as sketched below.
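A minimal sketch of this idea; the search box selector, index path and index-building function are assumptions for illustration:
```js
var searchInput = document.querySelector('#search-input'); // hypothetical search box
var indexLoaded = false;

searchInput.addEventListener('focus', function () {
  if (indexLoaded) return;                    // fetch the index only once
  indexLoaded = true;
  searchInput.placeholder = 'Loading index…'; // simple loading hint
  var xhr = new XMLHttpRequest();
  xhr.open('GET', '/blogs/search.xml', true); // hypothetical index path
  xhr.onload = function () {
    searchInput.placeholder = 'Search…';
    buildSearchIndex(xhr.responseXML);        // hypothetical: parse XML into an index
  };
  xhr.send();
});
```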
➣ Use preload/prefetch/preconnect for preload optimization
- preload is used to preload resources required by the current page, such as images, CSS, JavaScript and font files. Its loading priority is higher than prefetch, and it does not block the window onload event. The blog uses it to preload the fonts referenced in css: `<link href="/blogs/fonts/fontawesome-webfont.woff2?v=4.3.0" rel="preload" as="font">`. Different resource types need a different `as` value, and for cross-origin loading the `crossorigin` attribute must also be declared.
- prefetch is a low-priority resource hint: once the page has loaded, the browser starts downloading the prefetched resources and stores them in its cache. prefetch covers three kinds of preloading requests: link, DNS and prerendering. A link prefetch such as `<link rel="prefetch" href="/path/to/pic.png">` lets the browser fetch the resource and keep it in the cache; DNS prefetch lets the browser resolve domain names in the background while the user browses the page; prerendering is similar to prefetch in that both optimize future requests for the next page, but prerender renders the entire page in the background, so it should be used with care as it may waste network bandwidth.
- preconnect lets the browser perform some work before an HTTP request is actually sent to the server, including DNS resolution, TLS negotiation and the TCP handshake, which removes round-trip latency and saves time for users. For example, the blog pre-connects to the busuanzi visitor-counter service: `<link href="http://busuanzi.ibruce.info" rel="preconnect" crossorigin>`.
➣ Use the hexo plugin to compress code files and image files
Compressing static resources is another way to save network bandwidth and improve request response speed. It is usually configured in an engineered way rather than by compressing every image manually. My blog uses the compression plug-in hexo-all-minifier to speed up access by compressing HTML, CSS, JS and images.
installation:
npm install hexo-all-minifier --save
Enable it in the _config.yml configuration file:
# ---- code and asset compression
html_minifier:
  enable: true
  exclude:
css_minifier:
  enable: true
  exclude:
    - '*.min.css'
js_minifier:
  enable: true
  mangle: true
  compress: true
  exclude:
    - '*.min.js'
image_minifier:
  enable: true
  interlaced: false
  multipass: false
  optimizationLevel: 2 # compression level
  pngquant: false
  progressive: true # enable progressive image compression
Compressing resources costs time and CPU. Consider disabling these plug-ins in the development environment to speed up its startup, for example by specifying a separate _config.dev.yml in which all the plug-ins above are turned off. See the scripts field declared in package.json:
{
...
"scripts": {
"prestart": "hexo clean --config config.dev.yml; hexo generate --config config.dev.yml",
"prestart:prod": "hexo clean; hexo generate",
"predeploy": "hexo clean; hexo generate",
"start": "hexo generate --config config.dev.yml; hexo server --config config.dev.yml",
"start:prod": "hexo generate --config config.dev.yml; hexo server",
"performance:prod": "lighthouse https://nojsja.gitee.io/blogs --view --preset=desktop --output-path='/home/nojsja/Desktop/lighthouse.html'",
"performance": "lighthouse http://localhost:4000/blogs --view --preset=desktop --output-path='/home/nojsja/Desktop/lighthouse.html'",
"deploy": "hexo deploy"
}
}
➣ Write the hexo-img-lazyload plug-in to increase the image lazy loading feature
To learn hexo's plug-in system while optimizing the blog, I wrote an image lazy-loading plug-in based on the IntersectionObserver API: hexo-img-lazyload, which can be installed with npm: npm i hexo-img-lazyload.
Effect preview:
The main principle of the plug-in is to listen to a hook event of the blog build process, obtain the generated code as a string, globally match native image declarations such as `<img src="path/to/xx.jpg">` with a regular expression, and replace them with something like `<img src="path/to/loading" data-src="path/to/xx.jpg">`.
function lazyProcessor(content, replacement) {
return content.replace(/<img(.*?)src="(.*?)"(.*?)>/gi, function (str, p1, p2, p3) {
if (/data-loaded/gi.test(str)) {
return str;
}
if (/no-lazy/gi.test(str)) {
return str;
}
return `<img ${p1} src="${emptyStr}" lazyload data-loading="${replacement}" data-src="${p2}" ${p3}>`;
});
}
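The replacement is hooked into the build roughly as follows; this is a minimal sketch of registering lazyProcessor on a hexo render filter (the placeholder image path is an assumption, not copied from the plug-in source):
```js
// Run lazyProcessor over every rendered HTML page so that <img> tags receive
// the data-loading / data-src attributes before the static files are written.
var loadingImg = '/blogs/images/loading.gif'; // hypothetical placeholder image

hexo.extend.filter.register('after_render:html', function (content) {
  return lazyProcessor(content, loadingImg);
});
```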
After the replacement, we also need to use hexo's code injection feature to inject our own script into every generated page.
hexo code injection:
/* registry scroll listener */
hexo.extend.injector.register('body_end', function() {
const script = `
<script>
${observerStr}
</script>`;
return script;
}, injectionType)
The injected code watches whether an image element waiting to be loaded enters the visible area and then loads it dynamically. It uses the IntersectionObserver API instead of the window.onscroll event: the former performs better, because the browser tracks the position changes of all observed elements in a unified way and dispatches the intersection results:
(function() {
/* avoid garbage collection */
window.hexoLoadingImages = window.hexoLoadingImages || {};
function query(selector) {
return document.querySelectorAll(selector);
}
/* registry listener */
if (window.IntersectionObserver) {
var observer = new IntersectionObserver(function (entries) {
entries.forEach(function (entry) {
// in view port
if (entry.isIntersecting) {
observer.unobserve(entry.target);
// proxy image
var img = new Image();
var imgId = "_img_" + Math.random();
window.hexoLoadingImages[imgId] = img;
img.onload = function() {
entry.target.src = entry.target.getAttribute('data-src');
window.hexoLoadingImages[imgId] = null;
};
img.onerror = function() {
window.hexoLoadingImages[imgId] = null;
}
entry.target.src = entry.target.getAttribute('data-loading');
img.src = entry.target.getAttribute('data-src');
}
});
});
query('img[lazyload]').forEach(function (item) {
observer.observe(item);
});
} else {
/* fallback */
query('img[lazyload]').forEach(function (img) {
img.src = img.getAttribute('data-src');
});
}
}).bind(window)();
➣ Use IntersectionObserver API to lazily load other resources
The IntersectionObserver API has become the main lazy-loading mechanism in my blog because of its better performance. I also use it to optimize the loading of the Valine comment component. The comment component sits at the bottom of each blog post, so there is no need to load its resources when the article page first loads: IntersectionObserver watches whether the comment container enters the viewport, then the async helper downloads the script asynchronously and the comment system is initialized in the callback.
On the other hand, the music player APlayer at the bottom of each article uses a similar loading strategy, as sketched below. It is fair to say this optimization has been tried and tested!
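A minimal sketch of this pattern for the Valine case, combining IntersectionObserver with the async helper above (the container id, script url and initialization options are assumptions for illustration):
```js
var commentEl = document.querySelector('#vcomments'); // hypothetical comment container

if (commentEl && window.IntersectionObserver) {
  var observer = new IntersectionObserver(function (entries) {
    entries.forEach(function (entry) {
      if (!entry.isIntersecting) return;
      observer.unobserve(entry.target); // load only once
      async('https://unpkg.com/valine/dist/Valine.min.js', function () {
        new Valine({ el: '#vcomments', appId: 'xxx', appKey: 'xxx' }); // placeholder keys
      }, 'script', 'async');
    });
  });
  observer.observe(commentEl);
}
```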
➣ Use CDN to load external dependency scripts
CDN stands for Content Delivery Network. CDN providers cache static resources on high-performance acceleration nodes all over the country. When a user accesses a resource, the CDN system redirects the request in real time to the service node closest to the user, based on comprehensive information such as network traffic, the load on each node, the distance to the user and the response time, so that content is delivered faster and more stably and the first request is answered sooner.
Some public external libraries used in the blog, such as Bootstrap and jQuery, are loaded from external CDN addresses. On the one hand this reduces the bandwidth consumed by the main site; on the other hand the CDN itself accelerates the resource downloads.
➣ Use Aplayer instead of iframe to load NetEase Cloud Music
Earlier versions of the blog embedded NetEase Cloud Music's own player at the bottom of the article page. That player is actually an iframe like this:
<iframe frameborder="no" border="0" marginwidth="0" marginheight="0" width=330 height=86 src="//music.163.com/outchain/player?type=2&id=781246&auto=1&height=66"></iframe>
When the iframe loads, it pulls in a pile of resources. Although the loading="lazy" attribute can be used for lazy loading, iframes still have many disadvantages:
- iframe will block the onload event of the main page
- The iframe and the main page share the HTTP connection pool, and the browser has restrictions on connections to the same domain, so it will affect the parallel loading of the page
- iframe is not good for page layout
- iframe is not mobile friendly
- Repeated reloading of iframes may cause memory leaks in some browsers
- Data transmission in iframe is complicated
- iframe is not good for SEO
The new version replaces the iframe player with APlayer and hosts a list of favorite songs statically in another gitee pages repository. A custom song list can then be loaded at the bottom of the blog like this:
var ap = new APlayer({
container: document.getElementById('aplayer'),
theme: '#e9e9e9',
audio: [{
name: '存在信号',
artist: 'AcuticNotes',
url: 'https://nojsja.gitee.io/static-resources/audio/life-signal.mp3',
cover: 'https://nojsja.gitee.io/static-resources/audio/life-signal.jpg'
},
{
name: '遺サレタ場所/斜光',
artist: '岡部啓一',
url: 'https://nojsja.gitee.io/static-resources/audio/%E6%96%9C%E5%85%89.mp3',
cover: 'https://nojsja.gitee.io/static-resources/audio/%E6%96%9C%E5%85%89.jpg'
}]
});
preview:
2. Optimize interface performance
➣ Optimize the page reflow and redraw situation
1) Concept
Reflow and repaint are unavoidable steps in the browser's rendering of a web page. A reflow means the DOM tree is laid out and rendered again because the spatial position or size of elements changed during rendering; a repaint happens when a node's style changes without affecting the layout. In terms of performance, reflow is expensive and prone to cascading effects: in normal flow layout, once an element reflows, every element after it also reflows and re-renders. Repaint consumes comparatively little.
2) How to effectively judge the reflow and redraw status of the interface?
Chromium-based browsers ship with the DevTools web development tool, but arguably most front-end developers have never looked beyond its basic features such as log debugging, network request tracking and style debugging. Reflow and repaint can also be visualized and measured with it: press F12 to open DevTools, click the three-dot menu in the upper right corner, then More Tools -> Rendering, and check the first two items, Paint Flashing (highlight repainted areas) and Layout Shift Regions (highlight reflowed areas). Now return to the page you have open and interact with it: regions that reflow during the operation turn blue, and regions that are repainted turn green. The highlight does not last long, so observe carefully.
Repaint:
Reflow:
Besides visualizing reflow and repaint, DevTools has other very useful features. For example, the Coverage tool analyzes how much of the css/js content introduced on a page is actually used; external resources with low usage can be inlined or rewritten by hand to make importing them more cost-effective.
➣ Use throttling and debounce ideas to optimize scroll event monitoring
When facing scenarios where a function is triggered at high frequency and its calls need to be controlled, people sometimes wonder when to use throttling and when to use debouncing. A simple distinction: if you only care about the final result of a burst of high-frequency triggers, use a debounce function; if you need to limit the number of calls and keep an optimal interval between calls during continuous triggering, without caring whether a call is the last one, use a throttle function.
For example, echarts charts often need to be re-rendered after the window is resized, but listening to the resize event directly may trigger the rendering function many times in a short period. We can apply the debounce idea: the resize listener passes its arguments to a pre-initialized debounced function, so that an actual echarts re-render happens only after the resize process has finished (see the usage sketch after the implementations below).
Here is a simple implementation of the throttling function and debounce function:
/**
 * fnDebounce [debounce function]
 * @author nojsja
 * @param {Function} fn [the original function to wrap]
 * @param {Number} timeout [delay time]
 * @return {Function} [higher-order function]
 */
var fnDebounce = function(fn, timeout) {
  var timer = null;
  return function() {
    var context = this;
    var args = [].slice.call(arguments);
    // restart the timer on every call; fn runs only after calls have stopped for `timeout` ms
    if (timer) clearTimeout(timer);
    timer = setTimeout(function() {
      timer = null;
      fn.apply(context, args);
    }, timeout);
  };
};
/**
 * fnThrottle [throttle function]
 * @author nojsja
 * @param {Function} fn [the original function to wrap]
 * @param {Number} timeout [delay time]
 * @return {Function} [higher-order function]
 */
var fnThrottle = function(fn, timeout) {
var time = null;
return function() {
if (!time) return time = Date.now();
if ((Date.now() - time) >= timeout) {
time = null;
return fn.apply(this, [].slice.call(arguments));
}
};
};
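A usage sketch for the echarts resize case described above, assuming an already-initialized chart instance named myChart:
```js
// Re-render the chart only after the user has stopped resizing for ~200 ms.
var resizeChart = fnDebounce(function () {
  myChart.resize(); // echarts instances expose a resize() method
}, 200);

window.addEventListener('resize', resizeChart);
```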
The content navigation bar on the right side of blog articles switches automatically between fixed layout and normal flow layout according to the scroll position, so that the navigation stays visible while reading instead of being hidden at the top:
/* a scroll check can be triggered at most once every 150 ms */
$(window).on('scroll', fnThrottle(function() {
var rectbase = $$tocBar.getBoundingClientRect();
if (rectbase.top <= 0) {
$toc.css('left', left);
(!$toc.hasClass('toc-fixed')) && $toc.addClass('toc-fixed');
$toc.hasClass('toc-normal') && $toc.removeClass('toc-normal');
} else {
$toc.css('left', '');
$toc.hasClass('toc-fixed') && $toc.removeClass('toc-fixed');
(!$toc.hasClass('toc-normal')) && $toc.addClass('toc-normal');
($$toc.scrollTop > 0) && ($$toc.scrollTop = 0);
}
}, 150));
➣ IntersectionObserver API polyfill compatibility strategy
As mentioned above, the IntersectionObserver API is used for the lazy loading of various interface components in the blog; it performs better and is more capable. However, in web development we usually have to consider the compatibility of each API, which can be checked on Can I Use. From the figure below we can see that compatibility for this API is acceptable: most recent desktop and mobile browsers already support it:
Therefore, to handle the compatibility problem of some older browsers, a somewhat extreme approach is taken here. Normally we would include an external [xxx].polyfill.js (xxx being the corresponding API) to add the missing feature to older browsers, but newer browsers that already support the API would then download the polyfill for nothing, wasting requests and bandwidth. Since most browsers already support this API, the polyfill.js is not included with a `<script>` tag by default; instead, a script checks whether the current browser supports the API and, if not, downloads the polyfill file with a synchronous XHR request and executes the whole script with eval(...) once it arrives. The synchronous request blocks the current js execution thread, so use it with caution; here it guarantees that IntersectionObserver is injected into the page with high priority, otherwise scripts that use this API might fail.
<!-- this script is placed near the top of the page -->
<script>
if ('IntersectionObserver' in window &&
'IntersectionObserverEntry' in window &&
'intersectionRatio' in window.IntersectionObserverEntry.prototype) {
if (!('isIntersecting' in window.IntersectionObserverEntry.prototype)) {
Object.defineProperty(window.IntersectionObserverEntry.prototype,
'isIntersecting', {
get: function () {
return this.intersectionRatio > 0;
}
});
}
} else {
/* load polyfill sync */
sendXMLHttpRequest({
url: '/blogs/js/intersection-observer.js',
async: false,
method: 'get',
callback: function(txt) {
eval(txt);
}
});
}
</script>
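sendXMLHttpRequest above is a small helper used by the blog; a minimal sketch of what it might look like (an illustrative reconstruction with the same option names as the call above, not the blog's actual source):
```js
// Thin wrapper over XMLHttpRequest that also supports synchronous requests.
function sendXMLHttpRequest(options) {
  var xhr = new XMLHttpRequest();
  var isAsync = options.async !== false;
  xhr.open(options.method || 'get', options.url, isAsync);
  if (isAsync) {
    xhr.onload = function () {
      if (xhr.status === 200 && options.callback) options.callback(xhr.responseText);
    };
  }
  xhr.send(null);
  // For a synchronous request the response is available right after send().
  if (!isAsync && xhr.status === 200 && options.callback) {
    options.callback(xhr.responseText);
  }
}
```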
➣ Use IntersectionObserver instead of native onscroll event monitoring
IntersectionObserver
is usually used in some intersection detection scenarios in the interface:
- Image lazy loading-load when the image is scrolled to be visible
- Content infinite scrolling-when the user scrolls to the bottom of the scroll container, load more data directly, without the need for the user to operate to turn the page, giving the user the illusion that the web page can be scrolled infinitely
- Detect the exposure of advertisements-in order to calculate advertising revenue, you need to know the exposure of advertising elements
- Perform tasks and play videos when the user sees an area
Taking infinite scrolling as an example, the old-school intersection detection approach is to listen to the scroll event on the scroll container and, in the listener, read the geometric properties of the scrolling element to decide whether it has reached the bottom. We know that reading and setting properties such as scrollTop causes the page to reflow, and if the interface binds several scroll listeners doing similar work, page performance drops significantly:
/* scroll listener */
onScroll = () => {
const {
scrollTop, scrollHeight, clientHeight
} = document.querySelector('#target');
/* already scrolled to the bottom */
// scrollTop (distance scrolled); clientHeight (visible height of the container); scrollHeight (total content height of the container)
if (scrollTop + clientHeight === scrollHeight) { /* do something ... */ }
}
Here is a simple implementation of image lazy loading to introduce its use; for more detail see the blog post "Detailed Explanation of Front-end Performance Optimization Techniques (1)".
(function lazyload() {
var imagesToLoad = document.querySelectorAll('img[data-src]');
function loadImage(image) {
image.src = image.getAttribute('data-src');
image.addEventListener('load', function() {
image.removeAttribute('data-src');
});
}
var intersectionObserver = new IntersectionObserver(function(items, observer) {
items.forEach(function(item) {
/* all properties:
item.boundingClientRect - geometric bounds of the target element
item.intersectionRatio - intersection ratio: intersectionRect / boundingClientRect
item.intersectionRect - the intersection area of the root and the target element
item.isIntersecting - true (intersection started), false (intersection ended)
item.rootBounds - describes the root element
item.target - the target element
item.time - timestamp from the time origin (when the page finished loading in the window) to when the intersection was triggered
*/
if (item.isIntersecting) {
loadImage(item.target);
observer.unobserve(item.target);
}
});
});
imagesToLoad.forEach(function(image) {
intersectionObserver.observe(image);
});
})();
3. Website best practices
➣ Use the hexo-abbrlink plugin to generate article links
The blog urls generated by the hexo framework default to :year/:month/:day/:title, i.e. /year/month/day/title. When a blog title is in Chinese, Chinese also appears in the generated url; Chinese paths are unfriendly to search engine optimization, and a copied link gets percent-encoded, which is neither readable nor concise.
hexo-abbrlink solves this problem. Install the plug-in with npm install hexo-abbrlink --save, then add the configuration to _config.yml:
permalink: :year/:month/:day/:abbrlink.html/
permalink_defaults:
abbrlink:
  alg: crc32 # algorithm: crc16 (default) or crc32
  rep: hex   # representation: dec (default) or hex
A generated post link then looks like this: https://post.zz173.com/posts/8ddf18fb.html/ ; even if the article title is updated, the link to the article does not change.
➣ Use hexo-filter-nofollow to avoid security risks
The hexo-filter-nofollow plug-in adds the attribute rel="noopener external nofollow noreferrer" to external `<a>` links.
A large number of external links inside a website affects its ranking weight, which is bad for SEO.
- `nofollow`: an attribute proposed by Google, Yahoo and Microsoft years ago; links carrying it are excluded from weight calculation. nofollow tells the crawler that it does not need to follow the target page. To combat blog spam, Google recommends nofollow, which tells the search engine crawler that it does not need to crawl the target page and that the current page's PageRank should not be passed to the target page. If the target page is submitted directly through the sitemap, the crawler will still crawl it; nofollow only expresses the current page's attitude toward the target page, not other pages' attitude toward it.
- `noreferrer` and `noopener`: when an `<a>` tag with target="_blank" links to another page, the new page runs in the same process as your page. If the new page executes expensive JavaScript, the old page's performance may suffer, and the new page can perform arbitrary operations through `window.opener`, which is a big security risk. With `noopener` (and `noreferrer` for compatibility), the newly opened page cannot get the old page's window object.
- `external`: tells search engines that this is an off-site link. Its effect is equivalent to target="_blank" in this respect, reducing the SEO weight impact of external links.
➣ Use hexo-deployer-git and github workflow for automated deployment
After the static resources are built, I need to push them to the corresponding github pages or gitee pages repository. When several repositories need to be deployed, manual operation is very inefficient, so the hexo-deployer-git plug-in is used for automated deployment. Declare the repository information to deploy as follows; if there are several repositories, simply declare several deploy fields:
# Deployment
deploy:
  type: git
  repository: https://github.com/nojsja/blogs
  branch: master
  ignore_hidden:
    public: false
  message: update