
Overview

PWA (Progressive Web App) refers to web applications built with a specific set of technologies and standard patterns so that they deliver the characteristics and experience of native applications, such as the convenience and fast responsiveness we associate with locally installed apps.

PWA was proposed by Google in 2016, officially launched in 2017, and saw a major breakthrough in 2018. The major browser vendors, including Google, Microsoft, and Apple, have all announced support for PWA technology.

There are two key technologies behind PWA:

  1. Manifest: by providing a manifest file, the browser can add the web application to the home screen (A2HS, Add to Home Screen).
  2. ServiceWorker: by proxying network requests, it enables scenarios such as resource caching, site acceleration, and offline applications.

These two are currently the technologies most developers rely on to build PWA applications.

Beyond those, there are APIs such as push messaging, Web Streams, Web Bluetooth, Web Share, and hardware access. Because browser support for them varies, they are not yet widely adopted.
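
For example, these APIs are typically used behind feature detection so that the page degrades gracefully on browsers that lack them. A minimal sketch with the Web Share API (the fallback behavior here is just illustrative):

// Feature-detect the Web Share API before relying on it.
async function shareCurrentPage(): Promise<void> {
  if (typeof navigator !== 'undefined' && 'share' in navigator) {
    try {
      await navigator.share({ title: document.title, url: location.href });
    } catch (error) {
      // The user cancelled the share sheet or the call was rejected.
      console.debug('Web Share failed or was cancelled:', error);
    }
  } else {
    // Fallback: e.g. copy the URL to the clipboard or show a custom share dialog.
    console.debug('Web Share API not supported in this browser.');
  }
}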

In any case, using ServiceWorker to optimize the user experience has become a mainstream technique in web front-end optimization.

Tools and Frameworks

Before 2018, the mainstream tools were:

  1. google/sw-toolbox : a set of tools for building ServiceWorkers easily.
  2. google/sw-precache : provides precaching by injecting a list of resources into the ServiceWorker at build time.
  3. baidu/Lavas : a Vue-based PWA integration solution developed by Baidu.

Later, sw-toolbox and sw-precache left the stage as Google developed a better toolset, Workbox.

Lavas, however, has been left unmaintained since its team was disbanded and its main author departed.

Pain points

Workbox provides a set of tools to help us manage ServiceWorkers, and its encapsulation of CacheStorage makes resource management much easier.

But when building the actual PWA application, we still need to pay attention to many issues:

  1. How to organize the project and its code?
  2. How to do unit testing?
  3. How to resolve ServiceWorker scope conflicts within an MPA (Multi-Page Application)?
  4. How to control our ServiceWorker remotely?
  5. What is the optimal resource caching strategy?
  6. How to monitor our ServiceWorker and collect data?

Since Workbox positions itself as a "Library", we need a "Framework" to provide a unified solution to these common problems.

And, we want it to be progressive, as PWA advocates.

Code decoupling

What is the problem?

As our ServiceWorker code grows, it becomes bloated, hard to manage, and hard to reuse.
At the same time, we want common capabilities such as remote control, inter-thread communication, and data reporting to be reusable as on-demand plugins, so that the whole system stays "progressive".

We all know that a ServiceWorker exposes a series of events at runtime; the commonly used ones are:

self.addEventListener('install', event => { });
self.addEventListener('activate', event => { });
self.addEventListener('fetch', event => { });
self.addEventListener('message', event => { });

When multiple features must listen to the same events, the code in a single file becomes more and more bloated:

self.addEventListener('install', event => {
  // remote control module - initialize configuration
  ...
  // precache module - cache resources
  ...
  // data reporting module - collect events
  ...
});

self.addEventListener('activate', event => {
  // remote control module - refresh configuration
  ...
  // data reporting module - collect events
  ...
});

self.addEventListener('fetch', event => {
  // remote control module - heartbeat check
  ...
  // assets cache module - cache matching
  ...
  // data reporting module - collect events
  ...
});

self.addEventListener('message', event => {
  // data reporting module - collect events
  ...
});

You might say "modularization" could solve this:

import remoteController from './remote-controller.ts';  // remote control module
import assetsCache from './assets-cache.ts';  // assets cache module
import collector from './collector.ts';  // data collection module
import precache from './pre-cache.ts';  // precache module

self.addEventListener('install', event => {
  // remote control module - initialize configuration
  remoteController.init(...);
  // precache module - cache resources
  precache.store(...);
  // data reporting module - collect events
  collector.log(...);
});

self.addEventListener('activate', event => {
  // remote control module - refresh configuration
  remoteController.refresh(...);
  // data reporting module - collect events
  collector.log(...);
});

self.addEventListener('fetch', event => {
  // remote control module - heartbeat check
  remoteController.heartbeat(...);
  // assets cache module - cache matching
  assetsCache.match(...);
  // data reporting module - collect events
  collector.log(...);
});

self.addEventListener('message', event => {
  // data reporting module - collect events
  collector.log(...);
});

Modularization reduces the amount of code in the main file and decouples features to some extent, but several problems remain:

  1. Difficult to reuse : to use a module's functionality, you must call its interfaces correctly in multiple event handlers; likewise, removing a module means editing multiple event handlers.
  2. High cost of use : each module exposes its own interfaces, so users must understand how the module works and how its interfaces are meant to be called before they can use it well.
  3. Limited decoupling : as modules multiply, it even becomes hard to avoid namespace conflicts between multiple front-end applications under the same domain.

To achieve our goal of being "progressive", we need a better way to organize the code.

Plug-in implementation

We can hand control of the ServiceWorker's events over to a framework, and let each module consume those events through plugins.

We know the famous onion model of Koa.js:

(Figure: the Koa onion model)

The onion model is a good "plugin" idea, but it is "one-dimensional": Koa handles one response to one network request, so each middleware only needs to deal with that single flow.
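
A minimal sketch of this "one-dimensional" middleware composition (illustrative only, not Koa's actual implementation):

// Each middleware wraps the next one, so control flows "in" through every
// layer and back "out" again, like the layers of an onion.
type Next = () => Promise<void>;
type Middleware<C> = (ctx: C, next: Next) => Promise<void>;

function compose<C>(middlewares: Middleware<C>[]) {
  return function handle(ctx: C): Promise<void> {
    const dispatch = (i: number): Promise<void> => {
      const fn = middlewares[i];
      if (!fn) return Promise.resolve();
      return fn(ctx, () => dispatch(i + 1));
    };
    return dispatch(0);
  };
}

// Usage: a timing layer wrapping an inner handler.
const handler = compose<{ url: string; body?: string }>([
  async (ctx, next) => { console.time(ctx.url); await next(); console.timeEnd(ctx.url); },
  async (ctx) => { ctx.body = `response for ${ctx.url}`; },
]);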

In a ServiceWorker, besides the four commonly used events above, there are more, such as SyncEvent and NotificationEvent.

Therefore, we have to stack up a few more "onions" to cover more events.

At the same time, the code of the PWA application generally runs in two threads: the main thread and the ServiceWorker thread.

Finally, by encapsulating the native events and adding plugin support, we arrive at a "multidimensional onion plugin system":

(Figure: the GlacierJS multidimensional onion plugin system)
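
Conceptually, the framework listens to each native event exactly once and fans it out to the corresponding hook of every registered plugin. A simplified sketch of that dispatching idea (not the actual GlacierJS source):

declare const self: ServiceWorkerGlobalScope;

interface Plugin {
  onInstall?(event: ExtendableEvent): void | Promise<void>;
  onActivate?(event: ExtendableEvent): void | Promise<void>;
  onFetch?(event: FetchEvent): void | Promise<void>;
  onMessage?(event: ExtendableMessageEvent): void | Promise<void>;
}

class MiniGlacierSW {
  private plugins: Plugin[] = [];

  use(plugin: Plugin): void {
    this.plugins.push(plugin);
  }

  // Listen to each native event once, then dispatch it to every plugin hook.
  listen(): void {
    self.addEventListener('install', (event) => this.plugins.forEach((p) => p.onInstall?.(event)));
    self.addEventListener('activate', (event) => this.plugins.forEach((p) => p.onActivate?.(event)));
    self.addEventListener('fetch', (event) => this.plugins.forEach((p) => p.onFetch?.(event)));
    self.addEventListener('message', (event) => this.plugins.forEach((p) => p.onMessage?.(event)));
  }
}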

After encapsulating the native events and life cycle, we can provide more elegant life cycle hooks to each plugin:

(Figure: the GlacierJS plugin life cycle)

Building on GlacierJS, turning modules into plugins becomes easy.

Register the plugins in the main file of the ServiceWorker thread:

import { GlacierSW } from '@glacierjs/sw';
import RemoteController from './remote-controller.ts';  // remote control module
import AssetsCache from './assets-cache.ts';  // assets cache module
import Collector from './collector.ts';  // data collection module
import Precache from './pre-cache.ts';  // precache module
import MyPluginSW from './my-plugin.ts';

const glacier = new GlacierSW();

glacier.use(new RemoteController(...));
glacier.use(new AssetsCache(...));
glacier.use(new Collector(...));
glacier.use(new Precache(...));
glacier.use(new MyPluginSW(...));

glacier.listen();

In a plugin, we implement the logic of an independent module by listening to these hooks:

 import { ServiceWorkerPlugin } from '@glacierjs/sw';
import type { FetchContext, UseContext  } from '@glacierjs/sw';

export class MyPluginSW implements ServiceWorkerPlugin {
    constructor() {...}
    public async onUse(context: UseContext) {...}
    public async onInstall(event) {...}
    public async onActivate() {...}
    public async onFetch(context: FetchContext) {...}
    public async onMessage(event) {...}
    public async onUninstall() {...}
}
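
The window (main-thread) side has a matching entry point, @glacierjs/window, where the window half of each plugin is registered against the same ServiceWorker (the same GlacierWindow usage appears again in the data collection section below). A minimal sketch, where MyPluginWindow is a hypothetical window-side counterpart of MyPluginSW and the register() call is an assumption to be checked against the GlacierJS docs:

// in the main thread (page)
import { GlacierWindow } from '@glacierjs/window';
import { MyPluginWindow } from './my-plugin';  // hypothetical window-side half of MyPluginSW

// GlacierWindow is constructed with the ServiceWorker script it should manage.
const glacierWindow = new GlacierWindow('./service-worker.js');

// Window-side plugins are registered the same way as on the ServiceWorker side.
glacierWindow.use(new MyPluginWindow());

// Kick off ServiceWorker registration; the method name here is assumed.
glacierWindow.register();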

Scope conflict

We all know that the scope of a ServiceWorker has two key characteristics:

  1. The default scope is the path the ServiceWorker file is registered from.
  2. Within the same path, only one ServiceWorker can be in control at a time.

Scope narrowing and widening

Regarding the first characteristic: for example, if the registered ServiceWorker file is /a/b/sw.js , the scope defaults to /a/b/ :

 if (navigator.serviceWorker) {
    navigator.serviceWorker.register('/a/b/sw.js').then(function (reg) {
        console.log(reg.scope);
        // scope => https://yourhost/a/b/
    });
}

Of course, we can narrow the scope by specifying it at registration time, for example:

 if (navigator.serviceWorker) {
    navigator.serviceWorker.register('/a/b/sw.js', {scope: '/a/b/c/'})
        .then(function (reg) {
            console.log(reg.scope);
            // scope => https://yourhost/a/b/c/
        });
}

We can also widen the scope by setting the Service-Worker-Allowed header on the server's response for the ServiceWorker file.

For example, Google Docs registers a ServiceWorker from https://docs.google.com/document/offline/serviceworker.js with the scope https://docs.google.com/document/u/0/ (see the sketch below for how the header can be set).

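For instance, with a Node/Express static server (a hedged sketch; the paths, port, and file layout are illustrative, not how Google Docs actually serves it), the header is set on the ServiceWorker file response and the wider scope is requested at registration time:

// server.ts: serve the ServiceWorker file with a widened allowed scope.
import express from 'express';
import path from 'path';

const app = express();

app.get('/document/offline/serviceworker.js', (_req, res) => {
  // Permit this worker to control everything under /document/.
  res.setHeader('Service-Worker-Allowed', '/document/');
  res.sendFile(path.join(__dirname, 'serviceworker.js'));
});

app.listen(3000);

// On the page, the wider scope is requested explicitly:
//   navigator.serviceWorker.register('/document/offline/serviceworker.js', { scope: '/document/' });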

ServiceWorker governance under MPA

Modern Web App projects mainly take two architectural forms: SPA (Single Page Application) and MPA (Multi-Page Application).

The MPA model is very common in today's large-scale Web Apps. Compared with an SPA, it can carry a heavier business volume and is easier to maintain and extend later on, and it is often maintained by multiple teams.

Suppose we have an MPA site:

 .
|-- app1
|   |-- app1-service-worker.js
|   `-- index.html
|-- app2
|   `-- index.html
|-- index.html
`-- root-service-worker.js

app1 and app2 are maintained by different teams.

Suppose we register root-service-worker.js at the root path '/' to provide some common capabilities, such as "log collection" and "static resource caching".

Then the app1 team uses ServiceWorker capabilities to implement some requirements specific to app1, such as its "offline mode".

They register app1-service-worker.js in app1/index.html .

From then on, when visiting any page under app1/* , ServiceWorker control is handed over to app1-service-worker.js . That means only app1's "offline mode" is working, while the original "log collection", "static resource caching", and other functions stop taking effect.

This is obviously not what we want, and in real development it happens quite often.

There are two solutions to this problem:

  1. Encapsulate "log collection" and "static resource caching" as packages, and have app1-service-worker.js import and use them.
  2. Integrate the "offline mode" into root-service-worker.js , and only allow that ServiceWorker to be registered.

Regarding solution 1, encapsulating general functions is the right idea, but the functions in the root scope may not all be cleanly extractable, and every time the root ServiceWorker gains new functionality, the sub-path ServiceWorkers have to be updated and upgraded by hand.

Regarding solution 2, it obviously solves the problems of solution 1, but other applications, such as app2 , may not need the "offline mode" at all.

Based on this, we introduce solution 3: integrate all functions into the root ServiceWorker, and isolate which combination of functions is enabled according to the scope.

Based on GlacierJS , the code might look like this:

const mainPlugins = [
  new Collector(),    // log collection
  new AssetsCache(),  // static resource caching
];

glacier.use('/', mainPlugins);
glacier.use('/app1', [
  ...mainPlugins,
  new Offline(),      // offline mode
]);

Resource caching

One of the core capabilities of a ServiceWorker is that, combined with the Cache API, it can cache resources flexibly to optimize loading speed, weak-network access, and offline usage.


There are five common caching strategies for static resources:

  1. stale-while-revalidate
    Responds from the cache as quickly as possible when a cached copy is available, falls back to the network otherwise, and then uses the network response to refresh the cache. It is a relatively safe strategy (see the sketch after this list).
  2. cache-first
    Offline web applications rely heavily on the cache. For non-critical resources that can be cached gradually, "cache first" is the best option: if a response is in the cache, the request is satisfied from the cache and the network is not used at all; otherwise the request goes to the network and the response is cached, so that the next request can be served directly from the cache.
  3. network-first
    For frequently updated requests, "network first" is the ideal solution: by default it tries to fetch the latest response from the network, puts it into the cache when the request succeeds, and falls back to the cached response when the network fails.
  4. network-only
    When a request must always be fulfilled from the network, "network only" passes the request straight through to the network.
  5. cache-only
    Ensures that the response is always fetched from the cache. This scenario is less common; it is generally used together with "precaching".
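
Implemented directly on top of the Cache API, stale-while-revalidate looks roughly like this (a simplified sketch with no quota or error handling; the cache name is illustrative):

declare const self: ServiceWorkerGlobalScope;

const CACHE_NAME = 'assets-v1';

self.addEventListener('fetch', (event) => {
  event.respondWith((async () => {
    const cache = await caches.open(CACHE_NAME);
    const cached = await cache.match(event.request);

    // Always fire a network request to refresh the cache in the background.
    const networkFetch = fetch(event.request)
      .then((response) => {
        if (response.ok) cache.put(event.request, response.clone());
        return response;
      })
      .catch(() => cached as Response);  // tolerate network failure when a cached copy exists

    // Respond from the cache if possible, otherwise wait for the network.
    return cached ?? networkFetch;
  })());
});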

Which of these strategies should we use? The answer is to choose based on the type of resource.

For example, resources that only change when the web application is released, such as JS, stylesheets, and images, are well served by the cache-first strategy.

And index.html, as the main entry for page loads, is better suited to the stale-while-revalidate strategy.

Let's take GlacierJS's cache plugin ( @glacierjs/plugin-assets-cache ) as an example:

 // in service-worker.js
importScripts("//cdn.jsdelivr.net/npm/@glacierjs/core/dist/index.min.js");
importScripts('//cdn.jsdelivr.net/npm/@glacierjs/sw/dist/index.min.js');
importScripts('//cdn.jsdelivr.net/npm/@glacierjs/plugin-assets-cache/dist/index.min.js');

const { GlacierSW } = self['@glacierjs/sw'];
const { AssetsCacheSW, Strategy } = self['@glacierjs/plugin-assets-cache'];

const glacierSW = new GlacierSW();

glacierSW.use(new AssetsCacheSW({
    routes: [{
        // capture as string: store index.html with stale-while-revalidate strategy.
        capture: 'https://mysite.com/index.html',
        strategy: Strategy.STALE_WHILE_REVALIDATE,
    }, {
        // capture as RegExp: store all images with cache-first strategy
        capture: /\.(png|jpg)$/,
        strategy: Strategy.CACHE_FIRST
    }, {
        // capture as function: store all stylesheet with cache-first strategy
        capture: ({ request }) => request.destination === 'style',
        strategy: Strategy.CACHE_FIRST
    }],
}));

Remote control

Because of how ServiceWorkers work, once one is installed in the browser, the only way to fix an urgent online problem is to release a new ServiceWorker. But ServiceWorker installation takes time to propagate, and adding the time it takes some teams to go from changing code to shipping a release, the feedback loop becomes very long. How can we shorten it for online incidents?

For foreseeable scenarios, we can store a configuration remotely and use it to "remotely control" the ServiceWorker:

(Figure: remote control flow)

So how do we get the configuration?

Option 1 , fetch the configuration in the main thread:

  1. The user has to actively refresh the page for a change to take effect.
  2. We cannot switch the ServiceWorker off in a lightweight way. What does that mean? For a kill-switch scenario, the main thread can only "turn it off" by unregistering the ServiceWorker or clearing the cache, which is too heavy-handed.

Option 2 , fetch the configuration in the ServiceWorker thread:

  1. Switching off becomes lightweight: requests can simply be passed straight through to the network.
  2. However, if we need to clean up the user's environment by unregistering the ServiceWorker, the main thread will register it again on every visit and the ServiceWorker will uninstall itself again, leading to repeated install/uninstall churn.


So our final solution is "real-time configuration fetching in both threads".

The main thread also fetches the configuration, and a debounce guard is placed in front of the configuration fetch so that short bursts of onFetch events do not trigger a flood of concurrent requests.
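
For instance, the configuration fetch in the ServiceWorker thread can be guarded with a small time-window cache so that a burst of onFetch events triggers at most one remote request. A rough sketch; the endpoint, TTL, and field names are assumptions, not the real plugin internals:

interface RemoteConfig {
  switch: boolean;
  assetsEnable: boolean;
}

const CONFIG_TTL = 5000;  // re-fetch at most once every 5 seconds (assumed value)
let cachedConfig: RemoteConfig | null = null;
let lastFetchedAt = 0;
let inflight: Promise<RemoteConfig> | null = null;

async function getConfig(): Promise<RemoteConfig> {
  const now = Date.now();
  if (cachedConfig && now - lastFetchedAt < CONFIG_TTL) return cachedConfig;
  if (inflight) return inflight;  // collapse concurrent onFetch bursts into one request

  inflight = fetch('/my-remote-config.json')  // hypothetical config endpoint
    .then((res) => res.json() as Promise<RemoteConfig>)
    .then((config) => {
      cachedConfig = config;
      lastFetchedAt = Date.now();
      return config;
    })
    .finally(() => { inflight = null; });

  return inflight;
}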


In terms of code, we use Glacier's plugin @glacierjs/plugin-remote-controller to easily implement remote control:

// in ./remote-controller-sw.ts
import { RemoteControllerSW } from '@glacierjs/plugin-remote-controller';
import { GlacierSW } from '@glacierjs/sw';
import { options } from './options';

const glacierSW = new GlacierSW();
glacierSW.use(new RemoteControllerSW({
  fetchConfig: () => getMyRemoteConfig(),
}));

// getMyRemoteConfig fetches the configuration you store remotely;
// the expected return format looks like this:
const getMyRemoteConfig = async () => {
    const config: RemoteConfig = {
        // global switch: turning it off uninstalls the ServiceWorker
        switch: true,

        // switch for the assets cache feature
        assetsEnable: true,

        // fine-grained control over specific cache routes
        assetsCacheRoutes: [{
            capture: 'https://mysite.com/index.html',
            strategy: Strategy.STALE_WHILE_REVALIDATE,
        }],
    };

    return config;
};

Data collection

After the ServiceWorker is released, we need to keep track of how it behaves online, and some key metrics need to be collected and reported.

There are five kinds of data events built into @glacierjs/plugin-collector :

  1. ServiceWorker registered: SW_REGISTER
  2. ServiceWorker installed successfully: SW_INSTALLED
  3. Page controlled by the ServiceWorker: SW_CONTROLLED
  4. onFetch event handled: SW_FETCH
  5. Cache hit: CACHE_HIT, where CacheFrom.SW means the request was served from the CacheAPI cache in the ServiceWorker, and CacheFrom.Window means it fell through to the browser cache or the network

Based on the data above, we can derive some common metrics (see the sketch below):

  1. ServiceWorker install rate = SW_INSTALLED / SW_REGISTER
  2. ServiceWorker control rate = SW_CONTROLLED / SW_REGISTER
  3. ServiceWorker cache hit rate = CACHE_HIT (of CacheFrom.SW) / SW_FETCH
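
For illustration, once the raw event counts are aggregated on the reporting side (the counter field names below are hypothetical), the indicators are simple ratios:

interface SWCounters {
  registerCount: number;    // SW_REGISTER
  installedCount: number;   // SW_INSTALLED
  controlledCount: number;  // SW_CONTROLLED
  fetchCount: number;       // SW_FETCH
  swCacheHitCount: number;  // CACHE_HIT with CacheFrom.SW
}

function computeIndicators(c: SWCounters) {
  return {
    installRate: c.installedCount / c.registerCount,
    controlRate: c.controlledCount / c.registerCount,
    cacheHitRate: c.swCacheHitCount / c.fetchCount,
  };
}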

First we register the plugin-collector in the ServiceWorker thread:

 import { AssetsCacheSW } from '@glacierjs/plugin-assets-cache';
import { CollectorSW } from '@glacierjs/plugin-collector';
import { GlacierSW } from '@glacierjs/sw';

const glacierSW = new GlacierSW();

// should use plugin-assets-cache first in order to make CollectedDataType.CACHE_HIT work.
glacierSW.use(new AssetsCacheSW({...}));
glacierSW.use(new CollectorSW());

Then register the plugin-collector in the main thread, monitor data events, and report data:

 import {
  CollectorWindow,
  CollectedData,
  CollectedDataType,
} from '@glacierjs/plugin-collector';
import { CacheFrom } from '@glacierjs/plugin-assets-cache';
import { GlacierWindow } from '@glacierjs/window';

const glacierWindow = new GlacierWindow('./service-worker.js');

glacierWindow.use(new CollectorWindow({
    send(collected: CollectedData) {
      const { type, data } = collected;

      switch (type) {
        case CollectedDataType.SW_REGISTER:
          myReporter.event('sw-register-count');
          break;

        case CollectedDataType.SW_INSTALLED:
          myReporter.event('sw-installed-count');
          break;

        case CollectedDataType.SW_CONTROLLED:
          myReporter.event('sw-controlled-count');
          break;

        case CollectedDataType.SW_FETCH:
          myReporter.event('sw-fetch-count');
          break;

        case CollectedDataType.CACHE_HIT:
          // hit service worker cache
          if (data?.from === CacheFrom.SW) {
            myReporter.event(`sw-assets-count:hit-sw-${data?.url}`);
          }

          // hit browser cache or network
          if (data?.from === CacheFrom.Window) {
            myReporter.event(`sw-assets-count:hit-window-${data?.url}`);
          }
          break;
      }
    },
}));

Here, myReporter.event belongs to the data reporting library you implement yourself (a trivial stand-in is sketched below).
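
For illustration, such a stand-in could simply beacon the event name to your analytics backend (the endpoint and payload shape here are made up):

// A minimal placeholder for a reporting SDK: myReporter.event(name) sends a
// counter event to an analytics endpoint.
export const myReporter = {
  event(name: string): void {
    navigator.sendBeacon('/api/report', JSON.stringify({ name, ts: Date.now() }));
  },
};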

Unit testing

ServiceWorker testing can be broken down into several common levels:


At the top level are "integration tests": here we check overall behavior, for example that the page loads, the ServiceWorker registers, offline mode works, and so on. Integration tests are the slowest, but also the closest to reality.

The next layer is "browser unit tests". Because of the ServiceWorker life cycle and some APIs that exist only in the browser, running unit tests in a real browser avoids many environment problems.

Next is the "ServiceWorker unit test", which is also a unit test, run on the premise that the ServiceWorker under test is registered in a browser environment.

The last level is "mocking the ServiceWorker". Tests at this granularity go down to a specific class or method and only check inputs and outputs. That means no browser startup cost and, ultimately, a predictable way to test your code.

But mocking a ServiceWorker is difficult, and if the mock's API surface is not faithful, problems will not surface until integration testing or browser unit testing. We can use service-worker-mock or MSW to unit test a ServiceWorker in a Node.js environment.
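
A rough sketch of what that can look like with Jest, assuming the service-worker-mock API described in its README (makeServiceWorkerEnv() to build the globals, self.trigger() to fire events); the worker path is hypothetical:

import makeServiceWorkerEnv from 'service-worker-mock';

describe('service-worker.js', () => {
  beforeEach(() => {
    // Inject mocked ServiceWorker globals (self, caches, clients, ...) into the test env.
    Object.assign(global, makeServiceWorkerEnv());
    jest.resetModules();
  });

  it('pre-caches assets on install', async () => {
    require('../src/service-worker.js');     // load the worker under test
    await (self as any).trigger('install');  // fire the install event via the mock
    const cacheNames = await caches.keys();
    expect(cacheNames.length).toBeGreaterThan(0);
  });
});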

Space being limited, I will cover ServiceWorker unit-testing practice in a separate article.

Summary

This article started with the basic concepts of PWA, then introduced some excellent tools in the community and the practical pain points faced when building a "controllable, reliable, and extensible" PWA application.

It then gave practical suggestions for achieving those three qualities:

  1. Ensure the "controllability" of the PWA applications we ship through "data collection" and "remote control".
  2. Ensure the "reliability" of our PWA applications through "unit testing" and "integration testing".
  3. Achieve "extensibility" through the "multidimensional onion plugin model", which supports plugins, works across MPA applications, and lets multiple plugins be combined.
