Introduction
You may not know this, but I was once given the nickname "Mr. Over-design". As a (perhaps overly) confident developer, I always hoped that the systems I built could solve every future problem and cover every future extension scenario with a single set of abstractions. Of course, I usually ended up proving my own ignorance and lack of foresight instead. A wise man once said that a thing which claims to do everything often ends up doing nothing well. Excessive abstraction and excessive openness tend to leave the people who have to work with them at a loss. At this point you may think I am about to talk about over-design, but I am not. I only want to use this topic as a lead-in to share how I think about designing a plug-in architecture.
Why do we need plug-ins
Our software systems usually evolve through continuous iteration. It is hard to think through every feature that will ever be needed at the start of development, and sometimes we want to rely on the community to keep producing new features or optimizing existing ones. This requires the software system to have a certain degree of extensibility, and the plug-in pattern is the approach we most often choose.
In fact, a large number of existing software systems and tools use plug-ins to achieve extensibility. Take the most familiar example, VSCode: its plug-in ecosystem has surpassed that of its predecessor Atom, with 24,894 extensions published to the marketplace at the time of writing. These plug-ins let us customize the appearance and behavior of the editor, add extra features, and support more languages and syntaxes, which greatly improves development efficiency and keeps expanding the user base. Or take Chrome, which we all know well: one of its core strengths is also a rich extension market, making it an indispensable tool for developers and ordinary users alike. There are many more examples, such as Webpack and Nginx, which I won't repeat here.
Looking at how existing systems design their plug-ins, we create plug-ins mainly to solve the following two types of problems:
- Provide new capabilities for the system
- Customize the existing capabilities of the system
At the same time, when solving the above problems:
- Plug-in code and system code are decoupled at the engineering level and can be developed independently, isolating plug-in developers from the complexity of the framework's internal logic
- Can be dynamically introduced and configured
And going one step further:
- By combining multiple single-responsibility plug-ins, a variety of complex logic can be built up, and that logic can be reused across complex scenarios
Both providing new capabilities and customizing existing ones can serve either the system's own developers or third-party developers.
Combining the characteristics above, let's try to briefly describe what a plug-in is. A plug-in is generally a module that independently completes one function or a series of functions. Whether or not a plug-in is introduced should not affect the normal operation of the original system (unless it depends on another plug-in). Plug-ins are introduced into the system at runtime, and the system controls their scheduling. A system can have multiple plug-ins, which the system can combine in predetermined ways.
How to implement the plug-in mode
The plug-in pattern is essentially a design idea; there is no fixed, one-size-fits-all implementation. But after long-term practice we can summarize a methodology to guide the implementation of a plug-in system, and some implementation details have become "best practices" widely recognized by the community. While writing this article I also studied the plug-in designs of some well-known community projects, including but not limited to Koa, Webpack, and Babel.
1. Before solving the problem, first define the problem
The first step in implementing a plug-in model is always to define what problems you need plug-ins to help you solve. This is a case-by-case analysis, and it always requires you to abstract the capabilities of the current system to some degree. Take Babel, whose core function is to transform code in one language into code in another language. The problem it faces is that it cannot exhaustively enumerate syntax types at design time, nor can it know in advance how to convert a new kind of syntax, so it needs to provide corresponding extension mechanisms. To this end, it abstracts its overall process into three steps, parse, transform, and generate, and mainly provides plug-in support for parse and transform. At the parse level, the core problems to solve are how to tokenize the code and how to understand word meaning and grammar. At the transform level, the problem is how to convert a specific syntax tree structure into a known syntax tree structure.
Obviously, Babel clearly defines what plug-ins at the parse and transform layers are supposed to accomplish. Of course, some people may ask why the problem has to be defined so clearly when the plug-in system is meant to serve future uncertainty. That view is both right and wrong. Computer programs are always deterministic: we need a clear input format, a clear output format, and clear dependencies, and the problem must be solved within a known framework. This leads to the art of defining the problem: how to give certainty to uncertainty, and how to find certainty within uncertainty. In plain words, this is "abstraction", which is why I used over-design as the opening of this article.
When I define a problem, the method I use most often is sample analysis. It is not a shortcut, but it is reasonably effective. Sample analysis first focuses on sorting out the known problems to be solved, uses them as samples to classify and extract commonalities, and thereby forms a set of abstract patterns. It then takes some problems that are uncertain but may need to be solved in the future and checks whether any of them fail to fit the pattern. Rather than keep talking in the abstract, let's take Babel as an example below. Of course, Babel's abstract design is actually backed by established compiler theory; when an existing theory has already done the abstraction for you, it is better to use it directly.
The main problem Babel solves is how to convert code written in new syntax into code written in old syntax without changing its logic. Simply put, it is a code => code problem. But what needs to be converted, and how, will keep changing as grammar specifications evolve, so the plug-in pattern is needed to improve future extensibility. The problems we want to solve might be how to transform ES6's new syntax, or the DSLs customized by frameworks, such as JSX. We could simply chain together a series of regex-based transforms, but then each plug-in would contain a lot of repetitive recognition and analysis logic, which not only increases running cost but also makes it hard to keep plug-ins from interfering with each other. Babel instead chose to separate the two actions of parsing and transforming, and to implement both with plug-ins. The problem a parse plug-in solves is how to parse code and turn it into an AST. For different languages this breaks down into the same two things: how to tokenize, and how to do semantic analysis (which can be further divided into how to build context, how to produce AST nodes, and so on; I won't subdivide it here). The final result is the model shown in the figure below, with plug-ins focused on these subdivided problems. The transform step can be divided into how to find a given AST node and how to convert it, which eventually forms the Visitor pattern; I won't describe it in detail here. Now let's think again: if new grammar such as ES7, 8, or 9 (future relative to the original design) is released later, can we still use this model to solve the problem? It seems we can.
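To make the transform layer a bit more concrete, here is a minimal sketch of a visitor-style transform plug-in in Babel's plug-in format. The "reverse every identifier" behavior is only an illustration (the classic example from the Babel handbook), not something you would ship.

```js
// A minimal Babel transform plugin: it only declares *what* to do when a given
// AST node type is visited; Babel's traversal decides *when* to call it.
module.exports = function reverseIdentifiersPlugin() {
  return {
    name: "reverse-identifiers", // illustrative name
    visitor: {
      // Called for every Identifier node encountered during traversal.
      Identifier(path) {
        path.node.name = path.node.name.split("").reverse().join("");
      },
    },
  };
};
```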
This is what I meant earlier by finding certainty within uncertainty: minimize the uncertainty the system itself has to face, and constrain the problem by decomposing it.
Once the problem is clearly defined, we have probably completed a third of the work. Next comes the actual design.
2. Several major elements that cannot be avoided in plug-in architecture design
The design of the plug-in pattern can be simple or complex; we cannot expect one plug-in design to suit all scenarios. If that were possible, I would not need to write this article; I could just give you an npm package address and be done. This is why we must define the problem clearly before designing: the specific implementation must be chosen by weighing it against the specific problem to be solved. Still, there are patterns and rules we can follow.
When we formally design a plug-in architecture, the issues we have to think about are rarely separable from the following points. The whole design process is essentially choosing a suitable solution for each point and combining them into a plug-in system. These points are:
- How to inject, configure, and initialize plugins
- How plug-ins affect the system
- The meaning of plug-in input and output and the capabilities that can be used
- What is the relationship between multiple plug-ins
Let's go through each point in detail.
How to inject, configure, and initialize plugins
Injection, configuration, and initialization are actually three separate things, but they all happen before the plug-in does its work, so I will discuss them together.
Let's first talk about injection. Essentially this is about how the system becomes aware of a plug-in's existence. Injection methods can generally be divided into declarative and programmatic. Declarative means telling the system, through configuration, where to find which plug-ins; when the system runs, it loads the corresponding plug-ins according to convention and configuration. Babel works this way: you write plug-in names in the configuration file, and at runtime it finds the corresponding packages in the module directory and loads them. Programmatic means the system provides a registration API, and the developer completes registration by passing the plug-in into that API. Comparing the two, the declarative style mainly suits scenarios where the system starts on its own and does not need to be embedded in another software system; using the programmatic style there would raise the cost of customization, though the declarative style in turn imposes some restrictions on plug-in naming and release channels. The programmatic style suits cases where the system is introduced into an external system during development. Of course, both can be supported at the same time.
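To make the two injection styles concrete, here is a hedged sketch; `some-plugin-host`, `System`, `use()`, and the `.pluginrc` file are invented names for a hypothetical host system, not a real API.

```js
// Declarative style: a config file (a hypothetical .pluginrc) lists plug-in
// names and options, and the system resolves and loads them by convention:
//   { "plugins": [["plugin-bar", { "verbose": true }]] }

// Programmatic style: the system exposes a registration API and the developer
// passes the plug-in in directly.
const System = require("some-plugin-host"); // hypothetical host system
const barPlugin = require("plugin-bar");    // hypothetical plug-in

const system = new System();
system.use(barPlugin, { verbose: true });   // register and configure in one call
```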
Then there is plug-in configuration. The main purpose of configuration is to customize a plug-in, because a plug-in may need to fine-tune its behavior for different usage scenarios, and creating a separate plug-in for every scenario would be overkill. Configuration information is generally passed in at injection time; reconfiguration after injection is rarely supported. How the configuration takes effect is tied to plug-in initialization. Initialization involves two details: the method and the timing. Let's start with the method; I will list two common approaches. One is the factory pattern: what a plug-in exposes is a factory function, and the caller or the plug-in framework passes configuration into it to produce a plug-in instance. The other is passing configuration in at runtime: the plug-in framework hands configuration to the plug-in through an agreed context when it schedules the plug-in. Let's continue using Babel as an example of the factory pattern.
function declare<
O extends Record<string, any>,
R extends babel.PluginObj = babel.PluginObj
>(
builder: (api: BabelAPI, options: O, dirname: string) => R,
): (api: object, options: O | null | undefined, dirname: string) => R;
The builder in the code above is the factory function we are talking about, and it eventually produces a plugin instance. The builder obtains configuration information through options; this design also supports passing some runtime environment information through api, but that is not essential, so I won't go into it. Simplified, it is:
type TPluginFactory<OPTIONS, PLUGIN> = (options: OPTIONS) => PLUGIN;
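Seen from the plug-in author's side, consuming this factory contract looks roughly like the sketch below. `declare` and `api.assertVersion` do come from `@babel/helper-plugin-utils`, but the plug-in name, the `uppercase` option, and the visitor body are placeholders for illustration.

```js
const { declare } = require("@babel/helper-plugin-utils");

// The builder receives the configuration (options) and returns the plug-in
// instance; the plug-in itself only makes declarations and keeps no global state.
module.exports = declare((api, options) => {
  api.assertVersion(7); // ensure we are running under Babel 7+

  return {
    name: "my-configurable-plugin", // illustrative name
    visitor: {
      StringLiteral(path) {
        // behaviour fine-tuned by the per-usage configuration
        if (options.uppercase) {
          path.node.value = path.node.value.toUpperCase();
        }
      },
    },
  };
});
```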
Now for initialization. A plug-in can naturally be initialized by calling its factory function, or be injected after it has already been initialized, or it may not need initialization at all. Generally we do not choose to initialize before injection, because for the sake of decoupling we try to keep the plug-in itself purely declarative. Whether to use the factory pattern depends on whether the plug-in needs initialization; if you find it hard to decide, the factory pattern is the recommended default, since it can cope with more complex scenarios later. The timing of initialization can be divided into initialization at injection time, unified initialization, and initialization at runtime. In many cases, initialization at injection time and unified initialization can be used in combination. The table below tries to illustrate the distinction:
|  | Injection is initialization | Unified initialization | Initialize at runtime |
| --- | --- | --- | --- |
| Is the plug-in pure logic? | Usable | Yes |  |
| Does it need to pre-mount onto or modify the system? | Yes | No |  |
| Do plug-in initializations depend on each other? | No | Yes | No |
| Does plug-in initialization have performance overhead? | Usable | No |  |
One more issue is worth mentioning here. In some systems we may rely on a combination of many plug-ins to accomplish one complex task. To hide the complexity of introducing and configuring each plug-in separately, we can also provide the concept of a Preset, which packages multiple plug-ins together with their configuration. Users only need to import the preset and don't have to care which plug-ins are inside it. For example, to support React syntax, Babel actually needs several plug-ins such as `@babel/plugin-syntax-jsx`, `@babel/plugin-transform-react-jsx`, `@babel/plugin-transform-react-display-name`, and `@babel/plugin-transform-react-pure-annotations`, and it finally packages them as `@babel/preset-react`.
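As a sketch of the idea, a preset is itself just a module that returns the bundled plug-ins together with their configuration. The following is not the real implementation of `@babel/preset-react`, only an abbreviated illustration of the shape:

```js
// A minimal preset: it packages several plug-ins (and their options) so that
// users only need to reference the preset, not each individual plug-in.
module.exports = function myReactLikePreset(api, options = {}) {
  return {
    plugins: [
      "@babel/plugin-syntax-jsx",
      ["@babel/plugin-transform-react-jsx", { pragma: options.pragma }],
      "@babel/plugin-transform-react-display-name",
    ],
  };
};
```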
How plug-ins affect the system
The impact of plug-ins on a system can be summarized in three aspects: behavior, interaction, and display. A single plug-in may involve only one of them, and depending on the scenario some aspects may not need to be affected at all; a logic-engine type of system, for example, most likely has no display aspect to speak of.
VSCode plug-ins roughly cover all three, so let's take a simple one and have a look. Here I have chosen the "Clock in status bar" plug-in. Its function is very simple: it adds a clock to the status bar, and it can also quickly insert the current time into the content being edited.
The most important parts of the whole project are the following:
In package.json, a command and a configuration menu are registered for the plug-in through the extension's contributes field.
"main": "./extension", // 入口文件地址
"contributes": {
"commands": [{
"command": "clock.insertDateTime",
"title": "Clock: Insert date and time"
}],
"configuration": {
"type": "object",
"title": "Clock configuration",
"properties": {
"clock.dateFormat": {
"type": "string",
"default": "hh:MM TT",
"description": "Clock: Date format according to https://github.com/felixge/node-dateformat"
}
}
}
},
In the entry file extension.js, the status bar UI is created through the API exposed by the system, and the concrete behavior of the command is registered.
'use strict';
// The module 'vscode' contains the VS Code extensibility API
// Import the module and reference it with the alias vscode in your code below
const
clockService = require('./clockservice'),
ClockStatusBarItem = require('./clockstatusbaritem'),
vscode = require('vscode');
// this method is called when your extension is activated
// your extension is activated the very first time the command is executed
function activate(context) {
// Use the console to output diagnostic information (console.log) and errors (console.error)
// This line of code will only be executed once when your extension is activated
// The command has been defined in the package.json file
// Now provide the implementation of the command with registerCommand
// The commandId parameter must match the command field in package.json
context.subscriptions.push(new ClockStatusBarItem());
context.subscriptions.push(vscode.commands.registerTextEditorCommand('clock.insertDateTime', (textEditor, edit) => {
textEditor.selections.forEach(selection => {
const
start = selection.start,
end = selection.end;
if (start.line === end.line && start.character === end.character) {
edit.insert(start, clockService());
} else {
edit.replace(selection, clockService());
}
});
}));
}
exports.activate = activate;
// this method is called when your extension is deactivated
function deactivate() {
}
exports.deactivate = deactivate;
The example above is a bit large and a bit rough, so to summarize, let's look at how the three aspects mentioned at the beginning are reflected.
- UI: We created a status bar component through the system API, and the configuration information lets the system build a settings page for us.
- Interaction: We have added an instruction interaction through the registration command.
- Logic: We have added a new ability logic to insert the current time.
So when we design a plug-in architecture, these are the three main aspects we consider. How, then, does a plug-in actually affect the system? The premise is that a contract is established between the plug-in and the system, agreeing on how the two connect. The contract can include file structure, configuration format, and API signatures. Let's look at VSCode as an example:
- File structure: Following the tradition of NPM, it is agreed that package.json in the directory carries meta-information.
- Configuration format: The path configured in main is agreed to be the code entry, and the dedicated contributes field declares commands and configuration.
- API signature: The extension is required to export two interfaces, activate and deactivate, and the system provides various APIs under the vscode module to complete registration.
Custom UI and interaction essentially depend on how the system itself is implemented, so here we will focus on the modes generally used to invoke the logic inside a plug-in.
Direct call
This mode is very straightforward: in the system's own logic, it calls the agreed APIs of registered plug-ins as needed, and sometimes the plug-in itself is just a single API. The activate and deactivate interfaces in the example above fall into this category. This mode is very common, but it means the caller has to carry more of the plug-in-handling logic itself.
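A hedged sketch of the direct-call mode is shown below; `Host`, `register`, and `activate` are invented names used only to illustrate the shape, not any particular system's API.

```js
// Direct call: the system itself invokes agreed-upon plug-in methods.
class Host {
  constructor() {
    this.plugins = [];
  }
  register(plugin) {
    this.plugins.push(plugin);
  }
  start() {
    // The host decides exactly when, and in what order, plug-ins run.
    for (const plugin of this.plugins) {
      plugin.activate(this);
    }
  }
}

const host = new Host();
host.register({ activate: () => console.log("plugin activated") });
host.start();
```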
Hook mechanism (event mechanism)
The system defines a series of events, plug-ins mount their own logic onto event listeners, and the system dispatches by triggering the events. The clock.insertDateTime command in the example above can also be seen as this type: a command-triggered event. Webpack is a more obvious example of this mechanism; let's look at a simple webpack plugin:
// A named JavaScript function.
function MyExampleWebpackPlugin() {
}

// Define an `apply` method on the plugin function's prototype.
MyExampleWebpackPlugin.prototype.apply = function(compiler) {
  // Specify an event hook exposed by webpack itself to attach to.
  compiler.plugin('webpacksEventHook', function(compilation /* access specific data on webpack's internal instances */, callback) {
    console.log("This is an example plugin!!!");
    // Call the callback provided by webpack once the work is done.
    callback();
  });
};
The plugin here registers the behavior "print This is an example plugin!!! to the console" on the webpacksEventHook hook, and this logic is called every time the hook is triggered. This mode is fairly common, and webpack has even extracted the mechanism into a separate package, https://github.com/webpack/tapable . By defining a variety of hooks with different scheduling logic, you can embed this mode in any system and satisfy different scheduling needs (the scheduling modes are described in detail in the last part).
const {
SyncHook,
SyncBailHook,
SyncWaterfallHook,
SyncLoopHook,
AsyncParallelHook,
AsyncParallelBailHook,
AsyncSeriesHook,
AsyncSeriesBailHook,
AsyncSeriesWaterfallHook
} = require("tapable");
The hook mechanism is suitable for plug-in scenarios with many injection points and high requirements for loose coupling, and it can reduce the complexity of plug-in scheduling across the whole system. The cost is introducing a hook mechanism, which is not expensive, but also not always necessary.
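For reference, basic usage of a tapable hook looks roughly like this; `SyncHook`, `tap`, and `call` are tapable's actual API, while the hook name `beforeRun` and its argument are made up for the example.

```js
const { SyncHook } = require("tapable");

// The system defines a hook, naming the arguments it will pass to listeners.
const beforeRun = new SyncHook(["config"]);

// A plug-in attaches ("taps") its own logic onto the hook.
beforeRun.tap("MyLoggingPlugin", (config) => {
  console.log("about to run with", config);
});

// The system triggers the hook wherever its own flow reaches that point.
beforeRun.call({ mode: "development" });
```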
User scheduling mechanism
The essence of this mode is to expose the capabilities provided by plug-ins uniformly as extra capabilities of the system, and let the system's user ultimately decide when to call them. For example, a jQuery plug-in can register extra behavior on fn, and an Egg plug-in can register extra interface capabilities on the context. Personally, I think this mode is better suited to scenarios where many external capabilities need to be customized and the export of those capabilities needs to be converged, so that users call them through a unified model. If that fits your case, you can try using the Proxy feature to implement this mode.
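Here is a hedged sketch of what that Proxy idea could look like: plug-ins register named capabilities, and users call them through one unified object. All of the names below are invented for illustration.

```js
// Plug-ins register extra capabilities; the user later decides when to call them.
const capabilities = new Map();

function registerCapability(name, fn) {
  capabilities.set(name, fn);
}

// The Proxy turns `system.anything(...)` into a registry lookup, so newly
// registered capabilities become callable without changing the host itself.
const system = new Proxy({}, {
  get(_target, name) {
    if (!capabilities.has(name)) {
      throw new Error(`capability "${String(name)}" is not registered`);
    }
    return capabilities.get(name);
  },
});

// A plug-in contributes a capability...
registerCapability("formatDate", (date) => date.toISOString());
// ...and the user schedules it whenever they see fit.
console.log(system.formatDate(new Date()));
```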
Regardless of whether the system calls the plug-in's capabilities or the plug-in calls the system's, we need certain input and output information, which is also what the API signature above covers. We will talk about it in the next part.
The meaning of plug-in input and output and the capabilities that can be used
The most important contract between the plug-in and the system is the API signature, which involves which APIs can be used and what the input and output of these APIs are.
Available capabilities
This refers to the public utilities that a plug-in's logic can use, and the ways in which it can obtain or affect the state of the system itself. The methods we often use to inject capabilities are parameters, context objects, or factory-function closures.
There are mainly four types of capabilities provided:
- Pure tool: does not affect the system state
- Get current system status
- Modify the current system state
- Capabilities injected in API form: such as registering UI, registering events, etc.
Regarding what capabilities to provide, the general recommendation is to provide the minimum sufficient set for the work the plug-in needs to do, so as to minimize the chance of a plug-in damaging the system. In scenarios where the scope of influence cannot be effectively controlled through the API alone (for example, when a plug-in may call global interfaces), you can consider creating a sandbox environment for the plug-in.
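A minimal sketch of the "least sufficient capability" idea: instead of handing the whole system object to the plug-in, the host assembles a narrow context. `system.config` and `system.commands` are assumed fields of a hypothetical host, shown only to illustrate the different kinds of capability.

```js
// Build a narrow context exposing only what this class of plug-ins needs,
// rather than the whole system instance.
function createPluginContext(system) {
  return {
    // pure utility: does not touch system state
    log: (...args) => console.log("[plugin]", ...args),
    // read-only access to a slice of system state
    getConfig: (key) => system.config[key],
    // a controlled way to affect the system, instead of arbitrary mutation
    registerCommand: (name, handler) => system.commands.set(name, handler),
  };
}

// The plug-in only ever sees the context, never `system` itself:
// plugin.activate(createPluginContext(system));
```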
Input and output
When a plug-in sits inside a specific processing flow of the system (common with the direct-call or hook mechanisms), it focuses on input and output. The input and output at that point are determined by the logic of the flow itself. Their structure should be strongly related to the plug-in's responsibility, should remain serializable where possible (to prevent uncontrolled growth and keep things readable), and carries extra constraints depending on the scheduling mode (described below). If your plug-in's input and output are too complex, you may want to reflect on whether the abstraction is too coarse-grained.
In addition, plug-in logic needs to catch its own exceptions to prevent it from damaging the system itself.
Again using the Babel parser as an example:
{
parseExprAtom(refExpressionErrors: ?ExpressionErrors): N.Expression;
getTokenFromCode(code: number): void; // internally calls finishToken to affect the logic
updateContext(prevType: TokenType): void; // internally changes the context information by modifying this.state
}
Unexpected input should likewise be guarded against rather than allowed to propagate.
What is the relationship between multiple plug-ins
Each plugin should only do a small amount of work, so you can connect them like building blocks. You may need to combine a bunch of them to get the desired result.
What we are discussing here is the combination of plug-ins injected on the same extension point. The common forms are as follows:
Override
Only the newly registered logic is executed; the original logic is skipped.
Pipeline
The input and output of adjacent plug-ins are connected to each other, and generally the input and output are of the same data type.
Onion ring style
On top of the pipeline, if the system's core logic sits in the middle and a plug-in cares about both the inbound and the outbound phase, the onion model can be used.
Here you can also refer to the middleware scheduling mode https://github.com/koajs/compose
const middleware = async (ctx, next) => {
  // before: runs on the way in
  await next();
  // after: runs on the way out
};
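For reference, the onion-style composition used by koa-compose boils down to something like the minimal sketch below (simplified; the real implementation also guards against calling `next()` more than once).

```js
// Minimal onion-style composition: each middleware runs its "before" part,
// awaits the rest of the chain via next(), then runs its "after" part.
function compose(middleware) {
  return function run(ctx) {
    function dispatch(i) {
      const fn = middleware[i];
      if (!fn) return Promise.resolve();
      return Promise.resolve(fn(ctx, () => dispatch(i + 1)));
    }
    return dispatch(0);
  };
}

// Logs: a:before, b, a:after
compose([
  async (ctx, next) => { console.log("a:before"); await next(); console.log("a:after"); },
  async (ctx, next) => { console.log("b"); await next(); },
])({});
```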
Distributed
Distributed mode means that every plug-in is executed, and if there are outputs, the results are merged at the end. The premise is that there is an agreed plan for merging the execution results.
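A hedged sketch of the distributed mode: every plug-in runs against the same input and a pre-agreed strategy merges the outputs. Here the merge strategy is simply flattening arrays, purely for illustration.

```js
// Every plug-in produces its own result; the system merges them at the end.
function runAll(plugins, input) {
  const results = plugins.map((plugin) => plugin(input));
  return results.flat(); // the merge strategy must be agreed on up front
}

// Example: two lint-rule-style plug-ins each return a list of findings.
const findings = runAll(
  [
    (code) => (code.includes("var ") ? ["avoid var"] : []),
    (code) => (code.length > 80 ? ["line too long"] : []),
  ],
  "var x = 1;"
);
console.log(findings); // ["avoid var"]
```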
In addition, scheduling can be divided into synchronous and asynchronous, depending on whether the plug-in logic contains asynchronous behavior. A synchronous implementation is a bit simpler, but if you are unsure, you can consider supporting the asynchronous form first. Tools like https://www.npmjs.com/package/neo-async can help a lot here, and if you use tapable, the corresponding hook types are already defined there.
The other details that need to be noted are:
- Whether execution follows registration order or the reverse needs to be clearly documented, or at least commonly understood.
- What to do if the same plug-in is registered repeatedly.
Summary
If you follow the ideas in this article and think these issues through, you should already have a prototype plug-in architecture in mind. The rest is to combine it with your specific problem, and then use some design patterns to optimize the developer experience. Personally, I believe that designing a plug-in architecture cannot escape thinking about these problems, and only by paying attention to them can we avoid the mistakes often made in future-oriented development, such as showing off techniques and over-designing. Of course, something may be missing, and some of the recommended practices may already be outdated; please feel free to correct me.
Author: ES2049 / armslave00
The article can be reprinted at will, but please keep this link to the original text.
You are very welcome to join ES2049 Studio if you are passionate. Please send your resume to caijun.hcj@alibaba-inc.com.