Summary
Students with a computer science background are probably familiar with the word "pipeline". In Linux systems in particular, the pipe operator is used everywhere and brings great convenience to our daily work. gulp, a well-known build tool in the front-end field, is likewise famous for its pipeline (stream) operations.
Today, let's look step by step at how to design a "pipeline data flow" in the front-end field.
1. Introduction
Students with a computer background are probably familiar with the word "pipeline". In Linux systems in particular, the pipe operator is widely used (for example, cat access.log | grep error | sort) and brings great convenience to our daily work. Pipelines are usually divided into one-way pipelines and two-way pipelines. When data flows from one pipe section to the next, the current section processes it to some extent and then sends the result onward; and so on, so that the original data is refined step by step as it flows through the pipeline, until we finally obtain the target data we want.
In our daily development we can also apply the concept of pipeline data flow to optimize our program architecture. It makes the data flow of the program clearer and, as on an assembly line, lets each pipe section take charge of its own rough-processing step on the data source, achieving clear responsibilities and decoupled modules.
2. Program design
Now let's use TypeScript to implement a basic pipeline class design. The pipeline we build today is a one-way pipeline.
2.1 pipe: the adapter
As the name implies, the adapter is the connector that joins multiple pipe sections together into a complete pipeline. Through this connector we can control the flow of data and route it to where it really should go.
First, let's design the type structure of our adapter:
type Pipeline<T = any> = {
  /**
   * Links multiple pipe sections together
   * e.g.
   * const app = new BaseApp();
   * app.pipe(new TestApp1()).pipe(new TestApp2()).pipe(new TestApp3()).pipe(new Output()).pipe(new End())
   * @param _next
   */
  pipe(_next: Pipeline<T>): Pipeline<T>;
};
The code above describes the adapter that a class supporting pipeline data needs to expose. In the program design, the adapter is simply a function used to link pipe sections to each other.
As you can see, for better reuse we make the data type transmitted through the pipeline generic, so that we can use types more flexibly when implementing a concrete program. For example:
// A pipeline carrying Date values
type DatePipeline = Pipeline<Date>
// A pipeline carrying string arrays
type ArrayPipeLine = Pipeline<string[]>
// A pipeline carrying a custom data type
type CustomPipeLine = Pipeline<{name: string, age: number}>
The parameter and return value of this function also deserve attention. As the code shows, pipe receives a pipeline and returns a pipeline. The parameter is the next pipe section, which is how two sections get connected. The reason it also returns a pipeline is to allow chained calls, which better matches the design concept of pipeline data flow, for example:
const app = new AppWithPipeline();
app.pipe(new WorkerPipeline1())
  .pipe(new WorkerPipeline2())
  .pipe(new WorkerPipeline3())
  .pipe(new WorkerPipeline4());
In other words, what pipe returns is actually a reference to the next pipe section; if it returned this instead, every call in the chain would just overwrite the first section's next pointer.
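To make that concrete, here is the same assembly written without chaining (a sketch reusing the hypothetical worker classes above):
const app = new AppWithPipeline();
const w1 = new WorkerPipeline1();
const w2 = new WorkerPipeline2();
app.pipe(w1); // returns w1, so a chained call would continue from w1
w1.pipe(w2);  // returns w2
// app -> w1 -> w2: exactly the chain the fluent form builds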
2.2 push: the pump
With the adapter in place, we still need a "pump" to keep pushing our data through the pipe sections until it reaches the target.
type Pipeline<T = any> = {
  /**
   * Implement this method to pass data down the pipeline, section by section
   * @param data
   */
  push(data: T[]): Promise<void>;
  /**
   * Links multiple pipe sections together
   * e.g.
   * const app = new BaseApp();
   * app.pipe(new TestApp1()).pipe(new TestApp2()).pipe(new TestApp3()).pipe(new Output()).pipe(new End())
   * @param _next
   */
  pipe(_next: Pipeline<T>): Pipeline<T>;
};
To suit more scenarios, the pump is designed to accept an array of type T[]. In the first section of the pipeline, once we obtain the initial data source, we can use this pump (the push method) to push the data out and let each downstream workshop process it.
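As a minimal usage sketch (the head pipe here is hypothetical; concrete classes follow in section 3): only the head of the pipeline is pushed manually, and every later section is fed by its predecessor.
declare const head: Pipeline<string>; // the first pipe section
// Prime the pipeline: the pump gives the data its initial momentum.
head.push(["raw-1", "raw-2"]);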
2.3 resolveData: the processing workshop
When data is pushed into a given pipe section, a processing workshop rough-processes the incoming data according to that section's own procedure.
Note: each processing workshop should keep its duties as separate as possible. Each workshop is responsible for one part of the work and one rough-processing pass over the data; cramming all the work into a single workshop would defeat the purpose of pipeline data flow.
type Pipeline<T = any> = {
  /**
   * Implement this method to pass data down the pipeline, section by section
   * @param data
   */
  push(data: T[]): Promise<void>;
  /**
   * Links multiple pipe sections together
   * e.g.
   * const app = new BaseApp();
   * app.pipe(new TestApp1()).pipe(new TestApp2()).pipe(new TestApp3()).pipe(new Output()).pipe(new End())
   * @param _next
   */
  pipe(_next: Pipeline<T>): Pipeline<T>;
  /**
   * Receives the data passed down from the previous pipe section; the data
   * may be processed here and is then passed on to the next section
   * @param data
   */
  resolveData(data: T[]): T[] | Promise<T[]>;
};
The processing workshop also receives a T[] data array. After getting the data, it processes it according to its own procedure, puts the result back on the conveyor belt (the return value), and sends it on to the workshop of the next pipe section for further processing.
3. The specific implementation
So far we have only defined the most basic behaviors a pipeline should have; only a class with these behaviors counts as a qualified pipe. Next, let's look at how a pipeline class can be implemented.
3.1 Basic pipeline model class
class BaseApp<P = any> implements Pipeline<P> {
  constructor(data?: P[]) {
    data && this.push(data);
  }
  /**
   * Internal use only: a reference to the next pipe section
   */
  protected next: Pipeline<P> | undefined;
  /**
   * On receiving data, process it with resolveData, then push the
   * resulting data on to the next pipe section
   * @param data
   */
  async push(data: P[]): Promise<void> {
    data = await this.resolveData(data);
    this.next?.push(data);
  }
  /**
   * Links pipes together.
   * pipe always returns the reference to the next section, which is
   * what makes chained calls possible
   * @param _next
   * @returns
   */
  pipe(_next: Pipeline<P>): Pipeline<P> {
    this.next = _next;
    return _next;
  }
  /**
   * Data processing: returns the updated data
   * @param data
   * @returns
   */
  resolveData(data: P[]): P[] | Promise<P[]> {
    return data;
  }
}
We define a base class that implements the Pipeline interface to describe what every pipe looks like; all of our pipes need to inherit from this base class.
In the constructor we accept an optional parameter representing the initial data source. Only the first pipe section needs to pass this parameter, injecting the initial data into the whole pipeline. Once we have this initial data, we use the pump (push) to push it out.
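One subtlety is worth noting (an observation of mine about the code above, not part of the original design notes): push is async, so the await inside it defers the forwarding step to a microtask. That is why pumping from the constructor works, provided the pipes are assembled synchronously right after construction:
// push() is started in the constructor, but its forwarding step
// (this.next?.push(...)) sits behind an await, i.e. in a microtask.
const head = new BaseApp<number>([1, 2, 3]);
// This synchronous pipe() call runs before that microtask, so next
// is already wired up by the time the data actually moves.
head.pipe(new BaseApp<number>());
If assembly were itself deferred (for example, behind another await), the first push would find next undefined and the data would be silently dropped.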
3.2 A unified pipeline data object
Usually, when implementing the program, we define a unified data object as the data flowing through the pipeline; this is easier to maintain and manage.
type PipeLineData = {
  userInfo: {
    firstName: string;
    lastName: string;
    age: number;
    // Composed later by PipelineWorker2
    name?: string;
  };
};
3.3 The first section of the pipeline
Since there is no pipe before the first section, if we want the data to flow we must use the pump at the first section to give the data its initial kinetic energy. The implementation of the first section therefore differs slightly from the other pipes.
export class PipelineWorker1 extends BaseApp<PipeLineData> {
  constructor(data: PipeLineData[]) {
    super(data);
  }
}
The main job of the first pipe section is to accept the original data source and use the pump to send it off, so its implementation is fairly simple: it only needs to inherit from the base class BaseApp and hand the initial data source up to it; the base class's pump then pushes the data out.
3.4 Other pipelines
Each of the other pipes has its own data processing workshop to process the data flowing into it, so they also need to override the base class's resolveData method.
export class PipelineWorker2 extends BaseApp<PipeLineData> {
  constructor() {
    super();
  }
  resolveData(data: PipeLineData[]): PipeLineData[] | Promise<PipeLineData[]> {
    // Here we apply this section's specific processing to the data.
    // Note: operate on the incoming data in place where possible, to
    // preserve references.
    data.forEach(item => {
      item.userInfo.name = `${item.userInfo.firstName} · ${item.userInfo.lastName}`;
    });
    // Finally, call the base class's resolveData with the processed
    // data; this completes one step of processing.
    return super.resolveData(data);
  }
}
export class PipelineWorker3 extends BaseApp<PipeLineData> {
  constructor() {
    super();
  }
  resolveData(data: PipeLineData[]): PipeLineData[] | Promise<PipeLineData[]> {
    // Here we apply this section's specific processing to the data.
    // Note: operate on the incoming data in place where possible, to
    // preserve references.
    data.forEach(item => {
      item.userInfo.age += 10;
    });
    // Finally, call the base class's resolveData with the processed
    // data; this completes one step of processing.
    return super.resolveData(data);
  }
}
export class Output extends BaseApp<PipeLineData> {
  constructor() {
    super();
  }
  resolveData(data: PipeLineData[]): PipeLineData[] | Promise<PipeLineData[]> {
    // This section's "processing" is simply inspecting the data as it
    // flows past.
    console.log(data);
    // Finally, call the base class's resolveData with the data;
    // this completes one step of processing.
    return super.resolveData(data);
  }
}
// Thanks to the flexible assembly of pipes, we can also build all
// kinds of plugins that can be plugged in or removed at any time.
export class Plugin1 extends BaseApp<PipeLineData> {
  constructor() {
    super();
  }
  resolveData(data: PipeLineData[]): PipeLineData[] | Promise<PipeLineData[]> {
    // Here we apply this section's specific processing to the data.
    // Note: operate on the incoming data in place where possible, to
    // preserve references.
    console.log("This is a plugin");
    // Finally, call the base class's resolveData with the data;
    // this completes one step of processing.
    return super.resolveData(data);
  }
}
3.5 Assembling the pipes
Above we have prepared each section of pipe; now it's time to assemble them and put them into use.
const datasource = {
userInfo: {
firstName: "kiner",
lastName: "tang",
age: 18
}
};
const app = new PipelineWorker1([datasource]);
// Pipes can be combined in any order
app.pipe(new Output())
.pipe(new PipelineWorker2())
.pipe(new Output())
.pipe(new PipelineWorker3())
.pipe(new Output())
.pipe(new Plugin1());
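For reference, running the assembly above in Node.js should print roughly the following (my own trace, not output from the original article; Node formats objects at log time, so the first Output sees the raw data, the second sees the composed name, and the third sees the increased age):
// 1st Output: [ { userInfo: { firstName: 'kiner', lastName: 'tang', age: 18 } } ]
// 2nd Output: [ { userInfo: { firstName: 'kiner', lastName: 'tang', age: 18, name: 'kiner · tang' } } ]
// 3rd Output: [ { userInfo: { firstName: 'kiner', lastName: 'tang', age: 28, name: 'kiner · tang' } } ]
// Plugin1:    This is a plugin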
4. Conclusion
So far we have completed the design of a pipeline architecture. Don't you think that with pipeline data flow the data flow of the whole program is clearer, the division of labor between modules is more explicit, and the cooperation between modules is more flexible?
The pipeline design also lets us build a plug-in library on top of it: users can freely write plugins for all kinds of business scenarios, giving the program excellent extensibility.
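For example, "unplugging" Plugin1 from the assembly in section 3.5 is just a matter of deleting one pipe() call; no other section needs to change (a sketch reusing the classes above):
app.pipe(new Output())
  .pipe(new PipelineWorker2())
  .pipe(new Output())
  .pipe(new PipelineWorker3())
  .pipe(new Output()); // Plugin1 unplugged; the rest of the line is untouched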