In development you will sooner or later run into data format conversion. If it is just simple data, almost any method will do. But when you face data that is voluminous, deeply layered, and strongly interrelated, you need to find a suitable approach. This article introduces two feasible conversion models, each suited to different scenarios and preferences.

Developing import/export plug-ins for different platforms is naturally constrained by each platform's hard requirements, such as the function names the host system calls, the agreed format of incoming and returned parameters, the environment variables shared with the host system, and so on. These aspects inevitably differ from platform to platform and are hard to discuss in a unified way, but at their core they are all transformations of data structures, which is the topic of this article.

Whether we are importing or exporting, functionally what we do is similar: transform one data structure into another.
Before we start, let's agree on terminology: we call the data before conversion the source data and the data after conversion the target data.

As an example, suppose we have interface data in our own system's format and need to convert it into the Postman platform's format. This article discusses how to handle the various scenarios that arise.
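To make the terminology concrete, here is a rough sketch of what the two shapes might look like; both are hypothetical and heavily simplified, just to anchor the discussion:

 // Source data: the shape our own system uses (hypothetical)
{
    host: 'https://api.example.com',
    path: '/users',
    request: { ... },
    response: { ... }
}

// Target data: the shape the other platform expects (also hypothetical)
{
    url: 'https://api.example.com/users',
    request: { ... },
    response: { ... }
}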

Field Differences

The simplest case: the field content is the same, but the field name differs. For example, in our data the interface request address is called url, while in the target data format it is called uri. A simple conversion is all it takes:

 const translate = ({ url }) => {
    return {
        uri: url
    }
}
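
Called with a minimal source object (the address below is just a placeholder), the mapping behaves like this:

 translate({ url: 'https://api.example.com/users' })
// → { uri: 'https://api.example.com/users' }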

Let's raise the difficulty a little and consider a more complex situation, where the relationship between fields is one-to-many or many-to-one. For example, the source data describes the interface address with two fields, host + path, while the target data describes it with the single field url. In this case we are better off extracting another small function:

 const translatePath = ({ host, path }) => ({
    url: host + path
})

const translate = ({ host, path }) => {
    return {
        ...translatePath({ host, path })
    }
}

For one-to-many, many-to-one, or many-to-many field relationships, the same idea lets us flexibly encapsulate multiple functions such as translateRequest, translateResponse, and so on, and spread them together in the return value:

 const translate = ({ host, path, request, response }) => {
    return {
        ...translatePath({ host, path }),
        ...translateRequest({ path, request }),
        ...translateResponse({ response })
    }
}
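
translateRequest and translateResponse follow the same pattern as translatePath; the sketches below are assumptions about the field shapes, purely for illustration:

 // Hypothetical sub-functions — the field names are made up
const translateRequest = ({ path, request }) => ({
    request: {
        endpoint: path,
        headers: request.headers
    }
})

const translateResponse = ({ response }) => ({
    response: {
        statusCode: response.status,
        body: response.data
    }
})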

That is the general processing model. Since each sub-function runs and parses independently, with no cache and no side effects, it is easier to maintain. When passing and receiving parameters, you can flexibly take exactly the parameters you need; null checks and boundary handling can also be done in the corresponding sub-function. Conveniently, ES6+ provides elegant destructuring syntax, which keeps the program concise overall.
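
For instance, null checks can often be expressed as default values right in the destructuring pattern; here is a hardened variant of translatePath (the chosen defaults are just one possibility):

 // Defaults guard against missing fields; the values are illustrative
const translatePath = ({ host = '', path = '/' } = {}) => ({
    url: host + path
})

translatePath()                           // → { url: '/' }
translatePath({ host: 'https://a.com' })  // → { url: 'https://a.com/' }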

Level Differences

Everything above concerns relationships between fields, but data structures differ not only in fields but also in levels. If you stick with the processing model above, it can be hard to handle deeply nested data structures with cascading relationships. So we need to introduce another processing model here: one based on the class syntax.

Why introduce a class?

Not to abstract the data structure, but because a class makes chained calls easy to express, and, when necessary, lets us use a cache more conveniently.

Taking the interface data as an example again, consider this scenario: in the source data, the interface's rest parameters are defined at the outermost level, while in the target data they are defined inside the request field, so there is a level difference between the two. I would like to handle conversions between such data structures with syntax like this:
new Translate(source).translateRest()
As before, each function does only one thing, but syntactically the sub-functions are combined in the form of chained calls:

 new Translate(source)
    .translateRest()
    .translateRequest()
    .translateResponse()

So how should the Translate class be implemented?

It is actually not difficult: we just dust off some ancient memories, go back to the original jQuery, and return this at the end of each sub-function. The data converted by each sub-function is naturally stored in properties inside the class, which can also be understood as a cache. Finally, we need one extra function to get the converted, cached data out:

 new Translate(source)
    .translateRest()
    ...
    .translateResponse()
    .getResult()

The Translate class roughly needs the following parts:

  • At least two cache variables, tentatively called source and result: the former stores the source data, the latter stores the data being converted.
  • In the constructor, the received source data is cached so that subsequent sub-functions can access it easily.
  • Each sub-function fetches the source data from the cache, picks specific fields from a specific level according to the source data's shape, converts them, inserts them at the corresponding level of result, and then remembers to return this.
  • After the conversion, call the getResult method to take result out; this step does not return this.
  • Lastly, in my experience, we also need a function that makes printing data convenient; let's call it log for now.

Let's implement this skeleton right away. Besides the class syntax, it can also be combined with the processing model introduced in the first half:
 const translateRest = ({ rest }) => ({
    ...    // return the converted format
})

class Translate {
    constructor(source) {
        // initialization
        this.source = source
        this.result = {}
    }
    translateRest() {
        // this is where the level difference gets handled:
        // read from the outermost level, write under result.request
        this.result.request = {
            ...translateRest(this.source)
        }
        return this
    }
    translateResponse() {
        this.result.response = {
            ...
        }
        return this
    }
    log() {
        // formatted output
        console.log(JSON.stringify(this.result, null, 2))
        return this
    }
    getResult() {
        // this is the last step of the chain
        return this.result
    }
}
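
Usage then looks like this (assuming the elided bodies above are filled in; the source object is a made-up example):

 // Hypothetical source data, only for demonstration
const source = {
    rest: { userId: '42' },
    response: { status: 200 }
}

const result = new Translate(source)
    .translateRest()
    .translateResponse()
    .log()          // peek at the target data being built
    .getResult()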

This processing model does break the atomicity of the sub-functions, because they read and write an external cache, which introduces some side effects; but since there are only two basic caches, the complexity stays manageable. In exchange, the model makes level differences easy to handle. You can use multiple sub-functions to process multiple fields within a given level (mind the order); for example, the next chain uses three sub-functions to process multiple fields under the request level:

 new Translate(source)
    .translateRequestHeader()
    .translateRequestBody()
    .translateRequestQuery()
    ...
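
A hypothetical sketch of two of those methods, added inside the Translate class, shows why the order can matter: the first method creates this.result.request, and the later ones write into it (or should merge defensively):

 // Illustrative methods inside the Translate class; field names are assumptions
translateRequestHeader() {
    // creates the request level in the target data
    this.result.request = {
        headers: this.source.headers
    }
    return this
}
translateRequestBody() {
    // relies on translateRequestHeader having run first;
    // alternatively, merge defensively into this.result.request
    this.result.request.body = this.source.body
    return this
}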

To sum up, this model has the following obvious advantages:

  1. Thanks to the cache, sub-functions can freely read fields at any level of the source data; likewise, they can write target data at any level.
  2. Thanks to the chained syntax, sub-functions can be inserted flexibly and reordered in the processing chain. For example, the log function can be inserted after any sub-function for printing, which is particularly convenient during development and debugging.
 new Translate(source)
    .translateRest()
    ...
    .log()    // insert a printout at any point
    .translateResponse()
    .getResult()
One caveat: because a cache is used, you need to watch out for deep-versus-shallow-copy issues during development.
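
For example, the spread operator copies only one level deep; if a sub-function keeps a reference to a nested object from the source, later mutations leak across. A deep copy (e.g. structuredClone, available in modern browsers and Node 17+) avoids this. A quick illustration:

 const nested = { headers: { token: 'abc' } }

const shallow = { ...nested }          // shares the inner headers object
shallow.headers.token = 'xyz'
console.log(nested.headers.token)      // 'xyz' — the source was mutated!

const deep = structuredClone(nested)   // fully independent copy
deep.headers.token = '123'
console.log(nested.headers.token)      // still 'xyz'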

Conclusion

This article has introduced two processing models, from shallow to deep, which should suffice for most data structure translation scenarios. Once the whole algorithm is implemented, the next step is to package it into a proper plug-in according to the platform's plug-in rules.

Based on this standalone model, we implemented import and export plug-ins for the OpenAPI format in the open source product Eoapi. Of course, the OpenAPI data structure is relatively complex, and we are only beginning to improve the plug-ins gradually; see Eoapi's plugin repository.
Thanks for reading this article; I hope you can now handle data structure transformations with ease.

Eoapi is an open source API tool similar to Postman, but lighter and extensible.
Project address: https://github.com/eolinker/eoapi
Documentation address: https://docs.eoapi.io/?utm_source=SF0802
Official website address: https://www.eoapi.io/

If you have any questions or suggestions about Eoapi, you can find me on GitHub or here, raise an Issue, and I will reply as soon as I see it.

