What is API engineering?
API engineering means automating and standardizing the process of writing, building, publishing, testing, updating, and managing APIs through a set of coordinated tools. The goal is to reduce cross-team communication costs at the API level, lower the cost of managing and updating APIs, and improve development efficiency on every client.
What API engineering achieves at Hundred Bottles Technology
Back-end developers write Protobuf files, push them to GitLab, and open a MergeRequest there. GitLab emails the designated merger; after receiving the reminder, the merger performs CodeReview on GitLab and merges the MergeRequest. The work group then receives an API build message. A developer clicks the Import Now button in Apifox, and the interface documentation in Apifox is updated. Client developers configure the new interface addresses in their own projects, and the new request models are generated.
The API engineering process at Hundred Bottles Technology
Write and manage Protobuf interface files
Setting up and using the basic Protobuf environment will not be repeated here.
Where should the proto code live? As Mr. Jianyu (煎鱼) summed up in his post, it really is a headache, and every company manages proto files differently. This article adopts the centralized-repository approach, as shown below:
Mr. Mao Jian from Kratos also gave a talk on API engineering (API 工程化) that offered his reading of Mr. Jianyu's article; I benefited a lot from listening to it.
The project structure of this article is as follows:
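A rough sketch of the repository layout, reconstructed from the description below (exact file placement is an assumption):

proto-api-client/
├── api/
│   ├── app/                  # client-facing interfaces
│   │   └── user/             # user business domain
│   │       ├── v1/           # interface version
│   │       │   └── user.proto
│   │       ├── user_enums.proto
│   │       └── user_errors.proto
│   └── backstage/            # management-backend interfaces
├── pkg/
│   ├── errors/               # common models for compiling error files
│   ├── model/                # business-independent models (page, address, ...)
│   ├── transport/            # gRPC-code to HTTP-code mapping
│   └── validate/             # parameter validation
├── shell/                    # helper scripts used by CI
└── third_party/              # third-party proto files needed for compilation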
The project is at its core a Go project. The api package is split into app (client-facing) interfaces and backstage (management-backend) interfaces. Under app, the user directory is the user business domain; it contains a v1 package to distinguish interface versions and a user_enums.proto file for enumerations shared across the user domain. The enumeration file is as follows:
syntax = "proto3";
package app.user;
option go_package = "gitlab.bb.local/bb/proto-api-client/api/app/user;user";
// Type: user type
enum Type {
  // zero value
  INVALID = 0;
  // normal user
  NORMAL = 1;
  // VIP user
  VIP = 2;
}
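For reference, protoc-gen-go compiles this enum into Go code of roughly the following shape (a trimmed sketch; the real generated file also contains name maps and descriptor plumbing):

type Type int32

const (
    Type_INVALID Type = 0 // zero value
    Type_NORMAL  Type = 1 // normal user
    Type_VIP     Type = 2 // VIP user
)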
There is also a user_errors.proto file that holds errors shared across the user domain; error handling here follows the kratos approach. The error file is as follows:
syntax = "proto3";
package app.user;
import "errors/errors.proto";
option go_package = "gitlab.bb.local/bb/proto-api-client/api/app/user;user";
option java_multiple_files = true;
enum UserErrorReason {
  option (pkg.errors.default_code) = 500;
  // unknown error
  UNKNOWN_ERROR = 0;
  // resource does not exist
  NOT_EXIST = 1 [(pkg.errors.code) = 404];
}
The errors package in pkg contains the common models used when compiling error files; the model package contains business-independent data models such as page and address; the transport package holds the mapping from gRPC codes to HTTP codes, which is used in error handling; and the validate package holds the files used for interface parameter validation, as follows:
package validate

import (
    "context"

    "google.golang.org/grpc"
    "google.golang.org/grpc/codes"
    "google.golang.org/grpc/status"
)

type validator interface {
    Validate() error
}

// Interceptor is the parameter interceptor: it validates any request that implements validator.
var Interceptor = func(ctx context.Context, req interface{}, info *grpc.UnaryServerInfo, handler grpc.UnaryHandler) (resp interface{}, err error) {
    if r, ok := req.(validator); ok {
        if err := r.Validate(); err != nil {
            return nil, status.Error(codes.InvalidArgument, err.Error())
        }
    }
    return handler(ctx, req)
}
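A minimal sketch of wiring this interceptor into a gRPC server (the import path for the validate package is an assumption):

package main

import (
    "log"
    "net"

    "google.golang.org/grpc"

    "gitlab.bb.local/bb/proto-api-client/pkg/validate" // hypothetical import path
)

func main() {
    lis, err := net.Listen("tcp", ":9000")
    if err != nil {
        log.Fatal(err)
    }
    // Every unary RPC now runs Validate() on its request before the handler executes.
    srv := grpc.NewServer(grpc.UnaryInterceptor(validate.Interceptor))
    // ... register service implementations here ...
    if err := srv.Serve(lis); err != nil {
        log.Fatal(err)
    }
}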
The third_party directory stores the third-party proto files needed to compile the proto files; the remaining files are explained in the course of later steps.
The core interface file is written as follows:
syntax = "proto3";
package app.user.v1;
option go_package = "api/app/user/v1;v1";
import "google/api/annotations.proto";
import "validate/validate.proto";
import "app/user/user_enums.proto";
// User service
service User {
  // add a user
  rpc AddUser(AddUserRequest) returns (AddUserReply) {
    option (google.api.http) = {
      post: "/userGlue/v1/user/addUser"
      body: "*"
    };
  }
  // get a user
  rpc GetUser(GetUserRequest) returns (GetUserReply) {
    option (google.api.http) = {
      get: "/userGlue/v1/user/getUser"
    };
  }
}
message AddUserRequest {
  // User: the user entity
  message User {
    // user name
    string name = 1 [(validate.rules).string = {min_len: 1, max_len: 10}];
    // user avatar
    string avatar = 2;
  }
  // basic user info
  User user = 1;
  // user type
  Type type = 2;
}
message AddUserReply {
  // user id
  string user_id = 1;
  // creation time
  int64 create_time = 2;
}
message GetUserRequest {
  // user id
  string user_id = 1 [(validate.rules).string = {min_len: 1, max_len: 8}];
}
message GetUserReply {
  // user name
  string name = 1;
  // user avatar
  string avatar = 2;
  // user type
  Type type = 3;
}
As the code above shows, the interfaces of a business domain and the messages they use are defined in a single file. Request messages are named method name + Request, and response messages method name + Reply. The advantages: the convention is uniform, and duplicate message models are avoided when the generated swagger document is imported into Apifox. To write interfaces quickly, you can use the code templates built into GoLand and IDEA.
![create_proto_gif](assets/proto.gif)
That concludes the writing of proto interface files; the overall approach borrows from kratos's official example project beer-shop.
Compile and publish Protobuf files
Because written proto files require CodeReview and each developer's local compilation environment may differ, the compilation process is unified on GitLab Runner: once a MergeRequest is merged, GitLab Runner is triggered to compile all proto files on Linux. Installing the Go environment and the related compilation plugins on Linux is not covered here. The GitLab CI configuration file:
before_script:
  - echo "Before script section"
  - whoami
  - sudo chmod +x ./*
  - sudo chmod +x ./shell/*
  - sudo chmod +x ./pkg/*
  - sudo chmod +x ./third_party/*
  - sudo chmod +x ./api/app/*
  - sudo chmod +x ./api/backstage/*
  - git config --global user.name "${GITLAB_USER_NAME}"
  - git config --global user.email "${GITLAB_USER_EMAIL}"
after_script:
  - echo "end"
build1:
  stage: build
  only:
    refs:
      - master
  script:
    - ./index.sh
    - ./gen_app.sh
    - ./gen_backstage.sh
    - ./format_json.sh
    - ./git.sh
    - curl 'https://qyapi.weixin.qq.com/cgi-bin/webhook/send?key=xxx' -H 'Content-Type:application/json' -d "{\"msgtype\":\"markdown\",\"markdown\":{\"content\":\"Build result: <font color=\\"info\\">success</font>\n>Project: $CI_PROJECT_NAME\n>Commit log: $CI_COMMIT_MESSAGE\n>Pipeline: [$CI_PIPELINE_URL]($CI_PIPELINE_URL)\"}}"
    - ./index.sh
before_script sets the file permissions and configures the git account, and after_script prints a statement marking the end of the build. only.refs restricts build1 so that it is triggered only on the master branch, and script is the core execution sequence.
index.sh
is used to copy the repository from GitLab onto the server where GitLab Runner runs.
#!/bin/bash
cd ..
echo "current directory: `pwd`"
rm -rf ./proto-api-client
git clone http://xx:xxx!@gitlab.xx.xx/xx/proto-api-client.git
gen_app.sh
is used to compile the client (app) interfaces.
#!/bin/bash
# errors
API_PROTO_ERRORS_FILES=$(find api/app -name '*errors.proto')
protoc --proto_path=. \
--proto_path=pkg \
--proto_path=third_party \
--go_out=paths=source_relative:. \
--client-errors_out=paths=source_relative:. \
$API_PROTO_ERRORS_FILES
# enums
API_PROTO_ENUMS_FILES=$(find api/app -name '*enums.proto')
protoc --proto_path=. \
--proto_path=third_party \
--go_out=paths=source_relative:. \
$API_PROTO_ENUMS_FILES
# api
API_PROTO_API_FILES=$(find api/app/*/v* -name '*.proto')
protoc --proto_path=. \
--proto_path=api \
--proto_path=pkg \
--proto_path=third_party \
--go_out=paths=source_relative:. \
--new-http_out=paths=source_relative,plugins=http:. \
--new-grpc_out=paths=source_relative,plugins=grpc:. \
--new-validate_out=paths=source_relative,lang=go:. \
--openapiv2_out . \
--openapiv2_opt allow_merge=true,merge_file_name=app \
--openapiv2_opt logtostderr=true \
$API_PROTO_API_FILES
Error handling
$(find api/app -name '*errors.proto') enumerates all files ending in errors.proto. client-errors_out is the plugin obtained by recompiling the kratos errors source; it is used the same way as kratos errors.
Enumeration processing
$(find api/app -name '*enums.proto') enumerates all files ending in enums.proto.
Interface handling
$(find api/app/*/v* -name '*.proto') enumerates all interface files. new-http_out and new-grpc_out are the compilation commands that support the company's self-developed framework.
Parameter validation
new-validate_out: the validate parameter-validation plugin conflicts with enumerations when compiled in a Linux environment (the author has not solved this yet), so its source was downloaded and recompiled into this command. The compilation result is as follows:
openapiv2_out
This uses the openapiv2 plugin. The allow_merge=true,merge_file_name=app options merge all interface files into a single interface document named app.swagger.json, and logtostderr=true enables logging. The command produces an app.swagger.json file that can be imported into Apifox. Apifox really is a godsend that greatly simplifies interface-related work; its usage is not repeated here, see the official website. The compiled documentation looks like this:
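For a sense of its shape, a heavily trimmed, hypothetical fragment of app.swagger.json for the User service above:

{
  "swagger": "2.0",
  "info": { "title": "app", "version": "1.0" },
  "paths": {
    "/userGlue/v1/user/addUser": {
      "post": {
        "operationId": "User_AddUser",
        "responses": {
          "200": {
            "description": "A successful response.",
            "schema": { "$ref": "#/definitions/v1AddUserReply" }
          }
        }
      }
    }
  },
  "definitions": {
    "v1AddUserReply": {
      "type": "object",
      "properties": {
        "userId": { "type": "string" },
        "createTime": { "type": "string", "format": "int64" }
      }
    }
  }
}

Note createTime: it is an int64 in the proto, but the document renders it as a string with format: int64, which is exactly what the next step addresses.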
format_json.sh
Because the openapiv2 plugin renders int64 fields as string type in the interface document, and to make it easy for front-end colleagues to tell which string fields were converted from int64, a JS script modifies the generated swagger.json: the field description of every string field converted from int64 gains an int64 marker. As shown in the figure:
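Concretely, for a field like createTime from the fragment above, the script's effect is (a sketch of the format-plus-type branch below):

before: "createTime": { "type": "string", "format": "int64" }
after:  "createTime": { "type": "string", "format": "int64", "description": "original type: (int64)" }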
The script is as follows:
#!/bin/bash
node ./format.js
Node executes the JS code that modifies the compiled swagger.json document:
const fs = require('fs');
const path = require('path');

const jsonFileUrl = path.join(__dirname, 'app.swagger.json');

function deepFormat(obj) {
  // guard against null (typeof null is also 'object'); arrays are handled below
  if (obj && typeof obj === 'object' && !Array.isArray(obj)) {
    const keys = Object.keys(obj);
    const hasFormat = keys.includes('format');
    const hasTitle = keys.includes('title');
    const hasDescription = keys.includes('description');
    const hasName = keys.includes('name');
    const hasType = keys.includes('type');
    if (hasFormat && hasTitle) {
      obj.title = `${obj.title} (${obj.format})`;
      return;
    }
    if (hasFormat && hasDescription) {
      obj.description = `${obj.description} (${obj.format})`;
      return;
    }
    if (hasFormat && hasName && !hasDescription) {
      obj.description = `original type: (${obj.format})`;
      return;
    }
    if (hasFormat && hasType && !hasName && !hasDescription) {
      obj.description = `original type: (${obj.format})`;
      return;
    }
    for (let i = 0; i < keys.length; i++) {
      const key = keys[i];
      const value = obj[key];
      if (typeof value === 'object') {
        deepFormat(value);
      }
    }
    return;
  }
  if (Array.isArray(obj)) {
    for (let i = 0; i < obj.length; i++) {
      const value = obj[i];
      if (typeof value === 'object') {
        deepFormat(value);
      }
    }
  }
}

async function main() {
  const jsonOriginString = fs.readFileSync(jsonFileUrl, 'utf8');
  const jsonOrigin = JSON.parse(jsonOriginString);
  deepFormat(jsonOrigin);
  fs.writeFileSync(jsonFileUrl, JSON.stringify(jsonOrigin, null, 2));
}

main();
git.sh
is used to commit the compiled code; the -o ci.skip push option keeps this commit from triggering GitLab Runner again, avoiding a trigger loop.
#!/bin/bash
# get the last commit record
result=$(git log -1 --oneline)
# git
git status
git add .
git commit -m "$result compile pb and generate openapiv2 docs"
git push -o ci.skip http://xx:xx!@gitlab.xx.xx/xx/proto-api-client.git HEAD:master
curl https://qyapi.weixin.qq.com/cgi-bin/webhook/send...
is used to send the build result to the work group after a successful build; Enterprise WeChat (WeCom) is used here, and its usage is not repeated. The effect is as follows:
index.sh
clones the compiled code onto the GitLab Runner server once more.
Update the interface documentation in Apifox
Apifox supports importing from an online data source, but GitLab's data-source URL requires authentication, which Apifox does not yet support, so a compromise was found: after the compiled code is committed, it is cloned back onto the GitLab Runner server, and nginx maps a data-source URL that requires no authentication. That unauthenticated URL is what gets filled into Apifox.
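A minimal sketch of such an nginx mapping (the host name and file paths here are assumptions):

server {
    listen 80;
    server_name apidoc.bb.local;  # hypothetical internal host

    # serve the generated swagger document from the re-cloned repository, no auth required
    location = /app.swagger.json {
        alias /home/gitlab-runner/proto-api-client/app.swagger.json;
        default_type application/json;
    }
}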
Update request models on the client
As we all know, most languages other than JavaScript need corresponding data models to work with JSON. Although Apifox can generate data models, it is not convenient enough: whenever an interface changes, the models must be regenerated manually and swapped into the project, which makes for a poor development experience.
To address these pain points, a simple yet powerful tool was built on Node.js.
Data model generation
The first problem to solve is how to generate the data models. A quick survey turned up many excellent, ready-to-use wheels; the power of open source really is boundless.
In the end quicktype was selected: its developers provide an online tool, and its core dependency package quicktype-core is used here to build our own tool.
quicktype accepts a model description string in JSON Schema format, converts it into an array of model-code lines according to the target-language settings, and the assembled result is written to the specified file.
The calling method is as follows:
import {
  quicktype,
  InputData,
  JSONSchemaInput,
  FetchingJSONSchemaStore,
} from 'quicktype-core';

/**
 * @description: convert a single Model
 * @param {string} language target language
 * @param {string} messageName Model name
 * @param {string} jsonSchemaString Model JSON Schema string
 * @param {LanguageOptions} option extra settings for the target language
 * @return {string} the converted Model content
 */
async function convertSingleModel(
  language: string,
  messageName: string,
  jsonSchemaString: string,
  option: LanguageOptions, // LanguageOptions is the tool's own options type
): Promise<string> {
  const schemaInput = new JSONSchemaInput(new FetchingJSONSchemaStore());
  await schemaInput.addSource({
    name: messageName,
    schema: jsonSchemaString,
  });
  const inputData = new InputData();
  inputData.addInput(schemaInput);
  const { lines } = await quicktype({
    inputData,
    lang: language,
    rendererOptions: option,
  });
  return lines.join('\n');
}
...
/**
 * @description: write a single converted Model to file
 * @param {ModelInfo} modelInfo converted Model info
 * @param {string} outputDir output directory
 * @return {*}
 */
function outputSingleModel(modelInfo: ModelInfo, outputDir: string): void {
  const {
    name, type, region, suffix, snake,
  } = modelInfo;
  let filePath = join(region, type, `${name}.${suffix}`);
  if (snake) {
    filePath = snakeNamedConvert(filePath); // convert the output path for languages that require snake_case naming
  }
  filePath = join(outputDir, filePath);
  const outputDirPath = dirname(filePath);
  try {
    fs.mkdirSync(outputDirPath, { recursive: true });
  } catch (error) {
    errorLog(`failed to create directory: ${outputDirPath}`);
  }
  let { content } = modelInfo;
  // post hook, called after conversion and before output, to uniformly adjust the output format
  if (hooks[modelInfo.language]?.after) {
    content = hooks[modelInfo.language].after(content);
  }
  try {
    writeFileSync(filePath, content);
  } catch (error) {
    errorLog(`failed to write file: ${filePath}`);
  }
  successLog(`${filePath} converted successfully`);
}
Note that when the input object contains nested objects, the converter looks for the corresponding references in the definitions field of the incoming JSON Schema, so you must either pass the complete definitions along, or recursively extract the objects that will be referenced ahead of time and reassemble the JSON Schema.
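For example, in this minimal, hypothetical JSON Schema sketch, the nested user object can only be resolved if the User definition is carried along in definitions:

{
  "type": "object",
  "properties": {
    "user": { "$ref": "#/definitions/User" }
  },
  "definitions": {
    "User": {
      "type": "object",
      "properties": {
        "name": { "type": "string" },
        "avatar": { "type": "string" }
      }
    }
  }
}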
Improve efficiency
The above completes the conversion and output of a single Model, but efficiency can still be improved: wouldn't it be great to batch-convert the models of all the interfaces you need?
To meet that goal, the tool is provided as an npm package. After a global install, the bb-model command triggers the conversion; you only need to place a configuration file in the project.
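A sketch of its content (the values and the exact shape are assumptions; the real fields are listed next):

{
  "language": "swift",
  "indexUrl": "http://apidoc.bb.local/app.swagger.json",
  "output": "./models",
  "apis": ["/userGlue/v1/user/addUser", "/userGlue/v1/user/getUser"]
}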
Specific field meanings:
language: the target language
indexUrl: the swagger document URL
output: the output path, relative to the current configuration file
apis: the interfaces that need to be converted
Running the bb-model command produces output like the following:
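Judging from the successLog call in outputSingleModel above, the output is presumably along these lines (paths hypothetical):

models/user/addUserReply.swift converted successfully
models/user/getUserReply.swift converted successfully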
The configuration file can be managed by version control along with the project, which helps multi-member collaboration, and later integration into CI/CD is also straightforward.
Model conversion is the last piece of the puzzle in the first phase of Hundred Bottles' API engineering, and it greatly improves client developers' efficiency.
Summary
This completes the whole process of the first phase of API engineering. In the future, lint checks for proto files will be added, compiled interface files will be released as tags, and Java language support will be added.
References
[1] protobuf: https://github.com/protocolbuffers/protobuf
[2] beer-shop: https://github.com/go-kratos/beer-shop
[3] kratos-errors: https://go-kratos.dev/docs/component/errors/
[4] openapiv2: https://github.com/grpc-ecosystem/grpc-gateway/tree/master/protoc-gen-openapiv2
[5] validate: https://github.com/envoyproxy/protoc-gen-validate
[6] apifox: https://www.apifox.cn/
[7] quicktype-core: https://www.npmjs.com/package/quicktype-core
[8] gitrunner: https://docs.gitlab.com/runner/