
Overview

StarChain (星链) is a tool platform built by the foundational R&D team of JD's consumer-finance business to improve R&D efficiency. It targets back-end development needs that are not technically difficult but are tedious: integration, scenario-specific, and customized requirements such as backend-for-frontend (BFF) services, service process orchestration, asynchronous message processing, scheduled tasks, operations back-ends, workflow automation, and temporary requirements. With StarChain, developers build microservices in a low-code way and deploy them declaratively on a serverless runtime, letting them focus on business logic rather than plumbing, which greatly improves R&D efficiency while reducing cost.

1 What is StarChain

StarChain is a tool platform developed by the foundational R&D team of JD's consumer-finance business to improve R&D efficiency. It is aimed at back-end development needs, especially those that are not technically difficult but are tedious: integration, scenario-specific, and customized requirements such as backend-for-frontend (BFF) services, service process orchestration, asynchronous message processing, scheduled tasks, operations back-ends, workflow automation, and temporary requirements. Developers use StarChain to build microservices in a low-code way and deploy them serverlessly in a declarative manner, so they can focus on business logic rather than the surrounding details. This greatly improves R&D efficiency, reducing cost while increasing output.

StarChain provides a unified web interface through which users can complete the entire process of developing, debugging, building, testing, and deploying microservices. On the development side, besides visual configuration and component-based process orchestration, it supports languages such as Java, JavaScript, and Groovy, allows importing third-party packages, and integrates with Git, so low code does not come at the cost of flexibility or control. On the deployment side, it implements declarative deployment: users do not need to care about server details, and the system intelligently schedules shared computing resources across clusters, automatically satisfying the deployment declaration while saving computing cost, with deployments completing in seconds. In addition, StarChain fully accounts for the realities inside large and medium-sized enterprises: services of different teams are completely isolated; test, pre-release, and production services are completely isolated; grayscale releases and encrypted parameters are supported; computing resources can be either traditional virtual machines or Kubernetes; and deployment to private clouds and other public clouds is supported, delivering enterprise-grade serverless.

Since its first version launched in March 2019, StarChain has been iteratively improved. Besides broadly supporting consumer-finance scenarios, it also serves the business of many other departments of the group, such as wealth management, payments, the marketing center, and insurance, and has repeatedly withstood the performance and stability tests of the 618 and Double 11 shopping festivals. To support external delivery and a wider range of scenarios, StarChain was modularized and productized in 2021, removing its dependence on JD-specific middleware, and launched on JD's public cloud in March 2022 to serve external customers.

2 StarChain Core Concepts

StarChain has two core concepts: VMS and Serverless, which are introduced separately below.

2.1 VMS

VMS stands for Visual MicroService. A VMS is a lightweight microservice application and is the basic unit of development and deployment in StarChain. It is called "visual" mainly because StarChain supports, and advocates, visually orchestrating service logic. VMS also denotes a simple and flexible microservice programming model, shown in Figure 1.

Figure 1 VMS programming model

The basic idea of this model is to turn the internal and external dependencies of a microservice application into components and configuration, so that developers can focus on business logic. It includes three core concepts: functions, connectors, and triggers. A function encapsulates business logic and consists of a standard input/output declaration and a function body; both code functions and BPMN functions are supported. Code functions can currently be written in Java, Groovy, or JavaScript, with more languages planned. BPMN functions use the BPMN standard to orchestrate business logic visually; they can reference code functions or other BPMN functions to build complex business processes. Connectors encapsulate third-party services, including RPC services, HTTP services, asynchronous messages, cache services, configuration services, databases, and more. Triggers expose functions as external services, supporting RPC, HTTP REST APIs, scheduled tasks, and MQ message triggering; each trigger is associated with one function. StarChain will continue to add new trigger and connector types, and will open an SDK in the future so users can build types that StarChain does not yet support.
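To make the function/connector/trigger relationship concrete, here is a minimal sketch in JavaScript. The names (`context.connectors`, `userService.getUser`) are invented for illustration and are not StarChain's actual API: a code function receives a declared input, calls connector methods supplied by the runtime, and returns a declared output; a trigger (HTTP, RPC, timer, or MQ) would then be associated with the function.

```javascript
// Hypothetical VMS code function: input and output shapes would be
// declared in the platform; connectors are injected via the context.
function greetUser(input, context) {
  // call a connector method that encapsulates a third-party RPC/HTTP service
  const user = context.connectors.userService.getUser({ id: input.userId });
  return { greeting: `Hello, ${user.name}`, vip: user.level >= 3 };
}

// mock context standing in for the platform runtime, for local testing
const mockContext = {
  connectors: {
    userService: { getUser: ({ id }) => ({ id, name: "Ada", level: 3 }) },
  },
};

console.log(greetUser({ userId: 42 }, mockContext));
// → { greeting: 'Hello, Ada', vip: true }
```

Because the function only touches its input and the injected context, it can be unit-tested with mocks exactly as above, which is also how the platform's Mock-based debugging works in spirit.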

2.2 Serverless

StarChain's serverless does not mean there are no servers; rather, the server is abstracted as much as possible so that users pay minimal attention to its details. Users still need to care about computing resources to some extent: inside an enterprise, they must consider resource cost allocation and which data centers resources live in to ensure high availability. StarChain abstracts exactly what users need to care about into a computing resource model and a declarative deployment model. The computing resource model is shown in Figure 2.

Figure 2 Computing resource model

This model supports both traditional virtual machines and Kubernetes (K8s). Clusters and groups are abstractions of computing resources; a cluster contains multiple groups. A cluster mainly exists to make managing multiple groups convenient, and its main attributes are a name and remarks. A group has an environment attribute (test, pre-release, or production) and a resource-type attribute, where the resource type can be virtual machine or K8s. For the K8s type, a group is associated with a K8s namespace, and StarChain automatically creates and manages the computing-engine Pods in it. For virtual machines, computing engines must be created separately and their IPs then associated with the StarChain group. Users do not need to apply for any computing resources during development, debugging, and testing. For production deployment, they apply for K8s or virtual-machine resources themselves and associate them with StarChain clusters and groups; each team only needs to do this once. Declarative deployment works the same way for K8s groups and virtual-machine groups; the model is shown in Figure 3.
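As a hypothetical sketch (the field names are illustrative, not StarChain's actual configuration format), the cluster/group abstraction just described might be declared roughly like this:

```yaml
cluster:
  name: payment-cluster
  remark: "payment team shared resources"
  groups:
    - name: prod-k8s-group
      environment: production
      resourceType: k8s
      namespace: starchain-payment       # StarChain manages engine Pods here
    - name: prod-vm-group
      environment: production
      resourceType: vm
      engineIPs: [10.0.1.11, 10.0.1.12]  # engines created separately, then associated
```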

Figure 3 Declarative deployment model

Each environment (test, pre-release, production) has its own deployment configuration, and there can be multiple deployment configurations per environment, distinguished by traffic entry. Each entry configuration may include multiple deployment items, each of which declares which cluster and group to deploy to, which version to deploy, the desired number of instances, and so on. The system intelligently allocates computing resources, monitors health, and ensures the deployment declaration is met.
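Under this model, a deployment declaration might look like the following sketch (hypothetical field names, for illustration only): one traffic entry declares two deployment items, in two groups of the same cluster, and the system reconciles the running state to match.

```yaml
environment: production
entries:
  - trafficEntry: http          # configurations are distinguished by traffic entry
    deployments:
      - cluster: payment-cluster
        group: prod-k8s-group
        version: 1.4.0
        instances: 4
      - cluster: payment-cluster
        group: prod-vm-group
        version: 1.4.0
        instances: 2
```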

3 StarChain Application Scenarios

StarChain has many application scenarios, which are described below.

3.1 Business Process Orchestration

In a microservice system there are many kinds of microservices, but they can be roughly divided into two categories: relatively stable, scenario-independent atomic microservices whose domain has settled down, and a large number of relatively changeable, scenario-oriented microservices. Scenario-oriented microservices typically realize business processes by integrating and orchestrating atomic microservices, but the process differs from scenario to scenario. For a new scenario, creating a VMS lets you orchestrate the process quickly and reliably, and the visualized process helps business, product, and test colleagues understand it, improving collaboration. At the same time, each new scenario is developed and deployed independently, which is easy to manage and does not affect existing business.

3.2 Backend for Frontend (BFF)

The front end spans many media, such as PC, mobile app, H5, and mini programs. Each medium may require different interfaces, and the data format the front end needs may also differ from what back-end microservices return. In addition, the data behind a single front-end interface often has to be assembled from multiple back-end microservices. Creating a VMS is a quick way to meet these front-end-facing data aggregation and adaptation requirements.
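A BFF-style VMS function might look like the following sketch. The connector names (`orderService`, `productService`) and field names are invented for illustration: the function combines two back-end calls and adapts the result to the shape one front-end page needs.

```javascript
// Hypothetical BFF aggregation function for an order-detail page.
function orderDetailPage(input, context) {
  const { orderService, productService } = context.connectors;
  const order = orderService.getOrder({ orderId: input.orderId });
  const product = productService.getProduct({ skuId: order.skuId });
  // adapt back-end fields into the front-end's expected display format
  return {
    orderId: order.id,
    title: product.title,
    priceText: `¥${(order.priceCents / 100).toFixed(2)}`,
  };
}

// mock connectors standing in for the real back-end microservices
const mockContext = {
  connectors: {
    orderService: {
      getOrder: ({ orderId }) => ({ id: orderId, skuId: "s1", priceCents: 1999 }),
    },
    productService: { getProduct: ({ skuId }) => ({ title: "Keyboard" }) },
  },
};

console.log(orderDetailPage({ orderId: "o100" }, mockContext));
// → { orderId: 'o100', title: 'Keyboard', priceText: '¥19.99' }
```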

3.3 Asynchronous message processing

In a microservice architecture, different microservices often coordinate through asynchronous messages, so a system typically contains a large amount of message-listening logic. Much of it is fairly simple, such as maintaining caches, synchronizing state, and converting message formats. The message-processing logic of these glue layers can be developed and managed as VMS.
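A glue-layer handler of this kind could be sketched as follows (invented names; in StarChain an MQ trigger would be associated with the function). The handler normalizes a producer's message format and refreshes a cache entry through a cache connector.

```javascript
// Hypothetical MQ-triggered VMS function: normalize the payload, update cache.
function onUserUpdated(message, context) {
  // convert the producer's snake_case format to the format this system caches
  const entry = { id: message.user_id, name: message.user_name };
  context.connectors.cache.set(`user:${entry.id}`, entry);
  return entry;
}

// mock cache connector backed by a Map, for local testing
const store = new Map();
const mockContext = { connectors: { cache: { set: (k, v) => store.set(k, v) } } };

onUserUpdated({ user_id: 7, user_name: "Bo" }, mockContext);
console.log(store.get("user:7"));
// → { id: 7, name: 'Bo' }
```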

3.4 Operations back-end services

Operations back-ends often have many customization requirements. These are frequently just queries or simple update logic; implementing them is not hard, but it is cumbersome and time-consuming. Implemented as a VMS, such requirements can often be met with process orchestration and configured database connectors, without writing any code.

3.5 Scheduled batch tasks

Systems built on a microservice architecture often contain many scheduled batch tasks, and these tasks frequently run only in the early morning. Building them as VMS with serverless deployment can save substantial computing resources.
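As a hypothetical sketch (illustrative field names, not StarChain's actual trigger format), binding a timer trigger to a function might look like this:

```yaml
trigger:
  type: timer
  cron: "0 30 2 * * ?"        # run at 02:30 every day
  function: settleDailyBills  # the VMS function this trigger invokes
```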

3.6 Temporary business needs

Real businesses often have temporary requirements, such as interfaces for temporary operational campaigns, reports, or one-off data processing. Implementing them as VMS means they can be delivered quickly, and with serverless deployment there is no need to worry about computing resources. They are also isolated from the existing stable business code, developed and deployed independently, easy to manage, and can be taken offline at any time when no longer needed.

3.7 Workflow Automation

Daily work often includes tasks worth automating. Take exception-log management as an example: query the online exception logs every day, summarize the important ones, and email them to team members for follow-up. Doing this manually is tedious, implementing it as a standalone program is cumbersome, and there is usually no suitable application or server to host such functionality. Developing and deploying the workflow on StarChain makes it both easy to build and easy to run.

3.8 General business requirements

A VMS can contain multiple triggers, functions, and connectors; you can write Java/Groovy/JavaScript code, reference third-party libraries, access databases through configuration, and use transactions. As long as the business logic is not overly complex, common business requirements can be implemented as VMS.

4 StarChain Advantages

Everything StarChain does can also be done with traditional development and deployment, so what are the advantages of using StarChain? The following points summarize feedback from a large number of users.

  • Fast development: a feature that used to take four or five days now takes two or three, and the visualized process makes the logic clear at a glance.
  • Fast, worry-free deployment: deployments that used to take 1~2 hours now complete in seconds. A VMS has a smaller deployment granularity and deploys independently, unlike a large application where changing shared code always raised worries about affecting other business processes.
  • No servers to manage: there is no need to apply for computing resources per application; a team applies once.
  • Cost savings: most servers have low resource utilization. By letting multiple VMS share computing engines and scheduling them dynamically, StarChain greatly improves utilization and saves cost.
  • Easy upgrades of common libraries and middleware: in-house middleware is upgraded frequently, and common libraries also need frequent upgrades for security vulnerabilities and other reasons. With StarChain, users do not modify code; a one-click update of the StarChain engine is enough.
  • Easy collaboration: product, R&D, and test collaborate through a unified web view. No separate flowcharts are needed for design or code review; the BPMN diagram is the real process, and clicking a node shows the implementation details.
  • Empowering front-end engineers: many back-end services have only a Java SDK and no Node.js SDK, so even when front-end engineers have time for BFF-layer work, they cannot do it for lack of Java skills. StarChain's low-code platform supports JavaScript, so front-end engineers can take on work that previously had to be done by the back end.
  • Better design: expressing business logic as BPMN diagrams prompts users to think through the overall logic and encapsulate implementation details into individual nodes, yielding clearer, more complete designs.

5 StarChain Product Features

The following introduces the functions and features StarChain provides.

5.1 Visual low-code Cloud Web IDE

StarChain provides a low-code Cloud Web IDE that improves development efficiency through visual microservice orchestration, as shown in Figure 4.

Figure 4 Cloud Web IDE

The features supported by the StarChain IDE are as follows.

  • Visual BPMN orchestration: supports orchestrating connector methods, code functions, and other BPMN functions; supports branching, exception handling, and DB transactions; automatically prompts context information such as request parameters, environment variables, and the outputs of intermediate nodes; and supports expressions for complex conditional branches and input variables.
  • Custom DB/HTTP connectors: the list of connector methods can be customized. Input and output parameter definitions support three modes (table, YAML, and CSV), and parameter definitions can be inferred from JSON examples. Configured connector methods can be used in BPMN orchestration and code functions, and DB connector methods can participate in transactions.
  • Visual trigger configuration: every trigger type can be configured through the interface, including its firing mechanism and the function it calls, with no development required.
  • Expression editor: syntax highlighting, code hints, convenient examples and help documentation, a rich built-in function library, easy construction of complex objects composed of Maps and Lists, and support for various operations.
  • Online development of code functions: visually define function input and output parameters, with support for Java/JavaScript/Groovy, syntax highlighting, and built-in APIs for access logs, connectors, environment variables, and more.
  • Online function debugging: run functions directly in the browser, view results and logs, and diagnose problems. For BPMN functions, the execution trajectory is displayed visually, showing the input, output, or exception of every node the execution passed through. Mock return values can be set for remote calls to ease joint debugging.
  • Multi-tab editing: edit multiple functions at the same time, each with its own debugging and run history.
  • Real-time save and validation: changes are saved to the cloud workspace in real time, with real-time validation and prompts.

In DB/HTTP connectors, the list of methods can be customized. Figure 5 shows an example of a custom DB connector: the user only needs to define the SQL and the input and output parameters to generate a method, and can test at any point during the definition process to verify it is correct. Currently users must write the SQL in MyBatis syntax; in the future, StarChain will provide an easier and smarter way to define DB connectors.

Figure 5 Example of custom DB connector
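The SQL body of such a method might look like the following sketch, using MyBatis-style `#{}` parameter placeholders; the table and column names here are invented for illustration. Input parameter: `minLevel`; output: a list of `{id, name}` rows.

```sql
-- Hypothetical DB-connector method definition (table and fields invented).
SELECT id, name
FROM t_user
WHERE level >= #{minLevel}   -- MyBatis-style parameter placeholder
```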

Figure 6 shows an example of a custom HTTP connector. Anyone familiar with basic HTTP can quickly define a method. In the future, StarChain will provide richer, easier-to-use functionality and support the OpenAPI specification, generating an HTTP connector by importing an OpenAPI definition file.

Figure 6 Example of a custom HTTP connector

When a BPMN function is debugged online, StarChain visually displays the execution trajectory to help locate problems: correctly executed nodes are shown in green, failed nodes in red together with the error message, as shown in Figure 7. In the future, StarChain will support BPMN breakpoint debugging.

Figure 7 Example of BPMN function execution trajectory

5.2 Custom business component library

StarChain provides a distinctive business component library. Besides common system components, it supports custom team components. Common components are maintained by the system and are available to every user, and the system will keep expanding this public library; in private deployments, customers can customize the system components. Team components are maintained by a team itself and are not visible to other teams. Connector components a user defines in a VMS (such as DB/HTTP connectors) can be exported as team components. Users can maintain team components in the StarChain console, including grouping them. During BPMN orchestration, users can browse and search team or common components and drag them directly onto the orchestration canvas, as shown in Figure 8.

Figure 8 Business Component Library

5.3 Support local IDE development

Unlike most low-code platforms, which are black boxes, a VMS developed in StarChain is stored in a Git repository with fully visible source code. Users can clone it locally and use a local IDE to develop, debug, and run unit tests. In local development, you can introduce third-party packages (jar packages for Java, npm packages for JavaScript, and so on) and use them in code functions; after committing locally, everything remains fully compatible and visible in the Cloud Web IDE. The Cloud Web IDE also supports Git operations, including switching branches, committing, viewing commit history, and comparing changes, as shown in Figure 9.

Figure 9 Git-based local and cloud collaborative development

5.4 Integrated Build and Release Process

StarChain supports one-click online build and packaging, with build logs viewable in real time. To control release quality, StarChain integrates an online approval process, as shown in Figure 10.

Figure 10 Integrated build and release process

5.5 Enterprise Serverless

When developing, testing, and deploying in StarChain, you do not need to care about server details. To deploy, you only specify the target cluster groups, the number of instances, and the version. One deployment can declare multiple cluster groups, which may sit in different data centers and availability zones, with the system scheduling automatically. Declarative deployment makes grayscale releases and scaling easy: just modify the version declaration or the instance count, and the system adjusts automatically. Deployment environments are isolated, distinguishing test, pre-release, and production. Computing-engine resources are reused, with hot loading and no cold start, so deployment is fast. Deployment-time encrypted environment variables are supported, with sensitive variables stored encrypted for security. An example deployment configuration is shown in Figure 11.

Figure 11 Declarative deployment configuration
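As a hypothetical sketch of a grayscale release under this model (field names are illustrative), only the version and instance declarations change, and the system reconciles the rest: the stable version keeps most instances while the new version runs on one.

```yaml
environment: production
entries:
  - trafficEntry: http
    deployments:
      - cluster: payment-cluster
        group: prod-k8s-group
        version: 1.4.0    # stable version keeps most traffic
        instances: 4
      - cluster: payment-cluster
        group: prod-k8s-group
        version: 1.5.0    # gray version on a small number of instances
        instances: 1
```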

StarChain has independent serverless deployment-control gateways and computing-engine modules, so it can be deployed to private clouds and other public clouds. For computing resources created by users, the system provides management functions such as adding, disabling, and enabling engines, and it dynamically schedules VMS across the available engines.

5.6 Integrated observability

The StarChain deployment status is visible at a glance, including whether the overall status matches the declaration, along with the details of deployed clusters, groups, and engines, as shown in Figure 12.

Figure 12 Deployment status

StarChain automatically adds logs at the main execution nodes of a function. The log level can be set per deployment, and logs can be viewed and searched directly in StarChain for easy diagnosis, as shown in Figure 13.

Figure 13 Integrated log

Inside JD.com and on JD's public cloud, StarChain integrates JD's more specialized logging and monitoring services, which users can use to view logs, monitor, and set alerts.

5.7 High-performance and scalable multi-language execution engine

Since its launch in March 2019, the StarChain execution engine has gone through many 618 and Double 11 events in JD's internal applications. The engine is designed for high-performance, low-latency scenarios: as long as the user does not configure stateful nodes (such as waiting for an asynchronous message mid-process), execution is stateless and in-memory. Compared with the orchestration engines common in the industry, which are mostly backed by databases or message systems, its latency is lower, making it better suited to service orchestration, and it is widely used in high-concurrency, low-latency consumer-facing scenarios. The engine is also multi-language: it currently supports Java, JavaScript, and Groovy, each of which can call the various connector methods and shares common APIs, with more languages (such as Python) planned. Finally, the engine adopts a microkernel, modular architecture that is easy to extend and adapt to a scenario; in the future, users will also be able to customize triggers, connectors, and other functional components.

5.8 Team collaboration and management

StarChain provides convenient VMS collaboration and management. Every VMS and every computing resource belongs to a team. Team members have one of four roles (developer, tester, guest, administrator), each with different permissions, and administrators can add or remove members. A user can join multiple teams: by default everyone has an exclusive personal team in which to try StarChain, and everyone joins a demo team to view the sample VMS provided by the system. A new VMS can be created quickly by cloning an existing one. VMS support grouped, hierarchical management, and administrators can migrate a VMS to another team, accommodating the organizational and business adjustments that often occur in medium and large enterprises.

6 Summary

This article introduced StarChain's capabilities, core concepts, application scenarios, advantages, and main product features. In short, StarChain is a low-code serverless platform for microservices. Through a unified web interface, users complete the entire process of developing, debugging, building, testing, and deploying visual microservices (VMS): developing microservices with low code via visualization and component orchestration, deploying them serverlessly in a declarative manner, and quickly delivering scenario-specific and customized back-end needs such as BFF, service process orchestration, asynchronous message processing, scheduled tasks, operations back-ends, workflow automation, and temporary requirements, reducing cost and increasing efficiency. StarChain's distinguishing characteristics are meeting the demands of high-concurrency, low-latency consumer-facing services, low code without sacrificing flexibility or control, enterprise-grade serverless, and high computing-resource utilization at low cost.


JD Cloud Developers

JD Cloud Developers (Developer of JD Technology) is a platform under JD Cloud that provides technical sharing and exchange for developers in AI, cloud computing, IoT, and related fields.