Text | Zhou Qunli (Ant internal alias: Ceremony)
Layotto PMC
Works on Layotto and SOFAStack open source community building; Dapr contributor; co-chair of Dapr sig-api
This article: 10,963 words, about 20 minutes to read
In 2019, Microsoft open sourced the Dapr project. In 2021, Ant open sourced the Layotto project, drawing on Dapr's ideas. Today, Layotto has landed in production at Ant and serves many applications. In the process of moving from ideal to reality, we encountered many problems and made many changes to the project. Looking back, how should we view the multi-runtime architecture of Dapr and Layotto? What can we learn from it?
This article shares Ant's reflections after implementing the multi-runtime architecture, covering the following aspects:
1. How to think about "portability"
2. What value can a multi-runtime architecture bring?
3. Differences from Service Mesh and Event Mesh
4. How to view different deployment forms
PART. 1 QUICK REVIEW
If you are familiar with the concepts of Multi-Runtime, Dapr and Layotto, you can skip this chapter and go directly to the next chapter.
Quick Review: What is a Multi-Runtime Architecture?
Multi-Runtime is a server-side architectural idea. Summed up in one sentence: move all the middleware an application depends on into the sidecar, so that the "business runtime" and the "technical runtime" are separated.
A more detailed explanation: first, look at Service Mesh. Compared with traditional RPC frameworks, Service Mesh's innovation is the introduction of the sidecar pattern. But Service Mesh only solves the need for inter-service communication, while real distributed applications have more needs, such as "protocol conversion" and "state management". The Multi-Runtime architecture proposes moving these distributed capabilities into independent runtimes, which together with the application runtime form the microservice: the so-called "Multi-Runtime" architecture.
For details, please refer to "Multi-Runtime Microservices Architecture" and "Mecha: Carrying Mesh to the End".
Which projects implement the Multi-Runtime architecture?
Dapr
Dapr's full name is "Distributed Application Runtime"; it is an open source project initiated by Microsoft.
The Dapr project is the industry's first practice of the Multi-Runtime idea. In addition to supporting inter-service communication like Service Mesh, Dapr's sidecar supports more capabilities, such as state (state management), pub-sub (messaging), and resource binding (input and output). Dapr abstracts each capability into a standardized API (such as the State API), and each API has multiple implementations: users can program against the State API and switch storage components at will, using Redis this year and MongoDB next year, without changing business code.
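As a concrete illustration of "switching components without changing code": in Dapr, the backing store is selected by a Component configuration, not by application code. Below is a minimal sketch with two component definitions sharing the same name `statestore` (hosts and values are illustrative); deploying one or the other switches the application from Redis to MongoDB:

```yaml
# statestore backed by Redis
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: statestore
spec:
  type: state.redis
  version: v1
  metadata:
  - name: redisHost
    value: localhost:6379
---
# Same component name, now backed by MongoDB. Application code that
# programs against the State API needs no change for this swap.
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: statestore
spec:
  type: state.mongodb
  version: v1
  metadata:
  - name: host
    value: localhost:27017
```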
If you have not encountered Dapr before, you can read the article "Dapr v1.0 Outlook: From Service Mesh to Cloud Native" for a more detailed introduction.
Layotto
Layotto is a project open sourced by Ant Group in 2021 to implement the Multi-Runtime architecture. The core idea is to support the Dapr API and a WebAssembly runtime inside the data plane (MOSN) of Service Mesh, so that one sidecar serves simultaneously as the Service Mesh data plane, the Runtime, and the FaaS runtime. The project address is: https://github.com/mosn/layotto
That is the background of this article; now on to the main topic.
PART. 2 Do you really need this "portability"?
The community pays a lot of attention to the "portability" of the Dapr API, but during implementation we couldn't help reflecting: do we really need this "portability"?
Can a standardized API meet all needs?
There has been an interesting discussion in the database field: can one database fit all scenarios and meet all needs? For example, can a single database support OLAP, OLTP, ACID, and other requirements all at the same time?
Today, we ran into a similarly interesting question while building the Dapr API: in a given product area (such as message queues), can we define a "standard API" that applies to all message queues at once?
Of course, the two questions should not be conflated: even two very different databases, say one that only does OLAP and another that only does OLTP, can both support the SQL protocol. If two databases with such a large gap between them can share the same protocol, we have reason to believe that, within a specific field, designing a "standard API" that applies to all products is feasible.
Feasible, but not quite yet.
The current Dapr API is fairly simple and sufficient for simple scenarios, but in complex business scenarios it cannot "help applications write once, run on any cloud". Ao Xiaojian's article "A Matter of Life and Death: On the Importance of API Standardization to Dapr" describes this problem in detail. The gist:
The current Dapr API is too simple to meet complex requirements in production, so developers can only add many custom extension fields and handle them specially in the sidecar's components. For example, here are some custom extension fields used with the State API:
(Image from Ao Xiaojian's article)
These custom extension fields break portability: if you switch to a different component, the new component certainly won't recognize these fields, so you have to change the code.
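To make the problem concrete, here is a minimal Go sketch against Dapr's documented state HTTP endpoint (POST /v1.0/state/{storeName}) of a request carrying component-specific extension fields. The metadata keys below are invented for illustration, and this is exactly the kind of field that breaks portability:

```go
package main

import (
	"bytes"
	"fmt"
	"net/http"
)

func main() {
	// Save state via the Dapr sidecar: POST /v1.0/state/{storeName}.
	// The per-item "metadata" map is passed through to the component; the
	// keys below are hypothetical, component-specific extensions -- another
	// component would ignore or reject them, forcing a code change.
	body := bytes.NewBufferString(`[
	  {
	    "key": "order-1",
	    "value": {"status": "paid"},
	    "metadata": {
	      "some_vendor_specific_flag": "true",
	      "some_vendor_cluster_hint": "cluster-a"
	    }
	  }
	]`)
	resp, err := http.Post("http://localhost:3500/v1.0/state/statestore",
		"application/json", body)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}
```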
The root cause behind this problem is the design philosophy of the Dapr API.
When the community designs the Dapr API, for the sake of portability, the API tends to be designed as the "intersection" of features. For example, when designing the Configuration API, the community examines configuration centers A, B, and C; a feature makes it into the Dapr API only if A, B, and C all have it:
However, in the real world, a user's needs may be the intersection of A and B, or of B and C (the red parts in the figure below), rather than the intersection of A, B, and C:
Or, more commonly, the user needs "all the features of B", which necessarily includes some features unique to B that the Dapr API cannot cover:
Dapr provides a "standard API", "language SDKs" and a "Runtime", which requires applications to adapt to it (meaning old applications have to be modified); it is fairly intrusive.
Therefore, Dapr is better suited to new application development (the so-called Green Field); for existing old applications (the so-called Brown Field), it demands a higher transformation cost. In exchange for that cost, Dapr offers cross-cloud, cross-platform portability, which is one of Dapr's core values.
These sound like unsolvable problems. So what do we do?
Do you really need to switch from Redis to Memcached when deploying across clouds?
Similar discussions often arise when designing APIs:
A: Hey, only Redis and xxx have this feature; Memcached and other storage systems don't. What should we do? Should we include this feature in the API spec?
B: What would be the problem if we incorporated this functionality into the API?
A: That way, users of our API wouldn't be able to migrate from Redis to Memcached, which breaks portability!
Wait a minute... do you really need to switch from Redis to Memcached?
Do you really need this "portability"?
You don't! If your application is programmed against Redis, it is inherently deployable to different clouds, because every cloud environment offers a hosted Redis service. Even if one doesn't, you can deploy a Redis yourself.
And not just Redis: other open source products can be treated similarly.
The dog-licking theorem.
I once heard a very interesting observation (not my own): commercial companies are like devoted suitors. Whichever open source product shows commercial opportunity, commercial companies quickly follow, and a managed service for that product soon appears on every cloud. The wording is crude, but it reveals a truth: the protocols of popular open source products are inherently portable.
The value of a standardized API is to constrain private protocols
To make the discussion more concrete, let's divide the infrastructure protocols that applications rely on into two categories: trusted protocols and private protocols.
Trusted Protocol
This refers to a protocol with relatively large influence in a certain field. The criterion: the number of cloud environments offering a managed service for it >= k (where k is whatever number makes you feel safe, such as 3 or 5).
For example, the Redis protocol can basically be regarded as a de facto standard, much like SQL, and every cloud vendor provides a Redis-compatible managed service; likewise the MySQL protocol: every cloud vendor provides database managed services compatible with it.
Viewpoint 1: Trusted protocols are inherently portable.
There is no need to worry "what if I can't switch from Redis to Memcached when I move to another cloud in the future", because every cloud has a Redis-compatible managed service.
Worrying about switching from Redis to another caching product is like worrying "if I introduce Sidecar today, what if the sidecar architecture falls out of favor in the future and I want to remove it", or "if I introduce Spring Cloud today, what if another framework becomes popular and I want to switch". Of course that day will come, but most businesses won't live to see it. If yours does, congratulations: by then you will have enough resources to refactor.
Private protocol
This refers to, for example, the protocols of closed-source products, or of open source products with little influence. The criterion: cloud environments with managed services < k.
For example, Ant's internal MQ is a self-built MQ that uses a private protocol. Business code that depends on this private protocol is hard to deploy to other cloud environments, so it is well suited to being wrapped with a standardized API.
For another example, suppose you are evaluating an MQ offered by Alibaba Cloud, but you find that its API is unique to Alibaba Cloud and no other cloud vendor offers the same service. If you are afraid of being locked in by Alibaba Cloud, it is best to wrap this private MQ API with a standardized API.
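As a sketch of what such wrapping looks like in practice: instead of calling the vendor SDK, the application publishes through the sidecar's standardized pub/sub API, and the sidecar's component translates to the private protocol. Below is a minimal Go example against Dapr's documented HTTP endpoint (the component name and topic are illustrative):

```go
package main

import (
	"bytes"
	"fmt"
	"net/http"
)

func main() {
	// Publish through the sidecar's standardized pub/sub API instead of a
	// vendor SDK. Dapr's HTTP endpoint: POST /v1.0/publish/{pubsub}/{topic}.
	// Which MQ actually receives the message is decided by the component
	// configuration, not by this code.
	body := bytes.NewBufferString(`{"orderId": "100"}`)
	resp, err := http.Post(
		"http://localhost:3500/v1.0/publish/my-pubsub/orders", // names are illustrative
		"application/json",
		body,
	)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println("publish status:", resp.Status)
}
```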
After reading this, you should understand what I want to say:
Viewpoint 2:
The value of Dapr's standardized API is to constrain private protocols.
An aside: Sky Computing. In 2021, UC Berkeley published a paper predicting that the future of cloud computing is Sky Computing. The gist: looking back at the history of the Internet, it connected heterogeneous networks and exposed one unified network to users, who no longer had to care about the details of each underlying network when programming against it. Today, different cloud vendors' environments differ from one another, much like the fragmented state before the Internet appeared. To make things easier for users, we can design an "interconnected cloud" that connects heterogeneous cloud environments, shields their differences, and exposes only a unified abstraction to users. Connecting different clouds in this way is called "Sky Computing".
How to achieve it?
The paper proposes a three-layer conceptual model. The most basic layer is the "compatibility layer", responsible for abstracting away the differences between cloud services so that applications can be deployed on different clouds without code changes. The authors observe that open source software already has managed services on the major clouds, so integrating different open source software into one platform can form this "compatibility layer"; projects such as Cloud Foundry are already doing it.
On top of the "compatibility layer", the authors argue there should also be an "Intercloud layer" and a "Peering layer"; read the paper if you are interested.
What kind of "portability" do we need?
As an aside, there is a well-known idea in computer science: if a problem is too hard to solve, relax the assumptions and weaken the requirements. In plain language: if a problem is too difficult, solve a simpler version of it first. There are many examples. Implementing strict transaction "isolation" in databases performs so poorly that it is usable only in the lab, not in the real world; so people proposed "weak isolation levels" such as "read committed" and "repeatable read". The weaker the guarantee, the easier the problem becomes.
Another example: solving NP-hard problems optimally is too slow in the real world, so people give up the pursuit of the optimal solution, as long as the algorithm's result is guaranteed to fall within an "acceptable range"; hence "approximation algorithms". And when even that is too hard, there are the "metaphysical" ones: "heuristic algorithms".
Another example: distributed transactions that are "transparent to the business" are hard to implement and costly, so people gave up on "transparency to the business"; hence TCC and Saga.
Since "portability" is too hard a problem, let's weaken the requirement and solve a simpler problem first: "weak portability".
Portability classification
The requirement of "portability" is too vague; let's pin it down first. We can divide portability into several levels:
Level 0: when the business system is deployed to another cloud platform, the business code has to change (for example, swapping in a different set of infrastructure SDKs and then refactoring the business code).
This is the common state of affairs. For example, a company has a self-developed message queue system "XX MQ" with an "xx-mq-java-sdk" for business systems to import. When a business system wants to move onto (or across) clouds, since there is no "XX MQ" on the cloud, it has to be replaced by some other MQ (for example, RocketMQ), and the business system has to be refactored.
Level 1: when deploying to another cloud platform, the business code does not change, but a different SDK has to be swapped in and the application recompiled.
Some community cross-platform solutions built on SDKs are at this level, such as Ctrip's open source Capa project and Tencent's open source Femas project.
Level 2: when deploying to another cloud platform, the business system needs no code change and no recompilation, but the sidecar needs code changes.
Level 3: when deploying to another cloud platform, neither the business system nor the sidecar needs code changes or recompilation; only configuration changes are needed.
Level 4: when switching the underlying open source product (for example, replacing Redis with another distributed cache), neither the business system nor the sidecar needs code changes.
The community's ultimate goal is Level 4, but as discussed above it has not been fully achieved yet and still has various problems. For commercial companies that need to land quickly and solve business problems, the achievable goal right now is Level 2 portability, with Level 3 in some scenarios, and that is enough to solve business problems.
For example, in the distributed cache scenario, Ant built distributed-cache middleware into MOSN that supports access via the Redis protocol. If you believe the Redis protocol is portable, the application can simply talk to MOSN using the Redis protocol; there is no need to force a migration to Dapr's State API. In this case, the standardized API is just a supplement.
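From the application's point of view, this looks like talking to an ordinary Redis. Below is a minimal Go sketch, assuming the sidecar exposes a Redis-protocol listener on a local port (the address is an assumption for illustration):

```go
package main

import (
	"context"
	"fmt"

	"github.com/redis/go-redis/v9"
)

func main() {
	ctx := context.Background()

	// Point an ordinary Redis client at the sidecar's Redis-protocol
	// listener (the port is an assumption). The sidecar can forward to
	// any Redis-compatible backend; the app neither knows nor cares.
	rdb := redis.NewClient(&redis.Options{Addr: "127.0.0.1:6379"})

	if err := rdb.Set(ctx, "user:1:name", "layotto", 0).Err(); err != nil {
		panic(err)
	}
	val, err := rdb.Get(ctx, "user:1:name").Result()
	if err != nil {
		panic(err)
	}
	fmt.Println("user:1:name =", val)
}
```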
An aside: what level of portability does Sky Computing's "compatibility layer" require? By this classification, the "compatibility layer" proposed by Sky Computing requires portability of Level 3 or above.
How to achieve Level 3 portability?
If we set the goal at Level 3, then the protocols exposed by the Runtime as a "compatibility layer" should be diverse, including both the trusted protocols of various fields (such as the Redis protocol, the MySQL protocol, the AWS S3 protocol) and Dapr-style standardized APIs.
From this we can draw two viewpoints:
Viewpoint 3:
Embrace trusted protocols: Dapr's standardized API should be positioned as a supplement to trusted protocols, not as a way to make users abandon trusted protocols and migrate to the Dapr API.
Viewpoint 4:
When designing Dapr's standardized APIs, focus on fields where no trusted protocol has yet formed, and design standardized APIs for those fields, instead of spending energy designing "yet another SQL" or agonizing over "how to migrate from Redis to Memcached". For example, the APIs provided by different cloud vendors' configuration centers differ, and no de facto standard has formed, so designing a cross-platform Configuration API fills a real gap.
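To make Viewpoint 4 concrete, a cross-platform Configuration API might look roughly like the following Go interface. This is a hypothetical sketch for illustration only, not Layotto's or Dapr's actual API; all names are invented:

```go
package config

import "context"

// ConfigurationItem is one configuration entry returned by a config center.
// (Hypothetical shape; real APIs also carry tags, groups, and so on.)
type ConfigurationItem struct {
	Key     string
	Content string
	Version string
}

// API is a hypothetical cross-platform configuration abstraction. The same
// interface could be backed by Apollo, Nacos, etcd, or a cloud vendor's
// proprietary config center, selected purely by sidecar configuration.
type API interface {
	// Get fetches configuration items by key for a given application.
	Get(ctx context.Context, appID string, keys []string) ([]ConfigurationItem, error)
	// Subscribe pushes changes for the given keys onto the returned channel.
	Subscribe(ctx context.Context, appID string, keys []string) (<-chan ConfigurationItem, error)
}
```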
Evolution route
Now let's answer the question raised at the beginning: the current Dapr API has many problems, such as too many custom extension fields that break portability, a design-for-the-"intersection" approach that leaves features too weak and hard to evolve, and intrusiveness. What should we do?
The answer is: evolve gradually. First consider moving from Level 2 to Level 3.
To reach Level 3, we need to: abandon designing for the "intersection" of features and design for the "union" of features instead; and support the various "trusted protocols" directly in the sidecar.
To eventually reach Level 4, we need: the standardized API to be a complete "union of features", guaranteeing that all business scenarios are covered; and a "feature discovery mechanism" by which the application negotiates with the infrastructure at deploy time about "what features do I need", and the infrastructure automatically binds components according to the application's needs. A hypothetical sketch follows.
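Concretely, the application might declare the features it needs in a deployment manifest, and the platform would bind a component supporting that whole feature set. Every field name below is invented for illustration; no such mechanism exists yet:

```yaml
# Hypothetical manifest, for illustration only: the app declares required
# features; the platform picks and binds a component that supports them all.
apiVersion: example.runtime/v1alpha1
kind: FeatureRequirement
metadata:
  name: order-service
spec:
  state:
    requiredFeatures:
      - transactions
      - ttl
      - etag
  pubsub:
    requiredFeatures:
      - ordered-delivery
      - dead-letter-queue
```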
PART. 3 The value brought by the Runtime architecture
In addition to standardizing APIs, the greater value of the Runtime architecture in practice lies in the following aspects:
1. Possibly the most important value: legitimizing "sinking"
An interesting observation: the earlier Mesh concept emphasized "proxy", so when infrastructure products wanted to "sink" their code logic into the sidecar, they might be rejected by the Mesh team, or could only get in through relatively hacky, non-standard means. With the Runtime concept, moving a product's code logic into the sidecar becomes legitimate, standard practice.
The "sinking" here means "moving the common components an application depends on out of the application and into the sidecar", separating the core business logic from the technical parts. The benefits are many, for example:
1. Middleware reuse across languages
One of the selling points of Service Mesh is letting applications in multiple languages reuse traffic-management middleware. Runtime now emphasizes putting even more middleware into the sidecar, which means more middleware can be reused across languages. For example, middleware previously developed for Java was unusable from C++; now applications in Node.js/Python/C++ can call the sidecar via gRPC and reuse that middleware.
2. Faster microservice startup, faster FaaS cold starts
The original microservice application frameworks are fairly heavy: connecting to the configuration center, initialization, cache warm-up, and so on. If this startup logic is moved into the Runtime, then when an application or function scales out, it can reuse the existing Runtime and skip the connection and warm-up work, achieving faster startup.
3. No need to push users to upgrade SDKs
This is the benefit Mesh has always advertised: with the sidecar, there is no need to keep pushing every business team to upgrade its SDK, which improves the infrastructure's iteration efficiency.
4. Letting business logic sink
Besides infrastructure, some business logic also wants to move into the sidecar, such as logic for processing user information.
Putting business logic into the sidecar requires isolation guarantees. Last year we tried using WebAssembly for this, but it was not mature enough for production; we will try other solutions this year.
2. Standardizing "sinking": constraining "private protocols" to guarantee Level 2 portability
In the "sinking" process, the standardized API mostly plays the role of constraining "private protocols", for example:
- Constraining the communication model of private protocols
When designing a private protocol (Layotto supports an "API plugin" feature that allows extending the sidecar with private gRPC APIs), one must show that "when this private protocol is deployed on another cloud, there is a component it can be switched to" (a sketch follows this list).
- Serving as a guide for designing private protocols
If a private protocol is designed with reference to the standardized API, there is reason to believe it can achieve Level 2 portability when deployed on a cloud.
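As an illustration of what an "API plugin" style extension point enables: a private gRPC API is served side by side with the standard runtime APIs in the same sidecar process. This is an illustrative sketch only, not Layotto's actual plugin interface; the port and service names are assumptions:

```go
package main

import (
	"net"

	"google.golang.org/grpc"
)

// Illustrative only: a sidecar process serving the standard runtime APIs and
// a private extension API on one gRPC server. Layotto's real "API plugin"
// registration mechanism differs in detail; names here are hypothetical.
func main() {
	lis, err := net.Listen("tcp", "127.0.0.1:34904") // port is an assumption
	if err != nil {
		panic(err)
	}
	srv := grpc.NewServer()

	// The standard runtime APIs would be registered here, e.g.:
	//   runtimepb.RegisterRuntimeServer(srv, runtimeImpl)
	// and the private extension API alongside them, e.g.:
	//   orderpb.RegisterOrderServiceServer(srv, &privateOrderAPI{})
	// The constraint from the standardized API: every private method must
	// have a switchable component implementation on other clouds.

	if err := srv.Serve(lis); err != nil {
		panic(err)
	}
}
```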
3. RPC protocol conversion and microservice interoperability
Dapr's InvokeService (the API for RPC calls) is fairly simple in design and has some shortcomings. For real RPC scenarios, Layotto adjusted its positioning to be a helper for Service Mesh:
RPC traffic of the existing Java microservices is still forwarded through Service Mesh (MOSN), while microservices in other languages, or on other protocol stacks, can call the sidecar via gRPC; the sidecar does the protocol conversion and then routes the traffic into the existing service system.
For example, many languages have no Hessian library: they can call Layotto via gRPC, Layotto does the Hessian serialization, and then hands the traffic to MOSN.
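Here is a sketch of that calling pattern, assuming the sidecar exposes a Dapr-style service invocation endpoint (the address, app ID, and method name are all illustrative): the caller sends plain JSON, and the sidecar takes care of Hessian serialization and routing:

```go
package main

import (
	"bytes"
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Call a Java/Hessian service through the sidecar using a Dapr-style
	// invoke endpoint: POST /v1.0/invoke/{appId}/method/{method}.
	// The sidecar converts the request to the target protocol (e.g. Hessian
	// serialization) and forwards it via MOSN. All names are illustrative.
	body := bytes.NewBufferString(`{"userId": 42}`)
	resp, err := http.Post(
		"http://localhost:3500/v1.0/invoke/java-user-service/method/getUser",
		"application/json",
		body,
	)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	out, _ := io.ReadAll(resp.Body)
	fmt.Println(string(out))
}
```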
There are other projects in the industry that bridge multi-language microservices, such as the dubbo-go-pixiu project; the difference is that it is deployed as a gateway.
PART. 4 The boundaries of Service Mesh, Event Mesh and Multi-Runtime
How do we draw the boundaries between Service Mesh, Event Mesh and Multi-Runtime? And what is the difference between Service Mesh and Event Mesh?
The common saying online is that Event Mesh handles the traffic of asynchronous calls while Service Mesh handles synchronous calls.
And what is the difference between Service Mesh and Dapr? The common saying is that Service Mesh is a proxy while Dapr is a runtime: it abstracts APIs and does protocol conversion.
However, with the evolution of the implementation, we gradually found that the boundaries of these technical concepts became very blurred.
As shown in the figure below, Layotto's sidecar supports all kinds of protocols and looks like "neither donkey nor horse": it exposes Dapr-style standardized HTTP/gRPC APIs that abstract distributed capabilities, but also does Service Mesh-style traffic interception and proxy forwarding; it handles both synchronous and asynchronous calls, and it can handle requests in open source protocols such as Redis, so it seems to cover Event Mesh as well. It has become a hybrid sidecar:
So, how do we draw the boundaries between Service Mesh, Event Mesh and Multi-Runtime?
My personal view is that Dapr's "standardized API" can be understood as "sidecar enhancement". For example, the "InvokeService API" can be seen as "Service Mesh enhancement", the "Pubsub API" as "Event Mesh enhancement", and the "State API" as "data middleware enhancement", where data middleware covers cache traffic forwarding and DB Mesh.
From this perspective, Layotto is more like an "API gateway" inside the sidecar.
PART. 5 The battle of deployment forms
1. What's wrong with the current architecture?
There is a problem with the current architecture: the Runtime is a monolithic application.
Whether Dapr or Layotto, the Runtime tends to carry every capability that is unrelated to business logic.
If you compare the Runtime to an operating system kernel, then the API layer is the system-call layer, responsible for abstracting infrastructure and simplifying programming, and the components are like drivers, responsible for translating system calls into the protocols of different infrastructures.
The Runtime puts all components into one process: like a "monolithic kernel" operating system stuffing all submodules together, it becomes a monolithic application.
What's wrong with a monolithic application? Modules are coupled to each other, isolation is poor, and stability suffers. For example, previous studies have pointed out that most of the code in Linux is drivers, many drivers are written by "amateurs", and their stability is poor; bugs in driver code are the main cause of kernel crashes.
Likewise, if a bug occurs in one of Dapr's or Layotto's components, it affects the whole sidecar.
How do we solve the monolith problem? Break it up! One idea is to split the Runtime into modules, each module being a container, with the whole Runtime deployed as a DaemonSet:
This scheme is like an operating system's "microkernel": there is some isolation between submodules, but the performance cost of communication between them is higher. For example, when the Event Mesh container wants to read configuration from the configuration center, it has to call the Configuration container over the network; if the call frequency gets high, the Event Mesh container has to consider caching configuration locally, and in the end every container may need its own cache.
So should you choose a single-container Runtime or a multi-container Runtime? It is like an operating system choosing between a "monolithic kernel" and a "microkernel" architecture: it all depends on the trade-off. The advantage of the monolith is good communication performance between submodules; the disadvantages are tight coupling and poor isolation. Splitting the Runtime into multiple sidecars is exactly the opposite.
Currently, both Dapr and Layotto are single-container runtimes.
A possible splitting scheme is to "vertically split" the Runtime into multiple containers by capability, for example one container responsible for state storage and another for asynchronous communication, with communication between containers optimized via eBPF. However, I haven't seen any project doing this yet.
2. What other optimizations can be made in the current architecture?
Optimization point 1: starting an application requires starting the sidecar container first and then the application container. Can applications start faster?
Intuitively, if a newly started application (or function) can reuse an existing Runtime, it can skip some initialization work and start faster.
Optimization point 2: can the Runtime's resource usage be reduced?
Every Pod has its own sidecar container; if a node runs 20 Pods, it runs 20 sidecars. In a large cluster, the sidecars alone consume a great deal of memory.
Intuitively, if multiple containers can share the same proxy (instead of each container having its own exclusive proxy), resource usage can be reduced.
Both optimization points above seem achievable by "letting multiple containers share the same proxy". But is it really that simple?
Service Mesh Community Discussion on "Shared Proxy"
In fact, the Service Mesh community has debated the deployment form of the data plane at length. Roughly, the options are:
- Sidecar mode: each application has an exclusive proxy
Image via <eBPF for Service Mesh? Yes, but Envoy Proxy is here to stay>
- All pods on a node share the same proxy
Image via <eBPF for Service Mesh? Yes, but Envoy Proxy is here to stay>
- No proxy process; traffic is handled with eBPF
Elegant, but limited in functionality; it cannot meet all needs.
- Each Service Account on the node shares a proxy
Image via <eBPF for Service Mesh? Yes, but Envoy Proxy is here to stay>
- Hybrid mode: lightweight sidecar + remote proxy
Image via <eBPF for Service Mesh? Yes, but Envoy Proxy is here to stay>
Does the Runtime community also need a shared proxy?
The options above all seem workable, just a matter of trade-offs. But when it comes to Runtime, the situation changes!
Case 1: the cluster has all kinds of middleware and all kinds of infrastructure
If the cluster has all kinds of middleware and all kinds of infrastructure, don't use the "all Pods on a node share the same proxy" model.
For example, suppose a cluster has many different MQs. If all Pods on a node share the same Runtime, the Runtime cannot know in advance which MQ a Pod will use, so it has to be compiled with every MQ component. Every time a new Pod is created, the Pod has to pass its configuration to the Runtime dynamically, telling the Runtime which MQ it uses, and the Runtime then establishes a connection with that MQ according to the configuration.
For example, in the figure below, on some node, Pod 1, Pod 2, and Pod 3 use RocketMQ, Kafka, and ActiveMQ respectively. Now a new Pod 4 starts, and Pod 4 has its own taste: it uses Pulsar! So the Runtime has to establish a connection with Pulsar and do some initialization work. Pod 4's startup is not "accelerated" at all, because there are no existing connections for it to reuse.
In this case, a shared Runtime cannot speed up application startup and cannot reuse connections to backend servers. It saves some memory, but brings disadvantages: higher complexity, weaker isolation, and so on.
Forcing the sidecar-model Runtime into a shared proxy is useful, but the return on investment is low.
Case 2: the cluster's infrastructure technology stack is fairly uniform
In this case, the shared-proxy model may have some value.
For example, a cluster uses only one MQ, RocketMQ. Under the shared-proxy model, suppose Pod 1, Pod 2, and Pod 3 are already running on a node, and a new Pod 4 that also uses RocketMQ starts. Pod 4 can reuse existing metadata, and may even be able to reuse the connections to the MQ servers.
In this case, the benefits of the shared-proxy model are: faster application startup, and reuse of connections to backend servers.
However, the so-called "startup acceleration" also depends on circumstances. Say optimization makes the Runtime start 2 seconds faster, but the application itself takes 2 minutes to start; then saving 2 seconds doesn't really matter. This optimization has limited value especially in clusters with many Java applications, since most Java applications don't start quickly. Startup acceleration is therefore more useful in FaaS scenarios: if the function itself starts and loads quickly, saving a few seconds is still valuable. And resource utilization improves because there is no need to deploy so many sidecars.
PART. 6 Summary
This article discussed our thoughts on the "portability", practical value, and deployment forms of the Multi-Runtime architecture after Layotto landed in production. The discussion is not limited to any specific project.
【Reference link】
Multi-Runtime Microservices Architecture: https://www.infoq.com/articles/multi-runtime-microservice-architecture/
"Mecha: Carrying Mesh to the End": https://mp.weixin.qq.com/s/sLnfZoVimiieCbhtYMMi1A
"From Service Mesh to Cloud Native": https://mp.weixin.qq.com/s/KSln4MPWQHICIDeHiY-nWg
Dapr project address: https://github.com/dapr/dapr
Layotto project address: https://github.com/mosn/layotto
Capa project address: https://github.com/capa-cloud/cloud-runtimes-jvm
Femas project address: https://github.com/polarismesh/femas
【Recommended reading this week】
Review and Prospect of Ant Group's Service Mesh Progress
Application Runtime Layotto Enters the CNCF Cloud Native Landscape