As an architectural model, cloud native architecture governs application architecture through a set of core principles. These principles help technical directors and architects make technology selections more efficiently and accurately. This article introduces each of them in detail.
Service-oriented principle
In the software development process, when the code base and the development team grow beyond a certain size, the application needs to be refactored: concerns are separated through modularization and componentization to reduce the application's complexity, improve development efficiency, and lower maintenance costs.
As shown in Figure 1, as the business keeps growing, the capacity a monolithic application can carry gradually reaches its ceiling. Even if the application is reworked to break through the bottleneck of vertical scaling (Scale Up) and support horizontal scaling (Scale Out) for global concurrent access, it will still face problems of computational complexity and storage capacity for its data. The monolith therefore needs to be split further and re-partitioned into distributed applications along business boundaries, so that applications no longer share data directly but communicate through agreed contracts, improving scalability.
Figure 1 Application service extension
The service-oriented design principle calls for separating business units with different life cycles through a service-oriented architecture, so that each unit can iterate independently, accelerating overall iteration speed while keeping iterations stable. A service-oriented architecture also adopts interface-oriented programming, which increases software reuse and strengthens the ability to scale horizontally. The principle further emphasizes abstracting the relationships between business modules at the architectural level, so that policy control and governance can be applied on the basis of service traffic (rather than network traffic), regardless of the programming languages in which the services are written.
There have been many successful industry cases of applying the service-oriented design principle. The most influential and widely praised is the large-scale microservices practice Netflix carried out on its production systems. Through this practice, Netflix not only served up to 167 million subscribers worldwide and more than 15% of global Internet bandwidth, but also contributed outstanding open source microservice components such as Eureka, Zuul, and Hystrix.
Service-oriented practice is not limited to overseas companies; domestic companies are also highly aware of its value. With the development of the Internet in recent years, both emerging Internet companies and traditional large enterprises have accumulated good practices and successful cases. Alibaba's service-oriented journey began with the "Colorful Stone" project in 2008; after more than ten years of development, it has steadily supported the company's major promotion events. Taking the figures from "Double 11" 2019 as an example, Alibaba's distributed systems hit a peak of 544,000 orders per second, while real-time computation processed up to 2.55 billion records per second. Alibaba has shared its service-oriented experience with the industry through open source projects such as Apache Dubbo, Nacos, Sentinel, Seata, and ChaosBlade. Spring Cloud Alibaba, which integrates these components with Spring Cloud, has become the successor to Spring Cloud Netflix.
Although the service-oriented principle keeps evolving and landing in real business as the cloud native wave rises, enterprises still meet many challenges during implementation. For example, compared with a self-built data center, service orientation on a public cloud works against a huge resource pool, which makes machine error rates noticeably higher; pay-as-you-go billing increases the frequency of scale-out and scale-in operations; and the new environment imposes practical requirements such as faster application startup, no strong dependencies between applications, and the ability to schedule applications freely across nodes of different specifications. It is foreseeable, however, that these problems will be solved one by one as cloud native architecture evolves.
Elasticity principle
The elasticity principle means that the deployed scale of a system can adjust automatically as business volume changes, without preparing fixed hardware and software resources according to a prior capacity plan. Good elasticity not only changes the enterprise's IT cost model, removing the need to pay for idle hardware and software resources, but also better supports explosive growth in business scale, leaving no regrets caused by insufficient resource reserves.
In the cloud native era, the threshold for building IT systems is greatly lowered, which markedly improves the efficiency with which enterprises turn business plans into products and services. This is especially prominent in the mobile Internet and gaming industries. Cases where a newly popular app's user count grows exponentially are not rare, and exponential business growth puts enormous pressure on the performance of enterprise IT systems. Facing such challenges on a traditional architecture, developers and operations staff are usually exhausted by performance tuning; even with their best efforts, they may not fully remove the system's bottlenecks, and in the end the system cannot handle the influx of massive users and the application collapses.
Besides the test of exponential growth, peaky traffic patterns pose another major challenge. For example, a movie ticket booking system receives far more traffic in the afternoon than in the early morning, and weekend traffic can be several times that of weekdays; a food delivery ordering system sees order peaks around lunch and dinner. On a traditional architecture, to cope with such obviously peaky scenarios, enterprises have to prepare and pay for large amounts of computing, storage, and network resources sized for peak traffic, yet these resources sit idle most of the time.
Therefore, in the cloud native era, enterprises building IT systems should make the application architecture elastic as early as possible, so that it can flexibly handle all kinds of scenarios and make full use of the technological and cost advantages of cloud native as business scale grows rapidly.
Building an elastic system architecture requires following four basic principles.
1. Splitting the application by function
A large, complex system may consist of hundreds or thousands of services. When designing the architecture, the architect needs to follow one principle: put related logic together, and split unrelated logic into independent services. Services find each other through standard service discovery (Service Discovery) and communicate over standard interfaces. Services are loosely coupled, which enables each one to scale elastically on its own and avoids failures cascading between upstream and downstream services.
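As an illustration of contract-based communication plus service discovery, here is a minimal Go sketch; the `Registry`, `OrderService`, and `orderImpl` names are hypothetical stand-ins, not the API of any particular framework.

```go
package main

import (
	"errors"
	"fmt"
)

// OrderService is the agreed contract; callers depend on the
// interface, not on a concrete implementation or its language.
type OrderService interface {
	PlaceOrder(itemID string, qty int) (orderID string, err error)
}

// Registry stands in for a service-discovery component (in the
// spirit of Eureka or Nacos); here it is just an in-memory map.
type Registry struct {
	services map[string]any
}

func (r *Registry) Register(name string, svc any) { r.services[name] = svc }

func (r *Registry) Lookup(name string) (any, error) {
	svc, ok := r.services[name]
	if !ok {
		return nil, errors.New("service not found: " + name)
	}
	return svc, nil
}

type orderImpl struct{}

func (orderImpl) PlaceOrder(itemID string, qty int) (string, error) {
	return fmt.Sprintf("order-%s-%d", itemID, qty), nil
}

func main() {
	reg := &Registry{services: map[string]any{}}
	reg.Register("orders", orderImpl{})

	// The caller discovers the service by name and talks to it
	// only through the contract interface.
	svc, _ := reg.Lookup("orders")
	id, _ := svc.(OrderService).PlaceOrder("book-42", 1)
	fmt.Println("placed:", id)
}
```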
2. Supporting horizontal splitting
Splitting the application by function does not completely solve the elasticity problem. After an application is decomposed into many services, as user traffic grows, a single service will eventually hit a system bottleneck. Each service therefore needs to be designed for horizontal splitting, dividing the service into different logical units, each handling part of the user traffic, so that the service itself scales well. The biggest challenge lies in the database system, because a database is stateful: partitioning the data sensibly while still providing correct transaction semantics is a very complicated undertaking. In the cloud native era, however, the cloud native database services offered by cloud platforms can solve most of these complex distributed problems, so an enterprise that builds its elastic system on the capabilities of a cloud platform naturally gains database elasticity as well.
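One common way to realize horizontal splitting is to hash a stable key, such as the user ID, to pick one of N logical units. A minimal sketch follows; the unit count and the choice of FNV hashing are illustrative assumptions.

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// shardFor maps a user ID to one of n logical units so that each
// unit handles only a slice of the overall traffic and data.
func shardFor(userID string, n uint32) uint32 {
	h := fnv.New32a()
	h.Write([]byte(userID))
	return h.Sum32() % n
}

func main() {
	const units = 4
	for _, u := range []string{"alice", "bob", "carol"} {
		fmt.Printf("user %s -> unit %d\n", u, shardFor(u, units))
	}
}
```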
3. Automated deployment
Bursts of system traffic are usually unpredictable, so the common traditional remedy is to expand the system manually so that it can support more users. After the architecture has been split up, an elastic system also needs automated deployment, so that it can trigger automatic scale-out according to predefined rules or external burst-traffic signals and meet the timeliness requirement of shortening the window in which burst traffic hurts the system; after the peak passes, the system scales in automatically to reduce the resource cost of operation.
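The scaling rule can be sketched as a small control loop. The thresholds, replica bounds, and load figures below are illustrative assumptions; real platforms apply far richer policies.

```go
package main

import "fmt"

// desiredReplicas scales out when load per replica exceeds the upper
// threshold and scales in after the peak, within [min, max] bounds.
func desiredReplicas(current int, totalLoad, upper, lower float64, min, max int) int {
	perReplica := totalLoad / float64(current)
	switch {
	case perReplica > upper && current < max:
		return current + 1 // burst signal: expand
	case perReplica < lower && current > min:
		return current - 1 // peak is over: shrink to cut idle cost
	default:
		return current
	}
}

func main() {
	replicas := 2
	for _, load := range []float64{150, 260, 420, 90, 40} {
		replicas = desiredReplicas(replicas, load, 100, 30, 1, 10)
		fmt.Printf("load=%.0f -> replicas=%d\n", load, replicas)
	}
}
```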
4. Supporting service degradation
An elastic system needs response plans for abnormal situations designed in advance, such as tiered management of services. Under exceptions such as a failed elasticity mechanism, insufficient elastic resources, or peak traffic beyond expectations, the architecture must be able to degrade service: release resources by cutting back some non-critical services or switching off some enhanced functions, and use them to expand the capacity of important functions, ensuring that the product's main functions are not affected.
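A minimal sketch of a degradation switch under tiered service management; the feature tiers and the pressure signal are illustrative assumptions.

```go
package main

import "fmt"

type Feature struct {
	Name     string
	Critical bool
	Enabled  bool
}

// degrade flips off every non-critical feature once pressure exceeds
// the configured limit; core features are never touched.
func degrade(features []Feature, pressure, limit float64) {
	for i := range features {
		if !features[i].Critical {
			features[i].Enabled = pressure <= limit
		}
	}
}

func main() {
	fs := []Feature{
		{"place-order", true, true},      // core: always on
		{"recommendations", false, true}, // enhanced: may be shed
		{"review-images", false, true},
	}
	degrade(fs, 0.92, 0.80) // pressure 92% exceeds the 80% limit
	for _, f := range fs {
		fmt.Printf("%-16s enabled=%v\n", f.Name, f.Enabled)
	}
}
```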
There are many cases, at home and abroad, of successfully building large-scale elastic systems; the most representative is Alibaba's annual "Double 11" promotion. To cope with traffic peaks hundreds of times higher than usual, Alibaba purchases elastic resources from Alibaba Cloud every year to deploy its applications, and releases these resources after the "Double 11" event, paying on demand and thereby significantly reducing the resource cost of the promotion. Another example is the elastic architecture of Sina Weibo: when a trending social event occurs, Weibo expands its application containers onto Alibaba Cloud to absorb the flood of search and repost requests the event generates. By scaling on demand at minute granularity, the system greatly reduces the resource cost of trending searches.
As cloud native technologies develop, ecosystems such as FaaS and Serverless have gradually matured, and building large-scale elastic systems keeps getting easier. When an enterprise adopts FaaS, Serverless, and similar concepts as design principles of its system architecture, the system gains elastic scaling, and the enterprise pays no additional cost for "maintaining the elastic system itself".
Observability principle
Unlike the passive capabilities provided by monitoring, business probing, APM (Application Performance Management), and similar systems, observability emphasizes proactiveness. In distributed systems such as cloud computing, logs, distributed tracing, metrics, and other means are used to make the latency, return values, and parameters of the many service calls triggered by a single click in an app clearly visible, drilling down into every third-party software call, SQL request, node topology, and network response. With such observation capabilities, operations, development, and business staff can grasp the running state of the software in real time and obtain unprecedented correlation-analysis capabilities, continuously optimizing business health and user experience.
With the all-round development of cloud computing, enterprise application architectures have changed significantly, gradually transitioning from traditional monoliths to microservices. In a microservice architecture, the loosely coupled design between services makes version iterations faster and cycles shorter; at the infrastructure layer, Kubernetes has become the default platform for containers; and services can be continuously integrated and deployed through pipelines. These changes minimize the risk of service changes and improve R&D efficiency.
In a microservice architecture, a failure can appear anywhere in the system, so observability must be designed in systematically to reduce MTTR (Mean Time To Repair).
To build an observability system, the following three basic principles need to be followed.
1. Comprehensive data collection
Three types of data — metrics (Metrics), traces (Tracing), and logs (Logging) — are the "three pillars" of a complete observability system. System observability requires complete collection, analysis, and display of all three; a combined sketch follows the descriptions below.
(1) Metrics
Metrics are KPI values measured over consecutive time periods. Normally, metrics are layered according to the software architecture into system resource metrics (such as CPU usage, disk usage, and network bandwidth), application metrics (such as error rate, SLA, service satisfaction APDEX, and average latency), and business metrics (such as user session count, order count, and revenue).
(2) Tracing
Tracing records and reconstructs the entire course of a distributed call through a unique TraceId, running through the whole data-processing path from the browser or mobile client, through the servers, down to SQL execution or remote calls.
(3) Logging
Logs usually record information such as the application's execution flow, code-level debugging output, and errors and exceptions. For example, Nginx logs can record the remote IP, request time, data size, and so on. Log data needs centralized storage and must be searchable.
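Here is the combined sketch promised above: one request emits all three data types, sharing identifiers so they can later be joined. All names are illustrative; a real system would use an SDK such as OpenTelemetry rather than this hand-rolled output.

```go
package main

import (
	"fmt"
	"log"
	"time"
)

func handleRequest(traceID, service, endpoint string) {
	start := time.Now()

	// ... business logic would run here ...
	time.Sleep(5 * time.Millisecond)

	elapsed := time.Since(start)

	// Metric: a KPI value on a time series, tagged with the service.
	fmt.Printf("metric name=request_latency_ms service=%s value=%d\n",
		service, elapsed.Milliseconds())

	// Trace: one span of the distributed call, keyed by TraceId.
	fmt.Printf("span trace_id=%s service=%s endpoint=%s duration=%s\n",
		traceID, service, endpoint, elapsed)

	// Log: a structured record carrying the same TraceId, so logs,
	// spans, and metrics can be joined during correlation analysis.
	log.Printf("trace_id=%s service=%s endpoint=%s status=ok",
		traceID, service, endpoint)
}

func main() {
	handleRequest("trace-7f3a", "order-service", "/orders")
}
```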
2. Correlation analysis of data
Producing more correlations among the various kinds of data is particularly important for an observability system. When a failure occurs, effective correlation analysis quickly scopes and locates the fault, improving handling efficiency and reducing unnecessary losses. Normally, information such as the application's server address and service interface is attached as extra attributes bound to metrics, traces, and logs, and the observability system is given a degree of customizability to flexibly meet the needs of more complex operations scenarios.
3. Unified monitoring view and display
Monitoring views in multiple forms and dimensions help operations staff and developers quickly find system bottlenecks and eliminate hidden dangers. Monitoring data should be presented not only as metric trend charts and histograms; to meet the needs of complex real scenarios, views also need drill-down analysis and customization capabilities, satisfying multiple scenarios such as operations monitoring, release management, and troubleshooting.
As cloud native technology develops, scenarios built on heterogeneous microservice architectures will keep multiplying, and observability is the foundation on which all automation capabilities are built. Only with comprehensive observability can system stability truly improve and MTTR fall. How to build a full-stack observability system covering system resources, containers, networks, applications, and services is therefore a question every enterprise needs to think about.
Resilience principle
Resilience refers to the software's ability to withstand abnormalities in the hardware and software components it depends on. These abnormalities commonly include hardware failures, hardware resource bottlenecks (such as exhaustion of CPU or NIC bandwidth), business traffic exceeding the software's designed capacity, failures or disasters that disrupt the normal operation of a data center, and faults in dependent software — any factor that can make the business unavailable.
After a business goes online, for most of its running time it may still encounter all kinds of uncertain inputs and unstable dependencies. When such abnormal scenarios occur, the business must maintain service quality as far as possible to meet the "always on" requirement typified by networked services. The core design concept of resilience is therefore design for failure: consider how, under various abnormal dependencies, to reduce the impact of the abnormality on the system and on service quality, and to return to normal as soon as possible.
Practices and common architectures embodying the resilience principle include asynchronous service capabilities; retry, rate limiting, degradation, circuit breaking, and back pressure; master-slave mode; cluster mode; multi-AZ (Availability Zone) high availability; unitization; cross-region (Region) disaster recovery; and remote disaster recovery.
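As a concrete illustration of two of these patterns, below is a minimal Go sketch of retry with exponential backoff wrapped in a crude circuit breaker. The thresholds and the failing dependency are illustrative assumptions; production systems would typically reach for a library such as Sentinel or Hystrix instead.

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

type Breaker struct {
	failures  int
	threshold int
}

// Call opens the breaker after too many consecutive failures, failing
// fast instead of piling load onto an unhealthy dependency.
func (b *Breaker) Call(fn func() error) error {
	if b.failures >= b.threshold {
		return errors.New("circuit open: failing fast")
	}
	if err := fn(); err != nil {
		b.failures++
		return err
	}
	b.failures = 0
	return nil
}

// retry attempts fn up to max times, doubling the wait between tries.
func retry(b *Breaker, fn func() error, max int) error {
	backoff := 10 * time.Millisecond
	var err error
	for i := 0; i < max; i++ {
		if err = b.Call(fn); err == nil {
			return nil
		}
		time.Sleep(backoff)
		backoff *= 2
	}
	return err
}

func main() {
	b := &Breaker{threshold: 3}
	flaky := func() error { return errors.New("dependency timeout") }
	fmt.Println("result:", retry(b, flaky, 5))
}
```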
The following uses a concrete case to describe how resilience is designed into a large-scale system. "Double 11" is a battle Alibaba cannot afford to lose, so its system design strictly follows the resilience principle at the strategic level. For example, at the unified access layer, traffic scrubbing implements a security strategy that defends against black-market attacks, and a fine-grained rate-limiting strategy keeps peak traffic stable, ensuring the back end operates normally. To raise overall high availability, Alibaba achieves cross-region multi-active disaster recovery through a unitization mechanism and intra-city active-active disaster recovery through a same-city disaster recovery mechanism, maximizing the service quality of its IDCs (Internet Data Centers). Within the same IDC, microservices and container technology make stateless business migration possible; multi-replica deployment raises availability; and messaging decouples microservices asynchronously, reducing inter-service dependencies and increasing system throughput. From the perspective of each application, dependencies are sorted out, degradation switches are set, and the system's robustness is continuously strengthened through fault drills, guaranteeing that Alibaba's "Double 11" promotion runs normally and stably.
As digitalization accelerates, more and more digital services become infrastructure for the normal operation of the whole social economy. But the systems supporting these services grow ever more complex, and the cost of depending on uncertain service quality grows ever higher, so systems must be designed with sufficient resilience to cope with all this uncertainty. Especially for core business links in core industries (such as payment links in finance and transaction links in e-commerce), business traffic entry points, and complex dependency chains, resilient design is essential.
Full-process automation principle
Technology is a "double-edged sword". Using containers, microservices, DevOps, and a large number of third-party components reduces distributed complexity and speeds up iteration, but it also increases the complexity of the software technology stack and the number of components, inevitably making software delivery more complex. If this is not well controlled, applications will not be able to enjoy the advantages of cloud native technology. Through the practices of IaC (Infrastructure as Code), GitOps, OAM (Open Application Model), Operators, and a large number of automated delivery tools in CI/CD (Continuous Integration/Continuous Delivery) pipelines, enterprises can standardize their internal software delivery process, and on the basis of standardization achieve automation: through self-describing configuration data and a final-state-oriented delivery process, the whole of software delivery and operations becomes automated.
To achieve large-scale automation, the following four basic principles need to be followed.
1. Standardization
Implementing automation starts with standardization: use containerization, IaC, OAM, and other means to standardize the infrastructure the business runs on, and then further standardize the definition of applications and their delivery process. Only with standardization can the business be freed from dependence on particular people and platforms, and only then can it be operated uniformly and automatically at scale.
2. Final-state orientation
Final-state orientation means declaratively describing the desired configuration of infrastructure and applications, continuously watching the actual running state, and letting the system itself repeatedly change and adjust until it converges on the final state. The final-state-oriented principle emphasizes avoiding direct changes to applications made by assembling sequences of procedural commands through ticketing and workflow systems; instead, set the final state and let the system decide how to carry out the change.
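The idea can be sketched as a reconcile loop in the style of a Kubernetes controller; the `State` type and the single-step adjustments are illustrative assumptions.

```go
package main

import "fmt"

type State struct {
	Replicas int
	Version  string
}

// reconcile makes one corrective step toward the declared desired
// state; it never executes a scripted sequence of imperative commands.
func reconcile(actual, desired State) State {
	if actual.Version != desired.Version {
		actual.Version = desired.Version // roll to the declared version
	}
	switch {
	case actual.Replicas < desired.Replicas:
		actual.Replicas++
	case actual.Replicas > desired.Replicas:
		actual.Replicas--
	}
	return actual
}

func main() {
	desired := State{Replicas: 3, Version: "v2"} // the declared final state
	actual := State{Replicas: 1, Version: "v1"}  // the observed state
	for actual != desired {
		actual = reconcile(actual, desired)
		fmt.Printf("converging: %+v\n", actual)
	}
}
```

This is the same loop structure Kubernetes controllers use: the operator owns only the desired-state declaration, and the system owns the path to it.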
3. Separation of concerns
The end result of automation depends not only on the capability of tools and systems, but also on the people who set goals for the system, so it is essential to find the right goal-setters. When describing the final state of the system, the configurations that concern the main roles — application development, application operations, and infrastructure operations — should be separated from one another, with each role setting only the part of the final state it cares about, keeping the final-state description simple and reasonable.
4. Design for failure
To automate the whole process, you must ensure that the automated processes are controllable and their impact on the system is predictable. We cannot expect an automated system to make no mistakes, but we can guarantee that even when an abnormality occurs, the blast radius of the error is controllable and acceptable. Therefore, when an automated system carries out a change, it must still follow the best practices of manual changes: the change can be executed in gray scale, the results of execution are observable, the change can be rolled back quickly, and the impact of the change is traceable.
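These requirements can be sketched as a batch executor that rolls out in gray scale, observes each batch, and reverts on failure; the node names and the health check below are illustrative assumptions.

```go
package main

import "fmt"

// applyInBatches upgrades nodes a batch at a time and reverts every
// changed node the moment an observation fails, bounding the blast
// radius of a bad change.
func applyInBatches(nodes []string, batch int, healthy func(string) bool) error {
	var done []string
	for i := 0; i < len(nodes); i += batch {
		end := i + batch
		if end > len(nodes) {
			end = len(nodes)
		}
		for _, n := range nodes[i:end] {
			fmt.Println("upgrading", n) // gray-scale step, observable
			done = append(done, n)
			if !healthy(n) {
				for _, d := range done {
					fmt.Println("rolling back", d) // fast, traceable revert
				}
				return fmt.Errorf("change aborted at %s", n)
			}
		}
	}
	return nil
}

func main() {
	nodes := []string{"n1", "n2", "n3", "n4"}
	err := applyInBatches(nodes, 2, func(n string) bool { return n != "n3" })
	fmt.Println("result:", err)
}
```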
Self-healing of failed business instances is a typical process-automation scenario. After a business migrates to the cloud, although the cloud platform greatly reduces the probability of server failure through various technical means, it cannot eliminate the software failures of the business itself. These include crashes caused by defects in the application software, out-of-memory (OOM) kills caused by insufficient resources, and hangs caused by excessive load, as well as problems in system software such as the kernel and daemon processes, plus interference from other colocated applications or operations. As business scale grows, the risk of software failure rises ever higher. The traditional approach requires operations staff to intervene and perform repair actions such as restarting or migrating instances, but at large scale they are often overwhelmed by the stream of failures, sometimes working through the night, and service quality is hard to guarantee — satisfying neither customers nor the development and operations staff themselves.
To enable automatic fault repair, cloud native requires developers to describe, in standard declarative configuration, how the application's health is probed and how the application is started, which service discovery endpoints to mount and register after startup, and the relevant CMDB (Configuration Management Database) information. With these standard configurations, the cloud platform can probe the application repeatedly and perform automated repair actions when it fails. To guard against false alarms in the fault detection itself, application operations staff can also set a cap on the proportion of instances allowed to be out of service at once, so that the cloud platform guarantees business availability while healing failures automatically. Instance failure self-healing not only frees developers and operations staff from tedious operational work, but also handles failures promptly, ensuring business continuity and high service availability.
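A minimal sketch of this self-healing logic, including the operator-declared cap on unavailable instances; all field names are illustrative assumptions, not any platform's actual configuration schema.

```go
package main

import "fmt"

type HealthConfig struct {
	ProbePath      string  // declarative liveness probe endpoint
	MaxUnavailable float64 // e.g. 0.5 = at most half the pool down at once
}

type Instance struct {
	Name    string
	Healthy bool
}

// heal restarts unhealthy instances but caps how many may be out of
// service at once, so automatic repair (or a probe false alarm) can
// never take the whole business offline.
func heal(instances []Instance, cfg HealthConfig) {
	budget := int(cfg.MaxUnavailable * float64(len(instances)))
	restarting := 0
	for i := range instances {
		if !instances[i].Healthy && restarting < budget {
			fmt.Printf("restarting %s (probe %s)\n", instances[i].Name, cfg.ProbePath)
			instances[i].Healthy = true
			restarting++
		}
	}
}

func main() {
	cfg := HealthConfig{ProbePath: "/healthz", MaxUnavailable: 0.5}
	pool := []Instance{{"a", true}, {"b", false}, {"c", true}, {"d", false}}
	heal(pool, cfg)
	fmt.Println("pool after healing:", pool)
}
```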
Zero trust principle
Traditional security architecture is designed around a perimeter model: build a wall between trusted and untrusted resources. For example, the company intranet is trusted while the Internet is not. In this model, once intruders penetrate the perimeter, they can access the resources inside it at will. The adoption of cloud native architecture, the spread of remote work, and the use of mobile devices such as phones for work have completely broken the physical perimeter of the traditional security architecture: employees working from home can also share data with partners, because applications and data are hosted in the cloud.
Today the perimeter is no longer defined by an organization's physical location; it has expanded to every place that needs to access the organization's resources and services. Traditional firewalls and VPNs can no longer handle this new perimeter reliably and flexibly. We therefore need a new security architecture that adapts to the cloud native, mobile-era environment: no matter where employees work, where devices connect, or where applications are deployed, data security is effectively protected. Implementing this new security architecture depends on the zero trust model.
Traditional security architecture assumes that everything inside the firewall is safe; the zero trust model assumes that the perimeter has already been breached and that every request comes from an untrusted network, so every request must be verified. Put simply: "Never trust, always verify." Under zero trust, every request undergoes strong authentication and is authorized against security policy; the user identity, device identity, and application identity associated with the request serve as the core information for judging whether it is safe.
If we frame security architecture around its perimeter, the perimeter of traditional security architecture is the physical network, while the perimeter of zero trust security architecture is identity — the identity of the person, of the device, and of the application. Achieving a zero trust security architecture requires following three basic principles.
1. Explicit verification
Authenticate and authorize every access request. Authentication and authorization need to be based on information such as user identity, location, device information, service and workload information, data classification, and anomaly detection. For example, for communication between internal applications in an enterprise, you cannot simply conclude that the source IP is internal and grant access directly; instead, you should determine the identity and device information of the source application, and authorize it against the current policy.
2. Least privilege
For each request, grant only the permissions necessary at that moment, and the permission policy should adapt to the current request context. For example, employees in the HR department should have access to HR applications but not to applications of the finance department.
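To make the first two principles concrete, here is a minimal Go sketch of identity-based authorization that deliberately ignores the source IP; the request fields and the policy table are illustrative assumptions, not the API of any zero trust product.

```go
package main

import (
	"errors"
	"fmt"
)

type Request struct {
	UserID   string
	DeviceID string
	SourceIP string // present, but deliberately ignored by policy
	Target   string
}

// policy maps verified identities to the services they may reach;
// HR staff reach HR apps but not finance apps (least privilege).
var policy = map[string][]string{
	"hr-alice": {"hr-portal"},
	"fin-bob":  {"finance-ledger"},
}

// authorize explicitly verifies identity and device, then checks the
// least-privilege policy; an internal IP alone grants nothing.
func authorize(req Request, deviceTrusted func(string) bool) error {
	if req.UserID == "" || !deviceTrusted(req.DeviceID) {
		return errors.New("explicit verification failed")
	}
	for _, allowed := range policy[req.UserID] {
		if allowed == req.Target {
			return nil
		}
	}
	return errors.New("denied: identity not authorized for " + req.Target)
}

func main() {
	trusted := func(d string) bool { return d == "laptop-7" }
	req := Request{UserID: "hr-alice", DeviceID: "laptop-7",
		SourceIP: "10.0.0.8", Target: "finance-ledger"}
	fmt.Println(authorize(req, trusted)) // denied despite the internal IP
}
```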
3. Assume breach
Assume the physical perimeter has been breached: the security blast radius must be strictly limited, and the whole network segmented into multiple parts that are aware of users, devices, and applications. Encrypt all sessions and use data analysis techniques to gain visibility into the security posture.
The evolution from a traditional security architecture to a zero trust architecture has a profound impact on software architecture, embodied in the following three aspects.
First, security policies can no longer be configured on the basis of IP addresses. Under cloud native architecture it cannot be assumed that an IP is bound to a service or application: technologies such as automatic elasticity mean an IP may change at any time, so an IP cannot represent an application's identity or anchor a security policy.
Second, identity should become infrastructure. The prerequisite for authorizing service-to-service communication and human access to services is knowing the visitor's identity with certainty. In an enterprise, management of human identities is usually already part of the security infrastructure, but application identities need to be managed as well.
Third, a standard release pipeline. In enterprises, R&D work is usually distributed, and the processes for code version management, building, testing, and release are relatively independent. This decentralized model means the security of services running in production cannot be effectively guaranteed. If the processes of version management, building, and release are standardized, the security of application releases can be reinforced centrally.
Overall, building the full zero trust model covers identity, devices, applications, infrastructure, network, and data. Realizing zero trust is a gradual process: for example, when none of the traffic transmitted inside the organization is encrypted, the first step should be to encrypt visitor traffic to the applications, and then gradually encrypt all traffic. Adopting a cloud native architecture lets you use the security infrastructure and services provided by the cloud platform directly, helping the enterprise implement a zero trust architecture quickly.
Continuous architecture evolution principle
Technology and business now develop very quickly. In engineering practice, few architecture patterns can be defined clearly at the outset and remain applicable across the entire software life cycle; instead, they must be continuously refactored within certain bounds to adapt to changing technical and business needs. Likewise, cloud native architecture itself should, and must, be able to keep evolving, rather than being a closed architecture fixed at design time. Therefore, besides factors such as incremental iteration and sensible target selection, the design also needs to consider organization-level governance and risk-control norms (such as an architecture control committee) and the characteristics of the business itself. Especially when the business iterates at high speed, it is all the more important to consider how to balance architecture evolution against business development.
1. Features and value of evolutionary architecture
An evolutionary architecture means that at the beginning of software development, a scalable, loosely coupled design is adopted so that later changes become easier and the cost of refactoring for upgrades stays low; evolution can then happen at any stage of the software life cycle, including development practice, release practice, and overall agility.
The fundamental reason evolutionary architecture matters so much in industrial practice is the consensus reached in modern software engineering: change is hard to predict, and the cost of rework is extremely high. An evolutionary architecture cannot avoid refactoring, but it emphasizes the evolvability of the architecture: when the architecture must evolve because of changes in technology, the organization, or the external environment, the project as a whole can still follow the principle of strong bounded contexts, ensuring that the logical divisions described in domain-driven design become physical isolation. Through a standardized, highly scalable infrastructure system, an evolutionary architecture adopts a large number of advanced cloud native application architecture practices, such as standardized application models and modular operational capabilities, to achieve physical modularity, reusability, and separation of responsibilities across the whole system architecture. In an evolutionary architecture, each service of the system is decoupled from the others at the structural level, and replacing a service is as easy as swapping a Lego brick.
2. Application of Evolutionary Architecture
In modern software engineering practice, evolutionary architecture has different practices and manifestations at different levels of the system.
In business-oriented application architecture, evolutionary architecture usually goes hand in hand with microservice design. For example, in Alibaba's Internet e-commerce applications (such as the familiar Taobao and Tmall), the entire system architecture is finely designed as thousands of components with clear boundaries. The purpose is to give developers making non-breaking changes greater convenience, avoiding improper coupling that would steer changes in unforeseen directions and hinder the architecture's evolution. One can observe that software with an evolutionary architecture always supports a certain degree of modularity, typically reflected in the classic layered architecture and in microservice best practices.
At the platform R&D level, evolutionary architecture manifests more as a Capability Oriented Architecture (COA). With the gradual spread of cloud native technologies such as Kubernetes, standardized cloud native infrastructure is fast becoming the platform architecture's capability provider, and the Open Application Model (OAM) built on this idea is precisely a COA practice that, from an application-architecture perspective, modularizes standardized infrastructure by capability.
3. Architecture evolution
Evolutionary architecture is still in a stage of rapid growth and adoption, but the whole software engineering field has reached a consensus: the software world is constantly changing; it is dynamic, not static. An architecture is not a simple equation; it is a snapshot of an ongoing process. Evolutionary architecture is therefore an inevitable trend, in business applications and in platform R&D alike. The large volume of architecture-refresh engineering practice across the industry illustrates one problem: when architecture implementation is neglected, the effort required to keep an application current is enormous. A good architecture plan, however, helps applications reduce the cost of adopting new technology. This requires application and platform to agree at the architecture level on standardization, separation of duties, and modularity. In the cloud native era, the Open Application Model (OAM) is fast becoming an important driver of evolutionary architecture.
Concluding remarks
We can see that the construction and evolution of cloud native architecture are based on the core characteristics of cloud computing (for example elasticity, automation, and resilience), combined with business goals and characteristics, to help enterprises and engineers fully release the technology dividend of cloud computing. As the exploration of cloud native continues, technologies keep expanding and scenarios keep multiplying, and cloud native architecture will keep evolving. Throughout these changes, however, typical architecture design principles retain their significance, guiding us in architecture design and technology implementation.