Description: Recently, at the "Distributed Cloud | 2021 Global Distributed Cloud Conference · Cloud Native Forum" hosted by the Global Distributed Cloud Alliance, Alibaba Cloud senior technical expert Huang Yuqi delivered a keynote speech titled "New Frontier of Cloud Native: Alibaba Cloud Edge Computing Cloud Native Landing Practice".
Author | Huang Yuqi
Source | Alibaba Cloud Native
A few days ago, at the "Distributed Cloud | 2021 Global Distributed Cloud Conference · Cloud Native Forum" hosted by the Global Distributed Cloud Alliance, Alibaba Cloud senior technical expert Huang Yuqi delivered a keynote speech titled "New Frontier of Cloud Native: Alibaba Cloud Edge Computing Cloud Native Landing Practice".
Hello everyone, I am Huang Yuqi from the Alibaba Cloud native team. Thank you very much for the opportunity to share with you. Today's topic is "New Frontier of Cloud Native: Alibaba Cloud Edge Computing Cloud Native Landing Practice". As the title suggests, the talk covers several parts: cloud native, edge computing, the architecture design that combines the two, and Alibaba Cloud's practices and cases in commercial products and open source.
The concept of cloud native, as everyone knows it today, is essentially a set of best practices and methodologies for "using cloud computing technology to reduce costs and increase efficiency for users". The term has therefore been in a process of continuous self-evolution and innovation from its birth and growth to its enormous popularity today. Cloud native, as a collection of tools, architectures, and methodologies, has taken deep root and is widely used. So how exactly is cloud native defined? In the early days, cloud native covered containers, microservices, DevOps, and CI/CD; after 2018, CNCF added service mesh and declarative APIs to the definition.
Looking back, let's take a rough look at the history of cloud native development. In the early days, the emergence of Docker led a large number of businesses to containerize. Containerization, with its unified deliverables and isolation, drove the rapid development of DevOps. The emergence of Kubernetes decoupled resource orchestration and scheduling from the underlying infrastructure, made the management of applications and resources far easier, and achieved efficient resource orchestration and scheduling through container orchestration. Subsequently, service mesh technology, represented by Istio, decoupled service implementation from service governance. Today, cloud native is almost everywhere and "all-encompassing", and more and more companies and industries are embracing it.
As one of the practitioners of cloud native technology, Alibaba has made cloud native one of its core technology strategies. This comes from Alibaba's accumulation and practice in the cloud native field over the past ten years, which can be roughly divided into three stages:
- In the first stage, basic cloud native capabilities such as core middleware, containers, and the Feitian (Apsara) cloud operating system were built up through the internetization of the application architecture;
- The second stage was the comprehensive cloud-native transformation of core systems and the full commercialization of cloud native technology;
- The third stage is the full implementation and upgrade of cloud native technology, in which the next generation of cloud native technology, represented by Serverless, is leading the upgrade of the entire technical architecture.
Alibaba Cloud Container Service for Kubernetes (ACK), as the commercial platform for Alibaba Cloud's cloud native capabilities, provides customers with a rich set of cloud native products and capabilities. These are the best evidence of embracing cloud native. We firmly believe that cloud native is the future.
Cloud native technology is ubiquitous. As a provider of cloud native services, Alibaba Cloud believes that cloud native technology will continue to develop rapidly and be applied to "new application workloads", "new computing forms", and "new physical boundaries". From the big picture of the Alibaba Cloud cloud native product family, we can see that containers are being used in more and more types of applications and cloud services; they are carried by more and more computing forms, such as serverless and function compute; and these rich forms have begun to move from the traditional central cloud to edge computing and on to the terminal. This brings us to today's topic, cloud native in edge computing. Let's first take a look at what edge computing is.
First, let's look at edge computing from an intuitive perspective. With the development of industries and services such as 5G, IoT, audio and video, live streaming, and CDN, there is a clear industry trend: more and more computing power and services are sinking to locations close to data sources or end users, in order to achieve better response times and lower costs. This is clearly different from the traditional central cloud computing model, and it is being applied more and more widely in industries such as automotive, agriculture, energy, and transportation.
Looking at edge computing from the perspective of IT architecture, it has an obvious layered structure determined by business latency and computing form. Gartner and IDC describe the top-level architecture of edge computing as follows: Gartner divides edge computing into three parts, "Near Edge", "Far Edge", and "Cloud", corresponding to common device terminals, IDC/CDN nodes below the cloud, and public/private clouds. IDC uses a more intuitive division into "Heavy Edge" and "Light Edge", representing the data center dimension and the low-power end side respectively. From the figure we can see that in this layered structure, the layers are interdependent and collaborate with each other.
This definition also reflects the industry consensus on the relationship between edge computing and cloud computing. Having covered the background and architecture, let's look at the trends of edge computing. We analyze three major trends from the dimensions of business, architecture, and scale:
First, with the integration of AI, IoT, and edge computing, the services running in edge computing scenarios will grow in variety, scale, and complexity. The figure also shows some striking numbers.
Second, as an extension of cloud computing, edge computing will be widely used in hybrid cloud scenarios. This requires future infrastructure to be decentralized, edge facilities to be autonomous, and edge clouds to be cloud-hosted. The figure also quotes some numbers on this.
Third, the development of infrastructure will ignite the growth of edge computing. With the growth of the 5G, IoT, and audio and video industries, an explosion of edge computing is only a matter of time. The explosive growth of live streaming and online education during last year's epidemic is one example.
As consensus on the architecture has formed and implementations have landed, we found that the scale and complexity of edge computing are increasing day by day, while operation and maintenance methods and capabilities are lagging behind and becoming overwhelmed. So how do we solve this problem?
Cloud and edge are naturally an indivisible organic whole, and the operation and maintenance collaboration of "cloud-edge-end integration" is the solution the industry currently agrees on. As practitioners in the cloud native field, we try to think about and solve this problem from a cloud native perspective: if "cloud-edge-end integration" is empowered by cloud native, it will better accelerate the process of cloud-edge integration.
Under this top-level architecture design, we abstracted the cloud native architecture of cloud-edge-end collaboration: in the center (the cloud), we retain the original cloud native control and productization capabilities and extend them downward through a cloud-edge control channel; at the edge, a large number of edge nodes and edge businesses become workloads of the cloud native system, and through service traffic and service governance they interact better with end devices. This completes the integration of business, operation and maintenance, and ecosystem. Through edge cloud native, we get the same operation and maintenance experience as on the cloud, along with better isolation, security, and efficiency; the commercial product follows the same logic.
Next, we introduce Alibaba Cloud's cloud native practices for edge computing, in both commercial products and open source.
Alibaba Cloud ACK@Edge follows the service concept of "standard management and control in the cloud, moderate autonomy at the edge", with a clearly layered three-tier "cloud-edge-end" structure and coordinated capabilities. The first layer is the central cloud native control capability, which provides standard cloud native northbound interfaces for upper-level business integration, such as City Brain, Industrial Brain, CDN PaaS, and IoT PaaS. The second layer is the cloud-edge operation and maintenance control channel, with multi-link software and hardware solutions to carry the control traffic and business traffic that sink from the cloud to the edge. Further down is the key edge side, where we have added capabilities on top of native Kubernetes: edge autonomy, unitized management, traffic topology, fine-grained detection of edge computing power status, and more. Edge-cloud collaboration thus forms a complete cloud-edge management and control closed loop. Currently, this architecture is widely used in CDN, IoT, and other fields.
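To make the idea of unitized management more concrete, here is a minimal sketch, not the ACK@Edge implementation, that uses the standard Kubernetes client-go API to list the nodes registered with the cloud control plane and group them by a node-pool label. The label key `example.com/edge-nodepool` is a placeholder assumption; the actual node-pool mechanism in ACK@Edge/OpenYurt uses its own labels and CRDs.

```go
package main

import (
	"context"
	"fmt"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

// poolLabel is a hypothetical label used here to group edge nodes into units;
// the real ACK@Edge/OpenYurt node-pool label key differs.
const poolLabel = "example.com/edge-nodepool"

func main() {
	kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// List all nodes registered with the cloud control plane and group them
	// by their node-pool label, mimicking unitized edge management.
	nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	pools := map[string][]string{}
	for _, n := range nodes.Items {
		pool := n.Labels[poolLabel]
		if pool == "" {
			pool = "<unassigned>"
		}
		pools[pool] = append(pools[pool], n.Name)
	}
	for pool, members := range pools {
		fmt.Printf("node pool %q: %v\n", pool, members)
	}
}
```

The point of the sketch is that edge nodes remain ordinary Kubernetes nodes from the cloud's point of view, so standard tooling can be used to organize them into units for per-region or per-site management.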
So what core capabilities and business goals do edge containers need? The figure shows four capabilities and five goals. The four capabilities are edge computing power management, containerized management of edge business, high availability support for edge business, and, ultimately, realizing the edge cloud native ecosystem. Together these mean that edge computing power can be accessed and operated, business can be managed and orchestrated, services are highly available, and the business has an ecosystem. The following briefly describes the design of these core capabilities.
Alibaba Cloud's edge container product ACK@Edge uses built-in SD-WAN capabilities to achieve cloud-edge network interconnection and business traffic interoperability, which greatly improves the efficiency, stability, and security of cloud-edge collaboration. Through integration with cloud resources, it achieves elastic interoperability between resources on and below the cloud, improving business elasticity in edge scenarios.
The second core capability is edge autonomy. In the cloud-edge integrated architecture, operation and maintenance collaboration is important, but it is usually constrained by the network conditions between the cloud and the edge. The edge therefore needs an appropriate degree of autonomy to ensure business continuity and stable operation. In other words, even without cloud management and control, edge resources and edge services can still complete the full lifecycle management of the business, including creation, start and stop, migration, scaling in and out, and so on.
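As a rough illustration of how such autonomy is switched on per node, the sketch below uses client-go to annotate an edge node. The annotation key `node.beta.openyurt.io/autonomy` follows the OpenYurt convention as we understand it; treat it as an assumption and verify it against the OpenYurt documentation for the version you run.

```go
package main

import (
	"context"
	"fmt"
	"os"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	if len(os.Args) < 2 {
		fmt.Println("usage: mark-autonomy <node-name>")
		os.Exit(1)
	}
	nodeName := os.Args[1] // the edge node to mark as autonomous

	kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	node, err := client.CoreV1().Nodes().Get(context.TODO(), nodeName, metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	if node.Annotations == nil {
		node.Annotations = map[string]string{}
	}
	// Annotation key assumed from the OpenYurt per-node autonomy convention;
	// with it set, workloads on the node are kept running during cloud-edge
	// network interruptions instead of being evicted.
	node.Annotations["node.beta.openyurt.io/autonomy"] = "true"

	if _, err := client.CoreV1().Nodes().Update(context.TODO(), node, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
	fmt.Printf("node %s marked for edge autonomy\n", nodeName)
}
```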
The third capability, support for heterogeneous resources, is easier to understand. One of the hallmark features that distinguishes edge computing from traditional central cloud computing is the variety of computing and storage resources; the heterogeneity is obvious. ACK@Edge currently supports the ARM and x86 CPU architectures and the Linux and Windows operating systems, and supports mixed deployment of Linux and Windows applications, solving the problem of heterogeneous resources in edge scenarios.
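In plain Kubernetes terms, scheduling onto such mixed fleets can be done with the well-known node labels `kubernetes.io/arch` and `kubernetes.io/os`. The sketch below is a minimal example under that assumption; the container image and names are illustrative only, and ACK@Edge may add further mechanisms on top.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// edgePodSpec returns a pod spec pinned to nodes of the given CPU architecture
// and operating system, using the well-known Kubernetes node labels.
func edgePodSpec(arch, osName string) corev1.PodSpec {
	return corev1.PodSpec{
		NodeSelector: map[string]string{
			"kubernetes.io/arch": arch,   // e.g. "arm64" or "amd64"
			"kubernetes.io/os":   osName, // e.g. "linux" or "windows"
		},
		Containers: []corev1.Container{
			{
				Name:  "edge-app",
				Image: "registry.example.com/edge-app:latest", // placeholder image
			},
		},
	}
}

func main() {
	// One workload variant per architecture/OS combination present at the edge,
	// allowing Linux and Windows applications to be deployed side by side.
	fmt.Printf("%+v\n\n", edgePodSpec("arm64", "linux"))
	fmt.Printf("%+v\n", edgePodSpec("amd64", "windows"))
}
```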
Working with Alibaba Cloud's Container Registry service, ACK@Edge provides multi-region delivery capabilities in edge scenarios, supporting the multi-region delivery of a variety of cloud native artifacts, including container images, application orchestration resource packages, and so on.
Here is another piece of news to share with you: all of the above core edge container capabilities have been open sourced in the edge container platform project OpenYurt. OpenYurt is an official CNCF edge container project, an intelligent open platform that extends upstream Kubernetes to edge computing.
As the core framework of ACK@Edge, OpenYurt has served more than one million container instances and is widely used in mainstream edge computing scenarios. Having introduced the commercial and open source practices, here are several cases to share with you:
The first is Hema Xiansheng (Freshippo)'s digital transformation of "people, goods, and place" based on edge cloud native. Through the cloud native system, various types of heterogeneous edge computing power are uniformly accessed and scheduled, including ENS (Alibaba Cloud Edge Node Service), offline self-built edge gateway nodes, GPU nodes, and so on, gaining strong resource elasticity and the flexibility brought by co-locating businesses. Using the edge cloud native AI solution provided by ACK@Edge, a "Sky Eye" AI system integrating cloud, edge, and end was built, which takes full advantage of nearby edge access and real-time processing and achieves all-round cost reduction and efficiency improvement: store computing resource costs were reduced by 50%, and the efficiency of opening new stores increased by 70%.
The second is a case with a video website customer, which uses ACK@Edge to manage edge computing power across regions and resource types and to deploy video acceleration services. Thanks to the support for heterogeneous resources, the customer gained strong resource elasticity in edge computing scenarios. One number to share: through the elasticity of edge containers and heterogeneous resource management capabilities, about 50% of the cost can be saved.
The third case is the deployment of ACK@Edge in an IoT smart building project, where IoT gateway devices are hosted as edge nodes under ACK@Edge's cloud management and control, and the business on the gateway devices interacts with the building's smart devices. Operation and maintenance of the gateways and end devices is unified into the central cloud, greatly improving efficiency.
After full discussion with community members, the OpenYurt community has also released its 2021 roadmap. Anyone interested is welcome to contribute.
OpenYurt Community 2021 roadmap:
https://github.com/openyurtio/openyurt/blob/master/docs/roadmap.md
- OpenYurt official website: https://openyurt.io
- GitHub project address: https://github.com/openyurtio/openyurt
- DingTalk group number: 31993519; search for it to join the group for discussion!