Author: Zhao Zhen, Sangfor Cloud Computing Development Engineer, OpenYurt Community Member

Editor's Note: With the continued rollout of new technologies such as 5G and the Internet of Things, edge computing has become a major trend, and more types of larger, more complex applications and workloads will be deployed to the edge. Based on the talk given by Sangfor cloud computing engineer Zhao Zhen at KubeMeet, a cloud-native container developer salon jointly organized by CNCF and Alibaba Cloud, this article introduces the opportunities and challenges of edge computing, as well as how the edge container open source project OpenYurt is put into practice in an enterprise production environment.

This talk is a practical case study of how the OpenYurt-based solution is deployed in a real production environment.

It covers four aspects. The first is the opportunities and challenges of edge computing. The second is the solutions the Sangfor platform has built to address these challenges so that users can make better use of edge computing. The third is how the solution is combined with OpenYurt and the key points of that implementation. The last is an outlook on the future of the industry and the development of the community.

Opportunities and challenges of edge computing

With the arrival of 5G and the rise of live streaming and the Internet of Things, more and more edge devices are in use, and the data they generate is enormous. For example, a single 1080p surveillance camera on a smart terminal can generate 10 GB of data per minute. A small to medium-sized city has 1 million to 1.5 million such cameras, and the number keeps growing. In such edge scenarios, the volume of data to be processed is huge.

In the era of the Internet of Everything, many smart home devices have appeared. Beyond simply connecting to a gateway, they also have a large amount of data to process. This is another application scenario on the edge side.

All of the above are the opportunities we have encountered, but what are the challenges?

For some traditional industries, cloud computing capacity may be quite limited. For example, there are many private cloud and government proprietary cloud scenarios on the market that cannot scale out their compute the way large public cloud vendors can. In the current market environment, both the cloud and the terminal side fall short of expectations, mainly for the following reasons.

First, the penetration rate of devices for end-side data collection is low, so a lot of useful data cannot be collected for analysis and decision-making by the "brain" in the cloud.

Second, the data that is collected is low-dimensional and single-purpose, so some valuable data is missed.

Third, maintaining front-end equipment is very difficult. Taking cameras as an example, we cannot monitor and maintain every camera closely; by the time a problem is traced, several days may have passed since the incident, and that data is lost.

Fourth, data standards differ across the industry. Equipment keeps being updated and iterated, and data standards keep changing with it. There are many different types of devices on the market, and when all of their data is centralized in the cloud for processing, the cloud's capabilities cannot keep up.

The main bottlenecks of the traditional cloud are resources and efficiency. A 1080p camera may generate 10 GB of data per minute, while the bandwidth between the cloud and the edge is very limited; a single camera could saturate the entire network and make other services unusable. The other limitation is efficiency. Many private clouds have limited capacity, so data processing cannot achieve the desired effect or respond in time. For industries that require low latency, this is very dangerous.

At the same time, the traditional link between the terminal and the cloud is unreliable. For example, if the terminal loses contact with the cloud due to network jitter, instructions from the cloud cannot be delivered to the edge in time, which also brings certain risks.

Furthermore, device updates in the traditional model are very slow; after a one-time deployment there may be no iteration for a long time. However, in some emerging scenarios, such as smart intersections, AI algorithms require continuous model training. After deployment, the devices collect data, the data is uploaded to the cloud to train a more optimized model, and that improved version is then pushed back to the terminal for more intelligent operation. This continuous cycle of software updates and iteration is also impossible in the traditional cloud model.

Sangfor Intelligent Edge Platform Solutions

To address these problems, we provide users with a solution. Let's first look at its overall architecture, which covers two sides: the edge side and the center side.

First, we adopt a cloud-edge integrated architecture. We deploy a cloud-edge all-in-one machine, which can be understood as a small server, at the user's edge site, in the same location as the terminal devices, so that together they form an independent small network. The devices at the edge can send data to the cloud-edge all-in-one machine, where it can be processed and responded to as quickly as possible.

Second, even if the connection to the cloud is lost, the terminal side can still reach the cloud-edge all-in-one machine over the local network, and we can build AI algorithms into it so that commands in specific scenarios can still be responded to.

Finally, there is data processing. Since bandwidth to the cloud is limited, we first gather the data in the all-in-one machine and do a round of processing to extract the valid data. The processed data is then reported to the center side over an SD-WAN network, which reduces the pressure on bandwidth and improves the data processing capability of the center side.

The edge autonomy capability in the case of cloud-edge disconnection is built on OpenYurt together with the community: it organically combines the cloud-edge operation and maintenance channel, edge autonomy, and unitized deployment to form the edge computing architecture shown above.
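To make the edge autonomy part concrete, here is a minimal sketch, using client-go, of how a node can be marked for autonomy so that its workloads keep running while the cloud-edge link is down. The annotation key, kubeconfig path, and node name are assumptions based on the OpenYurt documentation we worked with and may differ across releases.

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from a kubeconfig (path is illustrative).
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	// Annotation that asks OpenYurt to keep the node's workloads running
	// while the cloud-edge link is down; the exact key may differ across
	// OpenYurt releases, so check the docs of the version you deploy.
	patch := []byte(`{"metadata":{"annotations":{"node.beta.openyurt.io/autonomy":"true"}}}`)

	nodeName := "edge-node-01" // hypothetical edge node name
	if _, err := client.CoreV1().Nodes().Patch(
		context.TODO(), nodeName, types.StrategicMergePatchType, patch, metav1.PatchOptions{},
	); err != nil {
		panic(err)
	}
	fmt.Printf("node %s marked for edge autonomy\n", nodeName)
}
```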

The ultimate goal of the cloud-edge all-in-one machine is to open up the last mile for intelligent transformation.

It provides many functions, including a control panel, a platform for AI algorithms, the collection of monitoring data and logs, the all-important security and network management, and video encoding and decoding. The box also supports hardware adaptation, such as the Arm and x86 architectures, different GPUs, and the underlying operating system.

After the adaptation of the underlying hardware, the AI algorithms, the network equipment, and video decoding is complete, the overall solution is handed over to the user, helping them containerize their business deployments faster and greatly improving the efficiency of intelligent transformation.

Combining technical solutions with OpenYurt

One of the more important usage scenarios of edge computing is smart intersections. Every intersection in a city has a different strategy. For example, at intersections with very heavy traffic, the focus is on traffic control: because vehicles are so dense, the traffic lights may not be able to respond in time and need to be assisted by AI algorithms.

For another example, in very dense traffic, someone with a criminal record whom the public security system is monitoring may pass by. In that case, the face recognition capability of the AI algorithm should notify nearby police officers in time so they can take precautions.

There are also many urban roads that merge with village roads. This combination of urban and rural roads requires not only traffic control but also traffic safety management. We need to add an intelligent voice broadcast system to the AI algorithm, combined with dynamic alerting, to help prevent traffic accidents.

These are the AI algorithm scenarios at smart intersections. The AI algorithms can be pushed intelligently according to the area, which relies on OpenYurt's unitized management for edge computing: we define different units, and once a node comes online, different AI algorithms can be pushed to it according to the area it belongs to, as shown in the sketch below.
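As a rough illustration of this unitized management, the following client-go sketch assigns edge nodes to different NodePools, one per area. The node names, pool names, and the label key are illustrative assumptions, and the exact label may vary between OpenYurt versions; a per-pool workload (for example a UnitedDeployment) can then carry a different AI model image for each pool.

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// joinPool labels a node so that OpenYurt can assign it to the given NodePool.
// The label key follows the OpenYurt docs we used; verify it against the
// release you are running.
func joinPool(client *kubernetes.Clientset, nodeName, pool string) error {
	patch := []byte(fmt.Sprintf(
		`{"metadata":{"labels":{"apps.openyurt.io/desired-nodepool":"%s"}}}`, pool))
	_, err := client.CoreV1().Nodes().Patch(
		context.TODO(), nodeName, types.StrategicMergePatchType, patch, metav1.PatchOptions{})
	return err
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	// Hypothetical intersections: each area becomes its own unit (NodePool),
	// so a different AI model can later be rolled out per pool.
	assignments := map[string]string{
		"edge-node-downtown-01": "pool-traffic-control",
		"edge-node-suburb-07":   "pool-road-safety",
	}
	for node, pool := range assignments {
		if err := joinPool(client, node, pool); err != nil {
			panic(err)
		}
		fmt.Printf("node %s assigned to %s\n", node, pool)
	}
}
```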

Having covered the real business scenarios, let's explain our transformation approach in the context of the overall platform architecture.

The KubeManager (KM) architecture is our company's self-developed container management platform. The bottom layer manages multiple K8s clusters, and on top of that it integrates application stores and software, data collection and monitoring, and visual dashboards for users.

It is mainly divided into two large modules. The lower left of the figure above is the management cluster, which carries all of the content mentioned above. The user cluster accepts requests through the access layer, forwards the API data to the API business layer, and then stores the data in the etcd of native K8s.

Our transformation mainly targets the user cluster, which is where OpenYurt is integrated. We also ran into many problems during the transformation and rollout.

With multiple masters, the tunnel traffic needs to be adapted, which is very troublesome for users to do themselves. We therefore worked with the community to complete this adaptation and integrated it into the platform, so users can use it directly without worrying about the various adaptation issues.

After a user cluster is connected to the KM cluster, it needs to be converted from an ordinary K8s cluster into an edge cluster, and we provide this conversion automatically.

OpenYurt is based on native K8s, but because the clusters are built differently, there are some differences in the later platform integration, such as automatic certificate management, rotation, and issuance, which need to be solved during the initial integration before OpenYurt can be used.

After the transformation, the user cluster architecture switches from the left side of the figure to the right side. The main changes are as follows:

First, the YurtControllerManager component was changed. It used to be a Deployment with a replica count of 1; it is now a DaemonSet, so it automatically scales up and down as the number of masters changes. A sketch of this change follows.
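The snippet below is a minimal sketch of what such a change amounts to, not the project's actual manifest: a client-go program that creates a DaemonSet pinned to master nodes so one replica runs per master. The image tag, namespace, and tolerations are illustrative assumptions.

```go
package main

import (
	"context"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	labels := map[string]string{"app": "yurt-controller-manager"}
	ds := &appsv1.DaemonSet{
		ObjectMeta: metav1.ObjectMeta{Name: "yurt-controller-manager", Namespace: "kube-system"},
		Spec: appsv1.DaemonSetSpec{
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					// Schedule only on master nodes so the number of replicas
					// tracks the number of masters automatically.
					NodeSelector: map[string]string{"node-role.kubernetes.io/master": ""},
					Tolerations: []corev1.Toleration{{
						Key:      "node-role.kubernetes.io/master",
						Operator: corev1.TolerationOpExists,
						Effect:   corev1.TaintEffectNoSchedule,
					}},
					Containers: []corev1.Container{{
						Name:  "yurt-controller-manager",
						Image: "openyurt/yurt-controller-manager:latest", // illustrative tag
					}},
				},
			},
		},
	}

	if _, err := client.AppsV1().DaemonSets("kube-system").Create(
		context.TODO(), ds, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
```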

Second, overall traffic is proxied through Nginx to reach the different API servers, so YurtHub does not access the APIServer directly but goes through Nginx. Even so, it still achieves the desired effects of combining an edge cluster with OpenYurt, such as traffic filtering and edge autonomy.
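The Nginx layer itself is configuration rather than code, but the idea can be sketched as a tiny round-robin reverse proxy in Go. The apiserver endpoints and listen address below are illustrative, and a real deployment would also need TLS and health checks, which Nginx handles for us.

```go
package main

import (
	"net/http"
	"net/http/httputil"
	"net/url"
	"sync/atomic"
)

func mustParse(raw string) *url.URL {
	u, err := url.Parse(raw)
	if err != nil {
		panic(err)
	}
	return u
}

// A minimal round-robin reverse proxy, standing in for the Nginx layer that
// sits between YurtHub and the multiple kube-apiservers.
func main() {
	backends := []*url.URL{
		mustParse("https://10.0.0.11:6443"), // illustrative apiserver endpoints
		mustParse("https://10.0.0.12:6443"),
		mustParse("https://10.0.0.13:6443"),
	}

	var counter uint64
	proxy := &httputil.ReverseProxy{
		Director: func(req *http.Request) {
			// Pick the next backend in round-robin order.
			i := atomic.AddUint64(&counter, 1) % uint64(len(backends))
			target := backends[i]
			req.URL.Scheme = target.Scheme
			req.URL.Host = target.Host
		},
	}

	// YurtHub (or any client) would point at this local address instead of a
	// single apiserver; the port is illustrative.
	if err := http.ListenAndServe("127.0.0.1:7443", proxy); err != nil {
		panic(err)
	}
}
```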

Industry Outlook and Community Development Prospects

Finally, let’s talk about the development and future expectations of the entire industry.

As can be seen from the figure above, the growth of edge devices is a continuous process, and the development of the whole industry will generate a great deal of demand for them. Such demand will in turn drive the industry forward, and that development is inseparable from the contributions of the edge communities, including the OpenYurt community. We hope OpenYurt can help every user become more edge-enabled, secure, and intelligent.

If you are interested, you can search the group number 31993519 to join the DingTalk group of the OpenYurt project.

Click here to learn more about the OpenYurt project!

