Author | Liu Yu
Preface: Knative is a serverless framework based on Kubernetes. Its goal is to develop a cloud-native, cross-platform serverless orchestration standard.
Knative introduction
Knative implements its serverless standard by integrating container (or function) builds, workload management (dynamic scaling), and an event model.
Under the Knative architecture, the collaboration between these roles is shown in the figure below.
Collaboration of roles under the Knative architecture
- Developers: serverless service developers, who can directly use the native Kubernetes API to deploy serverless services on top of Knative.
- Contributors: mainly refers to contributors to the Knative community.
- Knative can be integrated into supporting environments, such as those offered by cloud vendors or built inside an enterprise. Knative is currently implemented on top of Kubernetes, so it can be deployed wherever Kubernetes is available.
- Users: end users who access services through the Istio gateway, or who trigger serverless workloads in Knative through the event system.
As a general-purpose serverless framework, Knative consists of three core components:
- Tekton: provides general-purpose build capabilities from source code to container image. The Tekton component is mainly responsible for fetching source code from a code repository, compiling it into a container image, and pushing the image to an image registry. All of these operations are performed inside Kubernetes Pods.
- Eventing: provides a complete set of event-management capabilities, such as event ingestion and triggering. The Eventing component provides a complete design for the serverless event-driven model, including access to external event sources, event registration, subscription, and event filtering. The event model effectively decouples producers from consumers: a producer can generate events before a consumer starts, and a consumer can listen for events before a producer starts.
- Serving: manages serverless workloads. It integrates well with events, provides request-driven auto-scaling, and can scale down to zero when there is no traffic to process. The responsibility of the Serving component is to manage workloads that serve external requests. Its most important feature is automatic scaling, and there is currently no preset limit on the scaling boundary. Serving also supports gray-scale (canary) releases; a minimal sketch of a Knative Service that uses these capabilities follows this list.
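For illustration, here is a minimal sketch of a Knative Service manifest exercising the Serving capabilities described above: request-driven autoscaling (with scale-to-zero) and a gray-scale traffic split between two revisions. The service name, image path, and revision names are hypothetical, and the exact autoscaling annotations that apply depend on the autoscaler used by your installation.

```bash
# Sketch only: a hypothetical Knative Service "demo-app" with an autoscaling hint
# and a gray-scale traffic split between an old revision and the latest one.
kubectl apply -f - <<'EOF'
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: demo-app
  namespace: default
spec:
  template:
    metadata:
      name: demo-app-v2
      annotations:
        # Upper bound for the Knative Pod Autoscaler; scale-to-zero is its default behavior
        autoscaling.knative.dev/maxScale: "10"
    spec:
      containers:
        - image: example.registry.local/demo-app:v2   # illustrative image path
  traffic:
    - revisionName: demo-app-v1   # previously deployed revision keeps most traffic
      percent: 90
    - latestRevision: true        # the revision defined above receives a 10% gray-scale slice
      percent: 10
EOF
```

Applying a manifest like this creates a new revision; gradually shifting the percent values moves traffic from the old revision to the new one.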
Knative deployment
This article takes Alibaba Cloud as an example to explain in detail how to deploy Knative and its related services. First, log in to the Container Service management console, as shown in the figure.
Alibaba Cloud Container Service Management Console
If there is no cluster, you can choose to create a cluster first, as shown in the figure below.
Configure and create a cluster
Creating a cluster is relatively slow, so wait patiently for it to complete; a successful creation is shown in the figure.
Schematic diagram of successful cluster creation
After entering the cluster, select "Applications" in the left-hand navigation, find "Knative", and click "One-click Deployment", as shown in the figure.
Create Knative application
After a while, once the Knative installation is complete, you can see that the core components are in the "Deployed" state, as shown in the figure.
Knative application deployment completed
At this point, we have completed the deployment of Knative.
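If you have kubectl access to the cluster (for example through CloudShell, which is used later in this article), you can optionally verify the installation from the command line. The namespaces below are the upstream Knative defaults; a managed installation may place the components in different namespaces.

```bash
# Optional check, assuming the upstream default namespaces:
# list the Knative Serving and Eventing control-plane pods.
kubectl get pods -n knative-serving
kubectl get pods -n knative-eventing
```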
Experience and testing
First, you need to create an EIP (Elastic IP address) and bind it to the cluster's API Server service, as shown in the figure below.
Binding an EIP to the API Server
After completion, test the serverless application. Select "Knative" under "Applications", and in service management choose "Create from Template", as shown in the figure.
Quickly create sample applications
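The template used here is the standard Knative "helloworld-go" sample. Its manifest typically resembles the following sketch; the image path and the TARGET environment value are illustrative and may differ from the template shipped with the console.

```bash
# Sketch of the helloworld-go sample Service; the image path is illustrative.
kubectl apply -f - <<'EOF'
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: helloworld-go
  namespace: default
spec:
  template:
    spec:
      containers:
        - image: registry.example.com/knative-sample/helloworld-go:latest  # illustrative image
          env:
            - name: TARGET        # the sample includes TARGET in its response
              value: "Knative"
EOF
```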
After the creation is complete, you can see that a Serverless application has appeared in the console, as shown in the figure.
The sample application was created successfully
At this point, we can click on the application name to view the details of the application, as shown in the figure below.
View sample application details
To make testing easier, you can add a Host entry to the local hosts file:
101.200.87.158 helloworld-go.default.example.com
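Alternatively, if you prefer not to modify the local hosts file, you can send the request with an explicit Host header, for example with curl, using the EIP and the domain from this example:

```bash
# Request the EIP directly while presenting the Knative-assigned domain as the Host header.
curl -H "Host: helloworld-go.default.example.com" http://101.200.87.158
```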
After the setting is complete, open the domain name assigned by the system in a browser; the expected result is output, as shown in the figure.
Browser test sample application
So far, we have completed the deployment and testing of a serverless application based on Knative.
At this point, we can also manage the cluster through CloudShell. On the cluster list page, choose to manage through CloudShell, as shown in the figure.
Cluster management list
Manage the created cluster through CloudShell, as shown in the figure.
CloudShell window
Execute the following command:
kubectl get knative
You can see the newly deployed Knative application, as shown in the figure.
CloudShell view Knative application
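A few additional queries are useful when inspecting the application from the command line. The short name ksvc and the revision and route resources below are defined by Knative Serving, and the service name matches the helloworld-go sample deployed above.

```bash
# The Knative Service (short name: ksvc) shows the URL and readiness of the service.
kubectl get ksvc helloworld-go

# Revisions and routes created for the service by Knative Serving.
kubectl get revisions
kubectl get routes
```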
About the author: Liu Yu, Electronic Information, National University of Defense Technology; Alibaba Cloud Serverless product manager; Alibaba Cloud Serverless evangelist; special lecturer at the CIO Academy.