Author: Jia Xu, Alibaba Cloud Container Service Technical Expert
# Introduction
Kubernetes applications in production are growing in both popularity and complexity, and with them the challenges of ensuring stability.
Building a comprehensive, in-depth observability architecture and system is one of the key factors for improving system stability. ACK distills observability best practices and exposes them to users through Alibaba Cloud product capabilities, so that observability tools and services become infrastructure that empowers users, helps them make the most of product features, and improves the stability of their Kubernetes clusters and their overall experience.
This article introduces how to build a Kubernetes observability system and the best practices for implementing one based on Alibaba Cloud products.
# Observability Architecture of the Kubernetes System
The observability challenges of the Kubernetes system include:
- Complexity of the K8s architecture. The system consists of a control plane and a data plane, each containing multiple components that communicate with one another; the control plane and data plane are bridged and aggregated through kube-apiserver.
- Dynamics. Resources such as Pods and Services are created dynamically and assigned IPs; when a Pod is rebuilt it receives new resources and a new IP, so dynamic service discovery is needed to locate monitoring targets.
- Microservice architecture. Applications are decomposed into multiple components following a microservice architecture, and the number of replicas of each component can be scaled automatically or manually as demand changes.
Given these challenges, and especially as cluster sizes grow rapidly, efficient and reliable observability of the Kubernetes system is the cornerstone of system stability.
So, how to improve the observability of the Kubernetes system in a production environment?
Observability approaches for the Kubernetes system include metrics, logs, distributed tracing, Kubernetes events, the NPD framework, and more. Each approach reveals the state and data of the system from a different dimension. In production we usually need to combine several of them, and sometimes link them together, to form a complete, multi-dimensional observability system, improve coverage of different scenarios, and improve the overall stability of the Kubernetes system. The following outlines observability solutions for the K8s system in production.
## Metrics
Prometheus is the de facto industry standard for metrics collection. It is an open-source system monitoring and alerting framework inspired by Google's Borgmon monitoring system. In 2012, former Google engineers at SoundCloud created Prometheus and developed it as a community open-source project; the project was officially released in 2015, and in 2016 Prometheus joined the Cloud Native Computing Foundation (CNCF).
Prometheus has the following characteristics:
- A multi-dimensional data model (time series identified by metric name and key/value label pairs)
- A flexible query and aggregation language, PromQL
- Local storage as well as integration with remote/distributed storage
- Time-series collection via an HTTP-based pull model
- Push mode supported through Pushgateway (an optional Prometheus middleware)
- Targets discovered through dynamic service discovery or static configuration
- Support for a variety of charts and dashboards
Prometheus periodically scrapes the metrics that components expose under the /metrics path of an HTTP(S) endpoint, stores them in its TSDB, and provides PromQL-based query and aggregation.
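As a minimal illustration of this pull model, a /metrics endpoint can be scraped by hand; the namespace and service below are placeholders for whichever exporter runs in your cluster:

```
# Forward a local port to a service that exposes Prometheus metrics
# (namespace and service name are examples; substitute your own exporter)
kubectl -n monitoring port-forward svc/kube-state-metrics 8080:8080 &

# Fetch the plain-text metrics exactly as Prometheus would scrape them
curl -s http://localhost:8080/metrics | head -n 20
```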
Metrics in the Kubernetes scenario can be classified from the following perspectives (a sample PromQL query over these metrics follows this list):
- Container basic resource metrics
Collected from the kubelet's built-in cAdvisor, which provides metrics for container memory, CPU, network, file system, and so on. Examples include:
current container memory usage: container\_memory\_usage\_bytes;
bytes received by the container network: container\_network\_receive\_bytes\_total;
bytes sent by the container network: container\_network\_transmit\_bytes\_total, etc.
- Kubernetes node resource metrics
Collected from node\_exporter, which provides node system and hardware metrics. Examples include: total node memory node\_memory\_MemTotal\_bytes, node file system size node\_filesystem\_size\_bytes, node network interface information node\_network\_iface\_id, etc. Based on these metrics, node-level figures such as the node's CPU, memory, and disk usage can be computed.
- Kubernetes resource metrics
Collected from kube-state-metrics, which generates metrics from Kubernetes API objects and provides metrics for cluster resources such as Node, ConfigMap, Deployment, and DaemonSet. Taking Node metrics as an example, they include the node Ready condition kube\_node\_status\_condition, node information kube\_node\_info, and so on.
- Kubernetes component metrics
Kubernetes system component metrics, for example from kube-controller-manager, kube-apiserver, kube-scheduler, kubelet, kube-proxy, coredns, etc.
Kubernetes operations component metrics, for example blackbox\_operator, which implements user-defined probing rules, and gpu\_exporter, which exposes GPU resource metrics.
Kubernetes business application metrics, i.e. the metrics a specific business Pod exposes under its /metrics path for external query and aggregation.
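A sketch of how such metrics are queried and aggregated with PromQL through the Prometheus HTTP API; $PROM_URL is a placeholder for your Prometheus endpoint, and the metric comes from the cAdvisor list above:

```
# Per-pod memory usage, aggregated from cAdvisor's container_memory_usage_bytes
# ($PROM_URL is a placeholder for your Prometheus endpoint)
curl -sG "$PROM_URL/api/v1/query" \
  --data-urlencode 'query=sum(container_memory_usage_bytes{container!=""}) by (namespace, pod)'
```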
In addition to the metrics above, K8s provides monitoring interface standards for exposing metrics through the API, namely Resource Metrics, Custom Metrics, and External Metrics.
| Monitoring interface standard | APIService address | Scenario description |
| --- | --- | --- |
| Resource Metrics | metrics.k8s.io | Used by the built-in consumption links in Kubernetes; usually provided by metrics-server. |
| Custom Metrics | custom.metrics.k8s.io | Mainly implemented by Prometheus (via an adapter); provides resource monitoring and custom monitoring. |
| External Metrics | external.metrics.k8s.io | Mainly implemented by cloud vendors' providers; provides monitoring metrics of cloud resources. |
Resource Metrics corresponds to the interface metrics.k8s.io, mainly implemented by metrics-server, which provides resource monitoring at, most commonly, the node, Pod, and namespace levels. These metrics can be accessed directly with kubectl top or consumed by K8s controllers such as the HPA (Horizontal Pod Autoscaler). The system architecture and access links are as follows:
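For example, assuming metrics-server is installed in the cluster, the Resource Metrics link can be exercised directly:

```
# Node- and pod-level resource usage served by metrics-server
kubectl top nodes
kubectl top pods -n kube-system

# The same data through the Resource Metrics APIService
kubectl get --raw "/apis/metrics.k8s.io/v1beta1/nodes"
```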
Custom Metrics corresponds to the API custom.metrics.k8s.io, mainly implemented by Prometheus. It provides both resource monitoring (which overlaps with the resource monitoring described above) and custom monitoring. Custom monitoring refers to application-defined metrics, for example the number of online users, or slow queries against a backend MySQL database: metrics defined in the application layer, exposed through a standard Prometheus client library, and then scraped by Prometheus.
Once collected, these metrics can be consumed through the custom.metrics.k8s.io interface; that is, if you integrate Prometheus this way, the custom.metrics.k8s.io interface can be used for HPA and other data consumption. The system architecture and access links are as follows:
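As a quick check, assuming a Prometheus adapter is installed and registered for the Custom Metrics APIService, the metrics it currently exposes can be listed:

```
# List the custom metrics exposed through the adapter
kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1"
```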
External Metrics. Kubernetes has become a de facto standard for cloud-native interfaces, and applications running in the cloud often rely on cloud services, for example a message queue in front and an RDS database behind. When consuming data it is sometimes necessary to also consume monitoring metrics of these cloud products, such as the number of messages in the queue, the number of connections on the access-layer SLB, or the number of HTTP 200 requests on the SLB.
How are these consumed? Kubernetes defines a standard for this as well: external.metrics.k8s.io. It is mainly implemented by the providers of the various cloud vendors, through which cloud resource monitoring metrics can be exposed. On Alibaba Cloud, the alibaba-cloud-metrics-adapter provides an implementation of the external.metrics.k8s.io standard.
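Similarly, assuming a provider such as alibaba-cloud-metrics-adapter is installed, the External Metrics APIService can be queried the same way:

```
# List the external metrics exposed by the cloud provider's adapter
kubectl get --raw "/apis/external.metrics.k8s.io/v1beta1"
```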
## Logging
The main categories of logs are summarized below (example commands for inspecting each category follow the list):
* Host kernel logs. Kernel logs help developers diagnose, for example, network stack anomalies, driver anomalies, file system anomalies, and other anomalies that affect node (kernel) stability.
* Runtime logs. The most common runtime is Docker; Docker logs can be used to troubleshoot problems such as Pod deletions hanging.
* K8s component logs. APIServer logs can be used for auditing, Scheduler logs for diagnosing scheduling, etcd logs for inspecting storage state, and Ingress logs for analyzing access-layer traffic.
* Application logs. Application log analysis reveals the state of the business layer and helps diagnose anomalies.
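A minimal set of commands for inspecting these log sources directly (pod, container, and unit names are placeholders; the node-level commands are run on the node itself):

```
# Host kernel log (run on the node)
dmesg | tail

# Runtime log, using Docker as an example (use the containerd unit where applicable)
journalctl -u docker --no-pager | tail

# K8s component log, e.g. a system pod in kube-system
kubectl -n kube-system logs <component-pod-name> | tail

# Application log of a business pod
kubectl logs <pod-name> -c <container-name> | tail
```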
Log collection methods are divided into passive collection and active push. In K8s, passive collection generally uses either the Sidecar or the DaemonSet pattern, while active push uses either DockerEngine push or direct write from the application.
* DockerEngine itself has a LogDriver feature: by configuring a LogDriver, the container's stdout can be written to remote storage by DockerEngine, achieving log collection. This method offers very little customization, flexibility, or resource isolation, and is generally not recommended for production;
* Direct write means integrating a log-collection SDK into the application and sending logs to the backend directly through the SDK. This skips the logic of writing to and collecting from disk and requires no additional agent, so its resource consumption is the lowest; however, because the business is strongly coupled to the logging SDK, overall flexibility is very low, and it is generally used only in scenarios with an extremely large log volume;
* In the DaemonSet pattern, only one log agent runs on each node and collects all logs on that node. DaemonSet has a much smaller resource footprint, but limited scalability and tenant isolation; it suits clusters with a single purpose or not many workloads;
* In the Sidecar pattern, a separate log agent is deployed for each Pod, responsible only for that application's logs. Sidecar consumes more resources but offers strong flexibility and multi-tenant isolation; it is recommended for large K8s clusters or clusters that serve multiple business parties as a PaaS platform.
The collection itself can be implemented through host-path mounts, standard-output collection, or Sidecar collection.
To summarize:
* DockerEngine direct writing is generally not recommended;
* Business write-through is recommended to be used in scenarios with a large amount of logs;
* DaemonSet is generally used in small and medium-sized clusters;
* Sidecar is recommended for use in very large clusters.
## Event
Event monitoring is a monitoring method well suited to Kubernetes. An event contains the time it occurred, the component, the level (Normal or Warning), the type, and detailed information. Through events we can learn the entire life cycle of an application's deployment, scheduling, running, and stopping, and also learn about anomalies occurring in the system.
A design philosophy of K8s is state-machine-based state transition: transitioning from one normal state to another produces a Normal event, while transitioning from a normal state to an abnormal state produces a Warning event. Normally we care more about Warning events. Event monitoring aggregates Normal and Warning events to a data center, where analysis and alerting expose anomalies through channels such as DingTalk, SMS, and email, complementing the other monitoring methods.
Events in Kubernetes are stored in etcd and, by default, kept for only one hour, which makes longer-term analysis impossible. With long-term storage and custom processing of events, richer analysis and alerting become possible (a basic live-query example follows the list below):
* Real-time alarm for abnormal events in the system, such as Failed, Evicted, FailedMount, FailedScheduling, etc.
* Usually, troubleshooting may involve searching for historical data, so it is necessary to query events in a longer time range (days or even months).
* Events support classification statistics, such as the ability to calculate the trend of event occurrence and compare it with the previous time period (yesterday/last week/before release) in order to make judgments and decisions based on statistical indicators.
* Support filtering and screening along various dimensions by different roles.
* Support custom subscriptions to these events for custom monitoring, so that they can be integrated with the company's internal operations platform.
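Before such a pipeline is in place, recent events can still be inspected ad hoc from the API server; a minimal sketch:

```
# Most recent events across all namespaces, sorted by time
kubectl get events -A --sort-by=.lastTimestamp | tail

# Only Warning-level events
kubectl get events -A --field-selector type=Warning
```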
## NPD (Node Problem Detector) framework
The stability of a Kubernetes cluster and of the containers it runs depends strongly on the stability of its nodes. The components in Kubernetes focus only on container-management issues and do not provide deeper detection of hardware, the operating system, the container runtime, or dependent systems (network, storage, etc.). NPD (Node Problem Detector) provides a diagnostic and inspection framework for node stability: on top of its default check policies, checks can be flexibly extended, and node anomalies are converted into Node events and pushed to the APIServer, which handles event management.
NPD supports a variety of anomaly checks (the commands after this list show how the results surface as node conditions and events), for example:
* Basic service problems: the NTP service is not running
* Hardware problems: faulty CPU, memory, disk, or network card
* Kernel problems: kernel hang, corrupted file system
* Container runtime problems: Docker hang, Docker fails to start
* Resource issues: OOM, etc.
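A minimal sketch of inspecting NPD's output, assuming NPD is deployed in the cluster (the node name is a placeholder; which conditions appear depends on the configured checks):

```
# Node conditions reported on the node object (NPD adds conditions such as KernelDeadlock when configured)
kubectl describe node <node-name> | grep -A 15 "Conditions:"

# Node-scoped events, including those produced by NPD
kubectl get events -A --field-selector involvedObject.kind=Node
```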
To summarize, this chapter has reviewed common Kubernetes observability schemes. In production we usually need to combine several of them to form a multi-dimensional, mutually complementary observability system. Once deployed, the system must make it possible to quickly diagnose anomalies and errors from its output, effectively reduce false alarms, and retain historical data for review and analysis; going further, the data can be fed to machine learning and AI frameworks to enable advanced scenarios such as elasticity prediction, anomaly diagnosis and analysis, and intelligent operations (AIOps).
This requires observability best practices as the foundation, covering how the above schemes are designed, deployed as add-ons, configured, and upgraded, and how to quickly and accurately diagnose root causes from their output. Alibaba Cloud Container Service for Kubernetes (ACK) and the related cloud products (application monitoring service ARMS, Log Service SLS, etc.) turn these vendor best practices into product capabilities, empowering users with a complete and comprehensive solution that lets them quickly deploy, configure, upgrade, and master Alibaba Cloud's observability offering, significantly improving the efficiency and stability of enterprise cloud migration and cloud-native adoption while lowering the technical threshold and overall cost.
The following takes ACK Pro, the latest product form of ACK, as an example and, combined with the related cloud products, introduces ACK's observability solutions and best practices.
# ACK Observability Capabilities
## Metrics Observability Scheme
For metrics observability, ACK supports two schemes: open-source Prometheus monitoring and Alibaba Cloud Prometheus monitoring (the latter is a sub-product of ARMS).
Open-source Prometheus monitoring is provided as a Helm chart adapted to the Alibaba Cloud environment and integrated with DingTalk alerting, storage, and other functions. Its deployment entry is ack-prometheus-operator in the console's application catalog, and it supports one-click deployment: users only need to configure the Helm chart parameters in the Alibaba Cloud ACK console to customize the deployment.
Alibaba Cloud Prometheus monitoring is a sub-product of ARMS. Application Real-Time Monitoring Service (ARMS) is an application performance management product comprising three sub-products: frontend monitoring, application monitoring, and Prometheus monitoring.
In Gartner's 2021 APM Magic Quadrant evaluation, Alibaba Cloud Application Real-Time Monitoring Service (ARMS), as the core product of Alibaba Cloud APM, participated jointly with CloudMonitor and Log Service. Gartner's comments on Alibaba Cloud APM were:
* Strongest influence in China: Alibaba Cloud is China's largest cloud service provider, and Alibaba Cloud users can meet their observability needs with its monitoring tools.
* Open-source integration: Alibaba Cloud attaches great importance to integrating open-source standards and products (such as Prometheus) into its platform.
* Cost advantage: compared with using third-party APM products on Alibaba Cloud, Alibaba Cloud's APM products are more cost-effective.
The following figure summarizes the module division and data link of open source Prometheus and Alibaba Cloud Prometheus.
ACK supports observability for CoreDNS, cluster nodes, the cluster overview, and other K8s capabilities; in addition, ACK Pro supports observability for the managed control-plane components Kube API Server, Kube Scheduler, and etcd, and continues to iterate. With the rich monitoring dashboards in Alibaba Cloud Prometheus, combined with alerting, users can quickly discover problems and potential risks in the K8s cluster and take timely action to ensure cluster stability. The dashboards incorporate ACK best-practice experience and help users analyze and locate problems along multiple dimensions. The following introduces how to design observability dashboards based on best practices and gives concrete cases of using them to locate problems, to help understand how to use these observability capabilities.
Let us first look at the observability capabilities of ACK Pro. The entry to the monitoring dashboards is as follows:
APIServer is one of the core components of K8s and the hub through which the other K8s components interact. The ACK Pro APIServer dashboard is designed so that users can select the APIServer Pod to be monitored and analyze single metrics, aggregated metrics, request sources, and so on, and can also drill down into one or more API resources to observe APIServer metrics in linkage with them. The advantage is that the dashboard offers both a global view across all APIServer Pods and drill-down views of specific APIServer Pods and specific API resources; this ability to observe the whole and its parts is very effective for locating problems. Based on ACK best practices, the implementation therefore includes the following five modules:
* Filter boxes for the APIServer Pod, API resource (Pods, Nodes, ConfigMaps, etc.), quantile (0.99, 0.9, 0.5), and statistical time interval; by adjusting these filters, users drive the dashboard panels in linkage
* Key metrics highlighted so that critical system states can be identified
* Panels for single metrics such as APIServer RT and QPS, for observing single-dimension metrics
* Panels for aggregated metrics such as APIServer RT and QPS, for observing multi-dimension metrics
* Client source analysis of APIServer access, for analyzing where requests come from
The implementation of these modules is outlined below.
### Key indicators
Shows the core metrics, including total APIServer QPS, read-request success rate, write-request success rate, Read Inflight Requests, Mutating Inflight Requests, and the number of requests dropped per unit time (Dropped Requests Rate).
These metrics give a quick view of whether the system state is normal. For example, a Dropped Requests Rate other than NA means the APIServer has dropped requests because it cannot keep up with the incoming load, and this needs to be located immediately.
### Cluster-Level Summary
Includes the RT of non-LIST read requests, LIST read requests, and write requests; Inflight Requests for reads and for mutations; and the number of requests dropped per unit time. This part of the dashboard incorporates ACK best-practice experience.
For response-time observability, you can intuitively observe the response time of different resources, different operations, and different scopes at different points in time and over different intervals, and filter by quantile. Two points are particularly worth noting:
1. Whether the curve is continuous
2. The RT value
First, the continuity of the curve. Continuity makes it very intuitive to tell whether requests are continuous or one-off.
The following figure shows that, during the sampling period, the APIServer received PUT leases requests, and the P90 RT was 45 ms in each sampling period.
Because the curve in the figure is continuous, the requests exist in every sampling period, i.e. they are continuous requests.
The following figure shows that, during the sampling period, the APIServer received LIST daemonsets requests, and the P90 RT was 45 ms in the sampling periods that contained samples.
Because the curve appears at only one point in time, the request exists in only one sampling period. This scenario comes from a user executing kubectl get ds --all-namespaces.
Next, the RT reflected by the curve.
The user executes the following command, which creates a 1 MB ConfigMap over a public-network SLB connection: kubectl create configmap cm1MB --from-file=cm1MB=./configmap.file
In the log recorded by the APIServer, the POST configmaps RT of this request is 9.740961791s. This value falls into the (9, 10] interval of apiserver\_request\_duration\_seconds\_bucket, so a sample point is added to the bucket with le=10. In the observability display, the 90th percentile computed from these buckets is about 9.9s and is shown graphically. This is the relationship between the actual RT recorded in the log and the RT shown in the observability display.
The monitoring dashboard can therefore be used together with log observability: the information in the logs is summarized and visualized in a global view, and the best practice is to analyze the monitoring dashboard and the logs together.
```
I0215 23:32:19.226433 1 trace.go:116] Trace[1528486772]: "Create" url:/api/v1/namespaces/default/configmaps,user-agent:kubectl/v1.18.8 (linux/amd64) kubernetes/d2f5a0f,client:39.x.x.10,request_id:a1724f0b-39f1-40da-b36c-e447933ef37e (started: 2021-02-15 23:32:09.485986411 +0800 CST m=+114176.845042584) (total time: 9.740403082s):
Trace[1528486772]: [9.647465583s] [9.647465583s] About to convert to expected version
Trace[1528486772]: [9.660554709s] [13.089126ms] Conversion done
Trace[1528486772]: [9.660561026s] [6.317µs] About to store object in database
Trace[1528486772]: [9.687076754s] [26.515728ms] Object stored in database
Trace[1528486772]: [9.740403082s] [53.326328ms] END
I0215 23:32:19.226568 1 httplog.go:102] requestID=a1724f0b-39f1-40da-b36c-e447933ef37e verb=POST URI=/api/v1/namespaces/default/configmaps latency=9.740961791s resp=201 UserAgent=kubectl/v1.18.8 (linux/amd64) kubernetes/d2f5a0f srcIP="10.x.x.10:59256" ContentType=application/json:
```
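A rough sketch of how such a percentile is derived from the histogram buckets, assuming a Prometheus endpoint reachable at the placeholder $PROM_URL:

```
# P90 APIServer request latency, computed from the request-duration histogram
curl -sG "$PROM_URL/api/v1/query" \
  --data-urlencode 'query=histogram_quantile(0.9, sum(rate(apiserver_request_duration_seconds_bucket[5m])) by (le, verb, resource))'
```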
The following explains how RT relates to the specific content of the request and to the size of the cluster.
In the ConfigMap example above, the same 1 MB ConfigMap is created. Over the public-network link, affected by bandwidth and latency, the request took 9s; in a test over the internal network it took only 145 ms, so the impact of network factors is significant.
RT is therefore related to the requested resource object, the payload size, the network, and so on: the slower the network and the larger the payload, the larger the RT.
For large-scale K8s clusters, a full LIST (for example of pods, nodes, or other resources) can return a very large amount of data, increasing the volume transferred and the RT. Therefore there is no absolute health threshold for the RT metric; it must be evaluated against the specific request operation, cluster size, and network bandwidth, and is acceptable as long as the business is not affected.
For a small-scale K8s cluster, an average RT of 45ms to 100ms is acceptable; for a cluster with a node size of 100, an average RT of 100ms to 200ms is acceptable.
However, if the RT stays at the level of seconds, or even reaches 60s and causes requests to time out, in most cases something abnormal has happened and further investigation and handling are required.
Both inflight-request metrics are exposed through the APIServer's /metrics endpoint and measure the APIServer's ability to handle concurrent requests; they can be viewed with the command below. If concurrent requests reach the thresholds set by the APIServer parameters max-requests-inflight and max-mutating-requests-inflight, APIServer throttling is triggered. This is usually an abnormal situation that needs to be located and handled quickly.
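A minimal way to check these counters, assuming the metric names used by recent Kubernetes versions (adjust if your version differs):

```
# Current read-only and mutating inflight requests on the APIServer
kubectl get --raw /metrics | grep apiserver_current_inflight_requests
```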
### QPS & Latency
This part visualizes the classification of request QPS and RT by verb and API resource for aggregate analysis. It also shows the error-code classification of read and write requests, so you can intuitively see which error codes are returned at different points in time.
### Client Summary
This part visualizes the requesting clients as well as their operations and resources.
QPS By Client counts QPS values along the client dimension.
QPS By Verb + Resource + Client counts the distribution of requests per unit time (1s) along the client, verb, and resource dimensions.
Based on ARMS Prometheus, ACK Pro also provides Etcd and Kube Scheduler dashboards in addition to the APIServer dashboard; ACK and ACK Pro additionally provide dashboards for CoreDNS, the K8s cluster, K8s nodes, Ingress, and more, which are not introduced here. Users can view these dashboards in ARMS. They combine ACK's and ARMS's production best practices and help users observe the system along the shortest path, find root causes, and improve operational efficiency.
## Logging Observability Scheme
SLS, Alibaba Cloud Log Service, is Alibaba Cloud's standard logging solution and supports the collection and storage of various types of logs.
For the managed control-plane components, ACK supports displaying the logs of the managed cluster's control-plane components (kube-apiserver / kube-controller-manager / kube-scheduler), collecting them from the ACK control plane into a Log Project in the user's SLS Log Service.
For user-side logs, users can use Alibaba Cloud's Logtail and log-pilot solutions to collect the required container, system, and node logs into an SLS Logstore, and then conveniently view them in SLS.
## Event + NPD Observability Scheme
The architecture of Kubernetes is designed around a state machine: transitions between states generate corresponding events, with transitions between normal states producing Normal-level events and transitions between normal and abnormal states producing Warning-level events.
ACK provides an out-of-the-box event monitoring solution for container scenarios, offering container event monitoring through NPD (node-problem-detector) maintained by ACK together with kube-eventer.
* NPD (node-problem-detector) is a tool for Kubernetes node diagnosis. It converts node anomalies such as Docker Engine hang, Linux kernel hang, node network unreachability, and abnormal file descriptor usage into Node events; combined with kube-eventer, this closes the loop on node event alerting.
* kube-eventer is an open-source Kubernetes event export tool maintained by ACK. It can ship cluster events to DingTalk, SLS, EventBridge, and other systems, and provides filter conditions at different levels to achieve real-time event collection, targeted alerting, and asynchronous archiving.
NPD detects node problems or failures based on its configuration and third-party plug-ins and generates the corresponding cluster events. The Kubernetes cluster itself also generates various events as its state changes, for example Pod evictions and image pull failures. The Kubernetes event center of Log Service (SLS) aggregates all Kubernetes events in real time and provides storage, query, analysis, visualization, and alerting capabilities.
# ACK Observability Outlook
ACK and the related cloud products have achieved comprehensive observability for Kubernetes clusters, covering metrics, logs, tracing, events, and more. Future directions include:
* Exploring more application scenarios and linking them with observability to help users use K8s better. For example, monitoring the memory/CPU water levels of containers in a Pod over a period of time and using the historical data to analyze whether the Kubernetes container resource requests/limits are reasonable and, if not, recommend suitable values; or detecting that the cluster APIServer RT is too large and automatically analyzing the cause of the abnormal requests along with handling suggestions;
* Linking multiple observability technologies, such as K8s events and metric monitoring, to provide richer, more multi-dimensional observability capabilities.
We believe ACK observability will keep broadening in scope and bring ever more outstanding technical and social value to customers!