The "Real-time Data Warehouse Introductory Training Camp" is jointly built by Alibaba Cloud researcher Wang Feng, Alibaba Cloud senior product expert Liu Yiming, and other front-line technology/product experts on real-time compute Flink and Hologres. The course content has been carefully and systematically polished and directly addresses the pain points students commonly run into. It analyzes the architecture, scenarios, and practical application of real-time data warehouses step by step, and its 7 high-quality lessons will help you grow from a beginner into an expert in 5 days!
This article is compiled from the live broadcast "Hologres Performance Tuning in Practice" by Qingfen.
Video link: https://developer.aliyun.com/learning/course/807/detail/13889
Contents:
1. Best practices for Hologres table creation
2. Hologres performance problem analysis and optimization
1. Best practices for Hologres table creation
(1) The need for table optimization
Why is Hologres table optimization important?
First of all, in terms of overall query and write performance, there is a large gap between a well-built table and a poorly built one.
Secondly, table optimization should be done as early as possible, because changing a table's DDL in Hologres may require re-importing data. This repeated work is why we want table optimization finished as soon as possible.
Finally, a well-built table also helps control the user's data storage cost. A poorly built table may carry unnecessary indexes, which leads to redundant data storage and drives up cost.
Therefore, table optimization is very important, which is why it is included as the first part of this article.
(2) Business modeling is the prerequisite for performance optimization
Having covered the importance of table creation, let's look at business modeling, which comes before table optimization. When considering Hologres, we need to know what kinds of business problems can be solved with Hologres and how to solve them.
Hologres itself is an HSAP (Hybrid Serving and Analytical Processing) product, so it must be used in combination with the business scenario. We need to know whether the scenario is analytical or an online serving one. Analytical scenarios are a better fit for Hologres' column storage, while online serving scenarios are a better fit for row storage. All of this depends on the business scenario.
The second is to play to Hologres' own strengths. Hologres is an online serving and interactive analytics product; it is not suited to ETL or to pulling massive volumes of data. So when moving a business onto Hologres, do not move every workload, otherwise Hologres may end up doing things it is not built for, and the results will be poor.
The third is to make some trade-offs. To reach the expected performance, it may be necessary to pre-compute or pre-process data in advance, reducing the complexity of subsequent computation and speeding up queries.
All of the above are closely related to pre-data modeling and overall business expectations.
(3) Choice of storage method
With the above preparation done, we need to choose the storage method for tables in Hologres.
Hologres itself supports two storage methods, namely row storage and column storage.
The main application scenario of row storage is high-QPS queries on the primary key, and cases where the table is wide and a query reads a large number of columns. Row storage suits this scenario very well.
In addition, Blink dimension-table queries must use row storage, because dimension-table lookups are generally high-QPS, key-based queries, and column storage cannot withstand such pressure.
Column storage suits complex interactive analytical queries, for example a query with joins, aggregations, and other complex computations. It also covers many scenarios such as filtering and aggregation, so column storage is the more general-purpose storage method.
Row storage is mainly suitable for online service scenarios, and column storage is mainly suitable for analytical scenarios. This is the difference between the two storage options.
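As a minimal sketch of how this choice is expressed (table and column names are hypothetical; the property call follows Hologres' documented set_table_property syntax):

```sql
-- Column store for an analytics table:
BEGIN;
CREATE TABLE order_events (
    order_id bigint,
    amount   numeric,
    ds       text
);
CALL set_table_property('order_events', 'orientation', 'column');
COMMIT;

-- For an online-serving table, 'row' would be used instead
-- (row-store tables should carry a primary key for point lookups):
-- CALL set_table_property('order_events', 'orientation', 'row');
```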
(4) Optimize the number of shards
Shard_count: shards provide the effect of physical sub-tables, and multiple shards serve writes and queries in parallel.

Increasing the shard count raises a query's distributed parallelism. But more shards do not necessarily mean faster queries, and they also bring scheduling overhead for concurrent queries.
After talking about the storage method, let's look at the number of shards.
When Hologres stores data, it splits each physical table into shards, distributes them across all physical nodes according to a certain distribution scheme, and each shard can then be queried concurrently. The more shards, the higher the query concurrency. But a higher shard count is not always better, because it brings extra overhead, so the shard count of each table should be designed according to its data volume and query complexity.

When the cluster is scaled out, for example from a 128-core instance to 256 cores, the shard count needs to be adjusted accordingly so that we actually enjoy the performance gain from the expansion.

Because overall concurrency sits on top of the shard count, if the instance is scaled out but the shard count stays unchanged, compute concurrency does not change either; the result is that capacity grows but query performance does not improve.

Under normal circumstances, we recommend setting the shard count close to the instance's core count. For example, on a 64-core instance, set the shard count to 40 or 64, close to the instance size. When the instance size grows, the shard count should grow with it, raising the overall query concurrency.
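As an illustration, here is a sketch assuming the hg_create_table_group procedure available in recent Hologres versions (names are hypothetical; on older versions the shard count is tied to the instance's default table group):

```sql
-- Create a table group with 40 shards, e.g., on a 64-core instance,
-- and place a new table in it:
CALL hg_create_table_group('tg_40', 40);

BEGIN;
CREATE TABLE fact_sales (
    sale_id bigint,
    amount  numeric
);
CALL set_table_property('fact_sales', 'table_group', 'tg_40');
COMMIT;
```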
(5) Optimize the Distribution Key
Having covered shard count, let's look at Hologres' very important Distribution Key, which determines how data is assigned to each shard.

Distribution_key: distributes data evenly across shards so that the query load is balanced, and lets a query locate the corresponding shard directly.

If a primary key is created (used for data updates), the distribution_key defaults to the primary key. If the distribution_key is empty, the default is random distribution. The distribution_key must be a subset of the primary key.
A good Distribution Key design requires that the user's data be divided evenly on the Distribution Key.
For example, a user ID or a product ID generally has one row per key, so the data is very uniform; these make excellent distribution keys. Age or gender, however, is not suitable as a distribution key, because it may shuffle a large amount of data onto a single node and skew the overall data distribution.
The main function of Distribution Key is to reduce the shuffle of data in related queries and aggregation operations.
If the user does not set a Distribution Key, the default is random, and we ensure that the user's data is spread as evenly as possible across all shards.
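A minimal sketch of setting a distribution key (hypothetical table; the property call follows Hologres' set_table_property syntax):

```sql
BEGIN;
CREATE TABLE user_behavior (
    user_id bigint NOT NULL,
    gender  text,
    ds      text
);
-- user_id is high-cardinality and evenly spread: a good distribution key.
-- A column like gender would pile most rows onto very few shards.
CALL set_table_property('user_behavior', 'distribution_key', 'user_id');
COMMIT;
```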
Next we look at the main role of Distribution Key.
In Hologres, different tables are placed into different Table Groups (TGs); tables with the same shard count all live under the same TG.

Suppose two tables are joined. If their Distribution Keys are designed on the join keys, the join of the two tables can be a Local Join, as shown on the left of the figure above: no data needs to be shuffled, each table joins shard by shard, and the result is produced directly once the join completes.

If data volume grows and expansion is needed later, all tables under the TG should be expanded together, keeping the data distribution consistent and preserving the Local Join, rather than losing the Local Join because of the expansion.
Compared with a non-local join, a Local Join differs enormously in performance, usually by about an order of magnitude.

What matters most for a Local Join is the design of the Distribution Key. If the Distribution Key is designed unreasonably, a large amount of data may be shuffled during the join, hurting efficiency.

As shown in the figure above, suppose tables A and B are to be joined but the join key is not the distribution key. Then the data of A and B must be shuffled by the join key; the shuffle carries a very high cost and drags down the efficiency of the whole query.

Therefore, under normal circumstances, for tables that need to be joined, set the join key as the distribution key, so that the tables join locally within the same shard.
(6) Optimize the partition table
Partition table: a partition is also a physical table, with the same shard capabilities, plus the ability to perform Table Pruning based on the partition key.

As shown in the figure above, if the query's filter condition hits only some partitions, the remaining partitions need not be scanned at all, which greatly saves query IO and speeds up the query.

Normally the partition key is static and has few distinct values. The most suitable partition key is a date. For example, some businesses create one partition per day, or partition by hour, and queries then filter data by a time range.
Through the partition table, when the user's query conditions include time filtering, unnecessary partitions can be filtered out, which greatly improves the query performance.
Usually, low-cardinality fields (fewer than ten thousand distinct values), such as date columns, are used as partition fields. If there are too many partitions and a query carries no partition filter, performance degrades.
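A sketch of a date-partitioned table, using the Postgres-style list partitioning that Hologres supports (names are hypothetical):

```sql
CREATE TABLE dws_traffic (
    uid bigint,
    pv  bigint,
    ds  text NOT NULL
)
PARTITION BY LIST (ds);

-- One child partition per day:
CREATE TABLE dws_traffic_20210601 PARTITION OF dws_traffic
    FOR VALUES IN ('20210601');

-- A partition filter prunes every other partition (Table Pruning):
SELECT sum(pv) FROM dws_traffic WHERE ds = '20210601';
```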
(7) Optimize Segment Key
A shard is a logical data unit; physically it is a set of files (the files of all tables distributed to that shard, together with their indexes).
Segment Key is mainly used for column storage.
In column storage, files are stored as segments. When a shard is queried, it contains a pile of files, and we need to determine which of them the query hits and must scan. The Segment Key is used to skip files that do not need to be searched.

Suppose the Segment Key is set to a time column and data is written in time order: 10:00-11:00 is one file, 11:00-12:00 another, and 12:00-13:00 a third. To query the range 12:15-12:35, the Segment Key quickly identifies the 12:00-13:00 file as the only hit, so only that file needs to be opened. Unnecessary file scans are skipped, IO drops, and the whole query runs faster.
The above figure mainly introduces the entire data writing process to help everyone understand what a segment key is like.
As mentioned earlier, data written to Hologres first goes into a memory table; when the memory fills up, it is asynchronously flushed to files in append-only fashion, which gives the highest write efficiency. Because writing does neither global sorting nor single-file updates (for performance), files can overlap.

What does that mean?

Segment_key marks the boundaries of file blocks; at query time it quickly locates the file block holding the data. When Segment_key has multiple columns, matching is aligned to the leftmost column.

Take the earlier example: one file per hour (say 11:00-12:00) is the ideal case. In reality, one file may cover 11:00-12:00, a second 11:30-12:30, and a third 12:30-13:00, so files overlap. A query may then hit several files, and all of them have to be opened.

Therefore, when designing the Segment Key, avoid overlaps and keep values increasing as much as possible. If data is written out of order, for example first 123, then 678, then 456, the same Segment Key ranges get duplicated across different files, and the Segment Key provides no query filtering at all.

So the most critical point of Segment Key design is to keep values as monotonic as possible, with no overlap between files, so that unnecessary data scans can be skipped.

Segment Key design mainly targets Blink real-time write scenarios, with the key set to a time field: the timestamps of real-time writes keep increasing and values barely overlap, which suits the Segment Key well.
In addition, in other scenarios, it is not recommended that users set the Segment Key by themselves.
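For the Blink real-time write scenario described above, a sketch (hypothetical names, set_table_property as documented by Hologres):

```sql
BEGIN;
CREATE TABLE event_log (
    event_time timestamptz NOT NULL,
    payload    text
);
CALL set_table_property('event_log', 'orientation', 'column');
-- Real-time event_time values keep increasing with little overlap,
-- so file ranges stay nearly disjoint and can be skipped at query time.
CALL set_table_property('event_log', 'segment_key', 'event_time');
COMMIT;
```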
(8) Optimize the Clustering Key
Clustering_key describes the sort layout inside a file. Unlike a MySQL clustered index, Hologres uses it to lay out the data itself, not an index, so changing the clustering key requires re-importing the data, and each table can have only one clustering_key. The sorting happens in memory when the SST file is generated.
The figure above gives an example. The left side is completely unordered. Taking Date as the Clustering Key produces the picture in the upper right: once sorted by Date, a Date query, say Date greater than 1/1 and less than 1/3, can quickly locate the matching time range.

Without this index, we would have to scan all the data again. That is how the Clustering Key accelerates queries.

Suppose we also sort by Class, using Class and Date together as the Clustering Key: data is sorted by Class first and then by Date, as shown in the lower right of the figure. The Clustering Key is designed around the leftmost-match principle, and a user's query conditions are matched against it in the same way.

For example, if the query key is Class, the Clustering Key is hit. If the query conditions are Class and Date, it is also hit. But if the condition is Date alone, the Clustering Key cannot be hit. Leftmost matching means that, no matter how many query conditions there are, the leftmost one must be matched.

The Clustering Key mainly accelerates query filtering, both range filters and point filters. Its limitation is that each table can have at most one Clustering Key, i.e., only one sort order.
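A sketch matching the Class-then-Date example above (hypothetical table and column names):

```sql
BEGIN;
CREATE TABLE student (
    class int,
    dt    date,
    score int
);
-- Files are sorted by class first, then dt; queries filtering on class,
-- or on class plus dt, match this leftmost prefix. dt alone does not.
CALL set_table_property('student', 'clustering_key', 'class,dt');
COMMIT;
```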
(9) Optimize dictionary coding
Dictionary encoding compresses string types effectively, especially low-cardinality columns. Encoded values speed up comparison operations, which benefits Group By and Filter. Since Hologres 0.9, it is set automatically.
The picture above is an example of dictionary encoding.
As shown on the left of the picture above, there are Card No and Gender columns. Gender has only two values, male and female, so it suits dictionary encoding very well. Encoding gender as 0 and 1 produces the middle part of the picture.

When querying, the filter condition must be encoded too. For example, to find the Card No of all males, the filter becomes Gender = 0, and the query proceeds as a numeric comparison.

However, dictionary encoding has a drawback: for high-cardinality columns, its overhead is very high.

Because the data is encoded first, a table of 1 million rows with 990,000 distinct values would need 990,000 encoded values; both the encoding and the subsequent queries become very expensive, so such columns are not suited to dictionary encoding.
After Hologres 0.9, we support automatic setting of dictionary encoding, and users do not need to configure dictionary encoding by themselves.
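For completeness, a sketch of setting it manually on a low-cardinality column (hypothetical table):

```sql
BEGIN;
CREATE TABLE card_info (
    card_no text,
    gender  text
);
-- gender has only two distinct values, an ideal dictionary-encoding column;
-- a column with ~990,000 distinct values per 1M rows would not be.
CALL set_table_property('card_info', 'dictionary_encoding_columns', 'gender');
COMMIT;
```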
(10) Optimize bitmap index
Bitmap indexes noticeably optimize equality-filter scenarios; multiple equality conditions are evaluated by vector (bitmap) comparison.
A bitmap index marks, for each value of a column, which rows contain that value.

As shown in the figure above, encoding the gender and class columns of the student table on the left into bitmaps gives the picture on the right, and these bitmaps let us filter very quickly.

For example, to find all male students, we can filter with "1 0" and get the four rows with PK values 1, 2, 5, and 6 on the right that satisfy the condition. To filter out the students of class 3, we build the bitmap "0 0 1", filter it against the class data, and get the rows with PK values 2 and 6.

As you can see, the main application of bitmap indexes is equality (point) queries, for example gender = male and age = 32; bitmaps accelerate such queries very well.

Bitmap indexes also have a caveat: a high-cardinality column produces a sparse array during encoding (many distinct values, each matching few rows), which contributes little to query performance.
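A sketch mirroring the gender/class example above (hypothetical table):

```sql
BEGIN;
CREATE TABLE student_info (
    pk     int,
    gender text,
    class  int
);
CALL set_table_property('student_info', 'orientation', 'column');
-- Equality filters such as gender = 'male' AND class = 3 are then
-- answered by bitmap comparison rather than a full scan.
CALL set_table_property('student_info', 'bitmap_columns', 'gender,class');
COMMIT;
```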
(11) Physical topology
The indexes and storage methods above have all been introduced; let's now look at how to tell them apart and what the abstraction looks like from the user's perspective.

As shown above, after the user submits a SQL statement, it is first routed by the partition key to the corresponding logical table object.
The second step is to find the corresponding Shard through the Distribution Key.
The third step is the Segment Key. Having found the shard, we look for the files on it; since the actual data lives in files, the Segment Key tells us which files need to be opened.

The fourth step looks inside a file, where the data may be ordered; here the Clustering Key is used to find the actual file interval.

The fifth step is the Bitmap. Hologres stores data in batches; within a batch, the Bitmap lets us locate a row quickly, otherwise we would have to scan every row in the range.

From top to bottom in the figure, each step drills deeper into the files; the higher the level, the wider the scope it covers.
2. Hologres performance problem analysis and optimization
(1) Performance white paper
The question users ask most is how fast Hologres is; we have rough performance estimates for it.

With Hologres, real-time writes reach about 5,000 QPS per core. For offline write scenarios, such as MaxCompute writing into Hologres, a single core normally reaches about 50,000 QPS. For OLAP queries, a single core can handle a data volume of about 2 million rows. For point-query scenarios, single-core QPS is about 10,000.

Users can use these figures to estimate how many resources their queries and business scenarios need; for example, sustaining roughly 50,000 real-time write QPS at 5,000 QPS per core calls for about 10 cores.
(2) Real-time writes and point queries
For different application scenarios, our optimization methods are not the same.
For real-time write and point-query scenarios, first check whether the table design is appropriate. For high-QPS writes and lookups, the Distribution Key should be consistent with the query conditions. The Distribution Key is used to locate the shard, so when write QPS is very high, a filter condition that matches the distribution key lets the query route straight to a single shard instead of being fanned out to every shard; the performance gain in this scenario is large, hence the requirement that the Distribution Key and the query conditions be consistent.

Second, the table should preferably be row-store, because row storage is very friendly to the performance of real-time writes and point queries.

Third, if the table is column-store rather than row-store, the PK and Clustering Key should be consistent with the query conditions, so that the clustering index can be used.
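Putting these points together, a sketch of a point-lookup table (hypothetical names):

```sql
BEGIN;
CREATE TABLE user_profile (
    user_id bigint NOT NULL,
    name    text,
    PRIMARY KEY (user_id)
);
CALL set_table_property('user_profile', 'orientation', 'row');
-- The filter column matches the distribution key, so each lookup
-- routes to exactly one shard instead of fanning out to all of them.
CALL set_table_property('user_profile', 'distribution_key', 'user_id');
COMMIT;

SELECT name FROM user_profile WHERE user_id = 42;
```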
Besides table design, the query and write code also needs optimization, because poorly written client code brings very high extra cost. Users often find that QPS will not go up even though Hologres' internal CPU usage is very low; that means the user's own write code is not efficient enough.

For such problems, we first hope users use prepared statements (e.g., JDBC PreparedStatement) as much as possible. Their main benefit is saving the cost of building the execution plan: a submitted SQL statement normally goes through compilation and parsing, then execution-plan generation, and is finally handed to the execution engine. When the same SQL is executed repeatedly, prepared statements skip the repeated parsing and plan generation; the cost drops sharply, and the QPS of queries and writes rises.
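In JDBC this means PreparedStatement; the same idea expressed in server-side SQL (standard Postgres syntax, hypothetical table) looks like:

```sql
-- Parse and plan once:
PREPARE ins_profile (bigint, text) AS
    INSERT INTO user_profile (user_id, name) VALUES ($1, $2);

-- Execute many times without re-parsing or re-planning:
EXECUTE ins_profile(1, 'alice');
EXECUTE ins_profile(2, 'bob');
```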
The second point is to avoid streams of tiny writes. We often meet users who execute insert into ... values statements one row at a time, one after another; this constant stream of small SQL statements brings very high RPC cost, and the overall QPS cannot rise.

Writing in batches makes performance far higher. For example, one insert into ... values statement can carry 1,000 or 10,000 value tuples, and writing those 10,000 values takes only a single round trip. Compared with the row-at-a-time approach, performance can differ by as much as 10,000 times.
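For example (hypothetical table), the batched form below writes three rows in one round trip; the same statement shape extends to a thousand or ten thousand tuples:

```sql
-- One statement, one RPC, many rows:
INSERT INTO user_profile (user_id, name)
VALUES (1, 'alice'),
       (2, 'bob'),
       (3, 'carol');
```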
The third point is to use Holo Client. Some users may not know how to optimize their code, or cannot batch writes well; Holo Client solves these problems for them.

Compared with the traditional JDBC client, Holo Client wraps writes asynchronously and batches them on the user's behalf, and it avoids the extra overhead of the SQL engine, since no SQL parsing is needed, so both write and query performance are much better than with plain JDBC.

Holo Client is also the built-in plugin Blink uses to write to Hologres, so its write performance is better than most tools users build themselves.
One more point: when connecting, use the VPC domain name for writes and queries whenever possible.

Going directly over the public network means high RT between endpoints. With a VPC network, the machines sit on the same internal network, so inter-machine RT is low, which cuts the overall network overhead. In many application scenarios this effect is large and important.
(3) Common problems with offline writes and queries
Next, let's take a look at the common problems of offline writing and querying.
Why put offline writes and queries together? Because offline writing follows the same principle as querying: an offline write is itself carried out by running a query, so the problems encountered are similar.
The first is the lack of statistical information.
Hologres itself is a distributed engine; it runs the user's SQL on that distributed engine, so the optimizer must generate an execution plan, and the optimizer needs statistics to generate a good one. When statistics are missing, the optimizer effectively loses its input and cannot generate a good execution plan. This is the most common problem we encounter online.

The second big problem is a suboptimal table design. When the table design does not fit the query, overall query performance becomes very poor.

In addition, Hologres is a self-developed engine, but to stay compatible with the Postgres open-source ecosystem it has a federated query mechanism, so queries that run on Postgres can also run on Hologres, though this brings some extra performance loss.
(4) View the execution plan
The execution plan has a great impact on query performance. Hologres' query optimizer will select the least time-consuming query plan to execute the user's query according to Cost, but it is inevitable that some query plans will not be optimal. Users can query the execution plan through the Explain command. The execution plan generally contains the following operators:
1. Data scanning operators: Seq Scan, Table Scan, Index Scan, etc.
Mainly used for data access, Seq Scan, Table Scan and Index Scan correspond to sequential scan, table scan and index-based scan respectively.
2. Connection operators: Hash Join and Nested Loop.
Hash Join means that when two tables are associated, one table will be made into a Hash Table, and the other table will be associated with the lookup of the Hash Table.
Nested Loop treats the join as two nested For loops: the outer loop traverses all rows of one table, and for each of those rows the inner loop traverses the other table. That is the difference between Hash Join and Nested Loop.
3. Aggregation operators: Hash Aggregate and Streaming Aggregate.
Hash Aggregate implements aggregation via hash lookups; Streaming Aggregate implements it on sorted input.
4. Data movement operators: Redistribute Motion, Broadcast Motion and Gather Motion, etc.
Hologres is a distributed engine, so data shuffles are inevitable. Redistribute Motion shuffles data, Broadcast Motion broadcasts it, and Gather Motion gathers it to a single node. These data-movement operators mainly solve distributed data-placement problems.
5. Other operators: Hash, Sort, Limit, Append, etc.
(5) Missing statistics
When we look at the performance of Query, we often encounter several problems. One of them is the lack of statistical information. How do we know whether the statistical information is missing?
The above is an example of an explain query. When Hologres has no statistics, the default row count is 1000. Here we can see that the row counts of tmp and tmp1 are both shown as 1000, which indicates that neither table currently has statistics.

Problems that easily arise when statistics are missing:

- Queries run out of memory (OOM)
- Poor write performance
(6) Update statistics
So how do we solve the problem of no statistical information?
Through the Analyze command, users can update statistical information.
Continuing the earlier example: after running analyze tmp; and analyze tmp1;, both tables have statistics, as shown above.
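The commands themselves, plus a hypothetical re-check of the plan (the join condition on a and b follows the later examples in this article):

```sql
ANALYZE tmp;
ANALYZE tmp1;

-- Re-run EXPLAIN: row estimates now reflect the real table sizes
-- rather than the default of 1000.
EXPLAIN SELECT * FROM tmp JOIN tmp1 ON tmp.a = tmp1.b;
```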
From the output we can see that tmp1 has 10 million rows, and those 10 million rows are joined with 1000 rows. Although a Hash Join is used, the join order is wrong: before Analyze the optimizer did not know that tmp was very small and tmp1 very large, so tmp1 ended up on the hash-table side, the hash table became huge, and the whole join performed very poorly.

After Analyze, the join order can be adjusted: the small table tmp goes on the hash side and the big table tmp1 on the probe side, giving a better query plan, and query performance improves substantially compared with before.
(7) Choose the appropriate distribution column
We need to choose an appropriate distribution column; this is the Local Join case mentioned earlier in this article. If the user does not set a distribution column, then to guarantee correctness Hologres must shuffle the data of both tables by the join key when they are joined. If the shuffled data volume is large, query latency becomes very high.
How do we judge whether we have done Local Join?
In the example above, viewing the execution plan with explain, Redistribute Motion is the shuffle operator: the two tables tmp and tmp1 are both shuffled by the join condition before being joined, which shows the join is not a Local Join.
The solution is to set the associated Key to Distribution Key.
Concretely, rebuild the tables and set the join keys a and b as the distribution keys. Checking the execution plan again, there is no Redistribute Motion, and the join is a Local Join.

In this way, the whole join changes from a non-local join to a local join, and performance improves considerably compared with before. A sketch of such a rebuild is shown below.
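A hedged sketch (assuming both tables live in the same table group, which a Local Join requires; column types are hypothetical):

```sql
BEGIN;
CREATE TABLE tmp (a int NOT NULL, v text);
CALL set_table_property('tmp', 'distribution_key', 'a');
COMMIT;

BEGIN;
CREATE TABLE tmp1 (b int NOT NULL, v text);
CALL set_table_property('tmp1', 'distribution_key', 'b');
COMMIT;

-- The plan should now contain no Redistribute Motion: a Local Join.
EXPLAIN SELECT * FROM tmp JOIN tmp1 ON tmp.a = tmp1.b;
```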
(8) Determine whether the Clustering Key is used
Next, let's see how to use the execution plan to check whether the indexes on the tables we built earlier are used, starting with the Clustering Key.
As shown above, if we write a query:
explain select * from tmp where a > 1;
Assuming that the a field is a Clustering Key, we can see a Cluster Filter in the execution plan of the explain query at this time, indicating that we have used the Clustering Key.
(9) Determine whether Bitmap is used
Next we determine whether to use Bitmap.
As shown in the figure above, our query conditions are:
explain select * from tmp where c = 1;
This c is the Bitmap Key. In the execution plan, you can see that there is a filter condition such as Bitmap Filter: (c = 1). At this time, we know that Bitmap is used.
(10) Determine whether the Segment Key is used
Next, determine whether to use Segment Key.
As shown in the figure above, our query conditions are:
explain select * from tmp where b > 1;
This b is the Segment Key. We can see in the execution plan that there is a filter condition Segment Filter: (b > 1); at this point we know the Segment Key is used.
Through the several explain examples above, we can tell whether a query uses the indexes created on the table. If not, either the table was built wrong, or the query pattern does not fit the table design.
(11) Federated query optimization
There are two computing engines inside Hologres. One is the fully self-developed Holo engine. Its performance is excellent, but precisely because it is fully self-developed, it could not support every Postgres feature from the start, so some Postgres features are missing.

The other engine is Postgres: a fully open-source-compatible engine whose performance is slightly worse than the self-developed one, but which is fully compatible with Postgres functionality.
Therefore, in Hologres, a query may use both the Holo computing engine and the Postgres computing engine.
(12) Optimize the federated query
To determine whether a federated query is used, we can use explain.
Hologres self-developed engine does not support not in. For queries:
explain select * from tmp where a not in (select a from tmp1);
As follows:
From the execution plan, we can see an operator of the form External SQL (Postgres):, which indicates that this part of the query was sent to the Postgres engine for execution.

Because the Holo engine does not support not in, that part of the computation runs on Postgres. Whenever you see the External SQL (Postgres): operator, be alert: the query is using functionality the Holo engine does not support. It is best to rewrite the query with operators Holo does support, so the query runs on the Holo engine and performance improves.
For the scenario of the above example, we can change not in to not exist:
explain select * from tmp where not exists (select a from tmp1 where a = tmp.a);
When the user's data is guaranteed to be non-null, not in can be changed directly to not exists. Checking the execution plan again, the whole query now runs on the Holo engine, and the External SQL (Postgres) operator seen before is gone.

The plan generated by this rewrite may differ in query performance by several times compared with the plan previously executed on the Postgres engine.
Through all the examples above, we have walked through the whole process of Hologres performance tuning and the key points to watch. Readers who are interested are welcome to follow and try out Hologres.