Author: Shi Mian
On this year's Double 11, cloud-native middleware completed the convergence of open source, in-house development, and commercialization, and was fully upgraded to a family of middleware cloud products. MSE microservice governance carried the traffic peak of Alibaba Group's core businesses on Double 11 through Dubbo 3.0, and by now 50% of users within the group routinely use MSE microservice governance to manage their HSF and Dubbo 3.0 applications. Today, let's talk about the full-link grayscale capability in the MSE Microservice Governance Professional Edition, and several scenarios in which it is applied in large-scale production.
Background
Under a microservice architecture, some feature development involves changing multiple microservices on the same invocation link at the same time, and a grayscale (canary) release is needed to control the risk and blast radius of the new versions. Typically, each microservice has a grayscale environment or group to receive grayscale traffic. We want traffic that enters an upstream grayscale environment to also flow into the downstream grayscale environments, so that a single request always stays within the grayscale environment; even if some microservices on the call link have no grayscale environment, those applications should still route back to the grayscale environment when calling downstream. With the full-link grayscale capability provided by MSE, all of the above can be achieved without modifying any of your business code.
Features of full-link grayscale in MSE microservice governance
As a flagship feature of the MSE Microservice Governance Professional Edition, full-link grayscale has the following six characteristics:
- Fine-grained traffic can be drained in through custom rules
Besides draining traffic in by a simple percentage, we also support rule-based drainage of Spring Cloud and Dubbo traffic: Spring Cloud traffic can be matched by request cookie, header, or param, or by a random percentage, while Dubbo traffic can be matched by service, method, and parameter.
- Full-link isolated traffic lanes
1) By configuring traffic rules, the target traffic is "dyed" (tagged), and the dyed traffic is routed to the grayscale machines.
2) Grayscale traffic carries its grayscale tag downstream, forming an exclusive grayscale traffic lane; applications without a grayscale environment fall back to the untagged baseline environment by default.
- End-to-end stable baseline environment
Untagged applications belong to the stable baseline version, i.e., the stable production environment. When we release the corresponding grayscale version of the code, we can then configure rules to drain specific online traffic into it and keep the risk of the grayscale code under control.
- One-click dynamic traffic switching
Once traffic rules are defined, they can be stopped, started, added, deleted, modified, and queried on demand with one click, taking effect in real time, which makes grayscale drainage much more convenient.
- Low-cost onboarding based on Java Agent technology, without modifying a single line of business code
MSE microservice governance is built on Java Agent bytecode enhancement and seamlessly supports all Spring Cloud and Dubbo versions released in the past five years. Users can enable it without changing a single line of code or the existing business architecture, and can attach or detach it at any time with no lock-in. Simply activate the MSE Microservice Governance Professional Edition, configure online, and it takes effect in real time.
- Lossless online/offline capability that makes releases smoother
Once an application enables MSE microservice governance, it gains graceful start-up and shutdown, so traffic stays lossless during releases, rollbacks, scale-out, and scale-in under heavy load.
Scenarios for large-scale production practices
This article mainly introduces several commonly used full-link grayscale solutions that MSE microservice governance has summarized and abstracted while supporting major customers, and the scenarios in which they are put into production.
Scenario 1: Automatically tag (dye) traffic passing through tagged machines to achieve full-link grayscale
- After a request enters a tagged node, subsequent calls prefer nodes with the same tag; in other words, traffic passing through a tagged node is "dyed" with that tag (a minimal manifest fragment showing how a node is tagged follows this list).
- If no node with the same tag is found on the tagged call link, the call falls back to an untagged node.
- If a tagged call link passes through an untagged node and a later hop does have a node with the matching tag, tag-based routing is restored.
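In the demo manifests later in this article, a grayscale node is tagged simply through pod annotations; a minimal excerpt of a grayscale workload looks like this (only the relevant annotations are shown):
  template:
    metadata:
      annotations:
        alicloud.service.tag: gray             # marks these pods as "gray" nodes
        msePilotCreateAppName: spring-cloud-a  # application name reported to MSE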
Scenario 2: Achieve full-link grayscale by attaching a specific header to the traffic
The client adds an identifier of the target environment to the request; the access layer forwards the request to the gateway of the corresponding environment based on that identifier; the gateway of that environment then calls the identified project's isolated environment through an isolation plug-in, so the request closes its loop inside the isolated business environment.
Scenario 3: Achieve full-link grayscale through custom routing rules
A specified header is added to grayscale requests and transparently passed along the entire call link. You only need to configure header-based routing rules on the relevant applications, and grayscale requests carrying that header enter the grayscale machines, achieving on-demand full-link traffic grayscale.
Full-link grayscale in practice
How can we quickly obtain the same full-link grayscale capability described above? Below I will take you from 0 to 1 to quickly build it.
We assume the application architecture consists of Ingress-nginx plus back-end microservices built on Spring Cloud. The back-end call link has three hops: shopping cart (A), transaction center (B), and inventory center (C); they do service discovery through a Nacos registry, and end users access the back-end services through a client app or an H5 page.
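The call topology assumed in the rest of this article can be sketched as follows:
Client / H5 page
      |
Ingress-nginx
      |
      v
A (shopping cart) --> B (transaction center) --> C (inventory center)
      (all three register with the Nacos registry for service discovery)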
Prerequisites
Install the Ingress-nginx component
Visit the Container Service console, open the App Catalog, search for ack-ingress-nginx, select the kube-system namespace, and click Create. After the installation completes, you will see a Deployment named ack-ingress-nginx-default-controller in the kube-system namespace, which indicates the installation succeeded.
$ kubectl get deployment -n kube-system
NAME READY UP-TO-DATE AVAILABLE AGE
ack-ingress-nginx-default-controller 2/2 2 2 18h
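The curl examples later in this article address the ingress controller by its external IP (for example 106.14.155.223); you can look yours up with something like the following (the exact Service name may differ in your cluster):
$ kubectl get svc -n kube-system | grep nginx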
Activate MSE Microservice Governance Professional Edition
- Activate the MSE Microservice Governance Professional Edition to use the full-link grayscale capability.
- Visit the Container Service console, open the App Catalog, search for ack-mse-pilot, and click Create.
- In the MSE microservice governance console, open the K8s cluster list, select the corresponding cluster and namespace, and enable microservice governance (a quick check that the pilot is running follows this list).
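After ack-mse-pilot is installed, you can roughly confirm the pilot components are running; the namespace and pod names are assumptions here and may differ depending on the chart version:
$ kubectl get pods --all-namespaces | grep mse-pilot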
Deploy the demo application
Save the following file as ingress-gray.yaml and execute kubectl apply -f ingress-gray.yaml to deploy the applications. Here we deploy three applications, A, B, and C, each with a baseline version and a grayscale version, plus a standalone Nacos server (a quick verification sketch follows the manifest).
# Application A, base version
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: spring-cloud-a
name: spring-cloud-a
spec:
replicas: 2
selector:
matchLabels:
app: spring-cloud-a
template:
metadata:
annotations:
msePilotCreateAppName: spring-cloud-a
labels:
app: spring-cloud-a
spec:
containers:
- env:
- name: LANG
value: C.UTF-8
- name: JAVA_HOME
value: /usr/lib/jvm/java-1.8-openjdk/jre
image: registry.cn-shanghai.aliyuncs.com/yizhan/spring-cloud-a:0.1-SNAPSHOT
imagePullPolicy: Always
name: spring-cloud-a
ports:
- containerPort: 20001
protocol: TCP
resources:
requests:
cpu: 250m
memory: 512Mi
livenessProbe:
tcpSocket:
port: 20001
initialDelaySeconds: 10
periodSeconds: 30
# Application A, gray version
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: spring-cloud-a-new
name: spring-cloud-a-new
spec:
replicas: 2
selector:
matchLabels:
app: spring-cloud-a-new
strategy:
template:
metadata:
annotations:
alicloud.service.tag: gray
msePilotCreateAppName: spring-cloud-a
labels:
app: spring-cloud-a-new
spec:
containers:
- env:
- name: LANG
value: C.UTF-8
- name: JAVA_HOME
value: /usr/lib/jvm/java-1.8-openjdk/jre
- name: profiler.micro.service.tag.trace.enable
value: "true"
image: registry.cn-shanghai.aliyuncs.com/yizhan/spring-cloud-a:0.1-SNAPSHOT
imagePullPolicy: Always
name: spring-cloud-a-new
ports:
- containerPort: 20001
protocol: TCP
resources:
requests:
cpu: 250m
memory: 512Mi
livenessProbe:
tcpSocket:
port: 20001
initialDelaySeconds: 10
periodSeconds: 30
# Application B, base version
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: spring-cloud-b
name: spring-cloud-b
spec:
replicas: 2
selector:
matchLabels:
app: spring-cloud-b
strategy:
template:
metadata:
annotations:
msePilotCreateAppName: spring-cloud-b
labels:
app: spring-cloud-b
spec:
containers:
- env:
- name: LANG
value: C.UTF-8
- name: JAVA_HOME
value: /usr/lib/jvm/java-1.8-openjdk/jre
image: registry.cn-shanghai.aliyuncs.com/yizhan/spring-cloud-b:0.1-SNAPSHOT
imagePullPolicy: Always
name: spring-cloud-b
ports:
- containerPort: 8080
protocol: TCP
resources:
requests:
cpu: 250m
memory: 512Mi
livenessProbe:
tcpSocket:
port: 20002
initialDelaySeconds: 10
periodSeconds: 30
# Application B, gray version
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: spring-cloud-b-new
name: spring-cloud-b-new
spec:
replicas: 2
selector:
matchLabels:
app: spring-cloud-b-new
template:
metadata:
annotations:
alicloud.service.tag: gray
msePilotCreateAppName: spring-cloud-b
labels:
app: spring-cloud-b-new
spec:
containers:
- env:
- name: LANG
value: C.UTF-8
- name: JAVA_HOME
value: /usr/lib/jvm/java-1.8-openjdk/jre
image: registry.cn-shanghai.aliyuncs.com/yizhan/spring-cloud-b:0.1-SNAPSHOT
imagePullPolicy: Always
name: spring-cloud-b-new
ports:
- containerPort: 8080
protocol: TCP
resources:
requests:
cpu: 250m
memory: 512Mi
livenessProbe:
tcpSocket:
port: 20002
initialDelaySeconds: 10
periodSeconds: 30
# Application C, base version
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: spring-cloud-c
name: spring-cloud-c
spec:
replicas: 2
selector:
matchLabels:
app: spring-cloud-c
template:
metadata:
annotations:
msePilotCreateAppName: spring-cloud-c
labels:
app: spring-cloud-c
spec:
containers:
- env:
- name: LANG
value: C.UTF-8
- name: JAVA_HOME
value: /usr/lib/jvm/java-1.8-openjdk/jre
image: registry.cn-shanghai.aliyuncs.com/yizhan/spring-cloud-c:0.1-SNAPSHOT
imagePullPolicy: Always
name: spring-cloud-c
ports:
- containerPort: 8080
protocol: TCP
resources:
requests:
cpu: 250m
memory: 512Mi
livenessProbe:
tcpSocket:
port: 20003
initialDelaySeconds: 10
periodSeconds: 30
# Application C, gray version
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: spring-cloud-c-new
name: spring-cloud-c-new
spec:
replicas: 2
selector:
matchLabels:
app: spring-cloud-c-new
template:
metadata:
annotations:
alicloud.service.tag: gray
msePilotCreateAppName: spring-cloud-c
labels:
app: spring-cloud-c-new
spec:
containers:
- env:
- name: LANG
value: C.UTF-8
- name: JAVA_HOME
value: /usr/lib/jvm/java-1.8-openjdk/jre
image: registry.cn-shanghai.aliyuncs.com/yizhan/spring-cloud-c:0.1-SNAPSHOT
imagePullPolicy: IfNotPresent
name: spring-cloud-c-new
ports:
- containerPort: 8080
protocol: TCP
resources:
requests:
cpu: 250m
memory: 512Mi
livenessProbe:
tcpSocket:
port: 20003
initialDelaySeconds: 10
periodSeconds: 30
# Nacos Server
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: nacos-server
name: nacos-server
spec:
replicas: 1
selector:
matchLabels:
app: nacos-server
template:
metadata:
labels:
app: nacos-server
spec:
containers:
- env:
- name: MODE
value: standalone
image: nacos/nacos-server:latest
imagePullPolicy: Always
name: nacos-server
resources:
requests:
cpu: 250m
memory: 512Mi
dnsPolicy: ClusterFirst
restartPolicy: Always
# Nacos Server Service configuration
---
apiVersion: v1
kind: Service
metadata:
name: nacos-server
spec:
ports:
- port: 8848
protocol: TCP
targetPort: 8848
selector:
app: nacos-server
type: ClusterIP
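After kubectl apply -f ingress-gray.yaml completes, a quick sanity check (a sketch; output abbreviated, each READY count should reach the desired replicas):
$ kubectl get deployments
NAME                 READY
nacos-server         1/1
spring-cloud-a       2/2
spring-cloud-a-new   2/2
spring-cloud-b       2/2
spring-cloud-b-new   2/2
spring-cloud-c       2/2
spring-cloud-c-new   2/2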
Hands-on practice
Scenario 1: Automatically tag (dye) traffic passing through tagged machines to achieve full-link grayscale
Sometimes we can distinguish the online baseline environment from the grayscale environment with different domain names, giving the grayscale environment a separate, configurable domain name. Assume that visiting www.gray.com takes us to the grayscale environment, while visiting www.base.com takes us to the baseline environment.
The call link is Ingress-nginx -> A -> B -> C, where A can be a plain Spring Boot application.
Note: For both the gray and base environments of entry application A, you need to turn on application A's "pass tag through by traffic ratio" switch in the MSE microservice governance console, which enables passing the tag of the current environment downstream. With this switch on, after Ingress-nginx routes a request to A's gray version, subsequent calls automatically carry the header x-mse-tag: gray even if the original request carried no header; the value gray comes from the tag configured on application A. If the original request already contains x-mse-tag: gray, the tag in the original request takes precedence.
For the entry application A, configure two Kubernetes Services: spring-cloud-a-base, pointing at the base version of A, and spring-cloud-a-gray, pointing at the gray version of A.
apiVersion: v1
kind: Service
metadata:
name: spring-cloud-a-base
spec:
ports:
- name: http
port: 20001
protocol: TCP
targetPort: 20001
selector:
app: spring-cloud-a
---
apiVersion: v1
kind: Service
metadata:
name: spring-cloud-a-gray
spec:
ports:
- name: http
port: 20001
protocol: TCP
targetPort: 20001
selector:
app: spring-cloud-a-new
Configure the Ingress rules at the entrance: visiting www.base.com routes to the base version of application A, and visiting www.gray.com routes to the gray version of application A.
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: spring-cloud-a-base
spec:
rules:
- host: www.base.com
http:
paths:
- backend:
serviceName: spring-cloud-a-base
servicePort: 20001
path: /
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: spring-cloud-a-gray
spec:
rules:
- host: www.gray.com
http:
paths:
- backend:
serviceName: spring-cloud-a-gray
servicePort: 20001
path: /
Result verification
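First, check that both Ingress rules were created (a sketch; columns abbreviated, the ADDRESS shown by kubectl is the IP used in the curl commands below):
$ kubectl get ingress
NAME                  HOSTS
spring-cloud-a-base   www.base.com
spring-cloud-a-gray   www.gray.com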
Then, visiting www.base.com routes to the baseline environment:
curl -H"Host:www.base.com" http://106.14.155.223/a
A[172.18.144.155] -> B[172.18.144.120] -> C[172.18.144.79]
And visiting www.gray.com routes to the grayscale environment:
curl -H"Host:www.gray.com" http://106.14.155.223/a
Agray[172.18.144.160] -> Bgray[172.18.144.57] -> Cgray[172.18.144.157]
Furthermore, if the entry application A has no grayscale environment, the request enters A's base environment; if we still want the A -> B hop to enter the grayscale environment, we can add the special header x-mse-tag, whose value is the tag of the environment we want to reach, e.g. gray.
curl -H"Host:www.base.com" -H"x-mse-tag:gray" http://106.14.155.223/a
A[172.18.144.155] -> Bgray[172.18.144.139] -> Cgray[172.18.144.8]
You can see that the first hop entered A's base environment, but the A -> B hop switched back to the grayscale environment.
The advantage of this approach is that configuration is simple: you only need to configure rules at the Ingress. When an application needs a grayscale release, you just deploy it into the grayscale environment, and grayscale traffic naturally lands on the grayscale machines. If verification passes, the grayscale image is then released to the baseline environment; if a single change requires releasing multiple applications, all of them can be added to the grayscale environment.
Best Practices
- Tag all applications in the grayscale environment with the gray tag; applications in the baseline environment stay untagged by default.
- As a routine practice, drain 2% of online traffic into the grayscale environment (a sketch of doing this at the Ingress layer follows this list).
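A minimal sketch of how the 2% drainage could be done at the Ingress layer, reusing the nginx canary annotations that also appear in scenario 2 below (weight-based instead of header-based; treat this as an illustration, not the only option):
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: spring-cloud-a-gray
  annotations:
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-weight: "2"   # route roughly 2% of requests to the gray Service
spec:
  rules:
  - host: www.base.com
    http:
      paths:
      - backend:
          serviceName: spring-cloud-a-gray
          servicePort: 20001
        path: /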
Scenario 2: Achieve full-link grayscale by attaching a specific header to the traffic
Some clients cannot change the domain name they access; they want to keep visiting www.demo.com and be routed to the grayscale environment by passing different headers. For example, as shown in the figure below, adding the header x-mse-tag: gray gives access to the grayscale environment.
In this case the demo's Ingress rules are as follows; note the extra nginx.ingress.kubernetes.io/canary annotations (the canary Ingress must match the same host, www.demo.com).
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: spring-cloud-a-base
spec:
rules:
- host: www.demo.com
http:
paths:
- backend:
serviceName: spring-cloud-a-base
servicePort: 20001
path: /
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: spring-cloud-a-gray
annotations:
nginx.ingress.kubernetes.io/canary: "true"
nginx.ingress.kubernetes.io/canary-by-header: "x-mse-tag"
nginx.ingress.kubernetes.io/canary-by-header-value: "gray"
nginx.ingress.kubernetes.io/canary-weight: "0"
spec:
rules:
- host: www.demo.com
http:
paths:
- backend:
serviceName: spring-cloud-a-gray
servicePort: 20001
path: /
Result verification
At this point, visiting www.demo.com routes to the baseline environment:
curl -H"Host:www.demo.com" http://106.14.155.223/a
A[172.18.144.155] -> B[172.18.144.56] -> C[172.18.144.156]
How do we access the grayscale environment? Just add the header x-mse-tag: gray to the request:
curl -H"Host:www.demo.com" -H"x-mse-tag:gray" http://106.14.155.223/a
Agray[172.18.144.82] -> Bgray[172.18.144.57] -> Cgray[172.18.144.8]
You can see that Ingress routes directly to A's gray environment based on this header.
Go further
You can also use Ingress to implement more complex routing. For example, the client already carries a certain header, and we want to route on that existing header rather than adding a new one. As shown in the figure below, suppose we want requests whose x-user-id equals 100 to enter the grayscale environment.
We only need to add the following rules (note the four canary annotations):
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: spring-cloud-a-base
spec:
rules:
- host: www.demo.com
http:
paths:
- backend:
serviceName: spring-cloud-a-base
servicePort: 20001
path: /
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: spring-cloud-a-base-gray
annotations:
nginx.ingress.kubernetes.io/canary: "true"
nginx.ingress.kubernetes.io/canary-by-header: "x-user-id"
nginx.ingress.kubernetes.io/canary-by-header-value: "100"
nginx.ingress.kubernetes.io/canary-weight: "0"
spec:
rules:
- host: www.demo.com
http:
paths:
- backend:
serviceName: spring-cloud-a-gray
servicePort: 20001
path: /
Visit with the special header; requests that meet the condition enter the grayscale environment:
curl -H"Host:www.demo.com" -H"x-user-id:100" http://106.14.155.223/a
Agray[172.18.144.93] -> Bgray[172.18.144.24] -> Cgray[172.18.144.25]
Requests that do not meet the condition enter the baseline environment:
curl -H"Host:www.demo.com" -H"x-user-id:101" http://106.14.155.223/a
A[172.18.144.91] -> B[172.18.144.22] -> C[172.18.144.95]
Compared with scenario 1, the advantage is that the client's domain name stays unchanged and the environments are distinguished purely by the request itself.
Scenario 3: Achieve full-link grayscale through custom routing rules
Sometimes we don't want automatic tag propagation and automatic routing; instead, we want each application along the upstream and downstream of the microservice call chain to define its own grayscale rules. For example, application B may want only requests matching its custom rule to be routed to B's grayscale version, while application C may want a grayscale rule different from B's. How do we configure this? See the figure below for the scenario:
Note: it is best to clear the configuration from scenarios 1 and 2 first.
Step 1: add an environment variable alicloud.service.header=x-user-id on the entry application A (preferably on all entry applications, both gray and base versions). Here x-user-id is the header that needs to be passed through; this variable tells the agent to recognize the header and automatically propagate it downstream (a manifest sketch follows the note below).
Note that x-mse-tag is not used here; it is a built-in header of the system with special logic.
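A minimal sketch of what step 1 could look like in application A's Deployment, following the manifest format used earlier; only the added env entry is new, everything else stays as in the demo:
# Fragment of spring-cloud-a's container spec (both base and gray Deployments)
        env:
        - name: alicloud.service.header   # header to recognize and pass through downstream
          value: x-user-id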
Step 2: configure tag routing rules for the intermediate application B in the MSE console.
Step 3: configure routing rules at Ingress. For this step, refer to scenario 2 and use the following configuration:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: spring-cloud-a-base
spec:
rules:
- host: www.base.com
http:
paths:
- backend:
serviceName: spring-cloud-a-base
servicePort: 20001
path: /
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
annotations:
nginx.ingress.kubernetes.io/canary: 'true'
nginx.ingress.kubernetes.io/canary-by-header: x-user-id
nginx.ingress.kubernetes.io/canary-by-header-value: '100'
nginx.ingress.kubernetes.io/canary-weight: '0'
name: spring-cloud-a-gray
spec:
rules:
- host: www.base.com
http:
paths:
- backend:
serviceName: spring-cloud-a-gray
servicePort: 20001
path: /
Result verification
Test and verify: with a header that meets the condition, the request enters A's grayscale environment and is routed to B's grayscale environment, while C, which has no rule configured, stays in the base environment:
curl 120.77.215.62/a -H "Host: www.base.com" -H "x-user-id: 100"
Agray[192.168.86.42] -> Bgray[192.168.74.4] -> C[192.168.86.33]
With a header that does not meet the condition, the request enters A's base environment and is routed to B's base environment:
curl 120.77.215.62/a -H "Host: www.base.com" -H "x-user-id: 101"
A[192.168.86.35] -> B[192.168.73.249] -> C[192.168.86.33]
Remove the Ingress canary configuration and access A's base environment (the entry application of the baseline environment also needs the alicloud.service.header environment variable). With a header that meets the condition, the request is routed to B's grayscale environment:
curl 120.77.215.62/a -H "Host: www.base.com" -H "x-user-id: 100"
A[192.168.86.35] -> Bgray[192.168.74.4] -> C[192.168.86.33]
Visit the base environment with a header that does not meet the condition; the request is routed to B's base environment:
curl 120.77.215.62/a -H "Host: www.base.com" -H "x-user-id: 101"
A[192.168.86.35] -> B[192.168.73.249] -> C[192.168.86.33]
Summary
A capability as technically demanding as full-link grayscale can be put into practice in about 20 minutes. Full-link grayscale is actually not that hard!
Based on the full-link grayscale capability of MSE service governance, we can quickly implement enterprise-grade full-link grayscale. The three scenarios above are the standard scenarios we have implemented at scale in production practice. Of course, you can also customize and adapt them to your own business on top of MSE service governance; even with multiple traffic sources, precise drainage can be achieved through business-specific customization.
At the same time, the observability capability of the MSE Service Governance Professional Edition makes the effect of the grayscale measurable.
Second-level monitoring of grayscale traffic
Standardize the release process
In daily releases, we often hold the following mistaken ideas:
- The change is small and the launch is urgent, so we can skip testing and release directly.
- The release does not need a grayscale process; just release quickly and go live.
- Grayscale release is useless, just a formality; after publishing to grayscale we can go straight to full production without waiting and observing.
- Grayscale release is important, but building a grayscale environment is difficult, time-consuming, and labor-intensive, so its priority is low.
These ideas can lead us into a faulty release. Many failures are directly or indirectly caused by releases, so improving release quality and reducing mistakes is a key link in effectively reducing online failures. To release safely, we need to standardize the release process.
Closing thoughts
With the popularity of microservices, more and more companies adopt microservice frameworks. Thanks to high cohesion and low coupling, microservices provide better fault isolation and are better suited to rapid business iteration, bringing teams a lot of convenience. However, as the business grows, microservice splitting becomes more and more complex, and microservice governance becomes a troublesome problem.
Take full-link grayscale alone: we need to verify the functional correctness of a new application version before it goes live while still keeping release efficiency. If our application estate is small, we can guarantee release correctness simply by maintaining several complete environments. But when the business grows large and complex, say the system consists of 100 microservices, then even if each test/grayscale environment uses only 1 to 2 pods per service, operating and maintaining that many environments brings huge cost and efficiency challenges.
Is there a simpler, more efficient way to solve the problem of microservice governance?
The MSE microservice engine is launching the Service Governance Professional Edition, providing an out-of-the-box, complete, professional microservice governance solution to help companies better realize their microservice governance capabilities. If your system can also quickly gain the complete full-link grayscale capability described in this article, and go further with microservice governance practices on top of it, you can not only save considerable manpower and cost, but also make your company's exploration in the microservices field more confident.