In the first part of this series, we introduced the basic ideas behind AKS blue-green deployment and showed how to deploy the related resources and integrate the Application Gateway with AKS. If you missed that part, you can review it first.
In this article, building on the previous one, we will cover how to deploy the application, how to deploy a new AKS cluster, and how to switch between AKS versions.
Without further ado, let's start!
Application deployment
Let's deploy a demo application to verify that the application gateway has been successfully integrated with the AKS cluster. Please copy and save the following YAML source code as deployment_aspnet.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: aspnetapp
spec:
  replicas: 3
  selector:
    matchLabels:
      app: aspnetapp
  template:
    metadata:
      labels:
        app: aspnetapp
    spec:
      containers:
      - name: aspnetapp
        # Sample ASP.NET application from Microsoft that shows its private IP.
        image: mcr.microsoft.com/dotnet/core/samples:aspnetapp
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: aspnetapp
spec:
  selector:
    app: aspnetapp
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: aspnetapp
  annotations:
    kubernetes.io/ingress.class: azure/application-gateway
spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: aspnetapp
          servicePort: 80
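Note that the manifest above uses the extensions/v1beta1 Ingress API, which is deprecated and was removed in Kubernetes 1.22. On newer clusters, the same Ingress can be sketched against networking.k8s.io/v1 instead (an assumption for newer versions, not part of the original walkthrough; field names differ slightly):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: aspnetapp
  annotations:
    kubernetes.io/ingress.class: azure/application-gateway
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: aspnetapp
            port:
              number: 80
```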
Run the following command to deploy the application:
kubectl apply -f deployment_aspnet.yaml
List the Pods and confirm that the application is up and running:
kubectl get po -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
aad-pod-identity-mic-787c5958fd-kmx9b 1/1 Running 0 177m 10.240.0.33 aks-nodepool1-94448771-vmss000000 <none> <none>
aad-pod-identity-mic-787c5958fd-nkpv4 1/1 Running 0 177m 10.240.0.63 aks-nodepool1-94448771-vmss000001 <none> <none>
aad-pod-identity-nmi-mhp86 1/1 Running 0 177m 10.240.0.4 aks-nodepool1-94448771-vmss000000 <none> <none>
aad-pod-identity-nmi-sjpvw 1/1 Running 0 177m 10.240.0.35 aks-nodepool1-94448771-vmss000001 <none> <none>
aad-pod-identity-nmi-xnfxh 1/1 Running 0 177m 10.240.0.66 aks-nodepool1-94448771-vmss000002 <none> <none>
agic-ingress-azure-84967fc5b6-cqcn4 1/1 Running 0 111m 10.240.0.79 aks-nodepool1-94448771-vmss000002 <none> <none>
aspnetapp-68784d6544-j99qg 1/1 Running 0 96s 10.240.0.75 aks-nodepool1-94448771-vmss000002 <none> <none>
aspnetapp-68784d6544-v9449 1/1 Running 0 96s 10.240.0.13 aks-nodepool1-94448771-vmss000000 <none> <none>
aspnetapp-68784d6544-ztbd9 1/1 Running 0 96s 10.240.0.50 aks-nodepool1-94448771-vmss000001 <none> <none>
You can see that the application Pods are all up and running. Note their private IPs: 10.240.0.13, 10.240.0.50, and 10.240.0.75.
The backend of the Application Gateway shows the same IPs:
az network application-gateway show-backend-health \
-g $RESOURCE_GROUP \
-n $APP_GATEWAY \
--query "backendAddressPools[].backendHttpSettingsCollection[].servers[][address,health]" \
-o tsv
10.240.0.13 Healthy
10.240.0.50 Healthy
10.240.0.75 Healthy
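Before cutting traffic over, it can be convenient to gate on this health output in a script. The helper below is a hypothetical sketch, not part of the article's required steps: it reads the tab-separated "address, health" lines produced by the az query above from stdin and fails if any backend is not Healthy (demonstrated here against the sample output rather than a live gateway):

```shell
# Fail unless every Application Gateway backend reports Healthy.
# Input: tab-separated "address<TAB>health" lines, as produced by
#   az network application-gateway show-backend-health ... -o tsv
check_backends() {
  unhealthy=$(awk -F'\t' '$2 != "Healthy" { print $1 }')
  if [ -n "$unhealthy" ]; then
    echo "Unhealthy backends: $unhealthy"
    return 1
  fi
  echo "All backends healthy"
}

# Demonstration with the sample output above:
printf '10.240.0.13\tHealthy\n10.240.0.50\tHealthy\n10.240.0.75\tHealthy\n' \
  | check_backends   # prints "All backends healthy"
```

In a real pipeline, the az command's output would be piped into `check_backends` directly, and a non-zero exit would abort the switchover.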
Run the following command to check the IP address of the front end:
az network public-ip show -g $RESOURCE_GROUP -n $APPGW_IP --query ipAddress -o tsv
Then visit this IP in a browser and the demo page is displayed.
Refresh a few more times and you will see the Host name and Server IP address alternate among three host names and IPs, which are exactly the names and private IPs of the three Pods deployed above. This shows that the integration between the Application Gateway and the Pods in AKS is working.
Deploy a new AKS cluster
Create the new-version AKS cluster
In the second AKS subnet, create a new AKS cluster. The previous cluster uses the current default version, 1.19.11; the new cluster will use 1.20.7, with all other parameters unchanged.
Declare the variable of the new AKS cluster name:
AKS_NEW=new
Get the ID of the subnet where the new cluster is located:
NEW_AKS_SUBNET_ID=$(az network vnet subnet show -g $RESOURCE_GROUP --vnet-name $VNET_NAME --name $NEW_AKS_SUBNET --query id -o tsv)
Create a new AKS cluster:
az aks create -n $AKS_NEW \
-g $RESOURCE_GROUP \
-l $AZ_REGION \
--generate-ssh-keys \
--network-plugin azure \
--enable-managed-identity \
--vnet-subnet-id $NEW_AKS_SUBNET_ID \
--kubernetes-version 1.20.7
As before, use Helm to install application-gateway-kubernetes-ingress on the new AKS cluster.
Connect to the AKS cluster:
az aks get-credentials --resource-group $RESOURCE_GROUP --name $AKS_NEW
Install AAD Pod Identity:
kubectl create serviceaccount --namespace kube-system tiller-sa
kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller-sa
helm repo add aad-pod-identity https://raw.githubusercontent.com/Azure/aad-pod-identity/master/charts
helm install aad-pod-identity aad-pod-identity/aad-pod-identity
Add the Helm repository for the Application Gateway Ingress Controller:
helm repo add application-gateway-kubernetes-ingress https://appgwingress.blob.core.windows.net/ingress-azure-helm-package/
helm repo update
Deploy the application on the new AKS cluster
First install the same application on the new AKS cluster:
kubectl apply -f deployment_aspnet.yaml
After the application is deployed, list the Pods:
kubectl get po -o=custom-columns=NAME:.metadata.name,\
podIP:.status.podIP,NODE:.spec.nodeName,\
READY-true:.status.containerStatuses[*].ready
NAME podIP NODE READY-true
aad-pod-identity-mic-644c7c9f6-cqkxr 10.241.0.25 aks-nodepool1-20247409-vmss000000 true
aad-pod-identity-mic-644c7c9f6-xpwlt 10.241.0.67 aks-nodepool1-20247409-vmss000002 true
aad-pod-identity-nmi-k2c8s 10.241.0.35 aks-nodepool1-20247409-vmss000001 true
aad-pod-identity-nmi-vqqzq 10.241.0.66 aks-nodepool1-20247409-vmss000002 true
aad-pod-identity-nmi-xvcxm 10.241.0.4 aks-nodepool1-20247409-vmss000000 true
aspnetapp-5844845bdc-82lcw 10.241.0.33 aks-nodepool1-20247409-vmss000000 true
aspnetapp-5844845bdc-hskvg 10.241.0.43 aks-nodepool1-20247409-vmss000001 true
aspnetapp-5844845bdc-qzt7f 10.241.0.84 aks-nodepool1-20247409-vmss000002 true
In an actual production workflow, do not associate the application with the existing Application Gateway right after deploying it. Instead, log in remotely and test the application through its private IPs first:
kubectl run -it --rm aks-ssh --image=mcr.microsoft.com/aks/fundamental/base-ubuntu:v0.0.11
Once the container starts, you are dropped into a shell inside it. From there, visit the three private IPs noted above (10.241.0.33, 10.241.0.43, and 10.241.0.84). For example:
root@aks-ssh:/# curl http://10.241.0.33
root@aks-ssh:/# curl http://10.241.0.43
root@aks-ssh:/# curl http://10.241.0.84
If each request returns the page content normally, the new environment has passed the test, and only then is the new AKS cluster associated with the existing Application Gateway.
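These per-IP checks can be wrapped in a small loop. The function below is a sketch meant to be run from inside the aks-ssh container; for each URL it prints the HTTP status code, with 000 indicating an unreachable backend (the IPs shown are the Pod IPs from this walkthrough; substitute your own):

```shell
# Print "URL -> HTTP <code>" for each URL given; 000 means unreachable.
smoke_test() {
  for url in "$@"; do
    # -m 2: give up after 2 seconds; -w prints only the status code.
    code=$(curl -s -o /dev/null -m 2 -w '%{http_code}' "$url") || code=000
    echo "$url -> HTTP $code"
  done
}

# From inside the aks-ssh container:
smoke_test http://10.241.0.33 http://10.241.0.43 http://10.241.0.84
```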
Switch between different versions of the AKS cluster
Switch the Application Gateway to the new version of AKS
Execute the following command to install AGIC:
helm install agic application-gateway-kubernetes-ingress/ingress-azure -f helm_agic.yaml
After waiting a few seconds, run:
kubectl get po -o=custom-columns=NAME:.metadata.name,podIP:.status.podIP,NODE:.spec.nodeName,READY-true:.status.containerStatuses[*].ready
NAME podIP NODE READY-true
aad-pod-identity-mic-644c7c9f6-cqkxr 10.241.0.25 aks-nodepool1-20247409-vmss000000 true
aad-pod-identity-mic-644c7c9f6-xpwlt 10.241.0.67 aks-nodepool1-20247409-vmss000002 true
aad-pod-identity-nmi-k2c8s 10.241.0.35 aks-nodepool1-20247409-vmss000001 true
aad-pod-identity-nmi-vqqzq 10.241.0.66 aks-nodepool1-20247409-vmss000002 true
aad-pod-identity-nmi-xvcxm 10.241.0.4 aks-nodepool1-20247409-vmss000000 true
agic-ingress-azure-84967fc5b6-6x4dd 10.241.0.79 aks-nodepool1-20247409-vmss000002 true
aspnetapp-5844845bdc-82lcw 10.241.0.33 aks-nodepool1-20247409-vmss000000 true
aspnetapp-5844845bdc-hskvg 10.241.0.43 aks-nodepool1-20247409-vmss000001 true
aspnetapp-5844845bdc-qzt7f 10.241.0.84 aks-nodepool1-20247409-vmss000002 true
You can see that the agic-ingress-azure-* Pod is running normally.
First, verify from the command line that the backend of the Application Gateway has been updated to the new Pods:
az network application-gateway show-backend-health \
-g $RESOURCE_GROUP \
-n $APP_GATEWAY \
--query "backendAddressPools[].backendHttpSettingsCollection[].servers[][address,health]" \
-o tsv
10.241.0.33 Healthy
10.241.0.43 Healthy
10.241.0.84 Healthy
Go back to the browser and refresh the Application Gateway's public IP; the Host name and IP in the page have switched to the new backend:
Version rollback
If the new version of the AKS cluster fails, we need to switch back to the old one. To do so, simply reconnect to the old AKS cluster and reinstall AGIC there; this re-associates the Application Gateway with the application Pods in the old cluster.
To do this, first run:
az aks get-credentials --resource-group $RESOURCE_GROUP --name $AKS_OLD
Then execute:
helm uninstall agic
helm install agic application-gateway-kubernetes-ingress/ingress-azure -f helm_agic.yaml
Soon you can see that AGIC’s Pod is up and running:
kubectl get po -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
aad-pod-identity-mic-787c5958fd-kmx9b 1/1 Running 0 2d1h 10.240.0.33 aks-nodepool1-94448771-vmss000000 <none> <none>
aad-pod-identity-mic-787c5958fd-nkpv4 1/1 Running 1 2d1h 10.240.0.63 aks-nodepool1-94448771-vmss000001 <none> <none>
aad-pod-identity-nmi-mhp86 1/1 Running 0 2d1h 10.240.0.4 aks-nodepool1-94448771-vmss000000 <none> <none>
aad-pod-identity-nmi-sjpvw 1/1 Running 0 2d1h 10.240.0.35 aks-nodepool1-94448771-vmss000001 <none> <none>
aad-pod-identity-nmi-xnfxh 1/1 Running 0 2d1h 10.240.0.66 aks-nodepool1-94448771-vmss000002 <none> <none>
agic-ingress-azure-84967fc5b6-nwbh4 1/1 Running 0 8s 10.240.0.70 aks-nodepool1-94448771-vmss000002 <none> <none>
aspnetapp-68784d6544-j99qg 1/1 Running 0 2d 10.240.0.75 aks-nodepool1-94448771-vmss000002 <none> <none>
aspnetapp-68784d6544-v9449 1/1 Running 0 2d 10.240.0.13 aks-nodepool1-94448771-vmss000000 <none> <none>
aspnetapp-68784d6544-ztbd9 1/1 Running 0 2d 10.240.0.50 aks-nodepool1-94448771-vmss000001 <none> <none>
Check the backend of the Application Gateway:
az network application-gateway show-backend-health \
-g $RESOURCE_GROUP \
-n $APP_GATEWAY \
--query "backendAddressPools[].backendHttpSettingsCollection[].servers[][address,health]" \
-o tsv
10.240.0.13 Healthy
10.240.0.50 Healthy
10.240.0.75 Healthy
As you can see, the backend of the same application gateway has been restored to the old AKS cluster IP.
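Since switching in either direction is always the same two steps (repoint kubectl at the target cluster, then move the agic release there), the pattern can be captured in a small helper. This wrapper is a hypothetical convenience, not part of the article's required steps; it assumes the RESOURCE_GROUP variable and helm_agic.yaml file from earlier in this series:

```shell
# Move AGIC, and with it Application Gateway traffic, to the given cluster.
switch_agic() {
  target=$1
  # Point kubectl/helm at the target cluster.
  az aks get-credentials --resource-group "$RESOURCE_GROUP" --name "$target" --overwrite-existing
  # Remove any previous agic release on this cluster, then install it fresh.
  helm uninstall agic 2>/dev/null || true
  helm install agic application-gateway-kubernetes-ingress/ingress-azure -f helm_agic.yaml
}

# Usage: switch_agic "$AKS_NEW"    # cut over
#        switch_agic "$AKS_OLD"    # roll back
```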
Application availability test
We use continuous HTTP requests to verify that the service is not interrupted during the switch.
Open another command line window and execute:
while(true); \
do curl -s http://139.217.117.86/ |ts '[%Y-%m-%d %H:%M:%S]' | grep 10.24; \
sleep 0.1; done
[2021-08-03 16:35:09] 10.240.0.13
[2021-08-03 16:35:10] 10.240.0.50
[2021-08-03 16:35:11] 10.240.0.13
[2021-08-03 16:35:12] 10.240.0.75
[2021-08-03 16:35:12] 10.240.0.50
[2021-08-03 16:35:13] 10.240.0.13
[2021-08-03 16:35:14] 10.240.0.75
You can see the private IPs of the Pods in the old AKS cluster being returned in turn.
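The ts command used in the loop above comes from the moreutils package. If it is not available, a rough stand-in can be written in plain shell (a sketch, not a replacement for the real tool):

```shell
# Prefix each stdin line with a timestamp, similar to: ts '[%Y-%m-%d %H:%M:%S]'
stamp() {
  while IFS= read -r line; do
    printf '[%s] %s\n' "$(date '+%Y-%m-%d %H:%M:%S')" "$line"
  done
}

# Usage in the availability loop:
#   curl -s http://139.217.117.86/ | grep 10.24 | stamp
echo "10.240.0.13" | stamp
```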
Go back to the previous AKS operation window, switch to the new AKS cluster, and delete and reinstall AGIC:
az aks get-credentials --resource-group $RESOURCE_GROUP --name $AKS_NEW
Then execute:
helm uninstall agic
Observing the second window, you will see that the old AKS cluster's IPs are still being returned. At this point we have only removed AGIC from the new cluster; the Application Gateway and the old AKS cluster are both still running normally.
Then execute on the new AKS cluster:
helm install agic application-gateway-kubernetes-ingress/ingress-azure -f helm_agic.yaml
Observing the second window, you will see that from a certain line onward the IPs switch directly to the new AKS cluster's addresses, with no interruption in between:
[2021-08-03 16:42:08] 10.240.0.13
[2021-08-03 16:42:09] 10.240.0.50
[2021-08-03 16:42:09] 10.240.0.75
[2021-08-03 16:42:10] 10.240.0.13
[2021-08-03 16:42:11] 10.240.0.50
[2021-08-03 16:42:11] 10.240.0.75
[2021-08-03 16:42:12] 10.241.0.33
[2021-08-03 16:42:13] 10.241.0.33
[2021-08-03 16:42:13] 10.241.0.43
[2021-08-03 16:42:15] 10.241.0.43
[2021-08-03 16:42:15] 10.241.0.84
[2021-08-03 16:42:16] 10.241.0.84
This verifies that the Application Gateway's external service keeps operating normally throughout the switchover. With this approach, the old and new AKS clusters are retained at the same time and can be switched in real time.
Summary
The above walkthrough used a common web application to demonstrate that a newly built AKS cluster can take over traffic smoothly through blue-green deployment.
Beyond web applications, other kinds of applications and scenarios can follow the same approach: switch AGIC between the AKS clusters at the point where they integrate with the upstream gateway, achieving real-time switchover and rollback.