In the old days, canaries served as living gas detectors in coal mines. A cage with a canary was lowered into the shaft on a rope and pulled back up after a while: if the bird was alive, the mine was safe to work; if it was dead, it was not. Later, miners simply kept a canary nearby while working: if it stopped singing, it was time to leave the mine immediately. This practice has long been abandoned as inhumane.
A canary deployment is a release strategy in which two versions of an application run side by side: the new version starts small and receives only a fraction of the traffic. As the new deployment is validated, traffic is gradually shifted to it until all requests go to the new version and the old one is removed.
It is commonly assumed that managing traffic for such deployments requires a Service Mesh. However, for inbound traffic it is enough to set a couple of annotations on the nginx ingress controller:
nginx.ingress.kubernetes.io/canary: "true"
nginx.ingress.kubernetes.io/canary-weight: <num>
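For example, a second Ingress marked as the canary might look like this (a sketch; the host, Ingress name, and weight are assumptions, while the Service name matches the canary Service created below):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: pregap-canary                                  # hypothetical name
  namespace: pregap
  annotations:
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-weight: "20"    # send 20% of traffic here
spec:
  ingressClassName: nginx
  rules:
    - host: pregap.example.com                         # hypothetical host
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: rollouts-pregap-canary
                port:
                  number: 8080
```

nginx splits traffic between this Ingress and the regular ("stable") Ingress for the same host according to canary-weight.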
The disadvantage of this approach is that the weights have to be managed manually. For automation, we can use Argo Rollouts (https://argoproj.github.io/argo-rollouts/).
Run Argo Rollouts
Add the helm repo: https://argoproj.github.io/argo-helm
Install the argo-rollouts chart.
Helm values:
installCRDs: true
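The steps above can be run against a cluster roughly like this (a sketch; the release name and namespace are assumptions):

```shell
# Register the Argo Helm repository
helm repo add argo https://argoproj.github.io/argo-helm
helm repo update

# Install the controller with its CRDs (hypothetical release name and namespace)
helm install argo-rollouts argo/argo-rollouts \
  --namespace argo-rollouts --create-namespace \
  --set installCRDs=true
```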
Modify Deployment and run Rollouts CRD
Scale down the Deployment to zero replicas:
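A minimal sketch, assuming the Deployment is named test2-pregap (the app label used by the Services below) in the pregap namespace:

```shell
# Stop the old Deployment's pods; the Rollout will take over pod management
kubectl -n pregap scale deployment test2-pregap --replicas=0
```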
Create the Services:
apiVersion: v1
kind: Service
metadata:
  annotations:
    argo-rollouts.argoproj.io/managed-by-rollouts: rollout-pregap
  name: rollouts-pregap-canary
  namespace: pregap
spec:
  clusterIP: 10.43.139.197
  ports:
    - name: http
      port: 8080
      protocol: TCP
      targetPort: 8080
  selector:
    app: test2-pregap
  sessionAffinity: None
  type: ClusterIP
---
apiVersion: v1
kind: Service
metadata:
  annotations:
    argo-rollouts.argoproj.io/managed-by-rollouts: rollout-pregap
spec:
  clusterIP: 10.43.61.221
  ports:
    - name: http
      port: 8080
      protocol: TCP
      targetPort: 8080
  selector:
    app: test2-pregap
  sessionAffinity: None
  type: ClusterIP
Run Rollouts CRD
Since we don't want to modify the Deployment itself, reference it from the Rollout manifest via workloadRef.kind: Deployment and workloadRef.name.
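A sketch of such a Rollout, assuming the Deployment is named test2-pregap, the stable Service is rollouts-pregap-stable, and the stable Ingress is pregap (all three names are assumptions; the canary Service matches the one created above):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
  name: rollout-pregap
  namespace: pregap
spec:
  replicas: 2
  selector:
    matchLabels:
      app: test2-pregap
  workloadRef:                    # reference the existing Deployment instead of an inline pod template
    apiVersion: apps/v1
    kind: Deployment
    name: test2-pregap            # hypothetical Deployment name
  strategy:
    canary:
      canaryService: rollouts-pregap-canary
      stableService: rollouts-pregap-stable   # hypothetical stable Service name
      trafficRouting:
        nginx:
          stableIngress: pregap   # hypothetical stable Ingress name
      steps:
        - setWeight: 20           # shift 20% of traffic to the canary
        - pause: {}               # wait for manual promotion
        - setWeight: 50
        - pause: {duration: 10m}
```

The controller then manipulates the nginx canary-weight annotation itself, walking through the steps instead of the manual approach shown earlier.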
Applying the manifest will create an additional canary ingress:
Argo Rollouts Dashboard
Additional steps in CD-pipeline
Add promote steps in .drone.yml:
- name: promote-release-dr
  image: plugins/docker
  settings:
    repo: 172.16.77.115:5000/pregap
    registry: 172.16.77.115:5000
    insecure: true
    dockerfile: Dockerfile.multistage
    tags:
      - latest
      - ${DRONE_TAG##v}
  when:
    event:
      - promote
    target:
      - production

- name: promote-release-prod
  image: plugins/webhook
  settings:
    username: admin
    password: admin
    urls: http://172.16.77.118:9300/v1/webhooks/native
    debug: true
    content_type: application/json
    template: |
      { "name": "172.16.77.115:5000/pregap",
        "tag": "${DRONE_TAG##v}" }
  when:
    event:
      - promote
    target:
      - production
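With event: promote, these steps run only when a build is promoted, for example via the Drone CLI (a sketch; the repository and build number are placeholders):

```shell
# Promote an existing build to the "production" target,
# which triggers the two steps above
drone build promote <org/repo> <build-number> production
```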
Add a Keel approval:
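Keel's approval mechanism is enabled via annotations on the workload; a minimal sketch (the policy and approval count are assumptions):

```yaml
metadata:
  annotations:
    keel.sh/policy: major        # which updates Keel may apply (assumption)
    keel.sh/trigger: poll
    keel.sh/approvals: "1"       # require one manual approval before updating
```

With approvals set, Keel holds the update until it is approved (e.g. from its UI or bot integrations) instead of rolling it out immediately.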
Conclusion
Canary or blue/green deployment is not difficult at all: it increases the reliability of the production environment and reduces the blast radius of any design errors. In the future, I will add RAM to the server, possibly enable Prometheus monitoring and Istio, and try out the analysis and experiment phases of Argo Rollouts.