Author: Adamzhoul, OpenYurt member

Open OpenYurt's README.md and, right after a brief introduction, you reach Getting Started:

 yurtctl convert --provider [minikube|kubeadm|kind] // To convert an existing Kubernetes cluster to an OpenYurt cluster
 yurtctl revert // To uninstall and revert back to the original cluster settings

You can experience OpenYurt with a simple one-line command, which feels very convenient.

Hold on! Why convert/revert instead of install/uninstall?

What does this command do to the cluster?

It seems wise to figure out exactly what it does before running it.

What exactly does yurtctl convert do?

Core process

Tracing through the OpenYurt source code (see the related links at the end of this article), the core process of convert works out to:

[Figure 1: yurtctl convert core process]

It can be seen that steps 1 and 2 are nothing special, just regular service deployment.

Step 3, which operates on the original Kubernetes system components, requires special attention.

Step 4, the node conversion, does not look complicated, but it is essential to the edge.

What does disabling the nodelifecycle controller do?

What it does:

1. Queries the control-plane node.

2. Creates a job, using nodeName: {{.nodeName}} to ensure that the job's pod is scheduled onto the corresponding node, where it runs nsenter to modify files on the host. A minimal sketch of such a job follows this list.

3. Runs sed -i 's/--controllers=/--controllers=-nodelifecycle,/g' /etc/kubernetes/manifests/kube-controller-manager.yaml
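For illustration, here is a sketch of what such a job could look like. The job name, image, and tolerations are assumptions for the example, not OpenYurt's actual manifest:

cat <<'EOF' | kubectl apply -f -
apiVersion: batch/v1
kind: Job
metadata:
  name: disable-nodelifecycle   # hypothetical name
  namespace: kube-system
spec:
  template:
    spec:
      nodeName: master1          # pin the pod onto the control-plane node
      hostPID: true              # lets nsenter target the host's PID 1
      tolerations:
      - operator: Exists         # control-plane nodes are usually tainted
      containers:
      - name: runner
        image: busybox           # assumption; any image with nsenter works
        securityContext:
          privileged: true
        # enter the host's namespaces and edit the static pod manifest
        command: ["nsenter", "-t", "1", "-m", "-u", "-i", "-n", "-p", "--",
                  "sed", "-i",
                  "s/--controllers=/--controllers=-nodelifecycle,/g",
                  "/etc/kubernetes/manifests/kube-controller-manager.yaml"]
      restartPolicy: Never
EOF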

Looking at kube-controller-manager.yaml:

...
containers:
- command:
  - kube-controller-manager
  - --allocate-node-cidrs=true
  ...
  - --controllers=-nodelifecycle,*,bootstrapsigner,tokencleaner
...

We can see that this series of operations ultimately modifies the startup command of kube-controller-manager.

View the kube-controller-manager startup parameter description:

--controllers specifies the list of controllers to enable; '*' enables the default set, and prefixing a name with '-' disables that controller.

So the sed command disables the nodelifecycle controller.
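Concretely, on a kubeadm cluster the flag changes like this (the "before" value can be read off the manifest above by removing the prepended entry):

# before
--controllers=*,bootstrapsigner,tokencleaner
# after
--controllers=-nodelifecycle,*,bootstrapsigner,tokencleaner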

So, what does the nodelifecycle controller do?

Simply put:

1. It continuously monitors the node status reported by kubelet.

2. If a node's status is abnormal, or has not been reported for a long time, it taints or evicts the node, which causes the pods on it to be rescheduled.

For edge nodes in a weak network environment, it is easy to hit such an abnormal state, causing the node to be evicted and its pods rescheduled.

So OpenYurt removes it here and uses its own yurt-controller-manager to take over those responsibilities.

With yurt-controller-manager, even if a node's heartbeat is lost, pods on nodes in autonomy mode will not be evicted from the apiserver.

Note: nodes in autonomy mode here are the edge nodes. We usually mark a node as autonomous by adding an annotation to it, as shown below.
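For example (the annotation key below is the one used by recent OpenYurt releases; verify it against your version):

kubectl annotate node <node-name> node.beta.openyurt.io/autonomy=true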

How is node conversion achieved, and what is the difference between cloud nodes and edge nodes?

Similarly, a job is run to perform the related operations in the context of the target host.

However, compared with the rather brute-force nsenter approach, a more elegant method is used here: mounting the host's root path into the container as a volume.
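A minimal sketch of that idea (pod name, image, and command are placeholders for illustration; OpenYurt's actual manifest differs):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: node-convert-demo      # hypothetical name
  namespace: kube-system
spec:
  nodeName: edge-node1         # pin to the node being converted
  containers:
  - name: runner
    image: busybox             # assumption
    # host files are reachable under /host via the mount below
    command: ["sh", "-c", "ls /host/var/lib/kubelet"]
    volumeMounts:
    - name: host-root
      mountPath: /host
  volumes:
  - name: host-root
    hostPath:
      path: /                  # the host's root filesystem
  restartPolicy: Never
EOF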

Modification of kubelet

Appends configuration to KUBELET_KUBEADM_ARGS in the file /var/lib/kubelet/kubeadm-flags.env:

--kubeconfig=/var/lib/openyurt/kubelet.conf --bootstrap-kubeconfig=

Effect:

1. --kubeconfig specifies the kubeconfig file kubelet uses to access the apiserver.

2. When the --kubeconfig file exists and --bootstrap-kubeconfig is empty, kubelet starts without going through the bootstrap-token certificate exchange; it reads the kubeconfig file directly and uses it to access the apiserver.

3. Since these flags sit at the end of kubelet's KUBELET_KUBEADM_ARGS startup parameters, they override any earlier flags of the same name. A sketch of the result follows.
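For reference, the resulting env file might look like this (the leading flags depend on your original kubeadm setup), followed by a kubelet restart to pick it up:

# /var/lib/kubelet/kubeadm-flags.env (sketch)
KUBELET_KUBEADM_ARGS="--network-plugin=cni ... --kubeconfig=/var/lib/openyurt/kubelet.conf --bootstrap-kubeconfig="

systemctl daemon-reload && systemctl restart kubelet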

The file /var/lib/openyurt/kubelet.conf is shown below; it directs kubelet's traffic straight to yurthub:

apiVersion: v1
clusters:
- cluster:
    server: http://127.0.0.1:10261
  name: default-cluster
contexts:
- context:
    cluster: default-cluster
    namespace: default
    user: default-auth
  name: default-context
current-context: default-context
kind: Config
preferences: {}
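To sanity-check the handoff on a converted edge node (a quick sketch; assumes ss is available on the node):

# kubelet's kubeconfig now points at the local yurthub proxy
grep server /var/lib/openyurt/kubelet.conf
# expected: server: http://127.0.0.1:10261

# confirm something is listening on the yurthub proxy port
ss -ltn | grep 10261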

yurthub's startup details

The startup parameters of the yurthub container are as follows:

command:
- yurthub
- --v=2
- --server-addr=__kubernetes_service_addr__
- --node-name=$(NODE_NAME)
- --join-token=__join_token__
- --working-mode=__working_mode__

From these parameters we can see:

1. --server-addr specifies the cloud apiserver address. Note that this address must be reachable from the node, e.g. over the public network; otherwise there will be problems in a heterogeneous network.

2. --join-token is the token used when joining the node; it can be created with kubeadm token create. Kubernetes provides a mechanism for using this token in place of a normally issued kubeconfig file for access.

3. --working-mode: cloud/edge. This is the difference between edge nodes and cloud nodes.
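For example, the join token can be minted on the control plane with standard kubeadm (--ttl is optional):

kubeadm token create --ttl 24h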

We all know that yurthub can serve as a cache, which is an important part of edge autonomy. So why does the cloud side also need it? Why distinguish between edge and cloud working modes? A quick look at the yurthub source, cmd/yurthub/app/start.go:

if cfg.WorkingMode == util.WorkingModeEdge {
    cacheMgr, err = cachemanager.NewCacheManager(cfg.StorageWrapper, cfg.SerializerManager, cfg.RESTMapperManager, cfg.SharedFactory)
    ...
} else {
    klog.Infof("%d. disable cache manager for node %s because it is a cloud node", trace, cfg.NodeName)
}
if cfg.WorkingMode == util.WorkingModeEdge {
    ...
    gcMgr, err := gc.NewGCManager(cfg, restConfigMgr, stopCh)
} else {
    klog.Infof("%d. disable gc manager for node %s because it is a cloud node", trace, cfg.NodeName)
}

We can see that yurthub in cloud mode skips the cache and GC work.

Checking the issue (see the related links at the end of this article) clarifies it: the cloud side can also use the data-filtering capability provided by yurthub to shape service traffic.

The cloud side, of course, does not need cache-type work.

Command line parameters

During execution, several parameters are particularly important:

--cloud-nodes identifies which nodes are cloud nodes; multiple nodes are separated by commas: node1,node2

--deploy-yurttunnel mark whether to deploy yurttunnel

--kubeadm-conf-path marks the path of the kubeadm configuration file on the node machine. Default: /etc/systemd/system/kubelet.service.d/10-kubeadm.conf

For more parameters, you can use yurtctl convert --help to view.
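Putting it together, a typical invocation might look like this (the node name is a placeholder):

yurtctl convert --provider kubeadm \
  --cloud-nodes master1 \
  --deploy-yurttunnel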

Summary

Simply put, convert does a few core things:

1. Disables the Kubernetes nodelifecycle controller and replaces its responsibilities with OpenYurt's own yurt-controller-manager.

2. Installs OpenYurt's own components, deployed as standard Deployments, DaemonSets, and similar workloads. (There is no need to worry about this kind of resource deployment, because it does not touch the existing cluster components.)

3. On edge nodes: starts yurthub as a static pod and redirects kubelet traffic to yurthub.

As you can see, the conversion is quite controllable, so don't worry too much about executing yurtctl convert. And of course any remaining worry should be completely removed by yurtctl revert!

What does yurtctl revert do?

Core process

[Figure 2: yurtctl revert core process]

The entire revert process is the reverse operation of convert, which is relatively easy to understand.

One thing to be aware of: if convert fails, for example because a job times out or errors, the job will not be deleted.

Even yurtctl revert will not delete it. The purpose is to preserve the scene so the problem can be diagnosed.

If you need to re-run yurtctl convert, delete the job manually first:

kubectl get job -A | grep convert
kubectl delete job -n kube-system <job-name>

Summary

The yurtctl convert/revert command is one of the quickest ways to experience OpenYurt functions.

Once you understand how these two commands are implemented, you already know more than half of OpenYurt's technical approach.

Don't worry about running the commands any more. So easy!

Related Links

1) Source code:

https://github.com/openyurtio/openyurt/blob/5063752a9f6645270e3177e39a46df8c23145af2/pkg/yurtctl/cmd/convert/convert.go#L300

2) Issue:

https://github.com/openyurtio/openyurt/issues/450
