The previous article in this column was "Long Illustrated etcd Core Application Scenarios and Coding Practices"; this article continues the series. The planned chapters are as follows:

  • "Long Illustrated etcd Core Application Scenarios and Coding Practice"
  • "Building a highly available etcd cluster"
  • "Implementing distributed locks based on etcd (java code implementation)"
  • "Implementation of configuration change notification based on etcd (java code implementation)"
  • "Realizing service registration and discovery based on etcd (java code implementation)"
  • "Implementation of distributed system node leader election based on etcd (java code implementation)"

Many people know of etcd because of Kubernetes, so the most common way to build an etcd cluster is to configure and start it through k8s. However, etcd has many application scenarios beyond k8s, such as distributed locks, configuration change notifications, and leader election among the nodes of a distributed system. The etcd cluster installation introduced in this article is therefore independent of k8s: the highly available etcd cluster is installed directly on Linux servers.

1. Preparation

The following preparations must be completed on all three servers.

1.1. Planning the host server

First, you need to plan the servers. Because an etcd cluster needs to elect a leader, it is recommended that the number of cluster nodes be 3 or 5, and no more. Data is replicated between nodes to ensure consistency, so the more nodes there are, the greater the network and server overhead. Network connectivity between the servers must also be ensured.

Use the root user to add the following entries to the /etc/hosts file on each server to map hostnames to IP addresses. Accessing peer1 then means accessing the corresponding host IP.

 192.168.161.3       peer1
192.168.161.4       peer2
192.168.161.5       peer3
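
To confirm the mapping works, you can resolve each hostname through the hosts file, for example with getent (a quick check of my own, not part of the original steps):

 getent hosts peer1;
getent hosts peer2;
getent hosts peer3;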

1.2. New etcd user

On a CentOS Linux distribution, executing the following commands will create the etcd user and user group and automatically create the /home/etcd home directory. If you are using another distribution, you may need to use the useradd command and create this directory yourself.

 groupadd etcd
adduser -g etcd etcd

Run these commands as the root user to create the new user and its home directory. The new user has no password by default; you can set one with the passwd etcd command.
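
On distributions where adduser does not accept these options, a minimal equivalent sketch (my own addition, assuming the standard useradd from shadow-utils) is:

 groupadd etcd;
useradd -m -d /home/etcd -g etcd etcd;
passwd etcd;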

1.3. Open firewall ports

Use the following three commands to open etcd's standard ports 2379 and 2380 in the firewall. In actual installations I usually do not use these two ports: the more predictable the port, the greater the chance of being attacked, and choosing an uncommon port at random improves security. Here, however, I stick to the standard ports. The nodes inside the cluster communicate with each other over port 2380, and port 2379 serves external communication with clients.

 firewall-cmd --zone=public --add-port=2379/tcp --permanent;
firewall-cmd --zone=public --add-port=2380/tcp --permanent;
firewall-cmd --reload

Use the root user to operate the firewall.
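
You can then confirm that the ports are open (assuming firewalld is the active firewall, as on CentOS):

 firewall-cmd --zone=public --list-ports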

1.4. Create the necessary directories

Use su - etcd to switch from the root user to the etcd user, then create the following directory for etcd data storage in the etcd user's home directory /home/etcd:

 mkdir -p /home/etcd/data;

1.5. Download etcd and extract it

As the etcd user, download the etcd release package. Downloading from GitHub is relatively slow, so I chose the mirror provided by Huawei Cloud in China. If you don't want to use my version, you can search for an etcd download mirror and pick the version you need. Download with wget and extract with tar:

 wget https://mirrors.huaweicloud.com/etcd/v3.5.4/etcd-v3.5.4-linux-amd64.tar.gz;
tar -xzvf /home/etcd/etcd-v3.5.4-linux-amd64.tar.gz;
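
After extraction, a quick sanity check (my own addition) confirms the binary runs:

 /home/etcd/etcd-v3.5.4-linux-amd64/etcd --version;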

1.6. Cluster host password-free login

Later, when we perform etcd operation and maintenance tasks such as starting the cluster, we do not want to run commands on each server one by one; we want to complete the operation from a single server. This requires that the etcd users on the cluster hosts can log in to each other without a password. I will only briefly describe how to set this up; you can look up other articles for the underlying principle. Execute the following command as the etcd user in its home directory, and simply press Enter at every prompt.

 ssh-keygen -t rsa

  • Append the public key to the authorized_keys file. Note the use of >> rather than >: overwriting the file would wipe out any keys already copied over from the other hosts.

     cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys

  • Distribute the public key to the peer2 and peer3 hosts, entering the etcd login password when prompted.

 ssh-copy-id -i ~/.ssh/id_rsa.pub -p22 etcd@peer2;

The two ssh-copy-id commands must be executed separately, because each one prompts for a password.

 ssh-copy-id -i ~/.ssh/id_rsa.pub -p22 etcd@peer3;

This completes the configuration of password-free login from peer1 to peer2 and peer3. Perform the same operation on the peer2 and peer3 servers, sending the public key to the other two hosts with the hostnames swapped accordingly. Once this is done, running a command such as ssh etcd@peer3 as the etcd user on any of the three servers should log you in to peer3 without entering a password, which proves the setup succeeded.
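
To verify all the pairings at once, a one-line loop such as the following (my own sketch; run it as the etcd user on each of the three hosts) should print the three hostnames without any password prompt:

 for h in peer1 peer2 peer3; do ssh etcd@$h hostname; done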

2. Cluster startup and verification

2.1. One-click startup script implementation

After completing the above preparations, the etcd cluster installation is in fact already finished: the only real installation step was extracting the archive. The etcd, etcdctl and etcdutl files in the extracted directory are executables and can be used directly.

Next, we use the following script to start the etcd cluster (execute the script only once, on any one of the three planned servers). We name the script start-etcds.sh and give it executable permissions.

 #!/bin/bash

## ------------config-----------------
export ETCDCTL_API=3
CLUSTER_TOKEN=etcdcluster01
DATADIR=/home/etcd/data
HOSTNAME1=peer1
HOSTNAME2=peer2
HOSTNAME3=peer3
HOSTIP1=192.168.161.3
HOSTIP2=192.168.161.4
HOSTIP3=192.168.161.5
CLUSTER=${HOSTNAME1}=http://${HOSTIP1}:2380,${HOSTNAME2}=http://${HOSTIP2}:2380,${HOSTNAME3}=http://${HOSTIP3}:2380
CLUSTER_IPS=(${HOSTIP1}  ${HOSTIP2}  ${HOSTIP3})
CLUSTER_NAMES=(${HOSTNAME1}  ${HOSTNAME2}  ${HOSTNAME3})

## ---------------start etcd node------------------
for i in $(seq 0 `expr ${#CLUSTER_IPS[@]} - 1`); do
    nodeip=${CLUSTER_IPS[i]}
    nodename=${CLUSTER_NAMES[i]}

    ssh -T $nodeip <<EOF
    nohup /home/etcd/etcd-v3.5.4-linux-amd64/etcd  \
        --name ${nodename} \
        --data-dir  ${DATADIR}  \
        --initial-advertise-peer-urls http://${nodeip}:2380 \
        --listen-peer-urls http://${nodeip}:2380 \
        --advertise-client-urls http://${nodeip}:2379 \
        --listen-client-urls http://${nodeip}:2379 \
        --initial-cluster ${CLUSTER} \
        --initial-cluster-state new \
        --initial-cluster-token ${CLUSTER_TOKEN} >> ${DATADIR}/etcd.log 2>&1  &
EOF
echo "Starting etcd node on ${nodename} ... [ done ]"

sleep 5
done
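
Assuming the script is saved in the etcd user's home directory, make it executable and run it as the etcd user like so:

 chmod +x /home/etcd/start-etcds.sh;
/home/etcd/start-etcds.sh;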

This script is divided into two parts. The first part, config, defines our shell variables:

  • export ETCDCTL_API=3 indicates that version 3 of the etcdctl API is used.
  • CLUSTER_TOKEN : each etcd cluster has a unique token; it can be set to any value, as long as uniqueness is guaranteed.
  • DATADIR is the path where etcd stores its data on disk.
  • HOSTNAME1, 2 and 3 are the hostnames of the three servers we planned in advance, i.e. the output of the hostname command on each Linux host.
  • HOSTIP1, 2 and 3 are the IP addresses of the three servers we planned in advance. (If a host has multiple network cards, choose the IP of the card that serves external traffic.)
  • CLUSTER is the standard format for the etcd cluster member configuration.
  • CLUSTER_IPS and CLUSTER_NAMES are arrays holding the IP and hostname of each cluster node.

The second part is the etcd cluster startup script. Because we have configured the etcd user's password-free login between hosts, we can start the etcd service on three servers through one script.

  • for i in $(seq 0 `expr ${#CLUSTER_IPS[@]} - 1`); do is a for loop over the indices of the CLUSTER_IPS array, so i takes the values 0, 1 and 2.
  • nodeip and nodename are the elements with subscript i in the CLUSTER_IPS and CLUSTER_NAMES arrays, i.e. the host's IP and the host's name.
  • The for loop iterates over the 3 servers and uses ssh -T $nodeip to log in to each in turn. Because password-free login was configured above, no password is required. (The same ssh-plus-heredoc pattern can be reused for other one-click operations; see the sketch after this list.)
  • The pair of EOF markers delimits a here-document; the command wrapped between them is sent to the remote host and is the startup command of the etcd instance.
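
Here is that sketch: a minimal companion stop script in the same style. It is my own addition, not part of the original article, and it assumes pkill is available and that matching the binary path is precise enough on your hosts:

 #!/bin/bash
## stop-etcds.sh -- stop the etcd process on every node (hypothetical companion script)
CLUSTER_IPS=(192.168.161.3 192.168.161.4 192.168.161.5)

for nodeip in "${CLUSTER_IPS[@]}"; do
    # pkill -f matches against the full command line of the running process
    ssh -T ${nodeip} "pkill -f etcd-v3.5.4-linux-amd64/etcd"
    echo "Stopped etcd node on ${nodeip} ... [ done ]"
done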

The command to start etcd is as follows:

  • /home/etcd/etcd-v3.5.4-linux-amd64/etcd : the etcd startup command
  • --name : the etcd node name; to ensure uniqueness, we use the hostname of the host the node is deployed on.
  • --data-dir : the etcd data storage location
  • --initial-advertise-peer-urls and --listen-peer-urls specify the URLs the current node uses to communicate with the other nodes in the cluster. If the node sits behind a network proxy, set --initial-advertise-peer-urls to the proxy's address on port 2380.
  • --advertise-client-urls and --listen-client-urls specify the URLs that clients use to communicate with the current node. If the node sits behind a network proxy, set --advertise-client-urls to the proxy's address on port 2379.
  • --initial-cluster : the list of peer communication addresses of all nodes in the cluster
  • --initial-cluster-state : use new for a brand-new cluster, and existing for a node joining an existing cluster
  • --initial-cluster-token : the unique token identifying the cluster.
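
For concreteness, here is the command the loop's first iteration runs on peer1 once the variables above are substituted (the nohup wrapper and log redirection are omitted for brevity):

 /home/etcd/etcd-v3.5.4-linux-amd64/etcd \
    --name peer1 \
    --data-dir /home/etcd/data \
    --initial-advertise-peer-urls http://192.168.161.3:2380 \
    --listen-peer-urls http://192.168.161.3:2380 \
    --advertise-client-urls http://192.168.161.3:2379 \
    --listen-client-urls http://192.168.161.3:2379 \
    --initial-cluster peer1=http://192.168.161.3:2380,peer2=http://192.168.161.4:2380,peer3=http://192.168.161.5:2380 \
    --initial-cluster-state new \
    --initial-cluster-token etcdcluster01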

2.2. Verify the cluster

Use etcdctl member list to see how many nodes the current etcd cluster contains and the status of each node:

 /home/etcd/etcd-v3.5.4-linux-amd64/etcdctl \
--endpoints=192.168.161.3:2379,192.168.161.4:2379,192.168.161.5:2379 \
member list

In the output of the above command you can see that each node's status is started, which proves the cluster is operating normally. To find out which node is the current leader, the command I use most often is the following:

 /home/etcd/etcd-v3.5.4-linux-amd64/etcdctl \
--endpoints=192.168.161.3:2379,192.168.161.4:2379,192.168.161.5:2379 \
endpoint status -w table

In the resulting table, the node whose IS LEADER column shows true is the leader node of the cluster.
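
Another quick check is endpoint health, which reports whether each member responds within the timeout:

 /home/etcd/etcd-v3.5.4-linux-amd64/etcdctl \
--endpoints=192.168.161.3:2379,192.168.161.4:2379,192.168.161.5:2379 \
endpoint health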

Writing code and articles is not easy. If you find this helpful, please share it; without your support I may not be able to keep going!
You are welcome to follow my WeChat official account: Antetokounmpo. Reply 003 to receive the PDF version of my column "The Way of Docker Cultivation", with more than 30 high-quality Docker articles. Antetokounmpo blog: zimug.com

