Introduction to Docker Swarm

Docker Swarm is Docker's official container cluster management platform, written in Go, which greatly simplifies the management of Docker hosts, networks, and storage.
A Swarm cluster consists of one or more Docker hosts; these hosts can be physical machines, virtual machines, cloud instances, or any other machines running a Docker environment.

Core concepts

Init

Swarm cluster capability is built into Docker at installation time; a single command is enough to enable it:

 docker swarm init
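
If the host has multiple network interfaces, Swarm may ask you to specify which address to advertise to the other nodes; the IP below is only a placeholder:

 docker swarm init --advertise-addr 192.168.1.10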

Node

Node represents a node in the Swarm cluster and is the largest-grained scheduling unit managed by the cluster manager. Within a Swarm cluster, we can run the following command to view node information:

 docker node ls
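
The compose file later in this article pins services to nodes labeled node == manager, so it helps to know how such a label is added. A possible way, assuming the node is named node1 (the node name is illustrative):

 docker node update --label-add node=manager node1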

Manager

The manager role of the Swarm cluster. A cluster has at least one manager, which is responsible for cluster resource allocation and task scheduling; by default, tasks can also be scheduled onto manager nodes. When a node joins the cluster, we can specify its role.
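
The command for joining the cluster as a manager can be viewed with:

 docker swarm join-token manager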

Worker

Worker nodes in the Swarm cluster can only be scheduled; they are responsible solely for running tasks.
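
The command for joining the cluster as a worker can be viewed with:

 docker swarm join-token worker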

Service

Service is the smallest execution unit in a Swarm cluster. It supports resource limits, elastic scaling, rolling upgrades, and rollbacks. Our applications are defined as services and run in the cluster.
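
As a quick illustration (the service name, replica count, image, and port here are arbitrary), a service can be created, scaled out, and rolling-upgraded with:

 docker service create --name web --replicas 2 --publish 8080:80 nginx
 docker service scale web=4
 docker service update --image nginx:1.21 web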

Config

Config handles configuration management; it is used to define and store our configuration, such as the configuration an application needs at startup. We can create a config with the following command:

 echo "application.name=demo" | docker application.properties -

Secret

Secret handles secret management. For configuration security, sensitive values such as passwords can be stored as secrets; the usage is similar to config:

 echo "123456" | docker mysql.password -

docker-compose

Now that we know the basic concepts of a Swarm cluster, let's see how to describe our application with docker-compose syntax. The setup below is for learning only; storage and middleware components such as redis and mysql run as single instances.

 version: "3.2" #compose版本号
services:
  redis: # service name
    image: redis # image to use
    logging: # logging settings
      driver: "json-file"
      options:
        max-size: "100m"
        max-file: "3"
    command: --requirepass 123456 # command to run when the service starts
    deploy:
      mode: global # global mode: one task per matching node (a single instance here)
      placement:
        constraints: [node.labels.node == manager] # run only on nodes labeled node == manager
  nacos:
    image: nacos/nacos-server
    depends_on: # startup order: nacos depends on mysql
      - mysql 
    environment: # environment variables
      - MODE=standalone
      - SPRING_DATASOURCE_PLATFORM=mysql
      - MYSQL_SERVICE_HOST=mysql
      - MYSQL_SERVICE_PORT=3306
      - MYSQL_SERVICE_DB_NAME=nacos_config
      - MYSQL_SERVICE_USER=root
      - MYSQL_SERVICE_PASSWORD=123456
    deploy:
      mode: global
      placement:
        constraints: [node.labels.node == manager]
    logging:
      driver: "json-file"
      options:
        max-size: "100m"
        max-file: "3"
    ports:
      - "8848:8848"
  sentinel:
    image: registry.cn-hangzhou.aliyuncs.com/yaochengzhu/sentinel
    deploy:
      mode: global
      placement:
        constraints: [node.labels.node == manager] 
    logging:
      driver: "json-file"
      options:
        max-size: "100m"
        max-file: "3"
    ports:
      - "8858:8858"
  mysql:
    image: registry.cn-hangzhou.aliyuncs.com/yaochengzhu/mysql:2021-04-10
    environment:
      - MYSQL_ROOT_PASSWORD=123456
    volumes:
      - mysql:/var/lib/mysql
    command:
      --default-time_zone='+8:00'
      --character-set-server=utf8mb4
      --collation-server=utf8mb4_unicode_ci
    deploy:
      mode: global
      placement:
        constraints: [node.labels.node == manager]
    logging:
      driver: "json-file"
      options:
        max-size: "100m"
        max-file: "3"
  order-center:
    image: registry.cn-hangzhou.aliyuncs.com/yaochengzhu/order-center:2020-12-04-18-09-19
    restart: always
    depends_on:
      - mysql
      - nacos
      - redis
    deploy:
      replicas: 2 # number of running instances
    logging:
      driver: "json-file"
      options:
        max-size: "100m"
        max-file: "3"
    environment:
      - SERVER_PORT=80
      - MYSQL_SERVER=jdbc:mysql://mysql:3306/order_center?allowPublicKeyRetrieval=true&useSSL=false&useUnicode=true&characterEncoding=UTF-8&serverTimezone=Asia/Shanghai
      - MYSQL_USER_NAME=root
      - MYSQL_ROOT_PASSWORD=123456
      - REDIS_HOST=redis
      - REDIS_PORT=6379
      - REDIS_PASSWORD=123456
      - NACOS_SERVER=nacos:8848
      - LOG_LEVEL=INFO
    healthcheck: # health check
      test: ["CMD","curl","-f", "http://127.0.0.1/doc.html"] # check command
      interval: 5s # check interval
      timeout: 5s # timeout for a single check
      retries: 100 # maximum number of checks
  user-center:
    image: registry.cn-hangzhou.aliyuncs.com/yaochengzhu/user-center:2020-12-04-18-09-19
    restart: always
    environment:
      - SERVER_PORT=80
      - MYSQL_SERVER=jdbc:mysql://mysql:3306/user_center?allowPublicKeyRetrieval=true&useSSL=false&useUnicode=true&characterEncoding=UTF-8&serverTimezone=Asia/Shanghai
      - MYSQL_USER_NAME=root
      - MYSQL_ROOT_PASSWORD=123456
      - REDIS_HOST=redis
      - REDIS_PORT=6379
      - REDIS_PASSWORD=123456
      - NACOS_SERVER=nacos:8848
      - LOG_LEVEL=INFO
    depends_on:
      - mysql
      - nacos
      - redis
    logging:
      driver: "json-file"
      options:
        max-size: "100m"
        max-file: "3"
    healthcheck:
      test: ["CMD","curl","-f", "http://127.0.0.1/doc.html"]
      interval: 5s
      timeout: 5s
      retries: 100
  api-gateway:
    image: registry.cn-hangzhou.aliyuncs.com/yaochengzhu/api-gateway:2020-12-04-18-09-19
    restart: always
    environment:
      - SERVER_PORT=80
      - USER_CENTER_SERVER=lb://user-center
      - ORDER_CENTER_SERVER=lb://order-center
      - NACOS_SERVER=nacos:8848
      - SENTINEL_SERVER=sentinel:8858
      - LOG_LEVEL=INFO
    depends_on:
      - user-center
      - order-center
      - nacos
      - sentinel
    logging:
      driver: "json-file"
      options:
        max-size: "100m"
        max-file: "3"
    healthcheck:
      test: ["CMD","curl","-f", "http://127.0.0.1/doc.html"]
      interval: 5s
      timeout: 5s
      retries: 100
  nginx:
    image: registry.cn-hangzhou.aliyuncs.com/yaochengzhu/nginx:api-gateway
    restart: always
    ports:
      - "80:80"
      - "443:443"
    depends_on:
      - api-gateway
    logging:
      driver: "json-file"
      options:
        max-size: "100m"
        max-file: "3"
    healthcheck:
      test: ["CMD","curl","-f", "http://127.0.0.1/doc.html"]
      interval: 5s
      timeout: 5s
      retries: 100
volumes: # declare named data volumes
  mysql:

Save the file above as docker-compose.yaml and run docker-compose up in the directory that contains it to start the services. In real use, however, we may have multiple environments; for easier management we usually use docker stack to isolate our services.

 docker stack deploy --compose-file=demo-compose.yaml demo # run our services in the demo environment

View our services:
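
For the demo stack deployed above, either of these standard commands works:

 docker stack services demo
 docker service ls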

Portainer Visual Management

Now that we understand the Swarm cluster, we want a more convenient way to manage it. Here I recommend the Portainer container management tool; from this point on, almost all of our operations can be completed in the UI that Portainer provides.

Install Portainer

Install in a swarm cluster

version: '3.2'
services:
  agent:
    image: portainer/agent:2.11.1
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - /var/lib/docker/volumes:/var/lib/docker/volumes
    networks:
      - agent_network
    deploy:
      mode: global
      placement:
        constraints: [node.platform.os == linux]

  portainer:
    image: portainer/portainer-ce:2.11.1
    command: -H tcp://tasks.agent:9001 --tlsskipverify
    ports:
      - "9443:9443"
      - "9000:9000"
      - "8000:8000"
    volumes:
      - portainer_data:/data
    networks:
      - agent_network
    deploy:
      mode: replicated
      replicas: 1
      placement:
        constraints: [node.role == manager]

networks:
  agent_network:
    driver: overlay # overlay network spans multiple hosts
    attachable: true
volumes:
  portainer_data:

Execute the installation command

 docker stack deploy --compose-file=portainer.yaml portainer_demo
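
Before opening the UI, you can confirm that the stack's tasks are running:

 docker stack ps portainer_demo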

After the installation succeeds, open ip:9000 in a browser to reach the Portainer initialization page and set a password.

After logging in, this is what it looks like

I have marked a few commonly used functions; if you are interested, you can dig into them further.

Let's take a look at the demo we deployed earlier:

Add domain name resolution and access the interface documentation address:

Summary

At this point, I believe everyone has a basic understanding of Docker Swarm. Compared with a k8s cluster, Swarm is much leaner and smaller; a Swarm cluster suits scenarios where the company is small and the applications are not very complex.
Alibaba Cloud retired its Swarm cluster offering on December 31, 2019. Since then Swarm has faded from the cloud scene while k8s has grown increasingly popular. When choosing a technology, we need to consider not only the strengths of the technology itself but also how well it fits; after all, the most suitable choice is the best one. Perhaps you can try swarm + portainer or swarm + rancher.

References

[1] docker: https://www.docker.com/

[2] swarm: https://docs.docker.com/engine/reference/commandline/swarm/

[3] portainer: https://www.portainer.io/

For more great content, follow our WeChat official account "Hundred Bottles Technology"; we share perks from time to time!

