1. Introduction to Docker
What is Docker
Docker is an open-source application container engine, written in Go and released under the Apache 2.0 license. As a container technology, Docker packages software together with its dependencies in a standardized way. Containers are relatively independent of one another, so applications running in different containers are isolated from each other; at the same time, all containers share one OS kernel, which makes efficient use of server resources. Containers can run on most mainstream operating systems.
Differences between containers and virtual machines
A container runs on Linux and shares the host's kernel with other containers. It runs as an isolated process and takes no more memory than any other executable, which makes it very lightweight. A virtual machine runs a complete operating system and accesses host resources virtually through a hypervisor, so it needs far more resources.
- Containers isolate at the application level
- Virtual machines isolate at the physical-resource level
Docker architecture and underlying technologies
Docker is essentially a process on the host. It uses namespaces for resource isolation, cgroups for resource limiting, and copy-on-write for efficient file operations (similar to how a virtual machine disk of, say, 500 GB does not actually occupy 500 GB of physical disk).
Underlying technologies
Namespaces
The six isolations provided by namespaces
Namespace | System call flag | What is isolated |
---|---|---|
UTS | CLONE_NEWUTS | Hostname and domain name |
IPC | CLONE_NEWIPC | Semaphores, message queues and shared memory |
PID | CLONE_NEWPID | Process IDs |
NETWORK | CLONE_NEWNET | Network devices, network stack, ports, etc. |
MOUNT | CLONE_NEWNS | Mount points (file systems) |
USER | CLONE_NEWUSER | Users and user groups (supported by kernels 3.8 and later) |
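The isolation in the table can be observed directly with the util-linux `unshare` tool (a minimal sketch; requires root and a reasonably recent kernel):

```shell
# Enter a new UTS namespace and change the hostname there;
# the host's hostname outside the namespace is untouched.
sudo unshare --uts /bin/bash -c 'hostname ns-demo; hostname'
# Enter a new PID namespace; the shell sees itself as PID 1.
sudo unshare --pid --fork --mount-proc /bin/bash -c 'echo $$'
```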
Control groups (cgroups)
What cgroups provide
- Resource limiting: cap the total amount of resources a group of tasks may use
- Prioritization: allocating CPU time slices and disk I/O bandwidth effectively controls task priority
- Accounting: measure system resource usage, such as CPU time and memory consumption
- Task control: suspend and resume tasks
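As a concrete illustration of resource limiting (a sketch, assuming cgroup v1 with the cgroupfs driver, as used by older Docker releases): when a container is started with `--memory=200M`, the limit shows up in the cgroup filesystem:

```shell
# <container-id> is the full ID of a container started with --memory=200M
cat /sys/fs/cgroup/memory/docker/<container-id>/memory.limit_in_bytes
# 209715200
```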
2. Setting up the Docker environment
Installing Docker on CentOS
- Visit the official download page
Remove old Docker packages and related dependencies
sudo yum remove docker \
                docker-client \
                docker-client-latest \
                docker-common \
                docker-latest \
                docker-latest-logrotate \
                docker-logrotate \
                docker-engine
Install the yum-utils package
sudo yum install -y yum-utils
Add the Docker repository
sudo yum-config-manager \
    --add-repo \
    https://download.docker.com/linux/centos/docker-ce.repo
Install the latest Docker Engine (Community Edition)
sudo yum install docker-ce docker-ce-cli containerd.io
Install a specific version of Docker Engine
#List available docker-ce versions, sorted from newest to oldest
yum list docker-ce --showduplicates | sort -r
#Install a specific version by substituting <VERSION_STRING>, e.g.
#sudo yum install -y docker-ce-20.10.13-3.el7 docker-ce-cli-20.10.13-3.el7 containerd.io
sudo yum install docker-ce-<VERSION_STRING> docker-ce-cli-<VERSION_STRING> containerd.io
Start Docker
sudo systemctl start docker
Check the Docker version
sudo docker version
Verify that Docker Engine was installed successfully
sudo docker run hello-world
3. Docker images, containers and repositories
Docker builds user images layer by layer on top of base images, which in turn rely on the Linux kernel of the host. A container is a running instance of an image and is started from one. An image is an executable package that contains everything needed to run an application: the code, runtime, libraries, environment variables and configuration files.
Docker images
What is a Docker image
A Docker image is a read-only template. For example, an image can contain a complete CentOS with only Apache or the user's other applications installed. Images are used to create Docker containers.
- A collection of files (the root filesystem) and metadata
- Images are layered; each layer can add, modify or delete files, producing a new image
- Different images can share the same layers
- An image itself is read-only
Docker registry mirrors
Pulling images from the official Docker Hub can be very slow; a domestic mirror can help.
Configuring registry mirrors
On systems using systemd, write the following into /etc/docker/daemon.json (create the file if it does not exist)
{
"registry-mirrors": ["https://hub-mirror.c.163.com","https://reg-mirror.qiniu.com"]
}
#Reload the systemd configuration and restart the Docker daemon
systemctl daemon-reload
systemctl restart docker
Image-related commands
List local images
docker image ls
#shorthand
docker images
Remove an image
docker image rm f6509bac4980
#shorthand
docker rmi f6509bac4980
Build an image
#Build an image from a Dockerfile
docker image build
#Create a new image from a changed container
docker container commit
#shorthand for the above
docker commit
Pull an image from a registry
#Pulls from Docker Hub by default
docker pull ubuntu:14.04
Show the build history of an image
#By image name
docker history mysql:5.7
#By image ID
docker history f6509bac4980
Publishing images to Docker Hub
#First log in with your Docker Hub username and password
docker login
#Push a local image to Docker Hub. Pushing a locally built image directly is not
#recommended, because how the image was built is not transparent to its users.
#A better practice is to link Docker Hub to GitHub, let Docker Hub pull the
#Dockerfile stored on GitHub and build the image automatically; such images are
#far more trustworthy.
docker push neojayway/hello-world:latest
Dockerfile syntax and practice
Dockerfile syntax
The FROM instruction
#Build a base image from scratch
FROM scratch
#Build on an existing base image
FROM centos
FROM ubuntu:14.04
Note: for security, prefer official images as base images.
The LABEL instruction
LABEL version="1.0"
LABEL description="this is description"
Note: LABEL defines image metadata, similar to comments and help text, and is well worth adding.
The RUN instruction
RUN yum update && yum install -y vim \
    python-dev #backslash continues the line
Note: RUN executes a command and creates a new image layer. Every RUN produces one layer, so to avoid useless layers, merge multiple commands into a single RUN, using backslashes for line breaks and && to chain the commands.
Shell form
RUN yum install -y vim
CMD echo "hello docker"
ENTRYPOINT echo "hello docker"
Exec form
#The exec form runs the command directly, not through a shell
RUN ["yum", "install", "-y", "vim"]
CMD ["/bin/echo", "hello docker"]
ENTRYPOINT ["/bin/echo", "hello docker"]
#No shell is involved, so $name is not substituted
ENTRYPOINT ["/bin/echo", "hello $name"]
#Running explicitly through bash makes $name substitution work
ENTRYPOINT ["/bin/bash", "-c", "echo hello $name"]
The CMD instruction
FROM centos
ENV name docker
CMD echo "hello $name"
- Building the Dockerfile above into an image and running docker run [image] prints "hello docker"
- Running docker run -it [image] /bin/bash does not print "hello docker"
Note: CMD sets the default command and arguments executed when the container starts. If docker run specifies another command, CMD is ignored; if multiple CMD instructions are defined, only the last one takes effect.
The ENTRYPOINT instruction
COPY docker-entrypoint.sh /usr/local/bin/
ENTRYPOINT ["docker-entrypoint.sh"]
EXPOSE 27017
CMD ["mongod"]
Note: ENTRYPOINT sets the command run at container start, letting the container run as an application or service. It is not ignored and is always executed.
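A minimal sketch of how ENTRYPOINT and CMD combine: CMD supplies default arguments to ENTRYPOINT, and only CMD is replaced by arguments given to docker run.

```dockerfile
FROM centos
ENTRYPOINT ["/bin/echo"]
CMD ["hello docker"]
# docker run <image>       -> prints "hello docker"
# docker run <image> hi    -> prints "hi" (CMD replaced, ENTRYPOINT kept)
```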
The WORKDIR instruction
#The directory is created automatically if it does not exist
WORKDIR /test
WORKDIR demo
RUN echo $PWD #prints /test/demo
Note: WORKDIR sets the current working directory. Change directories with WORKDIR instead of RUN cd, since RUN adds a layer, and prefer absolute paths.
The ADD and COPY instructions
ADD test.tar.gz / #added to the root directory and unpacked
COPY hello /
Note: ADD and COPY add local files to the Docker image. In most cases COPY is preferable to ADD; ADD additionally unpacks archives. To add remote files or directories, use RUN with curl or wget.
The ENV instruction
ENV MYSQL_VERSION 5.7 #define a variable
RUN apt-get install -y mysql-server="${MYSQL_VERSION}" \
    && rm -rf /var/lib/apt/lists/* #reference the variable
Note: ENV defines variables, which improves maintainability.
The VOLUME instruction
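VOLUME declares a mount point; data written there goes into a Docker-managed volume and survives container removal. A minimal sketch:

```dockerfile
FROM mysql:5.7
# Data written to /var/lib/mysql outlives the container
VOLUME /var/lib/mysql
```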
The EXPOSE instruction
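EXPOSE documents which port the containerized service listens on; it does not publish the port by itself, which is done at run time with -p or -P. A minimal sketch:

```dockerfile
FROM nginx
# Document that the service listens on 80; publish with `docker run -p 8080:80`
EXPOSE 80
```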
Docker containers
What is a Docker container
Docker uses containers to run applications. A container is a running instance created from an image; it can be started, stopped and deleted, and containers are isolated from one another, providing a secure platform. A container can be seen as a simplified Linux environment (with root privileges, process space, user space, network space and so on) plus the application running inside it.
- A container is created from an image (by copying it)
- A container adds a writable container layer on top of the image layers
- Images handle the storage and distribution of the app; containers run the app
- The relationship between containers and images resembles that between instances and classes
Container-related commands
List containers
#Containers that are currently running
docker container ls
#All containers, including exited ones
docker container ls -a
docker ps -a
Run a container
docker run centos
#Name the container and run it as a background process
docker run -d --name=demo neojayway/hello-world
#Run a container in interactive mode
docker run -it centos
Limit container resources
#Limit container memory to 200M
docker run --memory=200M neojayway/hello-world
#Set the relative CPU weight
docker run --cpu-shares=10 neojayway/hello-world
Remove containers
#Remove by container ID
docker container rm d02f80816fbb
#shorthand
docker rm d02f80816fbb
#Remove all containers; -q lists only container IDs
docker rm $(docker container ls -aq)
#Remove exited containers; -q lists only container IDs
docker rm $(docker container ls -f "status=exited" -q)
Enter a container
#Open an interactive shell inside the container
docker exec -it 11a767d3a588 /bin/bash
#Show the container's IP address
docker exec -it 11a767d3a588 ip a
Stop a container
docker container stop 11a767d3a588
#shorthand
docker stop 11a767d3a588
Start a container
docker container start 11a767d3a588
#shorthand
docker start 11a767d3a588
#Start a container by name
docker start demo
Inspect a container
#Show detailed information about a container
docker inspect 11a767d3a588
#Inspect a container by name
docker inspect demo
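docker inspect prints a large JSON document; a Go template passed via -f extracts a single field (sketch):

```shell
# Print only the container's IP address on the default bridge
docker inspect -f '{{.NetworkSettings.IPAddress}}' demo
```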
View container logs
#Show the logs produced by a container
docker logs 11a767d3a588
Docker repositories
What is a Docker repository
A repository is a central place for storing image files, while a registry is the repository hosting service. A registry server holds many repositories; each repository contains many images, and each image carries a different tag.
Repositories come in two kinds, public and private. The largest public registry is Docker Hub, which hosts a huge number of images for download; Docker Pool is a Chinese counterpart. The notion of a repository here is similar to Git's, and a registry can be thought of as a hosting service like GitHub.
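The registry/repository/tag structure described above can be made concrete with a small parser (illustrative only; the authoritative rules live in Docker's image-reference grammar, and this simplified version merely mirrors the common defaults of `docker pull`):

```python
def parse_image_ref(ref: str):
    """Split an image reference into (registry, repository, tag),
    mimicking docker pull defaults: registry docker.io, tag latest,
    and the implicit library/ prefix for official images."""
    registry, repo = "docker.io", ref
    first = ref.split("/", 1)[0]
    # A leading component with a dot, a colon or "localhost" is a registry host
    if "/" in ref and ("." in first or ":" in first or first == "localhost"):
        registry, repo = ref.split("/", 1)
    tag = "latest"
    # A colon in the last path component separates the tag
    if ":" in repo.rsplit("/", 1)[-1]:
        repo, tag = repo.rsplit(":", 1)
    # Official images live under the implicit library/ namespace
    if "/" not in repo and registry == "docker.io":
        repo = "library/" + repo
    return registry, repo, tag

print(parse_image_ref("ubuntu:14.04"))
# -> ('docker.io', 'library/ubuntu', '14.04')
print(parse_image_ref("neojayway/hello-world:latest"))
# -> ('docker.io', 'neojayway/hello-world', 'latest')
```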
4. Docker networking
Linux network namespaces
A network namespace provides all processes inside it with a completely isolated network stack, including network interfaces, routing tables and iptables rules. Network namespaces make virtual network environments possible and isolate them from one another; Docker's network isolation is built on them.
Simulating Docker networking with Linux network namespaces
Run Docker containers and observe their network configuration
#Run the following two containers; busybox is a very small Linux image
docker run -d --name test1 busybox /bin/sh -c "while true; do sleep 3600; done"
docker run -d --name test2 busybox /bin/sh -c "while true; do sleep 3600; done"
#Show the container's network interfaces
docker exec aa809330a169 ip a
#Enter the container
docker exec -it aa809330a169 /bin/sh
#Verify that the containers can reach each other over the network
docker exec aa809330a169 ping 172.17.0.2
Note: containers on the same host can reach each other over the network. This relies on Linux network namespaces: each Docker container lives in its own namespace.
Reproducing Docker networking with network namespaces by hand
This experiment creates two Linux network namespaces and one veth pair (a pair of virtual interfaces), attaches one interface to each namespace, assigns IP addresses, brings the interfaces up, and then tests connectivity between the two namespaces.
Create network namespaces manually
#List the network namespaces on this host
ip netns list
#Create network namespaces
ip netns add test1
ip netns add test2
#Delete a network namespace
ip netns delete test1
Inspect the newly created network namespaces
#Show interface information inside a namespace
ip netns exec test1 ip a
#List the links inside the namespace
ip netns exec test1 ip link
#Bring up the local loopback link; a link connects two endpoints, and one end alone cannot come up
ip netns exec test1 ip link set dev lo up
Note: a freshly created network namespace contains only a loopback interface, with no address and in the down state.
Create a veth pair (virtual interface pair)
#Create the pair of virtual interfaces veth-test1 and veth-test2
ip link add veth-test1 type veth peer name veth-test2
Note: the host now has two extra virtual interfaces, i.e. the veth pair created above. Next, these two interfaces are moved into the test1 and test2 namespaces.
Attach the veth interfaces to the namespaces
#Move each virtual interface into its namespace
ip link set veth-test1 netns test1
ip link set veth-test2 netns test2
Note: the two virtual interfaces have now disappeared from the host; veth-test1 and veth-test2 live in the test1 and test2 namespaces respectively, but they still have no addresses or other resources assigned.
Assign IP addresses to the veth interfaces
#Assign an IP address and netmask to the virtual interface in each namespace
ip netns exec test1 ip addr add 192.168.1.1/24 dev veth-test1
ip netns exec test2 ip addr add 192.168.1.2/24 dev veth-test2
Bring up the veth interfaces
ip netns exec test1 ip link set dev veth-test1 up
ip netns exec test2 ip link set dev veth-test2 up
Test connectivity between the namespaces
ip netns exec test1 ping 192.168.1.2
The Docker bridge network
As shown in the figure below, Docker containers are not connected to each other directly; traffic passes through the bridge docker0, a virtual interface in the host's default network namespace. Each container is connected to docker0 by a veth pair, which gives containers connectivity with one another. docker0 in turn reaches the host's eth0 interface via NAT (network address translation), so containers can borrow the host's access to external networks.
Commands for inspecting the bridge network
#Install the bridge utilities
yum install -y bridge-utils
#List bridges, e.g. which virtual interfaces are attached to each bridge
brctl show
#Show details of the Docker bridge network, e.g. which containers are attached to it
docker network inspect bridge
Container networking: host and none
The bridge network was covered above; next come the host and none networks. A container on the none network cannot be reached from outside: its network namespace is isolated and contains only a loopback interface, so the only way in is to enter the container locally from the host.
The none network
#Run a container on the none network
docker run -d --name none-nw --network none centos /bin/sh -c "while true; do sleep 3600; done"
The host network
#Run a container on the host network
docker run -d --name host-nw --network host centos /bin/sh -c "while true; do sleep 3600; done"
Note: a container on the host network is not assigned its own IP either, because it uses the host's network namespace entirely.
Linking containers
Imagine one host running two containers, one with a MySQL service and one with a web service that needs to reach MySQL. Since container IPs are dynamic, how can the web service reach the MySQL service simply by container name? Linking the containers makes this possible.
docker run -d --name test1 busybox /bin/sh -c "while true; do sleep 3600; done"
#test2 links to test1; the link is directional
docker run -d --name test2 --link test1 centos /bin/sh -c "while true; do sleep 3600; done"
Note: test2 is linked to test1, so test2 can test connectivity to test1 directly by container name.
Connecting containers through a user-defined bridge
By default, containers communicate through the default bridge docker0. We can also create a new bridge and have containers use it. The difference from the default bridge is that containers on a user-defined bridge are implicitly linked to each other (DNS records are added), so they can talk to each other by container name.
Create a bridge
#Create a bridge network; -d specifies the driver
docker network create -d bridge my-bridge
#List the Docker networks
docker network ls
#Show the bridges
brctl show
Run containers on the custom bridge
#--network selects the bridge; if omitted, the default docker0 bridge is used
docker run -d --name test2 --network my-bridge centos /bin/sh -c "while true; do sleep 3600; done"
#Connect an existing container to the custom bridge; test1 is now attached to several bridge networks
docker network connect my-bridge test1
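Once both containers are attached to my-bridge, the embedded DNS lets them resolve each other by name (sketch):

```shell
# From test2, resolve and ping test1 by container name
docker exec test2 ping -c 1 test1
```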
Container port mapping
A service port inside a container must be bound to a port on the host before it can be reached from outside.
#Start an Nginx service, mapping container port 80 to host port 80
docker run -d -p 80:80 --name web nginx
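-p accepts several forms; a sketch of common variants:

```shell
docker run -d -p 8080:80 nginx            # host port 8080 -> container port 80
docker run -d -p 127.0.0.1:8080:80 nginx  # bind the mapping to loopback only
docker run -d -P nginx                    # map all EXPOSEd ports to random host ports
```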
5. Simple deployment experiments with Docker
Single-host container deployment with Docker Desktop on Windows
#Run a MySQL container on Docker Desktop for Windows
docker run -d --name mysql-ho36 -p3306:3306 -v mysql-vol-data-ho36:/var/lib/mysql -v mysql-vol-conf-ho36:/etc/mysql/conf.d -e MYSQL_ROOT_PASSWORD=123456 mysql:5.7.39
#Run a MongoDB container on Docker Desktop for Windows
docker run -d --name mongo-ho36 -p27017:27017 -v mongo-volume-ho36:/data/db -v mongo-volume-ho36:/data/configdb -e MONGO_INITDB_ROOT_USERNAME=mongoadmin -e MONGO_INITDB_ROOT_PASSWORD=123456 -m 2G mongo:4.4.17-rc2
#Run an Elasticsearch container on Docker Desktop for Windows
docker run -d --name es-ho36 -p9200:9200 -p9300:9300 -v es-vol-data-ho36:/usr/share/elasticsearch/data -v es-vol-plugins-ho36:/usr/share/elasticsearch/plugins -e "discovery.type=single-node" -e "cluster.name=es.ho36" -e "bootstrap.memory_lock=true" -m 2G --ulimit memlock=-1:-1 elasticsearch:7.17.6
#Enter the container and install the IK analysis plugin
./bin/elasticsearch-plugin install https://github.com/medcl/elasticsearch-analysis-ik/releases/download/v7.17.6/elasticsearch-analysis-ik-7.17.6.zip
#Run a Neo4j container on Docker Desktop for Windows
docker run -d --name neo4j-ho36 -p7474:7474 -p7687:7687 -v neo4j-vol-data-ho36:/data -v neo4j-vol-logs-ho36:/logs -e NEO4J_AUTH=neo4j/123456 neo4j:3.5.28
#Run a Redis container on Docker Desktop for Windows
docker run -d --name redis-ho36 -p6379:6379 -v redis-vol-data-ho36:/data redis:6.2.7 --requirepass "123456"
Multi-container deployment on a single host
Run two containers on a single virtual machine: one with a Redis service, the other with a small Python web app that needs to reach the Redis service in the first container.
Run a Redis service container
#Run a redis service; containers on the same host can reach it without port mapping
docker run -d --name redis redis
Write a small Python web app, app.py
from flask import Flask
from redis import Redis
import os
import socket

app = Flask(__name__)
redis = Redis(host=os.environ.get('REDIS_HOST', '127.0.0.1'), port=6379)

@app.route('/')
def hello():
    redis.incr('hits')
    return 'Hello Container World! I have been seen %s times and my hostname is %s.\n' % (redis.get('hits'), socket.gethostname())

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000, debug=True)
Note: the web app exposes one HTTP endpoint; each request increments a hit counter stored in the Redis service.
Create a Dockerfile
FROM python:2.7
LABEL maintainer="neojayway liu_rmr@163.com"
COPY . /app
WORKDIR /app
RUN pip install flask redis
EXPOSE 5000
CMD [ "python", "app.py" ]
Build the flask-redis image and run the container
#Build the image from the Dockerfile
docker build -t flask-redis /home/docker_practice/demo1/
#Run the container. -e sets an environment variable readable inside the
#container, so the app can reach redis by the redis container's name
docker run -d -p 5000:5000 --link redis --name flask-redis -e REDIS_HOST=redis flask-redis
Verify
curl 127.0.0.1:5000
Deploying Nacos in a container
- Deploy a standalone Nacos instance with docker compose
version: "3.8"
services:
  nacos:
    image: nacos/nacos-server:v2.3.2
    container_name: local-nacos-standalone
    environment:
      - PREFER_HOST_MODE=hostname
      - MODE=standalone
      - SPRING_DATASOURCE_PLATFORM=mysql
      - MYSQL_SERVICE_HOST=www.abc.cn
      - MYSQL_SERVICE_DB_NAME=nacos_jayway
      - MYSQL_SERVICE_PORT=3306
      - MYSQL_SERVICE_USER=root
      - MYSQL_SERVICE_PASSWORD=123456
      - MYSQL_SERVICE_DB_PARAM=characterEncoding=utf8&connectTimeout=1000&socketTimeout=3000&autoReconnect=true&useUnicode=true&useSSL=false&serverTimezone=Asia/Shanghai&allowPublicKeyRetrieval=true
      - NACOS_AUTH_ENABLE=true
      - NACOS_AUTH_IDENTITY_KEY=nacos_jayway_identity_key
      - NACOS_AUTH_IDENTITY_VALUE=nacos_jayway_identity_value
      - NACOS_AUTH_TOKEN=SecretKey012345678901234567890123456789012345678901234567890123456789
    volumes:
      - local-single-nacos-log-vol:/home/nacos/logs
    ports:
      # Main port: the HTTP port used by clients, the console and the OpenAPI
      - "8848:8848"
      # gRPC port for client-to-server connections and requests
      - "9848:9848"
      # gRPC port for server-to-server requests, e.g. cluster synchronization
      - "9849:9849"
    restart: always
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8848/nacos/actuator/health"]
      interval: 30s
      timeout: 10s
      retries: 5
volumes:
  local-single-nacos-log-vol:
    driver: local
    name: local-single-nacos-log-vol
Note: the external MySQL database must be initialized first by executing the following statements
/*
* Copyright 1999-2018 Alibaba Group Holding Ltd.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
/******************************************/
/* 表名称 = config_info */
/******************************************/
CREATE TABLE `config_info` (
`id` bigint(20) NOT NULL AUTO_INCREMENT COMMENT 'id',
`data_id` varchar(255) NOT NULL COMMENT 'data_id',
`group_id` varchar(128) DEFAULT NULL COMMENT 'group_id',
`content` longtext NOT NULL COMMENT 'content',
`md5` varchar(32) DEFAULT NULL COMMENT 'md5',
`gmt_create` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '创建时间',
`gmt_modified` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '修改时间',
`src_user` text COMMENT 'source user',
`src_ip` varchar(50) DEFAULT NULL COMMENT 'source ip',
`app_name` varchar(128) DEFAULT NULL COMMENT 'app_name',
`tenant_id` varchar(128) DEFAULT '' COMMENT '租户字段',
`c_desc` varchar(256) DEFAULT NULL COMMENT 'configuration description',
`c_use` varchar(64) DEFAULT NULL COMMENT 'configuration usage',
`effect` varchar(64) DEFAULT NULL COMMENT '配置生效的描述',
`type` varchar(64) DEFAULT NULL COMMENT '配置的类型',
`c_schema` text COMMENT '配置的模式',
`encrypted_data_key` varchar(1024) NOT NULL DEFAULT '' COMMENT '密钥',
PRIMARY KEY (`id`),
UNIQUE KEY `uk_configinfo_datagrouptenant` (`data_id`,`group_id`,`tenant_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_bin COMMENT='config_info';
/******************************************/
/* 表名称 = config_info_aggr */
/******************************************/
CREATE TABLE `config_info_aggr` (
`id` bigint(20) NOT NULL AUTO_INCREMENT COMMENT 'id',
`data_id` varchar(255) NOT NULL COMMENT 'data_id',
`group_id` varchar(128) NOT NULL COMMENT 'group_id',
`datum_id` varchar(255) NOT NULL COMMENT 'datum_id',
`content` longtext NOT NULL COMMENT '内容',
`gmt_modified` datetime NOT NULL COMMENT '修改时间',
`app_name` varchar(128) DEFAULT NULL COMMENT 'app_name',
`tenant_id` varchar(128) DEFAULT '' COMMENT '租户字段',
PRIMARY KEY (`id`),
UNIQUE KEY `uk_configinfoaggr_datagrouptenantdatum` (`data_id`,`group_id`,`tenant_id`,`datum_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_bin COMMENT='增加租户字段';
/******************************************/
/* 表名称 = config_info_beta */
/******************************************/
CREATE TABLE `config_info_beta` (
`id` bigint(20) NOT NULL AUTO_INCREMENT COMMENT 'id',
`data_id` varchar(255) NOT NULL COMMENT 'data_id',
`group_id` varchar(128) NOT NULL COMMENT 'group_id',
`app_name` varchar(128) DEFAULT NULL COMMENT 'app_name',
`content` longtext NOT NULL COMMENT 'content',
`beta_ips` varchar(1024) DEFAULT NULL COMMENT 'betaIps',
`md5` varchar(32) DEFAULT NULL COMMENT 'md5',
`gmt_create` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '创建时间',
`gmt_modified` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '修改时间',
`src_user` text COMMENT 'source user',
`src_ip` varchar(50) DEFAULT NULL COMMENT 'source ip',
`tenant_id` varchar(128) DEFAULT '' COMMENT '租户字段',
`encrypted_data_key` varchar(1024) NOT NULL DEFAULT '' COMMENT '密钥',
PRIMARY KEY (`id`),
UNIQUE KEY `uk_configinfobeta_datagrouptenant` (`data_id`,`group_id`,`tenant_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_bin COMMENT='config_info_beta';
/******************************************/
/* 表名称 = config_info_tag */
/******************************************/
CREATE TABLE `config_info_tag` (
`id` bigint(20) NOT NULL AUTO_INCREMENT COMMENT 'id',
`data_id` varchar(255) NOT NULL COMMENT 'data_id',
`group_id` varchar(128) NOT NULL COMMENT 'group_id',
`tenant_id` varchar(128) DEFAULT '' COMMENT 'tenant_id',
`tag_id` varchar(128) NOT NULL COMMENT 'tag_id',
`app_name` varchar(128) DEFAULT NULL COMMENT 'app_name',
`content` longtext NOT NULL COMMENT 'content',
`md5` varchar(32) DEFAULT NULL COMMENT 'md5',
`gmt_create` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '创建时间',
`gmt_modified` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '修改时间',
`src_user` text COMMENT 'source user',
`src_ip` varchar(50) DEFAULT NULL COMMENT 'source ip',
PRIMARY KEY (`id`),
UNIQUE KEY `uk_configinfotag_datagrouptenanttag` (`data_id`,`group_id`,`tenant_id`,`tag_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_bin COMMENT='config_info_tag';
/******************************************/
/* 表名称 = config_tags_relation */
/******************************************/
CREATE TABLE `config_tags_relation` (
`id` bigint(20) NOT NULL COMMENT 'id',
`tag_name` varchar(128) NOT NULL COMMENT 'tag_name',
`tag_type` varchar(64) DEFAULT NULL COMMENT 'tag_type',
`data_id` varchar(255) NOT NULL COMMENT 'data_id',
`group_id` varchar(128) NOT NULL COMMENT 'group_id',
`tenant_id` varchar(128) DEFAULT '' COMMENT 'tenant_id',
`nid` bigint(20) NOT NULL AUTO_INCREMENT COMMENT 'nid, 自增长标识',
PRIMARY KEY (`nid`),
UNIQUE KEY `uk_configtagrelation_configidtag` (`id`,`tag_name`,`tag_type`),
KEY `idx_tenant_id` (`tenant_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_bin COMMENT='config_tag_relation';
/******************************************/
/* 表名称 = group_capacity */
/******************************************/
CREATE TABLE `group_capacity` (
`id` bigint(20) unsigned NOT NULL AUTO_INCREMENT COMMENT '主键ID',
`group_id` varchar(128) NOT NULL DEFAULT '' COMMENT 'Group ID,空字符表示整个集群',
`quota` int(10) unsigned NOT NULL DEFAULT '0' COMMENT '配额,0表示使用默认值',
`usage` int(10) unsigned NOT NULL DEFAULT '0' COMMENT '使用量',
`max_size` int(10) unsigned NOT NULL DEFAULT '0' COMMENT '单个配置大小上限,单位为字节,0表示使用默认值',
`max_aggr_count` int(10) unsigned NOT NULL DEFAULT '0' COMMENT '聚合子配置最大个数,,0表示使用默认值',
`max_aggr_size` int(10) unsigned NOT NULL DEFAULT '0' COMMENT '单个聚合数据的子配置大小上限,单位为字节,0表示使用默认值',
`max_history_count` int(10) unsigned NOT NULL DEFAULT '0' COMMENT '最大变更历史数量',
`gmt_create` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '创建时间',
`gmt_modified` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '修改时间',
PRIMARY KEY (`id`),
UNIQUE KEY `uk_group_id` (`group_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_bin COMMENT='集群、各Group容量信息表';
/******************************************/
/* 表名称 = his_config_info */
/******************************************/
CREATE TABLE `his_config_info` (
`id` bigint(20) unsigned NOT NULL COMMENT 'id',
`nid` bigint(20) unsigned NOT NULL AUTO_INCREMENT COMMENT 'nid, 自增标识',
`data_id` varchar(255) NOT NULL COMMENT 'data_id',
`group_id` varchar(128) NOT NULL COMMENT 'group_id',
`app_name` varchar(128) DEFAULT NULL COMMENT 'app_name',
`content` longtext NOT NULL COMMENT 'content',
`md5` varchar(32) DEFAULT NULL COMMENT 'md5',
`gmt_create` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '创建时间',
`gmt_modified` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '修改时间',
`src_user` text COMMENT 'source user',
`src_ip` varchar(50) DEFAULT NULL COMMENT 'source ip',
`op_type` char(10) DEFAULT NULL COMMENT 'operation type',
`tenant_id` varchar(128) DEFAULT '' COMMENT '租户字段',
`encrypted_data_key` varchar(1024) NOT NULL DEFAULT '' COMMENT '密钥',
PRIMARY KEY (`nid`),
KEY `idx_gmt_create` (`gmt_create`),
KEY `idx_gmt_modified` (`gmt_modified`),
KEY `idx_did` (`data_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_bin COMMENT='多租户改造';
/******************************************/
/* 表名称 = tenant_capacity */
/******************************************/
CREATE TABLE `tenant_capacity` (
`id` bigint(20) unsigned NOT NULL AUTO_INCREMENT COMMENT '主键ID',
`tenant_id` varchar(128) NOT NULL DEFAULT '' COMMENT 'Tenant ID',
`quota` int(10) unsigned NOT NULL DEFAULT '0' COMMENT '配额,0表示使用默认值',
`usage` int(10) unsigned NOT NULL DEFAULT '0' COMMENT '使用量',
`max_size` int(10) unsigned NOT NULL DEFAULT '0' COMMENT '单个配置大小上限,单位为字节,0表示使用默认值',
`max_aggr_count` int(10) unsigned NOT NULL DEFAULT '0' COMMENT '聚合子配置最大个数',
`max_aggr_size` int(10) unsigned NOT NULL DEFAULT '0' COMMENT '单个聚合数据的子配置大小上限,单位为字节,0表示使用默认值',
`max_history_count` int(10) unsigned NOT NULL DEFAULT '0' COMMENT '最大变更历史数量',
`gmt_create` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '创建时间',
`gmt_modified` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '修改时间',
PRIMARY KEY (`id`),
UNIQUE KEY `uk_tenant_id` (`tenant_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_bin COMMENT='租户容量信息表';
CREATE TABLE `tenant_info` (
`id` bigint(20) NOT NULL AUTO_INCREMENT COMMENT 'id',
`kp` varchar(128) NOT NULL COMMENT 'kp',
`tenant_id` varchar(128) default '' COMMENT 'tenant_id',
`tenant_name` varchar(128) default '' COMMENT 'tenant_name',
`tenant_desc` varchar(256) DEFAULT NULL COMMENT 'tenant_desc',
`create_source` varchar(32) DEFAULT NULL COMMENT 'create_source',
`gmt_create` bigint(20) NOT NULL COMMENT '创建时间',
`gmt_modified` bigint(20) NOT NULL COMMENT '修改时间',
PRIMARY KEY (`id`),
UNIQUE KEY `uk_tenant_info_kptenantid` (`kp`,`tenant_id`),
KEY `idx_tenant_id` (`tenant_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_bin COMMENT='tenant_info';
CREATE TABLE `users` (
`username` varchar(50) NOT NULL PRIMARY KEY COMMENT 'username',
`password` varchar(500) NOT NULL COMMENT 'password',
`enabled` boolean NOT NULL COMMENT 'enabled'
);
CREATE TABLE `roles` (
`username` varchar(50) NOT NULL COMMENT 'username',
`role` varchar(50) NOT NULL COMMENT 'role',
UNIQUE INDEX `idx_user_role` (`username` ASC, `role` ASC) USING BTREE
);
CREATE TABLE `permissions` (
`role` varchar(50) NOT NULL COMMENT 'role',
`resource` varchar(128) NOT NULL COMMENT 'resource',
`action` varchar(8) NOT NULL COMMENT 'action',
UNIQUE INDEX `uk_role_permission` (`role`,`resource`,`action`) USING BTREE
);
INSERT INTO users (username, password, enabled) VALUES ('nacos', '$2a$10$EuWPZHzz32dJN7jexM34MOeYirDdFAZm2kuWj7VEOJhhZkDrxfvUu', TRUE);
INSERT INTO roles (username, role) VALUES ('nacos', 'ROLE_ADMIN');
Deploying MySQL 8 in a container
- Deploy a standalone MySQL 8 instance with docker compose
version: "3"
services:
  mysql8:
    image: mysql:8.4.1
    container_name: local-single-mysql8
    hostname: www.neojayway.cn
    restart: always
    environment:
      - MYSQL_ROOT_PASSWORD=95278520
      - "TZ=Asia/Shanghai"
    ports:
      - 3306:3306
    volumes:
      - local-single-mysql8-data-vol:/var/lib/mysql
      - local-single-mysql8-cfg-vol:/etc/mysql/conf.d
      - local-single-mysql8-log-vol:/var/log/mysql
      - /etc/localtime:/etc/localtime:ro
    healthcheck:
      # Health check via the mysqladmin command
      test: ["CMD", "mysqladmin", "ping", "-h", "localhost", "-u", "root", "-p${MYSQL_ROOT_PASSWORD}"]
      # Check every 10 seconds
      interval: 10s
      # 5-second timeout
      timeout: 5s
      # Retry at most 3 times
      retries: 3
      # No health checks during the first 30 seconds after start
      start_period: 30s
volumes:
  local-single-mysql8-data-vol:
    driver: local
    name: local-single-mysql8-data-vol
  local-single-mysql8-cfg-vol:
    driver: local
    name: local-single-mysql8-cfg-vol
  local-single-mysql8-log-vol:
    driver: local
    name: local-single-mysql8-log-vol
Deploying Elasticsearch in a container
- Deploy a standalone Elasticsearch instance with docker
#Raise the virtual memory map limit required by Elasticsearch
echo "vm.max_map_count=262144" >> /etc/sysctl.conf
sysctl -p
#Create empty host directories for the container to mount (data, plugins, config and log directories)
mkdir -p $HOME/ho_product_test/elasticsearch/data
mkdir -p $HOME/ho_product_test/elasticsearch/plugins
mkdir -p $HOME/ho_product_test/elasticsearch/config
mkdir -p $HOME/ho_product_test/elasticsearch/logs
#Start a temporary container first, so that its data, plugins, config and log directories can be copied out to the host
docker run -d \
--name es-ho36 \
-p 9200:9200 -p 9300:9300 \
-e "discovery.type=single-node" \
-e "cluster.name=es.ho36" \
elasticsearch:7.6.1
docker cp es-ho36:/usr/share/elasticsearch/config $HOME/ho_product_test/elasticsearch
docker cp es-ho36:/usr/share/elasticsearch/data $HOME/ho_product_test/elasticsearch
docker cp es-ho36:/usr/share/elasticsearch/plugins $HOME/ho_product_test/elasticsearch
docker cp es-ho36:/usr/share/elasticsearch/logs $HOME/ho_product_test/elasticsearch
#Remove the temporary container
docker rm -f es-ho36
#Enter the plugins directory
cd $HOME/ho_product_test/elasticsearch/plugins
#Create a directory and enter it
mkdir analysis-ik && cd analysis-ik
#Download the IK analysis plugin
wget https://github.com/medcl/elasticsearch-analysis-ik/releases/download/v7.6.1/elasticsearch-analysis-ik-7.6.1.zip
#Unpack the plugin
unzip elasticsearch-analysis-ik-7.6.1.zip
#Remove the archive
rm elasticsearch-analysis-ik-7.6.1.zip
#Inside the container, ES runs as user elasticsearch (uid:gid 1000:0); grant group 0 sufficient permissions
chmod -R g+rwx $HOME/ho_product_test/elasticsearch
chgrp -R 0 $HOME/ho_product_test/elasticsearch
#Run the ES container for real (--publish-all would map all exposed ES ports to random host ports)
docker run \
-d \
--name es-ho36 \
-p 9200:9200 -p 9300:9300 \
-e "discovery.type=single-node" \
-e "cluster.name=es.ho36" \
-e "bootstrap.memory_lock=true" --ulimit memlock=-1:-1 \
--ulimit nofile=65535:65535 \
-v $HOME/ho_product_test/elasticsearch/data:/usr/share/elasticsearch/data \
-v $HOME/ho_product_test/elasticsearch/plugins:/usr/share/elasticsearch/plugins \
-v $HOME/ho_product_test/elasticsearch/config:/usr/share/elasticsearch/config \
-v $HOME/ho_product_test/elasticsearch/logs:/usr/share/elasticsearch/logs \
--restart=always elasticsearch:7.6.1
- Deploy a standalone Elasticsearch instance with docker compose
version: "3"
services:
  local-es-1:
    image: elasticsearch:7.17.22
    container_name: local-single-es
    hostname: www.neojayway.cn
    environment:
      - node.name=www.neojayway.cn
      - cluster.name=docker-cluster
      - discovery.seed_hosts=www.neojayway.cn
      - cluster.initial_master_nodes=www.neojayway.cn
      - bootstrap.memory_lock=true
      - xpack.security.enabled=false
      - "ES_JAVA_OPTS=-Xms4096m -Xmx4096m"
      - "TZ=Asia/Shanghai"
    ulimits:
      memlock:
        soft: -1
        hard: -1
      nofile:
        soft: 65536
        hard: 65536
    ports:
      - 19200:9200
    logging:
      driver: "json-file"
      options:
        max-size: "50m"
    volumes:
      - local-single-es-cfg-vol:/usr/share/elasticsearch/config
      - local-single-es-plugin-vol:/usr/share/elasticsearch/plugins
      - local-single-es-data-vol:/usr/share/elasticsearch/data
      - local-single-es-log-vol:/usr/share/elasticsearch/log
    networks:
      - es_kibana_net
    healthcheck:
      test: ["CMD-SHELL", "curl -I http://www.neojayway.cn:9200 || exit 1"]
      interval: 10s
      timeout: 10s
      retries: 5
  local-single-kibana:
    container_name: local-single-kibana
    hostname: www.neojayway.cn
    image: kibana:7.17.22
    environment:
      ELASTICSEARCH_HOSTS: '["http://local-es-1:9200"]'
      TZ: "Asia/Shanghai"
    volumes:
      - local-single-kibana-cfg-vol:/usr/share/kibana/config
    ports:
      - 5601:5601
    networks:
      - es_kibana_net
    healthcheck:
      test: ["CMD-SHELL", "curl -I http://www.neojayway.cn:5601 || exit 1"]
      interval: 10s
      timeout: 10s
      retries: 5
volumes:
  local-single-es-data-vol:
    driver: local
    name: local-single-es-data-vol
  local-single-es-cfg-vol:
    driver: local
    name: local-single-es-cfg-vol
  local-single-es-log-vol:
    driver: local
    name: local-single-es-log-vol
  local-single-es-plugin-vol:
    driver: local
    name: local-single-es-plugin-vol
  local-single-kibana-cfg-vol:
    driver: local
    name: local-single-kibana-cfg-vol
# Attach to an external network
networks:
  es_kibana_net:
    external: true
cd /usr/share/elasticsearch/bin
#Install the analysis plugin online; the IK version must match the ES version
./elasticsearch-plugin install https://release.infinilabs.com/analysis-ik/stable/elasticsearch-analysis-ik-7.17.22.zip
- Deploy a standalone Elasticsearch instance with docker compose, with SSL authentication enabled
Starting from the unauthenticated setup above, run the following initialization steps first, then recreate the container
#Inside the elasticsearch container, run the following steps to generate certificates and set initial passwords for the built-in users
#Create a local certificate authority (CA)
bin/elasticsearch-certutil ca --out config/certs/elastic-stack-ca.p12 --pass Jayway@9527
#Create the elastic-certificates.p12 certificate
bin/elasticsearch-certutil cert --silent --ca config/certs/elastic-stack-ca.p12 --out config/certs/elastic-certificates.p12 --ca-pass Jayway@9527 --pass Jayway@9527
#Create a password-protected keystore file
bin/elasticsearch-keystore create -p
#Add the p12 certificate passwords to the keystore file
bin/elasticsearch-keystore add xpack.security.transport.ssl.keystore.secure_password
bin/elasticsearch-keystore add xpack.security.transport.ssl.truststore.secure_password
#Verify that elasticsearch.keystore is protected: if its contents are listed only after the correct password is entered, the file is encrypted
bin/elasticsearch-keystore list
#Auto-generate initial passwords for the built-in users (option one)
bin/elasticsearch-setup-passwords auto
#Interactively set initial passwords for the built-in users (option two)
bin/elasticsearch-setup-passwords interactive
version: "3"
services:
local-es-1:
image: elasticsearch:7.17.22
container_name: local-single-es
hostname: www.neojayway.cn
environment:
- "TZ=Asia/Shanghai"
- discovery.type=single-node
- node.name=www.neojayway.cn
- cluster.name=docker-cluster
- network.host=0.0.0.0
- discovery.seed_hosts=www.neojayway.cn
- bootstrap.memory_lock=true
- xpack.security.enabled=true
- xpack.security.audit.enabled=true
- xpack.security.transport.ssl.enabled=true
- xpack.security.transport.ssl.verification_mode=certificate
- xpack.security.transport.ssl.client_authentication=required
- xpack.security.transport.ssl.keystore.path=certs/elastic-certificates.p12
- xpack.security.transport.ssl.truststore.path=certs/elastic-certificates.p12
- xpack.security.transport.ssl.keystore.type=PKCS12
- xpack.security.transport.ssl.truststore.type=PKCS12
- xpack.security.http.ssl.enabled=false
#- xpack.security.http.ssl.verification_mode=certificate
#- xpack.security.http.ssl.client_authentication=optional
#- xpack.security.http.ssl.keystore.path=certs/elastic-certificates.p12
#- xpack.security.http.ssl.truststore.path=certs/elastic-certificates.p12
#- xpack.security.http.ssl.keystore.type=PKCS12
#- xpack.security.http.ssl.truststore.type=PKCS12
#证书的keystore密码不需要配置了,已经被写进elasticsearch.keystore文件中加密存储了
#- xpack.security.transport.ssl.keystore.secure_password=Jayway@9527
#- xpack.security.transport.ssl.truststore.secure_password=Jayway@9527
#- xpack.security.http.ssl.keystore.secure_password=Jayway@9527
#- xpack.security.http.ssl.truststore.secure_password=Jayway@9527
- KEYSTORE_PASSWORD=Jayway@9527
- "ES_JAVA_OPTS=-Xms4096m -Xmx4096m"
ulimits:
memlock:
soft: -1
hard: -1
nofile:
soft: 65536
hard: 65536
ports:
- 19200:9200
logging:
driver: "json-file"
options:
max-size: "50m"
volumes:
- local-single-es-cfg-vol:/usr/share/elasticsearch/config
- local-single-es-plugin-vol:/usr/share/elasticsearch/plugins
- local-single-es-data-vol:/usr/share/elasticsearch/data
- local-single-es-log-vol:/usr/share/elasticsearch/log
networks:
- es_kibana_net
healthcheck:
test: ["CMD-SHELL", "curl -I http://www.neojayway.cn:9200 || exit 1"]
interval: 10s
timeout: 10s
retries: 5
local-single-kibana:
container_name: local-single-kibana
hostname: www.neojayway.cn
image: kibana:7.17.22
environment:
ELASTICSEARCH_HOSTS: '["http://local-es-1:9200"]'
TZ: 'Asia/Shanghai'
ELASTICSEARCH_USERNAME: kibana_system
ELASTICSEARCH_PASSWORD: Jayway@9527
XPACK_ENCRYPTEDSAVEDOBJECTS_ENCRYPTIONKEY: min-32-byte-long-strong-encryption-key
XPACK_REPORTING_ENCRYPTIONKEY: min-32-byte-long-strong-reporting-encryption-key
XPACK_SECURITY_ENCRYPTIONKEY: min-32-byte-long-strong-security-encryption-key
XPACK_REPORTING_ROLES_ENABLED: false
volumes:
- local-single-kibana-cfg-vol:/usr/share/kibana/config
ports:
- 5601:5601
networks:
- es_kibana_net
healthcheck:
test: ["CMD-SHELL", "curl -I http://www.neojayway.cn:5601 || exit 1"]
interval: 10s
timeout: 10s
retries: 5
volumes:
local-single-es-data-vol:
driver: local
name: local-single-es-data-vol
local-single-es-cfg-vol:
driver: local
name: local-single-es-cfg-vol
local-single-es-log-vol:
driver: local
name: local-single-es-log-vol
local-single-es-plugin-vol:
driver: local
name: local-single-es-plugin-vol
local-single-kibana-cfg-vol:
driver: local
name: local-single-kibana-cfg-vol
# Attach to a pre-created external network
networks:
es_kibana_net:
external: true
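Because `es_kibana_net` is declared `external: true`, docker compose will not create it and will refuse to start the stack if it is missing. A minimal sketch that makes sure the network exists first (the helper name and `DRY_RUN` flag are illustrative, not part of docker):

```shell
# Ensure an external docker network exists before `docker compose up`.
# With DRY_RUN set, the command is only printed instead of executed.
ensure_network() {
  local net="$1"
  if [ -n "${DRY_RUN:-}" ]; then
    echo "docker network create $net"
  else
    docker network inspect "$net" >/dev/null 2>&1 || docker network create "$net"
  fi
}

# Print what would be run:
DRY_RUN=1 ensure_network es_kibana_net
```

Without `DRY_RUN`, the helper creates the network only when `docker network inspect` reports it missing, so it is safe to run repeatedly.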
Deploying Redis in a container
- Deploy a single-instance Redis with docker compose
version: "3"
services:
redis:
image: docker.m.daocloud.io/redis:7.2.5
container_name: local-single-redis
hostname: www.neojayway.cn
restart: always
environment:
- "TZ=Asia/Shanghai"
ports:
- 16379:6379
volumes:
- local-single-redis-data-vol:/data
- local-single-redis-cfg-vol:/usr/local/etc/redis
- local-single-redis-log-vol:/logs
# Start from a config file
command: redis-server /usr/local/etc/redis/redis.conf --requirepass 95278520
healthcheck:
test: ["CMD", "redis-cli", "-h", "localhost", "-a", "95278520", "ping"]
interval: 10s # check every 10 seconds
timeout: 5s # 5-second timeout
retries: 3 # retry up to 3 times
volumes:
local-single-redis-data-vol:
driver: local
name: local-single-redis-data-vol
local-single-redis-cfg-vol:
driver: local
name: local-single-redis-cfg-vol
local-single-redis-log-vol:
driver: local
name: local-single-redis-log-vol
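The compose file above mounts a config volume at /usr/local/etc/redis and starts redis-server from redis.conf in that path, so a config file must exist inside the volume before the first start. A minimal illustrative redis.conf (values are examples only; requirepass is already passed on the command line above):

```conf
# minimal redis.conf for the local-single-redis-cfg-vol volume
bind 0.0.0.0
port 6379
# persist data into the mounted /data volume
dir /data
appendonly yes
# write logs into the mounted /logs volume
logfile /logs/redis.log
```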
Deploying TDengine in a container
- Deploy a single-instance TDengine with docker compose
version: "3"
services:
tdengine-1:
image: tdengine/tdengine:3.3.1.0
container_name: local-single-tdengine
hostname: www.neojayway.cn
#environment:
#TAOS_FQDN: "www.neojayway.cn"
#TAOS_FIRST_EP: "www.neojayway.cn"
ports:
# 6041 is the REST service port provided by taosAdapter
- 6041:6041
# The server itself uses only TCP port 6030
- 6030:6030
# 6043-6049 are used by taosAdapter for third-party application access; open them as needed
# 6060 is the port mapped for taos-explorer, TDengine's web UI
- 6043-6060:6043-6060
- 6043-6060:6043-6060/udp
volumes:
# /var/lib/taos: TDengine's default data directory; the location can be changed via the config file. Replace the volume with your own data directory if needed
- local-single-tdengin-data-vol:/var/lib/taos
# /var/log/taos: TDengine's default log directory; the location can be changed via the config file. Replace the volume with your own log directory if needed
- local-single-tdengin-log-vol:/var/log/taos
# Config file mapping
- local-single-tdengin-cfg-vol:/etc/taos
- local-single-tdengin-corefile-vol:/corefile
healthcheck:
test: ["CMD-SHELL", "curl -u root:taosdata -d 'show databases' 127.0.0.1:6041/rest/sql"]
interval: 30s
timeout: 10s
retries: 3
start_period: 40s
volumes:
local-single-tdengin-data-vol:
driver: local
name: local-single-tdengin-data-vol
local-single-tdengin-log-vol:
driver: local
name: local-single-tdengin-log-vol
local-single-tdengin-cfg-vol:
driver: local
name: local-single-tdengin-cfg-vol
local-single-tdengin-corefile-vol:
driver: local
name: local-single-tdengin-corefile-vol
Note: the Monitor service (taos-explorer) uses port 6060; TDengine's default username and password are root/taosdata.
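The healthcheck above queries taosAdapter's REST endpoint; the same endpoint can be exercised by hand. A small sketch that assembles the curl invocation (the helper name is illustrative; it prints the command rather than running it, so drop the `echo` when pointing it at a live server):

```shell
# Build the curl command for a SQL statement against taosAdapter's
# REST interface (port 6041, default credentials root/taosdata).
tdengine_rest_query() {
  local host="$1" sql="$2"
  echo curl -s -u root:taosdata -d "$sql" "http://$host:6041/rest/sql"
}

tdengine_rest_query 127.0.0.1 "show databases"
# prints: curl -s -u root:taosdata -d show databases http://127.0.0.1:6041/rest/sql
```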
Deploying Neo4j in a container
#Set SELinux to permissive mode (effectively disabling it)
sudo setenforce 0
sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
#Create empty directories on the host for the container mounts (data, plugins, config, logs, etc.)
mkdir -p $HOME/ho_product_test/neo4j
#Run a temporary neo4j container to copy out its default files (SELinux must be disabled)
docker run \
-d \
--restart always \
--name neo4j-ho36 \
--publish=7474:7474 --publish=7687:7687 \
--env NEO4J_AUTH=neo4j/Dms@8023 \
neo4j:3.5.28
docker cp neo4j-ho36:/data $HOME/ho_product_test/neo4j
docker cp neo4j-ho36:/var/lib/neo4j/plugins $HOME/ho_product_test/neo4j
docker cp neo4j-ho36:/var/lib/neo4j/conf $HOME/ho_product_test/neo4j
docker cp neo4j-ho36:/logs $HOME/ho_product_test/neo4j
docker cp neo4j-ho36:/var/lib/neo4j/import $HOME/ho_product_test/neo4j
#Loosen permissions for simplicity
chmod -R 777 $HOME/ho_product_test/neo4j
#Remove the temporary container
docker rm -f neo4j-ho36
#Re-run the neo4j container with the copied directories mounted (SELinux must be disabled)
docker run \
-d \
--restart always \
--name neo4j-ho36 \
--publish=7474:7474 --publish=7687:7687 \
--volume=$HOME/ho_product_test/neo4j/data:/var/lib/neo4j/data \
--volume=$HOME/ho_product_test/neo4j/logs:/var/lib/neo4j/logs \
--volume=$HOME/ho_product_test/neo4j/conf:/var/lib/neo4j/conf \
--volume=$HOME/ho_product_test/neo4j/import:/var/lib/neo4j/import \
--volume=$HOME/ho_product_test/neo4j/plugins:/var/lib/neo4j/plugins \
--env NEO4J_AUTH=neo4j/123456 \
neo4j:3.5.28
- Deploy a single-instance Neo4j with docker compose
version: "3"
services:
neo4j:
image: neo4j:5.21.2
container_name: local-single-neo4j5
hostname: www.neojayway.cn
restart: always
environment:
- NEO4J_AUTH=neo4j/95278520
- NEO4J_server_memory_pagecache_size=1G
- NEO4J_server_memory_heap_initial__size=1G
- NEO4J_server_memory_heap_max__size=1G
- "TZ=Asia/Shanghai"
ports:
- 7474:7474
- 7687:7687
volumes:
- local-single-neo4j5-cfg-vol:/conf
- local-single-neo4j5-data-vol:/data
- local-single-neo4j5-import-vol:/import
- local-single-neo4j5-log-vol:/logs
- local-single-neo4j5-plugins-vol:/plugins
healthcheck:
test: ["CMD-SHELL", "wget --spider -S 'http://localhost:7474' 2>&1 | grep 'HTTP/' | awk '{print $$2}' | grep 200"]
interval: 30s
timeout: 10s
retries: 3
start_period: 30s
volumes:
local-single-neo4j5-data-vol:
driver: local
name: local-single-neo4j5-data-vol
local-single-neo4j5-cfg-vol:
driver: local
name: local-single-neo4j5-cfg-vol
local-single-neo4j5-log-vol:
driver: local
name: local-single-neo4j5-log-vol
local-single-neo4j5-import-vol:
driver: local
name: local-single-neo4j5-import-vol
local-single-neo4j5-plugins-vol:
driver: local
name: local-single-neo4j5-plugins-vol
Deploying MongoDB in a container
- Deploy a single-instance MongoDB with docker run
docker run \
-d \
--restart always \
--name mongo4 \
--publish=27017:27017 \
mongo:4.2.22
#Prepare directories for container volume mounts
mkdir $HOME/mongo4
touch $HOME/mongo4/mongod.log
docker cp mongo4:/etc/mongod.conf.orig $HOME/mongo4/mongod.conf.orig
#Loosen permissions for simplicity
chmod -R 777 $HOME/mongo4
#Remove the temporary container
docker rm -f mongo4
#Run the container with a custom config; the mongo image does not read a config file by default
docker run \
-d \
--restart always \
--name mongo4 \
--publish=27017:27017 \
--volume=$HOME/mongo4/mongod.conf.orig:/etc/mongod.conf.orig \
--volume=$HOME/mongo4/mongodb:/var/lib/mongodb \
--volume=$HOME/mongo4/mongod.log:/var/log/mongodb/mongod.log \
-e MONGO_INITDB_ROOT_USERNAME=root \
-e MONGO_INITDB_ROOT_PASSWORD=123456 \
mongo:4.2.22 \
--config /etc/mongod.conf.orig
- Deploy a single-instance MongoDB with docker compose
version: "3"
services:
mongodb:
image: docker.m.daocloud.io/mongo:4.4.29
container_name: local-single-mongo
hostname: www.neojayway.cn
restart: always
environment:
- TZ=Asia/Shanghai
- MONGO_INITDB_DATABASE=admin
- MONGO_INITDB_ROOT_USERNAME=jayway
- MONGO_INITDB_ROOT_PASSWORD=95278520
ports:
- 27017:27017
volumes:
- local-single-mongo-data-vol:/data/db
- local-single-mongo-log-vol:/data/logs
- local-single-mongo-cfg-vol:/data/configdb
command: mongod --auth --config /data/configdb/mongod.conf
healthcheck:
test: mongo --eval 'db.stats()' --username=jayway --password=95278520
interval: 30s
timeout: 10s
retries: 3
start_period: 40s
volumes:
local-single-mongo-data-vol:
driver: local
name: local-single-mongo-data-vol
local-single-mongo-cfg-vol:
driver: local
name: local-single-mongo-cfg-vol
local-single-mongo-log-vol:
driver: local
name: local-single-mongo-log-vol
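Since the container starts mongod with --auth, clients authenticate with the root credentials set in the environment above. A small helper sketch that assembles the connection URI (the function name and host/port are illustrative):

```shell
# Assemble a MongoDB connection URI from its parts.
mongo_uri() {
  local user="$1" pass="$2" host="$3" port="$4" authdb="$5"
  echo "mongodb://$user:$pass@$host:$port/?authSource=$authdb"
}

mongo_uri jayway 95278520 localhost 27017 admin
# prints: mongodb://jayway:95278520@localhost:27017/?authSource=admin
```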
# mongod.conf
# for documentation of all options, see:
# http://docs.mongodb.org/manual/reference/configuration-options/
# Where and how to store data.
storage:
dbPath: /data/db
journal:
enabled: true
# engine:
# mmapv1:
# wiredTiger:
# where to write logging data.
#systemLog:
# destination: file
# logAppend: true
# path: /data/logs/
# network interfaces
net:
port: 27017
bindIp: 0.0.0.0
# how the process runs
processManagement:
timeZoneInfo: /usr/share/zoneinfo
#security:
#operationProfiling:
#replication:
#sharding:
## Enterprise-Only Options:
#auditLog:
#snmp:
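The systemLog section above is left commented out; to have mongod write into the mounted /data/logs volume it could be enabled along these lines (the log file name is illustrative):

```yaml
systemLog:
  destination: file
  logAppend: true
  path: /data/logs/mongod.log
```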
Deploying Kafka in a container
- Deploy a single-instance Kafka with docker run
docker run --name kafka -p 9092:9092 -d \
-e TZ=Asia/Shanghai \
-e KAFKA_BROKER_ID=0 \
-e KAFKA_ENABLE_KRAFT=yes \
-e KAFKA_CFG_NODE_ID=0 \
-e KAFKA_CFG_PROCESS_ROLES=controller,broker \
-e KAFKA_CFG_CONTROLLER_QUORUM_VOTERS=0@localhost:9093 \
-e KAFKA_KRAFT_CLUSTER_ID=LelM2dIFQkiUFvXCEcqRWA \
-e KAFKA_CFG_CONTROLLER_LISTENER_NAMES=CONTROLLER \
-e KAFKA_CFG_INTER_BROKER_LISTENER_NAME=INTERNAL \
-e KAFKA_CFG_LISTENERS=INTERNAL://:9092,CONTROLLER://:9093,EXTERNAL://0.0.0.0:9094 \
-e KAFKA_CFG_ADVERTISED_LISTENERS=INTERNAL://192.168.4.25:9092,EXTERNAL://192.168.4.25:9094 \
-e KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP=INTERNAL:SASL_PLAINTEXT,CONTROLLER:PLAINTEXT,EXTERNAL:SASL_PLAINTEXT \
-e KAFKA_CFG_SASL_MECHANISM_INTER_BROKER_PROTOCOL=PLAIN \
-e KAFKA_CFG_SASL_ENABLED_MECHANISMS=PLAIN,SCRAM-SHA-512 \
-e KAFKA_INTER_BROKER_USER=admin \
-e KAFKA_INTER_BROKER_PASSWORD=kafka@dms@pimc1 \
-e KAFKA_AUTO_CREATE_TOPICS_ENABLE=true \
-e KAFKA_CFG_NUM_PARTITIONS=2 \
-e KAFKA_CFG_MESSAGE_MAX_BYTES=10475860 \
-e KAFKA_CFG_MAX_REQUEST_SIZE=10475860 \
-e KAFKA_CFG_LOG_RETENTION_HOURS=24 \
-e KAFKA_CFG_LOG_RETENTION_BYTES=2141192192 \
-e "KAFKA_HEAP_OPTS=-Xmx1024m -Xms1024m" \
bitnami/kafka:3.7.1
- Deploy a single-instance Kafka with docker compose
version: "2"
services:
kafka:
image: bitnami/kafka:3.7.1
container_name: local-single-kafka
ports:
- 9092:9092
- 9094:9094
volumes:
- local-single-kafka-data-vol:/bitnami
environment:
# KRaft settings
- KAFKA_CFG_NODE_ID=0
- KAFKA_CFG_PROCESS_ROLES=controller,broker
- KAFKA_CFG_CONTROLLER_QUORUM_VOTERS=0@kafka:9093
# Listeners
- KAFKA_CFG_LISTENERS=PLAINTEXT://:9092,CONTROLLER://:9093,EXTERNAL://0.0.0.0:9094
- KAFKA_CFG_ADVERTISED_LISTENERS=PLAINTEXT://kafka:9092,EXTERNAL://www.neojayway.cn:9094
- KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP=CONTROLLER:PLAINTEXT,EXTERNAL:PLAINTEXT,PLAINTEXT:PLAINTEXT
- KAFKA_CFG_CONTROLLER_LISTENER_NAMES=CONTROLLER
- KAFKA_CFG_INTER_BROKER_LISTENER_NAME=PLAINTEXT
- KAFKA_HEAP_OPTS=-Xmx1024m -Xms1024m
# broker.id must be unique and must match KAFKA_CFG_NODE_ID
- KAFKA_BROKER_ID=0
healthcheck:
test: ["CMD", "bash", "-c", "echo > /dev/tcp/localhost/9092"]
interval: 30s
timeout: 10s
retries: 5
start_period: 40s
kafka-ui:
container_name: local-kafka-ui
image: provectuslabs/kafka-ui:v0.7.2
ports:
- 18080:8080
depends_on:
- kafka
environment:
KAFKA_CLUSTERS_0_NAME: local
KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS: kafka:9092
DYNAMIC_CONFIG_ENABLED: 'true'
volumes:
local-single-kafka-data-vol:
driver: local
name: local-single-kafka-data-vol
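The healthcheck above relies on bash's /dev/tcp pseudo-device to verify that the broker port accepts TCP connections. The same probe can be run by hand (the helper name is illustrative):

```shell
# Return success if a TCP connection to host:port can be opened.
# This uses bash's /dev/tcp pseudo-device, the same trick as the
# compose healthcheck.
port_open() { (echo > "/dev/tcp/$1/$2") 2>/dev/null; }

if port_open localhost 9092; then
  echo "kafka port reachable"
else
  echo "kafka port not reachable"
fi
```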
- Deploy a single-instance Kafka with SASL authentication via docker compose
version: "2"
services:
kafka:
image: bitnami/kafka:3.7.1
container_name: local-single-kafka
ports:
- 9092:9092
- 9094:9094
volumes:
- local-single-kafka-data-vol:/bitnami
environment:
- TZ=Asia/Shanghai
# broker.id must be unique and must match KAFKA_CFG_NODE_ID
- KAFKA_BROKER_ID=0
# KRaft settings
- KAFKA_ENABLE_KRAFT=yes
- KAFKA_CFG_NODE_ID=0
- KAFKA_CFG_PROCESS_ROLES=controller,broker
- KAFKA_CFG_CONTROLLER_QUORUM_VOTERS=0@kafka:9093
- KAFKA_KRAFT_CLUSTER_ID=LelM2dIFQkiUFvXCEcqRWA
# Listeners
- KAFKA_CFG_CONTROLLER_LISTENER_NAMES=CONTROLLER
- KAFKA_CFG_INTER_BROKER_LISTENER_NAME=INTERNAL
- KAFKA_CFG_LISTENERS=INTERNAL://:9092,CONTROLLER://:9093,EXTERNAL://0.0.0.0:9094
- KAFKA_CFG_ADVERTISED_LISTENERS=INTERNAL://www.abc.cn:9092,EXTERNAL://www.abc.cn:9094
- KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP=INTERNAL:SASL_PLAINTEXT,CONTROLLER:PLAINTEXT,EXTERNAL:SASL_PLAINTEXT
- KAFKA_CFG_SASL_MECHANISM_INTER_BROKER_PROTOCOL=PLAIN
- KAFKA_CFG_SASL_ENABLED_MECHANISMS=PLAIN,SCRAM-SHA-512
- KAFKA_INTER_BROKER_USER=admin
- KAFKA_INTER_BROKER_PASSWORD=Jayway@9527
- KAFKA_AUTO_CREATE_TOPICS_ENABLE=true
- KAFKA_CFG_NUM_PARTITIONS=2
- KAFKA_CFG_MESSAGE_MAX_BYTES=10475860
- KAFKA_CFG_MAX_REQUEST_SIZE=10475860
- KAFKA_CFG_LOG_RETENTION_HOURS=24
- KAFKA_CFG_LOG_RETENTION_BYTES=2141192192
- KAFKA_HEAP_OPTS=-Xmx1024m -Xms1024m
healthcheck:
test: ["CMD", "bash", "-c", "echo > /dev/tcp/localhost/9092"]
interval: 30s
timeout: 10s
retries: 5
start_period: 40s
kafka-ui:
container_name: local-kafka-ui
image: provectuslabs/kafka-ui:v0.7.2
ports:
- 18080:8080
depends_on:
- kafka
environment:
- KAFKA_CLUSTERS_0_NAME=local
- KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS=kafka:9092
- KAFKA_CLUSTERS_0_PROPERTIES_SECURITY_PROTOCOL=SASL_PLAINTEXT
- KAFKA_CLUSTERS_0_PROPERTIES_SASL_MECHANISM=PLAIN
- KAFKA_CLUSTERS_0_PROPERTIES_SASL_JAAS_CONFIG=org.apache.kafka.common.security.plain.PlainLoginModule required username="admin" password="Jayway@9527";
- DYNAMIC_CONFIG_ENABLED=true
volumes:
local-single-kafka-data-vol:
driver: local
name: local-single-kafka-data-vol
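External clients connecting through port 9094 need the same SASL settings that kafka-ui is given above. An illustrative client.properties for the Kafka console tools (the file name is conventional; the credentials mirror the compose file):

```properties
security.protocol=SASL_PLAINTEXT
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="admin" password="Jayway@9527";
```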
Deploying Nginx in a container
- Deploy a single-instance Nginx with docker compose
version: "3"
services:
local-single-nginx:
image: nginx:1.27.0
container_name: local-single-nginx
restart: always
ports:
- 8080:80
environment:
- NGINX_HOST=www.neojayway.cn
- NGINX_PORT=80
volumes:
- local-single-nginx-data-vol:/usr/share/nginx/html
- local-single-nginx-cfg-vol:/etc/nginx
- local-single-nginx-log-vol:/var/log/nginx
volumes:
local-single-nginx-data-vol:
driver: local
name: local-single-nginx-data-vol
local-single-nginx-cfg-vol:
driver: local
name: local-single-nginx-cfg-vol
local-single-nginx-log-vol:
driver: local
name: local-single-nginx-log-vol
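On first start, Docker copies the image's /etc/nginx contents into the empty named config volume, after which the files can be edited there. An illustrative server block matching the compose file's document root (values are examples):

```nginx
# e.g. /etc/nginx/conf.d/default.conf inside the container
server {
    listen       80;
    server_name  www.neojayway.cn;

    location / {
        root   /usr/share/nginx/html;
        index  index.html index.htm;
    }
}
```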
Deploying MinIO in a container
- Deploy a single-instance MinIO with docker compose
version: "2"
services:
minio:
image: bitnami/minio:2024.8.3
container_name: local-single-minio
ports:
- 9000:9000
- 9001:9001
environment:
- MINIO_ROOT_USER=root
- MINIO_ROOT_PASSWORD=95278520
volumes:
- local-single-minio-data-vol:/bitnami/minio/data
volumes:
local-single-minio-data-vol:
driver: local
name: local-single-minio-data-vol
Deploying RocketMQ in a container
Overview
For containerized RocketMQ deployment, refer to the official rocketmq-docker (git repository).
It provides a variety of deployment scripts, e.g. single-instance docker, docker-compose cluster, and Kubernetes cluster deployments.
Deployment
- Clone the rocketmq-docker (git repository)
git clone https://github.com/apache/rocketmq-docker.git
Run the stage.sh script
#Specify the RocketMQ image tag, e.g. 4.9.1
sh stage.sh RMQ-VERSION
- Single-node deployment
cd stages/4.9.1
./play-docker.sh alpine
#!/bin/bash
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
start_namesrv_broker()
{
TAG_SUFFIX=$1
# Start nameserver
docker run -d -v `pwd`/data/namesrv/logs:/home/rocketmq/logs --name rmqnamesrv -p 9876:9876 apache/rocketmq:4.9.1${TAG_SUFFIX} sh mqnamesrv
# Start Broker
docker run -d -v `pwd`/data/broker/logs:/home/rocketmq/logs -v `pwd`/data/broker/store:/home/rocketmq/store --name rmqbroker --link rmqnamesrv:namesrv -e "NAMESRV_ADDR=namesrv:9876" -p 10909:10909 -p 10911:10911 -p 10912:10912 apache/rocketmq:4.9.1${TAG_SUFFIX} sh mqbroker
}
if [ $# -lt 1 ]; then
echo -e "Usage: sh $0 BaseImage"
exit -1
fi
export BASE_IMAGE=$1
echo "Play RocketMQ docker image of tag 4.9.1-${BASE_IMAGE}"
RMQ_CONTAINER=$(docker ps -a|awk '/rmq/ {print $1}')
if [[ -n "$RMQ_CONTAINER" ]]; then
echo "Removing RocketMQ Container..."
docker rm -fv $RMQ_CONTAINER
# Wait till the existing containers are removed
sleep 5
fi
prepare_dir()
{
dirs=("data/namesrv/logs" "data/broker/logs" "data/broker/store")
for dir in ${dirs[@]}
do
if [ ! -d "`pwd`/${dir}" ]; then
mkdir -p "`pwd`/${dir}"
chmod a+rw "`pwd`/${dir}"
fi
done
}
prepare_dir
echo "Starting RocketMQ nodes..."
case "${BASE_IMAGE}" in
alpine)
start_namesrv_broker -alpine
;;
centos)
start_namesrv_broker
;;
*)
echo "${BASE_IMAGE} is not supported, supported base images: centos, alpine"
exit -1
;;
esac
# Service unavailable when not ready
# sleep 20
# Produce messages
# sh ./play-producer.sh
- docker-compose cluster deployment
cd stages/4.9.1
./play-docker-compose.sh
#!/bin/bash
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
RMQ_CONTAINER=$(docker ps -a|awk '/rmq/ {print $1}')
if [[ -n "$RMQ_CONTAINER" ]]; then
echo "Removing RocketMQ Container..."
docker rm -fv $RMQ_CONTAINER
# Wait till the existing containers are removed
sleep 5
fi
prepare_dir()
{
dirs=("docker-compose/data/namesrv/logs" "docker-compose/data/broker/logs" "docker-compose/data/broker/store" "docker-compose/data1/broker/logs" "docker-compose/data1/broker/store")
for dir in ${dirs[@]}
do
if [ ! -d "`pwd`/${dir}" ]; then
mkdir -p "`pwd`/${dir}"
chmod a+rw "`pwd`/${dir}"
fi
done
}
prepare_dir
# Run nameserver and broker
docker-compose -f ./docker-compose/docker-compose.yml up -d
version: '2'
services:
#Service for nameserver
namesrv:
image: apache/rocketmq:4.9.1
container_name: rmqnamesrv
ports:
- 9876:9876
environment:
- JAVA_OPT_EXT=-server -Xms256m -Xmx256m -Xmn256m
volumes:
- ./data/namesrv/logs:/home/rocketmq/logs
command: sh mqnamesrv
#Service for broker
broker:
image: apache/rocketmq:4.9.1
container_name: rmqbroker
links:
- namesrv
ports:
- 10909:10909
- 10911:10911
- 10912:10912
environment:
- NAMESRV_ADDR=namesrv:9876
- JAVA_OPT_EXT=-server -Xms512m -Xmx512m -Xmn256m
volumes:
- ./data/broker/logs:/home/rocketmq/logs
- ./data/broker/store:/home/rocketmq/store
- ./data/broker/conf/broker.conf:/opt/rocketmq-4.9.1/conf/broker.conf
command: sh mqbroker -c /opt/rocketmq-4.9.1/conf/broker.conf
#Service for another broker -- broker1
broker1:
image: apache/rocketmq:4.9.1
container_name: rmqbroker-b
links:
- namesrv
ports:
- 10929:10909
- 10931:10911
- 10932:10912
environment:
- NAMESRV_ADDR=namesrv:9876
- JAVA_OPT_EXT=-server -Xms512m -Xmx512m -Xmn256m
volumes:
- ./data1/broker/logs:/home/rocketmq/logs
- ./data1/broker/store:/home/rocketmq/store
- ./data1/broker/conf/broker.conf:/opt/rocketmq-4.9.1/conf/broker.conf
command: sh mqbroker -c /opt/rocketmq-4.9.1/conf/broker.conf
#Service for rocketmq-dashboard
dashboard:
image: apacherocketmq/rocketmq-dashboard:1.0.0
container_name: rocketmq-dashboard
ports:
- 9527:8080
links:
- namesrv
depends_on:
- namesrv
environment:
- NAMESRV_ADDR=namesrv:9876
- Kubernetes-based cluster deployment
cd stages/4.9.1
./play-kubernetes.sh
RocketMQ DLedger high-availability cluster deployment
#Note: this feature requires RocketMQ 4.4.0 or later
cd stages/4.9.1
./play-docker-dledger.sh