I recently used a Dockerfile to build images. Before that, I didn't really understand the role of Docker, nor did I have a clear grasp of the concepts of containers and images, so I'm recording my notes here.
Introduction to docker
First, we need to understand virtualization technology.
Take virtual machines, for example; most people have probably used one. A virtual machine (VM) is a virtual environment created on top of a physical hardware system that acts as a complete virtual computer. It simulates its own full set of hardware, including CPU, memory, network interfaces, and storage.
Inside a virtual machine, you can run programs just like on an ordinary computer, for example downloading and installing software.
A virtual machine is one kind of virtualization technology; the container technology provided by Docker is a lightweight form of virtualization.
The essential difference between them is that a virtual machine isolates resources at the operating-system level (each VM runs its own OS), while a container isolates resources at the process level (containers share the host's kernel).
Compared with virtual machines, the advantages of containerization are obvious:
- Lower hardware cost;
- Faster environment deployment;
- Easier environment maintenance;
A container only needs to virtualize a small-scale environment, and Docker is a tool for creating containers.
That is why, in recent years, Docker, being open source and offering many advantages, has become more and more popular.
Three concepts of docker
The three core concepts of Docker technology are:
- Image
- Container
- Repository
Let's take a look at Docker's slogan: Build, Ship and Run Any App, Anywhere.
When I looked today, the official page had changed it to: Build, Share and Run Any App, Anywhere.
That is: build, share, run.
How should we understand this? Here's an analogy:
For example, suppose I want to build a house: I have to draw up blueprints, haul the lumber, and do the construction. Once it is finished, I move in.
Later, I suddenly want to move somewhere else and have the exact same house there. Following the old approach, I would have to draw blueprints, haul lumber, and build all over again.
But at this point Docker shows up and tells me: I can turn your previous house into an "image" and then make you an exact copy of it.
In the end, Docker takes my previous house as an image and copies an identical house for me; all I have to do is pack my bags and move in.
The "image" here corresponds to the Docker image, and the house built from that image corresponds to the container.
In fact, besides providing the programs, libraries, resources, configuration and other files required at container run time, a Docker image also contains some configuration parameters prepared for run time (such as environment variables). An image contains no dynamic data, and its contents do not change after it is built.
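As a side note, the configuration baked into an image, and the layers it is made of, can be inspected with standard docker subcommands; nginx below is just an arbitrary example image:
# list the layers an image is made of
docker history nginx:latest
# show the image's baked-in configuration, including environment variables
docker image inspect nginx:latest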
That is, the three concepts mentioned above:
- Image
- Container
- Repository
So what is a repository?
This is also easy to understand. Different people create different images. For example, I created an image of a Beijing courtyard house, Zhang San created an image of a yurt, and Li Si created an image of a villa.
So what should I do if I want to use someone else's image?
That is where the repository comes in.
The service responsible for managing Docker images is the Docker Registry (think of it as the warehouse keeper).
Now, through a repository, I can build other people's houses too.
The best-known public registry is the official Docker Hub, which hosts a large collection of high-quality official images.
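As a concrete illustration (the image names and the registry address here are made up), pulling a public image from Docker Hub and pushing your own image to a registry look roughly like this:
# download an official image from Docker Hub
docker pull nginx:latest
# tag a locally built image and push it to a registry (address is hypothetical)
docker tag my-app registry.example.com/my-app:1.0
docker push registry.example.com/my-app:1.0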
A few more terms:
- container: the running instance of an image, including its file resources and system resources
- image: the packaged form in which an application is stored; an image is made up of multiple layers
- layer: each instruction in a Dockerfile generates a layer, and the result of each step is stored as a layer of files
- Dockerfile: a DSL for describing how to build an image
- docker: builds an image from a Dockerfile, and runs an image to turn it into a container
- docker-compose: a Docker orchestration tool written in Python
I have used docker-compose before as well: by writing a docker-compose.yml file, docker-compose makes it very convenient and fast to orchestrate images.
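To tie these terms together, here is a minimal sketch of the usual commands; the image and container names are invented for illustration:
# build an image from the Dockerfile in the current directory
docker build -t my-app .
# run that image as a container
docker run -d --name my-app-container my-app
# bring up the services described in a docker-compose.yml
docker-compose up -d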
Dockerfile
This time, a Dockerfile is used to build the image.
Official documentation: https://docs.docker.com/engine/reference/builder/
Here is an example:
FROM node:14.16.0-stretch
ARG DING_TKON
RUN apt-get update
RUN apt install -y curl
RUN apt-get clean
COPY ./send-ding.sh /
CMD sh send-ding.sh -a ${DING_TKON} -t markdown -c pipeine运行成功 -T "title"
A quick introduction to each instruction:
FROM
The format is FROM <image> or FROM <image>:<tag>.
A custom image is always built on top of a base image specified by FROM. For example, FROM nginx means nginx is the base image for the customization, and all subsequent instructions operate on top of it.
FROM node:14.16.0-stretch
RUN
RUN <command>: equivalent to a shell command typed at the terminal. It is executed during docker build.
RUN apt-get update
CMD
["<executable or command>","","",…]: run when docker run.
CMD sh send-ding.sh -a ${DING_TKON} -t markdown -c pipeine运行成功 -T "title"
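CMD only defines the default command; it can be replaced when the container is started. A rough sketch (the image name notify-image is assumed, not from the original project):
# run the container with the default CMD baked into the image
docker run --rm notify-image
# or override the CMD with a different command at run time
docker run --rm notify-image sh /send-ding.sh -a <token> -t markdown -c "hello" -T "title"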
ENV
ENV <key>=<value>: sets an environment variable that can be used by subsequent instructions in the Dockerfile and is also available in the container at run time (docker run).
ARG
ARG <key>=<value>: sets a build-time variable that only exists while the Dockerfile is being built; it takes effect during docker build.
ARG DING_TKON
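Worth noting (general Docker behavior, not specific to this project): an ARG value exists only while docker build runs; if the value is also needed in the running container, it has to be provided at run time, or assigned to an ENV inside the Dockerfile. A sketch of the two ways of supplying DING_TKON, with the image name assumed:
# build time: supply a value for the ARG declared in the Dockerfile
docker build --build-arg DING_TKON=<token> -t notify-image .
# run time: inject the value into the container's environment instead
docker run --rm -e DING_TKON=<token> notify-image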
COPY
COPY [--chown=<user>:<group>] <source path 1> ... <destination path>: copies files or directories from the build context into the specified path inside the image.
COPY ./send-ding.sh /
To push the image to Alibaba Cloud's registry, you can refer to this article: https://segmentfault.com/a/1190000042406589
Problems encountered
Using the image in .gitlab-ci.yml:
workflow:
  rules:
    # only run on PR
    - if: $CI_PIPELINE_SOURCE == 'merge_request_event'

variables:
  FF_USE_FASTZIP: "true"

# define the stages (stages run one after another)
stages:
  - unit-test
  - notify
  # - done

# the job that runs automatically
angular-test:
  # the frontend is built with docker
  tags:
    - docker
  # the stage this job belongs to; jobs in the same stage run in parallel
  stage: unit-test
  image: xxxx
  before_script:
    - cd web
  script:
    - pwd
    - npm install
    - npm run test -- --no-watch --no-progress --browsers=ChromeHeadlessCI

success-notify:
  # the frontend is built with docker
  tags:
    - docker
  # the stage this job belongs to; jobs in the same stage run in parallel
  stage: notify
  variables:
    DING_TKON: xxxx
  image: xxxx
  script:
    - env

error-notify:
  # the frontend is built with docker
  tags:
    - docker
  # the stage this job belongs to; jobs in the same stage run in parallel
  stage: notify
  image: xxx
  script:
    - env
  when: on_failure
Goal: pass the DingTalk token as a parameter to the Dockerfile; the Dockerfile receives the parameter and executes the uploaded script, so that when GitLab runs the pipeline, the script is executed automatically from the built image.
The Dockerfile I wrote is the one shown earlier.
However, after checking the official documentation and Google, everything I found says the same thing: to pass parameters through the ARG instruction, you have to run the docker build command manually and supply them there (via --build-arg).
But with the current .gitlab-ci.yml, the build runs automatically as long as the image address is supplied in image:; you never write the docker build command yourself.
I then went through the official GitLab documentation to see whether such parameters can be passed, but found nothing.
At the moment, the variables defined in .gitlab-ci.yml are already available in the pipeline. The next step is to try running it directly and see whether the environment variables in the pipeline can be picked up directly inside the container built from the Dockerfile.
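Since variables defined in .gitlab-ci.yml are exported as environment variables inside the job's container, one experiment worth trying (just a sketch of the idea, not a verified solution) is to skip ARG entirely and call the script from the job's script: section, letting it read DING_TKON from the environment:
# commands that could go into the notify job's script: section
# (the flags mirror the CMD in the Dockerfile above)
echo "$DING_TKON"
sh /send-ding.sh -a "$DING_TKON" -t markdown -c "pipeline succeeded" -T "title"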
References:
https://blog.csdn.net/qq_33598419/article/details/107612891
https://zhuanlan.zhihu.com/p/53260098
https://docs.docker.com/engine/reference/builder/