Author: Michelangelo, Fang Tian
Cloud native is swallowing the software world, and containers have changed the traditional application development model. Today, developers not only build applications, but also use Dockerfiles to containerize them, packaging the application together with its dependencies to produce more reliable artifacts and improve R&D efficiency.
As a project iterates and reaches a certain scale, the operations team and the R&D team need to cooperate. The operations team looks at things from a different perspective than R&D: what it wants from images is security and standardization. For example:
- Which base image should be chosen for different applications?
- What versions of dependencies are there?
- What ports does the application need to expose?
To improve operational efficiency and application security, developers need to keep updating the Dockerfile to meet these goals. At the same time, the operations team also intervenes in image construction: if a CVE in the base image is fixed, the operations team needs to update the Dockerfile to use a newer version of the base image. In short, both operations and R&D have to touch the Dockerfile, and the two cannot be decoupled.
To solve this series of problems, better products for building images have emerged, among them Cloud Native Buildpacks (CNB). CNB provides a faster, safer, and more reliable way to build OCI-compliant images, decoupling the R&D and operations teams.
Before introducing CNB, let's first explain a few basic concepts.
OCI-compliant image
Today, Docker is no longer the only container runtime. To ensure that all container runtimes can run images produced by any build tool, the Linux Foundation, together with Google, Huawei, HP, IBM, Docker, Red Hat, VMware, and other companies, announced the Open Container Project (OCP), later renamed the Open Container Initiative (OCI). OCI defines industry standards around container image formats and runtimes; given an OCI image, any OCI-compliant runtime can use that image to run containers.
If you were to ask what the difference is between a Docker image and an OCI image, the answer today is: very little. Some old Docker images predate the OCI specification; they follow the Docker v1 specification and are incompatible with the Docker v2 specification. The Docker v2 specification was donated to OCI and forms the basis of the OCI specification. All container registries, Kubernetes platforms, and container runtimes today are built around the OCI specification.
What are Buildpacks
The Buildpacks project was first launched by Heroku in 2011 and has since been widely adopted by PaaS platforms, of which Cloud Foundry is a representative example.
A buildpack is a program that turns source code into a compressed package that the PaaS platform can run. Usually, each buildpack encapsulates the toolchain of a single language ecosystem; Ruby, Go, Node.js, Java, Python, and so on each have their own dedicated buildpacks.
You can think of a buildpack as a bundle of scripts whose job is to package the application's executable together with its dependent environment, configuration, startup scripts, and so on, and then upload the result to a repository such as Git. In Cloud Foundry, the packaged artifact is called a droplet.
Cloud Foundry's scheduler then selects a virtual machine capable of running the application and notifies the agent on that machine to download the droplet and start the application with the startup command specified by the buildpack.
In January 2018, Pivotal and Heroku jointly launched the Cloud Native Buildpacks (CNB) project, and in October of the same year the project entered the CNCF Sandbox.
In November 2020, the CNCF Technical Oversight Committee (TOC) voted to promote CNB from a sandbox project to an incubating project. Now is a good time to take a closer look at CNB.
Why you need Cloud Native Buildpacks
Cloud Native Buildpacks (CNB) can be regarded as Buildpacks technology adapted to the cloud-native era. It supports modern language ecosystems and shields developers from the details of building and deploying applications, such as which operating system to choose, how to write image-handling scripts, and how to optimize image size, and it produces OCI container images that can run in any cluster compatible with the OCI image standard. CNB also embraces many cloud-native features, such as blob mounting across image repositories and image-level rebasing.
This makes CNB's way of building images more standardized and automated. Compared with a Dockerfile, Buildpacks provide a higher level of abstraction for building applications: Buildpacks abstract OCI image building in much the same way that Helm abstracts Deployment orchestration.
In October 2020, Google Cloud announced full support for Buildpacks, including Cloud Run, Anthos, and Google Kubernetes Engine (GKE). Companies such as IBM Cloud, Heroku, and Pivotal have already adopted Buildpacks, and if nothing else, other cloud providers are likely to follow soon.
Advantages of Buildpacks:
- For applications with the same build purpose, there is no need to write build files repeatedly (a single Builder suffices).
- Does not depend on Dockerfile.
- What each layer (buildpack) does can easily be inspected through its rich metadata (buildpack.toml).
- After changing the underlying operating system, there is no need to rewrite the image build process.
- Guarantee secure and compliant app builds without developer intervention.
The Buildpacks community also gave a table to compare similar app packaging tools:
As the table shows, Buildpacks support more capabilities than similar packaging tools, including caching, source code detection, pluggability, rebase support, reuse, and integration with multiple CI/CD ecosystems.
How Cloud Native Buildpacks work
Cloud Native Buildpacks consist of three main components: Builder, Buildpack, and Stack.
Buildpack
A Buildpack is essentially a collection of executable units that typically check the application source code, build the code, and generate the image. A typical Buildpack contains the following three files:
- buildpack.toml - Provides metadata information for the buildpack.
- bin/detect - Check if this buildpack should be executed.
- bin/build - Execute the build logic of the buildpack and finally generate the image.
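To make this concrete, here is a minimal sketch of the three files; the buildpack id, file contents, and logic are illustrative assumptions rather than an official buildpack:

buildpack.toml:
api = "0.7"
[buildpack]
id = "examples/hello-java"
version = "0.0.1"
name = "Hello Java Buildpack"

bin/detect:
#!/usr/bin/env bash
# Opt in (exit 0) only if the source looks like a Maven project;
# exit code 100 tells the lifecycle this buildpack does not apply.
[[ -f pom.xml ]] || exit 100

bin/build:
#!/usr/bin/env bash
set -euo pipefail
layers_dir="$1"  # first argument: directory for the layers this buildpack creates
mkdir -p "${layers_dir}/maven"
echo "---> Hello Java buildpack: build logic (e.g. running Maven) would go here"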
Builder
Buildpacks complete their build logic through three actions: detect, build, and output. Usually several Buildpacks are needed to build one application, so a Builder is a collection of build logic, including all the components required for the build and an image of the running environment.
Let's try to understand how the Builder works with a hypothetical pipeline:
- Initially, as the application's developers, we prepare the application source code, marked here as "0".
- We then feed "0" into the first process, where Buildpacks1 works on it: Buildpacks1 checks whether the application carries the "0" mark, and if so enters its build step, appending "1" so that the mark becomes "01".
- In the same way, the second and third processes decide whether to execute their own build logic according to their own entry conditions.
In this example, the application satisfies the entry conditions of every process, so the final output OCI image carries the mark "01234".
Mapping this to the concept of a Builder: a Builder is an ordered combination of Buildpacks, together with a base image for the build environment (the build image), a lifecycle, and a reference to another base image used to run the application (the run image). Builders are responsible for building the application source code into an app image.
The build image provides the base environment for the Builder (for example, an Ubuntu Bionic OS image with build tools installed), while the run image provides the base environment for the app image at runtime. The combination of a build image and a run image is called a Stack.
Stack
As mentioned above, the combination of a build image and a run image is called a Stack. In other words, a Stack defines the execution environment of the Buildpacks and the base image of the final application.
You can think of the build image as the base image of the first stage of a Dockerfile multi-stage build, and the run image as the base image of the second stage.
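If you think in Dockerfiles, the analogy can be sketched like this; the image names and commands below are hypothetical, and CNB does not actually generate a Dockerfile:

# Stage 1: the "build image" supplies compilers and build tools
FROM example/build-base:bionic AS build
COPY . /workspace
RUN /workspace/compile.sh
# Stage 2: the "run image" supplies only the runtime environment
FROM example/run-base:bionic
COPY --from=build /workspace/app /app
ENTRYPOINT ["/app"]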
All three components exist in the form of Docker images and offer very flexible configuration options, as well as control over each layer of the generated image. Combined with powerful caching and rebasing capabilities, customized component images can be reused by multiple applications, and each layer can be updated individually as needed.
The Lifecycle is the core of a Builder. It abstracts the steps that take application source code to an image, orchestrates the whole process, and finally produces an application image. We introduce the Lifecycle in its own section below.
Build Lifecycle
The Lifecycle extracts the detection and build processes of all Buildpacks and aggregates them into two main steps: Detect and Build. This reduces the Lifecycle's architectural complexity and makes it easier to implement custom Builders.
In addition to the two main steps of Detect and Build, Lifecycle also includes some additional steps, which we will explain together.
Detect
Each Buildpack includes a /bin/detect file for detection, so during the Detect phase the Lifecycle instructs the /bin/detect of all Buildpacks to execute in order and collects the results.
So once the Lifecycle has split Detect from Build, how does it maintain the relationship between the two phases?
In the Detect and Build phases, Buildpacks usually declare what prerequisites they need in the process and what results they will provide.
Within the Lifecycle, a Build Plan is provided to store each Buildpack's required items and outputs.
type BuildPlanEntry struct {
	// Field types are simplified here; see the lifecycle source for the exact definitions.
	Providers []Provider `toml:"providers"` // buildpacks able to provide the dependency
	Requires  []Require  `toml:"requires"`  // requirements declared against the dependency
}
The Lifecycle also stipulates that a set of Buildpacks can be combined into a Builder only when every required item is matched by a corresponding provided output.
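As a sketch of what this looks like on disk, a Java buildpack's /bin/detect could write entries like these to the Build Plan file, following the Build Plan TOML format from the buildpacks specification (the dependency name and version here are hypothetical):

# One buildpack offers a JDK...
[[provides]]
name = "jdk"
# ...and another (or the same one) requires it.
[[requires]]
name = "jdk"
[requires.metadata]
version = "11"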
Analysis
Buildpacks create some directories during the build; in the Lifecycle these directories are called layers. Among these layers, some can serve as a cache for the next Buildpack, some need to be present when the application runs, and some need to be cleaned up. How can these layers be controlled more flexibly? The Lifecycle provides three switch parameters that declare how each layer is expected to be handled:
- launch indicates whether this layer will work when the application is running.
- build indicates whether this layer will be accessed during subsequent builds.
- cache indicates whether this layer will be used as a cache.
The Lifecycle then decides each layer's final destination according to a matrix over these three parameters. Put simply, the Analysis phase provides caches for building and for running the application.
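In the buildpack API these three switches live in a small TOML file next to the layer directory. A sketch for a hypothetical maven-cache layer:

# <layers>/maven-cache.toml controls how the maven-cache layer is handled
[types]
launch = false  # not needed in the running application image
build = true    # visible to subsequent buildpacks during the build
cache = true    # persisted and restored as a cache for the next build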
Build
The Build phase uses the build plan and environment metadata output by the Detect phase, together with the layers retained for this phase, to execute the Buildpacks' build logic against the application source code, finally producing a runnable application artifact.
Export
The Export phase is easy to understand: after the build above completes, we need to output the final result as an OCI-standard image, so that the app artifact can run in any OCI-compatible cluster.
Rebase
In CNB's design, the final app artifact actually runs on the Stack's run image. You can think of the artifact above the run image as a unit that interfaces with the run image through an ABI (application binary interface), which lets the artifact be switched flexibly onto another run image.
This action is part of the Lifecycle and is called rebase. A rebase also happens during the image build, when the app artifact is switched from the build image onto the run image.
This mechanism is where CNB has the greatest advantage over Dockerfiles. In a large-scale production environment, if the OS layer of a container image has a problem and needs to be replaced, then for every kind of application image you would have to rewrite its Dockerfile, verify that the new Dockerfile works, check that the new layers do not conflict with existing ones, and so on. With CNB you only need a single rebase, which greatly simplifies image upgrades in large-scale production.
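With the pack CLI, such an OS-layer swap is a single command; the image names below are hypothetical:

$ pack rebase registry.example.com/sample-app:latest \
    --run-image registry.example.com/sample-stack-run:patched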
That concludes our analysis of how CNB builds images. In summary:
- Buildpacks are the smallest building units that perform specific build operations;
- Lifecycle is the image construction lifecycle interface provided by CNB;
- Builder is a builder with a specific build purpose formed by several Buildpacks plus Lifecycle and stack.
Condensing this into formulas:
- build image + run image = stack
- stack(build image) + buildpacks + lifecycle = builder
- stack(run image) + app artifacts = app
So now the question is: how do we use all of this?
Platform
This is where a Platform comes in. A Platform is the executor of the Lifecycle: its role is to apply a Builder to the given source code and carry out the Lifecycle's instructions.
During this process, the Builder builds the source code into an app, which at this point still lives on the build image. Then, through the Lifecycle's rebase interface, the underlying logic uses the ABI (application binary interface) to switch the app artifact from the build image onto the run image. This produces the final OCI image.
Commonly used Platforms include Tekton and CNB's own Pack. Next, we will use Pack to experience building images with Buildpacks.
Install the Pack CLI tool
Currently the Pack CLI supports Linux, macOS, and Windows. Taking Ubuntu as an example, the installation commands are as follows:
$ sudo add-apt-repository ppa:cncf-buildpacks/pack-cli
$ sudo apt-get update
$ sudo apt-get install pack-cli
View version:
$ pack version
0.22.0+git-26d8c5c.build-2970
Note: Docker needs to be installed and running before using Pack.
Currently the Pack CLI only supports Docker and does not support other container runtimes (such as containerd). However, Podman can be made to work through a few tricks. Taking Ubuntu as an example, the rough steps are as follows:
Install podman first.
$ . /etc/os-release
$ echo "deb https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/xUbuntu_${VERSION_ID}/ /" | sudo tee /etc/apt/sources.list.d/devel:kubic:libcontainers:stable.list
$ curl -L "https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/xUbuntu_${VERSION_ID}/Release.key" | sudo apt-key add -
$ sudo apt-get update
$ sudo apt-get -y upgrade
$ sudo apt-get -y install podman
Then enable Podman Socket.
$ systemctl enable --user podman.socket
$ systemctl start --user podman.socket
Specify the DOCKER_HOST environment variable.
$ export DOCKER_HOST="unix://$(podman info -f "{{.Host.RemoteSocket.Path}}")"
After that, Pack can build images on the Podman container runtime. For detailed configuration steps, please refer to the official Buildpacks documentation.
Building an OCI image with Pack
After installing Pack, we can deepen our understanding of how CNB works using the samples repository officially provided by the community. This is a Java example; during the build there is no need to install a JDK, run Maven, or set up any other build environment, because Buildpacks take care of all that for us.
First clone the example repository:
$ git clone https://github.com/buildpacks/samples.git
Later, we will use the bionic Builder to build the image. Let's take a look at the configuration of the Builder:
$ cat samples/builders/bionic/builder.toml
# Buildpacks to include in builder
[[buildpacks]]
id = "samples/java-maven"
version = "0.0.1"
uri = "../../buildpacks/java-maven"
[[buildpacks]]
id = "samples/kotlin-gradle"
version = "0.0.1"
uri = "../../buildpacks/kotlin-gradle"
[[buildpacks]]
id = "samples/ruby-bundler"
version = "0.0.1"
uri = "../../buildpacks/ruby-bundler"
[[buildpacks]]
uri = "docker://cnbs/sample-package:hello-universe"
# Order used for detection
[[order]]
[[order.group]]
id = "samples/java-maven"
version = "0.0.1"
[[order]]
[[order.group]]
id = "samples/kotlin-gradle"
version = "0.0.1"
[[order]]
[[order.group]]
id = "samples/ruby-bundler"
version = "0.0.1"
[[order]]
[[order.group]]
id = "samples/hello-universe"
version = "0.0.1"
# Stack that will be used by the builder
[stack]
id = "io.buildpacks.samples.stacks.bionic"
run-image = "cnbs/sample-stack-run:bionic"
build-image = "cnbs/sample-stack-build:bionic"
The definition of the Builder is completed in the builder.toml file, whose configuration falls into three parts:
- [[buildpacks]] defines the Buildpacks contained in the Builder.
- [[order]] defines the detection order of the Buildpacks contained in the Builder.
- [stack] defines the base environment on which the Builder runs.
We can use this builder.toml to build our own builder image:
$ cd samples/builders/bionic
$ pack builder create cnbs/sample-builder:bionic --config builder.toml
284055322776: Already exists
5b7c18d5e17c: Already exists
8a0af02bbad1: Already exists
0aa0fb9222a5: Download complete
3d56f4bc2c9a: Already exists
5b7c18d5e17c: Already exists
284055322776: Already exists
8a0af02bbad1: Already exists
a967314b5694: Already exists
a00d148009e5: Already exists
dbb2c49b44e3: Download complete
53a52c7f9926: Download complete
0cceee8a8cb0: Download complete
c238db6a02a5: Download complete
e925caa83f18: Download complete
Successfully created builder image cnbs/sample-builder:bionic
Tip: Run pack build <image-name> --builder cnbs/sample-builder:bionic to use this builder
Next, return to the samples root directory and use the pack tool together with the builder image to build the application under apps/java-maven. When the build succeeds, an OCI image named sample-app is produced.
$ cd ../..
$ pack build --path apps/java-maven --builder cnbs/sample-builder:bionic sample-app
Finally, use Docker to run this sample-app image:
$ docker run -it -p 8080:8080 sample-app
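Assuming the sample application serves HTTP on the mapped port (as the port mapping above suggests), you can check it from another terminal:

$ curl http://localhost:8080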
Now let's take a look at the image we built earlier:
$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
cnbs/sample-package hello-universe e925caa83f18 42 years ago 4.65kB
sample-app latest 7867e21a60cd 42 years ago 300MB
cnbs/sample-builder bionic 83509780fa67 42 years ago 181MB
buildpacksio/lifecycle 0.13.1 76412e6be4e1 42 years ago 16.4MB
Notice that the image creation time is a fixed timestamp: 42 years ago. Why? If the timestamp were not fixed, every build would produce a different image hash even when the content is identical, making it hard to tell whether two images are the same. With a fixed timestamp, layers created during previous builds can be reused.
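You can verify the fixed timestamp yourself; the exact date depends on the lifecycle version, so the output below is illustrative:

$ docker inspect sample-app --format '{{.Created}}'
1980-01-01T00:00:01Z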
Summary
Cloud Native Buildpacks represent a major advance in modern software development, and in most cases their benefits over Dockerfiles are immediate. Large enterprises will need to invest in re-engineering CI/CD processes or writing custom Builders, but that investment saves a great deal of time and maintenance cost in the long run.
This article introduced the origin of Cloud Native Buildpacks (CNB) and its advantages over other tools, elaborated on how CNB works, and finally walked through a simple example of building an image with CNB. Follow-up articles will cover how to create custom Builders, Buildpacks, and Stacks, and how function computing platforms (for example, OpenFunction and Google Cloud Functions) use the S2I capabilities provided by CNB to turn user function code into the final running application.