Introduction
This is the fourth and final article in a series about container runtimes. It's been a while since the first article, in which I gave an overview of container runtimes and discussed the difference between low-level and high-level runtimes. In the second article, I covered low-level container runtimes in detail and built a simple one. In the third article, I moved up the stack and introduced high-level container runtimes.
Kubernetes supports high-level container runtimes through the Container Runtime Interface (CRI). CRI was introduced in Kubernetes 1.5 and acts as a bridge between the kubelet and the container runtime. High-level container runtimes that want to integrate with Kubernetes are expected to implement CRI. The runtime must handle image management, support Kubernetes Pods, and manage individual containers, so by our definition from the third article, a Kubernetes runtime must be a high-level runtime; low-level runtimes lack the necessary features. Since the third article covered high-level container runtimes in depth, in this article I will focus on CRI and introduce some of the runtimes that support it.
To learn more about CRI, it is worth looking at the overall Kubernetes architecture. The kubelet is an agent that sits on each worker node in the Kubernetes cluster and is responsible for managing the container workloads on its node. When it comes to actually running those workloads, the kubelet uses CRI to communicate with the container runtime running on the same node. In this way, CRI is simply an abstraction layer or API that lets you swap out container runtime implementations instead of building them into the kubelet.
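As a concrete sketch, the kubelet is pointed at the runtime's CRI socket via a command-line flag, for example (the exact flags and socket paths vary by Kubernetes version and deployment method):
# Sketch only; flag names and socket paths vary by version and setup.
kubelet --container-runtime-endpoint=unix:///run/containerd/containerd.sock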
CRI runtime examples
Here are some CRI runtimes that can be used with Kubernetes.
containerd
containerd is the high-level runtime I introduced in the third article and is probably the most popular CRI runtime today. It implements CRI as a plugin that is enabled by default and listens on a unix socket, so you can configure crictl to connect to containerd like this:
cat <<EOF | sudo tee /etc/crictl.yaml
runtime-endpoint: unix:///run/containerd/containerd.sock
EOF
containerd is an interesting high-level runtime in that it supports multiple low-level runtimes through something called a runtime handler. The runtime handler is passed through a field in CRI, and based on that runtime handler containerd launches an application called a shim to start the container. This can be used to run containers with low-level runtimes other than runc, such as gVisor, Kata Containers, or Nabla Containers. The runtime handler is exposed in the Kubernetes API via the RuntimeClass object, which landed as alpha in Kubernetes 1.12. There is more on containerd's shim concept here.
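To give an idea of what this looks like on the Kubernetes side, here is a sketch of a RuntimeClass object that maps a class name to containerd's runsc (gVisor) handler. Note that this uses the later v1beta1 schema; the alpha schema in Kubernetes 1.12 differed slightly:
cat <<EOF | kubectl apply -f -
# Sketch: node.k8s.io/v1beta1 schema, newer than the 1.12 alpha.
apiVersion: node.k8s.io/v1beta1
kind: RuntimeClass
metadata:
  name: gvisor
handler: runsc
EOF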
Docker
The Docker runtime was the first to support CRI; that support was implemented as a shim between the kubelet and Docker. Docker has since broken many of its features out into containerd and now supports CRI through containerd. When a modern version of Docker is installed, containerd is installed alongside it, and CRI talks directly to containerd. Docker itself therefore does not need to support CRI. So, depending on your situation, you can install containerd directly or via Docker.
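If you have Docker installed, you can check that containerd came along with it by looking for containerd's socket (the path may vary by distribution):
ls -l /run/containerd/containerd.sock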
cri-o
cri-o is a lightweight CRI runtime built as a high-level runtime specifically for Kubernetes. It supports the management of OCI-compatible images and pulls from any OCI-compatible image registry. It supports runc and Clear Containers as low-level runtimes. In theory it supports other OCI-compatible low-level runtimes as well, but it relies on compatibility with runc's OCI command-line interface, so in practice it is not as flexible as containerd's shim API.
cri-o's endpoint is located at /var/run/crio/crio.sock by default, so you can configure crictl for it like this:
cat <<EOF | sudo tee /etc/crictl.yaml
runtime-endpoint: unix:///var/run/crio/crio.sock
EOF
CRI specification
CRI is a gRPC API specified with protocol buffers. The specification is defined in a protobuf file in the Kubernetes repository under the kubelet. CRI defines several remote procedure calls (RPCs) and message types. The RPCs cover operations such as "pull image" (ImageService.PullImage), "create pod sandbox" (RuntimeService.RunPodSandbox), "create container" (RuntimeService.CreateContainer), "start container" (RuntimeService.StartContainer), "stop container" (RuntimeService.StopContainer), and so on.
For example, a typical interaction with CRI looks something like the following (written in my own pseudo-gRPC form; each RPC actually takes a larger request object, which I have simplified for brevity). The RunPodSandbox and CreateContainer RPCs return IDs in their responses, and those IDs are used in subsequent requests:
ImageService.PullImage({image: "image1"})
ImageService.PullImage({image: "image2"})
podID = RuntimeService.RunPodSandbox({name: "mypod"})
id1 = RuntimeService.CreateContainer({
    pod: podID,
    name: "container1",
    image: "image1",
})
id2 = RuntimeService.CreateContainer({
    pod: podID,
    name: "container2",
    image: "image2",
})
RuntimeService.StartContainer({id: id1})
RuntimeService.StartContainer({id: id2})
The crictl tool can be used to interact with a CRI runtime directly from the command line. It sends gRPC messages to a CRI runtime, and you can use it to debug and test CRI implementations without starting a kubelet or a Kubernetes cluster. You can get crictl by downloading the binary from the cri-tools releases page on GitHub.
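A typical install looks something like the following (the version number here is only an example; check the releases page for the latest):
VERSION="v1.17.0"  # example version; pick the latest release
wget https://github.com/kubernetes-sigs/cri-tools/releases/download/${VERSION}/crictl-${VERSION}-linux-amd64.tar.gz
sudo tar zxvf crictl-${VERSION}-linux-amd64.tar.gz -C /usr/local/bin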
You can configure crictl by creating a configuration file at /etc/crictl.yaml. In it, you specify the runtime's gRPC endpoint as either a unix socket file (unix:///path/to/file) or a TCP endpoint (tcp://<host>:<port>). This example will use containerd:
cat <<EOF | sudo tee /etc/crictl.yaml
runtime-endpoint: unix:///run/containerd/containerd.sock
EOF
Alternatively, you can specify the runtime endpoint each time you run a crictl command:
crictl --runtime-endpoint unix:///run/containerd/containerd.sock …
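Either way, you can verify that crictl can reach the runtime by asking for its version information:
sudo crictl version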
Let's use crictl to run a pod with a single container. First, tell the runtime to pull the nginx image you need, because a container cannot be started without its image stored locally:
sudo crictl pull nginx
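You can confirm that the image is now stored locally:
sudo crictl images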
Next, write a pod sandbox creation request as a JSON file:
cat <<EOF | tee sandbox.json
{
    "metadata": {
        "name": "nginx-sandbox",
        "namespace": "default",
        "attempt": 1,
        "uid": "hdishd83djaidwnduwk28bcsb"
    },
    "linux": {
    },
    "log_directory": "/tmp"
}
EOF
Then create the pod sandbox and save the sandbox's ID as SANDBOX_ID:
SANDBOX_ID=$(sudo crictl runp --runtime runsc sandbox.json)
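You can list pod sandboxes to confirm the sandbox was created and is in the ready state:
sudo crictl pods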
Next, create a container creation request as a JSON file:
cat <<EOF | tee container.json
{
    "metadata": {
        "name": "nginx"
    },
    "image": {
        "image": "nginx"
    },
    "log_path": "nginx.0.log",
    "linux": {
    }
}
EOF
Then create and start the container inside the pod:
{
CONTAINER_ID=$(sudo crictl create ${SANDBOX_ID} container.json sandbox.json)
sudo crictl start ${CONTAINER_ID}
}
Check the running pod and the running container:
sudo crictl inspectp ${SANDBOX_ID}
sudo crictl inspect ${CONTAINER_ID}
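If the runtime set up networking for the pod, you can also pull the pod's IP out of the sandbox status and query nginx directly. This is a sketch that assumes jq is installed and that the sandbox was assigned an IP:
# Assumes jq is installed and the sandbox status contains an IP.
POD_IP=$(sudo crictl inspectp ${SANDBOX_ID} | jq -r '.status.network.ip')
curl http://${POD_IP}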
Clean up by stopping and deleting the container:
{
sudo crictl stop ${CONTAINER_ID}
sudo crictl rm ${CONTAINER_ID}
}
Then stop and delete the pod:
{
sudo crictl stopp ${SANDBOX_ID}
sudo crictl rmp ${SANDBOX_ID}
}
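Finally, you can verify that everything was cleaned up by listing pods and containers again:
sudo crictl pods
sudo crictl ps -a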