<!-- Head image: MammothHS.jpg from bing.com -->

[Notes] Basic use of K8S

  • https://kubernetes.io/docs
  • https://kubernetes.io/zh/docs

Those two are the official docs of this thing. They may well be comprehensive, but that's not really the point: the things I want to look up are often the things they don't say plainly, they mostly just recite the scriptures at you.

So I wanted to write my own notes. If Alzheimer's wipes my memory some day and I'm still around then, I might be able to use this to pick up what I once knew.

Basic

A major feature of this K8S is that it has a whole lot of Concepts™. Its documentation likes to introduce the concepts first; I'll go the other way and start with how to actually use it:

In everyday use you generally won't need any command other than kubectl .

Basic usage boils down to simple examples of these:

  • Check which Pods
  • Enter Pod

There is also a very useful basic concept:

  • What is the relationship between Pod and Container

Check Pod

If you want to check which Pods are available:

kubectl get po

If you want to specify the namespace wahaha (don't know what a namespace is? It doesn't stop you from using it; if you care about the concept, look it up yourself):

kubectl get po -n wahaha

This -n can also be written as --namespace ; the former is the abbreviation.

Well, if you can use these, you've already taken a big step. If there are Pods, the output you see should be a neat, table-like text. If you want to see more fields, you can [🦕 append at the end] -o wide :

kubectl get po -n wahaha -o wide

(Whenever the marker [🦕 append] shows up, it means the following command is the previous one with that content added on. Of course it has to be separated by spaces! I'm not talking about raw string concatenation!)

If you want to see all Pods (across all namespaces) directly, it's just:

kubectl get po -A

There are other ways to look things up: bring the -l option and use a selector to find Pods that carry a specific, already-defined label.
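
For example, something like this (the label key/value here is made up; substitute a label your Pods actually carry):

kubectl get po -l app=nginx
kubectl get po -n wahaha -l 'app in (nginx, redis)'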

Enter Pod

Generally

If you have a Pod called abc (under the default namespace), and this Pod has only one container , the way to get in is:

kubectl exec abc -t -i -- sh
  • Generally, sh can also be bash . The latter is nicer but not necessarily available, depending on the container's image.
  • The software's own explanation of -i is: Pass stdin to the container , which roughly means it passes stdin (standard input) to the container.
  • The software's own explanation of -t is: Stdin is a TTY , which roughly means the standard input ( stdin ) is a terminal ( TTY ) (or "standard input comes from a terminal").

The two options ( Option ) above are explained by the output of kubectl exec -h . That output also shows the equivalent long names of these short option names (and the default values used when they are not specified).

The so-called entering the container means being able to operate the container interactively. That sort of thing can't be done without a shell, and Linux usually has sh , ash or bash installed by default. Of course, an image may ship with only one of those three shells, say only sh , or simply not have bash ; the case where none is installed isn't considered here. So, entering the container really means: starting the sh (or other shell) program inside the container in interactive mode .

Specifying a container

If the Pod abc (which namespace it's in I deliberately don't say here; that's explained below) has multiple containers (the containers item in a Pod definition file is an array, so there can be several), one of which is named ddf , and you want to enter that container, you generally write it like this:

kubectl exec abc -c ddf -t -i -- sh

[🦕 added in the middle] -c ddf .

Moreover, this works regardless of whether the Pod has multiple containers; it's just that when the Pod has only one container, you don't have to specify which container to enter.

Namespaces

If the Pod abc is in the namespace qwe , then entering its container ddf needs to look like this:

kubectl exec abc -c ddf -n qwe -t -i -- sh

The order between -c ddf and -n qwe doesn't matter, and the order between all the options and the Pod name abc is also irrelevant (though I think putting the Pod name right after exec reads most clearly).

Pod and Container

I say under the Pod rather than in the Pod , because there is no physical Pod entity; containers are the real thing, and a Pod is just a group of containers. It's only that in K8S the smallest unit of scheduling is the Pod.

If your K8S runs on top of Docker, that means the K8S-related components are themselves Docker containers, and for a Pod you started you can also use docker ps on its node to find the corresponding running container. You will find that entering the container with docker exec looks almost the same as entering it through the Pod as above.

Seen this way, K8S is more like an upper-layer wrapper around Docker. (Of course, in terms of documentation, Docker's is better; K8S's reads more like going through the motions...)

If you have the container ddf of the Pod abc under the namespace qwe , you can use a command like the following on the correct node to find the corresponding container:

docker ps | (fgrep qwe | fgrep abc | fgrep ddf)

The parentheses above don't need to be written, and the fgrep filters inside (separated by | ) can be reordered at will (meaning the output content won't change).

The output will be quite long. The names field contains the K8S Pod name, the container name defined under the K8S Pod, the K8S namespace name, and a few other parts.
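
For reference, on a Docker-based node the kubelet glues those parts together with underscores; the exact layout below is from memory, so treat it as an assumption:

docker ps --format '{{.Names}}'   # names look roughly like k8s_ddf_abc_qwe_<pod-uid>_0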

Now suppose you've found that the corresponding Docker container name is, say, xxxxxxxxxxx ; then entering it looks like this:

docker exec -t -i -- xxxxxxxxxxx sh

The docker command's parameters are not quite the same as kubectl 's, but abbreviation works similarly:

docker exec -ti xxxxxxxxxxx sh

For the docker command, the container name is the first item of the non-option parameter part (the part after -- ), and the command to be executed inside it is the second item there;

for the kubectl command it is not like that: the non-option parameter part (after the -- ) is entirely the command to be executed inside the container, and the Pod name itself sits in the option-related parameter area as a specially treated item. (Probably because of this, K8S strongly recommends not omitting the double dash -- .)
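
A quick side-by-side to make the difference concrete (the ls here is just an arbitrary command to run inside):

kubectl exec abc -c ddf -n qwe -- ls -l /    # everything after -- is the in-container command
docker exec -- xxxxxxxxxxx ls -l /           # the first item after -- is the container, the rest is the command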

The meaning of Pod

Its biggest characteristic is volatility : a single Pod is meant to be a temporary thing; you must not expect it to run for long.

It requires that a long-running resource actually be a series of such short-lived resources laid out on the timeline. When you access the long-lived resource, you are really accessing one of the short-lived ones; which specific short-lived one it is, the user shouldn't and needn't know.

Generally, when we restart a resource (which could be a running machine or a running program instance, i.e. a process), we first turn it off and then turn it on. With volatility in the picture, you only need to make sure the new one is already up, switch the interface through which the (long-lived) resource is accessed over to it, and then turn off the old Pod (the short-lived resource). If the user perceives any failure at all, it is only at the moment of switching, and that moment is very short; the wait is much shorter than turning something off and then on again. This is the basis of K8S's advantages.

Define resources

Definitions are generally written in Yaml, but the equivalent Json works just as well.
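
A quick way to convince yourself the two forms are equivalent is to let kubectl render the same throwaway definition both ways (the --dry-run=client flag exists in newer kubectl; older versions used a plain --dry-run ; the ConfigMap here is purely a placeholder):

kubectl create configmap demo-cm --from-literal=hello=world --dry-run=client -o yaml
kubectl create configmap demo-cm --from-literal=hello=world --dry-run=client -o json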

explain

This is the subcommand kubectl explain . It is followed by an object path written in a certain way. For example, if you want to write an Ingress , you can execute:

kubectl explain Ingress   # or: kubectl explain ing

to get some information. The output contains these blocks:

  • KIND : This is the type (full name) of the definition object.
  • VERSION : This should be the current version of this type (it is said that this is the embodiment of the K8S plug-in design).
  • DESCRIPTION : Below this is the description of the object currently referred to.
  • FIELDS : the fields; these can also be understood as the sub-object paths under the current object path.

The output when I execute kubectl explain ing here really looks like this:

KIND:     Ingress
VERSION:  extensions/v1beta1

DESCRIPTION:
     Ingress is a collection of rules that allow inbound connections to reach
     the endpoints defined by a backend. An Ingress can be configured to give
     services externally-reachable urls, load balance traffic, terminate SSL,
     offer name based virtual hosting etc. DEPRECATED - This group version of
     Ingress is deprecated by networking.k8s.io/v1beta1 Ingress. See the release
     notes for more information.

FIELDS:
   apiVersion   <string>
     APIVersion defines the versioned schema of this representation of an
     object. Servers should convert recognized schemas to the latest internal
     value, and may reject unrecognized values. More info:
     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources

   kind <string>
     Kind is a string value representing the REST resource this object
     represents. Servers may infer this from the endpoint the client submits
     requests to. Cannot be updated. In CamelCase. More info:
     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds

   metadata     <Object>
     Standard object's metadata. More info:
     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata

   spec <Object>
     Spec is the desired state of the Ingress. More info:
     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status

   status       <Object>
     Status is the current state of the Ingress. More info:
     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status

I see a spec field there. Suppose I've understood from its description what it's for and now want to know its child objects (that is, its fields ); then I execute kubectl explain ing.spec and see this output:

KIND:     Ingress
VERSION:  extensions/v1beta1

RESOURCE: spec <Object>

DESCRIPTION:
     Spec is the desired state of the Ingress. More info:
     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status

     IngressSpec describes the Ingress the user wishes to exist.

FIELDS:
   backend      <Object>
     A default backend capable of servicing requests that don't match any rule.
     At least one of 'backend' or 'rules' must be specified. This field is
     optional to allow the loadbalancer controller or defaulting logic to
     specify a global default.

   ingressClassName     <string>
     IngressClassName is the name of the IngressClass cluster resource. The
     associated IngressClass defines which controller will implement the
     resource. This replaces the deprecated `kubernetes.io/ingress.class`
     annotation. For backwards compatibility, when that annotation is set, it
     must be given precedence over this field. The controller may emit a warning
     if the field and annotation have different values. Implementations of this
     API should ignore Ingresses without a class specified. An IngressClass
     resource may be marked as default, which can be used to set a default value
     for this field. For more information, refer to the IngressClass
     documentation.

   rules        <[]Object>
     A list of host rules used to configure the Ingress. If unspecified, or no
     rule matches, all traffic is sent to the default backend.

   tls  <[]Object>
     TLS configuration. Currently the Ingress only supports a single TLS port,
     443. If multiple members of this list specify different hosts, they will be
     multiplexed on the same port according to the hostname specified through
     the SNI TLS extension, if the ingress controller fulfilling the ingress
     supports SNI.

Following this method, it should be possible to learn how to write definition files "without a teacher"... (See, it even tells you the type of each field for you...🐌)

(However, po ing sc psp sts and so on... with so many resources, I still don't know how to list all the resource types and their full and abbreviated names.)
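
(Actually, kubectl api-resources seems to do exactly that: it lists every resource type together with its short name, so po ing sc psp sts and friends all show up in the SHORTNAMES column.)

kubectl api-resources
kubectl api-resources --namespaced=true   # only the namespaced kinds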

Anyway, that's the ideal situation. I know reality isn't ideal, and the fastest order in which to learn things is not the order the official help text or online docs present them in (Google's online docs really are perfunctory sutra-chanting; maybe employees just aren't given much time to write them). What the fastest order actually is, I don't know, but knowing that kubectl explain can be used like this should lower the difficulty a little bit... (Pity it's not in Chinese...)

Creating a Pod, or a resource of any other KIND , is generally done by writing a definition file, either Yaml or Json. You can work out concrete examples yourself; what is given here is just a way to understand a given definition file. (And there are enough miscellaneous things in one to keep you looking stuff up for quite a while...)
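
Once such a file is written, feeding it to the cluster is generally a matter of apply (the file name below is just a placeholder):

kubectl apply -f my-definition.yaml
kubectl apply -f my-definition.yaml -n wahaha   # same -n / --namespace option as before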

Correspondingly, docker has something similar. For example, if you want to see the default startup command of some image but don't know where to look, you can do this:

  • First of all, image is the word for it, so execute docker image --help to get some help information;
  • It will list this subcommand's own next level of subcommands. Among them, inspect looks like the one I want, since everything else is clearly unrelated to what I'm trying to do. So, to look at the nginx image, execute docker image inspect nginx and a pile of Json comes out. (Better than Yaml, right?) In it you can see details about many aspects of this image.
  • Then, for any key-value pair that looks useful, you can go search for it one by one. For example, I found this interesting thing: https://yeasy.gitbook.io/docker_practice/image/dockerfile/entrypoint (the command actually executed at startup is not only in CMD ).
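
If you only care about the startup command and don't want to scroll through all that Json, the --format option of docker inspect can pull out just those keys (the field paths below are what I see in my Docker version; treat them as an assumption):

docker image inspect nginx --format 'ENTRYPOINT={{json .Config.Entrypoint}}  CMD={{json .Config.Cmd}}'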

Find resources

Generally, no matter what the KIND is, the first wave of fields will include metadata . For an Ingress resource, execute kubectl explain ing.metadata to see the detailed introduction at this level. Meta-information is defined here, for example what the resource object is called and which namespace it lives in.

And in general, under the metadata of a resource of any KIND there is a field called labels ; you can see its type is key-value pairs ( map ), and you can define whatever keys and values you like in it:

  • First, you can see from kubectl explain ing.metadata.labels that it has no FIELDS part saying which fields are under it; instead, besides the DESCRIPTION part, there is a FIELD part describing ing.metadata.labels itself.
  • Secondly, if you try kubectl explain ing.metadata.labels.aaaa or kubectl explain ing.metadata.labels.zzzz , i.e. write any next-level object path you like, it executes fine and gives a DESCRIPTION part of <empty> (meaning empty). This does not work at a level whose type is object ( Object ), though. If you don't believe it, try it yourself: executing kubectl explain ing.metadata.zzz gives an error message: error: field "zzz" does not exist .

Then there is the selector...

(Unfinished)

(Major discovery: the Service selector should match the labels of the Pod !!! Not the labels of the Deployment or ReplicaSet!!! Stop blindly copying it back and forth...)
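
A minimal sketch of that discovery (every name and the label app: web-demo below are made up; the only point is that the Service's spec.selector repeats the Pod's metadata.labels ):

apiVersion: v1
kind: Pod
metadata:
  name: web-demo
  labels:
    app: web-demo        # <- this is what the Service looks for
spec:
  containers:
  - name: web
    image: nginx
    ports:
    - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: web-demo
spec:
  selector:
    app: web-demo        # <- must match the Pod's labels
  ports:
  - port: 80
    targetPort: 80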

Mounting files

Use configMap .

Among other things, kubectl explain po.spec.volumes.configMap describes how to reference a configMap from a specific volume.

Here is an example.

In a resource of type Pod

  • Under po.spec.containers.volumeMounts , write these two entries:

    • name with the value haproxy-config
    • mountPath with the value /usr/local/etc/haproxy
  • Under po.spec.volumes , write these:

    • name with the value haproxy-config
    • configMap.name with the value haproxy-appdemo
    • configMap.defaultMode with the value 0420

(The po above means the Pod type; it is equivalent to deploy.spec.template or sts.spec.template and so on.)

In a resource of type ConfigMap

  • metadata.name has the value haproxy-appdemo
  • data."haproxy.cfg" has the value of [a bunch of strings].

So, in the Pod-type resource above, whatever po.spec.containers.args runs can go and use the file /usr/local/etc/haproxy/haproxy.cfg . (Of course, the file is simply there from then on regardless.) A Yaml sketch of both definitions follows below.
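
Put together as actual Yaml, the two definitions described above would look roughly like this (a sketch only; the image name is a placeholder and the haproxy.cfg content is elided, but the field paths follow the kubectl explain output):

apiVersion: v1
kind: Pod
metadata:
  name: haproxy-appdemo
spec:
  containers:
  - name: haproxy
    image: haproxy               # placeholder image
    volumeMounts:
    - name: haproxy-config
      mountPath: /usr/local/etc/haproxy
  volumes:
  - name: haproxy-config
    configMap:
      name: haproxy-appdemo
      defaultMode: 0420
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: haproxy-appdemo
data:
  haproxy.cfg: |
    # [a bunch of strings]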

Above I've only described the key parts of the definition files in my own way; you can use kubectl explain to view the description of each level. For a complete definition, after successfully running helm install haproxy-appdemo haproxytech/haproxy (this requires the Helm 3 tool, plus helm repo add haproxytech https://haproxytech.github.io/helm-charts && helm repo update first), you can view the newly added resources in various ways.

In this way, a concrete file is expressed through part of a resource definition. Moreover, the usual interactive tools can directly edit the values of the existing keys under ConfigMap.data .
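
For example, something like this opens the live object in an editor (assuming the Helm install above put it in your current namespace):

kubectl edit configmap haproxy-appdemo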

As for that 0420 , it is just a number for file permission control ( written here in octal, so make sure you don't drop the leading 0 ! Otherwise it gets read as a decimal number, which is not the effect you want! ), the same kind of number you use with chmod ; you can see this in the execution output of kubectl explain po.spec.volumes.configMap.defaultMode . That is to say, from po.spec.volumes.configMap onward, the content of the corresponding ConfigMap resource is treated as the configuration describing the file being added.
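
If you want to see what that leading 0 changes, the shell's own printf can do the conversion (just an illustration, nothing here touches the cluster):

printf '%d\n' 0644   # 420  -> 0644 read as octal is this decimal number
printf '%d\n' 644    # 644  -> without the leading 0 it is a different value entirely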

Making an image

Roughly three steps:

  • Create and start a container based on an existing image
  • Enter the container and do some operations (this step can be merged into the previous one by using a Dockerfile )
  • Commit the new container as a new image

Why write this? Because even K8S, in the end, uses images to create containers; without an image there is no container. (Just like there is no process instance without a program.)

Start the container

Generally you use docker run , which is roughly shorthand for docker create followed by docker container start .

My example:

docker run -d -t -i -p 1234:4321 -p 5678:8765 -v /tmp/abc/data:/data -v /run/abc:/run -p 7788 --name playing-abc -- a.b/center/abc:v1 /usr/sbin/init

Here, run is a docker subcommand; you can read docker run --help to understand what each option does.

Among the parameters, let's first talk about the ones in front of the -- (my understanding is not necessarily correct, but I'll attach the software's own explanation of itself first):

  • -d : the help explains this as Run container in background and print container ID , i.e. let the container run in the background (meaning you don't enter it) and print the container's ID. (My understanding: if you don't run it in the background, then once the foreground is done and all the processes in the container are gone, the container is gone too; when no process is left in a container, the container stops. This is a feature of docker ; think about it, it's reasonable.)
  • -i : Keep STDIN open even if not attached , which means keeping standard input open even if nobody is using it.
  • -t : Allocate a pseudo-TTY , which roughly means it is allowed to use a terminal.
  • -p : Publish a container's port(s) to the host (default []) . It looks like a list can be passed in; the way I wrote it above is like adding elements to that list one by one. Each value has a colon: the left side is the host (bare metal), the right side is the container.
  • -v : Bind mount a volume (default []) . What I use here is a bind mount , which directly binds a directory outside the container to one inside it; again the left side of the colon means outside the container. Of course, -v also has another equivalent, more explicit notation.
  • --name : Assign a name to the container , used to give the container a name. If you don't specify one, Docker randomly picks an interesting name that looks like it means something.

There is no required order among the options above (though an option name and its value have to come one right after the other).

As for what comes after the -- :

  1. The first item is the name of the image from which the container is created (and started).
  2. The second item is the startup command: if the container is later committed as an image, then that image's default ENTRYPOINT (the default thing executed at startup) will be this. (If no ENTRYPOINT is specified, it falls back to something like /bin/sh -c .) (If you want the container to behave like an actual machine once started, you can use /usr/sbin/init ; if not, you can replace /usr/sbin/init with sh (or, written in full, /usr/bin/sh ); a typical application container just starts an application, e.g. /usr/bin/env python xxx.py . Note that of course it must be started in the foreground, not forked to the background .)

For -- itself:

  • It is not required, but writing it out is recommended for readability. In principle, if a program handles options in the standard GNU or UNIX style, then when you leave it out the program should work out for itself where the boundary lies, effectively filling it in before executing. And two dashes are no great burden, so strictly it is neither necessary to write nor necessary to omit; I suggest writing it. Wherever it can be written, writing it makes every part of the command clearer to whoever reads it. I think something like seq -- 4 is about the only case where the -- really needs to be omitted.

Some small things...

In fact, the -t -i options of docker run mystified me for quite a while; the only part I'm not confused about is what happens when they are used together with -d . In the example above, -t -i can simply be omitted, with no effect on our purpose.

One more important thing: if --privileged=true is among your options, that docker run command can definitely affect the host! If your host is running a desktop, then:

  • If --privileged=true is configured together with -t , then after the docker run command executes, the desktop you are using may be replaced by the terminal inside the container!! (For this I have found no remedy other than rebooting.)
  • If --privileged=true is configured but without -t , then after the docker run command executes, the desktop session you are using gets logged out... Of course, you can log in again just as if the machine had only just started. (Less destructive than the previous case, but still not exactly harmless.)

(The phenomena above were observed on CentOS 7 with the commands mentioned earlier. After asking around: --privileged=false is the default, in which case root inside the container is just an ordinary user outside the container; with --privileged=true , root inside the container is equivalent to root outside the container.)

If pulling images from Docker Hub with the default settings is too slow, see whether this page helps you: https://mirrors.ustc.edu.cn/help/dockerhub.html .

Enter the container

Use docker exec .

In fact, this exec doesn't quite mean what Linux's exec command means, at least not in how it is used. You can put whatever command you like into the xxxx of exec bash -c 'xxxx' and try the effect; there may be surprises (so please open a separate shell to try it)...

Example:

docker exec -ti -- playing-abc bash

The -t and -i here should mean the same as in docker run above, or possibly not; it's best to run docker exec --help and take a look. 🙃

The playing-abc here can be replaced with the container's ID (long or short both work; the one printed when it was created is the long one, and docker ps shows the short one), and it must be the first item after -- .

And that bash is the command you want executed in the container; it can be followed by any number of arguments. In other words, everything after the container name is the command and its arguments (it's as if the later part is just arguments handed over to the container name, which helps you understand why this subcommand is called exec . But that's not the focus of this article; study it yourself if you're interested, and of course you don't have to be...🐛). For example, you can write these to try the effect:

docker exec -ti -- playing-abc bash -c hostname
docker exec -ti -- playing-abc hostname
docker exec -ti -- playing-abc python
docker exec -ti -- playing-abc iex xxx.iex

(Actually, look carefully: except for the third line, you never enter the interactive interface of the executed program, so in those cases -ti isn't needed at all.)

Of course, the container may not have a bash program (many slimmed-down images don't), or no python program, but only sh and ash or even only the former; in that case just change the corresponding part.

(Concept analysis: a program refers to a file, or maybe a bunch of files, and is the basis for creating processes; a process is the running thing (you could call it an instance ). The so-called running of a program really means using that program to create process instances (one, or a series). So, can you find the one-to-one correspondence with the concepts in Docker? 🙃🐌🐌)

To exit, you can execute exit or press the Ctrl-D key combination.

Other little things...

Not everything needs the exec subcommand. For example, if you want to check what processes are running in the container, the docker container top subcommand is a ready-made solution , e.g. docker container top -- playing-abc . This doesn't really count as entering the container : nothing inside the container can perceive that the query happened, and no process is started inside the container to carry it out; in theory its performance impact is also the smallest.

You can run this to see the help and find more operations that interest you:

docker container --help

Also, not all base images have /usr/sbin/init , especially ones designed purely as base images for building application images.

For that situation, we should realize that instead of putting docker run in the background (the -d option) and then going in to do things, we can let the work to be done directly become the [entry point].

And without -d , you stay inside this command while it runs: your interactive text interface is blocked, and you can see its output. You can try the following examples:

docker run --name making--funnyalp -- docker.io/library/alpine:latest apk add bash luajit

This example starts a container from an image with only a single process in it (the apk add bash luajit command), which installs bash and luajit into the container. Once the installation finishes, that only process exits, and the container stops as well. (But that's fine: a stopped container can still be committed, and committing is our goal. Its being stopped just spares the machine the extra work of pausing the container (though to make sure the state really is frozen during the commit, the option that requests a pause will still be written when committing). (The details of committing are described later.))

Of course, if you think the download is too slow, generally speaking, you can do this:

docker run --name making--funnyalp -- docker.io/library/alpine:latest sh -c '
    sed -i.bak2 s/dl-cdn.alpinelinux.org/mirrors.ustc.edu.cn/g /etc/apk/repositories ;
    apk add bash luajit'

Or like this (in two steps; the image then also ends up with two layers):

docker run --name making--alpine-ustc -- docker.io/library/alpine:latest sed -i.bak2 s/dl-cdn.alpinelinux.org/mirrors.ustc.edu.cn/g /etc/apk/repositories &&
docker commit -p -a hm -m 'chg repo -> ustc' -- making--alpine-ustc hm.io/tools/alpine-ustc:latest &&
docker rm -- making--alpine-ustc ;

docker run --name making--funnyalp -- hm.io/tools/alpine-ustc:latest apk add bash luajit &&
docker commit -p -a hm -m 'add: bash luajit' -- making--funnyalp hm.io/tools/funnyalp:latest &&
docker rm -- making--funnyalp ;

# test:

docker run --rm --name testing--funnyalp -t -i -- hm.io/tools/funnyalp:latest bash ;

# and just press Ctrl-D to exit bash
# --- this container will also exit if nothing is running in it
# --- and if the container exits it will be removed automatically.

# because we need to do something in the bash manually,
# we need the -t and -i options.

In fact, that -ti and -d can be put together really is confusing: how are you supposed to operate it manually after sending it to the background? Presumably it means the thing is in the background yet can still accept standard input, and it also holds a terminal (just not the terminal you yourself are sitting at). (And as said before, it's generally not recommended to use the option configuration that [makes root inside the container have the permissions of root outside the container].)

So what exactly are -t and -i ?

In fact, you can experiment with the docker exec command:

(Assuming you have already used docker run -d --name c7 -- centos:7 /usr/sbin/init to launch a container whose internal process with ID 1 is systemd )

echo aaa | docker exec -i -- c7 cat ## prints aaa
echo aaa | docker exec -ti -- c7 cat ## errors with: the input device is not a TTY
echo aaa | docker exec -- c7 cat ## blocks; whatever you type on the console gets no reaction
cat ## it waits for you to type whatever you like by hand and spits the same content back at you as you go. Press `Ctrl-D` to end the input
docker exec -ti -- c7 cat ## it waits for you to type whatever you like by hand and spits the same content back at you as you go. Press `Ctrl-D` to end the input

Let me explain them one by one.

First, what the first three commands have in common is this: the thing whose standard input receives the string aaa is the docker command itself!

The difference lies in:

  • The first line shows what keeping standard input open means: the docker command takes what it receives on its own standard input and treats it as content received on the standard input of the command inside the container (here, cat ).

    (As is well known... the cat command spits whatever it receives on its standard input back out on its standard output.) (Also well known... generally the standard output of the last command in a pipeline is connected to your screen, so its output gets printed; that's the default, though it could of course be redirected into a file instead (or anywhere else)...)

  • In the second line, standard input is opened, but it is opened together with a terminal. What is a terminal? Roughly: when you poke at the keyboard by hand, you are using the terminal to feed something into the command's standard input. The error presumably complains that you claimed you wanted to operate things by hand, yet you aren't operating by hand at all and instead hand over a ready-made blob of content; it finds that unreasonable, so it reminds you and stops, for safety's sake...

    Adding -t here means you have made a mistake, and one mistake can lead step by step into danger, so it is being kind by reminding you and refusing to work.

  • In the third line there is no -i . So the docker command still receives the aaa on its own standard input (as we all know, every process's standard input is its own, even though they are all called stdin and can be reached via /dev/stdin ), but it does nothing with it; in particular, it does not forward it to the cat process started in the container named after the command. So the cat inside the container never feels that anything arrived on its standard input, and it waits forever.

    Waits forever for what? The fourth and fifth lines explain that.

  • The fourth and fifth lines have the same effect: you can type content at any time, and it deals with it at any time, spitting the same thing right back out to you (as said before, a process started from the cat program does nothing but shovel the contents of its standard input straight to its standard output, and the standard output is hooked up to your character interface by default). The reason you are able to do anything by hand at all is precisely this thing called a terminal . On the command line it exists by default, otherwise the command line couldn't be used; docker does not necessarily need manual operation, so there it is opt-in.

    Why is there no reaction in the third line, then? Because that cat has neither standard input (i.e. it cannot get the aaa ) nor a terminal (i.e. it cannot be operated by hand), yet it is started all the same (which is equivalent to running the fourth line inside the container), so it can only sit there forever, waiting for a manual input you have no way to give it.

What goes for docker exec is actually the same for docker run :

echo aaa | docker run --rm -i -- busybox cat ## prints aaa
echo aaa | docker run --rm -ti -- busybox cat ## errors with: the input device is not a TTY
echo aaa | docker run --rm -- busybox cat ## waits a while, then exits by itself.
docker run --rm -ti -- busybox cat ## it waits for you to type whatever you like by hand and spits the same content back at you as you go. Press `Ctrl-D` to end the input

Here we're doing nothing more than running cat as the entrypoint . That is, for the containers created by the four lines above (because of --rm they are all containers that get deleted automatically as soon as they stop), while they are running there is only the cat command, that one single process. Only the third line behaves differently: an entrypoint (here, cat ) that waits on a terminal ought to be given a terminal, but it isn't given one (without -t none is given by default), so it gets killed; once it is killed the container has no process left, so by Docker's rules the container stops, and because of the --rm option the container is deleted automatically once it stops.

At this point someone may ask: look, you've tried -i , -ti and neither; what about only -t ?

You can try it yourself. The effect is quite reasonable: you can type, because the process inside is attached to a terminal, which lets you poke at it by hand whenever you like; but your input is never perceived by any process in the container, because standard input is not opened, and on a Unix-like system a process has only these few ways of getting input: arguments and standard input . With neither of them available here, of course it can't hear you... (I won't go deeper into this here; think about it however you like...)

But! -t on its own can still be useful, for example in this case:

  • You are using a base image that has no such thing as /sbin/init
  • So you change /sbin/init to sh , but you find the started container immediately stops again:

    Of course it does! An sh that got no terminal has no reason to feel it must keep running and wait for someone's manual input, so it stops; and once the only process in the container is gone, the container stops too;

  • You want to keep this base-image container running but don't care what the entrypoint is; then you can let sh be the entry point and give it -t but not -i , ensuring that it keeps waiting for input while no input is ever required. That way the container keeps running, with an sh that does nothing living on as the process with internal PID 1 .

    ——

    And of course, that sh could just as well be replaced by cat . Don't believe it? Put cat into terminal mode: I mean, directly run a bare cat command and watch what effect your manual input has; then you'll know what I mean.

    Example command:

    docker run -t --name rust-playing -d -- rust:slim cat
    

Then the container rust-playing can run forever, and its internal process number 1 is the process created by that cat command, which keeps waiting for manual input (input that will never come, because there is no -i ).

How do you check that there really is only that one cat ? Execute this: docker container top -- rust-playing . (That is, assuming there is no process-management software in the container; a base image built for applications doesn't need such a thing.)

Commit the container as an image

(The trivia above already referred to the examples below; after reading this part you can go back and re-read those small things above.)

Use docker commit ; the details can be viewed with docker commit --help .

Example: suppose that by now I have entered playing-abc and changed a few things:

docker commit -p -m 'add some fun, have some command history!' -- playing-abc e.f/given/efg:0.1

Here the option -p means pausing the container during the commit, -m is for writing a note about this commit (a bit like git 's commit command), and after the -- the first item should be the container name or container ID (long or short), while the second item is the full name of the new image you are about to create. It can be an almost entirely new name like e.f/given/efg:0.1 here, or it can differ from the previous image only in the TAG part, e.g. a.b/center/abc:v2-dev .

Images can be renamed with docker tag : docker tag -- e.f/given/efg:0.1 a.b/center/abc:v2-dev creates a new alias a.b/center/abc:v2-dev for the image e.f/given/efg:0.1 . The original full name is really just an alias too, and the two have equal status ; both are merely names attached to the same pile of files on your hard disk, so adding a name doesn't cost more storage, it just gives you one more name by which to find the same image .

Export image

It is to save the image as an offline package.

Use docker save . It is recommended to look at the execution output of docker save --help .

Example:

mkdir -p -- "$(dirname e.f/given/efg:0.1)" &&

docker save -- e.f/given/efg:0.1 |
    xz -T0 --best > e.f/given/efg:0.1.tar.xz ;

This is the command I actually use; it can be executed as-is even with the formatting (line breaks) removed.

This saves the image under the specified image name ; when the file's content is later fed to the standard input of the docker load command, you get back an image with that name.

Where the image name e.f/given/efg:0.1 is written, you can also put the image's ID instead, but then no name is carried along when importing.

The name an image automatically gets when imported is determined by which alias of the image was used when exporting.

The import- and export-related subcommands read from standard input and write to standard output by default. The export is always TAR format , which is not compressed; you can throw that output into a pipeline and pick whatever compression command you like. To compress it, I used the xz command with dynamic concurrency turned on ( -T0 ) and the highest compression level; its result also goes to standard output (just like docker save ), so I redirect it into a file. Here it is redirected into the file e.f/given/efg:0.1.tar.xz under the current directory, so the directory it lives in needs to be created first, hence mkdir -p -- "$(dirname e.f/given/efg:0.1)" . The && means the thing on its right runs only if the thing on its left succeeded. (Nothing on the right? There's a line break on the right! Line breaks and spaces are all just whitespace; turn the line breaks into spaces and collapse consecutive spaces into one, and it's still exactly the same command. By "the thing on the right" I mean the command on the right, not the characters.)

Import example:

cat -- e.f/given/efg:0.1.tar.xz | docker load

This way, the content of e.f/given/efg:0.1.tar.xz reaches docker load through its standard input .

As long as the file really came from docker save , the command above generally won't fail. On success there are a few lines of output.

