The "K8S Ecological Weekly" mainly covers noteworthy news from the Kubernetes ecosystem that I came across during the week. Welcome to subscribe to the column "k8s Ecology".

Off topic

Hello everyone, my name is Zhang Jintao.

This time I am putting this section first, to talk a bit about what has been going on recently.

The "K8S Ecological Weekly" has been on hiatus for more than two months. During this period some readers reached out to ask for updates. Thank you for your continued attention!

There are two main reasons: one is that I have simply been very busy recently; the other is that I have done some reflection and summarizing that I would like to share with you.

The "K8S Ecological Weekly" started in March 2019 and is now in its fourth year. I have been thinking about what it brings to me and to the readers who follow it.

For me, it has been a process of collecting and organizing information, sharing it, getting feedback, and growing a great deal along the way.

What pleases me more is that, compared with the dailies and weeklies published by other people and communities, the "K8S Ecological Weekly" does not simply repost links or copy changelogs.
In each issue, beyond the news itself, I add my own views along with related background knowledge I have picked up.
Some issues also include code analysis, hands-on evaluations, and comparisons. In that sense, the "K8S Ecological Weekly" leans toward deeper technical content.

Based on the analysis above and some personal reflection, I have decided to put even more of my own thinking and understanding into future issues of the "K8S Ecological Weekly",
and, while continuing to provide this valuable information, to have more exchanges with readers.

The Ingress NGINX project pauses accepting new features to focus on stability improvements

Friends who know me may be aware that I am a maintainer of the Kubernetes Ingress NGINX project.

After a long discussion, our development team noted that the Kubernetes Ingress NGINX project has been running for 6 years since 2016.
In those 6 years it has reached 13K stars on GitHub, with 800+ contributors having contributed to the project,
and it has received 4000+ issues and 4000+ PRs.

Over that time the Ingress NGINX project's features have been greatly enriched, but like any software, it inevitably has various bugs and vulnerabilities.
At present, when a feature is needed, someone quickly implements it (thanks to everyone contributing PRs), but when bugs or vulnerabilities appear, few people step up to fix them. (This is common in open source projects: fixing bugs or vulnerabilities requires far more familiarity with the project itself than adding new features does.)

This situation places a heavy burden on the maintainers: we need to spend time triaging issues, reviewing PRs for new features, fixing bugs and vulnerabilities, and weighing whether new features might cause ripple effects.

From the numbers above you can see that the project and its community are quite active. We maintain and develop this project in our spare time, so the overall pressure is considerable and we cannot always respond promptly.

Recently, several security vulnerabilities were reported in the Ingress NGINX project (they have since been fixed), but during the remediation process we found it difficult to fix these vulnerabilities cleanly: any change may trigger a chain reaction, such as introducing other vulnerabilities or affecting certain user-facing features and behaviors.

Based on the considerations above, we unanimously decided to pause accepting new features and focus on fixes and on improving the stability of the Ingress NGINX project. You may have a new PR waiting to be merged;
I am sorry, and I hope you can understand. Once we have improved the project's stability, we will be able to iterate faster!

Our current plan is to complete this goal within 6 months, and we have identified the specific work that needs to be done. You can follow our progress at the link below:
https://github.com/kubernetes/ingress-nginx/projects/52

At the same time, we are sending out an official community survey to help us decide the project's direction after this freeze period. If you are an Ingress NGINX user, please fill out the survey. Thank you!

https://www.surveymonkey.com/r/ingressngx2022

Upstream progress

Introducing a kuberc configuration file for kubectl

KEP-3104 aims to introduce a new configuration file, kuberc, for kubectl, used to hold user-defined preferences. Many projects and tools have something similar; in Vim, for example, you can point to your own configuration file with -u , or rely on the default ~/.vimrc for customization.

The advantage is that kubeconfig can stay focused, keeping only cluster-related information and user credentials, while the user's personal preferences live in a separate file. Concretely, the configuration file would look like this:

apiVersion: v1alpha1
kind: Preferences

command:
  aliases:
    - alias: getdbprod
      command: get pods -l what=database --namespace us-2-production
      
  overrides:
    - command: apply
      flags:
        - name: server-side
          default: "true"
    - command: delete
      flags:
        - name: confirm
          default: "true"
    - command: "*"
      flags:
        - name: exec-auth-allowlist
          default: /var/kubectl/exec/...

It looks quite intuitive: you can use it to add aliases and override some default flags, so you no longer need to define lots of shell aliases, and you can type far fewer characters when using kubectl.
Until this feature lands, I recommend the kubectl-aliases project. It contains a large set of aliases that make working with kubectl easier.
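For reference, here are a few aliases in the style of what kubectl-aliases generates (the project auto-generates hundreds of combinations; the exact names below are illustrative, not a complete or authoritative list):

```shell
# Illustrative aliases in the style of the kubectl-aliases project.
alias k='kubectl'                           # base command
alias kg='kubectl get'                      # get resources
alias kgpo='kubectl get pods'               # get pods
alias kgpooyaml='kubectl get pods -o=yaml'  # get pods as YAML
```

With these in place, `kgpo -n kube-system` expands to `kubectl get pods -n kube-system`, which is exactly the kind of typing the kuberc `aliases` section aims to eliminate without shell-level configuration.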

But there are drawbacks too. Just as every Vim user ends up with a personal vimrc, you develop habits around your own configuration, and working on a machine without it feels a little uncomfortable.
It may also add an extra step when troubleshooting (for example, a bad setting in kuberc can itself be the source of a problem).

For example, when I troubleshoot Vim I usually start with vim -u /dev/null to rule out any custom configuration. Likewise, once this feature is fully implemented, when troubleshooting kubectl you will need to remember to use something like kubectl --kuberc /dev/null to avoid interference from your local custom configuration.

PodSecurity feature reaches GA

Recently, the PodSecurity feature officially graduated to GA in #110459 · kubernetes/kubernetes . If I remember correctly, this is probably one of the fastest features to go from introduction to GA.

PodSecurity was introduced as an alpha feature in Kubernetes v1.22 as the replacement for PodSecurityPolicy, and reached beta in v1.23. With the PR above it officially became GA in v1.25 and is enabled by default. As you can see, the whole process moved very quickly.

PodSecurity defines 3 modes:

  • Enforce: if the Pod violates the policy, it will not be created;
  • Audit: if the Pod violates the policy, a record is written to the audit log, but the Pod is still created normally;
  • Warn: if the Pod violates the policy, a warning is returned to the user (kubectl prints it), but the Pod is still created normally;

Using it is also very simple: just add a pod-security.kubernetes.io/<mode>=<standard> label to the namespace.
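As a sketch, this can be done declaratively in a Namespace manifest (the namespace name here is hypothetical; restricted is one of the three standards, alongside privileged and baseline):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: my-app    # hypothetical namespace
  labels:
    # Pods violating the "restricted" standard are rejected:
    pod-security.kubernetes.io/enforce: restricted
    # Violations are also written to the audit log and returned as warnings:
    pod-security.kubernetes.io/audit: restricted
    pod-security.kubernetes.io/warn: restricted
```

For an existing namespace, `kubectl label ns my-app pod-security.kubernetes.io/enforce=restricted` achieves the same effect.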

But if your cluster is on an older version and you want a more general approach, I suggest reading my earlier articles "Understanding the Admission Controller in Kubernetes"
and "Cloud Native Policy Engine Kyverno (Part 1)",
which cover unified policy configuration using admission controllers, OPA/Gatekeeper, Kyverno, and the like.
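As an illustration of that approach, a minimal Kyverno policy enforcing one pod-security rule might look like the following sketch (field names follow Kyverno's v1 API at the time of writing; the policy name is hypothetical, and you should check the Kyverno docs for your version):

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-run-as-non-root   # hypothetical policy name
spec:
  validationFailureAction: enforce  # reject violating Pods ("audit" only records them)
  rules:
    - name: check-run-as-non-root
      match:
        resources:
          kinds:
            - Pod
      validate:
        message: "Pods must set securityContext.runAsNonRoot to true."
        pattern:
          spec:
            securityContext:
              runAsNonRoot: true
```

Note how validationFailureAction mirrors PodSecurity's Enforce/Audit split, which is why such policy engines work as a general substitute on clusters that predate PodSecurity.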


Welcome to follow my WeChat official account [MoeLove]

