Cluster Security Mechanisms
The apiserver coordinates every request through three stages: authentication > authorization > admission control
Authentication
Transport security: the insecure port 8080 is not exposed externally and can only be reached from inside the cluster; external access goes through port 6443.
Common client authentication methods
- HTTPS certificate authentication, based on CA certificates
- HTTP token authentication, identifying the user by a token
- HTTP basic authentication, username + password
Authorization: RBAC
RBAC (role-based access control): a role is granted access to certain resources, and once a user is bound to that role, the user holds the role's access permissions.
Roles
- Role: access within a specific namespace
- ClusterRole: access across all namespaces
Role bindings
- RoleBinding: binds a Role to a subject
- ClusterRoleBinding: binds a ClusterRole to a subject
Subjects
- user: a user
- group: a user group
- serviceAccount: a service account
Demo
1. Create a namespace
kubectl create ns roledemo
2. Create a Pod in that namespace
kubectl run nginx --image=nginx -n roledemo
3. Create a role
Tip: this role only grants get and list on pods (a sketch of the file is shown below)
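The notes apply rbac-role.yaml without showing its content; a minimal sketch of what such a file could look like, assuming the role name pod-reader (not given in the original):

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: roledemo
  name: pod-reader        # assumed role name, not given in the original notes
rules:
- apiGroups: [""]         # "" means the core API group
  resources: ["pods"]
  verbs: ["get", "list"]  # matches the tip above: only get and list on pods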
# Create the role
kubectl apply -f rbac-role.yaml
# View the role
kubectl get role -n roledemo
4. Bind the role to a user (rbac-rolebinding.yaml, sketched below)
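rbac-rolebinding.yaml is likewise not shown in the notes; a minimal sketch, assuming the user name mary (matching the kubeconfig used later) and the pod-reader role from the sketch above:

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods          # assumed binding name
  namespace: roledemo
subjects:
- kind: User
  name: mary               # assumed user; must match the CN in the client certificate
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader         # assumed role name from rbac-role.yaml above
  apiGroup: rbac.authorization.k8s.io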
# Create the role binding
kubectl apply -f rbac-rolebinding.yaml
# View the role and role binding
kubectl get role,rolebinding -n roledemo
5. Identify the user with a certificate
This step needs several certificate files from the TLS directory; copy them over first.
Run our script with the following command:
./rbac-user.sh
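The script itself is not included in these notes; a sketch of what such a script typically does, assuming the cluster CA files ca.pem/ca-key.pem are in the current directory and the user is mary:

# Generate a client key and a certificate signed by the cluster CA; the CN becomes the user name
openssl genrsa -out mary-key.pem 2048
openssl req -new -key mary-key.pem -out mary.csr -subj "/CN=mary"
openssl x509 -req -in mary.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -out mary.pem -days 365

# Assemble a kubeconfig that authenticates as mary (replace <apiserver-ip> with your master address)
kubectl config set-cluster kubernetes --server=https://<apiserver-ip>:6443 \
  --certificate-authority=ca.pem --embed-certs=true --kubeconfig=mary-kubeconfig
kubectl config set-credentials mary --client-certificate=mary.pem --client-key=mary-key.pem \
  --embed-certs=true --kubeconfig=mary-kubeconfig
kubectl config set-context default --cluster=kubernetes --user=mary --kubeconfig=mary-kubeconfig
kubectl config use-context default --kubeconfig=mary-kubeconfig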
Test
# Switch namespaces to compare the output
kubectl get pods -n roledemo --kubeconfig=./mary-kubeconfig
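For comparison, querying a namespace the role does not cover should be rejected (the exact error wording may differ):

kubectl get pods -n default --kubeconfig=./mary-kubeconfig
# expected: Error from server (Forbidden), since mary only has get/list on pods in roledemo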
Admission Control
This is the list of admission controllers that filters requests to the api-server: if the requested content is allowed by the list it passes, otherwise it is rejected.
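The set of admission controllers is chosen when the kube-apiserver starts; for example (the plugin values below are only illustrative, check your own apiserver manifest):

kube-apiserver --enable-admission-plugins=NodeRestriction,LimitRanger,ResourceQuota ...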
Ingress
Background
Previously we exposed a port to the outside and accessed the application via IP + port.
This was done with a Service of type NodePort:
the same port is opened on every node,
so the application can be reached through any node's IP + port.
But NodePort has some drawbacks:
ports cannot be reused, so each port serves exactly one application;
in practice access is by domain name, with different domains routed to different backend services.
Relationship between Ingress and Pods
Pods and Ingress are linked through a Service: the Ingress acts as the unified entry point, and the Service selects the group of Pods behind it.
- First, the Service selects our Pods
- The Ingress, as the entry point, routes to the Service, which in turn discovers the group of Pods
- Once the Pods are found, load balancing and similar operations can be applied
Ingress workflow
Different domain names map to different Services, and each Service manages its own set of Pods.
Note that the Ingress controller is not a built-in component; it has to be installed separately.
Using Ingress
- Deploy an Ingress controller (download the official manifest)
- Create Ingress rules (configure rules for the target Pods / namespace)
Exposing an application with Ingress
1. Create an nginx application and expose its port
# Create the pod
kubectl create deployment web --image=nginx
# View
kubectl get pods
Expose the port externally
kubectl expose deployment web --port=80 --target-port=80 --type=NodePort
kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 147m
web NodePort 10.107.12.59 <none> 80:30812/TCP 38s
2. Deploy the ingress controller
vim ingress-con.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: tcp-services
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: udp-services
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nginx-ingress-serviceaccount
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: nginx-ingress-clusterrole
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - endpoints
      - nodes
      - pods
      - secrets
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - events
    verbs:
      - create
      - patch
  - apiGroups:
      - "extensions"
      - "networking.k8s.io"
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - "extensions"
      - "networking.k8s.io"
    resources:
      - ingresses/status
    verbs:
      - update
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  name: nginx-ingress-role
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - pods
      - secrets
      - namespaces
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - configmaps
    resourceNames:
      # Defaults to "<election-id>-<ingress-class>"
      # Here: "<ingress-controller-leader>-<nginx>"
      # This has to be adapted if you change either parameter
      # when launching the nginx-ingress-controller.
      - "ingress-controller-leader-nginx"
    verbs:
      - get
      - update
  - apiGroups:
      - ""
    resources:
      - configmaps
    verbs:
      - create
  - apiGroups:
      - ""
    resources:
      - endpoints
    verbs:
      - get
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: nginx-ingress-role-nisa-binding
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: nginx-ingress-role
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: nginx-ingress-clusterrole-nisa-binding
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: nginx-ingress-clusterrole
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress-nginx
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
      app.kubernetes.io/part-of: ingress-nginx
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/part-of: ingress-nginx
      annotations:
        prometheus.io/port: "10254"
        prometheus.io/scrape: "true"
    spec:
      hostNetwork: true
      # wait up to five minutes for the drain of connections
      terminationGracePeriodSeconds: 300
      serviceAccountName: nginx-ingress-serviceaccount
      nodeSelector:
        kubernetes.io/os: linux
      containers:
        - name: nginx-ingress-controller
          image: lizhenliang/nginx-ingress-controller:0.30.0
          args:
            - /nginx-ingress-controller
            - --configmap=$(POD_NAMESPACE)/nginx-configuration
            - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
            - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
            - --publish-service=$(POD_NAMESPACE)/ingress-nginx
            - --annotations-prefix=nginx.ingress.kubernetes.io
          securityContext:
            allowPrivilegeEscalation: true
            capabilities:
              drop:
                - ALL
              add:
                - NET_BIND_SERVICE
            # www-data -> 101
            runAsUser: 101
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
            - name: https
              containerPort: 443
              protocol: TCP
          livenessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 10
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 10
          lifecycle:
            preStop:
              exec:
                command:
                  - /wait-shutdown
---
apiVersion: v1
kind: LimitRange
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  limits:
    - min:
        memory: 90Mi
        cpu: 100m
      type: Container
Deploy
kubectl apply -f ingress-con.yaml
Check the status
kubectl get pods -n ingress-nginx
Note hostNetwork: true, which puts the controller directly on the node's network; it is set to true so the controller can be reached from outside later.
Creating Ingress rules
1. Create the Ingress rule file, ingress-h.yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: example-ingress
spec:
  rules:
  - host: ingressdemo.com
    http:
      paths:
      - path: /
        backend:
          serviceName: web
          servicePort: 80
Deploy
kubectl apply -f ingress-h.yaml
ingress.networking.k8s.io/example-ingress created
View the pod
kubectl get pods -n ingress-nginx -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-ingress-controller-766fb9f77-g8hgz 1/1 Running 0 5m3s 10.206.0.2 k8s-node1 <none> <none>
On k8s-node1, check port 80
netstat -antp |grep 80
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 29326/nginx: master
2. Edit the local hosts file and add a domain entry (pointing at the public IP of k8s-node1)
vim /etc/hosts
119.45.233.2 ingressdemo.com
3. The application can now be accessed via the domain name
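For example, assuming the hosts entry above is in place:

curl http://ingressdemo.com
# should return the default nginx welcome page served by the web Service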