KubeSphere 3.2.0 has been released! It adds a complete set of monitoring and management pages for the project gateway, and introduces a cluster gateway that provides global Ingress capabilities at the cluster level. Of course, you can still deploy and use a third-party Ingress controller. Taking Apache APISIX Ingress Controller as an example, this article shows how to use KubeSphere to quickly set up two different types of gateways for a Kubernetes cluster and monitor their status.
This article is divided into the following parts:
- A walkthrough of the new project gateway management interface in KubeSphere
- Quickly deploying Apache APISIX Ingress Controller through KubeSphere's application management capabilities
- Using KubeSphere's custom monitoring capabilities to collect runtime metrics from the Apache APISIX gateway
Preparation
Install KubeSphere
There are two ways to install KubeSphere: one is to install it directly on Linux (see the documentation: Install KubeSphere on Linux); the other is to install it on an existing Kubernetes cluster (see the documentation: Install KubeSphere on Kubernetes).
The minimal installation of KubeSphere already includes the monitoring module, so no extra components need to be enabled. You can confirm its status under the "Monitoring" tab of the "System Components" page.
Deploy the httpbin demo application
Since we need to demonstrate the gateway's access capabilities, we first need a reachable application to serve as the gateway's backend service. Here we use the kennethreitz/httpbin container (the image behind httpbin.org) as the demo application.
In KubeSphere, we can create a new project or use an existing one. On the project page, select "Services" under "Application Workloads" to create a stateless workload directly, together with its accompanying Service.
Use the kennethreitz/httpbin container's default port 80 as the service port. After creation, make sure the corresponding httpbin entries appear on both the "Workloads" and "Services" pages, as shown below.
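If you prefer to create these resources from YAML rather than the form, a minimal sketch of the Deployment and Service might look like the following (the namespace demo-project is an assumption; use your own project name):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: httpbin
  namespace: demo-project  # assumption: replace with your own project (namespace)
  labels:
    app: httpbin
spec:
  replicas: 1
  selector:
    matchLabels:
      app: httpbin
  template:
    metadata:
      labels:
        app: httpbin
    spec:
      containers:
        - name: httpbin
          image: kennethreitz/httpbin
          ports:
            - containerPort: 80  # the container's default HTTP port
---
apiVersion: v1
kind: Service
metadata:
  name: httpbin
  namespace: demo-project
spec:
  selector:
    app: httpbin
  ports:
    - name: http
      port: 80
      targetPort: 80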
The new look of the project gateway
The project gateway is a feature that has existed since KubeSphere 3.0: "The gateway in a KubeSphere project is an NGINX Ingress controller. KubeSphere's built-in mechanism for HTTP load balancing is the application route (Ingress), which defines rules for connecting to services in the cluster. To allow external access to services, users can create routing resources that define the URI path, backend service name, and other information."
Next, enter the project where the httpbin service has been deployed, open the "Gateway Settings" page under "Project Settings", and click "Enable Gateway". For convenience, select NodePort as the "Access Mode".
After confirming, return to the gateway page, wait a moment, and refresh: you will see the deployment has completed as shown in the figure below, with two node ports assigned by default for NodePort. Next, click the "Manage" button in the upper right corner and choose "View Details".
What we see now is the new project/cluster gateway monitoring page introduced in 3.2.0! There is obviously no data yet, because no traffic has passed through the gateway. Next, we need to create an application route (Ingress) for the httpbin service.
From the "application load", enter the "application routing" page, and start to "create" a route. After naming the route httpbin
, we specify a domain name that is convenient for testing, and set the "path" to /
, select "service" httpbin
and "port" 80
.
httpbin
skip the advanced settings and complete the route creation in the next step. You can get a new 061a70869b6143 application route item as shown in the figure below.
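Behind the form, the created route is an ordinary Kubernetes Ingress resource. Roughly, it corresponds to the following sketch (the host matches the test domain used below; the namespace is again an assumption):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: httpbin
  namespace: demo-project  # assumption: the project where httpbin is deployed
spec:
  rules:
    - host: httpbin.ui     # the test domain used in this article
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: httpbin
                port:
                  number: 80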
Next, we can access the httpbin application through the project gateway's NodePort address and the specified domain name (for example, http://httpbin.ui:32516 here). Refresh the page or trigger some of its request-generating functions at will, then open the gateway details page: some of the gateway's built-in monitoring metrics now show data on the panel.
Specify NodePort ports for the gateway
In public cloud environments, if you use NodePort to expose access, the ports that can be opened are usually limited and controlled, so we need a way to modify the NodePort ports used by the gateway.
Since gateways are managed uniformly by KubeSphere, modifying the NodePort ports used by a gateway requires access to the kubesphere-controls-system project. There you will find the gateway service named kubesphere-router-<project-namespace> on the "Services" page under "Application Workloads", where NodePort external access is enabled. The NodePort ports themselves have to be modified directly through "Edit YAML".
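For reference, the part of the gateway Service that needs editing looks roughly like the sketch below; the port names and nodePort values are examples only, so pick node ports allowed in your environment:

apiVersion: v1
kind: Service
metadata:
  name: kubesphere-router-<project-namespace>
  namespace: kubesphere-controls-system
spec:
  type: NodePort
  ports:
    - name: http
      port: 80
      targetPort: 80
      nodePort: 30080  # example value: change to an allowed node port
    - name: https
      port: 443
      targetPort: 443
      nodePort: 30443  # example value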
Start using the cluster gateway
KubeSphere 3.1 only supported project-level gateways. If a user has many projects, this inevitably wastes resources, and the gateways of different workspaces are independent of each other.
KubeSphere 3.2.0 adds support for a cluster-level global gateway: all projects can share the same gateway, and project gateways created earlier are not affected. The gateways of all projects can also be managed and configured centrally, so administrators no longer need to switch between workspaces to configure gateways.
If you are on KubeSphere 3.2.0, we recommend using the cluster gateway to unify application routing across the entire cluster. Enabling it is very simple: with an account that has cluster management permissions, enter a cluster it can manage (here we use the default cluster as an example) and open "Gateway Settings" under "Cluster Settings", where you can "Enable Gateway" and also view the project gateways.
Enabling the cluster gateway and adjusting its NodePort access ports work basically the same way as for the project gateway described above, so the steps are not repeated here.
⚠️ One thing needs special attention: after the cluster gateway is enabled, project gateways that are already enabled are retained, but projects that have not yet created a gateway can no longer create their own and will use the cluster gateway directly.
The figure below shows the overview of all gateways displayed on the "Gateway Settings" page when a project gateway and the cluster gateway exist at the same time.
Quickly use Apache APISIX Ingress Controller
Apache APISIX is an open source, high-performance, dynamic cloud-native gateway, donated to the Apache Software Foundation in 2019 by Shenzhen Zhiliu Technology Co., Ltd. It is now a top-level Apache project and the most active open source gateway project on GitHub. Apache APISIX currently covers scenarios such as API gateway, load balancing, Kubernetes Ingress, and service mesh.
The community has previously published an introduction to using Apache APISIX as the ingress controller for KubeSphere. This article focuses on details not covered there and combines them with some of KubeSphere's new features to make things more concrete.
Deploy Apache APISIX Ingress Controller
First, add the Apache APISIX Helm Chart repository. Managing the repository yourself is recommended so that its contents stay up to date. After selecting a workspace, add the Apache APISIX repository (repository URL: https://charts.apiseven.com) through "App Repositories" under "App Management".
Next, create a project named apisix-system. On the project page, choose to create an "App" under "Application Workloads" to deploy Apache APISIX, and select the apisix app template to start the deployment.
Why deploy the Helm Chart of the Apache APISIX application instead of deploying the Apache APISIX Ingress Controller directly?
This is because the Apache APISIX Ingress Controller currently has a strong dependency on the Apache APISIX gateway (as shown in the figure below), and deploying the Apache APISIX gateway, Dashboard, and Ingress Controller together through the Apache APISIX Helm Chart is the most convenient approach. This article therefore recommends using the Apache APISIX Helm Chart directly to deploy the whole set of components.
Name the application apisix to avoid name mismatches in the workloads and services of the various components (Gateway, Dashboard, Ingress Controller). In the "App Settings" step of the installation, fill in the values by referring to the configuration below (pay special attention to the parts commented with [Note]; everything else can be adjusted as needed).
global:
  imagePullSecrets: []
apisix:
  enabled: true
  customLuaSharedDicts: []
image:
  repository: apache/apisix
  pullPolicy: IfNotPresent
  tag: 2.10.1-alpine
replicaCount: 1
podAnnotations: {}
podSecurityContext: {}
securityContext: {}
resources: {}
nodeSelector: {}
tolerations: []
affinity: {}
podAntiAffinity:
  enabled: false
nameOverride: ''
fullnameOverride: ''
gateway:
  type: NodePort
  externalTrafficPolicy: Cluster
  http:
    enabled: true
    servicePort: 80
    containerPort: 9080
  tls:
    enabled: false
    servicePort: 443
    containerPort: 9443
    existingCASecret: ''
    certCAFilename: ''
    http2:
      enabled: true
  stream:
    enabled: false
    only: false
    tcp: []
    udp: []
  ingress:
    enabled: false
    annotations: {}
    hosts:
      - host: apisix.local
        paths: []
    tls: []
admin:
  enabled: true
  type: ClusterIP
  externalIPs: []
  port: 9180
  servicePort: 9180
  cors: true
  credentials:
    admin: edd1c9f034335f136f87ad84b625c8f1
    viewer: 4054f7cf07e344346cd3f287985e76a2
  allow:
    ipList:
      - 0.0.0.0/0
plugins:
  - api-breaker
  - authz-keycloak
  - basic-auth
  - batch-requests
  - consumer-restriction
  - cors
  - echo
  - fault-injection
  - grpc-transcode
  - hmac-auth
  - http-logger
  - ip-restriction
  - ua-restriction
  - jwt-auth
  - kafka-logger
  - key-auth
  - limit-conn
  - limit-count
  - limit-req
  - node-status
  - openid-connect
  - authz-casbin
  - prometheus
  - proxy-cache
  - proxy-mirror
  - proxy-rewrite
  - redirect
  - referer-restriction
  - request-id
  - request-validation
  - response-rewrite
  - serverless-post-function
  - serverless-pre-function
  - sls-logger
  - syslog
  - tcp-logger
  - udp-logger
  - uri-blocker
  - wolf-rbac
  - zipkin
  - traffic-split
  - gzip
  - real-ip
  # [Note] Add this plugin so the Dashboard can display service information
  - server-info
stream_plugins:
  - mqtt-proxy
  - ip-restriction
  - limit-conn
customPlugins:
  enabled: true
  luaPath: /opts/custom_plugins/?.lua
  # [Note] The following configuration ensures the Prometheus plugin can expose metrics externally
  plugins:
    - name: prometheus
      attrs:
        export_addr:
          ip: 0.0.0.0
          port: 9091
      configMap:
        name: prometheus
        mounts: []
dns:
  resolvers:
    - 127.0.0.1
    - 172.20.0.10
    - 114.114.114.114
    - 223.5.5.5
    - 1.1.1.1
    - 8.8.8.8
  validity: 30
  timeout: 5
autoscaling:
  enabled: false
  minReplicas: 1
  maxReplicas: 100
  targetCPUUtilizationPercentage: 80
  targetMemoryUtilizationPercentage: 80
configurationSnippet:
  main: ''
  httpStart: ''
  httpEnd: ''
  httpSrv: ''
  httpAdmin: ''
  stream: ''
etcd:
  enabled: true
  host:
    - 'http://etcd.host:2379'
  prefix: /apisix
  timeout: 30
  auth:
    rbac:
      enabled: false
      user: ''
      password: ''
    tls:
      enabled: false
      existingSecret: ''
      certFilename: ''
      certKeyFilename: ''
      verify: true
  service:
    port: 2379
  replicaCount: 3
dashboard:
  enabled: true
  # [Note] Enable NodePort for the Dashboard for easier access later
  service:
    type: NodePort
ingress-controller:
  enabled: true
  config:
    apisix:
      # [Note] Be sure to set this to the namespace where the gateway is deployed
      serviceNamespace: apisix-system
serviceMonitor:
  enabled: true
  namespace: 'apisix-system'
  interval: 15s
After the deployment succeeds, click the application name to enter its details page; under the "Resource Status" tab you can see the deployment and running status of each service, as shown below.
💡 For the default configuration parameters of the other two Helm Charts of the Apache APISIX project, refer to the Dashboard and Ingress Controller values.yaml files respectively.
Use Apache APISIX Dashboard to understand system information
After the Apache APISIX application is deployed, we first check the current status of the Apache APISIX gateway through the Apache APISIX Dashboard. On the "Services" page under "Application Workloads", find apisix-dashboard. Since we enabled NodePort for the Dashboard in the application configuration, it can be accessed directly through its node port.
Log in to the Apache APISIX Dashboard with the default username and password (both admin), then open the "System Information" page to view the information of the Apache APISIX nodes that are currently connected and managed.
Use Apache APISIX Ingress Controller
Let's go back to the "application routing" page, create another route (such as apisix-httpbin
), set the path to /*
httpbin
80
and add the key value of kubernetes.io/ingress.class
: apisix
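The resulting route corresponds roughly to the Ingress sketched below. The key difference from the earlier httpbin route is the kubernetes.io/ingress.class: apisix annotation, which hands the route to the Apache APISIX Ingress Controller instead of the KubeSphere gateway (host and namespace are the same assumptions as before):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: apisix-httpbin
  namespace: demo-project  # assumption: the project where httpbin is deployed
  annotations:
    kubernetes.io/ingress.class: apisix
spec:
  rules:
    - host: httpbin.ui
      http:
        paths:
          - path: /*
            pathType: ImplementationSpecific
            backend:
              service:
                name: httpbin
                port:
                  number: 80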
How do we verify that the route takes effect after creation? First, go back to the Apache APISIX Dashboard and open the "Route" page: the newly created route has been recognized by the Apache APISIX Ingress Controller and automatically added to the Apache APISIX gateway. On the "Upstream" page you can also see an automatically created upstream entry.
Then we return to the "Services" page of the apisix-system
project, find the port corresponding to the apisix-gateway
<domain name specified by the apisix-httpbin application route>:<apisix-gateway external access port> (for example,
httpbin.ui:30408
here). You can access the background services associated with the apisix-httpbin
Custom monitoring for the Apache APISIX gateway
Once the Apache APISIX gateway is in use, it lacks the built-in status monitoring that the native cluster and project gateways provide, but we can make up for this with Apache APISIX's Prometheus plugin and the custom monitoring capabilities that ship with KubeSphere.
Expose the Prometheus monitoring metrics of the Apache APISIX gateway
Since the Prometheus plugin was already enabled when we deployed the Apache APISIX application, all we need to do here is expose the Prometheus metrics endpoint. Go to the "Workloads" page of the apisix-system project, open the apisix deployment's details page, and select "Edit Settings" from "More" in the operations panel on the left.
In the pop-up "Edit Settings" panel, enter apisix
container, find the "Port Settings", add a new prom
to the container's 9091
port, the apisix
workload will restart after saving.
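This edit simply adds a named container port to the apisix Deployment. The relevant fragment of the container spec ends up looking roughly like this:

    spec:
      containers:
        - name: apisix
          # ...existing image and settings unchanged...
          ports:
            - name: prom          # port name referenced by the ServiceMonitor below
              containerPort: 9091 # metrics port exposed by the Prometheus plugin
              protocol: TCP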
Create a ServiceMonitor for the Apache APISIX gateway metrics
Next, we need to connect the exposed metrics endpoint to KubeSphere's built-in Prometheus so that the metrics can be scraped. Since KubeSphere manages Prometheus through the Prometheus Operator, the most convenient and natural way is to create a ServiceMonitor resource that points at the metrics endpoint.
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: apisix
  namespace: apisix-system
spec:
  endpoints:
    - scheme: http
      # [Note] Use the container port name exposed on the workload in the previous step
      targetPort: prom
      # [Note] The path must match the metrics endpoint exposed by apisix
      path: /apisix/prometheus/metrics
      interval: 15s
  namespaceSelector:
    matchNames:
      - apisix-system
  selector:
    matchLabels:
      app.kubernetes.io/name: apisix
      app.kubernetes.io/version: 2.10.0
      helm.sh/chart: apisix-0.7.2
Use kubectl apply -f your_service_monitor.yaml to create the ServiceMonitor resource. After it is created, if you have cluster management permissions, you can also search for the ServiceMonitor CRD on the cluster's CRD management page and find the apisix resource there; subsequent YAML modifications can be made from that page as well.
Connect the Apache APISIX gateway metrics to a custom monitoring panel
Next, we find "Custom Monitoring" in "Monitoring Alarm" in the menu list on the left side of the project, and start to "create" a custom monitoring panel.
Fill in the "Name" in the pop-up window, select the "Custom" monitoring template, and enter the "Next" monitoring panel creation.
On the edit page, click the + area on the left and configure a Prometheus query in the "Data" area on the right. For example, we can use sum(apisix_nginx_http_current_connections) to count the total number of current connections to the Apache APISIX gateway.
After saving, find "+ Add Monitoring Item" in the lower right corner of the page and select "Line Chart" to create an "Nginx connection state" item: use sum(apisix_nginx_http_current_connections) by (state) as the metric, {{state}} as the legend name, and set the "Legend Type" to stacked chart to get a chart similar to the one below. Save the template and you have your first custom monitoring panel!
The Prometheus metrics currently provided by the Apache APISIX gateway are listed in the "Available Metrics" section of the official documentation.
Since configuring metrics one by one is tedious, it is recommended to import the Apache APISIX Grafana dashboard template directly in the cluster-level "Custom Monitoring" (download the JSON and import it through "Local Upload").
After the import is complete, you immediately get a very rich Apache APISIX gateway monitoring panel. KubeSphere is also actively working on bringing Grafana template import to project-level custom monitoring, so stay tuned!
So far, we have explored the richer status display capabilities of the new project and cluster gateways in KubeSphere 3.2.0, connected the Apache APISIX Ingress gateway to KubeSphere, and set up custom monitoring for it. Enjoy your journey with KubeSphere application gateways!