Anolis OS (龙蜥), the operating system open-sourced by Alibaba Cloud, is not yet listed in the domestic "secure and reliable" (安可) catalog, but its 100% compatibility with the CentOS 8 software ecosystem has made it an ideal replacement amid the localization push and the end of CentOS maintenance. Enterprises can migrate smoothly without changing their existing application architecture, meeting localization requirements while preserving business continuity.
Environment and software versions
- Server CPU: Hygon 3350 / Zhaoxin KaiXian KX-5000 / Intel
- Operating system: Anolis OS 8.6 / Anolis OS 8.9
- Containerd:1.7.13
- Kubernetes:v1.30.12
- KubeSphere:v4.1.3
- KubeKey:v3.1.9
- Docker:24.0.9
- Docker Compose:v2.26.1
- Harbor:v2.10.1
- Prometheus:v2.51.2
Basic server information
[root@node1 ~]# cat /etc/os-release
NAME="Anolis OS"
VERSION="8.6"
ID="anolis"
ID_LIKE="rhel fedora centos"
VERSION_ID="8.6"
PLATFORM_ID="platform:an8"
PRETTY_NAME="Anolis OS 8.6"
ANSI_COLOR="0;31"
HOME_URL="https://openanolis.cn/"
[root@node1 ~]# uname -a
Linux node1 4.19.91-26.an8.x86_64 #1 SMP Tue May 24 13:10:09 CST 2022 x86_64 x86_64 x86_64 GNU/Linux
[root@node1 ~]#
<h2 id="84d3fa82">1.说明</h2>
本文由 [编码如写诗-天行1st] 原创编写,有任何问题可添加作者微信 [sd_zdhr] 获取帮助。
作者已在以下芯片与操作系统上完成适配验证:
CPU芯片:
- 鲲鹏
- 飞腾
- 海光
- 兆芯
- 国际芯片:interl、amd等
操作系统
- 银河麒麟V10
- 麒麟国防版
- 麒麟信安
- 中标麒麟V7
- 统信 UOS
- 华为欧拉 openEuler、移动大云
- 阿里龙蜥 Anolis OS
- 腾讯 TencentOS
- 国际操作系统:CentOS、Ubuntu、Debian 等
<h2 id="199223ce">2.前提条件</h2>
建议准备至少三台主机。其中 node1
可省略,直接将 master
既作为控制节点也作为工作节点使用。
<font style="color:rgb(66, 75, 93);">主机名</font> | <font style="color:rgb(66, 75, 93);">IP</font> | <font style="color:rgb(66, 75, 93);">架构</font> | <font style="color:rgb(66, 75, 93);">OS</font> | <font style="color:rgb(66, 75, 93);">用途</font> |
---|---|---|---|---|
<font style="color:rgb(66, 75, 93);">harbor</font> | <font style="color:rgb(66, 75, 93);">192.168.3.249</font> | <font style="color:rgb(66, 75, 93);">x86_64</font> | <font style="color:rgb(66, 75, 93);">Ubuntu24.04</font> | <font style="color:rgb(66, 75, 93);">联网主机,用于制作离线包,并作为镜像仓库节点</font> |
<font style="color:rgb(66, 75, 93);">master1</font> | <font style="color:rgb(66, 75, 93);">192.168.85.138</font> | <font style="color:rgb(66, 75, 93);">x86_64</font> | <font style="color:rgb(66, 75, 93);">龙蜥 8.6</font> | <font style="color:rgb(66, 75, 93);">离线环境主节点1</font> |
<font style="color:rgb(66, 75, 93);">master2</font> | <font style="color:rgb(66, 75, 93);">192.168.85.231</font> | <font style="color:rgb(66, 75, 93);">x86_64</font> | <font style="color:rgb(66, 75, 93);">龙蜥 8.6</font> | <font style="color:rgb(66, 75, 93);">离线环境主节点2</font> |
<font style="color:rgb(66, 75, 93);">master3</font> | <font style="color:rgb(66, 75, 93);">192.168.85.232</font> | <font style="color:rgb(66, 75, 93);">x86_64</font> | <font style="color:rgb(66, 75, 93);">龙蜥 8.6</font> | <font style="color:rgb(66, 75, 93);">离线环境主节点3</font> |
<h2 id="7693d3c3">3.构建离线包(在可联网节点执行)</h2>
<h3 id="d9ff5366">3.1 下载 KubeKey </h3>
curl -sSL https://get-kk.kubesphere.io | sh -
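The one-liner above fetches the latest KubeKey release. To pin the v3.1.9 release used in this article and download through the CN zone, the same script accepts a VERSION variable:
export KKZONE=cn
curl -sSL https://get-kk.kubesphere.io | VERSION=v3.1.9 sh -
# Confirm the binary
./kk version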
<h3 id="76397d82">3.2 创建 Manifest 文件</h3>
export KKZONE=cn
./kk create manifest --with-kubernetes v1.30.12 --with-registry
<h3 id="0db2e794">3.3 编辑 Manifest 文件</h3>
vi manifest-sample.yaml
apiVersion: kubekey.kubesphere.io/v1alpha2
kind: Manifest
metadata:
  name: sample
spec:
  arches:
  - amd64
  operatingSystems: []
  kubernetesDistributions:
  - type: kubernetes
    version: v1.30.12
  components:
    helm:
      version: v3.14.3
    cni:
      version: v1.2.0
    etcd:
      version: v3.5.13
    containerRuntimes:
    - type: docker
      version: 24.0.9
    - type: containerd
      version: 1.7.13
    calicoctl:
      version: v3.27.4
    crictl:
      version: v1.29.0
    docker-registry:
      version: "2"
    harbor:
      version: v2.10.1
    docker-compose:
      version: v2.26.1
  images:
  - registry.cn-beijing.aliyuncs.com/kubesphereio/pause:3.9
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-apiserver:v1.30.12
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controller-manager:v1.30.12
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-scheduler:v1.30.12
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-proxy:v1.30.12
  - registry.cn-beijing.aliyuncs.com/kubesphereio/coredns:1.9.3
  - registry.cn-beijing.aliyuncs.com/kubesphereio/k8s-dns-node-cache:1.22.20
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controllers:v3.27.4
  - registry.cn-beijing.aliyuncs.com/kubesphereio/cni:v3.27.4
  - registry.cn-beijing.aliyuncs.com/kubesphereio/node:v3.27.4
  - registry.cn-beijing.aliyuncs.com/kubesphereio/pod2daemon-flexvol:v3.27.4
  # ks
  - swr.cn-southwest-2.myhuaweicloud.com/ks/kubesphere/ks-extensions-museum:v1.1.5
  - swr.cn-southwest-2.myhuaweicloud.com/ks/kubesphere/ks-controller-manager:v4.1.3
  - swr.cn-southwest-2.myhuaweicloud.com/ks/kubesphere/ks-apiserver:v4.1.3
  - swr.cn-southwest-2.myhuaweicloud.com/ks/kubesphere/ks-console:v4.1.3
  - swr.cn-southwest-2.myhuaweicloud.com/ks/kubesphere/kubectl:v1.27.16
  # whizard-telemetry
  - swr.cn-southwest-2.myhuaweicloud.com/ks/kubesphere/whizard-telemetry-apiserver:v1.2.2
  # whizard-monitoring
  - swr.cn-southwest-2.myhuaweicloud.com/ks/kubespheredev/kube-webhook-certgen:v20221220-controller-v1.5.1-58-g787ea74b6
  - swr.cn-southwest-2.myhuaweicloud.com/ks/prometheus/node-exporter:v1.8.1
  - swr.cn-southwest-2.myhuaweicloud.com/ks/brancz/kube-rbac-proxy:v0.18.0
  - swr.cn-southwest-2.myhuaweicloud.com/ks/kubesphere/kube-state-metrics:v2.12.0
  - swr.cn-southwest-2.myhuaweicloud.com/ks/prometheus-operator/prometheus-operator:v0.75.1
  - swr.cn-southwest-2.myhuaweicloud.com/ks/prometheus-operator/prometheus-config-reloader:v0.75.1
  - swr.cn-southwest-2.myhuaweicloud.com/ks/prometheus/prometheus:v2.51.2
  - swr.cn-southwest-2.myhuaweicloud.com/ks/kubesphere/kubectl:v1.27.12
  registry:
    auths: {}
<h3 id="c1b858f3">3.4 导出离线制品</h3>
export KKZONE=cn
./kk artifact export -m manifest-sample.yaml -o artifact-k8s-13012-ks413-monit.tar.gz
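The export pulls and repackages every image in the manifest, so it can take a while and produces a multi-gigabyte file. Recording its size and checksum makes it easy to verify the copy on the offline side:
ls -lh artifact-k8s-13012-ks413-monit.tar.gz
sha256sum artifact-k8s-13012-ks413-monit.tar.gz | tee artifact-k8s-13012-ks413-monit.tar.gz.sha256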
<h3 id="d9a19435">3.5 下载 KubeSphere Core Helm Chart</h3>
安装 Helm
curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
Download the chart:
VERSION=1.1.5 # Chart version (matches the ks-core package installed in section 6)
helm fetch https://charts.kubesphere.io/main/ks-core-${VERSION}.tgz
<h2 id="5b1bea88">4. 离线部署准备</h2>
<h3 id="1910e8a5">4.1 将安装包拷贝至离线环境</h3>
将下载好的 KubeKey、Artifact、Helm Chart 等安装包拷贝至 Master 主节点。
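A sketch of the copy step, assuming the files sit in the current directory on the harbor (build) host and should land in /root on the target nodes; adjust paths to your environment:
# KubeKey, the offline artifact and the ks-core chart go to the first master node
scp kk artifact-k8s-13012-ks413-monit.tar.gz ks-core-*.tgz root@192.168.85.138:/root/
# The dependency package from step 4.2 is needed on every cluster node
# (the harbor node is the build host itself, so no copy is needed there)
for ip in 192.168.85.138 192.168.85.231 192.168.85.232; do
  scp k8s-init-Anolis.tar.gz root@${ip}:/root/
done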
<h3 id="697570b1">4.2 安装 K8s 依赖包</h3>
在所有节点上传 k8s-init-Anolis.tar.gz
,解压并执行安装脚本:
# 解压缩并执行 install.sh
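A minimal sketch of that step; the directory name inside the archive is an assumption, use whatever the package actually unpacks to:
tar -zxvf k8s-init-Anolis.tar.gz
cd k8s-init-Anolis/   # assumed directory name inside the archive
./install.sh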
<h3 id="df45946d">4.3 修改部署配置文件 </h3>
Focus on the host entries and the Harbor registry settings:
vi config-sample.yaml
apiVersion: kubekey.kubesphere.io/v1alpha2
kind: Cluster
metadata:
  name: sample
spec:
  hosts:
  - {name: harbor, address: 192.168.3.249, internalAddress: 192.168.3.249, user: root, password: "123456"}
  - {name: master1, address: 192.168.85.138, internalAddress: 192.168.85.138, user: root, password: "123456"}
  - {name: master2, address: 192.168.85.231, internalAddress: 192.168.85.231, user: root, password: "123456"}
  - {name: master3, address: 192.168.85.232, internalAddress: 192.168.85.232, user: root, password: "123456"}
  roleGroups:
    etcd:
    - master1
    - master2
    - master3
    control-plane:
    - master1
    - master2
    - master3
    worker:
    - master1
    - master2
    - master3
    # Set this host group if you want kk to deploy the image registry automatically
    # (keeping the registry separate from the cluster is recommended to reduce mutual impact).
    # If Harbor is deployed while containerManager is containerd, use a dedicated node for Harbor, because the Harbor deployment depends on Docker.
    registry:
    - harbor
  controlPlaneEndpoint:
    ## Internal loadbalancer for apiservers
    # internalLoadbalancer: haproxy
    domain: lb.kubesphere.local
    address: ""
    port: 6443
  kubernetes:
    version: v1.30.12
    clusterName: cluster.local
    autoRenewCerts: true
    containerManager: containerd
  etcd:
    type: kubekey
  network:
    plugin: flannel
    kubePodsCIDR: 10.233.64.0/18
    kubeServiceCIDR: 10.233.0.0/18
    ## multus support. https://github.com/k8snetworkplumbingwg/multus-cni
    multusCNI:
      enabled: false
  registry:
    type: harbor
    registryMirrors: []
    insecureRegistries: []
    privateRegistry: "dockerhub.kubekey.local"
    namespaceOverride: "kubesphereio"
    auths: # for docker, added via `docker login`; for containerd, appended to /etc/containerd/config.toml
      "dockerhub.kubekey.local":
        username: "admin"
        password: Harbor@123 # customizable; new feature since kk 3.1.8
        skipTLSVerify: true # Allow contacting registries over HTTPS with failed TLS verification.
        plainHTTP: false # Allow contacting registries over HTTP.
        certsPath: "/etc/docker/certs.d/dockerhub.kubekey.local"
  addons: []
<h3 id="906052eb">4.4 初始化镜像仓库</h3>
./kk init registry -f config-sample.yaml -a artifact-k8s-13012-ks413-monit.tar.gz
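When the command finishes, Harbor runs under /opt/harbor on the harbor node (see the notes in 4.5). A quick sanity check, assuming the docker-compose binary from the artifact is on that node:
# On the harbor node
cd /opt/harbor && docker-compose ps
# dockerhub.kubekey.local must resolve to the harbor node (kk normally adds it to /etc/hosts)
curl -k https://dockerhub.kubekey.local/api/v2.0/health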
<h3 id="8ea53dfd">4.5 创建 Harbor 项目</h3>
脚本编写
vi create_project_harbor.sh
#!/usr/bin/env bash
url="https://dockerhub.kubekey.local"
user="admin"
passwd="Harbor@123"
harbor_projects=(
ks
kubesphere
kubesphereio
)
for project in "${harbor_projects[@]}"; do
echo "creating $project"
curl -u "${user}:${passwd}" -X POST -H "Content-Type: application/json" "${url}/api/v2.0/projects" -d "{ \"project_name\": \"${project}\", \"public\": true}" -k
done
Run the script to create the projects:
chmod +x create_project_harbor.sh
./create_project_harbor.sh
<h4 id="cd8992b6">验证</h4>
Notes
- Harbor default admin account: admin, password: Harbor@123 (kept consistent with the configuration file).
- Harbor installation directory: /opt/harbor; day-to-day operations and maintenance can be done there.
- Public projects: any user can pull images.
- Private projects: only project members can pull images.
<h2 id="535129cc">5. 安装 Kubernetes 集群 </h2>
执行以下命令创建 Kubernetes 集群:
./kk create cluster -f config-sample.yaml -a artifact-k8s-13012-ks413-monit.tar.gz --with-local-storage
After roughly two minutes you should see a success message.
Verification
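The same verification can be done from the command line on master1:
kubectl get nodes -o wide
kubectl get pods -A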
<h2 id="24fbebb4">6. 安装 KubeSphere</h2>
执行如下 Helm 安装命令:
helm upgrade --install -n kubesphere-system --create-namespace ks-core ks-core-1.1.5.tgz \
--set global.imageRegistry=dockerhub.kubekey.local/ks \
--set extension.imageRegistry=dockerhub.kubekey.local/ks \
--set ksExtensionRepository.image.tag=v1.1.5 \
--debug \
--wait
After about 30 seconds you should see a success message.
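Before opening the console you can check the core components from the command line; the console is exposed on NodePort 30880 by default:
kubectl get pods -n kubesphere-system
kubectl get svc ks-console -n kubesphere-system
# Console: http://<any-node-IP>:30880, default account admin / P@88w0rd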
<h2 id="42c9f6cd">7. 功能验证</h2>
登录页面
初次登录需要换密码,如果不想换也可以继续填写P@88w0rd
,不过建议更换。
首页概览
集群节点版本信息
系统总览页面
<h2 id="922b7d29">8. 安装监控组件</h2>
<h3 id="dfc413a0">8.1 安装平台服务组件</h3>
点击左上角扩展市场
后点击WhizardTelemetry 平台服务
然后点击安装
点击开始安装
<h3 id="4930dba6">8.2 安装监控模块</h3>
点击扩展市场
后点击WhizardTelemetry 监控
的管理
然后点击安装
,建议安装1.1.1版本
安装至host
节点
<h3 id="6eef6b97">8.3 监控功能验证</h3>
等待以上服务安装完成后,退出登录,重新登录
- 系统概览
- 集群节点信息
- 监控告警-集群状态
(界面与 KubeSphere 3.X 版本相似)
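As a command-line cross-check, the monitoring workloads can also be listed; the namespace below is the one the WhizardTelemetry monitoring extension normally uses and may differ in your installation:
kubectl get pods -n kubesphere-monitoring-system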
<h2>9. Conclusion</h2>
With the steps above, we built an offline, highly available Kubernetes 1.30 + KubeSphere 4.1 cluster on Hygon/Zhaoxin servers running Alibaba's Anolis OS, and deployed the platform monitoring components. The whole process stays compatible with the Xinchuang hardware and software ecosystem, preserving business continuity and laying a solid foundation for migrating localized applications and for day-to-day operations and maintenance.
If you run into problems while deploying or using the cluster, feel free to contact the author (天行) or refer to the official documentation to keep improving the cluster configuration and extending its capabilities.