1. Basic environment
1.1. Kubernetes environment
[root@node1 opt]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
node1 Ready control-plane,master 4h3m v1.23.3
node2 Ready <none> 4h v1.23.3
1.2. Helm version
[root@node1 opt]# helm version
version.BuildInfo{Version:"v3.12.1", GitCommit:"f32a527a060157990e2aa86bf45010dfb3cc8b8d", GitTreeState:"clean", GoVersion:"go1.20.4"}
1.3. HDFS installation (simple single-node setup)
1. Prepare the hadoop-3.3.1 binary package
[root@node2 hadoop-3.3.1]# pwd
/opt/hadoop-3.3.1
2. Install the JDK
[root@node2 hadoop-3.3.1]# yum install java-1.8.0-openjdk java-1.8.0-openjdk-devel
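The exact OpenJDK directory under /usr/lib/jvm depends on the build that yum pulls in, so it is worth resolving it before hard-coding JAVA_HOME in the next step:
# confirm the JDK is installed
java -version
# resolve the real install directory behind /usr/bin/java;
# JAVA_HOME is this path without the trailing /jre/bin/java
readlink -f $(which java)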
3. Configure the JAVA_HOME and HADOOP_HOME environment variables
[root@node2 ~]# cat .bashrc
# .bashrc
# User specific aliases and functions
alias rm='rm -i'
alias cp='cp -i'
alias mv='mv -i'
# Source global definitions
if [ -f /etc/bashrc ]; then
. /etc/bashrc
fi
export JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.372.b07-1.el7_9.x86_64
export HADOOP_HOME=/opt/hadoop-3.3.1
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
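After editing .bashrc, reload it and do a quick sanity check that the variables and PATH took effect:
source ~/.bashrc
echo $JAVA_HOME
echo $HADOOP_HOME
# should print the Hadoop 3.3.1 version banner if PATH is correct
hadoop version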
4. Configure /opt/hadoop-3.3.1/etc/hadoop/hadoop-env.sh
export JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.372.b07-1.el7_9.x86_64
export HADOOP_HOME=/opt/hadoop-3.3.1
5. Configure /opt/hadoop-3.3.1/etc/hadoop/core-site.xml
First create a storage directory: mkdir -p /data/hadoop
<configuration>
    <!-- Address of the HDFS NameNode -->
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://node2:8020</value>
    </property>
    <!-- Directory where Hadoop stores the files it generates at runtime -->
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/data/hadoop</value>
    </property>
</configuration>
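Once core-site.xml is saved, you can confirm that Hadoop actually picks the values up; hdfs getconf reads the effective configuration:
# print the effective values parsed from core-site.xml
hdfs getconf -confKey fs.defaultFS
hdfs getconf -confKey hadoop.tmp.dir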
6. Configure /opt/hadoop-3.3.1/etc/hadoop/hdfs-site.xml
<configuration>
    <!-- Address of the SecondaryNameNode (the HDFS auxiliary node) -->
    <property>
        <name>dfs.namenode.secondary.http-address</name>
        <value>node2:50090</value>
    </property>
    <!-- Number of block replicas -->
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
</configuration>
7. Configure workers
[root@node1 hadoop]# cat workers
node2
8. Format the NameNode
hadoop namenode -format
9. Start the daemons
hdfs --daemon start namenode
hdfs --daemon start datanode
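Optionally, verify that both daemons came up and that HDFS accepts writes before moving on; a quick smoke test could look like this:
# both the NameNode and DataNode processes should show up
jps
# report cluster capacity and the number of live DataNodes
hdfs dfsadmin -report
# create and list a test directory to confirm writes work
hdfs dfs -mkdir -p /tmp/smoke-test
hdfs dfs -ls /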
2. Install DolphinScheduler with Helm
2.1. Download the source package
wget https://dlcdn.apache.org/dolphinscheduler/3.1.7/apache-dolphinscheduler-3.1.7-src.tar.gz
2.2. Change into the kubernetes/dolphinscheduler directory
cd /opt/apache-dolphinscheduler-3.1.7-src/deploy/kubernetes/dolphinscheduler
2.3. Add the Helm repository
helm repo add bitnami https://charts.bitnami.com/bitnami
2.4. Download the dependent charts
helm dependency update .
Note: this step is slow and may need to be retried several times, depending on the network.
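Because the download tends to fail transiently, a simple retry loop (just a convenience sketch) saves re-running the command by hand:
# keep retrying until all dependent charts land in charts/
until helm dependency update .; do
  echo "dependency update failed, retrying in 10s..."
  sleep 10
done
ls charts/   # the downloaded .tgz dependencies should appear here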
2.5. Modify values.yaml
replicas: "1" # set both the master and worker replicas to 1, since this cluster has only one worker node
resource.hdfs.root.user: root
resource.hdfs.fs.defaultFS: hdfs://172.24.251.132:8020
resource.storage.type: HDFS # this should already default to HDFS; change it if it does not
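The exact nesting of these keys inside values.yaml can differ between chart versions, so it is worth locating them before editing, for example:
# locate the replica counts and the HDFS-related keys in values.yaml
grep -n "replicas" values.yaml
grep -n "resource.hdfs" values.yaml
grep -n "resource.storage.type" values.yaml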
2.6. Install with Helm
helm install dolphinscheduler .
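If you would rather not install into the default namespace, the standard Helm flags work here too; an optional variant:
# optional: install into a dedicated namespace instead of default
helm install dolphinscheduler . -n dolphinscheduler --create-namespace
# later kubectl and helm commands then also need -n dolphinscheduler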
2.7. Check the release list
[root@node1 dolphinscheduler]# helm list
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
dolphinscheduler default 1 2023-07-18 19:37:59.461425633 +0800 CST deployed dolphinscheduler-helm-3.1.7 3.1.7
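To re-display the release status together with the chart's NOTES (which usually contain access hints), helm status can be used:
# print the release status and the chart NOTES again
helm status dolphinscheduler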
2.8. Check the pod list
[root@node1 dolphinscheduler]# kubectl get pods
NAME READY STATUS RESTARTS AGE
dolphinscheduler-alert-d887cb988-62ppw 1/1 Running 0 32m
dolphinscheduler-api-b548d7ddd-grl2c 1/1 Running 0 32m
dolphinscheduler-db-init-job-4x459 0/1 Completed 0 32m
dolphinscheduler-master-0 1/1 Running 0 32m
dolphinscheduler-postgresql-0 1/1 Running 0 32m
dolphinscheduler-worker-0 1/1 Running 0 32m
dolphinscheduler-zookeeper-0 1/1 Running 0 32m
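Before exposing the UI, it may be worth confirming the API service that will be forwarded (the chart exposes it as dolphinscheduler-api on port 12345, as used in the next step):
# check the service name, type, and port used for the UI/API
kubectl get svc dolphinscheduler-api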
3. Access
Set up port forwarding:
kubectl port-forward --address 0.0.0.0 svc/dolphinscheduler-api 12345:12345
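kubectl port-forward stays in the foreground and exits when the terminal closes; for a longer test session you could background it, for example:
# run the port-forward in the background and keep its output for troubleshooting
nohup kubectl port-forward --address 0.0.0.0 svc/dolphinscheduler-api 12345:12345 > /tmp/ds-port-forward.log 2>&1 &
# stop it later with: kill %1   (or find the PID via: pgrep -f "port-forward")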
3.1. Log in
http://xx.xx.xx.xx:12345/dolphinscheduler
The default user is admin and the default password is dolphinscheduler123.
3.2. Create a tenant
3.3. Bind the user to the tenant
3.4. Create a project
3.5. Create a workflow
3.6. Bring the workflow online and run it
3.7. View task logs from the workflow instance
If you found this useful, please like and follow. Thanks!