Problem

When the Flink job deployed on Kubernetes starts, the checkpoint directory has to be configured against a specific NameNode of the Hadoop cluster, e.g. env.setStateBackend(new FsStateBackend("hdfs://prod02:3333/user/flink-checkpoints"));. If it is configured in nameservice mode instead, e.g. env.setStateBackend(new FsStateBackend("hdfs://hztest/user/flink-checkpoints"));, the following error is reported:
(error screenshot from the original article, not reproduced here)

Resolution process

  1. Suspected that Flink was not loading the Hadoop configuration files at startup. At that point the Hadoop configuration files were packaged inside the Flink job jar, on the classpath, so we switched to loading them as external Hadoop configuration files.
  2. Copy the two Hadoop configuration files into the hadoopconf directory under the deployment directory.
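
     For a nameservice URI like hdfs://hztest to be resolvable, the copied files (typically core-site.xml and hdfs-site.xml) must define that nameservice. The following is only a minimal sketch of the relevant hdfs-site.xml section: prod02:3333 reuses the address from the example above, while the second NameNode host prod03 is purely a placeholder.

    <configuration>
      <property>
        <name>dfs.nameservices</name>
        <value>hztest</value>
      </property>
      <property>
        <name>dfs.ha.namenodes.hztest</name>
        <value>nn1,nn2</value>
      </property>
      <property>
        <name>dfs.namenode.rpc-address.hztest.nn1</name>
        <value>prod02:3333</value>
      </property>
      <property>
        <!-- placeholder: replace with the real standby NameNode address -->
        <name>dfs.namenode.rpc-address.hztest.nn2</name>
        <value>prod03:3333</value>
      </property>
      <property>
        <name>dfs.client.failover.proxy.provider.hztest</name>
        <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
      </property>
    </configuration>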
  3. Modify the Dockerfile to add the Hadoop configuration files to the image:

    FROM registry.cn-shanghai.aliyuncs.com/meow-cn/flink:1.13
    COPY flink-hudi.jar /opt/flink-hudi.jar
    COPY hadoopconf/* /opt/hadoopconf/
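
     After changing the Dockerfile, the image has to be rebuilt and pushed before redeploying. A sketch of the commands, where the target tag registry.cn-shanghai.aliyuncs.com/meow-cn/flink-hudi:1.13 is only a placeholder and not from the original article:

    # build the image containing the job jar and the Hadoop config files
    docker build -t registry.cn-shanghai.aliyuncs.com/meow-cn/flink-hudi:1.13 .
    # push it so the Kubernetes cluster can pull it (imagePullPolicy: Always)
    docker push registry.cn-shanghai.aliyuncs.com/meow-cn/flink-hudi:1.13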
  4. Modify the deployment.yaml file to add the HADOOP_CONF_DIR environment variable pointing at the Hadoop configuration directory:

         - name: flink-main-container
           imagePullPolicy: Always
           env:
           - name: HADOOP_CONF_DIR
             value: /opt/hadoopconf
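
     Whether the variable is actually visible inside the running JobManager container can be checked with something like the following (kt is the kubectl wrapper used in this article and is assumed to support the standard kubectl subcommands; the pod name is a placeholder):

    kt exec <flink-jobmanager-pod> -- env | grep HADOOP_CONF_DIR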
  5. After these changes Flink started normally, but checkpoints could never be executed:

    2023-03-10 18:48:20,786 INFO  org.apache.flink.runtime.checkpoint.CheckpointCoordinator    []
    - Failed to trigger checkpoint for job 3110562c050f0943caca089038709804 since Checkpoint triggering task Source: Custom Source -> DataSteamToTable(stream=default_catalog.default_database.kafka_source, type=STRING, rowtime=false, watermark=false) -> Calc(select=[CAST(UUID()) AS uuid, f0 AS message, CAST(DATE_FORMAT(NOW(), _UTF-16LE'yyyy-MM-dd hh:mm:ss')) AS event_time, CAST(DATE_FORMAT(NOW(), _UTF-16LE'yyyyMMdd')) AS ds]) -> row_data_to_hoodie_record (1/2) of job 3110562c050f0943caca089038709804 is not being executed at the moment.
    Aborting checkpoint. Failure reason: Not all required tasks are currently running.
  6. Checking the pod list showed that the TaskManager pods belonging to the Flink job could not start. Inspecting the events with kt get event | grep flink-hudi-event-log-taskmanager revealed the TaskManager startup error:

    MountVolume.SetUp failed for volume "hadoop-config-volume" : configmap "hadoop-config-flink-hudi-event-log" not found
  7. The TaskManager pods and the Flink job pod are not started from the same image and do not share environment variables or files. According to the error message, the TaskManager loads the Hadoop configuration files through a ConfigMap, so create that ConfigMap manually:

    kt create configmap hadoop-config-flink-hudi-event-log --from-file=/opt/flink-on-ack/flink-hudi/docker/hadoopconf/
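
     To confirm the ConfigMap was created and contains both configuration files, it can be inspected (again assuming kt behaves like kubectl):

    kt get configmap hadoop-config-flink-hudi-event-log -o yaml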
  8. After restarting the pods, the problem was resolved.
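
A quick sanity check, again assuming kt forwards to kubectl and using a placeholder JobManager pod name, is to verify that the TaskManager pods reach the Running state and that the JobManager log starts reporting completed checkpoints:

    kt get pods | grep flink-hudi-event-log
    kt logs <flink-jobmanager-pod> | grep "Completed checkpoint"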
