1. Introduction
Environment:
1. JDK 1.8
2. Spring Boot
3. Huawei Kafka Cluster 1.1.0
Connecting to a Kafka cluster secured with a simple username and password is straightforward, but there is very little material online about connecting to a Kerberos-authenticated cluster. Here I share the key points of making that connection.
2. Integration
1. Information provided
The Kafka cluster is generally not installed by us. When we need to connect to one, the operator will usually provide the following:
- Key file: user.keytab
- principal: xxxxxx
- Encryption protocol security.protocol: SASL_SSL or SASL_PLAINTEXT
- jaas.conf (if not provided, one can be built from the items above)
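If no jaas.conf is supplied, one can be assembled from the keytab and principal. A minimal sketch (the path and principal below are placeholders; the `KafkaClient` context name is what the Kafka client looks up):

```
KafkaClient {
  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=true
  keyTab="/etc/user.keytab"
  principal="user@EXAMPLE.COM"
  useTicketCache=false
  storeKey=true
  debug=false;
};
```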
2. Hands-on
2.1 Program configuration
Based on the official documentation and other online material, the final configuration file is as follows:
spring:
  ##########################################################################
  ############# kafka configuration
  ##########################################################################
  kafka:
    # broker addresses and ports of the Kafka instance
    bootstrap-servers: 100.xxx.xxx.87:909x,100.xxx.xxx.69:909x,100.xxx.xxx.155:909x
    # producer configuration
    producer:
      # number of retries; the client will resend records whose send failed
      retries: 1
      # 16K
      batch-size: 16384
      # 32M
      buffer-memory: 33554432
      # ack setting: 0 = fire and forget; 1 = return once the partition leader has persisted the message; all = return only after all replicas have acknowledged
      acks: 1
      # serializers for the message key and value
      key-serializer: org.apache.kafka.common.serialization.StringSerializer
      value-serializer: org.apache.kafka.common.serialization.StringSerializer
    consumer:
      # consumer group
      group-id: Consumer-test
      # auto commit
      enable-auto-commit: true
      # offset reset policy:
      # earliest: if a committed offset exists for a partition, consume from it; otherwise consume from the beginning
      # latest: if a committed offset exists for a partition, consume from it; otherwise consume only newly produced data
      auto-offset-reset: latest
      key-deserializer: org.apache.kafka.common.serialization.StringDeserializer
      value-deserializer: org.apache.kafka.common.serialization.StringDeserializer
    jaas:
      enabled: true
      login-module: com.sun.security.auth.module.Krb5LoginModule
      control-flag: required
      options:
        "useKeyTab": true
        "debug": true
        "useTicketCache": false
        "storeKey": true
        "keyTab": "/etc/user.keytab"
        "principal": xxxxx
    properties:
      # security protocol; SASL_SSL and SASL_PLAINTEXT are currently supported
      "security.protocol": SASL_PLAINTEXT
      # domain name
      "kerberos.domain.name": topinfo.com
      # service name
      "sasl.kerberos.service.name": kafka
Note: pay particular attention to the jaas section, especially the path configured in "keyTab".
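As a side note, the jaas options above can equivalently be expressed through Kafka's single-line `sasl.jaas.config` property instead of a login module file. A sketch of assembling that string (the keytab path and principal are placeholders, and the helper class name is my own):

```java
public class JaasConfigBuilder {

    // Builds the one-line value accepted by Kafka's sasl.jaas.config
    // property; note the trailing semicolon, which Kafka requires.
    static String saslJaasConfig(String keyTab, String principal) {
        return "com.sun.security.auth.module.Krb5LoginModule required "
                + "useKeyTab=true storeKey=true useTicketCache=false "
                + "keyTab=\"" + keyTab + "\" "
                + "principal=\"" + principal + "\";";
    }

    public static void main(String[] args) {
        System.out.println(saslJaasConfig("/etc/user.keytab", "user@EXAMPLE.COM"));
    }
}
```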
2.2 Other configuration
We also need to set the java.security.auth.login.config property. According to various online sources, the related properties can be set through System.setProperty, for example:
System.setProperty("java.security.auth.login.config", "/etc/jaas/root.jaas.conf");
System.setProperty("java.security.krb5.conf", "/etc/krb5.conf");
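If the System.setProperty route is used, it must run before the first Kafka client is created, since the JAAS login reads these properties when the connection is established. A minimal sketch (paths are placeholders, and the class name is my own):

```java
public class KerberosBootstrap {

    // Set the Kerberos/JAAS properties before any KafkaProducer or
    // KafkaConsumer is instantiated; an already-initialized login
    // context will not pick up later changes.
    static {
        System.setProperty("java.security.auth.login.config", "/etc/jaas/root.jaas.conf");
        System.setProperty("java.security.krb5.conf", "/etc/krb5.conf");
    }

    public static void main(String[] args) {
        System.out.println(System.getProperty("java.security.krb5.conf"));
    }
}
```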
I instead chose to add the following to Tomcat's bin/catalina.sh:
JAVA_OPTS=" $JAVA_OPTS -Djava.security.auth.login.config=/etc/jaas/root.jaas.conf"
JAVA_OPTS=" $JAVA_OPTS -Djava.security.krb5.conf=/etc/krb5.conf"
If the system reports KrbException: Cannot locate default realm, it usually means java.security.krb5.conf has not been configured.
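For reference, a minimal krb5.conf sketch (the realm name and KDC host are placeholders; the cluster operator normally provides the real file):

```
[libdefaults]
  default_realm = EXAMPLE.COM

[realms]
  EXAMPLE.COM = {
    kdc = kdc1.example.com
    admin_server = kdc1.example.com
  }
```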
3. Troubleshooting
Error message: Caused by GSSException: No valid credentials provided (Mechanism level: Server not found in Kerberos database (7) - LOOKING_UP_SERVER)
Error details:
2022-08-30 21:20:52.052 [org.springframework.kafka.KafkaListenerEndpointContainer#1-0-C-1] INFO (org.apache.kafka.common.network.Selector:?) -
[Consumer clientId=collect-Consumer-3, groupId=collect-Consumer] Failed authentication with topinfo/11.11.11.20
(An error: (java.security.PrivilegedActionException: javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Server not found in Kerberos database (7) - LOOKING_UP_SERVER)])
occurred when evaluating SASL token received from the Kafka Broker. This may be caused by Java's being unable to resolve the Kafka Broker's hostname correctly.
You may want to try to adding '-Dsun.net.spi.nameservice.provider.1=dns,sun' to your client's JVMFLAGS environment. Users must configure FQDN of kafka brokers when authenticating using SASL and `socketChannel.socket().getInetAddress().getHostName()` must match the hostname in `principal/hostname@realm` Kafka Client will go to AUTHENTICATION_FAILED state.)
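Since the message points at hostname resolution, it can help to check what the JVM actually resolves for a broker address; the Kerberos principal kafka/<hostname>@REALM must match that hostname. A small diagnostic sketch (the class name is my own):

```java
import java.net.InetAddress;

public class HostnameCheck {

    // Returns the canonical (reverse-resolved) hostname the JVM sees
    // for the given address, which is what the SASL handshake compares
    // against the broker's Kerberos principal.
    static String canonicalName(String host) throws Exception {
        return InetAddress.getByName(host).getCanonicalHostName();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(canonicalName("127.0.0.1"));
    }
}
```

Run this with each broker address from bootstrap-servers; if it prints a bare IP instead of an FQDN, DNS or /etc/hosts needs fixing before Kerberos authentication can succeed.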
Solution:
Replace kafka-clients with the build from Huawei's dependency repository. I downloaded kafka-clients-2.4.0-hw-ei-311006.jar, which worked in my own testing.
Huawei repository address: https://repo.huaweicloud.com/repository/maven/huaweicloudsdk/org/apache/kafka/kafka-clients/
My Maven dependencies are as follows:
<!-- spring-kafka: exclude its bundled kafka artifacts and bring in Huawei's kafka jars instead -->
<dependency>
    <groupId>org.springframework.kafka</groupId>
    <artifactId>spring-kafka</artifactId>
    <version>2.3.4.RELEASE</version>
    <exclusions>
        <exclusion>
            <groupId>org.apache.kafka</groupId>
            <artifactId>kafka-clients</artifactId>
        </exclusion>
        <exclusion>
            <groupId>org.apache.kafka</groupId>
            <artifactId>kafka-streams</artifactId>
        </exclusion>
    </exclusions>
</dependency>
<!-- Huawei Kafka components: start -->
<dependency>
    <groupId>com.huawei</groupId>
    <artifactId>kafka-clients</artifactId>
    <version>2.4.0</version>
    <scope>system</scope>
    <systemPath>${project.basedir}/lib/kafka-clients-2.4.0-hw-ei-311006.jar</systemPath>
</dependency>
<dependency>
    <groupId>com.huawei</groupId>
    <artifactId>kafka</artifactId>
    <version>2.11</version>
    <scope>system</scope>
    <systemPath>${project.basedir}/lib/kafka_2.11-1.1.0.jar</systemPath>
</dependency>
<dependency>
    <groupId>com.huawei</groupId>
    <artifactId>kafka-streams-examples</artifactId>
    <version>1.1.0</version>
    <scope>system</scope>
    <systemPath>${project.basedir}/lib/kafka-streams-examples-1.1.0.jar</systemPath>
</dependency>
<dependency>
    <groupId>com.huawei</groupId>
    <artifactId>kafka-streams</artifactId>
    <version>1.1.0</version>
    <scope>system</scope>
    <systemPath>${project.basedir}/lib/kafka-streams-1.1.0.jar</systemPath>
</dependency>
<!-- Huawei Kafka components: end -->
Reference: https://www.baojieearth.cn/post/22