Foreword

Have you ever run into this scenario: a single project needs to consume messages from multiple Kafka clusters, with different consumers reading from specific clusters? With the API the Kafka client itself provides, this is straightforward to set up. In practice, though, we usually use spring-kafka to simplify development, and spring-kafka's native configuration properties only support a single Kafka cluster. This article walks through how to extend spring-kafka to support multiple Kafka configurations.
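For reference, here is roughly what the raw client API approach looks like: one KafkaConsumer per cluster, each built from its own properties. This is just a minimal sketch; the addresses, group ids and topics below are placeholders.

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class TwoClusterConsumers {

    // Builds a consumer for a single cluster from its own set of properties.
    private static KafkaConsumer<String, String> buildConsumer(String bootstrapServers, String groupId, String topic) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
        props.put(ConsumerConfig.GROUP_ID_CONFIG, groupId);
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        consumer.subscribe(Collections.singletonList(topic));
        return consumer;
    }

    public static void main(String[] args) {
        // Each consumer points at a different cluster.
        KafkaConsumer<String, String> one = buildConsumer("cluster-one:9092", "group-one", "topic-one");
        KafkaConsumer<String, String> two = buildConsumer("cluster-two:9092", "group-two", "topic-two");
    }
}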

Main text

1. Specify a prefix for each KafkaProperties bean via @ConfigurationProperties
@Primary
@ConfigurationProperties(prefix = "lybgeek.kafka.one")
@Bean
public KafkaProperties oneKafkaProperties() {
    return new KafkaProperties();
}

If there are more clusters, declare one such bean per cluster, for example:

@ConfigurationProperties(prefix = "lybgeek.kafka.two")
@Bean
public KafkaProperties twoKafkaProperties() {
    return new KafkaProperties();
}

@ConfigurationProperties(prefix = "lybgeek.kafka.three")
@Bean
public KafkaProperties threeKafkaProperties() {
    return new KafkaProperties();
}
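Each prefix then gets its own subtree in yml, mirroring the layout of the standard spring.kafka.* properties. For example (abbreviated; the full configuration appears in the usage example later):

lybgeek:
    kafka:
        one:
            consumer:
                bootstrap-servers: cluster-one:9092
        two:
            consumer:
                bootstrap-servers: cluster-two:9092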
2. Configure a consumer factory that binds to the corresponding KafkaProperties
@Bean
public ConsumerFactory twoConsumerFactory(@Autowired @Qualifier("twoKafkaProperties") KafkaProperties twoKafkaProperties) {
    return new DefaultKafkaConsumerFactory(twoKafkaProperties.buildConsumerProperties());
}
3. Configure a listener container factory that binds the consumer factory and the consumer configuration
@Bean(MultiKafkaConstant.KAFKA_LISTENER_CONTAINER_FACTORY_TWO)
public KafkaListenerContainerFactory twoKafkaListenerContainerFactory(
        @Autowired @Qualifier("twoKafkaProperties") KafkaProperties twoKafkaProperties,
        @Autowired @Qualifier("twoConsumerFactory") ConsumerFactory twoConsumerFactory) {
    ConcurrentKafkaListenerContainerFactory factory = new ConcurrentKafkaListenerContainerFactory();
    factory.setConsumerFactory(twoConsumerFactory);
    factory.setConcurrency(ObjectUtil.isEmpty(twoKafkaProperties.getListener().getConcurrency())
            ? Runtime.getRuntime().availableProcessors()
            : twoKafkaProperties.getListener().getConcurrency());
    factory.getContainerProperties().setAckMode(ObjectUtil.isEmpty(twoKafkaProperties.getListener().getAckMode())
            ? ContainerProperties.AckMode.MANUAL
            : twoKafkaProperties.getListener().getAckMode());
    return factory;
}
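With the factories registered, a listener selects its cluster by referencing the factory bean name in the containerFactory attribute. A minimal sketch using the standard @KafkaListener (the id and topic here are made up):

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.support.Acknowledgment;
import org.springframework.stereotype.Component;

@Component
public class TwoClusterDemoListener {

    // Consumes from cluster "two" because it references the "two" container factory.
    @KafkaListener(id = "demoTwo", topics = "demo-topic",
            containerFactory = MultiKafkaConstant.KAFKA_LISTENER_CONTAINER_FACTORY_TWO)
    public void onMessage(ConsumerRecord<String, String> record, Acknowledgment ack) {
        System.out.println(record.value());
        ack.acknowledge(); // needed because the ack mode above defaults to MANUAL
    }
}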

A complete configuration example looks like this:

@Configuration
@EnableConfigurationProperties(MultiKafkaComsumeProperties.class)
public class OneKafkaComsumeAutoConfiguration {

    @Bean(MultiKafkaConstant.KAFKA_LISTENER_CONTAINER_FACTORY_ONE)
    public KafkaListenerContainerFactory oneKafkaListenerContainerFactory(
            @Autowired @Qualifier("oneKafkaProperties") KafkaProperties oneKafkaProperties,
            @Autowired @Qualifier("oneConsumerFactory") ConsumerFactory oneConsumerFactory) {
        ConcurrentKafkaListenerContainerFactory factory = new ConcurrentKafkaListenerContainerFactory();
        factory.setConsumerFactory(oneConsumerFactory);
        factory.setConcurrency(ObjectUtil.isEmpty(oneKafkaProperties.getListener().getConcurrency())
                ? Runtime.getRuntime().availableProcessors()
                : oneKafkaProperties.getListener().getConcurrency());
        factory.getContainerProperties().setAckMode(ObjectUtil.isEmpty(oneKafkaProperties.getListener().getAckMode())
                ? ContainerProperties.AckMode.MANUAL
                : oneKafkaProperties.getListener().getAckMode());
        return factory;
    }

    @Primary
    @Bean
    public ConsumerFactory oneConsumerFactory(@Autowired @Qualifier("oneKafkaProperties") KafkaProperties oneKafkaProperties) {
        return new DefaultKafkaConsumerFactory(oneKafkaProperties.buildConsumerProperties());
    }

    @Primary
    @ConfigurationProperties(prefix = "lybgeek.kafka.one")
    @Bean
    public KafkaProperties oneKafkaProperties() {
        return new KafkaProperties();
    }

}
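The MultiKafkaConstant class referenced above is not shown in the article; it simply holds the listener container factory bean names. A minimal sketch (the constant values are my assumption, any unique bean names would do):

public final class MultiKafkaConstant {

    // Bean names of the per-cluster listener container factories.
    public static final String KAFKA_LISTENER_CONTAINER_FACTORY_ONE = "oneKafkaListenerContainerFactory";
    public static final String KAFKA_LISTENER_CONTAINER_FACTORY_TWO = "twoKafkaListenerContainerFactory";

    private MultiKafkaConstant() {
    }
}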

Note that @Primary must be specified on one set of beans; otherwise startup fails, because with multiple KafkaProperties beans in the context, Spring Boot's Kafka auto-configuration cannot tell which one to inject. You can see why in the KafkaAutoConfiguration source, whose constructor takes a single KafkaProperties:

@Configuration
@ConditionalOnClass(KafkaTemplate.class)
@EnableConfigurationProperties(KafkaProperties.class)
@Import({ KafkaAnnotationDrivenConfiguration.class, KafkaStreamsAnnotationDrivenConfiguration.class })
public class KafkaAutoConfiguration {

    private final KafkaProperties properties;

    private final RecordMessageConverter messageConverter;

    public KafkaAutoConfiguration(KafkaProperties properties, ObjectProvider<RecordMessageConverter> messageConverter) {
        this.properties = properties;
        this.messageConverter = messageConverter.getIfUnique();
    }

    @Bean
    @ConditionalOnMissingBean(KafkaTemplate.class)
    public KafkaTemplate<?, ?> kafkaTemplate(ProducerFactory<Object, Object> kafkaProducerFactory,
            ProducerListener<Object, Object> kafkaProducerListener) {
        KafkaTemplate<Object, Object> kafkaTemplate = new KafkaTemplate<>(kafkaProducerFactory);
        if (this.messageConverter != null) {
            kafkaTemplate.setMessageConverter(this.messageConverter);
        }
        kafkaTemplate.setProducerListener(kafkaProducerListener);
        kafkaTemplate.setDefaultTopic(this.properties.getTemplate().getDefaultTopic());
        return kafkaTemplate;
    }

    @Bean
    @ConditionalOnMissingBean(ProducerListener.class)
    public ProducerListener<Object, Object> kafkaProducerListener() {
        return new LoggingProducerListener<>();
    }

    @Bean
    @ConditionalOnMissingBean(ConsumerFactory.class)
    public ConsumerFactory<?, ?> kafkaConsumerFactory() {
        return new DefaultKafkaConsumerFactory<>(this.properties.buildConsumerProperties());
    }

    @Bean
    @ConditionalOnMissingBean(ProducerFactory.class)
    public ProducerFactory<?, ?> kafkaProducerFactory() {
        DefaultKafkaProducerFactory<?, ?> factory = new DefaultKafkaProducerFactory<>(
                this.properties.buildProducerProperties());
        String transactionIdPrefix = this.properties.getProducer().getTransactionIdPrefix();
        if (transactionIdPrefix != null) {
            factory.setTransactionIdPrefix(transactionIdPrefix);
        }
        return factory;
    }

    @Bean
    @ConditionalOnProperty(name = "spring.kafka.producer.transaction-id-prefix")
    @ConditionalOnMissingBean
    public KafkaTransactionManager<?, ?> kafkaTransactionManager(ProducerFactory<?, ?> producerFactory) {
        return new KafkaTransactionManager<>(producerFactory);
    }

    @Bean
    @ConditionalOnProperty(name = "spring.kafka.jaas.enabled")
    @ConditionalOnMissingBean
    public KafkaJaasLoginModuleInitializer kafkaJaasInitializer() throws IOException {
        KafkaJaasLoginModuleInitializer jaas = new KafkaJaasLoginModuleInitializer();
        Jaas jaasProperties = this.properties.getJaas();
        if (jaasProperties.getControlFlag() != null) {
            jaas.setControlFlag(jaasProperties.getControlFlag());
        }
        if (jaasProperties.getLoginModule() != null) {
            jaas.setLoginModule(jaasProperties.getLoginModule());
        }
        jaas.setOptions(jaasProperties.getOptions());
        return jaas;
    }

    @Bean
    @ConditionalOnMissingBean
    public KafkaAdmin kafkaAdmin() {
        KafkaAdmin kafkaAdmin = new KafkaAdmin(this.properties.buildAdminProperties());
        kafkaAdmin.setFatalIfBrokerNotAvailable(this.properties.getAdmin().isFailFast());
        return kafkaAdmin;
    }

}

An example of consuming from multiple Kafka clusters in the same project

1. Add the spring-kafka dependency (GAV) to the project's pom
<dependency>
    <groupId>org.springframework.kafka</groupId>
    <artifactId>spring-kafka</artifactId>
</dependency>
2. Add the following configuration to the project's yml
lybgeek:
    kafka:
        multi:
            comsume-enabled: false
        one:
            producer:
                # Kafka producer bootstrap server address
                bootstrap-servers: ${KAFKA_PRODUCER_BOOTSTRAP_SERVER:10.1.4.71:32643}
                # Number of producer retries
                retries: ${KAFKA_PRODUCER_RETRIES:0}
                # Batch size for each batched send
                batch-size: ${KAFKA_PRODUCER_BATCH_SIZE:16384}
                # Buffer memory for batched sends
                buffer-memory: ${KAFKA_PRODUCER_BUFFER_MEMOEY:335554432}
                # Serializers for the message key and value
                key-serializer: ${KAFKA_PRODUCER_KEY_SERIALIZER:org.apache.kafka.common.serialization.StringSerializer}
                value-serializer: ${KAFKA_PRODUCER_VALUE_SERIALIZER:org.apache.kafka.common.serialization.StringSerializer}
                # acks=1: the producer receives a success response as soon as the partition leader gets the message
                acks: ${KAFKA_PRODUCER_ACK:1}

            consumer:
                bootstrap-servers: ${KAFKA_ONE_CONSUMER_BOOTSTRAP_SERVER:10.1.4.71:32643}
                # When there is no valid offset, the consumer reads the partition from the beginning
                auto-offset-reset: ${KAFKA_ONE_CONSUMER_AUTO_OFFSET_RESET:earliest}
                # Whether to auto-commit offsets (default true); set to false and commit manually to avoid duplicates and data loss
                enable-auto-commit: ${KAFKA_ONE_CONSUMER_ENABLE_AUTO_COMMIT:false}
                # Deserializers for the message key and value
                key-deserializer: ${KAFKA_ONE_CONSUMER_KEY_DESERIALIZER:org.apache.kafka.common.serialization.StringDeserializer}
                value-deserializer: ${KAFKA_ONE_CONSUMER_VALUE_DESERIALIZER:org.apache.kafka.common.serialization.StringDeserializer}
            listener:
                # Number of threads running in the listener container
                concurrency: ${KAFKA_ONE_CONSUMER_CONCURRENCY:1}
                missing-topics-fatal: false
                ack-mode: ${KAFKA_ONE_CONSUMER_ACK_MODE:manual}

        two:
            producer:
                # Kafka producer bootstrap server address
                bootstrap-servers: ${KAFKA_TWO_PRODUCER_BOOTSTRAP_SERVER:192.168.1.3:9202}
                # Number of producer retries
                retries: ${KAFKA_PRODUCER_RETRIES:0}
                # Batch size for each batched send
                batch-size: ${KAFKA_PRODUCER_BATCH_SIZE:16384}
                # Buffer memory for batched sends
                buffer-memory: ${KAFKA_PRODUCER_BUFFER_MEMOEY:335554432}
                # Serializers for the message key and value
                key-serializer: ${KAFKA_PRODUCER_KEY_SERIALIZER:org.apache.kafka.common.serialization.StringSerializer}
                value-serializer: ${KAFKA_PRODUCER_VALUE_SERIALIZER:org.apache.kafka.common.serialization.StringSerializer}
                # acks=1: the producer receives a success response as soon as the partition leader gets the message
                acks: ${KAFKA_PRODUCER_ACK:1}

            consumer:
                bootstrap-servers: ${KAFKA_TWO_CONSUMER_BOOTSTRAP_SERVER:192.168.1.3:9202}
                # When there is no valid offset, the consumer reads the partition from the beginning
                auto-offset-reset: ${KAFKA_TWO_CONSUMER_AUTO_OFFSET_RESET:earliest}
                # Whether to auto-commit offsets (default true); set to false and commit manually to avoid duplicates and data loss
                enable-auto-commit: ${KAFKA_TWO_CONSUMER_ENABLE_AUTO_COMMIT:false}
                # Deserializers for the message key and value
                key-deserializer: ${KAFKA_TWO_CONSUMER_KEY_DESERIALIZER:org.apache.kafka.common.serialization.StringDeserializer}
                value-deserializer: ${KAFKA_TWO_CONSUMER_VALUE_DESERIALIZER:org.apache.kafka.common.serialization.StringDeserializer}
            listener:
                # Number of threads running in the listener container
                concurrency: ${KAFKA_TWO_CONSUMER_CONCURRENCY:1}
                missing-topics-fatal: false
                ack-mode: ${KAFKA_TWO_CONSUMER_ACK_MODE:manual}
3. Configure the producer
@Autowired
private KafkaTemplate kafkaTemplate;

@Override
public MqResp sendSync(MqReq mqReq) {
    ListenableFuture<SendResult<String, String>> result = this.send(mqReq);
    MqResp mqResp = this.buildMqResp(result);
    return mqResp;
}
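The send and buildMqResp helpers are elided here. A hypothetical sketch of send, assuming MqReq exposes the topic, key and payload (these getters are made up for illustration):

private ListenableFuture<SendResult<String, String>> send(MqReq mqReq) {
    // Delegates to the template backed by the @Primary KafkaProperties.
    return kafkaTemplate.send(mqReq.getTopic(), mqReq.getKey(), mqReq.getData());
}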

This KafkaTemplate is backed by the KafkaProperties bean marked @Primary.

4. Configure the consumer listener and bind it to a containerFactory
@LybGeekKafkaListener(id = "createUser", containerFactory = MultiKafkaConstant.KAFKA_LISTENER_CONTAINER_FACTORY_ONE, topics = Constant.USER_TOPIC)
public class UserComsumer extends BaseComusmeListener {

    @Autowired
    private UserService userService;

    @Override
    public boolean isRepeateConsume(KafkaComsumePayLoad kafkaComsumePayLoad) {
        User user = JSON.parseObject(kafkaComsumePayLoad.getData(),User.class);
        System.out.println("-----------------------");
        return userService.isExistUserByUsername(user.getUsername());
    }

    @Override
    public boolean doBiz(KafkaComsumePayLoad kafkaComsumerPayLoad) {
        User user = JSON.parseObject(kafkaComsumerPayLoad.getData(),User.class);
        System.out.println(user);
        return userService.save(user);
    }
}

By specifying the containerFactory, the listener consumes messages from the corresponding Kafka cluster.
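For the other cluster the pattern is identical, only the containerFactory changes. A sketch reusing the article's listener base class (the OrderComsumer class, its id and the order-sync topic are made up):

@LybGeekKafkaListener(id = "createOrder", containerFactory = MultiKafkaConstant.KAFKA_LISTENER_CONTAINER_FACTORY_TWO, topics = "order-sync")
public class OrderComsumer extends BaseComusmeListener {

    @Override
    public boolean isRepeateConsume(KafkaComsumePayLoad kafkaComsumePayLoad) {
        // No idempotency check in this sketch.
        return false;
    }

    @Override
    public boolean doBiz(KafkaComsumePayLoad kafkaComsumerPayLoad) {
        System.out.println(kafkaComsumerPayLoad.getData());
        return true;
    }
}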

5. Test
User user = User.builder().username("test")
        .email("test@qq.com")
        .fullname("test")
        .mobile("1350000001")
        .password("1234561")
        .build();
userService.saveAndPush(user);

Send a message and watch the console output:

 : messageKey:【null】, topic:【user-sync】 duplicate message data detected -->【{"email":"test@qq.com","fullname":"test","mobile":"1350000000","password":"123456","username":"test"}】

This happens because the record already exists in the database; it simply verifies the duplicate-consumption check.

Summary

The core of the approach in this article is to register multiple KafkaProperties beans, one per cluster, and wire each consumer factory and listener container factory to the right one. Note that even after the consumers are configured, the producer still needs its own configuration: if it is omitted, the default KafkaProperties values are used, which point to localhost. Careful readers may also have noticed that the consumer listeners in the example use the @LybGeekKafkaListener annotation, which is functionally much the same as @KafkaListener. This example shares its code with the previous article on building an idempotent kafka consumer listener template, so that annotation is simply reused here.
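For example, to give cluster two its own producer rather than the localhost default, the same binding pattern works on the producer side. A sketch, assuming the twoKafkaProperties bean from earlier:

@Bean
public ProducerFactory<String, String> twoProducerFactory(@Autowired @Qualifier("twoKafkaProperties") KafkaProperties twoKafkaProperties) {
    return new DefaultKafkaProducerFactory<>(twoKafkaProperties.buildProducerProperties());
}

@Bean
public KafkaTemplate<String, String> twoKafkaTemplate(@Autowired @Qualifier("twoProducerFactory") ProducerFactory<String, String> twoProducerFactory) {
    return new KafkaTemplate<>(twoProducerFactory);
}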

Demo link

https://github.com/lyb-geek/springboot-learning/tree/master/springboot-kafka-template

