Article source: shenyifengtk.github.io. Please credit the source when reprinting.
This article started with a requirement: enter a Kafka topic in the browser and, after submitting a consumer group, automatically start consuming it. That is fairly simple to do, and a colleague finished it quickly with the raw Kafka client package. It made me wonder whether the same feature could be built on Spring Kafka's own framework, without dropping down to the underlying client. That question led me to analyze how Spring Kafka turns an annotation into message consumption and method invocation, and in the end the little requirement above is completed with just a few lines of code.
Source code analysis
EnableKafka entry
The Kafka module starts from @Import(KafkaListenerConfigurationSelector.class) on @EnableKafka:
@Override
public String[] selectImports(AnnotationMetadata importingClassMetadata) {
    return new String[] { KafkaBootstrapConfiguration.class.getName() };
}
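For orientation, here is a minimal sketch of how this machinery is switched on in a plain Spring application (with Spring Boot you don't write this yourself; its auto-configuration activates @EnableKafka for you):

@Configuration
@EnableKafka
public class KafkaConfig {
    // @EnableKafka imports KafkaListenerConfigurationSelector, which in turn
    // registers KafkaBootstrapConfiguration (shown below)
}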
Then continue to the KafkaBootstrapConfiguration class:
public class KafkaBootstrapConfiguration implements ImportBeanDefinitionRegistrar {

    @Override
    public void registerBeanDefinitions(AnnotationMetadata importingClassMetadata, BeanDefinitionRegistry registry) {
        if (!registry.containsBeanDefinition(
                KafkaListenerConfigUtils.KAFKA_LISTENER_ANNOTATION_PROCESSOR_BEAN_NAME)) {
            registry.registerBeanDefinition(KafkaListenerConfigUtils.KAFKA_LISTENER_ANNOTATION_PROCESSOR_BEAN_NAME,
                    new RootBeanDefinition(KafkaListenerAnnotationBeanPostProcessor.class));
        }

        if (!registry.containsBeanDefinition(KafkaListenerConfigUtils.KAFKA_LISTENER_ENDPOINT_REGISTRY_BEAN_NAME)) {
            registry.registerBeanDefinition(KafkaListenerConfigUtils.KAFKA_LISTENER_ENDPOINT_REGISTRY_BEAN_NAME,
                    new RootBeanDefinition(KafkaListenerEndpointRegistry.class));
        }
    }
}
BeanDefinitionRegistry turns these classes into BeanDefinitions and registers them in the beanDefinitionMap container; Spring later instantiates everything in that map uniformly, i.e. the actual initialization is handed over to Spring.
KafkaListenerAnnotationBeanPostProcessor parsing
Let's see how the core processing class KafkaListenerAnnotationBeanPostProcessor parses the @KafkaListener annotation. postProcessAfterInitialization is invoked after each bean has been instantiated, giving the processor a chance to enhance the bean.
public Object postProcessAfterInitialization(final Object bean, final String beanName) throws BeansException {
    if (!this.nonAnnotatedClasses.contains(bean.getClass())) {
        // if the bean may be a proxy at this point, get the original class; otherwise use the class directly
        Class<?> targetClass = AopUtils.getTargetClass(bean);
        // look for @KafkaListener on the class; since the class hierarchy can be arranged in
        // complex ways, this method wraps a series of lookups that reliably find the annotation,
        // including the case where both a subclass and its superclass carry it
        Collection<KafkaListener> classLevelListeners = findListenerAnnotations(targetClass);
        final boolean hasClassLevelListeners = classLevelListeners.size() > 0;
        final List<Method> multiMethods = new ArrayList<>();
        // look for the annotation on methods; annotated methods go into a map keyed by Method
        Map<Method, Set<KafkaListener>> annotatedMethods = MethodIntrospector.selectMethods(targetClass,
                (MethodIntrospector.MetadataLookup<Set<KafkaListener>>) method -> {
                    Set<KafkaListener> listenerMethods = findListenerAnnotations(method);
                    return (!listenerMethods.isEmpty() ? listenerMethods : null);
                });
        if (hasClassLevelListeners) { // a class-level annotation is paired with @KafkaHandler, so look for it on methods
            Set<Method> methodsWithHandler = MethodIntrospector.selectMethods(targetClass,
                    (ReflectionUtils.MethodFilter) method ->
                            AnnotationUtils.findAnnotation(method, KafkaHandler.class) != null);
            multiMethods.addAll(methodsWithHandler);
        }
        if (annotatedMethods.isEmpty()) { // cache classes that have already been parsed
            this.nonAnnotatedClasses.add(bean.getClass());
        }
        else {
            // Non-empty set of methods
            for (Map.Entry<Method, Set<KafkaListener>> entry : annotatedMethods.entrySet()) {
                Method method = entry.getKey();
                for (KafkaListener listener : entry.getValue()) {
                    processKafkaListener(listener, method, bean, beanName); // method-listener handling logic
                }
            }
            this.logger.debug(() -> annotatedMethods.size() + " @KafkaListener methods processed on bean '"
                    + beanName + "': " + annotatedMethods);
        }
        if (hasClassLevelListeners) {
            processMultiMethodListeners(classLevelListeners, multiMethods, bean, beanName); // @KafkaHandler handling logic
        }
    }
    return bean;
}
@KafkaListener can actually be placed on a class, used together with @KafkaHandler on methods. A simple example shows how:
@KafkaListener(topics = "${topic-name.lists}", groupId = "${group}", concurrency = "4")
public class Kddk {

    @KafkaHandler
    public void user(User user) {
    }

    @KafkaHandler
    public void std(Dog dog) {
    }
}
Messages with different payload types are dispatched to separate methods, which saves the trouble of converting objects by hand. These are the only scenarios I can think of for the moment, and they are rarely seen, so I won't analyze this implementation in depth.
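One related detail worth knowing (a hedged sketch; the isDefault attribute does exist on @KafkaHandler): one handler can be marked as the default, catching payloads that match no other signature.

@KafkaHandler(isDefault = true)
public void fallback(Object payload) {
    // invoked when the converted payload matches no other @KafkaHandler method
}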
protected void processKafkaListener(KafkaListener kafkaListener, Method method, Object bean, String beanName) {
    // if the method happens to have been enhanced by a proxy, return the original class's method
    Method methodToUse = checkProxy(method, bean);
    MethodKafkaListenerEndpoint<K, V> endpoint = new MethodKafkaListenerEndpoint<>();
    endpoint.setMethod(methodToUse);

    String beanRef = kafkaListener.beanRef();
    this.listenerScope.addListener(beanRef, bean);
    String[] topics = resolveTopics(kafkaListener);
    TopicPartitionOffset[] tps = resolveTopicPartitions(kafkaListener);
    // checks whether the method carries @RetryableTopic; if so it returns true
    // and registers the endpoint with the registry
    if (!processMainAndRetryListeners(kafkaListener, bean, beanName, methodToUse, endpoint, topics, tps)) {
        // parse the @KafkaListener attributes, set them on the endpoint, and register it
        processListener(endpoint, kafkaListener, bean, beanName, topics, tps);
    }
    this.listenerScope.removeListener(beanRef);
}
protected void processListener(MethodKafkaListenerEndpoint<?, ?> endpoint, KafkaListener kafkaListener,
        Object bean, String beanName, String[] topics, TopicPartitionOffset[] tps) {

    processKafkaListenerAnnotationBeforeRegistration(endpoint, kafkaListener, bean, topics, tps);

    String containerFactory = resolve(kafkaListener.containerFactory());
    KafkaListenerContainerFactory<?> listenerContainerFactory = resolveContainerFactory(kafkaListener, containerFactory, beanName);
    // here is the core: once parsing is done, the endpoint is registered through the
    // KafkaListenerEndpointRegistrar, waiting for the next step
    this.registrar.registerEndpoint(endpoint, listenerContainerFactory);

    processKafkaListenerEndpointAfterRegistration(endpoint, kafkaListener);
}
The class name MethodKafkaListenerEndpoint can be understood as an endpoint object. Simply put, an endpoint is one end of a communication channel; here the endpoint connects the business method with Kafka messages.
@RetryableTopic is an annotation introduced in Spring Kafka 2.7. Its main purpose is to retry on consumption failure and to handle dead-letter messages afterwards, because Kafka itself has no notion of a dead-letter queue or dead-letter message. Spring came up with a DLT topic (Dead-Letter Topic): when a message has failed a certain number of times, it is forwarded to the configured DLT topic. The annotation can set the number of retries, retry interval, failure exceptions, failure strategy, and so on.
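As a usage illustration, here is a sketch with made-up topic/group names and retry settings (not from the article's project):

@RetryableTopic(attempts = "4", backoff = @Backoff(delay = 1000, multiplier = 2.0))
@KafkaListener(topics = "orders", groupId = "order-group")
public void consume(String message) {
    // any exception thrown here sends the record to the auto-created retry topics;
    // once all attempts are exhausted it ends up on the DLT topic
}

@DltHandler
public void handleDlt(String message) {
    // handle the dead-lettered record
}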
In fact, processMainAndRetryListeners is similar to the processListener shown above: it parses the annotation content and then calls KafkaListenerEndpointRegistrar.registerEndpoint.
KafkaListenerEndpointRegistry is created by the Spring container; its main job is to instantiate MessageListenerContainer. KafkaListenerEndpointRegistrar, by contrast, is created with new and is not managed by the Spring container; it helps register endpoints into the KafkaListenerEndpointRegistry. The two class names are very similar and confused me while reading the source; taking a moment to keep them apart is not a waste of time.
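Although the registrar is not a bean, it is exposed to user code through the KafkaListenerConfigurer callback. A minimal sketch (the validator is only an example customization):

@Configuration
public class KafkaConfigurerExample implements KafkaListenerConfigurer {

    @Override
    public void configureKafkaListeners(KafkaListenerEndpointRegistrar registrar) {
        // customize the registrar before it registers all endpoints,
        // e.g. plug in a payload validator
        registrar.setValidator(new LocalValidatorFactoryBean());
    }
}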
Registering the endpoint
public void registerEndpoint(KafkaListenerEndpoint endpoint, @Nullable KafkaListenerContainerFactory<?> factory) {
    // Factory may be null, we defer the resolution right before actually creating the container.
    // KafkaListenerEndpointDescriptor is just an inner class holding the two objects, nothing more;
    // since the factory may really be null, deferred creation resolves this problem
    KafkaListenerEndpointDescriptor descriptor = new KafkaListenerEndpointDescriptor(endpoint, factory);
    synchronized (this.endpointDescriptors) {
        // startImmediately has not been initialized, so it is definitely false here; once it has
        // been set to true (after the Spring container finished initializing), the listener
        // container is created right away
        if (this.startImmediately) { // Register and start immediately
            this.endpointRegistry.registerListenerContainer(descriptor.endpoint,
                    resolveContainerFactory(descriptor), true);
        }
        else {
            this.endpointDescriptors.add(descriptor);
        }
    }
}
Why is there a startImmediately switch here? Initially, endpoints are only collected and saved into the container. Once they have all been added, afterPropertiesSet from Spring's InitializingBean interface performs the actual registration, triggered by the Spring bean lifecycle. Endpoints added after Spring has fully started would otherwise never be started, so startImmediately acts as a threshold switch: once it is flipped on, newly registered endpoints start immediately.
Let's look at KafkaListenerEndpointRegistrar.afterPropertiesSet, which kicks off the registration of every endpoint.
@Override
public void afterPropertiesSet() {
    registerAllEndpoints();
}

protected void registerAllEndpoints() {
    synchronized (this.endpointDescriptors) {
        for (KafkaListenerEndpointDescriptor descriptor : this.endpointDescriptors) {
            if (descriptor.endpoint instanceof MultiMethodKafkaListenerEndpoint // only created when @KafkaHandler is used
                    && this.validator != null) {
                ((MultiMethodKafkaListenerEndpoint) descriptor.endpoint).setValidator(this.validator);
            }
            // create the MessageListenerContainer from the endpoint and containerFactory
            this.endpointRegistry.registerListenerContainer(
                    descriptor.endpoint, resolveContainerFactory(descriptor));
        }
        // everything has been processed, so flip the start switch: endpoints added
        // from now on are started immediately
        this.startImmediately = true;  // trigger immediate startup
    }
}
// Resolve the concrete KafkaListenerContainerFactory from the descriptor. With deferred startup the
// factory may be null, in which case Spring's internal default can be used. If the annotation named
// a containerFactory, the custom one is used; otherwise the default ConcurrentKafkaListenerContainerFactory.
private KafkaListenerContainerFactory<?> resolveContainerFactory(KafkaListenerEndpointDescriptor descriptor) {
    if (descriptor.containerFactory != null) {
        return descriptor.containerFactory;
    }
    else if (this.containerFactory != null) {
        return this.containerFactory;
    }
    else if (this.containerFactoryBeanName != null) {
        Assert.state(this.beanFactory != null, "BeanFactory must be set to obtain container factory by bean name");
        this.containerFactory = this.beanFactory.getBean(
                this.containerFactoryBeanName, KafkaListenerContainerFactory.class);
        return this.containerFactory;  // Consider changing this if live change of the factory is required
    }
    else {
        // .....
    }
}
MessageListenerContainer
Let's see how the KafkaListenerEndpointRegistry.registerListenerContainer method produces the message listener container.
public void registerListenerContainer(KafkaListenerEndpoint endpoint, KafkaListenerContainerFactory<?> factory) {
    registerListenerContainer(endpoint, factory, false);
}

public void registerListenerContainer(KafkaListenerEndpoint endpoint, KafkaListenerContainerFactory<?> factory,
        boolean startImmediately) {
    String id = endpoint.getId();
    Assert.hasText(id, "Endpoint id must not be empty");
    synchronized (this.listenerContainers) {
        Assert.state(!this.listenerContainers.containsKey(id),
                "Another endpoint is already registered with id '" + id + "'");
        // create the listener container
        MessageListenerContainer container = createListenerContainer(endpoint, factory);
        // save the instantiated container in a map, keyed by the @KafkaListener id
        // (this id is the so-called beanName)
        this.listenerContainers.put(id, container);
        ConfigurableApplicationContext appContext = this.applicationContext;
        String groupName = endpoint.getGroup();
        // if the annotation configured a custom listener group, fetch the group instance
        // and add the listener container to it
        if (StringUtils.hasText(groupName) && appContext != null) {
            // part of the content omitted
        }
        if (startImmediately) { // for immediate startup, the container's start method must be called manually
            startIfNecessary(container);
        }
    }
}
protected MessageListenerContainer createListenerContainer(KafkaListenerEndpoint endpoint,
        KafkaListenerContainerFactory<?> factory) {
    // the listener container is created here
    MessageListenerContainer listenerContainer = factory.createListenerContainer(endpoint);

    if (listenerContainer instanceof InitializingBean) { // the Spring container has finished initializing by now,
        try {                                            // so lifecycle methods won't run again; call it explicitly
            ((InitializingBean) listenerContainer).afterPropertiesSet();
        }
        catch (Exception ex) {
            throw new BeanInitializationException("Failed to initialize message listener container", ex);
        }
    }

    int containerPhase = listenerContainer.getPhase();
    if (listenerContainer.isAutoStartup() &&
            containerPhase != AbstractMessageListenerContainer.DEFAULT_PHASE) {  // a custom phase value
        if (this.phase != AbstractMessageListenerContainer.DEFAULT_PHASE && this.phase != containerPhase) {
            throw new IllegalStateException("Encountered phase mismatch between container "
                    + "factory definitions: " + this.phase + " vs " + containerPhase);
        }
        this.phase = listenerContainer.getPhase();
    }

    return listenerContainer;
}
private void startIfNecessary(MessageListenerContainer listenerContainer) {
    // contextRefreshed is true once Spring has fully started
    if (this.contextRefreshed || listenerContainer.isAutoStartup()) {
        listenerContainer.start();
    }
}
The main job here is to create the MessageListenerContainer through the KafkaListenerContainerFactory. The container implements SmartLifecycle: after Spring finishes initializing, it calls start() on any bean implementing the interface whose isAutoStartup() returns true. For containers registered after Spring has fully started, the SmartLifecycle callbacks will not be invoked by Spring again, so start must be called manually; that is exactly what startIfNecessary decides.
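To make that concrete, here is roughly the SmartLifecycle contract the containers rely on (a simplified sketch, not the actual container code):

public class MyLifecycleBean implements SmartLifecycle {

    private volatile boolean running;

    @Override
    public boolean isAutoStartup() {
        return true; // Spring calls start() automatically when the context is refreshed
    }

    @Override
    public void start() {
        this.running = true; // a listener container would begin polling here
    }

    @Override
    public void stop() {
        this.running = false;
    }

    @Override
    public boolean isRunning() {
        return this.running; // Spring checks this to avoid starting twice
    }
}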
Creating the MessageListenerContainer
public C createListenerContainer(KafkaListenerEndpoint endpoint) {
    C instance = createContainerInstance(endpoint);
    JavaUtils.INSTANCE
            .acceptIfNotNull(endpoint.getId(), instance::setBeanName);
    if (endpoint instanceof AbstractKafkaListenerEndpoint) {
        // apply the Kafka settings: things like ack commit mode and batch consumption come
        // from configuration, which the factory holds; copy those settings onto the endpoint
        configureEndpoint((AbstractKafkaListenerEndpoint<K, V>) endpoint);
    }
    // the core step: wrap the annotated bean method into a MessagingMessageListenerAdapter,
    // then use the adapter's init parameters to create the message listener handed to the instance
    endpoint.setupListenerContainer(instance, this.messageConverter);
    // apply the concurrency setting
    initializeContainer(instance, endpoint);
    // custom configuration
    customizeContainer(instance);
    return instance;
}
At this point the Kafka configuration, the @KafkaListener attributes, the consumption method, and the bean have all been set on the listener container; the container can now start pulling messages from Kafka and invoking the method to process them.
Start directly from the message listener container ConcurrentMessageListenerContainer's start() method.
public final void start() {
    checkGroupId();
    synchronized (this.lifecycleMonitor) {
        if (!isRunning()) { // listening has not started yet, so the running state should be false here
            Assert.state(this.containerProperties.getMessageListener() instanceof GenericMessageListener,
                    () -> "A " + GenericMessageListener.class.getName() + " implementation must be provided");
            // abstract method, implemented by subclasses
            doStart();
        }
    }
}
@Override
protected void doStart() {
    if (!isRunning()) {
        // topic regex matching: match against all topics on the server; throw if none match
        checkTopics();
        ContainerProperties containerProperties = getContainerProperties();
        // the consumer group's partitions and offsets have been obtained by now
        TopicPartitionOffset[] topicPartitions = containerProperties.getTopicPartitions();
        if (topicPartitions != null && this.concurrency > topicPartitions.length) {
            // a warning is logged when the concurrency exceeds the number of partitions
            this.logger.warn(() -> "When specific partitions are provided, the concurrency must be less than or "
                    + "equal to the number of partitions; reduced from " + this.concurrency + " to "
                    + topicPartitions.length);
            // note: the concurrency is forcibly capped at the partition count, so there is no
            // need to worry about configuring more concurrent consumers than partitions
            this.concurrency = topicPartitions.length;
        }
        setRunning(true); // start listening
        // concurrency is the value parsed from @KafkaListener when the container was created;
        // it clearly controls how many KafkaMessageListenerContainer instances are produced
        for (int i = 0; i < this.concurrency; i++) {
            // create a KafkaMessageListenerContainer
            KafkaMessageListenerContainer<K, V> container =
                    constructContainer(containerProperties, topicPartitions, i);
            // configure interceptors, advice and the like for the child container; all null by default
            configureChildContainer(i, container);
            if (isPaused()) {
                container.pause();
            }
            container.start(); // start the task
            // all consumer threads are created by this same parent container; to stop consuming
            // a particular topic, the containers list is what gets operated on
            this.containers.add(container);
        }
    }
}
private KafkaMessageListenerContainer<K, V> constructContainer(ContainerProperties containerProperties,
        @Nullable TopicPartitionOffset[] topicPartitions, int i) {

    KafkaMessageListenerContainer<K, V> container;
    if (topicPartitions == null) {
        container = new KafkaMessageListenerContainer<>(this, this.consumerFactory, containerProperties); // NOSONAR
    }
    else { // if explicit partitions exist, each consumer gets an even share of them
        container = new KafkaMessageListenerContainer<>(this, this.consumerFactory, // NOSONAR
                containerProperties, partitionSubset(containerProperties, i));
    }
    return container;
}
This shows how the concurrency of @KafkaListener is implemented: the concurrency cannot exceed the number of partitions, and if it is lower than the partition count, the partitions are split evenly, so one consumer may own several partitions. Each loop iteration creates a KafkaMessageListenerContainer to consume the Kafka topic, as in the sketch below.
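For instance (a hedged sketch; topic and group names are made up): if "demo-topic" has three partitions, the declaration below produces three KafkaMessageListenerContainer instances, one consumer per partition. Asking for more concurrency than partitions would leave the extra consumers idle, or, when explicit partitions are supplied, be capped as shown above.

@KafkaListener(topics = "demo-topic", groupId = "demo-group", concurrency = "3")
public void listen(ConsumerRecord<String, String> record) {
    // each of the three child containers runs its own KafkaConsumer on its own thread
}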
KafkaMessageListenerContainer
Both KafkaMessageListenerContainer and ConcurrentMessageListenerContainer extend AbstractMessageListenerContainer and override doStart() to start their task, so doStart() is again the program entry.
protected void doStart() {
    if (isRunning()) {
        return;
    }
    if (this.clientIdSuffix == null) { // stand-alone container
        checkTopics();
    }
    ContainerProperties containerProperties = getContainerProperties();
    // check the ack mode; org.springframework.kafka.listener.ContainerProperties.AckMode
    // offers several modes beyond automatic acknowledgement
    checkAckMode(containerProperties);

    Object messageListener = containerProperties.getMessageListener();
    // the task executor looks like a thread-pool Executor, but it essentially
    // starts the task with a plain Thread
    AsyncListenableTaskExecutor consumerExecutor = containerProperties.getConsumerTaskExecutor();
    if (consumerExecutor == null) {
        consumerExecutor = new SimpleAsyncTaskExecutor(
                (getBeanName() == null ? "" : getBeanName()) + "-C-");
        containerProperties.setConsumerTaskExecutor(consumerExecutor);
    }
    GenericMessageListener<?> listener = (GenericMessageListener<?>) messageListener;
    // determine the listener type via an enum; the type marks how Kafka messages are
    // handled: batch or single record, manual or automatic commit
    ListenerType listenerType = determineListenerType(listener);
    // ListenerConsumer is an inner class from which anything Kafka-related can be fetched directly
    this.listenerConsumer = new ListenerConsumer(listener, listenerType);
    setRunning(true); // set the running state
    this.startLatch = new CountDownLatch(1);
    this.listenerConsumerFuture = consumerExecutor
            .submitListenable(this.listenerConsumer); // start the thread
    try {
        if (!this.startLatch.await(containerProperties.getConsumerStartTimeout().toMillis(), TimeUnit.MILLISECONDS)) {
            this.logger.error("Consumer thread failed to start - does the configured task executor "
                    + "have enough threads to support all containers and concurrency?");
            publishConsumerFailedToStart();
        }
    }
    catch (@SuppressWarnings(UNUSED) InterruptedException e) {
        Thread.currentThread().interrupt();
    }
}
The main logic here is to start a thread that handles the Kafka message polling. So let's go straight to ListenerConsumer's run().
@Override // NOSONAR complexity
public void run() {
    ListenerUtils.setLogOnlyMetadata(this.containerProperties.isOnlyLogRecordMetadata());
    // publish an event to the Spring container
    publishConsumerStartingEvent();
    this.consumerThread = Thread.currentThread();
    setupSeeks();
    KafkaUtils.setConsumerGroupId(this.consumerGroupId);
    this.count = 0;
    this.last = System.currentTimeMillis();
    // fetch the consumer group's partitions and offsets from Kafka and save them
    initAssignedPartitions();
    // publish an event
    publishConsumerStartedEvent();
    Throwable exitThrowable = null;
    while (isRunning()) {
        try {
            // the core: poll messages and invoke the method to process them
            pollAndInvoke();
        }
        // ... omitted
The pollAndInvoke method covers the whole pull-and-process flow. The method is too convoluted to walk through here; in essence it is about how the endpoint is used to produce the message handler and how the parameters are injected into the method.
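To see what that parameter injection buys you in practice, a typical listener signature looks like this (a hedged sketch; topic, group, and parameter names are illustrative):

@KafkaListener(topics = "demo-topic", groupId = "demo-group")
public void onMessage(@Payload String body,
        @Header(KafkaHeaders.RECEIVED_TOPIC) String topic,
        @Header(KafkaHeaders.OFFSET) long offset) {
    // the MessagingMessageListenerAdapter resolves each argument before invoking the method
}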
Summary
Combining the above analysis, a brief summary of how Spring Kafka turns a simple annotation into method-based message consumption: KafkaListenerAnnotationBeanPostProcessor, hooked into Spring's post-processor mechanism, scans every instantiated bean and finds the beans and methods carrying @KafkaListener; the parsed annotation content is set on a MethodKafkaListenerEndpoint, which is registered with the KafkaListenerEndpointRegistrar and saved there until execution time. The registrar then registers each endpoint with the KafkaListenerEndpointRegistry, which generates a ConcurrentMessageListenerContainer per endpoint; that container generates as many KafkaMessageListenerContainer instances as the concurrency dictates, and each of them finally uses a Thread to asynchronously start pulling Kafka messages and to call the bean method for processing.
Along the way I understood how topic partitions relate to concurrency, learned that Kafka consumption can be controlled, that a processing method's return value can be pushed to another topic, and discovered for the first time the @RetryableTopic retry mechanism and the DLT dead-letter topic. Without source code analysis, these would rarely come up in everyday work. The more source code I read, the more I feel it deepens my understanding of and feel for a framework.
Dynamic subscription
After reading that much code, we can more or less copy from the post-processor and implement a simple version of dynamic listening:
@Component
public class ListenerMessageCommand<K, V> implements CommandLineRunner {

    @Autowired
    private Cusmotd cusmotd;

    @Autowired
    private KafkaListenerEndpointRegistry endpointRegistry;

    @Autowired
    private KafkaListenerContainerFactory<?> kafkaListenerContainerFactory;

    private Logger logger = LoggerFactory.getLogger(ListenerMessageCommand.class);

    @Override
    public void run(String... args) throws Exception {
        // build the endpoint by hand, exactly as the annotation processor would
        MethodKafkaListenerEndpoint<K, V> endpoint = new MethodKafkaListenerEndpoint<>();
        endpoint.setBean(cusmotd);
        Method method = ReflectionUtils.findMethod(cusmotd.getClass(), "dis", ConsumerRecord.class);
        endpoint.setMethod(method);
        endpoint.setMessageHandlerMethodFactory(new DefaultMessageHandlerMethodFactory());
        endpoint.setId("tk.shengyifeng.custom#1");
        endpoint.setGroupId("test");
        endpoint.setTopicPartitions(new TopicPartitionOffset[0]);
        endpoint.setTopics("skdsk");
        endpoint.setClientIdPrefix("comuserd_");
        endpoint.setConcurrency(1);
        // register and start right away (the third argument is startImmediately)
        endpointRegistry.registerListenerContainer(endpoint, kafkaListenerContainerFactory, true);
        logger.info("register...............");
    }
}
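For completeness, the Cusmotd bean referenced above is not shown in the article; it would be an ordinary Spring bean whose dis method accepts a ConsumerRecord, roughly like this (a sketch):

@Component
public class Cusmotd {

    public void dis(ConsumerRecord<String, String> record) {
        // business handling for each record from the dynamically registered topic
        System.out.println(record.topic() + " -> " + record.value());
    }
}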
Having seen the complete code, we know that listening begins when the instance's start method is called after the KafkaListenerContainerFactory creates the container. We can also hold on to the listener container object and call its various APIs, for example to dynamically stop consumption of a topic:
@RestController
@RequestMapping("kafka")
public class KafkaController<K, V> {

    @Autowired
    private Cusmotd cusmotd;

    @Autowired
    private KafkaListenerContainerFactory<?> kafkaListenerContainerFactory;

    private Map<String, MessageListenerContainer> containerMap = new ConcurrentReferenceHashMap<>();

    @GetMapping("start/topic")
    public void startTopic(String topicName, String groupName) {
        MethodKafkaListenerEndpoint<K, V> endpoint = new MethodKafkaListenerEndpoint<>();
        endpoint.setBean(cusmotd);
        Method method = ReflectionUtils.findMethod(cusmotd.getClass(), "dis", ConsumerRecord.class);
        endpoint.setMethod(method);
        endpoint.setMessageHandlerMethodFactory(new DefaultMessageHandlerMethodFactory());
        endpoint.setId("tk.shengyifeng.custom#1");
        endpoint.setGroupId(groupName);
        endpoint.setTopicPartitions(new TopicPartitionOffset[0]);
        endpoint.setTopics(topicName);
        endpoint.setClientIdPrefix("comuserd_");
        endpoint.setConcurrency(1);

        // create the container directly from the factory and keep a handle on it
        MessageListenerContainer listenerContainer = kafkaListenerContainerFactory.createListenerContainer(endpoint);
        listenerContainer.start();
        containerMap.put(topicName, listenerContainer);
    }

    @GetMapping("stop/topic")
    public void stopTopic(String topicName) {
        if (containerMap.containsKey(topicName)) {
            containerMap.get(topicName).stop();
        }
    }
}
This simple HTTP interface allows external callers to dynamically subscribe to topics, and to stop consumption of topics that were subscribed this way.
Those who declare consumers with the @KafkaListener annotation need not be envious: Spring provides a mechanism to obtain the MessageListenerContainer too. From the code analysis above we know that listenerContainers inside KafkaListenerEndpointRegistry holds all container instances and that the registry exposes a method to fetch a container by id; since KafkaListenerEndpointRegistry itself is instantiated by Spring, you can simply inject it and look containers up by id.
To obtain the id easily, specify the id attribute explicitly on the annotation. If you don't, an id is generated with the default rule org.springframework.kafka.KafkaListenerEndpointContainer# plus an auto-incrementing index.
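A minimal sketch of such a lookup ("myListener" is a made-up id):

@Autowired
private KafkaListenerEndpointRegistry registry;

public void pauseListener() {
    // the id as declared via @KafkaListener(id = "myListener", ...)
    MessageListenerContainer container = registry.getListenerContainer("myListener");
    if (container != null) {
        container.pause(); // or container.stop() / container.start()
    }
}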
Spring Boot auto-configuration
You may be curious how the Kafka configuration in Spring Boot finds its way into kafkaListenerContainerFactory, since the factory is initialized by the Spring container and no constructor parameter injection is visible in the source we have read. If you want to dig deeper, look at KafkaAnnotationDrivenConfiguration and ConcurrentKafkaListenerContainerFactoryConfigurer.
@Configuration(proxyBeanMethods = false)
@ConditionalOnClass(EnableKafka.class)
class KafkaAnnotationDrivenConfiguration {

    private final KafkaProperties properties;

    private final RecordMessageConverter messageConverter;

    private final RecordFilterStrategy<Object, Object> recordFilterStrategy;

    private final BatchMessageConverter batchMessageConverter;

    private final KafkaTemplate<Object, Object> kafkaTemplate;

    private final KafkaAwareTransactionManager<Object, Object> transactionManager;

    private final ConsumerAwareRebalanceListener rebalanceListener;

    private final ErrorHandler errorHandler;

    private final BatchErrorHandler batchErrorHandler;

    private final AfterRollbackProcessor<Object, Object> afterRollbackProcessor;

    private final RecordInterceptor<Object, Object> recordInterceptor;

    KafkaAnnotationDrivenConfiguration(KafkaProperties properties,
            ObjectProvider<RecordMessageConverter> messageConverter,
            ObjectProvider<RecordFilterStrategy<Object, Object>> recordFilterStrategy,
            ObjectProvider<BatchMessageConverter> batchMessageConverter,
            ObjectProvider<KafkaTemplate<Object, Object>> kafkaTemplate,
            ObjectProvider<KafkaAwareTransactionManager<Object, Object>> kafkaTransactionManager,
            ObjectProvider<ConsumerAwareRebalanceListener> rebalanceListener, ObjectProvider<ErrorHandler> errorHandler,
            ObjectProvider<BatchErrorHandler> batchErrorHandler,
            ObjectProvider<AfterRollbackProcessor<Object, Object>> afterRollbackProcessor,
            ObjectProvider<RecordInterceptor<Object, Object>> recordInterceptor) {
        this.properties = properties;
        this.messageConverter = messageConverter.getIfUnique();
        this.recordFilterStrategy = recordFilterStrategy.getIfUnique();
        this.batchMessageConverter = batchMessageConverter
                .getIfUnique(() -> new BatchMessagingMessageConverter(this.messageConverter));
        this.kafkaTemplate = kafkaTemplate.getIfUnique();
        this.transactionManager = kafkaTransactionManager.getIfUnique();
        this.rebalanceListener = rebalanceListener.getIfUnique();
        this.errorHandler = errorHandler.getIfUnique();
        this.batchErrorHandler = batchErrorHandler.getIfUnique();
        this.afterRollbackProcessor = afterRollbackProcessor.getIfUnique();
        this.recordInterceptor = recordInterceptor.getIfUnique();
    }
As a matter of fact, Spring Boot's auto-configuration is implemented by the spring-boot-autoconfigure package. @ConditionalOnClass decides whether a configuration class is activated, so once you bring in the corresponding POM dependency the configuration class kicks in: the configuration values are bound into the KafkaProperties object, the properties are then set on the factory object, and the factory is instantiated into the Spring container. You will find that most auto-configurations work like this.
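In outline the pattern looks like this (a heavily simplified sketch of what spring-boot-autoconfigure does, not the real classes):

@Configuration(proxyBeanMethods = false)
@ConditionalOnClass(EnableKafka.class)                // only active when spring-kafka is on the classpath
@EnableConfigurationProperties(KafkaProperties.class) // binds the spring.kafka.* properties
class SimplifiedKafkaAutoConfiguration {

    @Bean
    @ConditionalOnMissingBean(ConsumerFactory.class)
    ConsumerFactory<Object, Object> kafkaConsumerFactory(KafkaProperties properties) {
        // turn the bound configuration into the consumer config map
        return new DefaultKafkaConsumerFactory<>(properties.buildConsumerProperties());
    }

    @Bean
    @ConditionalOnMissingBean(name = "kafkaListenerContainerFactory")
    ConcurrentKafkaListenerContainerFactory<Object, Object> kafkaListenerContainerFactory(
            ConsumerFactory<Object, Object> consumerFactory) {
        ConcurrentKafkaListenerContainerFactory<Object, Object> factory =
                new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(consumerFactory); // hand the bound settings to the factory
        return factory;
    }
}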