vivo Internet Server Team - Li Kui

1. Introduction

1.1 Introduction to RocketMQ

RocketMQ is a distributed message middleware open sourced by Alibaba. It supports sequential messages, scheduled messages, custom filters, load balancing, and pull/push messages. RocketMQ is mainly composed of four parts: Producer, Broker, Consumer, and NameServer. The Producer is responsible for producing messages, the Consumer is responsible for consuming messages, and the Broker is responsible for storing messages. NameServer acts as a name routing service. The overall architecture diagram is as follows:

[Figure: RocketMQ overall architecture]

  • Producer: responsible for producing messages, generally on the business-system side, and can be deployed as a cluster. RocketMQ provides several sending methods: synchronous, asynchronous, sequential, and one-way. Synchronous and asynchronous sending require the Broker to return an acknowledgment; one-way sending does not.
  • Consumer: responsible for consuming messages, generally handled asynchronously by backend systems, and can be deployed as a cluster. A consumer pulls messages from the Broker server and hands them to the application. Both pull and push consumption modes are provided.
  • Broker Server: responsible for storing and forwarding messages. It receives and stores messages sent by producers, serves consumers' pull requests, and stores message-related metadata, including consumer groups, consumption progress offsets, and topic/queue information.
  • Name Server: a naming service that provides routing information for messages. Producers and consumers look up the Broker IP list for each topic through it. Multiple NameServer instances form a cluster, but they are independent of each other and do not exchange information.

Based on the latest version of Apache RocketMQ, this article mainly describes the consumer mechanism of RocketMQ, and analyzes its startup process, pull/push mechanism, message ack mechanism, and the difference between timed messages and sequential messages.

1.2 Workflow

(1) Start NameServer.

After the NameServer starts, it listens on its port and waits for Brokers, Producers, and Consumers to connect; it acts as a routing control center.

(2) Start Broker.

The Broker keeps long connections with all NameServers and sends heartbeat packets regularly. A heartbeat packet contains the current Broker's information (IP, port, etc.) and all of its topic information. After successful registration, the NameServer cluster holds the mapping between topics and Brokers.

(3) Create a topic.

When creating a Topic, you need to specify which Brokers the Topic is stored on; a Topic can also be created automatically when a message is sent.

(4) Producer sends messages.

When the Producer starts, it first establishes a long connection with one NameServer in the cluster, obtains from it which Brokers the target Topic exists on, selects a queue from the queue list by polling, establishes a long connection with the Broker where that queue is located, and then sends the message to that Broker.

(5) Consumer consumes messages.

The Consumer establishes a long connection with one of the NameServers, obtains which Brokers the subscribed Topic exists on, and then establishes a connection channel directly with those Brokers and starts consuming messages.

2. Consumer startup process

The official consumer implementation code is as follows:

public class Consumer {
    public static void main(String[] args) throws InterruptedException, MQClientException {
        // Instantiate the consumer
        DefaultMQPushConsumer consumer = new DefaultMQPushConsumer("TestConsumer");
        // Set the NameServer address
        consumer.setNamesrvAddr("localhost:9876");
        // Subscribe to a topic, with a tag expression to filter the messages to consume
        consumer.subscribe("Test", "*");
        // Register a callback to handle messages pulled back from the broker
        consumer.registerMessageListener(new MessageListenerConcurrently() {
            @Override
            public ConsumeConcurrentlyStatus consumeMessage(List<MessageExt> msgs, ConsumeConcurrentlyContext context) {
                System.out.printf("%s Receive New Messages: %s %n", Thread.currentThread().getName(), msgs);
                // Mark the messages as successfully consumed
                return ConsumeConcurrentlyStatus.CONSUME_SUCCESS;
            }
        });
        // Start the consumer instance
        consumer.start();
        System.out.printf("Consumer Started.%n");
    }
}

Let's analyze what the consumer does at each stage of startup.

2.1 Instantiate the consumer

The first step is to instantiate the consumer. The default push consumer mode is used here. The constructor parameter is the consumer group: consumers in the same group consume the same type of messages; if no group is specified, the default group is used. Inside the constructor, a DefaultMQPushConsumerImpl object is instantiated, which is the main implementation class for the subsequent consumption logic.

// Instantiate the consumer
DefaultMQPushConsumer consumer = new DefaultMQPushConsumer("TestConsumer");

DefaultMQPushConsumerImpl is instantiated through DefaultMQPushConsumer and is the main implementation class of the consumption logic.

public DefaultMQPushConsumer(final String consumerGroup, RPCHook rpcHook,
    AllocateMessageQueueStrategy allocateMessageQueueStrategy) {
    this.consumerGroup = consumerGroup;
    this.allocateMessageQueueStrategy = allocateMessageQueueStrategy;
    defaultMQPushConsumerImpl = new DefaultMQPushConsumerImpl(this, rpcHook);
}

2.2 Setting up NameServer and subscribing to topic process

// Set the NameServer address
consumer.setNamesrvAddr("localhost:9876");
// Subscribe to one or more topics, with a tag expression to filter the messages to consume
consumer.subscribe("Test", "*");

2.2.1 Add tags

First we set the NameServer address, which is the address of your name service cluster, similar to a ZooKeeper cluster address. The example uses a single local instance; once a cluster is built, it can be set to the cluster address. Next, we subscribe to a topic. Messages under a topic can be classified with different tags, but currently only "||" is supported as the connector, for example: "tag1 || tag2 || tag3". Under the hood, when the subscription data is constructed, the source code splits the string by "||", as shown below:

 public static SubscriptionData buildSubscriptionData(final String consumerGroup, String topic,
    String subString) throws Exception {
    SubscriptionData subscriptionData = new SubscriptionData();
    subscriptionData.setTopic(topic);
    subscriptionData.setSubString(subString);
 
    if (null == subString || subString.equals(SubscriptionData.SUB_ALL) || subString.length() == 0) {
        subscriptionData.setSubString(SubscriptionData.SUB_ALL);
    } else {
        String[] tags = subString.split("\\|\\|");
        if (tags.length > 0) {
            for (String tag : tags) {
                if (tag.length() > 0) {
                    String trimString = tag.trim();
                    if (trimString.length() > 0) {
                        subscriptionData.getTagsSet().add(trimString);
                        subscriptionData.getCodeSet().add(trimString.hashCode());
                    }
                }
            }
        } else {
            throw new Exception("subString split error");
        }
    }
 
    return subscriptionData;
}
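
For reference, a multi-tag subscription written against this parsing logic would look like the single line below (the topic and tag names are illustrative):

// Subscribe to several tags under one topic; only "||" is supported as the separator.
consumer.subscribe("Test", "tag1 || tag2 || tag3");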

2.2.2 Send heartbeat to Broker

After the subscription topic and tags are constructed, they are put into a ConcurrentMap, and the sendHeartbeatToAllBrokerWithLock() method is called to send heartbeats and upload the filter class to the broker cluster (this step is also performed during producer startup). As follows:

 public void sendHeartbeatToAllBrokerWithLock() {
    if (this.lockHeartbeat.tryLock()) {
        try {
            this.sendHeartbeatToAllBroker();
            this.uploadFilterClassSource();
        } catch (final Exception e) {
            log.error("sendHeartbeatToAllBroker exception", e);
        } finally {
            this.lockHeartbeat.unlock();
        }
    } else {
        log.warn("lock heartBeat, but failed.");
    }
}

First, it sends heartbeats to the broker cluster; a lock is acquired during this process. The sendHeartbeatToAllBroker method builds the heartbeat data heartbeatData, then traverses the consumer and producer tables and adds their information to heartbeatData. If there are consumers or producers, it traverses the brokerAddrTable and sends a heartbeat to each broker address, which is essentially a request to the corresponding address to detect whether the broker is alive.

 this.mQClientAPIImpl.sendHearbeat(addr, heartbeatData, 3000);

2.2.3 Upload the filter class to FilterServer

After that, the uploadFilterClassSource() method is executed; only push mode has this step. It loops through the subscription data SubscriptionData; if the subscription uses class-filter mode, the uploadFilterClassToAllFilterServer() method is called to upload the user-defined message filter implementation class to the FilterServer.

 private void uploadFilterClassSource() {
    Iterator<Entry<String, MQConsumerInner>> it = this.consumerTable.entrySet().iterator();
    while (it.hasNext()) {
        Entry<String, MQConsumerInner> next = it.next();
        MQConsumerInner consumer = next.getValue();
        if (ConsumeType.CONSUME_PASSIVELY == consumer.consumeType()) {
            Set<SubscriptionData> subscriptions = consumer.subscriptions();
            for (SubscriptionData sub : subscriptions) {
                if (sub.isClassFilterMode() && sub.getFilterClassSource() != null) {
                    final String consumerGroup = consumer.groupName();
                    final String className = sub.getSubString();
                    final String topic = sub.getTopic();
                    final String filterClassSource = sub.getFilterClassSource();
                    try {
                        this.uploadFilterClassToAllFilterServer(consumerGroup, className, topic, filterClassSource);
                    } catch (Exception e) {
                        log.error("uploadFilterClassToAllFilterServer Exception", e);
                    }
                }
            }
        }
    }
}

The role of the filter class: the consumer can upload a Class file to the FilterServer. When the Consumer pulls messages from the FilterServer, the FilterServer forwards the request to the Broker; after receiving the messages from the Broker, the FilterServer filters them according to the logic in the uploaded filter class, and only then sends the filtered messages to the Consumer. The user can customize the message-filtering implementation class.
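
As a rough illustration, such a filter class is a sketch like the one below, assuming the legacy class-filter API (org.apache.rocketmq.common.filter.MessageFilter) used by the FilterServer in older RocketMQ versions; the "region" property and its value are made up for the example:

import org.apache.rocketmq.common.filter.FilterContext;
import org.apache.rocketmq.common.filter.MessageFilter;
import org.apache.rocketmq.common.message.MessageExt;

// Sketch of a user-defined filter class; its source is uploaded to the FilterServer,
// which runs match() on every message before forwarding it to the consumer.
public class MyMessageFilter implements MessageFilter {
    @Override
    public boolean match(MessageExt msg, FilterContext context) {
        // Illustrative rule: only pass messages whose "region" user property equals "cn-east".
        String region = msg.getUserProperty("region");
        return "cn-east".equals(region);
    }
}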

2.3 Register the callback implementation class

Next comes the registration of the callback implementation class in the code. In pull mode you do not need to implement it; push mode requires it (the difference between the two is discussed later). It is mainly used to obtain messages from the broker in real time. There are two consumption context types here, corresponding to different consumption types.

ConsumeConcurrentlyContext: the context for delayed messages, i.e. scheduled messages. There is no delay by default; a delay level can be set, and each level corresponds to a fixed time. The delay time cannot be customized freely in RocketMQ; delay levels start from 1, and the corresponding time intervals are as follows:

"1s 5s 10s 30s 1m 2m 3m 4m 5m 6m 7m 8m 9m 10m 20m 30m 1h 2h";

ConsumeOrderlyContext: the context for sequential messages, which controls the order in which messages are consumed. After the producer sets the sharding routing rule, messages with the same key land only on the specified queue; during consumption, the queue holding the sequential messages is locked to guarantee message order.

2.4 Consumer startup

Let's first look at the process of consumer startup, as follows:

[Figure: consumer startup flow]

(1) this.checkConfig(): first, check that the consumption configuration items exist, including the consumer group, message model (cluster or broadcast), subscription data, message listener, and so on. If any is missing, an exception is thrown.

(2) copySubscription(): Construct the topic subscription information SubscriptionData and add it to the subscription information of the RebalanceImpl load balancing method.

(3) getAndCreateMQClientInstance(): Initialize the MQ client instance.

(4) offsetStore.load(): create and load the consumption progress store offsetStore according to the message model: in BROADCASTING (broadcast) mode every consumer in the consumer group consumes each message; in CLUSTERING (cluster) mode, the default, each message is consumed only once by the group.

 switch (this.defaultMQPushConsumer.getMessageModel()) {
    case BROADCASTING:
        this.offsetStore = new LocalFileOffsetStore(this.mQClientFactory, this.defaultMQPushConsumer.getConsumerGroup());
        break;
    case CLUSTERING:
        this.offsetStore = new RemoteBrokerOffsetStore(this.mQClientFactory, this.defaultMQPushConsumer.getConsumerGroup());
        break;
    default:
        break;
}

Different modes can be set through setMessageModel. In broadcast mode, consumers in the same consumer group are independent of each other, and the consumption progress is stored locally on each consumer; in cluster mode, the same message is consumed only once by the consumer group, load balancing is involved, and the consumption progress is shared across the whole consumer group.
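
For reference, the message model is switched before start(); a minimal sketch (CLUSTERING is the default, so this call is only needed for broadcast consumption):

// Broadcast mode: every consumer instance in the group receives every message,
// and the consumption progress is stored locally on each instance.
consumer.setMessageModel(MessageModel.BROADCASTING);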

(5) consumeMessageService.start(): Instantiate and start according to different message monitoring types. There are delayed messages and sequential messages.

Here we mainly talk about sequential messages, which RocketMQ also implements for us. At startup, if the consumer is in cluster mode with an orderly listener, a scheduled task is started that periodically sends a batch-lock request to the broker, locking the message queues currently used for sequential consumption. Sequential messages are sent to only one queue because the producer specifies the sharding strategy when producing the message.

The scheduled task sends a batch lock to lock the current sequential message queue.

 public void start() {
        if (MessageModel.CLUSTERING.equals(ConsumeMessageOrderlyService.this.defaultMQPushConsumerImpl.messageModel())) {
            this.scheduledExecutorService.scheduleAtFixedRate(new Runnable() {
                @Override
                public void run() {
                    ConsumeMessageOrderlyService.this.lockMQPeriodically();
                }
            }, 1000 * 1, ProcessQueue.REBALANCE_LOCK_INTERVAL, TimeUnit.MILLISECONDS);
        }
    }

The lock request for the queues is sent to the broker, and the broker returns the set of successfully locked queues, lockOKMQSet. The specific implementation of sequential messages is covered in Section 4 below.

(6) mQClientFactory.registerConsumer(): MQClientInstance registers the consumer and starts the MQClientInstance. If the registration is not successful, the consumer service will end.

(7) mQClientFactory.start(): Finally, the following services will be started: remote client, scheduled task, pull message service, load balancing service, push message service, and then change the status to running.

 switch (this.serviceState) {
    case CREATE_JUST:
        this.serviceState = ServiceState.START_FAILED;
        // If not specified,looking address from name server
        if (null == this.clientConfig.getNamesrvAddr()) {
            this.mQClientAPIImpl.fetchNameServerAddr();
        }
        // Start request-response channel
        this.mQClientAPIImpl.start();
        // Start various schedule tasks
        this.startScheduledTask();
        // Start pull service
        this.pullMessageService.start();
        // Start rebalance service
        this.rebalanceService.start();
        // Start push service
        this.defaultMQProducer.getDefaultMQProducerImpl().start(false);
        log.info("the client factory [{}] start OK", this.clientId);
        this.serviceState = ServiceState.RUNNING;
        break;
    case RUNNING:
        break;
    case SHUTDOWN_ALREADY:
        break;
    case START_FAILED:
        throw new MQClientException("The Factory object[" + this.getClientId() + "] has been created before, and failed.", null);
    default:
        break;
}

Once all of these have started, the whole consumer is up and can consume messages sent by the producer. So how are messages actually consumed, and what is the difference between the different consumption modes?

3. Consumption in pull/push mode


3.1 pull mode - DefaultMQPullConsumer

Pull consumption: the application actively calls the consumer's pull method to pull messages from the Broker server. The initiative is controlled by the application, and the consumption offset can be specified. The [pseudo code] is as follows:

DefaultMQPullConsumer consumer = new DefaultMQPullConsumer("TestConsumer");
// Set the NameServer address
consumer.setNamesrvAddr("localhost:9876");
// Start the consumer instance
consumer.start();
// Fetch all message queues under the topic, looked up from the NameServer by topic
Set<MessageQueue> mqs = consumer.fetchSubscribeMessageQueues("Test");
for (MessageQueue queue : mqs) {
    // Fetch the consumption offset of the current queue; fromStore: true reads it from the broker, false reads it locally
    long offset = consumer.fetchConsumeOffset(queue, true);
    boolean hasMore = true;
    while (hasMore) {
        try {
            // 2nd argument: the tag expression under the topic
            // 3rd argument: the offset to start consuming from
            // 4th argument: the maximum number of messages per pull
            PullResult pullResult = consumer.pullBlockIfNotFound(queue, "*", offset, 32);
            // code omitted: record the message offset
            offset = pullResult.getNextBeginOffset();
            hasMore = offset < pullResult.getMaxOffset();
            // code omitted: consume the messages here
        } catch (Exception e) {
            e.printStackTrace();
            System.out.println("pull message failed");
            break;
        }
    }
}

It can be seen that we are actively pulling the message queues corresponding to topics, and then traversing them to obtain the current consumption progress and consume them.

3.2 push mode - DefaultMQPushConsumer

In this mode, the Broker actively pushes data to the consumer after receiving it. This consumption mode is generally more real-time, and it is generally the recommended approach. For a concrete example, see the official demo at the beginning of Section 2.

Push mode is also implemented on top of the pull mechanism. First, after the consumer in Section 2.4 is started, the pull message service pullMessageService and the load balancing service rebalanceService are started; once they are running, there are always threads working on consumption.

case CREATE_JUST:
    //......
    // Start pull service
    this.pullMessageService.start();
    // Start rebalance service
    this.rebalanceService.start();
    //.......
    this.serviceState = ServiceState.RUNNING;
    break;
case RUNNING:

The doRebalance() method is called here to perform load balancing. By default it runs every 20s and polls all topics subscribed by this instance.

public class RebalanceService extends ServiceThread {
    // initialization omitted ...
 
    @Override
    public void run() {
        log.info(this.getServiceName() + " service started");
 
        while (!this.isStopped()) {
            this.waitForRunning(waitInterval);
            // do load balancing
            this.mqClientFactory.doRebalance();
        }
 
        log.info(this.getServiceName() + " service end");
    }
 
    @Override
    public String getServiceName() {
        return RebalanceService.class.getSimpleName();
    }
}

Then rebalance is done based on each topic and whether it is in sequential message mode.

Specifically, the message queues and consumer IDs under the topic are sorted first; then the average allocation algorithm for message queues calculates the queues this consumer should pull from. The allocated queue set is compared against the processQueueTable: queues that are no longer allocated to this consumer or have expired are removed.
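
To make the averaging idea concrete, here is a simplified sketch of how queues can be divided evenly among the sorted consumer IDs; it illustrates the strategy and is not the exact AllocateMessageQueueAveragely source:

// Returns the queues the current consumer should pull from, given the sorted
// consumer id list and the sorted queue list (both identical on every client).
static List<MessageQueue> allocateAverage(String currentCid, List<String> cidAll, List<MessageQueue> mqAll) {
    List<MessageQueue> result = new ArrayList<>();
    int index = cidAll.indexOf(currentCid);
    if (index < 0) {
        return result; // this consumer is not yet in the group view
    }
    int mod = mqAll.size() % cidAll.size();
    // the first "mod" consumers get one extra queue when queues don't divide evenly
    int average = mqAll.size() <= cidAll.size() ? 1
        : (mod > 0 && index < mod ? mqAll.size() / cidAll.size() + 1 : mqAll.size() / cidAll.size());
    int start = (mod > 0 && index < mod) ? index * average : index * average + mod;
    int range = Math.min(average, mqAll.size() - start);
    for (int i = 0; i < range; i++) {
        result.add(mqAll.get(start + i));
    }
    return result;
}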

public void doRebalance(final boolean isOrder) {
    Map<String, SubscriptionData> subTable = this.getSubscriptionInner();
    if (subTable != null) {
        for (final Map.Entry<String, SubscriptionData> entry : subTable.entrySet()) {
            final String topic = entry.getKey();
            try {
                // rebalance based on each topic and whether it uses the sequential message mode
                this.rebalanceByTopic(topic, isOrder);
            } catch (Throwable e) {
                if (!topic.startsWith(MixAll.RETRY_GROUP_TOPIC_PREFIX)) {
                    log.warn("rebalanceByTopic Exception", e);
                }
            }
        }
    }
 
    this.truncateMessageQueueNotMyTopic();
}

In rebalanceByTopic, the updateProcessQueueTableInRebalance() method is executed in both broadcast and cluster modes, and finally the requests are distributed by dispatchPullRequest. The pull requests are put into the pull request queue pullRequestQueue through the executePullRequestImmediately() method. Note that in pull mode the dispatchPullRequest() method is actually an empty implementation; the two modes differ greatly here. The push mode implementation is as follows:

 @Override
 public void dispatchPullRequest(List<PullRequest> pullRequestList) {
     for (PullRequest pullRequest : pullRequestList) {
         this.defaultMQPushConsumerImpl.executePullRequestImmediately(pullRequest);
         log.info("doRebalance, {}, add a new pull request {}", consumerGroup, pullRequest);
     }
 }

Then, in PullMessageService, because the consumer has started successfully, the PullMessageService thread keeps taking pull requests from the pullRequestQueue in real time.

 @Override
  public void run() {
      log.info(this.getServiceName() + " service started");
 
      while (!this.isStopped()) {
          try {
              PullRequest pullRequest = this.pullRequestQueue.take();
              if (pullRequest != null) {
                  this.pullMessage(pullRequest);
              }
          } catch (InterruptedException e) {
          } catch (Exception e) {
              log.error("Pull Message Service Run Method exception", e);
          }
      }
 
      log.info(this.getServiceName() + " service end");
  }

The pulled pull request will call the pullMessage() method through the message listener class of DefaultMQPushConsumerImpl.

 private void pullMessage(final PullRequest pullRequest) {
     final MQConsumerInner consumer = this.mQClientFactory.selectConsumer(pullRequest.getConsumerGroup());
     if (consumer != null) {
         DefaultMQPushConsumerImpl impl = (DefaultMQPushConsumerImpl) consumer;
         impl.pullMessage(pullRequest);
     } else {
         log.warn("No matched consumer for the PullRequest {}, drop it", pullRequest);
     }
 }

pullKernelImpl() inside pullMessage() takes a PullCallback that handles the callback for the pulled messages; it processes them through the submitConsumeRequest() method. In short, the listener in push mode is notified through this thread callback.

// Pull callback
PullCallback pullCallback = new PullCallback() {
    @Override
    public void onSuccess(PullResult pullResult) {
        if (pullResult != null) {
            pullResult = DefaultMQPushConsumerImpl.this.pullAPIWrapper.processPullResult(pullRequest.getMessageQueue(), pullResult,
                subscriptionData);
 
            switch (pullResult.getPullStatus()) {
                case FOUND:
                    // omitted ... update the consumption offset
                    DefaultMQPushConsumerImpl.this.consumeMessageService.submitConsumeRequest(
                        pullResult.getMsgFoundList(),
                        processQueue,
                        pullRequest.getMessageQueue(),
                        dispatchToConsume);
This method has different implementations for different consumption modes, but all of them construct a consumption request ConsumeRequest, which has a run() method; once constructed, it is handed to the listener.

// invoke the message listener
status = listener.consumeMessage(Collections.unmodifiableList(msgs), context);

Remember the listener callback handler registered in our earlier example?

If we click into the consumeMessage method above and check where it is implemented in the source code, we find that we are back at the callback implementation class registered in Section 2.3. The whole chain is now complete: the listener receives the pushed messages and hands them to the business consumption logic. Below is the message callback handler we defined ourselves.

// Register a callback to handle messages pulled back from the broker
consumer.registerMessageListener(new MessageListenerConcurrently() {
    @Override
    public ConsumeConcurrentlyStatus consumeMessage(List<MessageExt> msgs, ConsumeConcurrentlyContext context) {
        System.out.printf("%s Receive New Messages: %s %n", Thread.currentThread().getName(), msgs);
        // Mark the messages as successfully consumed
        return ConsumeConcurrentlyStatus.CONSUME_SUCCESS;
    }
});

3.3 Summary

The difference between push mode and pull mode: in push mode, during load balancing the pullRequest is put into the pullRequestQueue, the PullMessageService thread takes it out in real time, stores the messages into the ProcessQueue, and notifies the push-mode listener through a thread callback, so messages are received in real time from request dispatch to reception. In pull mode, the consumer actively pulls the specified messages, and the consumption progress must be specified by the application.

Which mode is more appropriate for implementing our business logic? Let's first summarize their characteristics:

Common ground:

  1. The two share the same underlying mechanism; the push mode is also implemented on top of the pull mode.
  2. The pull mode requires the program to actively pull messages from the broker through the consumer, while the push mode only requires us to provide a listener to receive messages in real time.

Advantages:

The push mode uses long-polling blocking to obtain messages, so it is highly real-time;

In push mode, RocketMQ handles the details of fetching messages, so it is relatively simple and convenient to use;

The pull mode can specify the consumption progress and consume exactly as much as you want, which gives great flexibility.

Disadvantages:

  1. In push mode, when the consumer's capacity is far lower than the producer's, messages accumulate on the consumer side;
  2. The pull mode has poor real-time performance, and its polling frequency is not easy to set;
  3. The pull interval is hard to tune: if it is too short, many invalid pull requests generate RPC overhead and hurt MQ's overall network performance; if it is too long, real-time performance suffers.

Applicable scenarios:

  1. When the server side produces a large volume of messages and the consumer side's processing is complex and its consumption capacity is relatively low, the pull mode is suitable;
  2. For scenarios with high real-time requirements on data, the push mode is more suitable.

Are you now clear on which model to use in your business?

4. Sequential messages

4.1 Problems in implementing sequential message sending in MQ

(1) Messages are generally sent to different queues (partition queues) by polling; when consuming, messages are pulled from multiple queues and the Broker is unaware of any ordering. In this case, neither sending nor consumption order is guaranteed.

(2) When messages are sent asynchronously, they are not sent one after another, so it cannot be guaranteed that messages arrive at the Broker in the order they were sent.

There are many steps from sending a message to storing it and finally consuming it. So how can we use sequential messages in our business? Let's break it down step by step.

4.2 Key points of implementing sequential messages in MQ

Since order cannot be guaranteed across multiple brokers, if we can make messages go to the same queue in sending order and pull them from that queue in order when consuming, then order is guaranteed. Set the sharding routing rule when sending so that messages with the same key land only on the specified queue, then lock that queue during consumption so that the messages on it are consumed in FIFO order. So, can we satisfy the following three conditions?

1) Sequential sending of messages: messages sent by multiple threads cannot be guaranteed to be ordered. Therefore, the business side must send messages for the same business key (such as an order) sequentially within one thread, sending the next message only after the previous one has been sent successfully. On the MQ side, the send must be synchronous; asynchronous sending cannot guarantee order.

// Synchronous sending, in order within one thread; the asynchronous form would be producer.send(msg, new SendCallback() {...})
SendResult sendResult = producer.send(msg, new MessageQueueSelector() {//…}

2) Sequential storage of messages: a topic in MQ has multiple queues. To store messages in order, messages with the same business key must be sent to the same queue. On the MQ side, a MessageQueueSelector is used to select the target queue; that is, you set a routing rule for the business key, for example taking the hash of the business field modulo the number of queues and always sending the message to that queue.

// Use the "%" operation so that orders whose id has the same remainder are routed to the same queue; a custom routing rule can also be used
long index = id % mqs.size();
return mqs.get((int) index);
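
Putting points 1) and 2) together, a minimal sketch of ordered sending could look like the following; the producer group, topic, tag, and orderId are illustrative:

public class OrderedProducer {
    public static void main(String[] args) throws Exception {
        DefaultMQProducer producer = new DefaultMQProducer("OrderProducerGroup");
        producer.setNamesrvAddr("localhost:9876");
        producer.start();

        long orderId = 10001L;
        Message msg = new Message("OrderTopic", "create",
            ("order-" + orderId).getBytes(StandardCharsets.UTF_8));

        // Synchronous send: the selector routes all messages of the same orderId
        // to the same queue, so they are stored in sending order.
        SendResult sendResult = producer.send(msg, new MessageQueueSelector() {
            @Override
            public MessageQueue select(List<MessageQueue> mqs, Message message, Object arg) {
                long id = (Long) arg;
                return mqs.get((int) (id % mqs.size()));
            }
        }, orderId);

        System.out.printf("send %s to %s%n", sendResult.getSendStatus(), sendResult.getMessageQueue());
        producer.shutdown();
    }
}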

3) Sequential consumption of messages: to consume messages in order, the same queue must be consumed by only one consumer, so locking the consumption queue on the broker is unavoidable. In addition, within that consumer only one thread can consume the queue at a time. RocketMQ has already implemented this for us, as shown below.

List<PullRequest> pullRequestList = new ArrayList<PullRequest>();
for (MessageQueue mq : mqSet) {
    if (!this.processQueueTable.containsKey(mq)) {
        if (isOrder && !this.lock(mq)) {
            log.warn("doRebalance, {}, add a new mq failed, {}, because lock failed", consumerGroup, mq);
            continue;
        }
 
        //.... omitted
    }
}

After the consumer rebalances and is allocated its consumption queues, it needs to initiate message pull requests to the MQ server. The code is in RebalanceImpl#updateProcessQueueTableInRebalance(). For sequential messages, MQ makes the check above: the consumer client first sends a lock request for the messageQueue to the broker, and only if the lock succeeds is a pullRequest created to pull messages. The pullRequest here is the same request object used in the pull/push flow, and the updateProcessQueueTableInRebalance method was also mentioned in the consumer startup process earlier.

The specific locking logic is as follows:

 public boolean lock(final MessageQueue mq) {
     FindBrokerResult findBrokerResult = this.mQClientFactory.findBrokerAddressInSubscribe(mq.getBrokerName(), MixAll.MASTER_ID, true);
     if (findBrokerResult != null) {
         LockBatchRequestBody requestBody = new LockBatchRequestBody();
         requestBody.setConsumerGroup(this.consumerGroup);
         requestBody.setClientId(this.mQClientFactory.getClientId());
         requestBody.getMqSet().add(mq);
 
         try {
             Set<MessageQueue> lockedMq =
                 this.mQClientFactory.getMQClientAPIImpl().lockBatchMQ(findBrokerResult.getBrokerAddr(), requestBody, 1000);
             for (MessageQueue mmqq : lockedMq) {
                 ProcessQueue processQueue = this.processQueueTable.get(mmqq);
                 if (processQueue != null) {
                     processQueue.setLocked(true);
                     processQueue.setLastLockTimestamp(System.currentTimeMillis());
                 }
             }
 
             boolean lockOK = lockedMq.contains(mq);
             log.info("the message queue lock {}, {} {}",
                 lockOK ? "OK" : "Failed",
                 this.consumerGroup,
                 mq);
             return lockOK;
         } catch (Exception e) {
             log.error("lockBatchMQ exception, " + mq, e);
         }
     }
 
     return false;
 }

As you can see, the lockBatchMQ method is called to send the lock request. If the message processing queue is obtained successfully, it is marked as locked and the lock is reported as successful; when the lock is held, only one thread consumes the queue at a time. If locking fails, the consumer waits 1000 ms and tries again to ask the broker to lock the messageQueue, and resubmits the consumption request once the lock succeeds.

So, is this locking method very similar to the distributed locks we usually use? It's up to you to design and implement what would you do?
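
For completeness, on the application side sequential consumption is expressed by registering a MessageListenerOrderly instead of a MessageListenerConcurrently; a minimal sketch (the group and topic names are illustrative):

DefaultMQPushConsumer consumer = new DefaultMQPushConsumer("OrderConsumerGroup");
consumer.setNamesrvAddr("localhost:9876");
consumer.subscribe("OrderTopic", "*");
// MessageListenerOrderly: messages from one locked queue are consumed by a single
// thread at a time, relying on the broker-side queue lock described above.
consumer.registerMessageListener(new MessageListenerOrderly() {
    @Override
    public ConsumeOrderlyStatus consumeMessage(List<MessageExt> msgs, ConsumeOrderlyContext context) {
        for (MessageExt msg : msgs) {
            System.out.printf("%s consume %s%n", Thread.currentThread().getName(),
                new String(msg.getBody(), StandardCharsets.UTF_8));
        }
        return ConsumeOrderlyStatus.SUCCESS;
    }
});
consumer.start();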

5. Message ack mechanism

5.1 Message Consumption Failure Handling

A message is consumed by the consumer, but how is successful consumption guaranteed? And what happens when consumption fails?

Success is controlled by the user: consumption is considered successful only if the user confirms it; otherwise the message is re-delivered.

RocketMQ actually uses the ACK mechanism to retry and notify failed messages. The specific process is as follows:

[Figure: message consumption retry flow and retry intervals]

Whether consumption succeeds is controlled by the user. The Consumer receives the message through the listener callback and returns ConsumeConcurrentlyStatus.CONSUME_SUCCESS to indicate success. On failure it returns ConsumeConcurrentlyStatus.RECONSUME_LATER (retry consumption); RocketMQ then treats the message as failed and re-delivers it to the ConsumerGroup after a delay (10s by default, configurable). The relationship between the retry count and the interval is shown in the figure above. If it keeps failing and reaches a certain number of retries (16 by default), the message enters the DLQ dead letter queue and is no longer delivered; at that point manual monitoring and intervention is needed.
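
A sketch of how the two statuses map to this flow, inside the concurrent listener (the business processing is omitted):

consumer.registerMessageListener(new MessageListenerConcurrently() {
    @Override
    public ConsumeConcurrentlyStatus consumeMessage(List<MessageExt> msgs, ConsumeConcurrentlyContext context) {
        try {
            // ... business processing
            return ConsumeConcurrentlyStatus.CONSUME_SUCCESS;   // ack: the offset advances
        } catch (Exception e) {
            // nack: the broker re-delivers after the retry delay; after the maximum
            // number of retries the message enters the dead letter queue (%DLQ%<group>)
            return ConsumeConcurrentlyStatus.RECONSUME_LATER;
        }
    }
});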

5.2 Problems caused by message retransmission

A big problem caused by message re-delivery is that RocketMQ cannot guarantee that a message is consumed only once, so developers need to handle this in the business logic.
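
A common way to handle this is idempotent consumption, deduplicating by a business key carried in the message; a minimal sketch (the in-memory set stands in for a unique database constraint or a Redis SETNX in real systems, and getKeys() assumes the producer set a business key on the message):

private static final Set<String> PROCESSED = ConcurrentHashMap.newKeySet();

public ConsumeConcurrentlyStatus consumeMessage(List<MessageExt> msgs, ConsumeConcurrentlyContext context) {
    for (MessageExt msg : msgs) {
        String bizKey = msg.getKeys();
        // add() returns false if the key was already processed: skip the duplicate
        if (bizKey != null && !PROCESSED.add(bizKey)) {
            continue;
        }
        // ... business processing for a first-time message
    }
    return ConsumeConcurrentlyStatus.CONSUME_SUCCESS;
}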


6. Summary

This article mainly introduced the consumer startup process of RocketMQ. Combining the official source code and examples, it walked step by step through how consumers work during startup and message consumption, analyzed sequential messages and the push/pull modes in detail in the context of everyday business work, and looked at the problems caused by consumption failure and message re-delivery.

For myself, I hope that by actively reading the source code I can understand the startup principles and learn from the excellent solutions in it, such as pull/push and sequential messages: how push mode pulls messages in real time, how sequential messages are guaranteed, and then how to handle such problems in daily work, for example keeping the consumption order consistent with the storage order, or whether you could implement the queue locking yourself as a distributed lock. There are also many guiding questions in the article; I hope they provoke the reader's own thinking and give a more intuitive understanding of the whole consumer startup and message consumption process. Some technical details are not explained in depth due to space; you are welcome to discuss and exchange ideas together~
