Author: Sun Yuanyuan | Senior Developer at Ping An Life Insurance
Why choose RocketMQ
First, why did we choose RocketMQ? In technology selection, the application scenario should be the first consideration: only when the scenario is clear do we have concrete goals and a standard of measurement for the selection process. The common capabilities of message middleware, such as asynchrony, decoupling, and peak shaving and valley filling, will not be covered one by one here; these capabilities determine whether you need message middleware in your scenario at all, and then which message middleware to choose.
Synchronous double write to ensure business data is safe, reliable and not lost
Our positioning when building the message middleware platform is to transmit business data for business systems, and a critical requirement for business data is that it must not be lost. So the first reason for choosing RocketMQ is its synchronous double write mechanism: a send is reported successful only after the message has been flushed to disk on both the master and the slave. Under synchronous double write, write performance is naturally lower than with asynchronous replication and asynchronous flushing, roughly a 20% drop. Even so, with a single master-slave pair, 1 KB messages can still reach 80,000+ TPS, which fully meets the requirements of most business scenarios, and the lost performance can be recovered by scaling out brokers horizontally. Therefore, synchronous double write satisfies our business needs. A minimal sketch of the relevant broker settings follows.
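For reference, here is a minimal sketch of the broker configuration that enables this behavior. The cluster and broker names are placeholders; only the two keys brokerRole and flushDiskType are the point of the example.

```properties
# Master broker settings (names are placeholders)
brokerClusterName = DefaultCluster
brokerName = broker-a
brokerId = 0
# Synchronous double write: a send succeeds only after the slave has received the message
brokerRole = SYNC_MASTER
# Synchronous flush: the message is persisted to disk before the send is acknowledged
flushDiskType = SYNC_FLUSH
```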
In multi-topic application scenarios, the performance is still strong
The second point is that the business systems have a wide range of usage scenarios, which means a large number of topics will be created. It is therefore necessary to check whether the message middleware still performs well with many topics. In my own test, writing 1 KB messages randomly across 10,000 topics, a single broker could still reach about 20,000 TPS, which is much stronger than Kafka in the same scenario. This strong performance with many topics is the second reason we chose RocketMQ, and it is determined by the underlying file storage structure. Message middleware such as Kafka and RocketMQ achieves near-memory read and write speed mainly through sequential file I/O and memory mapping. In RocketMQ, the messages of all topics are written to the same commitLog file, whereas Kafka organizes messages with the topic as the basic unit, each independent of the others. With many topics, Kafka creates a large number of small files, and reading and writing many small files involves extra addressing, which resembles random I/O and drags down overall performance.
Support transaction messages, sequential messages, delayed messages, message consumption failure retry, etc.
RocketMQ supports transaction messages, sequential messages, delayed messages, retry on consumption failure, and more. Its rich feature set makes it well suited to complex and changing business scenarios; a small delayed-message example is shown below.
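As an illustration of one of these features, here is a minimal sketch of sending a delayed message with the open source Java client. The producer group, nameserver address, and topic are placeholders.

```java
import java.nio.charset.StandardCharsets;

import org.apache.rocketmq.client.producer.DefaultMQProducer;
import org.apache.rocketmq.client.producer.SendResult;
import org.apache.rocketmq.common.message.Message;

public class DelayedMessageDemo {
    public static void main(String[] args) throws Exception {
        // Producer group, nameserver address, and topic are placeholders for illustration
        DefaultMQProducer producer = new DefaultMQProducer("demo_producer_group");
        producer.setNamesrvAddr("127.0.0.1:9876");
        producer.start();

        Message msg = new Message("DEMO_TOPIC", "hello".getBytes(StandardCharsets.UTF_8));
        // Delay level 3 corresponds to 10 seconds under the broker's default delay-level table
        msg.setDelayTimeLevel(3);

        SendResult result = producer.send(msg);
        System.out.printf("send status: %s%n", result.getSendStatus());
        producer.shutdown();
    }
}
```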
An active community within the Alibaba open source ecosystem
In addition, when choosing message middleware, you should also consider how active the community is and the language the source code is written in. RocketMQ is developed in Java, which is friendly to Java developers, whether for reading the source code to troubleshoot problems or for secondary development on top of MQ. Most members of the community are domestic developers, which makes it relatively easy for everyone to participate in RocketMQ's open source contribution. I also hope more people will get involved and contribute more to domestic open source projects.
Introduction and Application of SPI Mechanism
Having covered why we chose RocketMQ, let me introduce how we apply it based on the SPI mechanism. SPI stands for Service Provider Interface, a service provider discovery mechanism built into the JDK. My personal understanding is that it is interface-oriented programming that leaves an extension point for users; for example, spring.factories in Spring Boot is also an application of the SPI idea. The figure shows an application of SPI inside RocketMQ, and our client's SPI-based design was inspired by it. When RocketMQ implements ACL permission verification, it defines the AccessValidator interface, with PlainAccessValidator as the default implementation. Because permission verification may be implemented differently in different organizations, the SPI mechanism exposes the interface as an extension point for custom development: when there is a customization requirement, you only need to re-implement the AccessValidator interface, without making major changes to the source code. A minimal sketch of the underlying JDK mechanism is shown below.
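For readers unfamiliar with JDK SPI, here is a minimal ServiceLoader sketch of the pattern. DemoValidator and DemoPlainValidator are hypothetical classes used only to illustrate the idea; they are not RocketMQ's actual AccessValidator or PlainAccessValidator.

```java
import java.util.ServiceLoader;

// Hypothetical extension point, analogous in spirit to RocketMQ's AccessValidator
interface DemoValidator {
    void validate(String resource);
}

// One possible implementation; it is registered by listing its fully qualified class name
// in a file named META-INF/services/<fully qualified name of DemoValidator>
class DemoPlainValidator implements DemoValidator {
    @Override
    public void validate(String resource) {
        System.out.println("validating access to " + resource);
    }
}

public class ValidatorLoaderDemo {
    public static void main(String[] args) {
        // ServiceLoader discovers every implementation declared under META-INF/services
        ServiceLoader<DemoValidator> loader = ServiceLoader.load(DemoValidator.class);
        for (DemoValidator validator : loader) {
            validator.validate("DEMO_TOPIC");
        }
    }
}
```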
Next, let me introduce a simple model of our configuration file. In this configuration file, apart from the three items sendMsgService, consumeMsgConcurrently, and consumeMsgOrderly, everything else is native RocketMQ configuration. Those three items for sending and consuming messages are where the SPI mechanism comes in: they point to the interfaces the application must implement. Some readers may ask whether an SPI configuration file should be placed under the META-INF/services path. Here, to make configuration easier to manage, we simply keep it together with the MQ configuration file. As mentioned earlier, META-INF/services is only the default path, and changing it for manageability does not violate the idea of the SPI mechanism.
Looking at this configuration file model, the configuration items cover everything that needs to be set when using MQ. proConfigs supports all native MQ configurations, which decouples configuration from the application implementation: the application side only needs to focus on the business logic, while the producer and consumer implementations and the topics a consumer subscribes to are specified in the configuration file. The configuration file also supports multiple nameservers for multiple environments, so in more complex applications it can send messages to several RocketMQ environments and consume messages from several different environments. The consumer side provides two interfaces mainly to support RocketMQ's concurrent consumption and orderly consumption. A rough sketch of such a configuration file is shown below; after that, I will share how producers and consumers are initialized from it, starting with the core client process we abstracted.
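For concreteness, here is a rough sketch of what such a configuration file might look like. The item names sendMsgService, consumeMsgConcurrently, consumeMsgOrderly, and proConfigs come from the description above; the class names, values, and overall layout are illustrative assumptions, not the real file format.

```properties
# Nameserver of the target environment (multiple environments would repeat this structure)
namesrvAddr = 192.168.0.1:9876;192.168.0.2:9876

# SPI-style extension points: fully qualified names of the application's implementations
sendMsgService = com.example.mq.OrderMessageProducer
consumeMsgConcurrently = com.example.mq.OrderCreatedConsumer
consumeMsgOrderly = com.example.mq.OrderStateChangedConsumer

# proConfigs passes RocketMQ native options straight through to the client
proConfigs.producerGroup = demo_producer_group
proConfigs.consumerGroup = demo_consumer_group
proConfigs.consumeThreadMin = 8
```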
Client core process details
As shown in the figure, the core client process is abstracted into three phases: startup, running, and termination. The first step at startup is to load the configuration file model just introduced; since configuration is completely decoupled from the application, the configuration file must be loaded before anything else can be initialized. Before initializing producers and consumers, the business logic objects implemented by the application are created for them to use. During the running phase, the client monitors configuration file changes and dynamically adjusts producer and consumer instances accordingly; again, it is the decoupling of configuration from application that makes dynamic adjustment possible. The termination phase is simpler: shut down the producers and consumers and remove them from the container. Termination here refers to terminating producers and consumers, not the whole application; it can also happen during dynamic adjustment, so terminated instances must be removed from the container to make room for the producers and consumers initialized afterwards. A minimal sketch of this lifecycle follows, and then I will introduce how the configuration file is loaded.
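The skeleton below sketches these three phases. All type and method names are illustrative, not the real implementation; the point is only that instances live in containers keyed by name so that terminated instances can be removed during dynamic adjustment.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

/** Minimal sketch of the startup, running, and termination phases described above. */
public class MqClientLifecycle {

    // Containers keyed by logical instance name, so terminated instances can be removed
    private final Map<String, AutoCloseable> producers = new ConcurrentHashMap<String, AutoCloseable>();
    private final Map<String, AutoCloseable> consumers = new ConcurrentHashMap<String, AutoCloseable>();

    /** Startup: load the configuration model first, then initialize instances from it. */
    public void start(Map<String, String> config) throws Exception {
        // ... create the business producer/consumer objects, then the MQ instances,
        // and register them in the containers above
    }

    /** Running: react to configuration changes by rebuilding the affected instances. */
    public void onConfigChanged(Map<String, String> newConfig) throws Exception {
        shutdown();           // simplest possible strategy: rebuild everything;
        start(newConfig);     // a real implementation would diff and adjust selectively
    }

    /** Termination: close producers and consumers and remove them from the containers. */
    public void shutdown() {
        for (AutoCloseable producer : producers.values()) {
            closeQuietly(producer);
        }
        for (AutoCloseable consumer : consumers.values()) {
            closeQuietly(consumer);
        }
        producers.clear();
        consumers.clear();
    }

    private void closeQuietly(AutoCloseable target) {
        try {
            target.close();
        } catch (Exception ignored) {
            // shutdown must not bring down the rest of the application
        }
    }
}
```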
How to load configuration files
The configuration loading part is relatively straightforward; the main concern is compatibility with older projects. The minimum JDK version supported by the RocketMQ client is 1.6, so compatibility between old and new projects had to be considered when packaging the client, and our client's core package supports JDK 1.6. Early Spring projects usually keep configuration files under the resources path, so we implemented our own configuration reading and change monitoring; for details, you can refer to how the acl module reads and watches its configuration file. On top of the core package, a Spring Boot package encapsulates automatic loading of the configuration file for microservice projects, reusing Spring's facilities for both reading and monitoring. A simple polling-based sketch of file monitoring is shown below. Once the configuration file is loaded, how are the producers and consumers implemented by the application associated with RocketMQ's producers and consumers? I will share that next.
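Since the real reading and watching code is not shown here, the sketch below illustrates one simple polling approach that also works on old JDKs. The polling interval and callback wiring are assumptions for illustration only.

```java
import java.io.File;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

/** Minimal polling-based watcher: detects changes via the file's last-modified timestamp. */
public class ConfigFileWatcher {

    public static void watch(final File configFile, final Runnable onChange) {
        final long[] lastModified = {configFile.lastModified()};
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        scheduler.scheduleWithFixedDelay(new Runnable() {
            @Override
            public void run() {
                long current = configFile.lastModified();
                if (current != lastModified[0]) {
                    lastModified[0] = current;
                    onChange.run();   // e.g. reload the config and adjust producers/consumers
                }
            }
        }, 5, 5, TimeUnit.SECONDS);
    }
}
```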
How to associate producers and consumers with business implementations
First, let's look at how consumers are associated. The picture above shows the message listener of the MQ consumer, where the concrete business logic has to be plugged in. By wiring in the consumption logic implemented according to the configuration file at this point, the consumers declared in the configuration file are associated with RocketMQ consumers. The consumer interface definition is also very simple: consume a message. The type of the consumed message can be specified with generics; when the consumer is initialized, the parameter type of the concrete implementation is resolved, and the messages received from MQ are converted into that business type. The message type conversion is uniformly encapsulated by the client, and you can map the return value of the business consumption to the statuses provided by MQ as needed; the demo here is only a simple illustration. When obtaining the application's concrete consumer instance: if the consumption logic uses Spring-managed objects, then the consumption logic object itself should also be managed by Spring and obtained from the Spring context; if the consumption logic does not use Spring, you can create the instance yourself via reflection. A rough sketch of this bridging is shown below.
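The sketch below illustrates the bridging idea. Only MessageListenerConcurrently and its status codes are RocketMQ's real API; ConsumeMessage, BridgingListener, and MessageConverter are hypothetical stand-ins for the business-side interface and the client's encapsulated type conversion.

```java
import java.nio.charset.StandardCharsets;
import java.util.List;

import org.apache.rocketmq.client.consumer.listener.ConsumeConcurrentlyContext;
import org.apache.rocketmq.client.consumer.listener.ConsumeConcurrentlyStatus;
import org.apache.rocketmq.client.consumer.listener.MessageListenerConcurrently;
import org.apache.rocketmq.common.message.MessageExt;

/** Hypothetical business-side interface: consume one message of a specific type. */
interface ConsumeMessage<T> {
    boolean consume(T message);
}

/** Trivial stand-in for the client's encapsulated conversion (e.g. JSON deserialization). */
class MessageConverter {
    @SuppressWarnings("unchecked")
    static <T> T convert(String body, Class<T> type) {
        if (type == String.class) {
            return (T) body;
        }
        throw new UnsupportedOperationException("a real client would deserialize here");
    }
}

/** Bridges RocketMQ's listener to the business consumer obtained from the configuration. */
class BridgingListener<T> implements MessageListenerConcurrently {

    private final ConsumeMessage<T> businessConsumer;
    private final Class<T> messageType;   // resolved from the implementation's generic parameter

    BridgingListener(ConsumeMessage<T> businessConsumer, Class<T> messageType) {
        this.businessConsumer = businessConsumer;
        this.messageType = messageType;
    }

    @Override
    public ConsumeConcurrentlyStatus consumeMessage(List<MessageExt> msgs,
                                                    ConsumeConcurrentlyContext context) {
        for (MessageExt msg : msgs) {
            String body = new String(msg.getBody(), StandardCharsets.UTF_8);
            T converted = MessageConverter.convert(body, messageType);
            if (!businessConsumer.consume(converted)) {
                // map a business failure to RocketMQ's retry status
                return ConsumeConcurrentlyStatus.RECONSUME_LATER;
            }
        }
        return ConsumeConcurrentlyStatus.CONSUME_SUCCESS;
    }
}
```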
Unlike the consumer side, where the client obtains the logic object implemented by the application, the producer side needs to hand an initialized producer object back to the application code. So how is the producer passed to the business application?
The producer implemented in the business code needs to inherit SendMessage, through which the business code obtains the RmqProducer object. RmqProducer is an encapsulated producer that standardizes the methods for sending messages so that they conform to the company's specifications; its methods also check the topic naming convention, since standard topics follow a unified naming rule. A rough sketch of this arrangement is shown below.
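The class names SendMessage and RmqProducer come from the description above, but their contents, the method shape, and the naming rule checked here are assumptions made only to illustrate the idea.

```java
import org.apache.rocketmq.client.producer.DefaultMQProducer;
import org.apache.rocketmq.client.producer.SendResult;
import org.apache.rocketmq.common.message.Message;

/** Sketch of the encapsulated producer; the naming rule and method shape are illustrative. */
class RmqProducer {

    private final DefaultMQProducer producer;

    RmqProducer(DefaultMQProducer producer) {
        this.producer = producer;
    }

    public SendResult send(String topic, byte[] body) throws Exception {
        // enforce a (hypothetical) unified topic naming convention before sending
        if (!topic.matches("[A-Z0-9]+(_[A-Z0-9]+)*_TOPIC")) {
            throw new IllegalArgumentException("topic violates the naming convention: " + topic);
        }
        return producer.send(new Message(topic, body));
    }
}

/** Business producers inherit this class; the client injects the initialized RmqProducer. */
abstract class SendMessage {

    protected RmqProducer rmqProducer;

    public void setRmqProducer(RmqProducer rmqProducer) {
        this.rmqProducer = rmqProducer;
    }
}
```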
How to dynamically adjust producers and consumers
Before talking about dynamic adjustment itself, we should talk about the scenarios in which it is needed; without suitable use scenarios, dynamic adjustment would be little more than a showy feature. Here are four scenarios in which the configuration file changes:
When the nameserver changes, all producers and consumers need to be re-initialized. This usually happens when MQ is being migrated, or when the current MQ cluster becomes unavailable and an emergency switch of MQ is required.
When instances are added or removed, only the corresponding instances need to be started or shut down. Adding an instance usually means adding a consumer to consume a new topic; removing a consumer is usually an emergency measure when that consumer runs into an exception and must be shut down promptly to stop further losses.
For adjusting consumer threads, we made a small modification to the source code so that the application side can obtain the consumer's thread pool object and dynamically adjust its number of core threads. The typical scenario is a consumer that processes a large volume of messages and occupies too much CPU, so that higher-priority messages are not handled in time; you can first reduce that consumer's thread pool size, as in the sketch below.
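The sketch below shows the idea, assuming the modified client exposes the consumer's ThreadPoolExecutor; how that reference is obtained is not shown here and depends on the modification.

```java
import java.util.concurrent.ThreadPoolExecutor;

/** Sketch of shrinking a noisy consumer's thread pool to give CPU time back to others. */
class ConsumerThreadTuner {

    static void adjust(ThreadPoolExecutor consumeExecutor, int coreThreads) {
        // raise the maximum first if the new core size would otherwise exceed it
        if (consumeExecutor.getMaximumPoolSize() < coreThreads) {
            consumeExecutor.setMaximumPoolSize(coreThreads);
        }
        // when shrinking, excess threads exit as they become idle, so the effect is gradual
        consumeExecutor.setCorePoolSize(coreThreads);
    }
}
```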