Introduction: This article compiles practical knowledge about IoT queues, helping IoT practitioners better understand queues in IoT scenarios and discussing what a message queue suited to IoT systems looks like.
Traditional message queues (Kafka, RocketMQ, etc.) have been refined over many years and do extremely well in areas such as high performance, massive accumulation, and message reliability. In IoT scenarios, however, they face massive device-level message delivery, where the performance of a traditional message queue falls short.
In the IoT field, event messages need to travel between application servers and embedded chips: opening the cabinet of a shared power bank, sending a light-on instruction from the server to a device, the high-frequency message flows of industrial gateways, and so on. In transmitting this information, the greatest value of a queue is to keep the whole message pipeline running smoothly under uncontrollable environmental factors, because IoT devices regularly produce message floods due to failures or network jitter.
As a leader and innovator in the Internet of Things field, Alibaba Cloud AIoT continues to invest deeply in message queues. To help IoT practitioners learn more about queues in IoT scenarios, Alibaba Cloud technical expert Lu Jianwen compiled this practical knowledge about IoT queues and discusses with you what a message queue suitable for IoT systems looks like.
1. The difference between IoT queues and ordinary queues
1. Uplink and downlink isolation
In IoT scenarios we split the required queues into two kinds: an uplink queue and a downlink queue. After splitting, uplink and downlink are isolated, so that device control (for example, the cabinet must open once payment succeeds) is not affected by any problem on the uplink side. In addition, their characteristics differ greatly: uplink device messages have very high concurrency but in many scenarios modest reliability and latency requirements, while downlink messages (usually device control commands) have relatively low concurrency but require high delivery guarantees.
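To make the split concrete, below is a minimal, hypothetical sketch of keeping uplink and downlink traffic on separate queues with different delivery settings; the queue names, concurrency figures, and QoS levels are illustrative assumptions, not Alibaba Cloud's actual configuration.

```java
// UplinkDownlinkProfiles.java
// Hypothetical sketch: uplink and downlink traffic live on separate queues
// with very different concurrency and reliability settings.
public class UplinkDownlinkProfiles {

    enum Direction { UPLINK, DOWNLINK }

    // qos 0 = at-most-once (cheap, loss-tolerant), qos 1 = at-least-once.
    record QueueProfile(String queueName, int maxInFlight, int qos) { }

    // Uplink: huge concurrency, losing a single telemetry point is tolerable.
    static final QueueProfile UPLINK =
            new QueueProfile("iot.uplink.telemetry", 100_000, 0);

    // Downlink: low concurrency, but a device command must arrive.
    static final QueueProfile DOWNLINK =
            new QueueProfile("iot.downlink.command", 1_000, 1);

    static QueueProfile route(Direction d) {
        return d == Direction.UPLINK ? UPLINK : DOWNLINK;
    }

    public static void main(String[] args) {
        System.out.println(route(Direction.DOWNLINK));
    }
}
```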
2. Support for massive device-level topics
The core appeal of a traditional queue is that performance is unaffected no matter how much backlog accumulates. But when Kafka has too many topics, the original advantage of sequential file writes degenerates on the broker into random writes and is lost; in addition, having ZooKeeper coordinate that many topics has its own limits. So these queues are usually fronted with an external proxy bridge: the entrance exposes many per-device topics, which are bridged and mapped onto a small number of actual Kafka topics. This scheme works, but it cannot provide isolation; it treats the symptom rather than the root cause.
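The proxy-bridge idea can be sketched as follows: a huge number of per-device topics is hashed onto a small, fixed set of physical Kafka topics. The names and bucket count are hypothetical; the sketch also shows why isolation is lost, since devices that hash to the same physical topic share one backlog.

```java
// TopicBridge.java
// Hypothetical sketch of the proxy-bridge approach: millions of per-device
// topics are hashed onto a small set of physical Kafka topics.
import java.util.Objects;

public class TopicBridge {

    private final int physicalTopicCount;

    public TopicBridge(int physicalTopicCount) {
        this.physicalTopicCount = physicalTopicCount;
    }

    /** Map a logical per-device topic to one of N physical topics. */
    public String physicalTopicFor(String deviceTopic) {
        int bucket = Math.floorMod(Objects.hashCode(deviceTopic), physicalTopicCount);
        return "iot_bridge_" + bucket;
    }

    public static void main(String[] args) {
        TopicBridge bridge = new TopicBridge(64);
        // Two unrelated devices may land on the same physical topic,
        // so a backlog from device A can delay device B's messages.
        System.out.println(bridge.physicalTopicFor("/deviceA/up"));
        System.out.println(bridge.physicalTopicFor("/deviceB/up"));
    }
}
```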
Comparing Figure 1 and Figure 2, the goal is obvious: congestion in one queue should affect other devices as little as possible. What we need is to "isolate a large number of topics while affecting overall performance as little as possible", so that a backlog on device A's topic does not affect device B.
3. Messages generated in real time are sent first
To give an example: the queue of an express-locker service backs up, and the user standing in front of the locker "at this moment" taps the open button on their phone repeatedly but the locker will not open (even though the backend has already recovered). The problem is that there are hundreds of thousands of messages in the queue, and new messages have to wait in line behind the old ones regardless of whether those old messages are still useful. Therefore messages generated in real time should be sent first, and accumulated messages should enter a degraded mode.
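A minimal sketch of this "real-time first" behaviour: fresh messages go into a fast lane that is always drained before the backlog lane, and expired commands (such as a locker-open request issued long before the backend recovered) can be skipped. The time thresholds and in-memory queues are illustrative assumptions.

```java
// RealTimeFirstDispatcher.java
// Hypothetical sketch: fresh messages are delivered before the backlog,
// and stale commands in the backlog can be dropped.
import java.util.ArrayDeque;
import java.util.Deque;

public class RealTimeFirstDispatcher {

    record Msg(String payload, long createdAtMillis) { }

    private static final long FRESH_WINDOW_MS = 5_000;   // "real-time" threshold
    private static final long EXPIRY_MS = 60_000;        // stale commands are useless

    private final Deque<Msg> realtimeLane = new ArrayDeque<>();
    private final Deque<Msg> backlogLane = new ArrayDeque<>();

    public void offer(Msg m) {
        long age = System.currentTimeMillis() - m.createdAtMillis();
        if (age <= FRESH_WINDOW_MS) {
            realtimeLane.addLast(m);   // delivered first
        } else {
            backlogLane.addLast(m);    // degraded: delivered only when idle
        }
    }

    /** Next message to deliver, skipping backlog entries that have expired. */
    public Msg poll() {
        if (!realtimeLane.isEmpty()) {
            return realtimeLane.pollFirst();
        }
        while (!backlogLane.isEmpty()) {
            Msg m = backlogLane.pollFirst();
            if (System.currentTimeMillis() - m.createdAtMillis() < EXPIRY_MS) {
                return m;
            }
            // else: drop the expired command (e.g. an old locker-open request).
        }
        return null;
    }

    public static void main(String[] args) {
        RealTimeFirstDispatcher d = new RealTimeFirstDispatcher();
        long now = System.currentTimeMillis();
        d.offer(new Msg("old open-locker command", now - 120_000)); // expired
        d.offer(new Msg("fresh open-locker command", now));
        System.out.println(d.poll().payload()); // fresh message is delivered first
    }
}
```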
2. The birth of the IoT message queue
1. The design ideas of the IoT queue
The design goal is to create a queue gateway that supports uplink and downlink isolation, real-time priority, and massive topics. The design principles are as follows:
- Fully follow the open-source ecosystem; complement and stay compatible with traditional queues
- Order-preserving yet degradable: real-time messages take priority and accumulated messages are degraded; only real-time messages remain relatively ordered
- Massive topics with multi-tenant isolation
- Separation of connection, computation, and storage
2. Message mode
The figure is only a fragment, but from this mode the difference in mechanism is clear: neither approach is wrong; they simply start from different premises.
3. Separation of connection, computation and storage
Clients do not connect to the broker directly; they connect through the queue gateway proxy. The broker only performs forwarding and distribution, is stateless, and scales horizontally; storage is handed to a NoSQL DB for high-throughput writes.
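A minimal sketch of this separation: the broker keeps no per-topic state, only forwards, and delegates persistence to a pluggable store (a NoSQL DB in practice). The interfaces here are hypothetical stand-ins, not a real product API.

```java
// StatelessBroker.java
// Hypothetical sketch: the broker only relays messages and delegates
// persistence to an external store, so it stays stateless and scales out.
import java.util.function.Consumer;

public class StatelessBroker {

    /** Persistence belongs to someone else (e.g. a NoSQL DB). */
    interface BacklogStore {
        void append(String topic, byte[] payload);
    }

    private final BacklogStore store;

    public StatelessBroker(BacklogStore store) {
        this.store = store;
    }

    /**
     * Forward a message: push to the online subscriber if there is one,
     * otherwise write it straight through to the backlog store.
     */
    public void forward(String topic, byte[] payload, Consumer<byte[]> onlineSubscriber) {
        if (onlineSubscriber != null) {
            onlineSubscriber.accept(payload);   // pure relay, no local state
        } else {
            store.append(topic, payload);       // high-throughput write path
        }
    }

    public static void main(String[] args) {
        StatelessBroker broker = new StatelessBroker(
                (topic, payload) -> System.out.println("persisted to " + topic));
        broker.forward("/deviceA/down", "open".getBytes(), null); // offline: stored
    }
}
```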
4. Message strategy: push-pull combination
This is probably one of the core difficulties of the queue. The difference from a traditional queue is that we are designing a platform model, and dedicated resources per tenant are too expensive. The problem is that consumers are not under our control, so a combined mode is used: accumulated messages are pulled only when the consumer is online, the pulling is done by the AMQP queue gateway, and the user-facing interface is always a push into the onMessage callback (see the sketch after this list).
- The broker does not let consumers connect directly; instead the queue gateway is stripped out as a separate layer, which is more flexible. For some users our queue can even be switched to ONS, Kafka, or other implementations. By contrast, Kafka and RocketMQ assign a broker access address to the client when it connects.
- Real-time messages are first pushed by the broker to the consumer; only on failure do they fall back into the queue. This is treated as one complete event: if it does not complete, the producer's send is not committed.
- Asynchronous ACK
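As referenced above, here is a minimal sketch of the push-pull combination: real-time messages are pushed, the backlog is pulled from storage only once the consumer is online, and in both cases the application only ever receives an onMessage callback. All interfaces, class names, and the batch size are illustrative assumptions, not the actual product API.

```java
// PushPullGateway.java
// Hypothetical sketch of push-pull: push for real-time, pull for backlog,
// one onMessage surface for the application.
import java.util.List;

public class PushPullGateway {

    interface MessageHandler {          // the only surface the user sees
        void onMessage(byte[] payload);
    }

    interface BacklogStore {            // accumulated messages live here
        List<byte[]> pullBatch(String consumerGroup, int maxBatch);
    }

    private final BacklogStore backlog;
    private volatile MessageHandler handler;   // non-null means consumer is online

    public PushPullGateway(BacklogStore backlog) {
        this.backlog = backlog;
    }

    /** Called when the consumer connects; drain some backlog, then go live. */
    public void onConsumerOnline(String group, MessageHandler h) {
        this.handler = h;
        for (byte[] old : backlog.pullBatch(group, 100)) {
            h.onMessage(old);                  // pulled, but still delivered as a push
        }
    }

    /** Called by the broker for a real-time message; false means "fall to the queue". */
    public boolean pushRealtime(byte[] payload) {
        MessageHandler h = handler;
        if (h == null) {
            return false;                      // consumer offline: caller persists it
        }
        h.onMessage(payload);
        return true;
    }

    public static void main(String[] args) {
        PushPullGateway gw = new PushPullGateway((group, max) -> List.of("backlog-1".getBytes()));
        gw.onConsumerOnline("demo-group", payload -> System.out.println(new String(payload)));
        gw.pushRealtime("realtime-1".getBytes());
    }
}
```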
5. Linear scaling: the offline message part
The real-time part of the traffic uses push and basically does not become a bottleneck; when consumption cannot keep up, it enters accumulation mode. Since the underlying storage dependency already solves scaling of the core storage for us, the main remaining problem is how to eliminate write hotspots and consumption hotspots so that the broker can be completely stateless.
3. A thought: how to solve the massive topic problem?
Facing the "large number" problem, the usual answers are partitioning, unitization, grouping, and other forms of isolation and splitting. Here we discuss how to support as many topics as possible in single-instance mode; claiming that any number of topics is 100% fine would be unrealistic.
Since the broker and storage have already been separated, the broker no longer keeps any relationship with topics, nor does it generate any per-topic data; all the broker does is write and distribute.
- Massive topics, each with a limited number of subscriptions: the relationship between a topic and its subscribers is kept in a Redis cache or a local cache, and for MQTT topic matching there is a topic-tree algorithm, of which HiveMQ has an implementation (see the sketch after this list).
- Massive subscriptions to a single topic: this scenario is really multicast or broadcast. We do not do this on the queue itself; instead a broadcast component is encapsulated in the upper layer to coordinate tasks and send in batches.
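A minimal sketch of MQTT-style topic filter matching, as referenced in the first bullet above ('+' matches exactly one level, '#' matches all remaining levels). Production brokers such as HiveMQ keep subscriptions in a topic tree (trie) so that a topic can be matched against huge numbers of filters without scanning them all; this flat matcher only illustrates the matching rules themselves.

```java
// MqttTopicMatcher.java
// Illustrative sketch of MQTT wildcard matching rules ('+' and '#').
public class MqttTopicMatcher {

    /** Returns true if the concrete topic matches the subscription filter. */
    public static boolean matches(String filter, String topic) {
        String[] f = filter.split("/");
        String[] t = topic.split("/");
        int i = 0;
        for (; i < f.length; i++) {
            if (f[i].equals("#")) {
                return true;                 // '#' swallows everything that remains
            }
            if (i >= t.length) {
                return false;                // filter is longer than the topic
            }
            if (!f[i].equals("+") && !f[i].equals(t[i])) {
                return false;                // literal level mismatch
            }
        }
        return i == t.length;                // both consumed completely
    }

    public static void main(String[] args) {
        System.out.println(matches("device/+/up", "device/A123/up"));    // true
        System.out.println(matches("device/#", "device/A123/up/event")); // true
        System.out.println(matches("device/+/up", "device/A123/down"));  // false
    }
}
```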
4. Alibaba Cloud AIoT message queue
At present the Alibaba Cloud AIoT queue is also called server-side subscription, meaning that users subscribe to their device messages from their own servers. To reduce access cost, users can connect using the AMQP 1.0 protocol, which is in line with the open-source ecosystem. It is compatible with both traditional queues and the new queue, leaving the choice to users according to their scenario: Kafka, MQ, the IoT queue, or even combined modes, such as routing messages to different queues according to message feature rules.
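For illustration, here is a minimal sketch of consuming such a server-side subscription over AMQP 1.0 with the Apache Qpid JMS client (javax.jms API); the endpoint, queue name, and credentials are placeholders, and the actual connection parameters and authentication scheme should be taken from the Alibaba Cloud IoT Platform documentation.

```java
// AmqpServerSubscription.java
// Sketch under assumptions: Apache Qpid JMS client on the classpath,
// placeholder endpoint/credentials/queue name.
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageConsumer;
import javax.jms.Queue;
import javax.jms.Session;
import org.apache.qpid.jms.JmsConnectionFactory;

public class AmqpServerSubscription {

    public static void main(String[] args) throws Exception {
        // Placeholder endpoint and credentials (assumptions, not real values).
        ConnectionFactory factory =
                new JmsConnectionFactory("user", "password", "amqps://example.endpoint:5671");

        Connection connection = factory.createConnection();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        Queue queue = session.createQueue("example-consumer-group");   // placeholder name

        MessageConsumer consumer = session.createConsumer(queue);
        // The application only ever sees pushed messages via onMessage.
        consumer.setMessageListener(message -> {
            try {
                System.out.println("received: " + message.getJMSMessageID());
            } catch (javax.jms.JMSException e) {
                e.printStackTrace();
            }
        });

        connection.start();   // begin delivery
        Thread.sleep(60_000); // keep the process alive briefly for the demo
        connection.close();
    }
}
```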
In its AIoT scenario queue practice, Alibaba Cloud not only integrates the existing MQ and Kafka queues but also adds its own real-time-priority queue implementation, together with a queue gateway proxy, so that users can choose either an ordinary message queue or the lightweight IoT message queue.