1. Brief description
The words "reaching strategy" sound rather abstract, and they cannot be understood directly at first. Briefly described, the core of the reach strategy is the mission. First, configure tasks. There are policies under tasks and actions under policies. Actions can send push notifications, issue coupons, send message boxes, and send text messages. Then wait until the task execution time and execute all the actions under the task.
The reservoir is how the reach strategy platform upgrades its algorithm-driven push capability, providing the technical foundation for the algorithm to push data at the scale of hundreds of millions of records.
2. The Dewu reach strategy platform
The Dewu reach strategy platform is a system that selects a group of users based on their many characteristics and then, at a specified time, executes actions such as push notifications, SMS, and message-box messages for those users. It is mainly composed of tasks, strategies, actions, the reservoir, and the copy pool. It is introduced below through the "task structure diagram" and the "task execution structure diagram".
2.1 Task structure diagram
From this diagram you can see that tasks contain strategies and strategies contain actions; the actual push is executed by the actions under a strategy.
Above is the corresponding App screenshot when the push is actually sent.
2.2 Task execution structure diagram
3. Algorithm personalization: from millions to hundreds of millions
The reach strategy platform provides a powerful channel for communicating with Dewu users. Through A/B experiments, copy competitions, backflow data analysis, push timing, and other dimensions, the platform has determined the best way to communicate with users. To pursue a better user experience, the platform has fully introduced algorithms, and it uses the reservoir to support the algorithm's need to process hundreds of millions of records. Through our task execution model and the reservoir feature, algorithm personalization has evolved from millions of records to hundreds of millions.
4. The reach strategy platform's reservoir
4.1 Why build a reservoir
As described above for the platform's main business, among the actions there is an algorithm push action. The content of an algorithm push comes from the algorithm: in real time, each user's push data differs, but some users' push data will be identical within a given time window.
Simply put, the reservoir temporarily stores the user data awaiting execution in a pool. Once the data in the reservoir meets certain conditions, the user data matching a given feature is taken out and processed.
For example, in the algorithm-driven price-drop notification scenario, the algorithm's push volume in version 1.0 was still relatively small. When users streamed in from the platform's crowd selection, we sent each push in real time, because within a batch most users' push data differed, so each user required a separate call. If we kept doing this and the algorithm pushed to 100 million users within an hour, that would mean 100 million calls: the messaging platform could not handle it, and the pressure on the reach strategy system would also rise sharply.
Calling the messaging platform 100 million times in one hour is not acceptable. To solve this, we considered the characteristics of our business: for example, a price-drop push for the same product within 15 minutes is likely to be sent to dozens of users. We can therefore build a reservoir that temporarily stores incoming users and then executes them in batches grouped by the id of the discounted product they belong to. Summed up in one sentence: when a product's accumulated user data exceeds a quantity threshold, or has waited longer than a time threshold, the corresponding rows are executed. With that, the reservoir feature was put on the agenda.
In short, what the reservoir does is temporarily store real-time data streams that cannot be batched on arrival, group all the stored data by a certain feature, and finally execute the grouped data in batches.
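A minimal in-memory sketch of this idea follows (the production reservoir uses HBase, as described below; the class, method names, and thresholds here are assumptions for illustration only):

```java
import java.util.*;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative in-memory reservoir: buffer users per feature (e.g. spuId),
// flush a group when it reaches a count threshold or a time threshold.
class Reservoir {
    private static final int COUNT_THRESHOLD = 100;            // illustrative value
    private static final long TIME_THRESHOLD_MS = 15 * 60_000; // e.g. 15 minutes

    private record Group(List<String> userIds, long firstArrivalMs) {}
    private final Map<String, Group> groups = new ConcurrentHashMap<>();

    // Called for each incoming (feature, user) pair from the real-time stream.
    void offer(String feature, String userId) {
        Group g = groups.computeIfAbsent(feature,
                k -> new Group(Collections.synchronizedList(new ArrayList<>()),
                               System.currentTimeMillis()));
        g.userIds().add(userId);
        if (g.userIds().size() >= COUNT_THRESHOLD) flush(feature);
    }

    // A background timer would also call this for groups older than TIME_THRESHOLD_MS.
    void flush(String feature) {
        Group g = groups.remove(feature);
        if (g != null) sendBatch(feature, g.userIds());
    }

    private void sendBatch(String feature, List<String> users) {
        // One batched call to the messaging platform instead of |users| single calls.
        System.out.printf("batch push for %s to %d users%n", feature, users.size());
    }
}
```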
Reservoir business structure diagram:
4.2 Why choose HBase as the storage layer of the reservoir
- Mass storage and persistence
HBase is suitable for storing PB-scale data, and the storage medium can be inexpensive commodity machines. Its persistence meets our requirement of no data loss, and it can return data within tens to hundreds of milliseconds. HBase also scales horizontally well: a single table can store billions of rows, which meets our daily need to process hundreds of millions of records.
- Column-family storage
The "columnar storage" here actually refers to column-family (ColumnFamily) storage: HBase stores data by column family. A large number of columns can be dynamically added under a column family, meeting our need for flexible formats and dynamic expansion of stored data.
- Easy to scale horizontally
HBase's scalability is reflected at two levels: horizontal expansion of the upper layer (RegionServer) to improve processing capacity, and horizontal expansion of the storage layer (HDFS) to increase capacity for large data volumes. Adding RegionServer machines scales the service horizontally, allowing HBase to host more Regions.
Note: a RegionServer's role is to manage Regions and serve business requests. The storage layer scales by adding DataNode machines horizontally, increasing HBase's storage capacity and improving the read/write throughput of the back-end storage.
- High concurrency, high read and write performance
In most current architectures that use HBase, single-IO latency is acceptable, and under high concurrency HBase's single-IO latency does not degrade seriously, so HBase is naturally suited to building high-concurrency, low-latency services. It satisfies our need to read and write hundreds of millions of records within a few hours.
- Sparsity
Sparsity mainly refers to the flexibility of HBase columns: a column family can hold as many columns as needed, and empty column values occupy no storage space.
- rowKey prefix scan
HBase supports rowKey prefix scans to pull data in batches, with pre-fetched data cached locally. Prefix scans on the row key cover the reservoir's business query requirements, and pulling hundreds of thousands of rows per second under a large data volume meets our query performance requirements.
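For reference, a rowKey prefix scan with the standard HBase Java client looks roughly like this (the table name, prefix value, and caching size are placeholders, not the platform's actual values):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.*;
import org.apache.hadoop.hbase.util.Bytes;

public class PrefixScanExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Table table = conn.getTable(TableName.valueOf("reservoir"))) { // placeholder name
            Scan scan = new Scan();
            // Scan all rows whose rowKey starts with the given hash prefix.
            scan.setRowPrefixFilter(Bytes.toBytes("42#"));
            scan.setCaching(500); // pre-fetch rows in batches and cache them client-side
            try (ResultScanner scanner = table.getScanner(scan)) {
                for (Result r : scanner) {
                    System.out.println(Bytes.toString(r.getRow()));
                }
            }
        }
    }
}
```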
4.3 A brief introduction to HBase
4.3.1 HBase Architecture Diagram
4.3.2 How the reservoir uses HBase
HBase is a natively distributed database. An HBase Table is analogous to a MySQL table, a Region is analogous to a shard in MySQL database-and-table sharding, and the rowKey is analogous to a MySQL table's primary key. When a single Region holds too much data, HBase splits it into multiple Regions automatically. When creating an HBase table, you can specify pre-split points: for example, splitting the rowKey space at "1", "2", "3", "4", and "5" yields 6 Region partitions: (< "1"), ["1", "2"), ["2", "3"), ["3", "4"), ["4", "5"), and (>= "5"). The purpose of pre-splitting is to distribute the data and traffic that HBase stores evenly across Regions.
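A sketch of creating such a pre-split table with the HBase Admin API, matching the six-Region example above (the table name and column family are placeholders):

```java
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.*;
import org.apache.hadoop.hbase.util.Bytes;

public class PreSplitExample {
    public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
            // Split points "1".."5" produce 6 Regions:
            // (<"1"), ["1","2"), ["2","3"), ["3","4"), ["4","5"), (>="5")
            byte[][] splitKeys = {
                Bytes.toBytes("1"), Bytes.toBytes("2"), Bytes.toBytes("3"),
                Bytes.toBytes("4"), Bytes.toBytes("5")
            };
            TableDescriptor desc = TableDescriptorBuilder
                .newBuilder(TableName.valueOf("reservoir")) // placeholder name
                .setColumnFamily(ColumnFamilyDescriptorBuilder.of("cf"))
                .build();
            admin.createTable(desc, splitKeys);
        }
    }
}
```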
4.4 The reservoir's technical architecture
Example: algorithm-driven price-drop products.
- Producer
After receiving price-drop product data, the producer builds the rowKey as rowKey = hash(spuId + separator + templateCode) + separator + spuId + separator + templateCode + separator + userId (a sketch of this construction appears after this list). Since HBase sorts rows by rowKey, this design scatters different groups of data discretely across HBase partitions while keeping data for the same pushed spuId in one contiguous range, so sequential reads improve read performance by an order of magnitude. The hash prefix hash(spuId + separator + templateCode) disperses data across all Regions, solving the hot-data problem caused by rows sharing the same spuId prefix; after hashing, the overall rowKey distribution is also more discrete. An HBase data converter compresses the data to be stored as much as possible, reducing space usage and keeping HBase performant.
- HBase data pool
The HBase data pool's responsibility is to store data, dividing it evenly into the pre-split partitions according to the distribution and range of rowKey values. The partitions are in turn distributed evenly across RegionServers and sorted by rowKey.
- Consumer
The consumer is a resident thread on the server. The data allocator tries to acquire processing rights for a given partition's data; the data fetcher then reads data from HBase in batches by rowKey range. Each batch fetched is push data that can be sent together: data that meets the time threshold or the quantity threshold flows into the data parser. The data parser converts HBase data into business data, which then passes through the rate limiter and the action executor. Finally, the processed data is deleted from HBase. The data allocator then tries to acquire processing rights for the next partition's data and repeats the steps above.
- rowKey and pre-partitioning practice in the reach strategy reservoir
The reach strategy's HBase cluster has n instance machines. To assign m partitions to each machine, m × n pre-split partitions are configured, with split intervals (< 1), [1, 2), [2, 3), [3, 4), [4, 5) ... [m × n - 1, ∞). The hash prefix of the rowKey takes the values 0, 1, 2, 3, 4, 5 ... m × n - 1, so data and subsequent HBase request traffic are evenly distributed across every machine and every partition. Enough consumers are configured that each partition has a consumer processing its data in real time. Of course, both the partitions and the number of consumers can be scaled horizontally and dynamically to meet future needs for expansion and greater processing capacity (see the sketch below).
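Putting the rowKey design and pre-partitioning together, here is a hedged sketch of the rowKey construction described above (the separator, hash function, fixed-width prefix format, and partition count are assumptions, not the platform's actual choices):

```java
// Illustrative rowKey construction for the price-drop scenario:
// hash(spuId + SEP + templateCode) picks one of m*n buckets, so rows with the
// same spuId + templateCode land in the same Region, sorted contiguously.
public class RowKeyBuilder {
    private static final String SEP = "#";    // assumed separator
    private static final int PARTITIONS = 12; // assumed m * n, e.g. 4 partitions x 3 machines

    static String buildRowKey(long spuId, String templateCode, long userId) {
        int bucket = Math.floorMod((spuId + SEP + templateCode).hashCode(), PARTITIONS);
        // Fixed-width prefix keeps lexicographic rowKey order aligned with bucket number.
        return String.format("%02d", bucket) + SEP + spuId + SEP + templateCode + SEP + userId;
    }

    public static void main(String[] args) {
        // Rows for the same product + template share a prefix -> one contiguous scan range.
        System.out.println(buildRowKey(123456L, "PRICE_DROP", 42L));
        System.out.println(buildRowKey(123456L, "PRICE_DROP", 43L));
    }
}
```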
5. Conclusion
Overall, the reach strategy platform is about tasks, strategies, and actions: when a task's execution time arrives, the selected crowd is fetched, and for each person in the crowd the actions under the task are executed according to the task's strategies. Everything that follows enriches and extends this core process with various features and details.
The reservoir feature has delivered satisfactory results since launch: algorithm push data at the scale of 100 million records within 2 hours per day now runs stably. Subsequent algorithm pushes will keep adding new scenarios and expanding the number of pushed users, and judging from the current system load, the reservoir can foreseeably support them stably.
References: https://hbase.apache.org/book.html
Text / Wang Pengliang
@Dewu Tech public account