
My boss asked me to study Plausible's statistics for 7 days

kumfo

Before getting into the article, some background

When I was designing a statistics feature recently, my first thought was to use MySQL for storage. But if I stored fine-grained data in MySQL and then ran statistics over it, the data volume would be very large. So my next idea was to keep the raw data in MySQL and use scheduled scripts or queues to push pre-aggregated results into Redis according to the specific requirements, so that queries would be very fast.

Later, I remembered that we already use Plausible to record data, and I thought about using Plausible's events to record custom goals. That turned out to be feasible: after simulating 5 million records it performed well and felt great. Who knew that as the data volume grew, 20 million records would blow up the service. Along the way I ran into some other problems and submitted a few suggestions upstream, and as a last resort I had to learn Elixir, but that is a story for another day. I mention Plausible here mainly as a warning to those who come after: if you are also thinking of building custom statistics on top of Plausible, you are better off giving up and using ClickHouse directly.

In the end I had no better option, so I examined Plausible's data storage structure and found that the root cause was the storage type of the event table; I had misjudged where the performance cost was. So I started studying ClickHouse to see whether my own implementation could achieve good performance. If this article turns out to be useful to you, you are welcome to discuss it, and if I have misunderstood anything, please correct me.

Row database vs column database structure comparison

Below I use a table from the official documentation to introduce the ideas and record what I learned about ClickHouse during this period.

row database

Row  WatchID      JavaEnable  Title               GoodEvent  EventTime
#0   89354350662  1           Investor Relations  1          2016-05-18 05:19:20
#1   90329509958  0           Contact us          1          2016-05-18 08:10:20
#2   89953706054  1           Mission             1          2016-05-18 07:38:00
#N   …            …           …                   …          …

columnar database

Row:         #0                   #1                   #2                   … #N
WatchID:     89354350662          90329509958          89953706054          …
JavaEnable:  1                    0                    1                    …
Title:       Investor Relations   Contact us           Mission              …
GoodEvent:   1                    1                    1                    …
EventTime:   2016-05-18 05:19:20  2016-05-18 08:10:20  2016-05-18 07:38:00  …

The tables alone can be hard to grasp at first. A row database looks just like what we see in a spreadsheet, but the data is stored row by row, while a columnar database stores the data column by column. The following illustrates the file storage structure:

That is to say, in a row database a row of data forms one complete file (the actual underlying storage is not literally like this; it is just a convenient way to explain the idea). This file contains every field of the row, so reading it gives you the complete data for that row.

For columnar storage, each column of data is a file. As shown above, all WatchID values are stored in the WatchID file.

I also drew a mind map to compare the two structures. (figure: data table comparison)

The following uses JSON to describe the difference between the two data structures:

  1. Row database storage structure

     [ { "id": 1, "title": "t1", "content": "c1" }, { "id": 2, "title": "t2", "content": "c2" }, { "id": 3, "title": "t3", "content": "c3" }, { "id": 4, "title": "t4", "content": "c4" } ]
  2. Columnar database storage structure

     { "id": [ 1, 2, 3, 4 ], "title": [ "t1", "t2", "t3", "t4" ], "content": [ "c1", "c2", "c3", "c4" ] }

Main application scenarios of columnar database

The main application scenario of a columnar database is OLAP (online analytical processing); that is, it is mainly used for data analysis and usually not for storing day-to-day business data. Row databases are mainly used to store business data, whose characteristics are that individual records are updated frequently and data consistency requirements are very strict. In OLAP, data rarely changes, so generally speaking the main goals of a columnar database are faster writes and faster queries and statistics.
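To make the OLAP access pattern concrete, here is a sketch of a typical analytical query against a hypothetical page_events table (the table and column names are my own illustration, not Plausible's actual schema). The query touches only two columns but scans a huge number of rows, which is exactly what a column store is built for:

```sql
-- Hypothetical events table: only event_time and user_id are read,
-- so a column store only has to open those two column files.
SELECT
    toDate(event_time) AS day,
    count()            AS views,
    uniq(user_id)      AS visitors  -- uniq() is ClickHouse's approximate distinct count
FROM page_events
WHERE event_time >= '2022-06-01'
GROUP BY day
ORDER BY day;
```

A row store answering the same query would have to read every column of every matching row from disk, even the ones the query never uses.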

Below I list some key characteristics of OLAP scenarios:

  • The vast majority of requests are reads.
  • Data is updated in fairly large batches (> 1000 rows) rather than one row at a time, or not updated at all.
  • Data that has been added to the database is not modified afterwards.
  • Reads fetch quite a large number of rows, but only a small subset of the columns.
  • Tables are "wide", i.e. each table contains a large number of columns.
  • Queries are relatively rare (usually hundreds of queries per server per second, or fewer).
  • For simple queries, a latency of around 50 ms is acceptable.
  • Column values are fairly small: numbers and short strings (e.g. 60 bytes per URL).
  • High throughput is required when processing a single query (up to billions of rows per second per server).
  • Transactions are not necessary.
  • Low requirements for data consistency.
  • Each query involves one large table; all the other tables are small.
  • The query result is significantly smaller than the source data. In other words, the data is filtered or aggregated so the result fits in the RAM of a single server.

MergeTree (merge tree) data table engine

Main features

MergeTree is ClickHouse's most basic data storage engine, and it is the one used most of the time.

Engines in the MergeTree family are designed for inserting extremely large amounts of data into a table. Data is written quickly, part by part, and the parts are merged in the background according to certain rules. This strategy is far more efficient than continually modifying (rewriting) the stored data on insert.

Main features:

  • Stored data is sorted by primary key.

    This enables you to create a small sparse index to speed up data retrieval.

  • Partitioning can be used if a partition key is specified.

    Some operations on partitions in ClickHouse are faster than ordinary operations over the same dataset and the same result set. When the partition key is specified in a query, ClickHouse automatically prunes to the matching partitions, which also effectively improves query performance.

    Generally speaking, partitioning is based on time. On disk, a partition looks like a folder, which avoids storing a huge amount of data in a single file.

  • Data copies are supported.

    The tables of the ReplicatedMergeTree series provide data replication.

  • Data sampling is supported.

    If desired, you can set a sampling method for the table.
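Putting the features above together, a minimal MergeTree table definition might look like the following sketch (the table and column names are illustrative, not taken from Plausible):

```sql
CREATE TABLE page_events
(
    event_time DateTime,
    user_id    UInt64,
    url        String
)
ENGINE = MergeTree
PARTITION BY toYYYYMM(event_time)  -- time-based partitioning: one folder per month on disk
ORDER BY (event_time, user_id);    -- storage sort order; backs the sparse primary index
```

The ORDER BY clause determines both the on-disk sort order and the sparse index, which is why range queries over the sorting key are fast.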

Structure description

(figure: table storage structure)

During this period my study was driven by the goal of reproducing Plausible's statistics for our business, so I did not dig deeply into the other table engines. They also all inherit from the MergeTree engine; if you are interested, you can read up on them. Here is a brief list of the other engines:

  • VersionedCollapsingMergeTree

    • Allows fast writes to changing object state.
    • Remove old object state in the background. This significantly reduces storage volume.
  • GraphiteMergeTree

    • This engine is used for thinning (rollup) and aggregation of Graphite data. It may be useful to developers who want to use ClickHouse to store Graphite data.
    • If you don't need to aggregate Graphite data, you can use any ClickHouse table engine; but if you do, use the GraphiteMergeTree engine. It reduces storage space and improves the query efficiency of Graphite data.
  • AggregatingMergeTree

    • The engine inherits from MergeTree and changes the merging logic for data parts. ClickHouse replaces all rows with the same primary key (or, more precisely, the same sorting key) within a data part with a single row that stores the states of a series of aggregate functions.
  • CollapsingMergeTree

    • The engine inherits from MergeTree and adds the logic of collapsing rows to the data block merging algorithm.
  • ReplacingMergeTree

    • This engine differs from MergeTree in that it removes duplicate entries with the same sorting key value.
  • SummingMergeTree

    • When merging data parts of SummingMergeTree tables, ClickHouse replaces all rows with the same primary key with a single row containing summed values for the columns with numeric data types. If the primary key is composed so that a single key value corresponds to a large number of rows, this significantly reduces storage space and speeds up data queries.
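As an example of how one of these variants changes the merge logic, here is a sketch of a SummingMergeTree table for pre-aggregated daily counters (all names are illustrative). Rows sharing the sorting key are summed during background merges; since merges run asynchronously, queries should still aggregate explicitly:

```sql
CREATE TABLE daily_views
(
    day   Date,
    url   String,
    views UInt64
)
ENGINE = SummingMergeTree(views)  -- the views column is summed on merge
ORDER BY (day, url);

INSERT INTO daily_views VALUES ('2022-06-01', '/home', 1);
INSERT INTO daily_views VALUES ('2022-06-01', '/home', 1);

-- Merges happen in the background at an unspecified time,
-- so aggregate at query time as well to get correct totals:
SELECT day, url, sum(views) AS views
FROM daily_views
GROUP BY day, url;
```

This pattern trades exact row-level history for much smaller storage and faster statistical queries, which fits the counter-style data a statistics feature produces.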

Some points to note about the ClickHouse database

  • Not suitable for handling transactions; there is no full transaction support.
  • No high-frequency, low-latency modification or deletion of existing data: data can only be deleted or modified in batches, and updates are applied asynchronously.
  • Most data queries use standard SQL statements, so if you are used to a row-based database such as MySQL, the switching cost is low.
  • If you define a sorting key or an index, data is stored in that order, and range queries are very fast.
  • ClickHouse provides a variety of ways to speed up queries by sacrificing some data accuracy (I haven't used them yet, so I don't know the application scenarios for now).
  • Sparse indexes make ClickHouse unsuitable for point queries that retrieve a single row by its key.
  • Rich data types: Map(key, value), Tuple(T1, T2, ...), Array(T), Geo, Nested (nested data structures, similar to nested tables), etc.
  • For GROUP BY on DateTime columns, the toStartOf* family of conversion functions, such as toStartOfDay, can be dozens of times faster than converting with a generic DATE function.
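For that last point, the difference is only which function the GROUP BY key goes through; a sketch against a hypothetical page_events table:

```sql
-- Grouping with the specialized helper (returns a DateTime truncated to the day):
SELECT toStartOfDay(event_time) AS day, count() AS views
FROM page_events
GROUP BY day
ORDER BY day;

-- Grouping via a generic date conversion (returns a Date);
-- equivalent buckets, but in my experience noticeably slower:
SELECT toDate(event_time) AS day, count() AS views
FROM page_events
GROUP BY day
ORDER BY day;
```

There are variants for other granularities as well, such as toStartOfHour, toStartOfWeek, and toStartOfMonth.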

Kumfo's Grocery Store: experience write-ups and study notes, mainly around PHP. I am currently learning machine learning, so some machine-learning reflections will be added as well.

kumfo, backend engineer at SegmentFault
