1 Background
As a veteran back-end developer, you have probably heard sayings like: "It's better for a single MySQL table not to exceed 2000w (20 million) rows", "Plan for data migration once a single table passes 2000w rows", or "That table is already at 2000w rows — no wonder queries are slow."
These sayings are a bit like "only discuss technology in the group chat, no off-topic driving, and never over 120 km/h or you get auto-kicked" — often heard, never actually tested, haha.
So let's floor the accelerator and push it to 180....
2 Experiment
Let's experiment and see. First, build a table.
Insert one row of seed data.
MySQL has no built-in rownum pseudo column, so we emulate one with a user variable, setting its starting point to 1.
Run the following SQL 20 times in a row: each run doubles the data, so 2^20 is approximately 100w (about one million) rows; 23 runs gives 2^23, approximately 800w. That is how tens of millions of test rows can be inserted. This grows the data by doubling; to grow it in smaller increments, there is a trick: append a WHERE condition to the SQL, such as id > some value, to control how many rows get copied.
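The SQL statements in the original post were shown only as images; below is a minimal sketch of the whole setup, assuming a `person` table (the name implied by the person.ibd file mentioned later) with hypothetical columns:

```sql
-- Hypothetical schema; the real column list was lost with the images.
CREATE TABLE person (
    id BIGINT NOT NULL AUTO_INCREMENT,
    person_id TINYINT NOT NULL,
    person_name VARCHAR(200),
    gmt_create DATETIME,
    gmt_modified DATETIME,
    PRIMARY KEY (id)
) ENGINE = InnoDB;

-- Seed row, plus the user variable acting as a pseudo rownum column.
INSERT INTO person VALUES (1, 1, 'user_1', NOW(), NOW());
SET @rownum = 1;

-- Each execution doubles the row count: 20 runs ~ 2^20 ~ 100w rows.
INSERT INTO person (id, person_id, person_name, gmt_create, gmt_modified)
SELECT @rownum := @rownum + 1, person_id, person_name, NOW(), NOW()
FROM person;
```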
It should be noted that somewhere around 800w or 1000w rows you may hit the error "The total number of locks exceeds the lock table size". This happens because the InnoDB buffer pool, which also holds the lock table, is set too small; you only need to enlarge the innodb_buffer_pool_size parameter.
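For reference, a sketch of the fix (the 4GB figure is only an example; the parameter can be resized online in MySQL 5.7.5 and later):

```sql
-- Enlarge the InnoDB buffer pool to avoid
-- "The total number of locks exceeds the lock table size".
SET GLOBAL innodb_buffer_pool_size = 4 * 1024 * 1024 * 1024;
```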
Let's first look at a set of test data. It was collected on MySQL 8.0 on my local machine, which also runs IDEA, a browser, and various other tools, so its configuration is nowhere near that of a dedicated database server — treat the numbers as a reference only.
This data does seem to match the title: once the table reaches 2000w rows, query time rises sharply. But is that an iron law? Let's look at where this recommended value of 2kw actually comes from.
3 Single-Table Row Limit
First, let's think about it: what is the maximum number of rows a single table can hold?
Look at the table-creation SQL above: id is the primary key and must be unique, so the range of the primary key type caps the table size. If id is declared as a signed INT (32 bits), the maximum is 2^31 - 1, about 2.1 billion; a signed BIGINT raises that to 2^63 - 1 (9223372036854775807), a number that is hard to even picture. In practice, the disk would usually fill up long before either limit is reached!!
If the auto-increment column is declared as unsigned BIGINT when the table is created, the maximum auto-increment value is 18446744073709551615. At the rate of one insert per second, when would it run out?
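A quick back-of-the-envelope check (plain arithmetic, leap years ignored):

```python
# Exhausting an unsigned BIGINT auto-increment key at one insert per second.
max_unsigned_bigint = 2 ** 64 - 1        # 18446744073709551615
seconds_per_year = 365 * 24 * 3600
years = max_unsigned_bigint / seconds_per_year
print(f"about {years:.2e} years")        # on the order of 5.8e11 years
```

Roughly 585 billion years — for all practical purposes, never.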
4 Tablespace
Let's take a look at the structure of the index. By the way, everything discussed below is based on the InnoDB engine, whose indexes, as everyone knows, are B+ trees.
The table's data is stored the same way on disk: it lives in a file called person.ibd (InnoDB data), also known as the tablespace. Although the rows look contiguous in the table, inside the file they are actually split across many small data pages, each 16K, roughly as pictured below. Of course, this is an abstraction — a tablespace also involves concepts such as segments, extents, and page groups, but we can step back from those details here.
5 Page Data Structure
Each page is only 16K, so a large table cannot fit in one page and the data must spill into other pages. To associate these pages, each page records the addresses of its previous and next page, making the neighboring pages easy to find. Each page must also be uniquely identifiable, so it carries a unique mark: the page number. Pages hold records and are therefore read and written; an interruption or other failure during a write could leave the data incomplete, so each page needs a verification mechanism and carries a checksum. And since read efficiency matters most, a Page Directory is also maintained over the page's records to speed up lookups. The actual internal structure of a page therefore looks like the figure below.
As can be seen from the figure, the storage space of an InnoDB data page is roughly divided into 7 parts — File Header, Page Header, Infimum + Supremum records, User Records, Free Space, Page Directory, and File Trailer. Some parts occupy a fixed number of bytes; for others, the size varies.
Among the 7 components of the page, the records we store go into the User Records section, laid out according to the row format we specify.
However, a freshly created page has no User Records part. Each time we insert a record, a record-sized piece of space is carved out of the Free Space part — the unused storage — and becomes part of User Records. Once Free Space is entirely consumed, the page is full, and inserting a new record requires allocating a new page. An illustration of this process is shown below.
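The fill-up process can be sketched as a toy simulation — the 15K of usable space per page and the fixed row size are simplifying assumptions, not InnoDB's real accounting:

```python
PAGE_USABLE = 15 * 1024  # bytes of a 16K page assumed usable for rows

def pages_needed(row_count, row_size):
    """Simulate inserts: each row is carved out of the current page's
    Free Space; when it no longer fits, a new page is allocated."""
    pages, free = 1, PAGE_USABLE
    for _ in range(row_count):
        if row_size > free:      # Free Space exhausted -> new page
            pages += 1
            free = PAGE_USABLE
        free -= row_size
    return pages

print(pages_needed(100, 1024))   # 1K rows, 15 per page -> 7 pages
```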
That covers how data is added. Now let's talk about lookups. To find a record, we could load every page in the tablespace into memory and check the records one by one. When the data volume is small, that is fine and memory can keep up; but reality is crueler than that and will not grant us such luxury. To solve this problem, MySQL has the concept of an index. Everyone knows an index speeds up queries — but how exactly? Let's take a look.
6 Index Data Structure
An index page in MySQL has almost the same structure as the page just described, and is also 16K. The difference is that an index page records, for each child page (data page or index page), that page's minimum primary key id and its page number, plus a page level, counted upward from 0 — which gives pages a concept of upper and lower levels.
Does this picture look a bit familiar? It resembles a binary tree — and yes, it is a tree, simply drawn here with three nodes in a 2-level structure. With more data it may grow to 3 levels. This is the B+ tree we often talk about: the bottom layer with page level = 0 holds the leaf nodes, and everything above is a non-leaf node.
Zooming in on a single node in the figure: it is a non-leaf node (an index page), and its content area holds pairs of id and page-number address. The id is the minimum record id in the corresponding child page, and the page-number address is a pointer to that page. A data page is almost the same, except that it records real row data instead of page addresses; its ids are likewise in order.
7 Single Table Recommended Values
Next, using the figure's 3-level, 2-way (in reality M-way) tree, let's walk through the process of finding a row of data.
Say we need the row with id = 6. Since non-leaf nodes store each child page's number and minimum id, we start comparing from the top level. The directory in page 10 contains [id=1, page=20] and [id=5, page=30], meaning the left child's minimum id is 1 and the right child's is 5. Since 6 > 5, the binary-search rule sends us to the right child, page 30. Page 30 turns out to have children of its own (it is a non-leaf node), so we compare again: 6 > 5 and 6 < 7, which leads us to page 60. Page 60 is a leaf node (a data page), so we load it into memory and compare record by record until we find the row with id = 6.
In this process we read three pages in total to find the row with id = 6. If all three were on disk (not preloaded into memory), that is up to three disk IOs.
Note that the page numbers in the figure are just examples; in reality they are not consecutive, and pages are not necessarily stored sequentially on disk.
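The three-page walk above can be mimicked with a toy structure (the page layout here is purely illustrative, not InnoDB's on-disk format):

```python
# Toy 3-level "B+ tree" of pages, mirroring the id=6 walk in the text.
pages = {
    10: {"leaf": False, "entries": [(1, 20), (5, 30)]},   # root
    20: {"leaf": False, "entries": [(1, 40), (3, 50)]},
    30: {"leaf": False, "entries": [(5, 60), (7, 70)]},
    40: {"leaf": True,  "rows": [1, 2]},
    50: {"leaf": True,  "rows": [3, 4]},
    60: {"leaf": True,  "rows": [5, 6]},
    70: {"leaf": True,  "rows": [7, 8]},
}

def find(target, root=10):
    """Descend from the root; at each non-leaf page follow the child
    whose minimum id is the largest one <= target.
    Returns (row_or_None, pages_read)."""
    page_no, reads = root, 0
    while True:
        page = pages[page_no]
        reads += 1                      # one page load = one potential disk IO
        if page["leaf"]:
            row = target if target in page["rows"] else None
            return row, reads
        child = page["entries"][0][1]
        for min_id, child_page in page["entries"]:
            if min_id <= target:
                child = child_page
        page_no = child

print(find(6))   # -> (6, 3): found after reading 3 pages
```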
By now we roughly understand how the table's data is structured and how a query proceeds, so we can estimate how much data such a structure can hold.
From the above diagram, we know that the leaf nodes of the B+ tree hold the data, while the non-leaf nodes hold index entries.
So within the same 16K page, each entry in a non-leaf node points to another page, and that page has two possibilities: if it is a leaf node, it holds rows of data; if it is a non-leaf node, it continues to point to new pages below it.
Assume:
x = the number of child-page pointers a non-leaf node can hold
y = the number of data rows a leaf node can hold
z = the number of levels of the B+ tree
as shown in the image below.
Total = x^(z-1) * y. That is, the total equals x raised to the power (z - 1), multiplied by y.
x = ?
The page structure was introduced at the start of the article, and index pages are no exception: the File Header (38 bytes), Page Header (56 bytes), Infimum + Supremum (26 bytes), and File Trailer (8 bytes), plus the Page Directory, add up to roughly 1K of overhead, which we round to 1K. Out of the 16K page, the remaining 15K is left to store data. An index page mainly records primary keys and page numbers; assuming a BIGINT primary key (8 bytes) and a fixed-size page number (4 bytes), each entry in the index page is 12 bytes, so x = 15 * 1024 / 12 ≈ 1280 entries.
y = ?
Leaf nodes have the same page structure as non-leaf nodes, so they likewise have 15K available for data. But leaf nodes store real row data, whose size depends on many factors — the types of the fields, the number of fields, and so on — and the more space each row occupies, the fewer rows fit in a page. For now, let's count each row as 1K, so a page holds 15 rows: y ≈ 15.
At this point, do you already have a rough idea?
Plugging into the formula Total = x^(z-1) * y with x = 1280 and y = 15:
If the B+ tree has two levels, i.e. z = 2, then Total = 1280^1 * 15 = 19,200.
If the B+ tree has three levels, i.e. z = 3, then Total = 1280^2 * 15 = 24,576,000 (about 2458w).
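These numbers are easy to double-check with a few lines of arithmetic, using the same assumptions as above (15K of a page usable, 12-byte index entries, 1K rows):

```python
# Re-derive the totals from the text's assumptions.
x = 15 * 1024 // 12       # child-page pointers per non-leaf page -> 1280
y = 15 * 1024 // 1024     # 1K rows per leaf page -> 15

def total(z):
    """Total rows a z-level B+ tree can hold: x^(z-1) * y."""
    return x ** (z - 1) * y

print(x, total(2), total(3))  # 1280 19200 24576000
```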
Oh my goodness! Isn't that roughly the recommended maximum of 2000w rows from the beginning of the article! Indeed, a B+ tree is generally at most 3 levels. Just imagine 4 levels: besides the extra disk IO on every query, Total would exceed 30 billion rows — hardly reasonable for a single table — so 3 levels is the more sensible ceiling.
Is that the end of it? Not quite.
We assumed 1K rows when estimating the value of y. Suppose my actual rows take not 1K but 5K of space; then a single data page holds at most 3 rows.
Similarly, with z = 3, Total = 1280^2 * 3 = 4,915,200 (nearly 500w).
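The same check for 5K rows:

```python
# Same formula, but with 5K rows: a leaf page now holds only 3 rows.
x = 1280                          # pointers per non-leaf page, as derived above
y = 15 * 1024 // (5 * 1024)       # -> 3 rows per leaf page
total_rows = x ** (3 - 1) * y     # z = 3 levels
print(total_rows)                 # 4915200, nearly 500w
```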
Therefore, at the same tree height (that is, with similar query performance), the recommended maximum row count differs with the row data size; and many other factors affect query performance as well, such as the database version, server configuration, and how the SQL is written. To improve performance, MySQL loads a table's index into memory. As long as the InnoDB buffer pool is large enough to hold the index entirely, queries are unaffected. But once a single table grows past the point where its index no longer fits in memory, subsequent SQL queries incur disk IO and performance degrades, so increasing the hardware configuration (for example, using memory as disk) may bring an immediate performance improvement.
8 Summary
MySQL table data is stored in the form of pages, and pages are not necessarily contiguous on disk.
A page is 16K; not all of that space stores row data — some goes to fixed information such as the page header, page trailer, page number, checksum, and so on.
In a B+ tree, the data structures of leaf nodes and non-leaf nodes are the same. The difference is that leaf nodes store actual row data, while non-leaf nodes store primary keys and page numbers.
The index structure itself does not cap the number of rows in a single table; 2kw is only a recommended value. Beyond it, the B+ tree may gain a level, and query performance may suffer.