
1. Introduction:

I ran into an incident today: two programmers spent an entire night puzzling over insert into select. What on earth happened? Here is a brief explanation of the background.

That day, in a MySQL database, programmer A used insert into select to back up part of the data in a table. At the same time, programmer B's application tried to insert data into that same table, and B's insert operation failed. The two of them then studied the problem overnight and finally found the cause.

Below, the problem is reproduced through a simple simulated scenario, the cause is analyzed, and finally ways to avoid the problem are given.

Main line of this article:

① Scenario reproduction: simulating the problem scenario

② Problem analysis: studying the cause of the problem

③ Final result: ways to avoid the problem

2. Scenario reproduction: simulating the problem scenario

1. Ensure that the environment is consistent:

The MySQL version used when the problem occurred is 5.7.16-log;

If you don't know which MySQL version you are using, you can query it with the following statement; any similar version will do:

select version();

Note: create a new database that only you can operate on, mainly so that the problem can be reproduced and its cause studied more directly, without interference from other factors;
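For example, a throwaway test database can be created just for this experiment; a minimal sketch, assuming you are free to create databases (the name insert_select_test is only an illustration):

# Create and switch to a dedicated test database (the name is just an example)
CREATE DATABASE IF NOT EXISTS insert_select_test DEFAULT CHARSET utf8;
USE insert_select_test;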

2. Create the test tables:

The table creation SQL is as follows:

Log table:

DROP TABLE IF EXISTS `t_log`;
CREATE TABLE `t_log` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `log` varchar(1024) DEFAULT NULL COMMENT 'log content',
  `createts` timestamp NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP COMMENT 'creation time',
  PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

Log backup table:

DROP TABLE IF EXISTS `t_log_back`;
CREATE TABLE `t_log_back` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `log` varchar(1024) DEFAULT NULL COMMENT 'log content',
  `createts` timestamp NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP COMMENT 'creation time',
  PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

3. Construct the test data to back up:

Add 200,000 rows to the t_log table to serve as the data to be backed up; a stored procedure is used to insert the data:

# Drop the stored procedure if it already exists
DROP PROCEDURE IF EXISTS proc_initData;

DELIMITER $
# Create the stored procedure
CREATE PROCEDURE proc_initData()
BEGIN
    DECLARE i INT DEFAULT 1;
    WHILE i<=200000 DO   # loop 200,000 times
        INSERT INTO t_log ( log ) VALUES ( CONCAT('test log ', i) ); # the insert statement executed on each iteration
        SET i = i+1;
    END WHILE;
END $
# Restore the default statement delimiter
DELIMITER ;

# Call the stored procedure
CALL proc_initData();
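As an optional sanity check, you can confirm that the test data was generated as expected; if the procedure ran to completion, the count should be 200,000:

# Should return 200000 if the stored procedure completed successfully
SELECT COUNT(*) FROM t_log;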

4. Reproduce the problem scenario:

Through the above three steps, the environment at the time of the problem has been rebuilt. Next, SQL statements are used to reproduce the problem that puzzled the two programmers:

4.1. First, use the Navicat visualization tool to connect to the database you created;
4.2. Then open two command-line windows:

4.3. Manually start a transaction in the first command-line window, and then start another transaction in the second window:
start transaction;
4.4. Then, in the first command-line window, execute the following insert into select backup SQL statement:

Note: the createts (creation time) filter values need to be changed to match the data in your own table; a query for finding a suitable range is shown after the statement below;

INSERT INTO t_log_back  SELECT * FROM t_log WHERE CREATETS > STR_TO_DATE('2021-05-11 12:28:16','%Y-%m-%d %H:%i:%s') and CREATETS < STR_TO_DATE('2021-05-11 12:35:33','%Y-%m-%d %H:%i:%s');
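If you are not sure which time range to use, one simple way (just an illustrative helper query, adjust it as needed) is to look at the earliest and latest creation times in the table and pick a window between them:

# Find the time span covered by the test data, then choose filter values inside it
SELECT MIN(createts) AS min_ts, MAX(createts) AS max_ts FROM t_log;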
4.5. Then, in the other command-line window, execute the following insert statement:
insert into t_log (log) values('insert into select test log');

At this point, you will find that the insert statement is blocked until it times out and fails; what is causing this?

If you are already familiar with this area, you probably know that it is because the first transaction, which ran the insert into select backup, was not committed, so the t_log table remained locked and the write operations of other transactions were blocked;

It is true that the write operations of other transactions are blocked because the first transaction has not been committed. This is also exactly the two programmers' problem: when programmer A used insert into select to back up the data, the amount of data was so large that the transaction did not finish for a long time, which eventually caused programmer B's insert operation to time out and fail;
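How long the blocked insert waits before failing is governed by the innodb_lock_wait_timeout setting (50 seconds by default in MySQL 5.7); once it expires, the blocked statement fails with error 1205. You can check the current value like this:

# Default is 50 seconds; after it elapses the blocked statement fails with
# ERROR 1205 (HY000): Lock wait timeout exceeded; try restarting transaction
SHOW VARIABLES LIKE 'innodb_lock_wait_timeout';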

This is also why so much material on the Internet says to avoid long transactions as much as possible: they block other concurrently executing write transactions;

At this point, you may have two questions:

① Question 1: When the insert into select backup is executed, is the whole table locked directly with a table lock?

Note: the InnoDB storage engine supports row locks, and when searching online you will often find a claim like: "InnoDB's row locks are implemented based on indexes: InnoDB uses row-level locks only when the where condition hits an index; otherwise, InnoDB uses table locks directly!" Is this sentence really correct?

② Question 2: Is there a difference between table locks and row locks under different transaction isolation levels?

There are differences, mainly because row locks behave differently under the RC (read committed) and RR (repeatable read) isolation levels; the specific differences are explained below;

3. Problem analysis: studying the cause of the problem

Through the scenario reproduction above, the problem has been reproduced, and we know that the long transaction keeps the table locked, which causes the write operations of other transactions to block and eventually time out;

Now that the cause is known, let's dig further: is the table really locked directly with a table lock?

According to the information on the Internet, if the where condition does not hit an index, a table lock is used directly. By that logic, when the insert into select filters data by the createts creation time field, a table lock should be taken directly, because createts has no index;

But everyone needs to know that there are thousands of articles on the Internet, and many of them draw wrong conclusions. We cannot simply take them at face value; we need to verify them in practice, which also deepens our own understanding.

1. Let's verify in practice whether a table lock is actually taken:

①. First, re-execute the five steps above that reproduce the problem scenario;

②. Then open a query window and enter the following SQL statement to query the lock information held by the current transactions:

SELECT trx.trx_id, trx.trx_state, trx.trx_started, trx.trx_query,
       locks.lock_id, locks.lock_mode, locks.lock_type, locks.lock_table, locks.lock_index,
       trx.trx_rows_locked, trx.trx_isolation_level
FROM information_schema.INNODB_TRX trx, INFORMATION_SCHEMA.INNODB_LOCKS locks
WHERE trx.trx_id = locks.lock_trx_id;

Notes:

①. The SQL statement above only returns lock data while a statement is being blocked, so it needs to be executed immediately after the five reproduction steps are completed;

②. In addition to executing the SQL statement above to obtain transaction and lock information, you can also directly query the status of the InnoDB storage engine and read the required information from its TRANSACTIONS section; the statement is as follows:

show engine innodb status;
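In MySQL 5.7 you can also join in INFORMATION_SCHEMA.INNODB_LOCK_WAITS to see directly which transaction is blocking which; a minimal sketch (this table was removed in MySQL 8.0, where performance_schema is used instead):

# Which transaction is waiting for a lock, and which transaction is blocking it (MySQL 5.7)
SELECT w.requesting_trx_id, w.requested_lock_id,
       w.blocking_trx_id, w.blocking_lock_id
FROM INFORMATION_SCHEMA.INNODB_LOCK_WAITS w;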

2. The execution results are as follows:

Field descriptions:

  • trx_id: the transaction id
  • trx_state: the transaction state; LOCK WAIT: blocked and waiting, RUNNING: running
  • trx_started: the transaction start time
  • trx_query: the SQL statement being executed by the transaction
  • lock_id: the id of the lock held by the transaction
  • lock_mode: the lock mode; X: exclusive lock (write lock), S: shared lock (read lock)
  • lock_type: the lock type; RECORD: row lock (record lock), TABLE: table lock
  • lock_table: the locked table
  • lock_index: the index used by the row lock, null for a table lock
  • trx_rows_locked: the number of rows locked by this transaction. This value may include rows marked for deletion that are physically present but not visible to the transaction.
  • trx_isolation_level: the isolation level used by this transaction

3. Analysis of execution results:

3.1. trx_id is 235430:

According to the execution result, the transaction with id 235430 is the one executing the insert into select. Since the SQL in this transaction has already finished executing, the specific statement is no longer displayed in the trx_query field;

This transaction holds row locks: it scans the entire clustered index and adds a row lock to every row in the table. The lock type is a shared (read) lock, and the value shown in the trx_rows_locked field is roughly the total number of rows in the table identified by lock_table;

The isolation level used by this transaction is REPEATABLE READ (repeatable read);

3.2. trx_id is 235435:

The transaction with id 235435 is the one executing the insert of new data. This transaction is blocked: when inserting the data it also needs to lock the clustered index, and the lock type it requests is an exclusive (write) lock;

The isolation level used by this transaction is also REPEATABLE READ (repeatable read);

3.3. Conclusion + question:

①. Conclusion: InnoDB's row locks are indeed based on indexes, but if the where condition does not hit an index, row locks are still taken, just on the clustered index. Every table has a clustered index: even if no primary key is defined manually, MySQL automatically generates an incrementing hidden row id as the default primary key;

So a table lock is not taken directly; row locks are taken instead;

② Question: if row locks are only added to the rows that already exist in the table, then inserting new data, which is appended after the last row, should in theory not be blocked;

Answer: keep in mind that the isolation level used by the current transaction is RR (repeatable read). Under the RR isolation level there are gap locks: besides the row locks, gap locks are also added on the clustered index by default, i.e. row lock + gap lock; the specific locks are shown in the following figure:

Row lock + gap lock in MySQL together form a lock called the Next-Key Lock. Under the RR isolation level, next-key locks are what effectively lock up the whole table in this scenario, and they are also what InnoDB uses to avoid phantom reads;
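If you want to observe a gap lock on its own, here is a minimal two-session sketch under the RR isolation level; it assumes the ids run from 1 to 200,000 as generated by the stored procedure above, and the boundary value 199990 is only illustrative:

# Session 1: take shared next-key locks on the tail of the index and leave the transaction open
START TRANSACTION;
SELECT * FROM t_log WHERE id > 199990 LOCK IN SHARE MODE;

# Session 2: the new auto-increment row falls into the locked gap after the last record, so this blocks
INSERT INTO t_log (log) VALUES ('gap lock test');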

Note: gap locks do not exist under the Read Uncommitted (RU) and Read Committed (RC) isolation levels;

Since gap locks do not exist under the RC (read committed) isolation level, does that mean the table will not be locked under RC, and the problem described in this article will not occur? Let's study that below;

4. The final result: ways to avoid the problem

From the above analysis, two possible ways to avoid the problem can be derived, but both need to be verified in practice:

①. Under the RR isolation level, create a secondary index on the createts creation time field;

②. Set the transaction isolation level of the current database to RC (read committed);

1. Method one to avoid problems: create an index

Under the RR isolation level, create an index on the createts field (for example, with the statement below), re-execute the steps that reproduce the problem scenario, and you will find that the transaction inserting new data is no longer blocked;
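The index itself can be added with a statement like the following; the index name idx_createts is only an example:

# Add a secondary index on the createts column (the index name is just an example)
ALTER TABLE t_log ADD INDEX idx_createts (createts);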

Because InnoDB's row locks are based on indexes, when the field in the where condition has an index, the matching primary key values are first filtered out through the secondary index; then, back on the clustered index, row locks are added only to the records corresponding to those primary key values, with gap locks added by default between those row locks;

The locks are shown below. As the figure shows, the last gap is not locked, so the insert operation is not blocked;

2. Method two to avoid problems: set the isolation level to RC

In addition to creating an index on the where-condition field, you can also directly set the transaction isolation level of the current database to RC (read committed);

The SQL statement to set the RC isolation level of a transaction is as follows:

SET GLOBAL TRANSACTION ISOLATION LEVEL READ COMMITTED;

After setting the transaction isolation level to RC, re-execute the steps that reproduce the problem scenario, and you will find that the transaction inserting new data is no longer blocked, which shows that this method is also feasible;
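Note that SET GLOBAL only affects connections opened afterwards, so reconnect (or also set the session level) before re-testing. In MySQL 5.7 you can verify the change like this:

# MySQL 5.7: check the global and session transaction isolation levels
SELECT @@global.tx_isolation, @@session.tx_isolation;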

Also, according to the official documentation: under the RC isolation level, after MySQL evaluates the WHERE condition, it releases the record locks on rows that do not match; the details are as follows:

Execute the following SQL statement under the RC isolation level:

INSERT INTO t_log_back  SELECT * FROM t_log WHERE CREATETS > STR_TO_DATE('2021-05-11','%Y-%m-%d') and CREATETS < STR_TO_DATE('2021-05-13','%Y-%m-%d');

First scan the clustered index and add row locks:

Then, after MySQL evaluates the WHERE condition, the record locks on the rows that do not match are released, leaving only the locks shown below:

2.1. Knowledge expansion:

In fact, most Internet projects today set the MySQL transaction isolation level to RC (read committed). One reason is that RC is sufficient for most scenarios; another is that the default transaction isolation level of the Oracle database is also RC, so teams used to Oracle tend to keep the same level without modification;

A question, then: why does MySQL set its default transaction isolation level to RR?

Mainly because of a historical issue in MySQL: under the RC isolation level, using the statement format of the binlog for master-slave replication can make the master and slave data inconsistent. Later, the binlog added the row format (among others), with which the inconsistency under RC can be avoided;

Therefore, if the transaction isolation level is set to RC (read committed) and binlog-based master-slave replication is in use, the binlog format needs to be changed to row;

SQL statement to view the current database's binlog format:

show variables like 'binlog_format';

SQL statement to modify the current database's binlog format:

set global binlog_format='ROW';

5. Summary:

That wraps up this article's discussion of the magic of insert into select;

After reading this article, I hope that, besides understanding exactly what is going on with insert into select, you also take away an important message of this article: don't just accept what you read, practice more;

Also, as you read this article, be sure to follow along and try the steps yourself. When you encounter something that differs from what is described here, stay skeptical and consider whether the blogger made a mistake;

When looking things up, go to the official documentation whenever possible, because the information there is complete, clear, and correct;

6. Reference materials:

①. The INFORMATION_SCHEMA INNODB_TRX Table

②. InnoDB Locking

③. The INFORMATION_SCHEMA INNODB_LOCKS Table


If this article helped you, please give it a like. Your support is the motivation for my continued writing, thank you!
