Author: Hu Chengqing

A member of the DBA team at Aikesheng (ActionSky), specializing in failure analysis and performance optimization. Personal blog: https://www.jianshu.com/u/a95ec11f67a8 , discussion is welcome.

Source of this article: original contribution

*The original content is produced by the Aikesheng open source community and may not be used without authorization. For reprints, please contact the editor and cite the source.


max_allowed_packet sets the maximum size of a single packet that MySQL Server or the client will accept. A packet is the basic unit of data exchanged over the MySQL protocol, and this limit exists on both the server and the client.

Data packets

Each packet consists of a header and a body. The header is 4 bytes: a 3-byte body length followed by a 1-byte sequence number. Three bytes can represent at most 2^24 - 1 = 16777215 bytes (one byte short of 16M), so the body of a single packet is capped at just under 16M.

What if you want to send more than 16M of data?

To send more than 16M of data, the client splits it into multiple packets; every packet except the last carries a full-size body. When MySQL Server receives a packet whose body length equals the 16M maximum, it knows the payload spans multiple packets: it writes the current packet body into a buffer, reads the next packet and appends its body, and repeats until it reads the final, shorter packet, at which point it has received the complete data sent by the client.
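The splitting and reassembly described above can be sketched in Python. This is a simplified illustration of the MySQL protocol's packet framing (3-byte little-endian body length plus 1-byte sequence number), not the server's actual implementation:

```python
# Sketch of MySQL protocol packet framing: 4-byte header = 3-byte
# little-endian body length + 1-byte sequence number; the body is
# capped at 2^24 - 1 bytes.
MAX_BODY = 0xFFFFFF  # 16M - 1

def split_payload(payload: bytes) -> list[bytes]:
    """Split a payload into wire packets, each prefixed with a 4-byte header."""
    packets, seq, offset = [], 0, 0
    while True:
        body = payload[offset:offset + MAX_BODY]
        header = len(body).to_bytes(3, "little") + bytes([seq & 0xFF])
        packets.append(header + body)
        offset += len(body)
        seq += 1
        # A body shorter than MAX_BODY marks the final packet; a payload
        # that is an exact multiple of MAX_BODY ends with an empty packet.
        if len(body) < MAX_BODY:
            return packets

def reassemble(packets: list[bytes]) -> bytes:
    """Receiver side: append bodies to a buffer until a body < MAX_BODY."""
    buf = bytearray()
    for pkt in packets:
        body_len = int.from_bytes(pkt[:3], "little")
        buf += pkt[4:4 + body_len]
        if body_len < MAX_BODY:
            break
    return bytes(buf)
```

A 20M payload, for example, goes out as one packet with a full 16M - 1 body followed by one packet carrying the remainder.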

So what counts as one packet?

  • A single SQL statement is one packet
  • When returning query results, each row of data is one packet
  • When a parsed binlog is imported with the mysql client, each SQL statement is one packet
  • In replication, each binlog event is one packet

Below we discuss the actual impact of max_allowed_packet through testing.

Is importing SQL files limited by max_allowed_packet?

If any single SQL statement in the file exceeds max_allowed_packet, the import fails with an error:

 ##Set --net-buffer-length=16M when exporting, so each multiple-row INSERT in the dump file is about 16M
mysqldump -h127.0.0.1 -P13306 -uroot -proot --net-buffer-length=16M \
--set-gtid-purged=off sbtest sbtest1 > /data/backup/sbtest1.sql

##Set max_allowed_packet=1M on the server

##The import fails with an error
[root@localhost data]# mysql -h127.0.0.1 -P13306 -uroot -proot db3 < /data/backup/sbtest1.sql
mysql: [Warning] Using a password on the command line interface can be insecure.
ERROR 1153 (08S01) at line 41: Got a packet bigger than 'max_allowed_packet' bytes
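The interaction between --net-buffer-length and max_allowed_packet comes down to packing rows into one multi-row INSERT until a byte budget is reached. The sketch below approximates that batching logic; it is an illustration only (the real mysqldump also accounts for escaping, charset, and statement overhead), and `batch_inserts` is a hypothetical helper, not a real tool API:

```python
def batch_inserts(table: str, rows: list[str], budget: int) -> list[str]:
    """Pack pre-rendered value tuples (e.g. "(1,'a')") into multi-row
    INSERT statements no larger than `budget` bytes, mimicking how
    mysqldump keeps each statement under --net-buffer-length so the dump
    imports cleanly under a matching max_allowed_packet."""
    prefix = f"INSERT INTO {table} VALUES "
    statements, current = [], None
    for row in rows:
        if current is None:
            current = prefix + row
        elif len(current) + 1 + len(row) + 1 <= budget:  # "," + row + ";"
            current += "," + row
        else:
            statements.append(current + ";")
            current = prefix + row
    if current is not None:
        statements.append(current + ";")
    return statements
```

This is why exporting with --net-buffer-length=16M and importing under max_allowed_packet=1M fails: each generated statement is itself a single packet far above the 1M limit.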

Is importing a parsed binlog limited by max_allowed_packet?

For a row-format binlog, if the binlog generated by a single SQL statement's modifications exceeds max_allowed_packet, the import also fails.

Consider a point-in-time recovery scenario where a single transaction in the parsed binlog exceeds 1G and contains only one SQL statement: importing it triggers the max_allowed_packet error. The recovery still has to be done, so what can we do? Rename the binlog files as relay logs and let the SQL thread replay them, which bypasses this limit.

Are query results limited by max_allowed_packet?

Query results are returned one row per packet, so a query succeeds as long as no single row exceeds the max_allowed_packet set on the client:

 ##Insert 2 rows of 20M data
[root@localhost tmp]# dd if=/dev/zero of=20m.img bs=1 count=0 seek=20M
0+0 records in
0+0 records out
0 bytes (0 B) copied, 0.000219914 s, 0.0 kB/s
[root@localhost tmp]# ll -h 20m.img
-rw-r--r-- 1 root root 20M Jun  6 15:15 20m.img

mysql> create table t1(id int auto_increment primary key,a longblob);
Query OK, 0 rows affected (0.03 sec)

mysql> insert into t1 values(NULL,load_file('/tmp/20m.img'));
Query OK, 1 row affected (0.65 sec)

mysql> insert into t1 values(NULL,load_file('/tmp/20m.img'));
Query OK, 1 row affected (0.65 sec)

##The mysql client defaults to --max-allowed-packet=16M, so the read fails
mysql> select * from t1;
ERROR 2020 (HY000): Got packet bigger than 'max_allowed_packet' bytes

##With the mysql client set to --max-allowed-packet=22M (23068672 bytes), the read succeeds
[root@localhost ~]# mysql -h127.0.0.1 -P13306 -uroot -proot --max-allowed-packet=23068672 sbtest -e "select * from t1;" > /tmp/t1.txt

[root@localhost ~]# ll  -h /tmp/t1.txt
-rw-r--r-- 1 root root 81M Jun  6 15:30 /tmp/t1.txt

Is the load data file size limited by max_allowed_packet?

Neither the size of the file loaded with LOAD DATA nor the size of a single row in it is limited by max_allowed_packet:

 ##Export the data from the previous test: 2 rows, 81M in total
mysql> select * into outfile '/tmp/t1.csv' from t1;
Query OK, 2 rows affected (0.57 sec)

[root@localhost ~]# ll -h /tmp/t1.csv
-rw-r----- 1 mysql mysql 81M Jun  6 15:32 /tmp/t1.csv

##MySQL Server max_allowed_packet=16M
mysql> select @@max_allowed_packet;
+----------------------+
| @@max_allowed_packet |
+----------------------+
|             16777216 |
+----------------------+
1 row in set (0.00 sec)

##LOAD DATA succeeds; it is not limited by max_allowed_packet
mysql> load data infile '/tmp/t1.csv' into table t1;
Query OK, 2 rows affected (1.10 sec)
Records: 2  Deleted: 0  Skipped: 0  Warnings: 0

How does a SQL statement whose binlog exceeds 1G get past max_allowed_packet and replicate to the replica?

The maximum packet size that the replica's I/O thread and SQL thread can handle is controlled by the parameter slave_max_allowed_packet. It limits the size of a single binlog event, not the amount of data modified by a single SQL statement.

The primary's dump thread automatically sets its max_allowed_packet to 1G, independent of the global max_allowed_packet variable; this caps the size of each event the dump thread reads.

For details, please refer to:
https://mp.weixin.qq.com/s/EfNY_UwEthiu-DEBO7TrsA

In addition, for large transactions exceeding 4G, the replica's heartbeat will report an error:
https://opensource.actionsky.com/20201218-mysql/


ActionSky Open Source Community (爱可生开源社区)

Founded in 2017, the community is committed to open-sourcing high-quality operations tools, regularly sharing practical technical content, and running nationwide community events. Its open source products so far include the SQL audit tool SQLE, the distributed middleware DBLE, and the data transfer component DTLE.