Author: Shen Guangyu
A member of Aikesheng's southern-region DBA team, mainly responsible for MySQL troubleshooting and performance optimization. Persistent in technology and accountable to customers.
Source of this article: original contribution
*This article is original content produced by the Aikesheng open source community and may not be used without authorization. For reprints, please contact the editor and cite the source.
1. Background
A customer's MySQL instance holds a large amount of data (terabyte scale), and both its full and incremental data need to be migrated to DBLE quickly. TiDB DM supports multi-threaded data export, import, and incremental replication, so it was chosen as the migration tool. Using this case as the background, this article introduces how to migrate data from MySQL to DBLE with TiDB DM and some problems encountered along the way.
2. Migration architecture and server environment
(1) The data migration architecture is shown in the diagram below:
(2) The server environment is as follows:
| Server IP | Role | Port | Version | Remark |
|---|---|---|---|---|
| 10.186.65.83 | dm-master | 8261/8291 | v2.0.7 | tiup install node |
| 10.186.65.118 | dm-worker | 8262 | v2.0.7 | |
| 10.186.65.14 | source MySQL | 4490 | MySQL 8.0.18 | |
| 10.186.65.4 | DBLE | 8066 | DBLE 3.21.10.6 | |
| 10.186.65.61 | DBLE datanode | 4408 | MySQL 8.0.25 | |
| 10.186.65.65 | DBLE datanode | 4408 | MySQL 8.0.25 | |
3. Install TiDB DM
(1) Prepare the environment and install tiup
#Create the tidb user (password: dmadmin) on both the dm-master and dm-worker hosts, and grant it sudo
shell> useradd tidb
shell> echo "dmadmin" | passwd tidb --stdin
shell> echo "tidb ALL=(ALL) NOPASSWD: ALL" > /etc/sudoers.d/tidb
#Switch to the tidb user, generate an SSH key, and set up SSH trust; every host in the DM cluster needs mutual SSH trust
shell> su - tidb
shell> ssh-keygen -t rsa
shell> ssh-copy-id -i ~/.ssh/id_rsa.pub tidb@10.186.65.118
shell> ssh-copy-id -i ~/.ssh/id_rsa.pub tidb@10.186.65.83
#Install tiup as the tidb user
shell> curl --proto '=https' --tlsv1.2 -sSf https://tiup-mirrors.pingcap.com/install.sh | sh
shell> source .bash_profile
shell> which tiup
~/.tiup/bin/tiup
#Install dmctl
shell> tiup install dm dmctl:v2.0.7
component dm version v1.10.2 is already installed
download https://tiup-mirrors.pingcap.com/dmctl-v2.0.7-linux-amd64.tar.gz 26.92 MiB / 26.92 MiB 100.00% 10.01 MiB/s
(2) Deploy the DM cluster
#Generate a configuration template
shell> tiup dm template > dm_topology.yaml
#The edited configuration file is as follows:
shell> cat dm_topology.yaml | grep -v ^[#-] | grep -v ^$
global:
  user: "tidb"
  ssh_port: 22
  deploy_dir: "/home/tidb/dm/deploy"
  data_dir: "/home/tidb/dm/data"
master_servers:
  - host: 10.186.65.83
worker_servers:
  - host: 10.186.65.118
monitoring_servers:
  - host: 10.186.65.83
grafana_servers:
  - host: 10.186.65.83
alertmanager_servers:
  - host: 10.186.65.83
#Deploy and start the DM cluster
shell> tiup dm deploy dm-test v2.0.7 ./dm_topology.yaml --user tidb -i /home/tidb/.ssh/id_rsa
shell> tiup dm start dm-test
#Check the DM cluster
[tidb@10_186_65_83 ~]$ tiup dm display dm-test
tiup is checking updates for component dm ...
Starting component `dm`: /home/tidb/.tiup/components/dm/v1.10.2/tiup-dm display dm-test
Cluster type: dm
Cluster name: dm-test
Cluster version: v2.0.7
Deploy user: tidb
SSH type: builtin
Grafana URL: http://10.186.65.83:3000
ID Role Host Ports OS/Arch Status Data Dir Deploy Dir
-- ---- ---- ----- ------- ------ -------- ----------
10.186.65.83:9093 alertmanager 10.186.65.83 9093/9094 linux/x86_64 Up /home/tidb/dm/data/alertmanager-9093 /home/tidb/dm/deploy/alertmanager-9093
10.186.65.83:8261 dm-master 10.186.65.83 8261/8291 linux/x86_64 Healthy|L /home/tidb/dm/data/dm-master-8261 /home/tidb/dm/deploy/dm-master-8261
10.186.65.118:8262 dm-worker 10.186.65.118 8262 linux/x86_64 Free /home/tidb/dm/data/dm-worker-8262 /home/tidb/dm/deploy/dm-worker-8262
10.186.65.83:3000 grafana 10.186.65.83 3000 linux/x86_64 Up - /home/tidb/dm/deploy/grafana-3000
10.186.65.83:9090 prometheus 10.186.65.83 9090 linux/x86_64 Up /home/tidb/dm/data/prometheus-9090 /home/tidb/dm/deploy/prometheus-9090
Total nodes: 5
4. DBLE configuration
(1) db.xml
<?xml version="1.0"?>
<dble:db xmlns:dble="http://dble.cloud/">
<dbGroup name="dbGroup1" rwSplitMode="0" delayThreshold="-1">
<heartbeat>select 1</heartbeat>
<dbInstance name="host_1" url="10.186.65.61:4408" user="sgy" password="QfnHoIeIYL7ZT+EdJMYNuiLw6glbx2hEyrqxK+uxFPU8vja8vZHCLEKvKBOt1vOJdXCCgW7wNidJaTYWORDaUg==" maxCon="2000" minCon="50" primary="true" usingDecrypt="true"></dbInstance>
</dbGroup>
<dbGroup name="dbGroup2" rwSplitMode="0" delayThreshold="-1">
<heartbeat>select 1</heartbeat>
<dbInstance name="host_2" url="10.186.65.65:4408" user="sgy" password="d+rG/82+4h21ARS8D6Gu5MIFQ2UBC0h+ZS0EI7bvWAtTJUBsKuFY8+AVZtW1pZk+qtISoI2WpVt29Z9eqMRP5A==" maxCon="2000" minCon="50" primary="true" usingDecrypt="true"></dbInstance>
</dbGroup>
</dble:db>
#Note: the user passwords are encrypted strings
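The ciphertext above is produced with the DecryptUtil class that ships with DBLE. A minimal sketch, assuming a hypothetical install path and the 0:instanceName:user:password argument format of recent DBLE releases (check the DBLE documentation for the exact format in your version):
#hypothetical jar path; substitute the actual dble jar on your server
shell> java -cp /opt/dble/lib/dble-server-3.21.10.6.jar com.actiontech.dble.util.DecryptUtil 0:host_1:sgy:admin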
(2) sharding.xml
<?xml version="1.0"?>
<dble:sharding xmlns:dble="http://dble.cloud/" >
<schema name="dm_meta" sqlMaxLimit="-1">
<singleTable name="mysql_sync_to_dble_loader_checkpoint" shardingNode="dm_meta" />
<singleTable name="mysql_sync_to_dble_syncer_checkpoint" shardingNode="dm_meta" />
</schema>
<schema name="sbtest" sqlMaxLimit="-1">
<shardingTable name="sbtest1" shardingNode="dn1,dn2,dn3,dn4" sqlMaxLimit="-1" shardingColumn="id" function="pro_func_jumpHash"></shardingTable>
<shardingTable name="sbtest2" shardingNode="dn1,dn2,dn3,dn4" sqlMaxLimit="-1" shardingColumn="id" function="pro_func_jumpHash"></shardingTable>
<shardingTable name="t1" shardingNode="dn1,dn2,dn3,dn4" sqlMaxLimit="-1" shardingColumn="c1" function="pro_func_jumpHash"></shardingTable>
<shardingTable name="t2" shardingNode="dn1,dn2,dn3,dn4" sqlMaxLimit="-1" shardingColumn="c1" function="pro_func_jumpHash"></shardingTable>
</schema>
<shardingNode name="dn1" dbGroup="dbGroup1" database="dh_dn1"></shardingNode>
<shardingNode name="dn2" dbGroup="dbGroup1" database="dh_dn2"></shardingNode>
<shardingNode name="dn3" dbGroup="dbGroup2" database="dh_dn3"></shardingNode>
<shardingNode name="dn4" dbGroup="dbGroup2" database="dh_dn4"></shardingNode>
<shardingNode name="dm_meta" dbGroup="dbGroup1" database="dm_meta"/>
<function name="pro_func_jumpHash" class="jumpStringHash">
<property name="partitionCount">4</property>
<property name="hashSlice">0:0</property>
</function>
</dble:sharding>
Note: the schema where DM stores its sync metadata, along with two checkpoint tables, must be configured in advance. The table names are:
task name + _loader_checkpoint, i.e. the table mysql_sync_to_dble_loader_checkpoint
task name + _syncer_checkpoint, i.e. the table mysql_sync_to_dble_syncer_checkpoint
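Once the task is running, the replication position DM has recorded can be read back through DBLE from the syncer checkpoint table; the column layout is DM-internal and may differ across versions:
dble> select * from dm_meta.mysql_sync_to_dble_syncer_checkpoint\G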
(3) user.xml
<?xml version="1.0"?>
<dble:user xmlns:dble="http://dble.cloud/">
<managerUser name="root" password="RYQdYYnzbcZlDuhV4LhJxFTM8kbU3d0y183LU+FgRWzNscvvvFPcdmeEKMdswMyzIaA+kObcozliHEYAlT0AjA==" usingDecrypt="true"></managerUser>
<shardingUser name="sz" schemas="sbtest,dm_meta" password="jyaXmGFU+mdTGbUZIVbdEcVwItLPI+Yjxdq4wkOzhAYRB29WGZJd0/PkTJh3ky/v4E2yYoqgUzJXPzPRPiQF0Q==" usingDecrypt="true" readOnly="false" maxCon="1000"></shardingUser>
</dble:user>
#Note: the user passwords are encrypted strings
(4) Log in to the DBLE management port and create the physical databases
shell> /data/mysql/base/5.7.36/bin/mysql -uroot -padmin -h 10.186.65.4 -P 9066
dble> create database @@shardingNode = 'dn$1-4';
Query OK, 1 row affected (0.06 sec)
dble> create database @@shardingNode = 'dm_meta';
Query OK, 1 row affected (0.01 sec)
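To verify that the physical shard databases now exist on the backends, the shardingNode status can be listed from the management port (assuming DBLE 3.x management-command syntax):
dble> show @@shardingNode;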
5. Generate test data on the source MySQL with sysbench
#Run prepare to initialize 1,000,000 rows per table
shell> /usr/share/sysbench/oltp_insert.lua --mysql-db=sbtest --mysql-host=10.186.65.14 --mysql-port=4490 --mysql-user=sgy --mysql-password=admin --tables=2 --table-size=1000000 --threads=16 --time=60 --report-interval=1 --max-requests=0 --percentile=95 --mysql-ignore-errors=1062 prepare
sysbench 1.0.17 (using system LuaJIT 2.0.4)
Initializing worker threads...
Creating table 'sbtest1'...
Creating table 'sbtest2'...
Inserting 1000000 records into 'sbtest2'
Inserting 1000000 records into 'sbtest1'
Creating a secondary index on 'sbtest2'...
Creating a secondary index on 'sbtest1'...
#Use sysbench to write data continuously
shell> /usr/share/sysbench/oltp_read_write.lua --mysql-db=sbtest --mysql-host=10.186.65.14 --mysql-port=4490 --mysql-user=sgy --mysql-password=admin --tables=2 --table-size=2000000 --threads=16 --time=6000 --report-interval=1 --max-requests=0 --percentile=95 --mysql-ignore-errors=1062 run
sysbench 1.0.17 (using system LuaJIT 2.0.4)
Running the test with following options:
Number of threads: 16
Report intermediate results every 1 second(s)
Initializing random number generator from current time
Initializing worker threads...
Threads started!
[ 1s ] thds: 16 tps: 585.44 qps: 11916.49 (r/w/o: 8360.33/1282.63/2273.53) lat (ms,95%): 55.82 err/s: 0.00 reconn/s: 0.00
[ 2s ] thds: 16 tps: 744.18 qps: 14819.02 (r/w/o: 10379.16/1609.36/2830.50) lat (ms,95%): 41.85 err/s: 0.00 reconn/s: 0.00
[ 3s ] thds: 16 tps: 605.11 qps: 12144.20 (r/w/o: 8500.54/1321.24/2322.42) lat (ms,95%): 47.47 err/s: 0.00 reconn/s: 0.00
[ 4s ] thds: 16 tps: 773.91 qps: 15558.16 (r/w/o: 10881.71/1737.79/2938.65) lat (ms,95%): 41.10 err/s: 0.00 reconn/s: 0.00
6. Start the data synchronization task
(1) Configure the data source
#Use dmctl to encrypt the password; each invocation produces a different ciphertext
shell> tiup dmctl encrypt 'admin'
dmXgktpuF18RP1mDN/B2UkA6pDN9PQ==
#The MySQL source configuration file is as follows
shell> cat mysql_source_14_4490.yaml
source-id: "mysql_source_14_4490"
from:
  host: "10.186.65.14"
  port: 4490
  user: "sgy"
  password: "dmXgktpuF18RP1mDN/B2UkA6pDN9PQ=="
#Create the data source
shell> tiup dmctl --master-addr 10.186.65.83:8261 operate-source create ./mysql_source_14_4490.yaml
Starting component `dmctl`: /home/tidb/.tiup/components/dmctl/v2.0.7/dmctl/dmctl --master-addr 10.186.65.83:8261 operate-source create ./mysql_source_14_4490.yaml
{
"result": true,
"msg": "",
"sources": [
{
"result": true,
"msg": "",
"source": "mysql_source_14_4490",
"worker": "dm-10.186.65.118-8262"
}
]
}
#View the data source
shell> tiup dmctl --master-addr 10.186.65.83:8261 operate-source show
{
"result": true,
"msg": "",
"sources": [
{
"result": true,
"msg": "",
"source": "mysql_source_14_4490",
"worker": "dm-10.186.65.118-8262"
}
]
}
(2) Configure the synchronization task
#The task configuration file is as follows
shell> cat mysql_sync_to_dble.yaml
name: "mysql_sync_to_dble"        #task name
task-mode: "all"                  #full dump + real-time binlog replication
clean-dump-file: true
timezone: "Asia/Shanghai"
ignore-checking-items: ["auto_increment_ID"]
target-database:
  host: "10.186.65.4"             #DBLE host IP
  port: 8066                      #DBLE traffic port
  user: "sz"                      #user for connecting to DBLE
  password: "KRfSNtdxe63tehpnCYoCz0ABdUGivg=="   #password of the DBLE user, generated with tiup dmctl encrypt 'password'
mysql-instances:
  -
    source-id: "mysql_source_14_4490"
    block-allow-list: "global"
    mydumper-config-name: "global"
    loader-config-name: "global"
    syncer-config-name: "global"
block-allow-list:
  global:
    do-dbs: ["sbtest"]            #migrate only the sbtest database
mydumpers:
  global:
    extra-args: "-B sbtest"       #dump only the sbtest database
loaders:
  global:
    pool-size: 8
syncers:
  global:
    worker-count: 8
#For more detailed configuration options, see the official documentation:
https://docs.pingcap.com/zh/tidb-data-migration/v2.0/task-configuration-file-full#%E5%AE%8C%E6%95%B4%E9%85%8D%E7%BD%AE%E6%96%87%E4%BB%B6%E7%A4%BA%E4%BE%8B
#Check the configuration file
shell> tiup dmctl:v2.0.7 --master-addr=10.186.65.83:8261 check-task ./mysql_sync_to_dble.yaml
Starting component `dmctl`: /home/tidb/.tiup/components/dmctl/v2.0.7/dmctl/dmctl --master-addr=10.186.65.83:8261 check-task ./mysql_sync_to_dble.yaml
{
"result": true,
"msg": "check pass!!!"
}
#Start the synchronization task
shell> tiup dmctl:v2.0.7 --master-addr=10.186.65.83:8261 start-task ./mysql_sync_to_dble.yaml
Starting component `dmctl`: /home/tidb/.tiup/components/dmctl/v2.0.7/dmctl/dmctl --master-addr=10.186.65.83:8261 start-task ./mysql_sync_to_dble.yaml
{
"result": true,
"msg": "",
"sources": [
{
"result": true,
"msg": "",
"source": "mysql_source_14_4490",
"worker": "dm-10.186.65.118-8262"
}
]
}
#On the dm-worker node, inspect the exported SQL files
shell> ls -lh /home/tidb/dm/deploy/dm-worker-8262/dumped_data.mysql_sync_to_dble
total 384M
-rw-r--r-- 1 tidb tidb 187 Aug 5 14:04 metadata
-rw-r--r-- 1 tidb tidb 48M Aug 5 14:04 sbtest.sbtest1.0000000000000.sql
-rw-r--r-- 1 tidb tidb 48M Aug 5 14:04 sbtest.sbtest1.0000000010000.sql
-rw-r--r-- 1 tidb tidb 48M Aug 5 14:04 sbtest.sbtest1.0000000020000.sql
-rw-r--r-- 1 tidb tidb 48M Aug 5 14:04 sbtest.sbtest1.0000000030000.sql
-rw-r--r-- 1 tidb tidb 369 Aug 5 14:04 sbtest.sbtest1-schema.sql
-rw-r--r-- 1 tidb tidb 48M Aug 5 14:04 sbtest.sbtest2.0000000000000.sql
-rw-r--r-- 1 tidb tidb 48M Aug 5 14:04 sbtest.sbtest2.0000000010000.sql
-rw-r--r-- 1 tidb tidb 48M Aug 5 14:04 sbtest.sbtest2.0000000020000.sql
-rw-r--r-- 1 tidb tidb 48M Aug 5 14:04 sbtest.sbtest2.0000000030000.sql
-rw-r--r-- 1 tidb tidb 369 Aug 5 14:04 sbtest.sbtest2-schema.sql
-rw-r--r-- 1 tidb tidb 152 Aug 5 14:04 sbtest-schema-create.sql
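#The metadata file records the binlog filename, position, and GTID set at the moment the dump was taken; the syncer phase resumes incremental replication from that point
shell> cat /home/tidb/dm/deploy/dm-worker-8262/dumped_data.mysql_sync_to_dble/metadata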
#SQL file that creates the database
shell> head /home/tidb/dm/deploy/dm-worker-8262/dumped_data.mysql_sync_to_dble/sbtest-schema-create.sql
/*!40101 SET NAMES binary*/;
CREATE DATABASE `sbtest` /*!40100 DEFAULT CHARACTER SET utf8mb4 COLLATE utf8mb4_bin */ /*!80016 DEFAULT ENCRYPTION='N' */;
#SQL file that creates the table
shell> head /home/tidb/dm/deploy/dm-worker-8262/dumped_data.mysql_sync_to_dble/sbtest.sbtest1-schema.sql
/*!40101 SET NAMES binary*/;
CREATE TABLE `sbtest1` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`k` int(11) NOT NULL DEFAULT '0',
`c` char(120) COLLATE utf8mb4_bin NOT NULL DEFAULT '',
`pad` char(60) COLLATE utf8mb4_bin NOT NULL DEFAULT '',
PRIMARY KEY (`id`),
KEY `k_1` (`k`)
) ENGINE=InnoDB AUTO_INCREMENT=1000001 DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_bin;
#SQL file of batched INSERTs generated by the dump
shell> head /home/tidb/dm/deploy/dm-worker-8262/dumped_data.mysql_sync_to_dble/sbtest.sbtest1.0000000000000.sql
/*!40101 SET NAMES binary*/;
INSERT INTO `sbtest1` (`id`,`k`,`c`,`pad`) VALUES
(1,498670,'31451373586-15688153734-79729593694-96509299839-83724898275-86711833539-78981337422-35049690573-51724173961-87474696253','98996621624-36689827414-04092488557-09587706818-65008859162'),
(2,497778,'21472970079-70972780322-70018558993-71769650003-09270326047-32417012031-10768856803-14235120402-93989080412-18690312264','04776826683-45880822084-77922711547-29057964468-76514263618'),
(3,498956,'49376827441-24903985029-56844662308-79012577859-40518387141-60588419212-24399130405-42612257832-29494881732-71506024440','26843035807-96849339132-53943793991-69741192222-48634174017'),
(4,518727,'85762858421-36258200885-10758669419-44272723583-12529521893-95630803635-53907705724-07005352902-43001596772-53048338959','37979424284-37912826784-31868864947-42903702727-96097885121'),
(5,502480,'24805466175-85245528617-94635882649-46305216925-28637832581-03224489581-68883711727-95491561683-91969681472-12022277774','19288959552-55556468076-14192290426-55457672510-18043372364'),
(6,500774,'52892836230-54177743992-01821871718-48412537487-30066596248-87215430797-00375777469-64498831720-58542556455-90784765418','59487960480-08453890592-99628797439-16757639138-29377916560'),
(7,501466,'85820931248-14475640036-11980694501-86588543167-31029306229-09626867980-90685354565-02350460358-25863585366-53793794448','26081374730-86321700986-51212137094-30635959762-03880194434'),
(8,497838,'81578049255-33453976301-67096870761-27658738403-30546242249-53677469854-26594573136-34292002037-52736825353-99165193170','64289062455-51067794311-09919261228-11533354367-07401173317'),
(3) View synchronization task status
shell> tiup dmctl:v2.0.7 --master-addr=10.186.65.83:8261 query-status ./mysql_sync_to_dble.yaml
Starting component `dmctl`: /home/tidb/.tiup/components/dmctl/v2.0.7/dmctl/dmctl --master-addr=10.186.65.83:8261 query-status ./mysql_sync_to_dble.yaml
{
"result": true,
"msg": "",
"sources": [
{
"result": true,
"msg": "",
"sourceStatus": {
"source": "mysql_source_14_4490",
"worker": "dm-10.186.65.118-8262",
"result": null,
"relayStatus": null
},
"subTaskStatus": [
{
"name": "mysql_sync_to_dble",
"stage": "Running",
"unit": "Sync",
"result": null,
"unresolvedDDLLockID": "",
"sync": {
"totalEvents": "425355",
"totalTps": "1203",
"recentTps": "2386",
"masterBinlog": "(mysql-bin.000027, 76114275)",
"masterBinlogGtid": "c7827165-bf89-11ec-92e6-02000aba410e:1-5091258",
"syncerBinlog": "(mysql-bin.000026, 229462770)",
"syncerBinlogGtid": "",
"blockingDDLs": [
],
"unresolvedGroups": [
],
"synced": false,
"binlogType": "remote",
"secondsBehindMaster": "38"
}
}
]
}
]
}
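#"synced": false above means the syncer is still catching up; a simple way to watch progress is to poll the "synced" field until it turns true
shell> tiup dmctl:v2.0.7 --master-addr=10.186.65.83:8261 query-status mysql_sync_to_dble | grep '"synced"'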
7. Verify data
#Pause the sysbench workload started earlier; once DM has no lag against the source, run count(*) on the tables
#On the source MySQL, run count(*) on the tables
mysql> select count(*) from sbtest.sbtest1;
+----------+
| count(*) |
+----------+
| 1116471 |
+----------+
1 row in set (0.68 sec)
mysql> select count(*) from sbtest.sbtest2;
+----------+
| count(*) |
+----------+
| 1117020 |
+----------+
1 row in set (0.54 sec)
#Run count(*) on the tables through DBLE
dble> select count(*) from sbtest.sbtest1;
+----------+
| count(*) |
+----------+
| 1116471 |
+----------+
1 row in set (1.04 sec)
dble> select count(*) from sbtest.sbtest2;
+----------+
| count(*) |
+----------+
| 1117020 |
+----------+
1 row in set (1.58 sec)
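Because sbtest1 and sbtest2 are spread across four shards, the DBLE-side counts can also be cross-checked by summing the per-shard counts on the backend MySQL instances. A minimal sketch, assuming the backend user sgy may log in remotely and using the shard databases dh_dn1 to dh_dn4 defined in sharding.xml:
shell> mysql -usgy -p -h 10.186.65.61 -P 4408 -N -e "select count(*) from dh_dn1.sbtest1 union all select count(*) from dh_dn2.sbtest1"
shell> mysql -usgy -p -h 10.186.65.65 -P 4408 -N -e "select count(*) from dh_dn3.sbtest1 union all select count(*) from dh_dn4.sbtest1"
#the four counts should sum to the 1116471 rows reported through DBLE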
#Update a row on the source
mysql> update sbtest.sbtest1 set c=uuid() where id=20;
Query OK, 1 row affected (0.01 sec)
Rows matched: 1 Changed: 1 Warnings: 0
mysql> select * from sbtest.sbtest1 where id=20;
+----+--------+--------------------------------------+-------------------------------------------------------------+
| id | k | c | pad |
+----+--------+--------------------------------------+-------------------------------------------------------------+
| 20 | 501448 | 24649f0f-14d1-11ed-b4f2-02000aba410e | 91052688950-96415657187-00012408429-12357288330-41295735957 |
+----+--------+--------------------------------------+-------------------------------------------------------------+
1 row in set (0.00 sec)
#Query the data through DBLE
dble> select * from sbtest.sbtest1 where id=20;
+----+--------+--------------------------------------+-------------------------------------------------------------+
| id | k | c | pad |
+----+--------+--------------------------------------+-------------------------------------------------------------+
| 20 | 501448 | 24649f0f-14d1-11ed-b4f2-02000aba410e | 91052688950-96415657187-00012408429-12357288330-41295735957 |
+----+--------+--------------------------------------+-------------------------------------------------------------+
1 row in set (0.01 sec)
8. Notes and caveats
(1) TiDB DM version requirement: TiDB DM v2.0.7
(2) Disable the DBLE slow query log while importing the full data
#The full-import phase generates a large number of slow queries, so disable DBLE's slow query log before starting the synchronization task
shell> mysql --prompt='dble>' -uroot -p -h 10.186.65.4 -P 9066
#Check the slow query log status: 1 = enabled, 0 = disabled
dble> show @@slow_query_log;
+------------------+
| @@slow_query_log |
+------------------+
| 1 |
+------------------+
1 row in set (0.00 sec)
#Disable the DBLE slow query log before the full import
dble> disable @@slow_query_log;
Query OK, 1 row affected (0.01 sec)
disable slow_query_log success
#Re-enable the DBLE slow query log after the full import has finished
dble> enable @@slow_query_log;
Query OK, 1 row affected (0.01 sec)
enable slow_query_log success
(3) Check whether there is a program similar to pt-kill running on the source MySQL instance
#If pt-kill or a similar tool is running on the source MySQL instance, the dump threads may be killed during the full-dump phase of the DM task, causing the full backup to fail. The dm-worker.log shows:
[ERROR] [subtask.go:311] ["unit process error"] [subtask=oms-sync-receiver] [unit=Dump] ["error information"="{\"ErrCode\":32001,\"ErrClass\":\"dump-unit\",\"ErrScope\":\"internal\",\"ErrLevel\":\"high\",\"Message\":\"mydumper/dumpling runs with error, with output (may empty): \",\"RawCause\":\"invalid connection\"}"]
Solution: stop the pt-kill utility first and rerun the TiDB DM task; re-enable pt-kill after the full data dump has completed. A quick pre-flight check is shown below.
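#Check the source host for pt-kill (or similar watchdogs) before starting the task:
shell> ps -ef | grep pt-kill | grep -v grep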
(4) If the target table structure needs to change during migration — for example, the source table is partitioned but the sharded target table should not be, or the table character set needs modifying (utf8mb4 is recommended) — you can export the table structure from the source, modify it, and import it into DBLE, or import it into DBLE first and alter the table there; in either case, do this before starting the DM synchronization task. A sketch of the first approach follows.
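A minimal sketch of exporting and replaying the table structures. Review the generated file before applying it, since DBLE may not accept every statement mysqldump emits:
#export table structures only (no data) from the source
shell> mysqldump -h 10.186.65.14 -P 4490 -usgy -p --no-data sbtest > sbtest-schema.sql
#edit sbtest-schema.sql as needed (partitions, character set), then apply it through DBLE's traffic port
shell> mysql -usz -p -h 10.186.65.4 -P 8066 sbtest < sbtest-schema.sql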
(5) The column order of the source and target tables must match, otherwise the data may become inconsistent. When corresponding columns are both varchar but of different lengths, the synchronization task will not report an error, yet values written to the target table may be silently truncated, as illustrated below.
When the DM task starts and connects to the target, the session-level sql_mode is set to the following values:
sql_mode='IGNORE_SPACE,NO_AUTO_VALUE_ON_ZERO,NO_ENGINE_SUBSTITUTION,ALLOW_INVALID_DATES,ONLY_FULL_GROUP_BY';
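Because this sql_mode does not include STRICT_TRANS_TABLES, over-length values are truncated with only a warning rather than rejected. A minimal demonstration run directly against MySQL, using a hypothetical table t_trunc:
mysql> SET SESSION sql_mode='IGNORE_SPACE,NO_AUTO_VALUE_ON_ZERO,NO_ENGINE_SUBSTITUTION,ALLOW_INVALID_DATES,ONLY_FULL_GROUP_BY';
mysql> CREATE TABLE t_trunc (c varchar(5));
#without STRICT_TRANS_TABLES the INSERT succeeds with warning 1265 (Data truncated)
mysql> INSERT INTO t_trunc VALUES ('abcdefghij');
#the stored value is 'abcde' -- the tail was silently dropped
mysql> SELECT c FROM t_trunc;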