Author: Zhang Hao

DBA, mainly responsible for MySQL troubleshooting and DMP product support. Good at MySQL; enjoys ball games and singing~

Source of this article: original contribution

*Original content produced by the Aikesheng open source community; it may not be used without authorization. For reprints, please contact the editor and credit the source.


This article focuses on the analysis process; some details are only sketched.

A quick chat first

Some time ago, a customer reported that INSERT statements were slow. A look at slow.log confirmed it: not only were the inserts slow, other statements were slow too. (Customers focus on what matters to them; operations staff need to analyze the whole picture.)

I had run into similar problems before and picked up some hard-won wisdom from senior colleagues. Briefly, it boils down to this:

  1. First, look at the slow SQL recorded over a whole day or a chosen window, focusing on when the slow SQL appears and what statement types are recorded;
  2. If statements like INSERT are slow, the usual suspects are disk I/O, locking, and semi-synchronous replication;
  3. If the slow SQL arrives in batches, check whether the batches contain SELECTs; that helps rule semi-synchronous replication in or out.

Hands-on analysis

1. Aggregate statistics on slow.log: the slow SQL comes in batches

slow.log records many statement types: select, insert, delete, update, commit, and administrative SQL beginning with show.

 [root@big hao]# less slow.log |grep "# Tim"|awk -F "." '{print $1}'|termsql -0 "select COL2 time,COUNT(COL2) sql_count from tbl group by COL2"
time|sql_count
2022-08-30T00:05:56|63
2022-08-30T00:27:37|65
2022-08-30T00:34:58|31
2022-08-30T00:49:22|65
2022-08-30T00:56:41|61
2022-08-30T01:11:04|64
2022-08-30T01:32:50|63
2022-08-30T01:40:06|60
2022-08-30T01:54:32|59
2022-08-30T02:01:47|58
2022-08-30T02:16:16|62
2022-08-30T02:23:28|58
2022-08-30T02:37:58|58
2022-08-30T02:45:10|54
2022-08-30T02:59:40|44
2022-08-30T03:06:52|55
2022-08-30T03:21:24|55
2022-08-30T03:28:35|59
2022-08-30T03:43:08|27
2022-08-30T03:50:17|27
2022-08-30T04:04:53|58
2022-08-30T04:11:59|48
2022-08-30T04:26:34|60
2022-08-30T04:33:41|28
2022-08-30T04:48:19|24
2022-08-30T05:10:04|59
2022-08-30T05:17:07|60
2022-08-30T05:31:47|45
2022-08-30T05:38:48|61
2022-08-30T05:53:30|65
2022-08-30T06:00:30|63
2022-08-30T06:15:16|64
2022-08-30T06:22:14|65
2022-08-30T06:37:01|66
2022-08-30T06:43:55|57
2022-08-30T06:58:45|65
2022-08-30T07:05:39|65
2022-08-30T07:20:28|36
2022-08-30T07:42:10|21
2022-08-30T07:49:03|65
2022-08-30T08:03:53|35
2022-08-30T08:10:47|65
2022-08-30T08:25:37|35
2022-08-30T08:32:28|25
2022-08-30T08:47:23|51
2022-08-30T08:54:09|36
2022-08-30T09:09:07|83
2022-08-30T09:09:13|1
2022-08-30T09:15:51|74
2022-08-30T09:30:49|36
2022-08-30T09:37:35|33
2022-08-30T09:52:34|86
2022-08-30T09:59:17|21
2022-08-30T10:14:16|81
2022-08-30T10:21:01|33
2022-08-30T10:36:02|81
2022-08-30T10:42:44|72
2022-08-30T10:57:45|84
2022-08-30T11:04:27|20
2022-08-30T11:19:25|17
2022-08-30T11:41:10|20
2022-08-30T11:47:50|17
2022-08-30T12:02:52|70
2022-08-30T12:09:34|18
2022-08-30T12:24:39|65
2022-08-30T12:31:19|67
2022-08-30T12:46:20|66
2022-08-30T12:53:05|18
2022-08-30T13:08:09|67
2022-08-30T13:14:50|31
2022-08-30T13:29:56|66
2022-08-30T13:36:33|66

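For reference, the batches land on a fairly regular cadence, which the gap between consecutive batch timestamps makes visible. A small sketch (assumes GNU date and the slow.log timestamp format shown above):

 // Seconds between consecutive slow-SQL batches (sketch)
grep "# Time" slow.log | awk '{print $3}' | awk -F '.' '{print $1}' | sort -u | {
  prev=""
  while read -r ts; do
    [ -n "$prev" ] && echo "$prev -> $ts : $(( $(date -d "$ts" +%s) - $(date -d "$prev" +%s) ))s"
    prev="$ts"
  done
}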

 [root@big hao]#  less slow.log |grep "# Time" -A 6 |egrep  -i "insert|delete|update|select|commit|show"   |awk '{print $1}'|sed -e 's/;/ /g' |termsql -0 "select COL0 sql_stat,count(COL0) sql_count from tbl group by COL0"
sql_stat|sql_count
COMMIT|59
DELETE|11
INSERT|321
SELECT|18
UPDATE|6
commit|2627
select|16
show|17
update|628
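
Note that the grouping above is case-sensitive, which is why COMMIT and commit show up as separate rows. To get one row per statement type, the case can be folded inside the termsql query; a sketch, assuming termsql's underlying SQLite dialect:

 // Case-insensitive statement-type counts (sketch)
less slow.log | grep "# Time" -A 6 | egrep -i "insert|delete|update|select|commit|show" \
  | awk '{print $1}' | sed -e 's/;/ /g' \
  | termsql -0 "select upper(COL0) sql_stat, count(*) sql_count from tbl group by upper(COL0)"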

2. Cross-check against the flush messages in the MySQL error.log

The error.log shows a large volume of flush messages whose timestamps coincide almost exactly with the slow.log batches. These messages mean the InnoDB page cleaner was inefficient when flushing dirty pages to disk at those moments, so the next step is to examine the disk I/O directly.

 [root@big hao]# less mysql-error.log |grep "2022-08-30T" |grep flush
2022-08-30T00:05:56.685140+08:00 0 [Note] [MY-011953] [InnoDB] Page cleaner took 5301ms to flush 121 and evict 0 pages
2022-08-30T00:27:37.800463+08:00 0 [Note] [MY-011953] [InnoDB] Page cleaner took 4028ms to flush 120 and evict 0 pages
2022-08-30T00:49:22.874045+08:00 0 [Note] [MY-011953] [InnoDB] Page cleaner took 4992ms to flush 24 and evict 0 pages
2022-08-30T01:11:04.900200+08:00 0 [Note] [MY-011953] [InnoDB] Page cleaner took 4947ms to flush 16 and evict 0 pages
2022-08-30T01:32:50.117427+08:00 0 [Note] [MY-011953] [InnoDB] Page cleaner took 6136ms to flush 112 and evict 0 pages
2022-08-30T01:54:32.106385+08:00 0 [Note] [MY-011953] [InnoDB] Page cleaner took 4912ms to flush 18 and evict 0 pages
2022-08-30T02:16:16.060394+08:00 0 [Note] [MY-011953] [InnoDB] Page cleaner took 4870ms to flush 16 and evict 0 pages
2022-08-30T02:37:58.933258+08:00 0 [Note] [MY-011953] [InnoDB] Page cleaner took 4796ms to flush 16 and evict 0 pages
2022-08-30T02:59:40.755206+08:00 0 [Note] [MY-011953] [InnoDB] Page cleaner took 4741ms to flush 24 and evict 0 pages
2022-08-30T03:43:08.053816+08:00 0 [Note] [MY-011953] [InnoDB] Page cleaner took 4230ms to flush 105 and evict 0 pages
2022-08-30T04:04:53.504756+08:00 0 [Note] [MY-011953] [InnoDB] Page cleaner took 4381ms to flush 16 and evict 0 pages
2022-08-30T06:15:16.414777+08:00 0 [Note] [MY-011953] [InnoDB] Page cleaner took 4380ms to flush 18 and evict 0 pages
2022-08-30T06:37:01.679180+08:00 0 [Note] [MY-011953] [InnoDB] Page cleaner took 7198ms to flush 24 and evict 0 pages
2022-08-30T06:58:45.807908+08:00 0 [Note] [MY-011953] [InnoDB] Page cleaner took 6052ms to flush 120 and evict 0 pages
2022-08-30T07:20:28.078480+08:00 0 [Note] [MY-011953] [InnoDB] Page cleaner took 5189ms to flush 26 and evict 0 pages
2022-08-30T07:42:10.509536+08:00 0 [Note] [MY-011953] [InnoDB] Page cleaner took 7349ms to flush 120 and evict 0 pages
2022-08-30T08:03:53.622772+08:00 0 [Note] [MY-011953] [InnoDB] Page cleaner took 7035ms to flush 40 and evict 0 pages
2022-08-30T08:25:37.692469+08:00 0 [Note] [MY-011953] [InnoDB] Page cleaner took 6982ms to flush 120 and evict 0 pages
2022-08-30T08:47:23.035222+08:00 0 [Note] [MY-011953] [InnoDB] Page cleaner took 10264ms to flush 137 and evict 0 pages
2022-08-30T09:09:07.647213+08:00 0 [Note] [MY-011953] [InnoDB] Page cleaner took 8538ms to flush 136 and evict 0 pages
2022-08-30T09:30:49.287417+08:00 0 [Note] [MY-011953] [InnoDB] Page cleaner took 6570ms to flush 121 and evict 0 pages
2022-08-30T09:52:34.201334+08:00 0 [Note] [MY-011953] [InnoDB] Page cleaner took 8821ms to flush 120 and evict 0 pages
2022-08-30T10:14:16.708964+08:00 0 [Note] [MY-011953] [InnoDB] Page cleaner took 6419ms to flush 128 and evict 0 pages
2022-08-30T10:36:02.241341+08:00 0 [Note] [MY-011953] [InnoDB] Page cleaner took 7444ms to flush 128 and evict 0 pages
2022-08-30T10:57:45.240046+08:00 0 [Note] [MY-011953] [InnoDB] Page cleaner took 7926ms to flush 144 and evict 0 pages
2022-08-30T11:19:25.457821+08:00 0 [Note] [MY-011953] [InnoDB] Page cleaner took 4146ms to flush 11 and evict 0 pages
2022-08-30T11:41:10.135784+08:00 0 [Note] [MY-011953] [InnoDB] Page cleaner took 5599ms to flush 9 and evict 0 pages
2022-08-30T12:02:52.696129+08:00 0 [Note] [MY-011953] [InnoDB] Page cleaner took 6495ms to flush 10 and evict 0 pages
2022-08-30T12:24:39.010150+08:00 0 [Note] [MY-011953] [InnoDB] Page cleaner took 8246ms to flush 10 and evict 0 pages
2022-08-30T12:46:20.517719+08:00 0 [Note] [MY-011953] [InnoDB] Page cleaner took 4433ms to flush 11 and evict 0 pages
2022-08-30T13:08:09.203494+08:00 0 [Note] [MY-011953] [InnoDB] Page cleaner took 10607ms to flush 123 and evict 0 pages
2022-08-30T13:29:56.427760+08:00 0 [Note] [MY-011953] [InnoDB] Page cleaner took 11156ms to flush 24 and evict 0 pages
2022-08-30T13:51:44.961560+08:00 0 [Note] [MY-011953] [InnoDB] Page cleaner took 12464ms to flush 19 and evict 0 pages
2022-08-30T14:13:30.720673+08:00 0 [Note] [MY-011953] [InnoDB] Page cleaner took 9677ms to flush 17 and evict 0 pages
2022-08-30T14:35:16.044102+08:00 0 [Note] [MY-011953] [InnoDB] Page cleaner took 11250ms to flush 185 and evict 0 pages
2022-08-30T14:57:01.166176+08:00 0 [Note] [MY-011953] [InnoDB] Page cleaner took 10031ms to flush 113 and evict 0 pages
2022-08-30T15:18:46.788154+08:00 0 [Note] [MY-011953] [InnoDB] Page cleaner took 11545ms to flush 104 and evict 0 pages
2022-08-30T15:40:31.365101+08:00 0 [Note] [MY-011953] [InnoDB] Page cleaner took 9504ms to flush 10 and evict 0 pages
2022-08-30T16:02:18.469655+08:00 0 [Note] [MY-011953] [InnoDB] Page cleaner took 10029ms to flush 9 and evict 0 pages
2022-08-30T16:24:00.665416+08:00 0 [Note] [MY-011953] [InnoDB] Page cleaner took 7106ms to flush 13 and evict 0 pages
2022-08-30T16:45:48.000669+08:00 0 [Note] [MY-011953] [InnoDB] Page cleaner took 10248ms to flush 10 and evict 0 pages
2022-08-30T17:07:32.486231+08:00 0 [Note] [MY-011953] [InnoDB] Page cleaner took 10406ms to flush 24 and evict 0 pages
2022-08-30T17:29:19.041210+08:00 0 [Note] [MY-011953] [InnoDB] Page cleaner took 10477ms to flush 24 and evict 0 pages
2022-08-30T17:51:06.170389+08:00 0 [Note] [MY-011953] [InnoDB] Page cleaner took 12039ms to flush 104 and evict 0 pages
2022-08-30T18:12:52.601286+08:00 0 [Note] [MY-011953] [InnoDB] Page cleaner took 10351ms to flush 16 and evict 0 pages
2022-08-30T18:34:40.850308+08:00 0 [Note] [MY-011953] [InnoDB] Page cleaner took 12176ms to flush 106 and evict 0 pages
2022-08-30T18:56:27.521509+08:00 0 [Note] [MY-011953] [InnoDB] Page cleaner took 8591ms to flush 25 and evict 0 pages
2022-08-30T19:18:12.374391+08:00 0 [Note] [MY-011953] [InnoDB] Page cleaner took 9778ms to flush 17 and evict 0 pages
2022-08-30T19:39:59.425549+08:00 0 [Note] [MY-011953] [InnoDB] Page cleaner took 10982ms to flush 17 and evict 0 pages
2022-08-30T20:01:43.432897+08:00 0 [Note] [MY-011953] [InnoDB] Page cleaner took 8925ms to flush 16 and evict 0 pages
2022-08-30T20:23:31.657359+08:00 0 [Note] [MY-011953] [InnoDB] Page cleaner took 12138ms to flush 18 and evict 0 pages
2022-08-30T20:45:16.677588+08:00 0 [Note] [MY-011953] [InnoDB] Page cleaner took 9932ms to flush 27 and evict 0 pages
2022-08-30T21:07:03.512955+08:00 0 [Note] [MY-011953] [InnoDB] Page cleaner took 10763ms to flush 25 and evict 0 pages
2022-08-30T21:28:48.726420+08:00 0 [Note] [MY-011953] [InnoDB] Page cleaner took 9139ms to flush 18 and evict 0 pages
2022-08-30T21:50:35.496745+08:00 0 [Note] [MY-011953] [InnoDB] Page cleaner took 10696ms to flush 16 and evict 0 pages
2022-08-30T22:12:20.785958+08:00 0 [Note] [MY-011953] [InnoDB] Page cleaner took 9216ms to flush 16 and evict 0 pages
2022-08-30T22:34:05.561838+08:00 0 [Note] [MY-011953] [InnoDB] Page cleaner took 8703ms to flush 18 and evict 0 pages
2022-08-30T22:55:50.686158+08:00 0 [Note] [MY-011953] [InnoDB] Page cleaner took 9042ms to flush 16 and evict 0 pages
2022-08-30T23:17:36.178498+08:00 0 [Note] [MY-011953] [InnoDB] Page cleaner took 10405ms to flush 18 and evict 0 pages
2022-08-30T23:39:22.707252+08:00 0 [Note] [MY-011953] [InnoDB] Page cleaner took 10440ms to flush 16 and evict 0 pages
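
To quantify how severe these stalls were, the flush durations can be extracted and ranked; a quick sketch (GNU sed assumed):

 // Rank page-cleaner stalls by duration (sketch)
less mysql-error.log | grep "2022-08-30T" | grep "Page cleaner took" \
  | sed -E 's/.*took ([0-9]+)ms to flush ([0-9]+).*/\1ms to flush \2 pages/' | sort -rn | head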

3. Deploy a disk-monitoring script and analyze the data

Analysis of several time points shows that disk I/O was indeed abnormal, and the anomalies coincide exactly with the flush timestamps in mysql-error.log.

 // Disk I/O monitoring script
#!/bin/bash
while sleep 1;
do
# timestamp each sampling round
echo "########`date '+%F %H:%M:%S'`" >> /app/monitor/diskmoni$(date "+%Y%m%d").log
# sample per-device disk statistics: 10 samples at 1-second intervals
sar -d -p 1 10  >> /app/monitor/diskmoni$(date "+%Y%m%d").log
done
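
With the log accumulating, any flush timestamp from mysql-error.log can be zoomed in on directly; for example (the number of lines to show depends on how many devices sar reports):

 // Inspect disk activity around one stall (sketch)
grep -A 12 "2022-08-30 08:47:" /app/monitor/diskmoni20220830.log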

4. Analyzing the slow SQL at the remaining time points

slow.log also records plenty of slow SQL at time points when the master's disk I/O was normal, for example the batch at "2022-08-30T11:04:27". Analyzing that batch: first, it contains no SELECT statements; second, the lock wait times are small enough to ignore. That leaves the semi-synchronous replication mechanism as the prime suspect.

 [root@big hao]# less slow.log |grep "2022-08-30T11:04:27" -A 6 |egrep  -i "2022-08-30T11:04:27|insert|delete|update|select|commit|show"  |sed -e 's/# Time:/ /g'|awk '{print $1,$2}'|awk '{print $1}'|awk -F '.' '{print $1}'|termsql -0 "select COL0 ,count(COL0) count from tbl group by COL0"
COL0|count
2022-08-30T11:04:27|20
COMMIT;|1
DELETE|1
INSERT|5
update|13
 
  
[root@big hao]# less slow.log |grep "2022-08-30T11:04:27" -A 3 |grep "Lock_time"|awk '{print $4,$5,$8,$9}'
Lock_time: 0.000105 Rows_examined: 1
Lock_time: 0.000130 Rows_examined: 1
Lock_time: 0.000113 Rows_examined: 1
Lock_time: 0.000000 Rows_examined: 0
Lock_time: 0.000105 Rows_examined: 0
Lock_time: 0.000469 Rows_examined: 0
Lock_time: 0.000109 Rows_examined: 1
Lock_time: 0.000110 Rows_examined: 1
Lock_time: 0.000114 Rows_examined: 1
Lock_time: 0.000112 Rows_examined: 2
Lock_time: 0.000114 Rows_examined: 2
Lock_time: 0.000107 Rows_examined: 0
Lock_time: 0.000107 Rows_examined: 1
Lock_time: 0.000116 Rows_examined: 1
Lock_time: 0.000137 Rows_examined: 1
Lock_time: 0.000112 Rows_examined: 1
Lock_time: 0.000097 Rows_examined: 0
Lock_time: 0.000143 Rows_examined: 1
Lock_time: 0.000098 Rows_examined: 3
Lock_time: 0.000611 Rows_examined: 0

Next, enable the semi-synchronous replication trace log on the master; the steps are below. Once enabled, the semi-sync trace information is written to mysql-error.log. Be aware that it consumes a lot of disk space and adds disk I/O of its own, so it is best not to leave it on for long in a production environment.

 // Enable the semi-sync trace log
mysql> show variables like 'rpl_semi_sync_slave_trace_level';
+---------------------------------+-------+
| Variable_name                   | Value |
+---------------------------------+-------+
| rpl_semi_sync_slave_trace_level | 32    |
+---------------------------------+-------+
1 row in set (0.01 sec)
 
mysql> set global rpl_semi_sync_slave_trace_level = 16;
Query OK, 0 rows affected (0.00 sec)
 
mysql> show variables like 'rpl_semi_sync_slave_trace_level';
+---------------------------------+-------+
| Variable_name                   | Value |
+---------------------------------+-------+
| rpl_semi_sync_slave_trace_level | 16    |
+---------------------------------+-------+
1 row in set (0.04 sec)
 
// Disable the semi-sync trace log
  
mysql> set global rpl_semi_sync_slave_trace_level = 32;
Query OK, 0 rows affected (0.00 sec)
 
mysql> show variables like 'rpl_semi_sync_slave_trace_level';
+---------------------------------+-------+
| Variable_name                   | Value |
+---------------------------------+-------+
| rpl_semi_sync_slave_trace_level | 32    |
+---------------------------------+-------+
1 row in set (0.01 sec)

Before reading the semi-sync log, two log identifiers are worth knowing:

ReplSemiSyncMaster::updateSyncHeader: the master sets the Sync flag in the packet header and then sends the binlog event to the slave;

ReplSemiSyncMaster::reportReplyPacket: the master receives the ACK returned by the slave, carrying log_file, log_pos, and the slave's server_id.

The time difference between these two identifiers is essentially the latency that semi-synchronous replication adds to the business transaction. In the log below, the master sent the binlog to the slave nearly 6 seconds before it received the slave's ACK. Two things affect this delay: the network between master and slave, and the slave's I/O capability, because after the master sends the binlog, the slave must first write it into its own relay log before returning the ACK.

 // Excerpt of the semi-sync trace log
2022-08-30T11:04:21.919587+08:00 23176 [Note] [MY-011133] [Repl] ActiveTranx:insert_tranx_node: insert (mysql-bin.003497, 9432086) in entry(4191).
2022-08-30T11:04:21.920085+08:00 655 [Note] [MY-011134] [Repl] ActiveTranx::is_tranx_end_pos: probe (mysql-bin.003497, 9432086) in entry(4191).
2022-08-30T11:04:21.920102+08:00 655 [Note] [MY-011158] [Repl] ReplSemiSyncMaster::updateSyncHeader: server(11111), (mysql-bin.003497, 9432086) sync(1), repl(1).
2022-08-30T11:04:27.599408+08:00 0 [Note] [MY-011140] [Repl] ReplSemiSyncMaster::reportReplyPacket: Got reply(mysql-bin.003497, 9432086) from server 11111.
2022-08-30T11:04:27.599423+08:00 0 [Note] [MY-011146] [Repl] ReplSemiSyncMaster::reportReplyBinlog: Got reply at (mysql-bin.003497, 9432086).
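
In this excerpt, updateSyncHeader and reportReplyPacket for (mysql-bin.003497, 9432086) are about 5.7 seconds apart. A small sketch to compute the ACK delay for a given (binlog file, position) pair; it assumes GNU date and the trace format shown above:

 // Compute the semi-sync ACK delay for one transaction (sketch)
#!/bin/bash
LOG=mysql-error.log
POS="mysql-bin.003497, 9432086"   # the (binlog file, position) pair to inspect
sent=$(grep "updateSyncHeader" "$LOG" | grep -F "($POS)" | head -1 | awk '{print $1}')
acked=$(grep "reportReplyPacket" "$LOG" | grep -F "($POS)" | head -1 | awk '{print $1}')
# GNU date parses the ISO-8601 timestamps recorded in the log
awk -v s="$(date -d "$sent" +%s.%N)" -v a="$(date -d "$acked" +%s.%N)" \
    'BEGIN{printf "ack delay: %.3f s\n", a - s}'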

The network monitoring between master and slave over this period shows nothing wrong with the network.
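
The ping log below came from a small loop script in the same style as the disk monitor; a minimal sketch (the peer IP is masked in this post, substitute the slave's address):

 // Network latency monitoring script (sketch)
#!/bin/bash
PEER_IP="*.*.*.*"   # the slave's IP, masked here
while sleep 1;
do
echo "########`date '+%F %H:%M:%S'`" >> /app/monitor/ping$(date "+%Y%m%d").log
ping -c 10 "$PEER_IP" >> /app/monitor/ping$(date "+%Y%m%d").log
done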

 [root@big hao]# grep -A 20 "2022-08-30 11:04:" ping20220830.log
########2022-08-30 11:04:04
PING *.*.*.* (*.*.*.*) 56(84) bytes of data.
64 bytes from *.*.*.*: icmp_seq=1 ttl=64 time=0.050 ms
64 bytes from *.*.*.*: icmp_seq=2 ttl=64 time=0.061 ms
64 bytes from *.*.*.*: icmp_seq=3 ttl=64 time=0.044 ms
64 bytes from *.*.*.*: icmp_seq=4 ttl=64 time=0.046 ms
64 bytes from *.*.*.*: icmp_seq=5 ttl=64 time=0.057 ms
64 bytes from *.*.*.*: icmp_seq=6 ttl=64 time=0.062 ms
64 bytes from *.*.*.*: icmp_seq=7 ttl=64 time=0.067 ms
64 bytes from *.*.*.*: icmp_seq=8 ttl=64 time=0.063 ms
64 bytes from *.*.*.*: icmp_seq=9 ttl=64 time=0.057 ms
64 bytes from *.*.*.*: icmp_seq=10 ttl=64 time=0.067 ms
 
--- *.*.*.* ping statistics ---
10 packets transmitted, 10 received, 0% packet loss, time 9229ms
rtt min/avg/max/mdev = 0.044/0.057/0.067/0.010 ms
########2022-08-30 11:04:14
PING *.*.*.* (*.*.*.*) 56(84) bytes of data.
64 bytes from *.*.*.*: icmp_seq=1 ttl=64 time=0.057 ms
64 bytes from *.*.*.*: icmp_seq=2 ttl=64 time=0.051 ms
64 bytes from *.*.*.*: icmp_seq=3 ttl=64 time=0.059 ms
64 bytes from *.*.*.*: icmp_seq=4 ttl=64 time=0.063 ms
64 bytes from *.*.*.*: icmp_seq=5 ttl=64 time=0.070 ms
64 bytes from *.*.*.*: icmp_seq=6 ttl=64 time=0.062 ms
64 bytes from *.*.*.*: icmp_seq=7 ttl=64 time=0.069 ms
64 bytes from *.*.*.*: icmp_seq=8 ttl=64 time=0.050 ms
64 bytes from *.*.*.*: icmp_seq=9 ttl=64 time=0.049 ms
64 bytes from *.*.*.*: icmp_seq=10 ttl=64 time=0.065 ms
 
--- *.*.*.* ping statistics ---
10 packets transmitted, 10 received, 0% packet loss, time 9230ms
rtt min/avg/max/mdev = 0.049/0.059/0.070/0.010 ms
########2022-08-30 11:04:25
PING *.*.*.* (*.*.*.*) 56(84) bytes of data.
64 bytes from *.*.*.*: icmp_seq=1 ttl=64 time=0.057 ms
64 bytes from *.*.*.*: icmp_seq=2 ttl=64 time=0.053 ms
64 bytes from *.*.*.*: icmp_seq=3 ttl=64 time=0.045 ms
64 bytes from *.*.*.*: icmp_seq=4 ttl=64 time=0.057 ms
64 bytes from *.*.*.*: icmp_seq=5 ttl=64 time=0.063 ms
64 bytes from *.*.*.*: icmp_seq=6 ttl=64 time=0.055 ms
64 bytes from *.*.*.*: icmp_seq=7 ttl=64 time=0.061 ms
64 bytes from *.*.*.*: icmp_seq=8 ttl=64 time=0.069 ms
64 bytes from *.*.*.*: icmp_seq=9 ttl=64 time=0.062 ms
64 bytes from *.*.*.*: icmp_seq=10 ttl=64 time=0.067 ms

Checking the I/O monitoring on the slave's disk shows an I/O anomaly there as well. In other words, the slow SQL at these remaining time points in slow.log is related to semi-synchronous replication, but the root cause is still abnormal disk I/O, this time on the slave.

Summary

  1. The statements recorded in bulk in the slow log are mostly commit, followed by update and insert; these statement types are relatively more I/O-intensive;
  2. If a batch of slow SQL contains simple select statements, it is most likely unrelated to semi-synchronous replication;
  3. A show master status statement was also recorded in the slow log. Can you guess why it was slow?

Tips

Most monitoring platforms collect disk I/O metrics only once every 15 seconds, so short-lived disk anomalies are hard to spot on the platform's graphs. That is why a monitoring script was deployed by hand this time to watch the disk I/O.
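
For reference, such a script can simply be left running in the background over the suspect window and stopped afterwards (the script file name here is hypothetical):

 // Run the monitor in the background (sketch)
nohup bash /app/monitor/diskmon.sh >/dev/null 2>&1 &
# ...once enough data has been captured:
pkill -f diskmon.sh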

