Author: Yang Taotao

Senior database expert with more than ten years of MySQL research. Skilled in backup and recovery, SQL tuning, monitoring and operations, and high-availability architecture design for MySQL, PostgreSQL, MongoDB, and other open-source databases. Currently works at Aikesheng, providing MySQL technical support and MySQL course training for major telecom operators and banking/financial enterprises.

Source of this article: original contribution

*This original content is produced by the Aikesheng open source community and may not be used without authorization. To reprint, please contact the editor and cite the source.


MySQL InnoDB Cluster (MIC) should be familiar to most readers. It consists of three components: MySQL Shell, MySQL Router, and MySQL MGR.

MySQL Shell handles the day-to-day administration of MGR, MySQL Router gives upper-layer applications a simple read/write-splitting entry point, and MySQL MGR (Group Replication) is the set of MySQL instances that actually stores the data. The corresponding architecture:

(Figure: MySQL InnoDB Cluster architecture)
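
To make the division of labor concrete, here is a minimal sketch in mysqlsh (Python mode) of how each component is touched. The connection details are illustrative, and the Router ports mentioned are its well-known classic-protocol defaults:

  # MySQL Shell is the administrative entry point:
  shell.connect('root@localhost:3381')   # connect to one MGR member
  cluster = dba.get_cluster()            # handle to the MGR-backed cluster
  print(cluster.status())                # member roles, primary, health
  # Applications do not talk to MGR members directly; they go through
  # MySQL Router, which by default exposes 6446 (read-write) and
  # 6447 (read-only) for the classic protocol.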

So what should you do if you want disaster recovery for an MIC? If you have been running MySQL 8.0 and keeping the version up to date (the latest 8.0 minor release at the time of writing is 8.0.28), there is a ready-made answer: MySQL InnoDB ClusterSet (MICS).

This new feature essentially layers disaster recovery on top of plain MIC. For example, one MIC in Beijing Shangdi serves production traffic while another MIC in Dongzhimen stands by for disaster recovery, and the two MICs synchronize data through a dedicated replication channel named clusterset_replication. The architecture, captured from the official website:

(Figure: MySQL InnoDB ClusterSet architecture, from the official documentation)
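
Under the hood this channel is ordinary MySQL replication, so on a replica-cluster member it can be inspected like any other channel. A minimal sketch, assuming a mysqlsh (Python mode) session already open on a replica member (SHOW REPLICA STATUS is the MySQL 8.0.22+ syntax):

  # Inspect the dedicated ClusterSet replication channel:
  res = session.run_sql(
      "SHOW REPLICA STATUS FOR CHANNEL 'clusterset_replication'")
  print(res.fetch_one())   # IO/SQL thread state, source host, lag, etc.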

Although MICS looks good, it has quite a few limitations. The main ones are:
  1. MICS prioritizes high availability, not consistency. Since data transfer between clusters relies on traditional MySQL asynchronous replication (semi-synchronous replication cannot be used), the familiar pitfalls of asynchronous replication cannot be avoided: replication lag, data inconsistency, the need for manual failover, and so on (see the lag-check sketch after this list).
  2. The replica cluster cannot be an existing MIC; it must be newly created. So before building a MICS, we have to find a way to preserve the existing data.
  3. Each MIC inside a MICS is limited to single-primary mode; multi-primary mode is not possible.
  4. Only one MIC within a MICS provides external service; the others can only act as standby clusters.
  5. Only MySQL 8.0 is supported.
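
Regarding limitation 1, replication lag is observable from the Shell itself. A minimal lag-check sketch, assuming the mysqlsh (Python mode) session is connected to any ClusterSet member:

  # Extended status includes per-channel replication health and lag:
  cs = dba.get_cluster_set()
  print(cs.status({"extended": 1}))
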
Let me briefly demonstrate the build process (for simplicity, the MySQL Router component is omitted):
  1. Prepare at least six MySQL instances (on ports 3381 through 3386).
 MySQL  Py > for i in range(3381,3387):
             ->     dba.deploy_sandbox_instance(i,{"password":"root"})
             -> 
   A new MySQL sandbox instance will be created on this host in 
   /root/mysql-sandboxes/3381
   ...
  2. Build an MGR group named ytt-rc1 from the instances on ports 3381, 3382, and 3383.
 MySQL  Py > \c root:root@localhost:3381
   Creating a session to 'root@localhost:3381'
   ...
   Server version: 8.0.28 MySQL Community Server - GPL
   ...
   
   MySQL  localhost:3381 ssl  Py > rc1 = dba.create_cluster('ytt-rc1');
   A new InnoDB cluster will be created on instance 'localhost:3381'.
   ...
   MySQL  localhost:3381 ssl  Py > rc1.add_instance("root:root@localhost:3382",{"recoveryMethod":"clone"})
   ...
   MySQL  localhost:3381 ssl  Py > rc1.add_instance("root:root@localhost:3383",{"recoveryMethod":"clone"})
   ...
  3. Create a MICS named ytt-rc-set, with ytt-rc1 as the primary cluster.
 MySQL  localhost:3381 ssl  Py > rc1_set = rc1.create_cluster_set('ytt-rc-set')
   A new ClusterSet will be created based on the Cluster 'ytt-rc1'.
   
   * Validating Cluster 'ytt-rc1' for ClusterSet compliance.
   
   * Creating InnoDB ClusterSet 'ytt-rc-set' on 'ytt-rc1'...
   
   * Updating metadata...
   
   ClusterSet successfully created. Use ClusterSet.create_replica_cluster() to add Replica Clusters to it.
  4. Check the MICS status: the name is ytt-rc-set, the cluster role is PRIMARY, the primary instance is 127.0.0.1:3381, and the overall status is HEALTHY.
 MySQL  localhost:3381 ssl  Py > rc1_set.status()
   {
       "clusters": {
           "ytt-rc1": {
               "clusterRole": "PRIMARY", 
               "globalStatus": "OK", 
               "primary": "127.0.0.1:3381"
           }
       }, 
       "domainName": "ytt-rc-set", 
       "globalPrimaryInstance": "127.0.0.1:3381", 
       "primaryCluster": "ytt-rc1", 
       "status": "HEALTHY", 
       "statusText": "All Clusters available."
  5. Next, add the replica cluster to ytt-rc-set; the replica must not already belong to any MIC.

Create a MICS administration user, which will be used to replicate data from the primary:

 MySQL  localhost:3381 ssl  Py > dba.configure_instance("root:root@localhost:3384",{"clusterAdmin":"ics_admin","clusterAdminPassword":"root"})
   Configuring local MySQL instance listening at port 3384 for use in an InnoDB cluster...
   ...
   
   Cluster admin user 'ics_admin'@'%' created.
   The instance '127.0.0.1:3384' is already ready to be used in an InnoDB cluster.
   
   Successfully enabled parallel appliers.

Add the replica cluster; its MIC is named ytt-rc2:

 MySQL  localhost:3381 ssl  Py > rc2=rc1_set.create_replica_cluster("root:root@localhost:3384","ytt-rc2",{"recoveryMethod":"clone","recoveryProgress":2})
   Setting up replica 'ytt-rc2' of cluster 'ytt-rc1' at instance '127.0.0.1:3384'.
   ...
   Replica Cluster 'ytt-rc2' successfully created on ClusterSet 'ytt-rc-set'.

Add additional instances to ytt-rc2:

 MySQL  localhost:3381 ssl  Py > rc2.add_instance("root:root@localhost:3385",{"recoveryMethod":"clone"})
   
   MySQL  localhost:3381 ssl  Py > rc2.add_instance("root:root@localhost:3386",{"recoveryMethod":"clone"})
  6. Check the latest topology of ytt-rc-set: ytt-rc1 is the primary and ytt-rc2 the replica.
 MySQL  localhost:3381 ssl  Py > rc1_set.describe()
   {
       "clusters": {
           "ytt-rc1": {
               "clusterRole": "PRIMARY", 
               "topology": [
                   {
                       "address": "127.0.0.1:3381", 
                       "label": "127.0.0.1:3381"
                   }, 
                   {
                       "address": "127.0.0.1:3382", 
                       "label": "127.0.0.1:3382"
                   }, 
                   {
                       "address": "127.0.0.1:3383", 
                       "label": "127.0.0.1:3383"
                   }
               ]
           }, 
           "ytt-rc2": {
               "clusterRole": "REPLICA", 
               "topology": [
                   {
                       "address": "127.0.0.1:3384", 
                       "label": "127.0.0.1:3384"
                   }, 
                   {
                       "address": "127.0.0.1:3385", 
                       "label": "127.0.0.1:3385"
                   }, 
                   {
                       "address": "127.0.0.1:3386", 
                       "label": "127.0.0.1:3386"
                   }
               ]
           }
       }, 
       "domainName": "ytt-rc-set", 
       "primaryCluster": "ytt-rc1"
   }
  7. Write a bit of data to verify synchronization: insert a record into table t1 on the primary.
 MySQL  localhost:3381 ssl  Py > \sql
    MySQL  localhost:3381 ssl  SQL > create database ytt;
   Query OK, 1 row affected (0.0167 sec)
    MySQL  localhost:3381 ssl  SQL > use ytt;
   Default schema set to `ytt`.
   Fetching table and column names from `ytt` for auto-completion... Press ^C to stop.
    MySQL  localhost:3381 ssl  ytt  SQL > create table t1(id int primary key,r1 int);
   Query OK, 0 rows affected (0.1561 sec)
    MySQL  localhost:3381 ssl  ytt  SQL > insert t1 values (1,100);
   Query OK, 1 row affected (0.0185 sec)
    MySQL  localhost:3381 ssl  ytt  SQL > table t1;
   +----+-----+
   | id | r1  |
   +----+-----+
   |  1 | 100 |
   +----+-----+
   1 row in set (0.0004 sec)

Connect to the replica and check the effect: table t1 has been synchronized and the replica is read-only.

 MySQL  localhost:3381 ssl  ytt  SQL > \c root@localhost:3384
   Creating a session to 'root@localhost:3384'
   ...
    MySQL  localhost:3384 ssl  SQL > use ytt
   ...
    MySQL  localhost:3384 ssl  ytt  SQL > table t1;
   +----+-----+
   | id | r1  |
   +----+-----+
   |  1 | 100 |
   +----+-----+
   1 row in set (0.0006 sec)
   
    MySQL  localhost:3384 ssl  ytt  SQL > create table t2 like t1;
   ERROR: 1290 (HY000): The MySQL server is running with the --super-read-only option so it cannot execute this statement
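
The error above is expected: the ClusterSet machinery keeps replica-cluster members in super_read_only mode. A quick confirmation, as a sketch in mysqlsh (Python mode) on the replica:

  # Replica-cluster members stay super_read_only:
  print(session.run_sql("SELECT @@super_read_only").fetch_one()[0])  # -> 1
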
  8. Perform a manual primary switchover: promote ytt-rc2 to be the new primary.
 MySQL  localhost:3384 ssl  ytt  Py > rc1_set.set_primary_cluster('ytt-rc2')
   Switching the primary cluster of the clusterset to 'ytt-rc2'
   ...
     Cluster 'ytt-rc2' was promoted to PRIMARY of the clusterset. The PRIMARY instance is '127.0.0.1:3384'
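
Note that set_primary_cluster() is the controlled switchover and requires the old primary cluster to still be reachable. If the primary cluster is genuinely lost, the emergency path is force_primary_cluster(); a hedged sketch, with the caveat that any transactions not yet replicated from the lost primary are discarded:

  # Emergency failover when the primary cluster is unreachable:
  cs = dba.get_cluster_set()
  cs.force_primary_cluster("ytt-rc2")   # invalidates the old primary
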
  9. Check the latest status of ytt-rc-set: ytt-rc1 has become the standby, and ytt-rc2 is the new primary.
 MySQL  localhost:3384 ssl  ytt  Py > rc1_set.status()
   {
       "clusters": {
           "ytt-rc1": {
               "clusterRole": "REPLICA", 
               "clusterSetReplicationStatus": "OK", 
               "globalStatus": "OK"
           }, 
           "ytt-rc2": {
               "clusterRole": "PRIMARY", 
               "globalStatus": "OK", 
               "primary": "127.0.0.1:3384"
           }
       }, 
       "domainName": "ytt-rc-set", 
       "globalPrimaryInstance": "127.0.0.1:3384", 
       "primaryCluster": "ytt-rc2", 
       "status": "HEALTHY", 
       "statusText": "All Clusters available."

To summarize:

MySQL InnoDB ClusterSet should be used with caution, because the data transfer channel between clusters is asynchronous replication!
