Article source: Amazon AWS official blog
Jinhua Sun <br>Senior Solutions Architect at Amazon Cloud Technology, responsible for cloud architecture design and consulting for customers. Before joining AWS, he founded a startup, where he was responsible for building the company's e-commerce platform and for the overall architecture design of an automotive company's e-commerce platform. He also worked as a senior engineer at a world-leading communications equipment company, responsible for the development and architecture design of several subsystems of an LTE equipment system. He has extensive experience in high-concurrency and high-availability system architecture, microservice architecture, databases, middleware, IoT, and more.
1. Introduction
Amazon Aurora is a relational database service developed by Amazon Cloud Technology. It is fully compatible with the open source databases MySQL and PostgreSQL while providing performance and availability comparable to commercial databases. In terms of performance, Aurora MySQL delivers up to five times the throughput of standard open source MySQL on the same configuration, and Aurora PostgreSQL delivers up to three times the throughput of standard open source PostgreSQL on the same configuration. In terms of scalability, Aurora offers enhancements and innovations in both storage and compute, horizontally and vertically.
Aurora now supports up to 128 TB of storage and can shrink storage dynamically. On the compute side, Aurora supports up to 15 read replicas in a region for read scaling, offers a multi-master architecture that allows up to 4 writer nodes in the same region, provides a serverless architecture with instance-level scaling in seconds, and provides a global database for low-latency cross-region expansion.
Aurora already provides good scalability as user data grows. Can it be extended further to handle even larger data volumes and higher concurrency? One option is to use sub-database and sub-table (sharding) on top of multiple underlying Aurora clusters. Based on this idea, this blog and the posts that follow introduce the corresponding solutions, aiming to provide a reference for choosing between a proxy and JDBC when splitting databases and tables.
1.1 Why do we need sub-database and sub-table
AWS Aurora provides standalone, master-slave, multi-master, global database, and other managed architectures that can meet the scenarios above, but it does not provide direct support for sub-database and sub-table. Sub-database and sub-table itself comes in vertical and horizontal forms, and while it further increases data capacity, it also brings problems that need to be solved, such as cross-node join queries, distributed transactions, sorting, paging and function calculation over the executed SQL, global primary keys, capacity planning, and secondary expansion after the initial split.
1.2 Approaches to sub-database and sub-table
In terms of query time, it is generally accepted in the industry that a MySQL single table performs best when it holds fewer than 10 million rows, because the height of its B-tree index then stays between 3 and 5. Through data segmentation, the data volume of a single table can be reduced, and the read and write pressure can be distributed to different data nodes. Data segmentation can be divided into vertical segmentation and horizontal segmentation.
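For example, a table expected to hold about 80 million rows can be split horizontally into 8 shards of roughly 10 million rows each, keeping the B-tree index of every physical table within that height range.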
1. The advantages of vertical segmentation
Solve the coupling at the business system level, and the business is clear;
Similar to the governance of microservices, it can also perform hierarchical management, maintenance, monitoring, and expansion of data of different businesses;
In high-concurrency scenarios, vertical segmentation alleviates, to a certain extent, the bottlenecks of I/O, database connections, and single-machine hardware resources.
2. Disadvantages of vertical segmentation
After splitting, tables in different databases can no longer be joined directly; queries can only be solved by aggregating at the interface level, which increases development complexity;
Distributed transaction processing is complex after database division;
There is still the problem of too much data in a single table (horizontal segmentation is required).
3. The advantages of horizontal segmentation
A single database no longer becomes a performance bottleneck under excessive data volume and high concurrency, improving system stability and load capacity;
Little transformation is required on the application side, and there is no need to split business modules.
4. Disadvantages of horizontal segmentation
Transaction consistency across shards is difficult to guarantee;
Cross-database Join association query performance is poor;
Subsequent rounds of data expansion are difficult, and the maintenance workload is heavy.
Based on the analysis above, and after investigating the common middleware for sub-database and sub-table, we chose the open source product ShardingSphere combined with Amazon Aurora, and introduce how the combination of these two products supports the various forms of sub-database and sub-table and solves some of the problems that sharding brings.
2. Sharding-JDBC functional test
2.1 Sample project description
Download the sample project code locally. To ensure the stability of the test code, we use the shardingsphere-example-4.0.0 tag here.
git clone https://github.com/apache/shardingsphere-example.git
Project description:
shardingsphere-example
├── example-core
│ ├── config-utility
│ ├── example-api
│ ├── example-raw-jdbc
│ ├── example-spring-jpa # basic entity and repository classes for the Spring + JPA integration
│ └── example-spring-mybatis
├── sharding-jdbc-example
│ ├── sharding-example
│ │ ├── sharding-raw-jdbc-example
│ │ ├── sharding-spring-boot-jpa-example # basic sharding-jdbc features (Spring Boot + JPA integration)
│ │ ├── sharding-spring-boot-mybatis-example
│ │ ├── sharding-spring-namespace-jpa-example
│ │ └── sharding-spring-namespace-mybatis-example
│ ├── orchestration-example
│ │ ├── orchestration-raw-jdbc-example
│ │ ├── orchestration-spring-boot-example # basic sharding-jdbc orchestration (governance) features
│ │ └── orchestration-spring-namespace-example
│ ├── transaction-example
│ │ ├── transaction-2pc-xa-example # sample of sharding-jdbc distributed transactions with two-phase commit (XA)
│ │ └── transaction-base-seata-example # sample of sharding-jdbc distributed transactions with Seata
│ ├── other-feature-example
│ │ ├── hint-example
│ │ └── encrypt-example
├── sharding-proxy-example
│ └── sharding-proxy-boot-mybatis-example
└── src/resources
└── manual_schema.sql
Configuration file description:
application-master-slave.properties # read-write separation configuration file
application-sharding-databases-tables.properties # sub-database and sub-table configuration file
application-sharding-databases.properties # sub-database only configuration file
application-sharding-master-slave.properties # sub-database and sub-table plus read-write separation configuration file
application-sharding-tables.properties # sub-table only configuration file
application.properties # Spring Boot configuration file
Code logic description:
The entry class of the Spring Boot application; run this class to start the project
The execution logic of the demo is as follows:
2.2 Read-write separation verification
As the business grows, separating write and read requests onto different database nodes can effectively improve the processing capacity of the whole database cluster. Aurora's reader/writer endpoint can meet the need for writes and strongly consistent reads, while a separate read-only endpoint can meet the need for reads that do not require strong consistency. Aurora's replication latency is usually at the millisecond level, far lower than MySQL's binlog-based logical replication, so a large share of the load can go directly to the read-only endpoint.
Through a configuration of one master and multiple slaves, query requests can be distributed evenly across multiple data replicas, further improving the processing capacity of the system. Although read-write separation improves throughput and availability, it also introduces data-consistency issues. Aurora provides this master-slave architecture in a fully managed form, but when the upper-layer application interacts with Aurora, it still needs to manage multiple data sources and route SQL requests to different nodes according to the read/write type of each SQL statement and a certain routing strategy.
With the read-write separation feature provided by Sharding-JDBC, the application integrates Sharding-JDBC directly, and the complex configuration relationship between the application and the database cluster is extracted out of the application code. Developers manage the sharding rules through configuration files and, combined with ORM frameworks such as Spring JPA and MyBatis, can completely separate this routing logic from the code, which greatly improves the maintainability of the code and reduces the coupling between the code and the database.
2.2.1 Database environment preparation
First, create an Aurora MySQL cluster for read-write separation. The instance class is db.r5.2xlarge, and the cluster has one writer node and two reader nodes, as shown below.
2.2.2 Sharding-JDBC Configuration
application.properties spring boot main configuration file description
As shown in the figure below, you need to replace the parts marked in green with the configuration of your own environment.
# JPA automatically creates and drops data tables based on the entities
spring.jpa.properties.hibernate.hbm2ddl.auto=create-drop
spring.jpa.properties.hibernate.dialect=org.hibernate.dialect.MySQL5Dialect
spring.jpa.properties.hibernate.show_sql=true
#spring.profiles.active=sharding-databases
#spring.profiles.active=sharding-tables
#spring.profiles.active=sharding-databases-tables
# Activate the master-slave configuration so that sharding-jdbc uses the master-slave configuration file
spring.profiles.active=master-slave
#spring.profiles.active=sharding-master-slave
application-master-slave.properties sharding-jdbc configuration file description
spring.shardingsphere.datasource.names=ds_master,ds_slave_0,ds_slave_1
# Data source: master
spring.shardingsphere.datasource.ds_master.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds_master.password=<your master DB password>
spring.shardingsphere.datasource.ds_master.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.ds_master.jdbc-url=<your master DB jdbc url>
spring.shardingsphere.datasource.ds_master.username=<your master DB username>
# Data source: slave 0
spring.shardingsphere.datasource.ds_slave_0.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds_slave_0.password=<your slave DB password>
spring.shardingsphere.datasource.ds_slave_0.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.ds_slave_0.jdbc-url=<your slave DB jdbc url>
spring.shardingsphere.datasource.ds_slave_0.username=<your slave DB username>
# Data source: slave 1
spring.shardingsphere.datasource.ds_slave_1.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds_slave_1.password=<your slave DB password>
spring.shardingsphere.datasource.ds_slave_1.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.ds_slave_1.jdbc-url=<your slave DB jdbc url>
spring.shardingsphere.datasource.ds_slave_1.username=<your slave DB username>
# Routing strategy configuration
spring.shardingsphere.masterslave.load-balance-algorithm-type=round_robin
spring.shardingsphere.masterslave.name=ds_ms
spring.shardingsphere.masterslave.master-data-source-name=ds_master
spring.shardingsphere.masterslave.slave-data-source-names=ds_slave_0,ds_slave_1
# How sharding-jdbc configuration information is stored
spring.shardingsphere.mode.type=Memory
# Enable ShardingSphere logging; when enabled, the logs show the conversion from logical SQL to actual SQL
spring.shardingsphere.props.sql.show=true
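For reference, the same one-master/two-slave rule can also be built programmatically instead of through properties. The following is a minimal sketch assuming the ShardingSphere 4.x raw JDBC API (MasterSlaveDataSourceFactory and MasterSlaveRuleConfiguration, as used in the sharding-raw-jdbc-example module of the same repository); the endpoints, user name, and password are placeholders for your own environment.

import com.zaxxer.hikari.HikariDataSource;
import org.apache.shardingsphere.api.config.masterslave.MasterSlaveRuleConfiguration;
import org.apache.shardingsphere.shardingjdbc.api.MasterSlaveDataSourceFactory;
import javax.sql.DataSource;
import java.sql.SQLException;
import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;
import java.util.Properties;

public class MasterSlaveDataSourceSketch {
    public static DataSource build() throws SQLException {
        Map<String, DataSource> dataSourceMap = new HashMap<>();
        dataSourceMap.put("ds_master", hikari("jdbc:mysql://<your-writer-endpoint>:3306/dev"));
        dataSourceMap.put("ds_slave_0", hikari("jdbc:mysql://<your-reader-endpoint-1>:3306/dev"));
        dataSourceMap.put("ds_slave_1", hikari("jdbc:mysql://<your-reader-endpoint-2>:3306/dev"));
        // Same rule as the properties file: writes go to ds_master, reads are load-balanced
        // across ds_slave_0 and ds_slave_1 (round-robin by default).
        MasterSlaveRuleConfiguration ruleConfig = new MasterSlaveRuleConfiguration(
                "ds_ms", "ds_master", Arrays.asList("ds_slave_0", "ds_slave_1"));
        Properties props = new Properties();
        props.setProperty("sql.show", "true"); // print the logical SQL -> actual SQL routing
        return MasterSlaveDataSourceFactory.createDataSource(dataSourceMap, ruleConfig, props);
    }

    private static DataSource hikari(String jdbcUrl) {
        HikariDataSource ds = new HikariDataSource();
        ds.setDriverClassName("com.mysql.jdbc.Driver");
        ds.setJdbcUrl(jdbcUrl);
        ds.setUsername("<your-db-username>");
        ds.setPassword("<your-db-password>");
        return ds;
    }
}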
2.2.3 Description of the test verification process
Test environment data initialization: Spring JPA initialization automatically creates tables for testing
- Write data on the master instance
As shown in the following ShardingSphere-SQL log, write SQL is executed on the ds_master data source.
- Data query operations are performed on the slave library
As shown in the following ShardingSphere-SQL log, read SQL is executed on the ds_slave data source in a polling manner.
[INFO ] 2022-04-02 19:43:39,376 --main-- [ShardingSphere-SQL] Rule Type: master-slave
[INFO ] 2022-04-02 19:43:39,376 --main-- [ShardingSphere-SQL] SQL: select orderentit0_.order_id as order_id1_1_, orderentit0_.address_id as address_2_1_,
orderentit0_.status as status3_1_, orderentit0_.user_id as user_id4_1_ from t_order orderentit0_ ::: DataSources: ds_slave_0
---------------------------- Print OrderItem Data -------------------
Hibernate: select orderiteme1_.order_item_id as order_it1_2_, orderiteme1_.order_id as order_id2_2_, orderiteme1_.status as status3_2_, orderiteme1_.user_id
as user_id4_2_ from t_order orderentit0_ cross join t_order_item orderiteme1_ where orderentit0_.order_id=orderiteme1_.order_id
[INFO ] 2022-04-02 19:43:40,898 --main-- [ShardingSphere-SQL] Rule Type: master-slave
[INFO ] 2022-04-02 19:43:40,898 --main-- [ShardingSphere-SQL] SQL: select orderiteme1_.order_item_id as order_it1_2_, orderiteme1_.order_id as order_id2_2_, orderiteme1_.status as status3_2_,
orderiteme1_.user_id as user_id4_2_ from t_order orderentit0_ cross join t_order_item orderiteme1_ where orderentit0_.order_id=orderiteme1_.order_id ::: DataSources: ds_slave_1
Note: As shown in the figure below, if a transaction contains both reads and writes, Sharding-JDBC routes both the read and the write operations to the master database; if the read and write requests are not in the same transaction, the read requests are distributed to the different read nodes according to the routing policy.
@Override
@Transactional // with the transaction enabled, both reads and writes in it go to the master; with it disabled, reads go to the slaves and writes go to the master
public void processSuccess() throws SQLException {
System.out.println("-------------- Process Success Begin ---------------");
List<Long> orderIds = insertData();
printData();
deleteData(orderIds);
printData();
System.out.println("-------------- Process Success Finish --------------");
}
2.2.4 Aurora failover scenario verification
The Aurora database environment uses the configuration in 2.2.1.
2.2.4.1 Description of the verification process
1. Start the Spring-Boot project
2. Perform a failover operation on the Aurora console
3. Execute the Rest API request
4. Execute POST ( http://localhost:8088/save-user ) repeatedly until the API call that writes to Aurora fails and then recovers successfully.
5. Observe the failover process of the running application as shown in the figure below. From the log you can see that, from the last successful write before the failover to the next successful write, it takes about 37 s; that is, the application recovers from the Aurora failover automatically, and the recovery time is roughly 37 s.
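The recovery time above can be measured with a small probe like the sketch below. It is not part of the example project: only the endpoint and port (http://localhost:8088/save-user) come from the steps above, and the polling loop is an illustrative assumption. It records the first failed write during the failover and prints how long it takes until a write succeeds again.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;

public class FailoverProbe {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8088/save-user"))
                .timeout(Duration.ofSeconds(5))
                .POST(HttpRequest.BodyPublishers.noBody())
                .build();
        Long firstFailure = null;
        while (true) {
            long now = System.currentTimeMillis();
            boolean ok;
            try {
                ok = client.send(request, HttpResponse.BodyHandlers.ofString()).statusCode() == 200;
            } catch (Exception e) {
                ok = false; // connection errors during the failover count as failed writes
            }
            if (!ok && firstFailure == null) {
                firstFailure = now; // writes start failing once the failover begins
            }
            if (ok && firstFailure != null) {
                // first successful write after the failover; about 37 s in the test above
                System.out.println("Writes recovered after " + (now - firstFailure) + " ms");
                break;
            }
            Thread.sleep(1000);
        }
    }
}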
2.3 Sub-table only function verification
2.3.1 Sharding-JDBC Configuration
application.properties spring boot main configuration file description
# JPA automatically creates and drops data tables based on the entities
spring.jpa.properties.hibernate.hbm2ddl.auto=create-drop
spring.jpa.properties.hibernate.dialect=org.hibernate.dialect.MySQL5Dialect
spring.jpa.properties.hibernate.show_sql=true
#spring.profiles.active=sharding-databases
# Activate the sharding-tables configuration
spring.profiles.active=sharding-tables
#spring.profiles.active=sharding-databases-tables
# spring.profiles.active=master-slave
#spring.profiles.active=sharding-master-slave
application-sharding-tables.properties sharding-jdbc configuration file description
## Primary key generation strategy configuration
spring.shardingsphere.sharding.tables.t_order.key-generator.column=order_id
spring.shardingsphere.sharding.tables.t_order.key-generator.type=SNOWFLAKE
spring.shardingsphere.sharding.tables.t_order.key-generator.props.worker.id=123
spring.shardingsphere.sharding.tables.t_order_item.actual-data-nodes=ds.t_order_item_$->{0..1}
spring.shardingsphere.sharding.tables.t_order_item.table-strategy.inline.sharding-column=order_id
spring.shardingsphere.sharding.tables.t_order_item.table-strategy.inline.algorithm-expression=t_order_item_$->{order_id % 2}
spring.shardingsphere.sharding.tables.t_order_item.key-generator.column=order_item_id
spring.shardingsphere.sharding.tables.t_order_item.key-generator.type=SNOWFLAKE
spring.shardingsphere.sharding.tables.t_order_item.key-generator.props.worker.id=123
# Configure the binding relationship between t_order and t_order_item
spring.shardingsphere.sharding.binding-tables[0]=t_order,t_order_item
# Configure the broadcast table
spring.shardingsphere.sharding.broadcast-tables=t_address
# sharding-jdbc mode
spring.shardingsphere.mode.type=Memory
# Enable ShardingSphere logging
spring.shardingsphere.props.sql.show=true
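For comparison, the same sub-table rule can be expressed with the Java API instead of properties. The sketch below is an assumption based on the ShardingSphere 4.x raw JDBC API from the sharding-raw-jdbc-example module (ShardingDataSourceFactory, TableRuleConfiguration, InlineShardingStrategyConfiguration); the key generator settings are omitted, and the aurora parameter stands for the single Aurora data source used in this test.

import org.apache.shardingsphere.api.config.sharding.ShardingRuleConfiguration;
import org.apache.shardingsphere.api.config.sharding.TableRuleConfiguration;
import org.apache.shardingsphere.api.config.sharding.strategy.InlineShardingStrategyConfiguration;
import org.apache.shardingsphere.shardingjdbc.api.ShardingDataSourceFactory;
import javax.sql.DataSource;
import java.sql.SQLException;
import java.util.HashMap;
import java.util.Map;
import java.util.Properties;

public class ShardingTablesRuleSketch {
    public static DataSource build(DataSource aurora) throws SQLException {
        // t_order and t_order_item are each split into two physical tables by order_id % 2
        TableRuleConfiguration orderRule = new TableRuleConfiguration("t_order", "ds.t_order_$->{0..1}");
        orderRule.setTableShardingStrategyConfig(
                new InlineShardingStrategyConfiguration("order_id", "t_order_$->{order_id % 2}"));
        TableRuleConfiguration orderItemRule = new TableRuleConfiguration("t_order_item", "ds.t_order_item_$->{0..1}");
        orderItemRule.setTableShardingStrategyConfig(
                new InlineShardingStrategyConfiguration("order_id", "t_order_item_$->{order_id % 2}"));

        ShardingRuleConfiguration ruleConfig = new ShardingRuleConfiguration();
        ruleConfig.getTableRuleConfigs().add(orderRule);
        ruleConfig.getTableRuleConfigs().add(orderItemRule);
        ruleConfig.getBindingTableGroups().add("t_order,t_order_item"); // joins on order_id stay in one shard
        ruleConfig.getBroadcastTables().add("t_address");               // a full copy is kept in every shard

        Properties props = new Properties();
        props.setProperty("sql.show", "true");
        Map<String, DataSource> dataSourceMap = new HashMap<>();
        dataSourceMap.put("ds", aurora);
        return ShardingDataSourceFactory.createDataSource(dataSourceMap, ruleConfig, props);
    }
}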
2.3.2 Description of the test verification process
1. DDL operation
As shown in the figure below, JPA automatically creates the tables used for testing. When Sharding-JDBC's routing rules are configured and the client executes DDL, Sharding-JDBC automatically creates the corresponding physical tables according to the sharding rules. For example, t_address is a broadcast table; since there is only one instance, a single t_address is created. t_order is split by modulo, so creating t_order results in two physical tables, t_order_0 and t_order_1.
2. Write operation
As shown in the figure below, the logical SQL inserts records into t_order. When Sharding-JDBC executes it, the data is distributed into t_order_0 and t_order_1 according to the table sharding rules.
When a binding relationship is configured between t_order and t_order_item, the order_item records and the related order records are placed in the same physical sub-table.
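For example, with the expressions t_order_$->{order_id % 2} and t_order_item_$->{order_id % 2}, an order with an even order_id and all of its order items land in t_order_0 and t_order_item_0, so a join on order_id can be answered from a single physical shard.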
3. Read operation
A join query on order and order_item under the binding-table configuration, as shown in the figure below, is routed precisely to the corresponding physical shard according to the binding relationship.
A join query on order and order_item without the binding-table configuration, as shown in the figure below, traverses all shards.
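For example, the bound join is rewritten only into t_order_0 joined with t_order_item_0 and t_order_1 joined with t_order_item_1, whereas a join with an unbound table has to be rewritten into every combination of physical tables, which multiplies the number of routed SQL statements.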
2.4 Sub-database only function verification
2.4.1 Database environment preparation
As shown in the figure below, create two instances on Aurora: ds_0 and ds_1
When the sharding-spring-boot-jpa-example project is started, the tables t_order, t_order_item, and t_address are created on both Aurora instances.
2.4.2 Sharding-JDBC Configuration
application.properties springboot main configuration file description
# JPA automatically creates and drops data tables based on the entities
spring.jpa.properties.hibernate.hbm2ddl.auto=create
spring.jpa.properties.hibernate.dialect=org.hibernate.dialect.MySQL5Dialect
spring.jpa.properties.hibernate.show_sql=true
# Activate the sharding-databases configuration
spring.profiles.active=sharding-databases
#spring.profiles.active=sharding-tables
#spring.profiles.active=sharding-databases-tables
#spring.profiles.active=master-slave
#spring.profiles.active=sharding-master-slave
application-sharding-databases.properties sharding-jdbc configuration file description
spring.shardingsphere.datasource.names=ds_0,ds_1
# ds_0
spring.shardingsphere.datasource.ds_0.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.ds_0.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds_0.jdbc-url=
spring.shardingsphere.datasource.ds_0.username=
spring.shardingsphere.datasource.ds_0.password=
# ds_1
spring.shardingsphere.datasource.ds_1.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.ds_1.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds_1.jdbc-url=
spring.shardingsphere.datasource.ds_1.username=
spring.shardingsphere.datasource.ds_1.password=
spring.shardingsphere.sharding.default-database-strategy.inline.sharding-column=user_id
spring.shardingsphere.sharding.default-database-strategy.inline.algorithm-expression=ds_$->{user_id % 2}
spring.shardingsphere.sharding.binding-tables=t_order,t_order_item
spring.shardingsphere.sharding.broadcast-tables=t_address
spring.shardingsphere.sharding.default-data-source-name=ds_0
spring.shardingsphere.sharding.tables.t_order.actual-data-nodes=ds_$->{0..1}.t_order
spring.shardingsphere.sharding.tables.t_order.key-generator.column=order_id
spring.shardingsphere.sharding.tables.t_order.key-generator.type=SNOWFLAKE
spring.shardingsphere.sharding.tables.t_order.key-generator.props.worker.id=123
spring.shardingsphere.sharding.tables.t_order_item.actual-data-nodes=ds_$->{0..1}.t_order_item
spring.shardingsphere.sharding.tables.t_order_item.key-generator.column=order_item_id
spring.shardingsphere.sharding.tables.t_order_item.key-generator.type=SNOWFLAKE
spring.shardingsphere.sharding.tables.t_order_item.key-generator.props.worker.id=123
# sharding-jdbc mode
spring.shardingsphere.mode.type=Memory
# Enable ShardingSphere logging
spring.shardingsphere.props.sql.show=true
2.4.3 Description of test verification process
1. DDL operation
JPA automatically creates the tables used for testing, as shown in the figure below. When Sharding-JDBC's sub-database routing rules are configured and the client executes DDL, Sharding-JDBC automatically creates the corresponding physical tables according to the rules. For example, t_address is a broadcast table and is created on both ds_0 and ds_1, while t_order and t_order_item are split across databases by modulo, so all three tables are created on both ds_0 and ds_1.
2. Write operation
For the broadcast table t_address, each record written is written to the t_address table on both ds_0 and ds_1.
For the sub-database tables t_order and t_order_item, records are written to the table on the corresponding instance according to the sharding column and routing policy.
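For example, with the database sharding expression ds_$->{user_id % 2}, an order whose user_id is even is written to ds_0, and one whose user_id is odd is written to ds_1.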
3. Read operation
As shown in the figure below, a query on order is routed to the corresponding Aurora instance according to the sub-database routing rules.
As shown in the figure below, when querying address, since address is a broadcast table, one of the instances holding address is selected at random to serve the query.
As shown in the figure below, a join query on order and order_item under the binding-table configuration is routed precisely to the corresponding physical shard according to the binding relationship.
2.5 Function verification of sub-database and sub-table
2.5.1 Database environment preparation
Create two instances on Aurora as shown in the image below: ds_0 and ds_1
When the sharding-spring-boot-jpa-example project is started, the physical tables t_order_0, t_order_1, t_order_item_0, t_order_item_1 and the broadcast table t_address are created on both Aurora instances.
2.5.2 Sharding-JDBC Configuration
application.properties springboot main configuration file description
# JPA automatically creates and drops data tables based on the entities
spring.jpa.properties.hibernate.hbm2ddl.auto=create
spring.jpa.properties.hibernate.dialect=org.hibernate.dialect.MySQL5Dialect
spring.jpa.properties.hibernate.show_sql=true
# Activate the sharding-databases-tables configuration
#spring.profiles.active=sharding-databases
#spring.profiles.active=sharding-tables
spring.profiles.active=sharding-databases-tables
#spring.profiles.active=master-slave
#spring.profiles.active=sharding-master-slave
application-sharding-databases-tables.properties sharding-jdbc configuration file description
spring.shardingsphere.datasource.names=ds_0,ds_1
# ds_0
spring.shardingsphere.datasource.ds_0.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.ds_0.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds_0.jdbc-url= 306/dev?useSSL=false&characterEncoding=utf-8
spring.shardingsphere.datasource.ds_0.username=
spring.shardingsphere.datasource.ds_0.password=
spring.shardingsphere.datasource.ds_0.max-active=16
# ds_1
spring.shardingsphere.datasource.ds_1.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.ds_1.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds_1.jdbc-url=
spring.shardingsphere.datasource.ds_1.username=
spring.shardingsphere.datasource.ds_1.password=
spring.shardingsphere.datasource.ds_1.max-active=16
# Default database sharding strategy
spring.shardingsphere.sharding.default-database-strategy.inline.sharding-column=user_id
spring.shardingsphere.sharding.default-database-strategy.inline.algorithm-expression=ds_$->{user_id % 2}
spring.shardingsphere.sharding.binding-tables=t_order,t_order_item
spring.shardingsphere.sharding.broadcast-tables=t_address
# Tables that do not match the database sharding strategy are placed on ds_0
spring.shardingsphere.sharding.default-data-source-name=ds_0
# Table sharding strategy for t_order
spring.shardingsphere.sharding.tables.t_order.actual-data-nodes=ds_$->{0..1}.t_order_$->{0..1}
spring.shardingsphere.sharding.tables.t_order.table-strategy.inline.sharding-column=order_id
spring.shardingsphere.sharding.tables.t_order.table-strategy.inline.algorithm-expression=t_order_$->{order_id % 2}
spring.shardingsphere.sharding.tables.t_order.key-generator.column=order_id
spring.shardingsphere.sharding.tables.t_order.key-generator.type=SNOWFLAKE
spring.shardingsphere.sharding.tables.t_order.key-generator.props.worker.id=123
# Table sharding strategy for t_order_item
spring.shardingsphere.sharding.tables.t_order_item.actual-data-nodes=ds_$->{0..1}.t_order_item_$->{0..1}
spring.shardingsphere.sharding.tables.t_order_item.table-strategy.inline.sharding-column=order_id
spring.shardingsphere.sharding.tables.t_order_item.table-strategy.inline.algorithm-expression=t_order_item_$->{order_id % 2}
spring.shardingsphere.sharding.tables.t_order_item.key-generator.column=order_item_id
spring.shardingsphere.sharding.tables.t_order_item.key-generator.type=SNOWFLAKE
spring.shardingsphere.sharding.tables.t_order_item.key-generator.props.worker.id=123
# sharding-jdbc mode
spring.shardingsphere.mode.type=Memory
# Enable ShardingSphere logging
spring.shardingsphere.props.sql.show=true
2.5.3 Description of test verification process
1. DDL operation
JPA automatically creates the tables used for testing, as shown in the figure below. When Sharding-JDBC's sub-database and sub-table routing rules are configured and the client executes DDL, Sharding-JDBC automatically creates the corresponding physical tables according to the rules. For example, t_address is a broadcast table, so t_address is created on both ds_0 and ds_1; t_order and t_order_item are split into tables by modulo, and these three tables are all created on both ds_0 and ds_1.
2. Write operation
For the broadcast table t_address, each record written is written to the t_address table on both ds_0 and ds_1.
For the sub-database tables t_order and t_order_item, records are written to the table on the corresponding instance according to the sharding column and routing policy.
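For example, a record with user_id = 1 and order_id = 2 is routed to ds_1 by the database expression ds_$->{user_id % 2} and then to the physical table t_order_0 by the table expression t_order_$->{order_id % 2}, that is, to ds_1.t_order_0.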
3. Read operation
The read operation is similar to the sub-database only function verification and is not repeated here.
2.6 Sub-database sub-table plus read-write separation function verification
2.6.1 Database environment preparation
The created database instance and the corresponding physical table are shown in the following figure.
2.6.2 Sharding-JDBC Configuration
application.properties spring boot main configuration file description
# JPA automatically creates and drops data tables based on the entities
spring.jpa.properties.hibernate.hbm2ddl.auto=create
spring.jpa.properties.hibernate.dialect=org.hibernate.dialect.MySQL5Dialect
spring.jpa.properties.hibernate.show_sql=true
# Activate the sharding-master-slave configuration
#spring.profiles.active=sharding-databases
#spring.profiles.active=sharding-tables
#spring.profiles.active=sharding-databases-tables
#spring.profiles.active=master-slave
spring.profiles.active=sharding-master-slave
application-sharding-master-slave.properties sharding-jdbc configuration file description
The URL, user name, and password of the databases need to be replaced with the parameters of your own databases.
spring.shardingsphere.datasource.names=ds_master_0,ds_master_1,ds_master_0_slave_0,ds_master_0_slave_1,ds_master_1_slave_0,ds_master_1_slave_1
spring.shardingsphere.datasource.ds_master_0.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.ds_master_0.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds_master_0.jdbc-url=
spring.shardingsphere.datasource.ds_master_0.username=
spring.shardingsphere.datasource.ds_master_0.password=
spring.shardingsphere.datasource.ds_master_0.max-active=16
spring.shardingsphere.datasource.ds_master_0_slave_0.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.ds_master_0_slave_0.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds_master_0_slave_0.jdbc-url=
spring.shardingsphere.datasource.ds_master_0_slave_0.username=
spring.shardingsphere.datasource.ds_master_0_slave_0.password=
spring.shardingsphere.datasource.ds_master_0_slave_0.max-active=16
spring.shardingsphere.datasource.ds_master_0_slave_1.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.ds_master_0_slave_1.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds_master_0_slave_1.jdbc-url=
spring.shardingsphere.datasource.ds_master_0_slave_1.username=
spring.shardingsphere.datasource.ds_master_0_slave_1.password=
spring.shardingsphere.datasource.ds_master_0_slave_1.max-active=16
spring.shardingsphere.datasource.ds_master_1.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.ds_master_1.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds_master_1.jdbc-url=
spring.shardingsphere.datasource.ds_master_1.username=
spring.shardingsphere.datasource.ds_master_1.password=
spring.shardingsphere.datasource.ds_master_1.max-active=16
spring.shardingsphere.datasource.ds_master_1_slave_0.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.ds_master_1_slave_0.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds_master_1_slave_0.jdbc-url=
spring.shardingsphere.datasource.ds_master_1_slave_0.username=
spring.shardingsphere.datasource.ds_master_1_slave_0.password=
spring.shardingsphere.datasource.ds_master_1_slave_0.max-active=16
spring.shardingsphere.datasource.ds_master_1_slave_1.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.ds_master_1_slave_1.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds_master_1_slave_1.jdbc-url=
spring.shardingsphere.datasource.ds_master_1_slave_1.username=admin
spring.shardingsphere.datasource.ds_master_1_slave_1.password=
spring.shardingsphere.datasource.ds_master_1_slave_1.max-active=16
spring.shardingsphere.sharding.default-database-strategy.inline.sharding-column=user_id
spring.shardingsphere.sharding.default-database-strategy.inline.algorithm-expression=ds_$->{user_id % 2}
spring.shardingsphere.sharding.binding-tables=t_order,t_order_item
spring.shardingsphere.sharding.broadcast-tables=t_address
spring.shardingsphere.sharding.default-data-source-name=ds_master_0
spring.shardingsphere.sharding.tables.t_order.actual-data-nodes=ds_$->{0..1}.t_order_$->{0..1}
spring.shardingsphere.sharding.tables.t_order.table-strategy.inline.sharding-column=order_id
spring.shardingsphere.sharding.tables.t_order.table-strategy.inline.algorithm-expression=t_order_$->{order_id % 2}
spring.shardingsphere.sharding.tables.t_order.key-generator.column=order_id
spring.shardingsphere.sharding.tables.t_order.key-generator.type=SNOWFLAKE
spring.shardingsphere.sharding.tables.t_order.key-generator.props.worker.id=123
spring.shardingsphere.sharding.tables.t_order_item.actual-data-nodes=ds_$->{0..1}.t_order_item_$->{0..1}
spring.shardingsphere.sharding.tables.t_order_item.table-strategy.inline.sharding-column=order_id
spring.shardingsphere.sharding.tables.t_order_item.table-strategy.inline.algorithm-expression=t_order_item_$->{order_id % 2}
spring.shardingsphere.sharding.tables.t_order_item.key-generator.column=order_item_id
spring.shardingsphere.sharding.tables.t_order_item.key-generator.type=SNOWFLAKE
spring.shardingsphere.sharding.tables.t_order_item.key-generator.props.worker.id=123
# Master-slave data sources and sub-database data source configuration
spring.shardingsphere.sharding.master-slave-rules.ds_0.master-data-source-name=ds_master_0
spring.shardingsphere.sharding.master-slave-rules.ds_0.slave-data-source-names=ds_master_0_slave_0, ds_master_0_slave_1
spring.shardingsphere.sharding.master-slave-rules.ds_1.master-data-source-name=ds_master_1
spring.shardingsphere.sharding.master-slave-rules.ds_1.slave-data-source-names=ds_master_1_slave_0, ds_master_1_slave_1
# sharding-jdbc mode
spring.shardingsphere.mode.type=Memory
# Enable ShardingSphere logging
spring.shardingsphere.props.sql.show=true
2.6.3 Description of test verification process
1. DDL operation
JPA automatically creates the tables used for testing, as shown in the figure below. When Sharding-JDBC's sub-database routing rules are configured and the client executes DDL, Sharding-JDBC automatically creates the corresponding physical tables according to the rules. For example, t_address is a broadcast table and is created on both ds_0 and ds_1, while t_order and t_order_item are split by modulo and are likewise created on both ds_0 and ds_1.
2. Write operation
For the broadcast table t_address, each record written is written to the t_address table on both ds_0 and ds_1.
For the sub-database tables t_order and t_order_item, records are written to the table on the corresponding instance according to the sharding column and routing policy.
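For example, an order whose user_id is odd is written through the ds_1 master-slave rule to ds_master_1, while non-transactional reads on that shard are load-balanced across ds_master_1_slave_0 and ds_master_1_slave_1.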
3. Read operation
A join query on order and order_item under the binding-table configuration is shown in the figure below.
3. Conclusion
As an open source product focused on database enhancement, ShardingSphere does relatively well in terms of community activity, product maturity, and richness of documentation. Among its components, Sharding-JDBC is a client-side sub-database and sub-table solution: it supports all sharding scenarios and does not require an intermediate layer such as a proxy, which reduces operational complexity, and without a middle layer its latency is theoretically lower than a proxy-based approach. Sharding-JDBC also supports various SQL-standard relational databases such as MySQL, PostgreSQL, Oracle, and SQL Server. However, because Sharding-JDBC is integrated into the application, the supported language is currently limited to Java, and there is a certain degree of coupling with the application. That said, since Sharding-JDBC keeps all sub-database and sub-table configuration outside the application code, the changes required to switch to other middleware are relatively small. To sum up, if you do not want to introduce a middle layer, your system is developed in Java, and you need to connect to different relational databases, Sharding-JDBC is a good choice.