Background
As we all know, Redis is a powerful piece of middleware. But just how strong is its performance? Most of us only have the numbers published officially. Are they accurate? With this question in mind, I stress-tested both a single Redis instance and a Redis cluster, collected performance data, and analyzed whether the relationship between the two is linear.
Preparation
A brief introduction to the Redis cluster solutions commonly used in the industry:
1. Codis
Codis is a distributed Redis service developed by the Wandoujia infrastructure team. Users can treat it as a Redis service with effectively unlimited memory that can be scaled out and in dynamically. It is best suited to storage-oriented workloads; pub/sub commands (SUBSCRIBE/PUBLISH) are not supported. Always remember that Codis is a distributed storage project. It works especially well for scenarios with massive numbers of keys, values that are not too large (<= 1 MB), and a cache that grows as the business grows.
With Codis, Redis gains the ability to scale out and in dynamically: adding or removing Redis instances is completely transparent to the client and requires no service restart. The business side no longer has to worry about a Redis instance blowing up its memory, about over-provisioning and wasting capacity, or about maintaining Redis itself. Codis supports horizontal scaling in both directions. To scale out, simply click the "Auto Rebalance" button in the dashboard; to scale in, migrate the slots owned by the instance being decommissioned to other instances, then delete the offline group in the dashboard.
Codis uses pre-sharding to partition data, with 1024 slots (0-1023) by default. For each key, the slot it belongs to is determined by the formula SlotId = crc32(key) % 1024. Every slot is assigned to exactly one server group id, which indicates which server group serves that slot's data, and data migration is also performed in units of slots.
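As a minimal illustration of that formula, here is a hedged Java sketch, assuming a UTF-8 encoded key and the standard CRC32 from the JDK (Codis does this calculation on the proxy side, and its exact crc32 variant may differ):

```java
import java.nio.charset.StandardCharsets;
import java.util.zip.CRC32;

public class CodisSlot {
    // Illustrative only: map a key to one of Codis's 1024 pre-sharded slots
    // via SlotId = crc32(key) % 1024, using the JDK's CRC32 implementation.
    static long slotFor(String key) {
        CRC32 crc = new CRC32();
        crc.update(key.getBytes(StandardCharsets.UTF_8));
        return crc.getValue() % 1024;
    }

    public static void main(String[] args) {
        System.out.println(slotFor("user:10001")); // prints a slot id in [0, 1023]
    }
}
```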
2. Twemproxy
Twemproxy is a proxy-based sharding mechanism open-sourced by Twitter. Acting as a proxy, Twemproxy accepts requests from multiple clients, forwards them to the backend Redis servers according to its routing rules, and returns the responses along the same path. This solves the capacity limit of a single Redis instance very well. Of course, Twemproxy itself is a single point, so it needs Keepalived as a high-availability solution. Through Twemproxy, the Redis service can be scaled horizontally across multiple servers, effectively avoiding single points of failure. Although Twemproxy requires more hardware resources and costs some Redis performance (about 20% in Twitter's tests), improving the HA of the whole system this way is quite cost-effective.
Its main features:

- Fast and lightweight.
- Maintains persistent server connections.
- Keeps the number of connections to the backend cache servers low.
- Enables pipelining of requests and responses.
- Supports proxying to multiple servers.
- Supports multiple server pools simultaneously.
- Automatically shards data across multiple servers.
- Implements the complete memcached ASCII and Redis protocols.
- Server pools are easily configured via a YAML file.
- Supports multiple hashing modes, including consistent hashing and distribution.
- Can be configured to disable a node on failure.
- Observability through statistics exposed on the stats monitoring port.
- Works on Linux, *BSD, OS X and SmartOS (Solaris).
3. Redis Cluster
Redis Cluster is a facility for sharing data across multiple Redis nodes. Redis Cluster does not support commands that operate on multiple keys across nodes, because that would require moving data between nodes, which would make Redis-level performance unattainable and could cause unpredictable errors under high load. Redis Cluster provides a degree of availability through partitioning: in a real environment it can continue processing commands when a node goes down or becomes unreachable.
The advantages of Redis Cluster: 1. Data is automatically split across different nodes. 2. The cluster can continue processing commands even when some of its nodes fail or become unreachable.
Redis Cluster sharding strategy
Redis Cluster does not use traditional consistent hashing to distribute data; instead it uses a mechanism called hash slots. Redis Cluster allocates 16384 slots by default. When we set a key, the CRC16 checksum of the key is taken modulo 16384 to obtain its slot, and the key is assigned to the node responsible for that slot range. The formula is: slot = CRC16(key) % 16384. Note that a cluster needs at least 3 master nodes, otherwise cluster creation fails; we will see this in practice later. Suppose three nodes A, B and C form a cluster; they can be three ports on one machine or three different servers. With the 16384 slots distributed by hash slot, the ranges covered by the three nodes are: node A covers 0-5460, node B covers 5461-10922, and node C covers 10923-16383.
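For illustration, here is a hedged Java sketch of that mapping. It uses the CRC16-CCITT (XModem) variant that Redis Cluster specifies; real clients such as Jedis ship an equivalent helper, and this sketch omits hash-tag ({...}) handling:

```java
import java.nio.charset.StandardCharsets;

public class ClusterSlot {
    // CRC16-CCITT (XModem): initial value 0x0000, polynomial 0x1021.
    static int crc16(byte[] bytes) {
        int crc = 0x0000;
        for (byte b : bytes) {
            crc ^= (b & 0xFF) << 8;
            for (int i = 0; i < 8; i++) {
                crc = ((crc & 0x8000) != 0) ? ((crc << 1) ^ 0x1021) : (crc << 1);
                crc &= 0xFFFF;
            }
        }
        return crc;
    }

    // slot = CRC16(key) % 16384 (hash tags inside {...} are ignored here)
    static int slotFor(String key) {
        return crc16(key.getBytes(StandardCharsets.UTF_8)) % 16384;
    }

    public static void main(String[] args) {
        System.out.println(slotFor("user:10001")); // prints a slot id in [0, 16383]
    }
}
```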
Online and offline process of Redis Cluster nodes
Suppose we have a cluster of six Redis nodes, 7001-7006, where 7001, 7002 and 7003 are masters and 7004, 7005 and 7006 are their respective slaves (7001 with 7004, 7002 with 7005, 7003 with 7006). Following the official Redis Cluster recommendation of 16384 slots, the three masters hold the slot ranges 7001 (0-5460), 7002 (5461-10922) and 7003 (10923-16383).

Now simulate a master failure: if node 7002 goes down, then by Redis Cluster's rules its slave 7005 is elected as the new master. Restart node 7002 and you will find that it automatically rejoins the cluster as a slave of 7005.

Adding nodes is split into adding a master and adding a slave. First add master node 7007. After it joins the cluster you will find that no slots have been assigned to it, so the cluster must be manually resharded to migrate data. Since there are now 4 master nodes, 7007 should be allocated 16384 / 4 = 4096 slots, and you must specify which nodes hand their slots over to the new master. If you enter "all", every remaining master in the cluster contributes part of its slots until the full 4096 slots have been moved to 7007. If you add a slave node and do not specify a master with --master-id, redis-trib attaches the new slave to the master that currently has the fewest slaves.
Removing cluster nodes, like adding them, is split into removing a master and removing a slave. When removing a master that still holds data, its slots must first be moved to another master, and the slaves of the removed master are reassigned to the master that received those slots. Removing a slave node is simpler: just remove it directly.
Why does Redis Cluster have 16384 slots?
CRC16 (an algorithm I have not studied in depth) produces a 16-bit value, so in principle up to 65536 (2^16) slots could be allocated. A bitmap covering 65536 slots takes 65536 / 8 = 8192 bytes, i.e. 8 KB. Since cluster nodes exchange heartbeats that carry the whole slot bitmap, putting 8 KB into every heartbeat packet wastes bandwidth. With 16384 slots the bitmap compresses to 16384 / 8 = 2048 bytes, i.e. 2 KB, which is why 16384 is the recommended number. Generally speaking, a Redis cluster will not exceed 1000 nodes, so 16384 slots are plenty.
Now let's get to the main event: the stress test
To stress test Redis, you need to prepare both the load-generating machines and the Redis servers. I took quite a few detours here; the general situation is as follows:
Redis single-node stress test
1. Three JMeter instances driving 5-10 Java servers, with a single Redis node behind them
When I first started the stress test, I wanted to simulate a normal user call path, so I wrote a simple Java service and deployed it as a cluster, with every instance connected to one Redis server, as shown in the figure below:
Of course, before testing I also checked the Redis documentation and used redis-benchmark to probe Redis's limit, which came out at roughly 100,000 QPS. But with this setup I simply could not push Redis to its bottleneck, even after increasing the number of connections between the Java servers and Redis. I kept adding Java service machines and still could not reach Redis's limit. At that point you can really feel how powerful Redis is: even with all those machines applying load, it would not hit its bottleneck, which told me the stress-testing method itself was probably the problem.
2. A single Java server stress-testing a single Redis node
So I switched approaches: a Java program that stress tests Redis directly with multiple threads. The performance exploded. The stress test data is as follows:
The peak reached 110,000 QPS, even higher than the commonly quoted figures.
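As a rough illustration of this kind of test, here is a hedged sketch of a multi-threaded Jedis loop. The host, port, pool size, thread count and duration are placeholders, not the actual setup used here:

```java
import redis.clients.jedis.Jedis;
import redis.clients.jedis.JedisPool;
import redis.clients.jedis.JedisPoolConfig;

import java.util.concurrent.CountDownLatch;
import java.util.concurrent.atomic.LongAdder;

public class RedisBench {
    public static void main(String[] args) throws InterruptedException {
        int threads = 100;                 // assumed worker count
        long durationMs = 30_000;          // assumed test duration

        JedisPoolConfig cfg = new JedisPoolConfig();
        cfg.setMaxTotal(threads);          // pool size matches the number of workers
        JedisPool pool = new JedisPool(cfg, "127.0.0.1", 6379);

        LongAdder ops = new LongAdder();
        CountDownLatch done = new CountDownLatch(threads);
        long deadline = System.currentTimeMillis() + durationMs;

        for (int i = 0; i < threads; i++) {
            new Thread(() -> {
                while (System.currentTimeMillis() < deadline) {
                    // Borrow a connection per operation and return it to the pool.
                    try (Jedis jedis = pool.getResource()) {
                        jedis.set("bench:key", "value");
                        jedis.get("bench:key");
                    }
                    ops.add(2);
                }
                done.countDown();
            }).start();
        }

        done.await();
        System.out.printf("total ops=%d, approx QPS=%d%n",
                ops.sum(), ops.sum() * 1000 / durationMs);
        pool.close();
    }
}
```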
Redis Cluster stress test
After testing the single-node Redis service, my confidence surged. I asked operations for more machines and used 6 Linux servers to build a Redis Cluster with 6 masters and 6 slaves, as shown in the following figure:
So I immediately got the code ready and started the stress test, but the results were not satisfactory. See the data below:
Analyzing the data above: the number of Redis machines had increased 6-fold, yet performance had not even doubled. Something had to be wrong, so I kept digging into the code. With JedisPool, every operation borrowed a connection and then returned it to the pool, and that is where the performance was probably being lost. So I changed the code immediately: instead of borrowing and returning each time, every stress test thread holds its own Redis connection, and the resulting numbers were explosive.
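A minimal sketch of that change, shown against a single node: each worker thread opens one Jedis connection and reuses it for its entire run, rather than borrowing from and returning to the pool on every request (host, port and iteration count are placeholders):

```java
import redis.clients.jedis.Jedis;

public class DedicatedConnectionWorker implements Runnable {
    @Override
    public void run() {
        // One connection per thread, opened once and reused for the whole run.
        try (Jedis jedis = new Jedis("127.0.0.1", 6379)) {
            for (int i = 0; i < 1_000_000; i++) {
                jedis.set("bench:key", "value");
                jedis.get("bench:key");
            }
        }
    }
}
```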
With JedisPool configured with 100 connections and 100 threads issuing requests at the same time:
And with 1000 connections configured:
From this we can see that more connections are not always better; beyond a point they actually degrade performance. This is what we usually call performance tuning.
Summary
1. A single Redis node reaches about 100,000 QPS, and the cluster does not lose much performance: we squeezed more than 500,000 QPS out of the 6-node cluster. As long as Redis is used properly, its performance is more than sufficient.
2. More connections between the Java server and Redis are not always better. There is an intermediate value with the best performance, which can only be found through repeated stress testing and tuning.
3. In the end, what you learn on paper is always shallow; you have to do the thing yourself to truly understand it. This round of stress testing brought a lot of benefits. Only by learning to measure the stress threshold of your own system can you keep broadening your road ahead.
Text / Devin
@德物科技 public account