1. Node planning:
There are two layouts: one master with many replicas, or several master-replica pairs. The second spreads load across the masters and is easier to scale out and in.
Container name    Container IP    Port mappings             Role
redis-master1     172.1.50.11     6391->6379, 16391->16379  master
redis-master2     172.1.50.12     6392->6379, 16392->16379  master
redis-master3     172.1.50.13     6393->6379, 16393->16379  master
redis-slave1      172.1.30.11     6394->6379, 16394->16379  slave
redis-slave2      172.1.30.12     6395->6379, 16395->16379  slave
redis-slave3      172.1.30.13     6396->6379, 16396->16379  slave
Map the extra "bus port" at data port + 10000; see the notes in the official redis.conf:
# * cluster-announce-ip
# * cluster-announce-port
# * cluster-announce-bus-port
# The explanation sits just below those lines: the extra port carries the cluster bus, used for automatic detection between cluster nodes.
See the official reference documentation.
a. Create a custom network
For convenience, the Redis containers join the mybridge network from earlier posts, which makes calls from PHP and Lua easy. For high availability they are accessed through the host network, so masters and replicas alike expose ports on the host, as in the table above.
(On Redis 5.0+ you can skip straight to the end result in section 4.)
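For reference, if the mybridge network does not exist yet, something like the following creates it (a sketch; the 172.1.0.0/16 subnet is an assumption chosen to cover both the 172.1.50.x and 172.1.30.x ranges above):
# hypothetical subnet; adjust to your environment
docker network create --driver bridge --subnet 172.1.0.0/16 mybridge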
2. Initial cluster creation
a. Write the container creation script
Short names:
clmX  master servers
clsX  replica servers
docker stop clm1 clm2 clm3 cls1 cls2 cls3
docker rm clm1 clm2 clm3 cls1 cls2 cls3
docker run --name clm1 \
-p 6391:6379 -p 16391:16379 \
--restart=always \
--network=mybridge --ip=172.1.50.11 \
-v /root/tmp/dk/cluster_redis/6391/data:/data \
-v /root/tmp/dk/cluster_redis/6391:/etc/redis \
-d cffycls/redis5:1.7
docker run --name clm2 \
-p 6392:6379 -p 16392:16379 \
--restart=always \
--network=mybridge --ip=172.1.50.12 \
-v /root/tmp/dk/cluster_redis/6392/data:/data \
-v /root/tmp/dk/cluster_redis/6392:/etc/redis \
-d cffycls/redis5:1.7
...
b. Adjust the configuration file and form the cluster
Starting from the earlier master-replica configuration, search for and set:
cluster-enabled yes
Then restart all containers.
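Besides cluster-enabled, two more cluster settings in the stock redis.conf are worth checking at this point (a sketch; the values shown are the upstream defaults, not taken from this setup):
cluster-enabled yes
# per-node state file written by Redis itself; must be distinct per container if volumes are shared
cluster-config-file nodes.conf
# how long a node may be unreachable before it is flagged as failing, in milliseconds
cluster-node-timeout 15000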
-- Node discovery: next, this is set up from the container command line:
# enter 172.1.50.11
docker exec -it clm1 bash
/ # redis-cli
127.0.0.1:6379> auth 123456
127.0.0.1:6379> info cluster
127.0.0.1:6379> cluster meet 172.1.50.12 6379
127.0.0.1:6379> cluster meet 172.1.50.13 6379
127.0.0.1:6379> cluster meet 172.1.30.11 6379
127.0.0.1:6379> cluster meet 172.1.30.12 6379
127.0.0.1:6379> cluster meet 172.1.30.13 6379
127.0.0.1:6379> cluster nodes
e6f4def93bb888c144c4db308b5a7846d95d257b 172.1.50.11:6379@16379 myself,master - 0 1562061736000 2 connected
a0a5d4e10d97ba63fbf4f6eba3f4cf1f73d53423 172.1.30.12:6379@16379 master - 0 1562061738000 4 connected
c9f17946ca2c22a6dd0269614293e1bf38ae869b 172.1.50.12:6379@16379 master - 0 1562061737000 1 connected
2c5395040cfb9611b515d0424f30c91eba1ec6e8 172.1.30.11:6379@16379 master - 0 1562061740000 3 connected
eff5996aa6d5d9ab048a778246fcc1663322fe7d 172.1.50.13:6379@16379 master - 0 1562061739802 0 connected
67cf166b61e7d892affa6d754a563b5993a9c5a3 172.1.30.13:6379@16379 master - 0 1562061740802 5 connected
Cluster discovery here went through in one smooth pass. If it does not:
check whether the servers can ping each other and whether the config edits are complete; all six nodes here use identical configuration files.
[After the handshake the cluster still cannot serve traffic: it is in the offline state and all reads and writes are refused until slots are assigned.]
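A quick way to confirm that state from anywhere the container IPs are reachable (a sketch reusing the commands above):
# expect cluster_state:fail until all 16384 slots are assigned
redis-cli -h 172.1.50.11 -a 123456 cluster info | grep cluster_state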
c. Configure the replicas
Use CLUSTER REPLICATE {nodeId} to turn a node into a replica. The command must be executed on the node that is to become the replica: it makes the current node a replica of the node identified by node_id.
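Since one replicate call is needed per replica, the three assignments can also be scripted from anywhere the container IPs are reachable (a sketch; the node IDs are the masters' IDs from the cluster nodes output above):
# pair each replica with its master's node ID
redis-cli -h 172.1.30.11 -a 123456 cluster replicate e6f4def93bb888c144c4db308b5a7846d95d257b
redis-cli -h 172.1.30.12 -a 123456 cluster replicate c9f17946ca2c22a6dd0269614293e1bf38ae869b
redis-cli -h 172.1.30.13 -a 123456 cluster replicate eff5996aa6d5d9ab048a778246fcc1663322fe7d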
# run inside the cls1 container
127.0.0.1:6379> cluster nodes
a0a5d4e10d97ba63fbf4f6eba3f4cf1f73d53423 172.1.30.12:6379@16379 master - 0 1562063939000 4 connected
eff5996aa6d5d9ab048a778246fcc1663322fe7d 172.1.50.13:6379@16379 master - 0 1562063942507 0 connected
c9f17946ca2c22a6dd0269614293e1bf38ae869b 172.1.50.12:6379@16379 master - 0 1562063938000 1 connected
67cf166b61e7d892affa6d754a563b5993a9c5a3 172.1.30.13:6379@16379 master - 0 1562063941505 5 connected
2c5395040cfb9611b515d0424f30c91eba1ec6e8 172.1.30.11:6379@16379 myself,master - 0 1562063940000 3 connected
e6f4def93bb888c144c4db308b5a7846d95d257b 172.1.50.11:6379@16379 master - 0 1562063940503 2 connected
127.0.0.1:6379> cluster REPLICATE e6f4def93bb888c144c4db308b5a7846d95d257b
OK
127.0.0.1:6379> cluster nodes
a0a5d4e10d97ba63fbf4f6eba3f4cf1f73d53423 172.1.30.12:6379@16379 master - 0 1562063991000 4 connected
eff5996aa6d5d9ab048a778246fcc1663322fe7d 172.1.50.13:6379@16379 master - 0 1562063993000 0 connected
c9f17946ca2c22a6dd0269614293e1bf38ae869b 172.1.50.12:6379@16379 master - 0 1562063994634 1 connected
67cf166b61e7d892affa6d754a563b5993a9c5a3 172.1.30.13:6379@16379 master - 0 1562063992000 5 connected
2c5395040cfb9611b515d0424f30c91eba1ec6e8 172.1.30.11:6379@16379 myself,slave e6f4def93bb888c144c4db308b5a7846d95d257b 0 1562063993000 3 connected
e6f4def93bb888c144c4db308b5a7846d95d257b 172.1.50.11:6379@16379 master - 0 1562063993632 2 connected
127.0.0.1:6379>
# likewise for the others; the final state as seen from cls3
/ # redis-cli -h 172.1.30.13
172.1.30.13:6379> cluster nodes
c9f17946ca2c22a6dd0269614293e1bf38ae869b 172.1.50.12:6379@16379 master - 0 1562064342897 1 connected
a0a5d4e10d97ba63fbf4f6eba3f4cf1f73d53423 172.1.30.12:6379@16379 slave c9f17946ca2c22a6dd0269614293e1bf38ae869b 0 1562064345000 4 connected
67cf166b61e7d892affa6d754a563b5993a9c5a3 172.1.30.13:6379@16379 myself,slave eff5996aa6d5d9ab048a778246fcc1663322fe7d 0 1562064343000 5 connected
e6f4def93bb888c144c4db308b5a7846d95d257b 172.1.50.11:6379@16379 master - 0 1562064344000 2 connected
eff5996aa6d5d9ab048a778246fcc1663322fe7d 172.1.50.13:6379@16379 master - 0 1562064345000 0 connected
2c5395040cfb9611b515d0424f30c91eba1ec6e8 172.1.30.11:6379@16379 slave e6f4def93bb888c144c4db308b5a7846d95d257b 0 1562064345904 3 connected
d. Assign slots
[Redis Cluster maps all data onto 16384 slots. Each key maps to one fixed slot, and a node can only answer commands for keys in slots assigned to it.]
Manual assignment has to be done in bulk, and naive attempts fail with 'Invalid or out of range slot'. Search results full of
redis-cli -h <server-ip> -p <port> cluster addslots {0..5460}
were useless here. I eventually found someone's patch that lets ADDSLOTS accept range input, replaced the whole body of its if/else block, added the helper function, rebuilt the image, and it worked. The result:
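(For reference: a stock redis-cli can also assign the ranges without patching, as long as the invoking shell expands the range itself; a sketch matching the ranges shown below, not the method used here:)
# one call per master; seq is available in busybox/ash as well
redis-cli -h 172.1.50.11 -a 123456 cluster addslots $(seq 0 5461)
redis-cli -h 172.1.50.12 -a 123456 cluster addslots $(seq 5462 10922)
redis-cli -h 172.1.50.13 -a 123456 cluster addslots $(seq 10923 16383)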
172.1.50.13:6379> cluster slots
1) 1) (integer) 0
2) (integer) 5461
3) 1) "172.1.50.11"
2) (integer) 6379
3) "e6f4def93bb888c144c4db308b5a7846d95d257b"
4) 1) "172.1.30.11"
2) (integer) 6379
3) "2c5395040cfb9611b515d0424f30c91eba1ec6e8"
2) 1) (integer) 5462
2) (integer) 10922
3) 1) "172.1.50.12"
2) (integer) 6379
3) "c9f17946ca2c22a6dd0269614293e1bf38ae869b"
4) 1) "172.1.30.12"
2) (integer) 6379
3) "a0a5d4e10d97ba63fbf4f6eba3f4cf1f73d53423"
3) 1) (integer) 10923
2) (integer) 16383
3) 1) "172.1.50.13"
2) (integer) 6379
3) "eff5996aa6d5d9ab048a778246fcc1663322fe7d"
4) 1) "172.1.30.13"
2) (integer) 6379
3) "67cf166b61e7d892affa6d754a563b5993a9c5a3"
172.1.50.13:6379> cluster info
cluster_state:ok
cluster_slots_assigned:16384
cluster_slots_ok:16384
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:6
cluster_size:3
... ...
Other operations can be found in the command help or on the official site.
3. Testing manual node addition and removal
a. Add nodes
First test adding a single pair:
Container name    Container IP    Port mappings             Role
redis-master4     172.1.50.21     6381->6379, 16381->16379  master
redis-slave4      172.1.30.21     6382->6379, 16382->16379  slave
The kz.sh script:
docker stop clm4 cls4
docker rm clm4 cls4
docker run --name clm4 \
-p 6381:6379 -p 16381:16379 \
--restart=always \
--network=mybridge --ip=172.1.50.21 \
-v /root/tmp/dk/cluster_redis/6381/data:/data \
-v /root/tmp/dk/cluster_redis/6381:/etc/redis \
-d cffycls/redis5:cluster2
docker run --name cls4 \
-p 6382:6379 -p 16382:16379 \
--restart=always \
--network=mybridge --ip=172.1.30.21 \
-v /root/tmp/dk/cluster_redis/6382/data:/data \
-v /root/tmp/dk/cluster_redis/6382:/etc/redis \
-d cffycls/redis5:cluster2
But at this point cluster nodes and cluster_known_nodes still showed 6 nodes: the 4th master-replica pair had failed to join. Adding one more pair with the same configuration (5 pairs in total) went through smoothly, so for the manual procedure it seems cluster nodes have to be added in master-replica pairs following a 3+2n pattern?
As before, cluster info after meet and replicate shows cluster_state:ok. But cluster slots is still as above: the slot assignment has not changed and needs to be redistributed.
b. Reassigning slots after scaling out
[
CLUSTER SETSLOT <slot> NODE <node_id>: assign the slot to the given node; if the slot is already assigned to another node, that node must drop it first.
CLUSTER SETSLOT <slot> MIGRATING <node_id>: mark this node's slot as migrating out to the given node.
CLUSTER SETSLOT <slot> IMPORTING <node_id>: import slot into this node from the node given by node_id.
]
Log on to a node in cluster mode (the -c flag); this is a cluster operation, and without -c the command below complains about invalid arguments.
172.1.50.21:6379> cluster setslot 5461 importing e6f4def93bb888c144c4db308b5a7846d95d257b
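For a sense of the workload, migrating even a single slot by hand takes the standard sequence below (a sketch; <source-node-id> and <target-node-id> are placeholders for the real IDs):
# on the target node: announce the import
redis-cli -c -h 172.1.50.21 -a 123456 cluster setslot 5461 importing <source-node-id>
# on the source node: announce the migration
redis-cli -c -h 172.1.50.11 -a 123456 cluster setslot 5461 migrating <target-node-id>
# list keys remaining in the slot, then physically move them
redis-cli -c -h 172.1.50.11 -a 123456 cluster getkeysinslot 5461 100
redis-cli -c -h 172.1.50.11 -a 123456 migrate 172.1.50.21 6379 "" 0 5000 AUTH 123456 KEYS key1 key2
# finally record the new owner
redis-cli -c -h 172.1.50.11 -a 123456 cluster setslot 5461 node <target-node-id>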
Here again, batch operation across many slots and nodes is impractical this way; slots cannot realistically be moved one at a time, so the next idea was to add the official redis-trib.rb to manage the cluster nodes.
c. Installing and using redis-trib.rb
The file ships in the official source tarball at redis-5.0.5/src/redis-trib.rb. Add the following to the Dockerfile (as one RUN instruction), rebuild the image, and update the container creation script:
RUN apk add ruby \
 && mv /usr/src/redis/src/redis-trib.rb /bin \
 && chmod +x /bin/redis-trib.rb
Once done, entering cls1 shows the earlier configuration still intact: a cluster of 5 master-replica pairs.
/ # redis-trib.rb help
WARNING: redis-trib.rb is not longer available!
You should use redis-cli instead.
All commands and features belonging to redis-trib.rb have been moved
to redis-cli.
In order to use them you should call redis-cli with the --cluster
option followed by the subcommand name, arguments and options.
... ...
Upstream merged the old Ruby script into redis-cli to simplify installation and deployment, and the warning makes it clear the redis-trib.rb road is closed, so the operations below go back to the earlier cffycls/redis5:1.7 image.
At this point the existing /data contents were cleared (leftovers probably would not have mattered), and everything is redone with the official tool.
4. Cluster operations, the new way
After starting with the 1.7 image, check the built-in help:
/ # redis-cli --cluster help
Cluster Manager Commands:
create host1:port1 ... hostN:portN
--cluster-replicas <arg>
check host:port
--cluster-search-multiple-owners
info host:port
fix host:port
--cluster-search-multiple-owners
reshard host:port
--cluster-from <arg>
--cluster-to <arg>
--cluster-slots <arg>
--cluster-yes
--cluster-timeout <arg>
--cluster-pipeline <arg>
--cluster-replace
rebalance host:port
--cluster-weight <node1=w1...nodeN=wN>
--cluster-use-empty-masters
--cluster-timeout <arg>
--cluster-simulate
--cluster-pipeline <arg>
--cluster-threshold <arg>
--cluster-replace
add-node new_host:new_port existing_host:existing_port
--cluster-slave
--cluster-master-id <arg>
del-node host:port node_id
call host:port command arg arg .. arg
set-timeout host:port milliseconds
import host:port
--cluster-from <arg>
--cluster-copy
--cluster-replace
help
For check, fix, reshard, del-node, set-timeout you can specify the host and port of any working node in the cluster
a. Create the cluster
Use the create command together with the password; the run is shown below. [--cluster-replicas 1 means we want one replica created for every master in the cluster.] The commands:
Reset the cluster state on every node first (flush before resetting: CLUSTER RESET refuses to run on a master that still holds keys):
redis-cli>
flushdb
cluster reset hard
quit
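Applied to all six nodes in one loop (a sketch):
for host in 172.1.50.11 172.1.50.12 172.1.50.13 172.1.30.11 172.1.30.12 172.1.30.13; do
  redis-cli -h $host -a 123456 flushdb    # may be refused on read-only replicas, which is harmless
  redis-cli -h $host -a 123456 cluster reset hard
done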
From here on, create the cluster with a single redis-cli --cluster create command:
redis-cli --cluster create 172.1.50.11:6379 172.1.50.12:6379 172.1.50.13:6379 \
172.1.30.11:6379 172.1.30.12:6379 172.1.30.13:6379 --cluster-replicas 1 -a 123456
>>> Performing hash slots allocation on 6 nodes...
Master[0] -> Slots 0 - 5460
Master[1] -> Slots 5461 - 10922
Master[2] -> Slots 10923 - 16383
Adding replica 172.1.30.12:6379 to 172.1.50.11:6379
Adding replica 172.1.30.13:6379 to 172.1.50.12:6379
Adding replica 172.1.30.11:6379 to 172.1.50.13:6379
M: ee0dcbbcc3634ca6e5d079835695bfe822ce17e6 172.1.50.11:6379
slots:[0-5460] (5461 slots) master
M: f02ee958993c79b63ffbef5238bb65b3cf552418 172.1.50.12:6379
slots:[5461-10922] (5462 slots) master
M: 819ad37676cc77b6691d0e74258c9f8b2d163121 172.1.50.13:6379
slots:[10923-16383] (5461 slots) master
S: cd2d78f87dd8a696dc127f762a168129ab91d9c6 172.1.30.11:6379
replicates 819ad37676cc77b6691d0e74258c9f8b2d163121
S: b69937a22d69d71596167104a3c2a9b8e308622c 172.1.30.12:6379
replicates ee0dcbbcc3634ca6e5d079835695bfe822ce17e6
S: 775bf0b33a34898a6a33bee85299982aae0d8a72 172.1.30.13:6379
replicates f02ee958993c79b63ffbef5238bb65b3cf552418
Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join
.....
>>> Performing Cluster Check (using node 172.1.50.11:6379)
M: ee0dcbbcc3634ca6e5d079835695bfe822ce17e6 172.1.50.11:6379
slots:[0-5460] (5461 slots) master
1 additional replica(s)
S: b69937a22d69d71596167104a3c2a9b8e308622c 172.1.30.12:6379
slots: (0 slots) slave
replicates ee0dcbbcc3634ca6e5d079835695bfe822ce17e6
M: f02ee958993c79b63ffbef5238bb65b3cf552418 172.1.50.12:6379
slots:[5461-10922] (5462 slots) master
1 additional replica(s)
M: 819ad37676cc77b6691d0e74258c9f8b2d163121 172.1.50.13:6379
slots:[10923-16383] (5461 slots) master
1 additional replica(s)
S: 775bf0b33a34898a6a33bee85299982aae0d8a72 172.1.30.13:6379
slots: (0 slots) slave
replicates f02ee958993c79b63ffbef5238bb65b3cf552418
S: cd2d78f87dd8a696dc127f762a168129ab91d9c6 172.1.30.11:6379
slots: (0 slots) slave
replicates 819ad37676cc77b6691d0e74258c9f8b2d163121
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
A practically foolproof single command builds the whole cluster; very convenient.
[Problem]
Waiting for the cluster to join...
The configuration checked out: all local cluster nodes share one redis.conf (bind + protected-mode + cluster-enabled) and sit on the same bridge, mutually reachable (no ports need opening between them).
The cause is what happens during MEET:
Phase 1: node1 ack -> node2:6379   # node2 sees node1's gateway IP as the peer address
and then
Phase 2: node2 ack -> node1 (127.0.0.1):16379 keeps getting RST ...
The fix: configure each container's redis.conf individually, e.g. 6391/redis.conf:
cluster-announce-ip 172.1.50.11    # an IP reachable inside the container network; the two ports below can keep the defaults
cluster-announce-port 6379
cluster-announce-bus-port 16379
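After the edit, restart the containers and verify that every node now advertises its announced address rather than a gateway or loopback one (a sketch):
docker restart clm1 clm2 clm3 cls1 cls2 cls3
# each row should show the container IP with @16379
redis-cli -h 172.1.50.11 -a 123456 cluster nodes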
b. Scale out and rebalance slots; remove nodes, add them back and rebalance
-- Scale out, rebalance, inspect the result
Four more containers are running now: 172.1.50.21/22 and 172.1.30.21/22. Add one pair first; with this command, nodes can only be added one at a time.
# add the new node to the cluster that 172.1.50.11 belongs to
/ # redis-cli --cluster add-node 172.1.50.21:6379 172.1.50.11:6379 -a 123456
/ # redis-cli --cluster add-node 172.1.30.21:6379 172.1.50.11:6379 \
--cluster-slave --cluster-master-id 6d1b7a14a6d0be55a5fcb9266358bd1a42244d47 -a 123456
# check the result: 4 masters and 4 replicas are listed; next, assign the new node a weight
/ # redis-cli -h 172.1.50.11 -a 123456 cluster nodes
/ # redis-cli --cluster rebalance 172.1.50.21:6379 --cluster-weight 6d1b7a14a6d0be55a5fcb9266358bd1a42244d47=2 -a 123456
[OK] All nodes agree about slots configuration.
*** No rebalancing needed! All nodes are within the 2.00% threshold.
# no rebalancing happens: the load is reported as too small to act on (likely also because the new, empty master is skipped unless --cluster-use-empty-masters is given); retried later after loading data
/ # redis-cli --cluster info 172.1.50.21:6379 -a 123456
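If the goal is to push slots onto the brand-new (empty) master right away, the help output above lists a dedicated flag for that (a sketch, not run in this session):
redis-cli --cluster rebalance 172.1.50.21:6379 --cluster-use-empty-masters -a 123456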
-- Remove nodes, add them back and rebalance
# running del-node on the master first fails; remove the replica first
/ # redis-cli --cluster del-node 172.1.30.21:6379 97737fa211ff9dcfb2ada9b7480964adecc6ccc9 -a 123456
/ # redis-cli --cluster del-node 172.1.50.21:6379 6d1b7a14a6d0be55a5fcb9266358bd1a42244d47 -a 123456
# add them back and test resharding (172.1.50.21:6379 just needs to be any node in the cluster)
/ # redis-cli --cluster reshard 172.1.50.21:6379 -a 123456
>>> Rebalancing across 4 nodes. Total weight = 5.00
Then answer the interactive prompts in order:
the number of slots to move (e.g. 500); the source node IDs (e.g. ee0dcbbcc3634ca6e5d079835695bfe822ce17e6), repeating until all the other masters have been entered, ending the list with done (as the prompt says); then confirm the computed plan and the migration starts.
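The same reshard can also be driven non-interactively with the flags listed in the help output above (a sketch using this cluster's node IDs):
redis-cli --cluster reshard 172.1.50.21:6379 \
  --cluster-from ee0dcbbcc3634ca6e5d079835695bfe822ce17e6 \
  --cluster-to 6d1b7a14a6d0be55a5fcb9266358bd1a42244d47 \
  --cluster-slots 500 --cluster-yes -a 123456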
/ # redis-cli --cluster rebalance 172.1.50.21:6379 --cluster-weight 6d1b7a14a6d0be55a5fcb9266358bd1a42244d47=2 -a 123456
# this time the rebalance migrates automatically: slots are resharded according to the weights
/ # redis-cli --cluster info 172.1.50.21:6379 -a 123456
172.1.50.21:6379 (6d1b7a14...) -> 6 keys | 6556 slots | 1 slaves.
172.1.50.13:6379 (819ad376...) -> 5 keys | 3276 slots | 1 slaves.
172.1.50.12:6379 (f02ee958...) -> 0 keys | 3276 slots | 1 slaves.
172.1.50.11:6379 (ee0dcbbc...) -> 4 keys | 3276 slots | 1 slaves.
This looks like a bug in redis-cli --cluster, since nothing improper was done in between: the automatic MEET gets confused with the gateway address. It had not appeared before, and this time repeatedly deleting the files and running cluster reset did not help, while the setup logs watched via sentinel looked fine.
The cluster build is complete.
5. Testing the cluster's hash slots
a. Install PHP and Composer on the host development machine
wget -O /usr/local/bin/composer https://getcomposer.org/downl... && chmod +x /usr/local/bin/composer; point Composer at a mirror with composer config -g repo.packagist composer https://packagist.laravel-chi...; then composer require swoft/swoft -vvv and composer require predis/predis -vvv to initialize, borrowing their autoload.php and so on.
b. PHP tests against the cluster
-- Passively confirming the data
A PHP script inserts the data. I have not yet found a way to operate Redis with cluster mode plus password authentication from the client, so the data is handled over plain connections:
//node list, treated as a cached view; update it to follow the cluster state
$servers = ['172.1.50.11:6379', '172.1.50.12:6379', '172.1.50.13:6379', '172.1.50.21:6379'];
$rs = [];
foreach ($servers as $addr){
$r = new Redis();
$server=explode(':',$addr);
$r->connect($server[0], (int) $server[1]);
$r->auth('123456');
$rs[$addr] = $r;
}
function getValue($rs, $key){
foreach ($rs as $ss => $r){
try{
return $r->get($key);
}catch (\RedisException $e){
//print_r("----- {$key}-{$ss}获取错误跳过:".$e->getMessage(). '<br/>');
continue;
}
}
}
function setValue($rs, $key, $value){
foreach ($rs as $addr=> $r){
try{
//echo "+++++ {$key}-{$addr}".'设置: '.$key. '<br/>';
return $r->set($key, $value);
}catch (\RedisException $e){
print_r("+++++ {$key}-{$addr}设置错误跳过:".$e->getMessage(). '<br/>');
continue;
}
}
}
for($i=0; $i<20000; $i++){
$key = 'set-'.$i;
if(getValue($rs, $key)) {
continue;
}else{
setValue($rs, $key, md5(time().$i));
}
}
foreach ($rs as $r){
$r->close();
}
Distribution of the 20,000 keys:
/ # redis-cli --cluster info 172.1.50.21:6379 -a 123456
172.1.50.21:6379 (6d1b7a14...) -> 8011 keys | 6556 slots | 1 slaves.
172.1.50.13:6379 (819ad376...) -> 3993 keys | 3276 slots | 1 slaves.
172.1.50.12:6379 (f02ee958...) -> 4000 keys | 3276 slots | 1 slaves.
172.1.50.11:6379 (ee0dcbbc...) -> 3996 keys | 3276 slots | 1 slaves.
-- Actively predicting placement
The Redis site provides a reference CRC16 implementation in C, and Predis ships Predis\Cluster\Hash\CRC16; combining that hash with the slot layout lets you "guess" which shard Redis will put a given key on.
require "vendor/autoload.php";
//node list, treated as a cached view; update it to follow the cluster state
$servers = ['172.1.50.11:6379', '172.1.50.12:6379', '172.1.50.13:6379', '172.1.50.21:6379'];
$rs = [];
//query the slot distribution across all nodes
$slotNodes = [];
foreach ($servers as $addr){
$r = new Redis();
$server=explode(':',$addr);
$r->connect($server[0], (int) $server[1]);
$r->auth('123456');
$rs[$addr] = $r;
if(empty($slotInfo)){
//a single node can report every node that owns slots
$slotInfo = $r->rawCommand('cluster','slots');
//print_r($slotInfo);exit; // one server may own several slot ranges, hence $ix; +1 matches the range numbering shown on the command line
foreach ($slotInfo as $ix => $value){
$slotNodes[$value[2][0].':'.$value[2][1].' '.($ix+1)]=[$value[0], $value[1]];
}
}
}
//compute slots and test batch lookups
$crc = new \Predis\Cluster\Hash\CRC16();
$getAddr = function ($key) use (&$slotNodes, &$crc, &$rs) {
$code = $crc->hash($key) % 16384; //the key step: slot = CRC16(key) mod 16384
foreach ($slotNodes as $addr => $boundary){
if( $code>=$boundary[0] && $code<=$boundary[1] ){
$host =explode(' ', $addr)[0];
return $addr. ' = '. $rs[$host]->get($key);
}
}
};
$result=[];
for($i=10; $i<30; $i++){
$key = 'set-'.$i;
$result[$key] = $getAddr($key);
}
echo '<pre>';
print_r($result);
foreach ($rs as $r){
$r->close();
}
Result as rendered on the page:
Array
(
[set-10] => 172.1.50.13:6379 3 = a521489f10f40fac11b6e18b0ae308f7
[set-11] => 172.1.50.21:6379 6 = f41ddf1be79148f98e084c94e94dd1c0
[set-12] => 172.1.50.21:6379 5 = c8e8c65131068796aaa294d6080d647e
[set-13] => 172.1.50.11:6379 2 = 4696932eef866ca3e2394ef20b67a23b
[set-14] => 172.1.50.13:6379 3 = 2d52e3229c681946734979060b91aa14
[set-15] => 172.1.50.21:6379 6 = 732ea4fd4dbe9ae9df70919b87e2ad2f
[set-16] => 172.1.50.21:6379 5 = c4d92fbb2984798e84bfb91ade87ca5f
[set-17] => 172.1.50.11:6379 2 = c386d5afb3018bf7f42b7d4e94d48e31
[set-18] => 172.1.50.13:6379 3 = ad1fd2bc72b5870d8198a7ac85d84ced
[set-19] => 172.1.50.21:6379 6 = 3ec03676746821bb356c2c76d7b293b9
[set-20] => 172.1.50.12:6379 1 = 4d7bc33ed4969cfad452a85bdd3829a2
[set-21] => 172.1.50.13:6379 3 = 9793ea679a37d1b397937aaba8fb5dd9
[set-22] => 172.1.50.21:6379 4 = 280b8d2ff4174b3f2a238103d2b1a7a5
[set-23] => 172.1.50.21:6379 5 = 49553566576d542f95210bef8b6c7bdb
[set-24] => 172.1.50.12:6379 1 = ca4689eec6789119b4703a66e3a7c68a
[set-25] => 172.1.50.13:6379 3 = 7410c8f7edd68a36f9aac0e9cbd60666
[set-26] => 172.1.50.21:6379 4 = e6801ec0f7c2c5bcebe69906b7cb37de
[set-27] => 172.1.50.21:6379 5 = 00df0f3e75ea79cb09f741b0c7a3b2d6
[set-28] => 172.1.50.12:6379 1 = eb4361e00535cba15b99bf7781d0550a
[set-29] => 172.1.50.13:6379 3 = 8d7d1c1976f19f2d7f042fde3dee77e7
)
OK, that wraps up the tests.
Summary
Points to note when building the cluster: read the command help (some entries span several lines; testing them out deepens understanding); for the PHP tests both phpredis and predis are usable, whichever is more convenient; cluster sharding operations have moved under --cluster, where reshard and rebalance work well together; failover (node-down) testing was not performed.