Background
A recent project required me to set up Redis replication with Sentinel. These are quick notes on the pitfalls I hit along the way.
Overall configuration
- 192.168.1.100:6379 -> master
- 192.168.1.101:6379 -> slave
- 192.168.1.102:6379 -> slave
- 192.168.1.100:26379 -> sentinel
- 192.168.1.101:26379 -> sentinel
- 192.168.1.102:26379 -> sentinel
Setup steps
1. Install Redis
# Unpack and build
tar -xvf /usr/local/redis-3.2.11.tar.gz -C /usr/local
cd /usr/local/redis-3.2.11 && make
mkdir -p /usr/local/redis/bin
cp /usr/local/redis-3.2.11/src/{redis-benchmark,redis-check-aof,redis-check-rdb,redis-cli,redis-sentinel,redis-server,redis-trib.rb} /usr/local/redis/bin
mkdir -p /u01/redis/{6379,26379}/{log,data,pid,conf}
# Add to PATH (single quotes so $PATH expands at login time, not now)
echo 'export PATH=/usr/local/redis/bin:$PATH' >> /etc/profile
source /etc/profile
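The brace-expansion `mkdir` is easy to get wrong (an unbalanced brace silently creates literal `{`-named directories). It can be dry-run in a scratch directory first; `BASE` below is just a throwaway location for the check:

```shell
# Dry-run the directory layout in a temporary location (requires bash,
# since brace expansion is not POSIX sh).
BASE=$(mktemp -d)
mkdir -p "$BASE"/u01/redis/{6379,26379}/{log,data,pid,conf}
# Show the resulting tree
find "$BASE/u01/redis" -type d | sort
```

If the output lists eight leaf directories (log, data, pid, conf under both 6379 and 26379), the same pattern is safe to run against the real `/u01`.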
2. redis-6379 configuration
The Redis node configuration is essentially the following; copy it to /u01/redis/6379/conf/redis_6379.conf on each of the three VMs.
bind 0.0.0.0
protected-mode no
daemonize yes
pidfile "/u01/redis/6379/pid/redis_6379.pid"
port 6379
tcp-backlog 511
timeout 0
tcp-keepalive 0
loglevel notice
logfile "/u01/redis/6379/log/redis_6379.log"
databases 16
stop-writes-on-bgsave-error yes
rdbcompression yes
rdbchecksum yes
dbfilename "dump.rdb"
dir "/u01/redis/6379/data"
slave-serve-stale-data yes
slave-read-only yes
repl-diskless-sync no
repl-diskless-sync-delay 5
repl-disable-tcp-nodelay no
slave-priority 100
min-slaves-to-write 1
min-slaves-max-lag 10
appendonly no
appendfilename "appendonly.aof"
appendfsync everysec
no-appendfsync-on-rewrite no
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
aof-load-truncated yes
lua-time-limit 5000
slowlog-log-slower-than 10000
slowlog-max-len 128
latency-monitor-threshold 0
notify-keyspace-events ""
hash-max-ziplist-entries 512
hash-max-ziplist-value 64
# Redis 3.2 replaced list-max-ziplist-entries/-value with list-max-ziplist-size
list-max-ziplist-size 128
Start the service
# Run on each of the three VMs
redis-server /u01/redis/6379/conf/redis_6379.conf
Set up the master/slave relationship
# On 192.168.1.101
redis-cli -p 6379 SLAVEOF 192.168.1.100 6379
# On 192.168.1.102
redis-cli -p 6379 SLAVEOF 192.168.1.100 6379
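SLAVEOF returns OK immediately, before the initial sync has actually completed. A small helper (my own sketch, not part of Redis) can be used to poll `info replication` on a slave until the replication link is up; note that redis-cli terminates lines with CRLF, hence the `tr -d '\r'`:

```shell
# link_up: succeed (exit 0) only if an "info replication" dump on stdin
# reports master_link_status:up. redis-cli emits \r\n line endings, so
# strip the carriage returns before matching.
link_up() {
  tr -d '\r' | grep -q '^master_link_status:up$'
}

# Usage on a slave (polls once per second until the sync link is up):
#   until redis-cli -p 6379 info replication | link_up; do sleep 1; done
```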
Check replication
192.168.1.100:6379> info replication
# Replication
role:master
connected_slaves:2
min_slaves_good_slaves:2
slave0:ip=192.168.1.101,port=6379,state=online,offset=9577826,lag=1
slave1:ip=192.168.1.102,port=6379,state=online,offset=9577965,lag=0
master_repl_offset:9577965
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:8529390
repl_backlog_histlen:1048576
192.168.1.101:6379> info replication
# Replication
role:slave
master_host:192.168.1.100
master_port:6379
master_link_status:up
master_last_io_seconds_ago:0
master_sync_in_progress:0
slave_repl_offset:9600220
slave_priority:100
slave_read_only:1
connected_slaves:0
min_slaves_good_slaves:0
master_repl_offset:0
repl_backlog_active:0
repl_backlog_size:1048576
repl_backlog_first_byte_offset:0
repl_backlog_histlen:0
192.168.1.102:6379> info replication
# Replication
role:slave
master_host:192.168.1.100
master_port:6379
master_link_status:up
master_last_io_seconds_ago:0
master_sync_in_progress:0
slave_repl_offset:9612675
slave_priority:100
slave_read_only:1
connected_slaves:0
min_slaves_good_slaves:0
master_repl_offset:0
repl_backlog_active:0
repl_backlog_size:1048576
repl_backlog_first_byte_offset:0
repl_backlog_histlen:0
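Eyeballing three INFO dumps gets tedious once you do it repeatedly. The role and slave count can be pulled out with a short awk filter (a local convenience sketch, not a Redis tool):

```shell
# repl_summary: read an "info replication" dump on stdin and print
# "<role> <connected_slaves>", e.g. "master 2" or "slave 0".
repl_summary() {
  tr -d '\r' | awk -F: '
    $1 == "role"             { role = $2 }
    $1 == "connected_slaves" { n = $2 }
    END { print role, n + 0 }'
}

# Usage:
#   redis-cli -h 192.168.1.100 -p 6379 info replication | repl_summary
```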
3. sentinel-26379 configuration
The Sentinel node configuration is essentially the following; copy it to /u01/redis/26379/conf/sentinel_26379.conf on each of the three VMs. The `sentinel monitor mymaster` line points at the Redis master node, i.e. 192.168.1.100, so the file is identical on all three machines. (The trailing number is the quorum; with three Sentinels, 2 is the more common choice so that a single Sentinel cannot declare the master down on its own.)
port 26379
bind 0.0.0.0
daemonize yes
protected-mode no
dir "/u01/redis/26379/data"
logfile "/u01/redis/26379/log/sentinel_26379.log"
sentinel monitor mymaster 192.168.1.100 6379 1
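One thing worth knowing about the quorum argument (the trailing `1` above): it only controls how many Sentinels must agree before the master is flagged objectively down. The failover itself still has to be authorized by a majority of all Sentinels. The majority rule is just floor(n/2) + 1, sketched here as a trivial helper:

```shell
# majority: number of Sentinels that must agree to authorize a
# failover, given the total number of Sentinels: floor(n/2) + 1.
majority() {
  echo $(( $1 / 2 + 1 ))
}

# With 3 Sentinels a failover needs 2 of them; with 5, it needs 3.
```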
Once the Sentinels are up, watch how /u01/redis/26379/conf/sentinel_26379.conf changes: Sentinel rewrites the file itself, appending the slaves and other Sentinels it discovers. Check Sentinel state with info sentinel:
redis-cli -h 192.168.1.100 -p 26379 info sentinel
# Sentinel
sentinel_masters:1
sentinel_tilt:0
sentinel_running_scripts:0
sentinel_scripts_queue_length:0
sentinel_simulate_failure_flags:0
master0:name=mymaster,status=ok,address=192.168.1.100:6379,slaves=2,sentinels=3
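For scripted health checks, the status field of the `master0` line can be extracted with sed (again just a local helper, not part of redis-cli):

```shell
# sentinel_master_status: read an "info sentinel" dump on stdin and
# print the status field of the master0 line, e.g. "ok", "sdown", "odown".
sentinel_master_status() {
  tr -d '\r' | sed -n 's/^master0:.*status=\([^,]*\).*/\1/p'
}

# Usage:
#   redis-cli -h 192.168.1.100 -p 26379 info sentinel | sentinel_master_status
```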
Summary
- While setting this up, the Sentinels on 192.168.1.101 and 192.168.1.102 started failing a while after startup; it turned out they were not actually monitoring the master.
- When things go wrong, read the logs.
- Next year I need to take more notes; I'm getting older and my memory keeps getting worse!