Three Ways to Set Up a Redis Cluster
Redis supports three clustering schemes:
Master-slave replication mode
Sentinel mode
Cluster mode
Master-Slave Replication Mode
How it works:
The master handles all writes and asynchronously replicates its command stream to the slaves; the slaves can serve read requests.
Pros:
Read/write splitting improves server throughput
Cons:
If the master fails, a slave has to be promoted to master manually
There is only one master, so the mode cannot handle high write concurrency
Clients cannot detect a change of master automatically
Setting Up a Master-Slave Cluster
Run one master node and one slave node via docker-compose.
docker-compose directory layout
.
├── docker-compose.yml
├── master
│   ├── data
│   │   └── dump.rdb
│   └── redis.conf
└── slave
    ├── data
    │   └── dump.rdb
    └── redis.conf
docker-compose.yml
version: "3"
networks:
  redis-replication:
    driver: bridge
    ipam:
      config:
        - subnet: 192.168.0.0/24
services:
  master:
    image: redis
    container_name: redis-master
    ports:
      - 6379:6379
    volumes:
      - "./master/redis.conf:/etc/redis/redis.conf"
      - "./master/data:/data"
    command: ["redis-server", "/etc/redis/redis.conf"]
    restart: always
    networks:
      redis-replication:
        ipv4_address: 192.168.0.2
  slave:
    image: redis
    container_name: redis-slave
    ports:
      - 6380:6379
    volumes:
      - "./slave/redis.conf:/etc/redis/redis.conf"
      - "./slave/data:/data"
    command: ["redis-server", "/etc/redis/redis.conf"]
    restart: always
    networks:
      redis-replication:
        ipv4_address: 192.168.0.3
master/redis.conf
port 6379
protected-mode no
# [Master] Diskless replication: stream the RDB straight into the socket instead of writing it to disk first
repl-diskless-sync no
# [Master] When set to yes, Nagle's algorithm is applied to replication traffic: less bandwidth, but slower sync
repl-disable-tcp-nodelay no
slave/redis.conf
port 6379
protected-mode no
# [Slave] Replicate from this master
slaveof 192.168.0.2 6379
# [Slave] Read-only mode
slave-read-only yes
# [Slave] Whether to keep answering (possibly stale) queries while syncing with the master
slave-serve-stale-data yes
Startup
docker-compose up
Startup logs:
[+] Building 0.0s (0/0) docker:desktop-linux
[+] Running 3/0
✔ Network redis_redis-replication Created 0.0s
✔ Container redis-master Created 0.0s
✔ Container redis-slave Created 0.0s
Attaching to redis-master, redis-slave
redis-master | 1:C 19 Nov 2023 09:27:02.640 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
redis-master | 1:C 19 Nov 2023 09:27:02.640 # Redis version=7.0.4, bits=64, commit=00000000, modified=0, pid=1, just started
redis-master | 1:C 19 Nov 2023 09:27:02.640 # Configuration loaded
redis-master | 1:M 19 Nov 2023 09:27:02.640 * monotonic clock: POSIX clock_gettime
redis-master | 1:M 19 Nov 2023 09:27:02.641 * Running mode=standalone, port=6379.
redis-master | 1:M 19 Nov 2023 09:27:02.641 # Server initialized
redis-master | 1:M 19 Nov 2023 09:27:02.641 # WARNING overcommit_memory is set to 0! Background save may fail under low memory condition. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
redis-master | 1:M 19 Nov 2023 09:27:02.643 * Loading RDB produced by version 7.0.4
redis-master | 1:M 19 Nov 2023 09:27:02.643 * RDB age 3063 seconds
redis-master | 1:M 19 Nov 2023 09:27:02.643 * RDB memory usage when created 0.97 Mb
redis-master | 1:M 19 Nov 2023 09:27:02.643 * Done loading RDB, keys loaded: 0, keys expired: 0.
redis-master | 1:M 19 Nov 2023 09:27:02.644 * DB loaded from disk: 0.001 seconds
redis-master | 1:M 19 Nov 2023 09:27:02.644 * Ready to accept connections
redis-slave | 1:C 19 Nov 2023 09:27:02.649 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
redis-slave | 1:C 19 Nov 2023 09:27:02.649 # Redis version=7.0.4, bits=64, commit=00000000, modified=0, pid=1, just started
redis-slave | 1:C 19 Nov 2023 09:27:02.649 # Configuration loaded
redis-slave | 1:S 19 Nov 2023 09:27:02.649 * monotonic clock: POSIX clock_gettime
redis-slave | 1:S 19 Nov 2023 09:27:02.650 * Running mode=standalone, port=6379.
redis-slave | 1:S 19 Nov 2023 09:27:02.650 # Server initialized
redis-slave | 1:S 19 Nov 2023 09:27:02.650 # WARNING overcommit_memory is set to 0! Background save may fail under low memory condition. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
redis-slave | 1:S 19 Nov 2023 09:27:02.652 * Loading RDB produced by version 7.0.4
redis-slave | 1:S 19 Nov 2023 09:27:02.652 * RDB age 31 seconds
redis-slave | 1:S 19 Nov 2023 09:27:02.652 * RDB memory usage when created 0.87 Mb
redis-slave | 1:S 19 Nov 2023 09:27:02.652 * Done loading RDB, keys loaded: 0, keys expired: 0.
redis-slave | 1:S 19 Nov 2023 09:27:02.652 * DB loaded from disk: 0.000 seconds
redis-slave | 1:S 19 Nov 2023 09:27:02.652 * Before turning into a replica, using my own master parameters to synthesize a cached master: I may be able to synchronize with the new master with just a partial transfer.
redis-slave | 1:S 19 Nov 2023 09:27:02.652 * Ready to accept connections
redis-slave | 1:S 19 Nov 2023 09:27:02.652 * Connecting to MASTER 192.168.0.2:6379
redis-slave | 1:S 19 Nov 2023 09:27:02.652 * MASTER <-> REPLICA sync started
redis-slave | 1:S 19 Nov 2023 09:27:02.652 * Non blocking connect for SYNC fired the event.
redis-slave | 1:S 19 Nov 2023 09:27:02.652 * Master replied to PING, replication can continue...
redis-slave | 1:S 19 Nov 2023 09:27:02.653 * Trying a partial resynchronization (request d9325e3543cdeb24055e06f3ed5623c7bc2c2c33:267).
redis-master | 1:M 19 Nov 2023 09:27:02.653 * Replica 192.168.0.3:6379 asks for synchronization
redis-master | 1:M 19 Nov 2023 09:27:02.653 * Partial resynchronization request from 192.168.0.3:6379 accepted. Sending 0 bytes of backlog starting from offset 267.
redis-slave | 1:S 19 Nov 2023 09:27:02.653 * Successful partial resynchronization with master.
redis-slave | 1:S 19 Nov 2023 09:27:02.653 # Master replication ID changed to f4b77e85d3ef205932ee0d76d9c65f73b1d56466
redis-slave | 1:S 19 Nov 2023 09:27:02.653 * MASTER <-> REPLICA sync: Master accepted a Partial Resynchronization.
Sentinel Mode
Pros:
Monitors the nodes and performs failover automatically
Cons:
A brief service interruption occurs during master-slave switchover, and the wait can be fairly long
Only one master serves writes at a time, so sentinel mode still cannot handle high write concurrency
How sentinels communicate with each other:
Sentinels discover and talk to one another through Redis's pub/sub system. Every two seconds, each sentinel publishes a message to the channel associated with each master+slaves group it monitors, containing its own host, IP, and runid, plus its monitoring configuration for that master; every other sentinel subscribed to the channel consumes the message and thereby learns of the sender's existence. Each sentinel likewise listens on the channel for every master+slaves group it monitors, discovering the other sentinels watching the same group. Sentinels also exchange their monitoring configuration for the master and keep it synchronized with one another.
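The discovery mechanism above can be simulated with a toy in-memory pub/sub bus. This is an illustration only: the `Bus` class and the runids are made up; real sentinels use Redis's actual pub/sub, publishing on the `__sentinel__:hello` channel.

```python
from collections import defaultdict

class Bus:
    """Toy in-memory pub/sub bus standing in for Redis pub/sub."""
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, channel, handler):
        self.subscribers[channel].append(handler)

    def publish(self, channel, message):
        for handler in self.subscribers[channel]:
            handler(message)

bus = Bus()
channel = "__sentinel__:hello"  # the channel real sentinels publish hello messages on
known = set()
# A subscriber records every runid it hears about on the channel.
bus.subscribe(channel, lambda msg: known.add(msg["runid"]))

# Every ~2 seconds each sentinel would publish its host, ip, runid and monitoring
# configuration; here we publish three toy hello messages.
for hello in ({"runid": "s1"}, {"runid": "s2"}, {"runid": "s3"}):
    bus.publish(channel, hello)

print(sorted(known))  # ['s1', 's2', 's3']
```

Real sentinels additionally skip hello messages carrying their own runid, so each one ends up knowing only about its peers.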
How sentinels elect a leader:
Any sentinel node can become the leader. Once a sentinel decides the master is subjectively down, it asks the other sentinels to elect it leader; a sentinel that has not yet voted for anyone in this round agrees, otherwise it refuses. When a candidate collects votes from more than half of the sentinel nodes, it becomes the leader; otherwise a new election round begins. (This follows the Raft leader-election algorithm.)
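The vote-counting rule can be sketched in a few lines of Python. This is a simplified illustration of the majority check only, not Sentinel's actual implementation; all names are made up.

```python
def elect_leader(candidate, sentinels, votes):
    """A candidate becomes leader only with a strict majority of ALL sentinels.
    `votes` maps each sentinel to the one candidate it granted its vote to in
    this election round (each sentinel votes at most once per round)."""
    granted = sum(1 for s in sentinels if votes.get(s) == candidate)
    return granted > len(sentinels) // 2

# Three sentinels: s1 votes for itself, s2 grants s1's request, s3 votes for itself.
sentinels = ["s1", "s2", "s3"]
votes = {"s1": "s1", "s2": "s1", "s3": "s3"}
print(elect_leader("s1", sentinels, votes))  # True  (2 of 3 votes)
print(elect_leader("s3", sentinels, votes))  # False (1 of 3 votes)
```

Because a strict majority is required, at most one sentinel can win a given round; a split vote simply triggers another round.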
How sentinels monitor the master and slaves:
Every second, each sentinel sends a PING command to all master and slave nodes; a node that replies is judged to be running normally.
How the new master is chosen:
The sentinel leader is elected first, and then it picks the new master by the following rules:
1. Priority: the slave-priority configuration option sets each slave's priority (a lower value means higher priority); among the slaves that qualify for promotion, the one with the highest priority wins
2. Replication progress: the slave whose data is closest to the old master's (compare master_repl_offset with slave_repl_offset)
3. Run ID: every instance has a run ID; if still tied, the slave with the smallest run ID is chosen
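The three rules collapse into a single sort key. The sketch below is a hypothetical illustration (the dict fields are made up); it also honors the real Redis rule that a slave-priority of 0 means the slave must never be promoted.

```python
def pick_new_master(slaves):
    """Pick the promotion candidate per the rules above: lowest slave-priority
    value first, then the largest replication offset (closest to the old
    master), then the smallest run ID. Priority 0 disqualifies a slave."""
    candidates = [s for s in slaves if s["priority"] > 0]
    if not candidates:
        return None
    return min(candidates, key=lambda s: (s["priority"], -s["offset"], s["runid"]))

slaves = [
    {"runid": "b", "priority": 100, "offset": 9000},
    {"runid": "a", "priority": 100, "offset": 9000},   # tie broken by run ID
    {"runid": "c", "priority": 100, "offset": 12000},  # most up to date
    {"runid": "d", "priority": 0,   "offset": 99999},  # never promoted
]
print(pick_new_master(slaves)["runid"])  # "c"
```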
Setting Up a Sentinel Cluster
Building on the master-slave setup, add a sentinel configuration.
.
├── docker-compose.yml
├── master
│   ├── data
│   │   └── dump.rdb
│   └── redis.conf
├── sentinel
│   └── sentinel.conf
└── slave
    ├── data
    │   └── dump.rdb
    └── redis.conf
docker-compose.yml
version: "3"
networks:
  redis-replication:
    driver: bridge
    ipam:
      config:
        - subnet: 192.168.0.0/24
services:
  master:
    image: redis
    container_name: redis-master
    ports:
      - 6379:6379
    volumes:
      - "./master/redis.conf:/etc/redis/redis.conf"
      - "./master/data:/data"
    command: ["redis-server", "/etc/redis/redis.conf"]
    restart: always
    networks:
      redis-replication:
        ipv4_address: 192.168.0.2
  slave-1:
    image: redis
    container_name: redis-slave-1
    ports:
      - 6380:6379
    volumes:
      - "./slave/redis.conf:/etc/redis/redis.conf"
      - "./slave/data:/data"
    command: ["redis-server", "/etc/redis/redis.conf"]
    restart: always
    networks:
      redis-replication:
        ipv4_address: 192.168.0.3
  slave-2:
    image: redis
    container_name: redis-slave-2
    ports:
      - 6381:6379
    volumes:
      - "./slave/redis.conf:/etc/redis/redis.conf"
      - "./slave/data:/data"
    command: ["redis-server", "/etc/redis/redis.conf"]
    restart: always
    networks:
      redis-replication:
        ipv4_address: 192.168.0.4
  sentinel:
    image: redis
    container_name: redis-sentinel
    ports:
      - 26379:26379
    volumes:
      - "./sentinel/sentinel.conf:/etc/redis/sentinel.conf"
    command: ["redis-sentinel", "/etc/redis/sentinel.conf"]
    restart: always
    networks:
      redis-replication:
        ipv4_address: 192.168.0.100
sentinel.conf
port 26379
# Monitor the redis master; failover starts once at least 1 sentinel considers it down (quorum = 1)
sentinel monitor redis-master 192.168.0.2 6379 1
# Consider the master down after 5000 ms without a valid reply
sentinel down-after-milliseconds redis-master 5000
# Failover timeout; the failover attempt is considered failed if it takes longer
sentinel failover-timeout redis-master 10000
# Maximum number of slaves that resynchronize with the new master at the same time during a failover
sentinel parallel-syncs redis-master 1
Connecting Spring Data Redis (Spring Boot) to the Sentinel cluster
application.yml
spring:
  redis:
    sentinel:
      master: redis-master
      nodes:
        - 127.0.0.1:26379
RedisConfiguration.java
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.redis.connection.lettuce.LettuceConnectionFactory;
import org.springframework.data.redis.core.RedisTemplate;
import org.springframework.data.redis.serializer.RedisSerializer;

@Configuration
public class RedisConfiguration {
    @Bean
    public RedisTemplate<String, Object> redisTemplate(LettuceConnectionFactory lettuceConnectionFactory) {
        RedisTemplate<String, Object> redisTemplate = new RedisTemplate<>();
        redisTemplate.setConnectionFactory(lettuceConnectionFactory);
        // Serialize keys as strings; RedisSerializer.string() is equivalent to new StringRedisSerializer()
        redisTemplate.setKeySerializer(RedisSerializer.string());
        // Serialize values as JSON; RedisSerializer.json() is equivalent to new GenericJackson2JsonRedisSerializer()
        redisTemplate.setValueSerializer(RedisSerializer.json());
        // Serialize hash keys as strings
        redisTemplate.setHashKeySerializer(RedisSerializer.string());
        // Serialize hash values as JSON
        redisTemplate.setHashValueSerializer(RedisSerializer.json());
        // Apply the configuration
        redisTemplate.afterPropertiesSet();
        return redisTemplate;
    }
}
Pitfall
In sentinel mode, the Lettuce client does not switch to the new master automatically!
Cluster Mode
Cluster mode offers high availability, supports dynamic scaling, and can withstand high-concurrency workloads.
How does Redis Cluster shard its data?
Redis Cluster maps every key to one of the 16384 hash slots numbered 0-16383 with a hash function:
Slot = CRC16(key) % 16384
Each node is responsible for a portion of the hash slots. For example, with 3 nodes, node A might hold slots 0-5460, node B slots 5461-10922, and node C slots 10923-16383.
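The slot mapping can be reproduced in a few lines. This sketch ignores hash tags for simplicity (Redis's real implementation first extracts the `{...}` substring of the key, if present, and hashes only that part):

```python
def crc16(data: bytes) -> int:
    """CRC16-CCITT (XMODEM), the checksum variant Redis Cluster uses."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) if crc & 0x8000 else (crc << 1)
            crc &= 0xFFFF
    return crc

def key_slot(key: str) -> int:
    """Map a key to its hash slot (hash tags ignored in this sketch)."""
    return crc16(key.encode()) % 16384

print(key_slot("foo"))  # 12182, matching CLUSTER KEYSLOT foo
```

Since 16384 is a power of two, `% 16384` is equivalent to the bit mask `& 16383` that the Redis source uses.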
How does Redis Cluster scale out?
Add a new node, then use redis-cli's reshard command to move hash slots to it manually.
redis-cli --cluster reshard 192.168.0.100:6379
Is the cluster still available while it is scaling?
Yes, the cluster remains available during a reshard (re-sharding). Redis Cluster partitions data into hash slots, each node owning a subset of them; a reshard migrates some slots from one node to another to redistribute the data while the cluster continues to serve requests.
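Clients stay functional during slot migration because the cluster answers with redirection errors: -MOVED when a slot has fully moved to another node, and -ASK when a specific key is mid-migration. Below is a hedged sketch of parsing such an error reply; real cluster-aware clients handle this internally, and the function name here is made up.

```python
def parse_redirect(error: str):
    """Parse a cluster redirection error such as 'MOVED 3999 127.0.0.1:6381'
    or 'ASK 3999 127.0.0.1:6381'.
    Returns (kind, slot, address), or None if the error is not a redirect."""
    parts = error.split()
    if len(parts) == 3 and parts[0] in ("MOVED", "ASK"):
        return parts[0], int(parts[1]), parts[2]
    return None

print(parse_redirect("MOVED 3999 127.0.0.1:6381"))  # ('MOVED', 3999, '127.0.0.1:6381')
print(parse_redirect("ASK 866 192.168.0.5:6379"))   # ('ASK', 866, '192.168.0.5:6379')
print(parse_redirect("ERR unknown command"))        # None
```

On -MOVED the client should also update its cached slot-to-node map; on -ASK it should retry just that one command on the target node (preceded by ASKING) without updating the map.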