Redis Sentinel Mode Environment Setup
Introduction
A standalone Redis instance has a performance ceiling. Master-slave replication lets Redis separate reads from writes, which improves availability and also keeps copies of the data on multiple Redis instances.
Replication alone has a weakness, though: if the master fails, an operator must switch the master and slave roles by hand, which adds labor cost, is error-prone, and cannot happen automatically. Redis's sentinel mechanism solves this by detecting a failed master and promoting a slave to master automatically.
Preparation
This walkthrough builds a one-master, two-slave deployment on three virtual machines: 192.168.127.14 (master), 192.168.127.101 (slave), and 192.168.127.102 (slave), all running CentOS 7.
Installing a standalone Redis instance was covered in an earlier article and is not repeated here; see http://www.jiajiajia.club/blog/artical/4ngaxn9dyq0t/537. This article builds on that setup.
Master configuration
From the standalone installation, redis.conf already contains the following settings:
# Host binding; commenting it out allows connections from any host
#bind 127.0.0.1
# Disable protected mode
protected-mode no
# Port, 6379 by default
port 6379
# Run as a background daemon
daemonize yes
# Redis log file path
logfile "/usr/local/redis/log/redis.log"
# Directory for persistence data files
dir "/usr/local/redis/data"
# Password required to authenticate against this instance
requirepass "123456"
# Enable AOF persistence
appendonly yes
For sentinel mode, one extra setting must be added to redis.conf:
# Password used to authenticate against the master; keep it identical to this
# instance's own password so that a slave can later be promoted to master
masterauth "123456"
Create a sentinel.conf file under /etc/redis/; placing it in the same directory as redis.conf is convenient but not required:
# Disable protected mode
protected-mode no
# Default sentinel port
port 26379
# Run as a background daemon
daemonize yes
# pid file; the default is fine
pidfile "/var/run/redis-sentinel.pid"
# Sentinel log file
logfile "/usr/local/redis/log/sentinel.log"
# Monitor the Redis master for failure
# sentinel monitor <master-redis-name> <master-redis-ip> <master-redis-port> <quorum>
# quorum is the number of sentinels that must agree the master is down before
# it is considered truly failed (usually: total sentinels / 2 + 1).
# The name mymaster is arbitrary; clients use it to look up the master.
sentinel monitor mymaster 192.168.127.14 6379 2
sentinel auth-pass mymaster 123456
sentinel announce-ip "192.168.127.14"
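The quorum value above can be sanity-checked: with the three sentinels in this deployment, the usual majority formula gives 2, which matches the sentinel monitor line. A minimal sketch of the arithmetic:

```shell
# Majority quorum for a sentinel deployment: total sentinels / 2 + 1
# (integer division). This walkthrough runs three sentinels.
SENTINELS=3
QUORUM=$(( SENTINELS / 2 + 1 ))
echo "$QUORUM"   # prints 2 -- the quorum used in sentinel monitor above
```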
Start Redis and sentinel on the master node:
[root@localhost log]# /usr/local/bin/redis-server /etc/redis/redis.conf
[root@localhost log]# /usr/local/bin/redis-sentinel /etc/redis/sentinel.conf
Start the client; if the following commands succeed, the services are up:
[root@localhost log]# /usr/local/bin/redis-cli
127.0.0.1:6379> auth 123456
OK
127.0.0.1:6379> ping
PONG
127.0.0.1:6379>
Remember to open ports 6379 and 26379 on all three servers:
firewall-cmd --zone=public --add-port=6379/tcp --permanent && firewall-cmd --zone=public --add-port=26379/tcp --permanent && firewall-cmd --reload
Slave configuration
Installation and configuration on the slaves are almost identical to the master; redis.conf and sentinel.conf can be copied straight from the master and then adjusted. The two slaves use the same configuration.
Add one setting to redis.conf:
# Configure the master-slave relationship
# (on Redis 5.0+, replicaof is the preferred alias for this directive)
slaveof 192.168.127.14 6379
Modify sentinel.conf:
sentinel announce-ip 192.168.127.101
Change the IP in this setting to the local machine's address; the remaining settings can stay unchanged. If the file contains a sentinel myid line (copied over from the master), delete it so that each sentinel generates its own unique ID.
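Concretely, with the two slave addresses used in this walkthrough, the value on each host would be:

```
# sentinel.conf on 192.168.127.101
sentinel announce-ip "192.168.127.101"

# sentinel.conf on 192.168.127.102
sentinel announce-ip "192.168.127.102"
```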
Start redis and sentinel the same way as on the master node.
Once the master and both slaves are up, you can inspect the replication state:
[root@localhost log]# /usr/local/bin/redis-cli
127.0.0.1:6379> auth 123456
OK
127.0.0.1:6379> info replication
# Replication
role:master # this node's role; here it is the master
connected_slaves:2
slave0:ip=192.168.127.101,port=6379,state=online,offset=172847,lag=1 # slave info
slave1:ip=192.168.127.102,port=6379,state=online,offset=172991,lag=1 # slave info
master_replid:5e9f379fedf9502122ea1bf0667fa135024958db
master_replid2:0000000000000000000000000000000000000000
master_repl_offset:173278
second_repl_offset:-1
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:1
repl_backlog_histlen:173278
127.0.0.1:6379>
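For scripted health checks, the connected_slaves field can be pulled out of this output. A minimal sketch, using the session output above as sample text (a live check would pipe redis-cli -a 123456 info replication instead):

```shell
# Sample taken from the "info replication" session above; a live check would
# replace this with the output of: redis-cli -a 123456 info replication
info='role:master
connected_slaves:2
slave0:ip=192.168.127.101,port=6379,state=online,offset=172847,lag=1
slave1:ip=192.168.127.102,port=6379,state=online,offset=172991,lag=1'

# Extract the slave count (stripping any trailing carriage return)
slaves=$(printf '%s\n' "$info" | awk -F: '/^connected_slaves/{gsub(/\r/,"",$2); print $2}')
echo "$slaves"   # 2
```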
At this point, the one-master, two-slave, three-sentinel Redis architecture is complete.
Configuring Redis sentinel mode in Spring Boot
Add the dependencies to the pom file:
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-redis</artifactId>
</dependency>
<dependency>
    <groupId>org.apache.commons</groupId>
    <artifactId>commons-pool2</artifactId>
</dependency>
application.yml configuration:
server:
  port: 8080
spring:
  redis:
    database: 0
    password: 123456
    timeout: 3000
    sentinel: # sentinel mode
      master: mymaster # master name, as configured in sentinel monitor
      nodes: 192.168.127.101:26379,192.168.127.102:26379,192.168.127.14:26379
    lettuce:
      pool:
        max-idle: 50
        min-idle: 10
        max-active: 100
        max-wait: 1000
Testing
import org.springframework.data.redis.core.RedisTemplate;
import org.springframework.data.redis.core.ValueOperations;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

import javax.annotation.Resource;

@RestController
public class TestController {

    @Resource
    private RedisTemplate redisTemplate;

    @GetMapping("set")
    public boolean set(String key, String value) {
        final ValueOperations valueOperations = redisTemplate.opsForValue();
        valueOperations.set(key, value);
        return true;
    }

    @GetMapping("get")
    public Object get(String key) {
        final ValueOperations valueOperations = redisTemplate.opsForValue();
        return valueOperations.get(key);
    }
}
Under normal operation, a key/value written through the API (for example GET /set?key=a&value=b) or through a Redis client is replicated to the other two servers, so it can be read back from any node.
If we now manually stop the Redis service on the master node, the application log shows:
2023-01-13 16:50:59.626 INFO 9192 --- [nio-8080-exec-1] o.a.c.c.C.[Tomcat].[localhost].[/] : Initializing Spring DispatcherServlet 'dispatcherServlet'
2023-01-13 16:50:59.626 INFO 9192 --- [nio-8080-exec-1] o.s.web.servlet.DispatcherServlet : Initializing Servlet 'dispatcherServlet'
2023-01-13 16:50:59.630 INFO 9192 --- [nio-8080-exec-1] o.s.web.servlet.DispatcherServlet : Completed initialization in 4 ms
2023-01-13 16:54:10.211 INFO 9192 --- [xecutorLoop-1-5] i.l.core.protocol.ConnectionWatchdog : Reconnecting, last destination was /192.168.127.14:6379
2023-01-13 16:54:12.233 WARN 9192 --- [ioEventLoop-4-4] i.l.core.protocol.ConnectionWatchdog : Cannot reconnect to [192.168.127.14:6379]: Connection refused: no further information: /192.168.127.14:6379
2023-01-13 16:54:16.497 INFO 9192 --- [ecutorLoop-1-11] i.l.core.protocol.ConnectionWatchdog : Reconnecting, last destination was 192.168.127.14:6379
2023-01-13 16:54:18.512 WARN 9192 --- [oEventLoop-4-10] i.l.core.protocol.ConnectionWatchdog : Cannot reconnect to [192.168.127.14:6379]: Connection refused: no further information: /192.168.127.14:6379
2023-01-13 16:54:22.799 INFO 9192 --- [xecutorLoop-1-5] i.l.core.protocol.ConnectionWatchdog : Reconnecting, last destination was 192.168.127.14:6379
2023-01-13 16:54:24.815 WARN 9192 --- [ioEventLoop-4-4] i.l.core.protocol.ConnectionWatchdog : Cannot reconnect to [192.168.127.14:6379]: Connection refused: no further information: /192.168.127.14:6379
2023-01-13 16:54:29.898 INFO 9192 --- [ecutorLoop-1-11] i.l.core.protocol.ConnectionWatchdog : Reconnecting, last destination was 192.168.127.14:6379
2023-01-13 16:54:31.910 WARN 9192 --- [oEventLoop-4-10] i.l.core.protocol.ConnectionWatchdog : Cannot reconnect to [192.168.127.14:6379]: Connection refused: no further information: /192.168.127.14:6379
2023-01-13 16:54:37.097 INFO 9192 --- [xecutorLoop-1-3] i.l.core.protocol.ConnectionWatchdog : Reconnecting, last destination was 192.168.127.14:6379
2023-01-13 16:54:39.116 WARN 9192 --- [ioEventLoop-4-2] i.l.core.protocol.ConnectionWatchdog : Cannot reconnect to [192.168.127.14:6379]: Connection refused: no further information: /192.168.127.14:6379
2023-01-13 16:54:43.298 INFO 9192 --- [xecutorLoop-1-5] i.l.core.protocol.ConnectionWatchdog : Reconnecting, last destination was 192.168.127.14:6379
2023-01-13 16:54:43.307 INFO 9192 --- [ioEventLoop-4-4] i.l.core.protocol.ReconnectionHandler : Reconnected to 192.168.127.102:6379
After the master (192.168.127.14) was stopped, the application kept trying to reconnect; after five failed attempts, sentinel had promoted a slave, and the sixth reconnect succeeded against the new master (192.168.127.102).