
Redis multi-master multi-replica cluster setup

Z先森
2021-05-27

Preparation

Prepare two machines running CentOS 7. This article builds a 3-master, 3-replica cluster across these two hosts, and every step below must be performed on both machines. Depending on your situation you can use more machines, for example six machines for 3 masters and 3 replicas, or any N masters and N replicas.
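For reference, the layout assumed in this article (the IP addresses are the ones used in the cluster-create command later; substitute your own):

192.168.10.100  ports 10001 10002 10003
192.168.10.101  ports 10001 10002 10003

With --cluster-replicas 1, redis-cli will pick three of these six instances as masters and three as replicas when the cluster is created.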

Download and install

Download Redis directly from the official website; version 5.0.7 is used here.
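For example (assuming the tarball is fetched from the standard release location on download.redis.io):

wget http://download.redis.io/releases/redis-5.0.7.tar.gz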

tar -zxvf redis-5.0.7.tar.gz
cd redis-5.0.7
make
make test && make install

System environment configuration

# disable transparent huge pages now and persist the change across reboots (recommended for Redis)
echo never > /sys/kernel/mm/transparent_hugepage/enabled
echo "echo never > /sys/kernel/mm/transparent_hugepage/enabled" >> /etc/rc.local
# allow memory overcommit so background saves (fork) do not fail, and raise the listen backlog
echo "vm.overcommit_memory=1" >> /etc/sysctl.conf
echo "net.core.somaxconn = 1024" >> /etc/sysctl.conf
chmod +x /etc/rc.d/rc.local
sysctl -p
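To confirm the settings took effect, a quick check:

cat /sys/kernel/mm/transparent_hugepage/enabled   # should show: always madvise [never]
sysctl vm.overcommit_memory net.core.somaxconn    # should report the values set above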

Add a user and directories

groupadd redis
useradd redis -g redis -s /sbin/nologin
mkdir -p /etc/redis-cluster/10001
mkdir -p /etc/redis-cluster/10002
mkdir -p /etc/redis-cluster/10003
mkdir /var/log/redis
chown redis.redis /var/log/redis
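The configuration files in the next section set `dir /var/lib/redis`, which the steps above do not create; assuming that data directory does not already exist, add it as well and make it writable by the redis user:

mkdir -p /var/lib/redis
chown redis.redis /var/lib/redis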

Add the configuration

# Node 1: port 10001
cat > /etc/redis-cluster/10001/redis.conf << EOF
bind 0.0.0.0
protected-mode yes
port 10001
tcp-backlog 511
timeout 0
tcp-keepalive 300
daemonize no
supervised no
pidfile /var/run/redis_10001.pid
loglevel notice
logfile /var/log/redis/redis_10001.log
databases 16
save ""
stop-writes-on-bgsave-error yes
rdbcompression yes
rdbchecksum yes
dbfilename dump-10001.rdb
dir /var/lib/redis
slave-serve-stale-data no
slave-read-only yes
repl-diskless-sync yes
repl-diskless-sync-delay 5
repl-disable-tcp-nodelay no
slave-priority 100
maxclients 10000
# 4GB (4294967296 bytes)
maxmemory 4294967296
maxmemory-policy volatile-lru
appendonly no
appendfilename "appendonly.aof"
appendfsync everysec
no-appendfsync-on-rewrite no
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
aof-load-truncated yes
lua-time-limit 5000
cluster-enabled yes
cluster-config-file nodes-10001.conf
cluster-node-timeout 15000
cluster-require-full-coverage no
slowlog-log-slower-than 15000
slowlog-max-len 1000
latency-monitor-threshold 0
notify-keyspace-events ""
hash-max-ziplist-entries 512
hash-max-ziplist-value 64
list-max-ziplist-size -2
list-compress-depth 0
set-max-intset-entries 512
zset-max-ziplist-entries 128
zset-max-ziplist-value 64
hll-sparse-max-bytes 3000
activerehashing yes
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit slave 512mb 218mb 60
client-output-buffer-limit pubsub 32mb 8mb 60
hz 50
aof-rewrite-incremental-fsync yes
masterauth 123456#
requirepass 123456#
EOF

# Node 2: port 10002
cat > /etc/redis-cluster/10002/redis.conf << EOF
bind 0.0.0.0
protected-mode yes
port 10002
tcp-backlog 511
timeout 0
tcp-keepalive 300
daemonize no
supervised no
pidfile /var/run/redis_10002.pid
loglevel notice
logfile /var/log/redis/redis_10002.log
databases 16
save ""
stop-writes-on-bgsave-error yes
rdbcompression yes
rdbchecksum yes
dbfilename dump-10002.rdb
dir /var/lib/redis
slave-serve-stale-data no
slave-read-only yes
repl-diskless-sync yes
repl-diskless-sync-delay 5
repl-disable-tcp-nodelay no
slave-priority 100
maxclients 10000
# 4GB (4294967296 bytes)
maxmemory 4294967296
maxmemory-policy volatile-lru
appendonly no
appendfilename "appendonly.aof"
appendfsync everysec
no-appendfsync-on-rewrite no
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
aof-load-truncated yes
lua-time-limit 5000
cluster-enabled yes
cluster-config-file nodes-10002.conf
cluster-node-timeout 15000
cluster-require-full-coverage no
slowlog-log-slower-than 15000
slowlog-max-len 1000
latency-monitor-threshold 0
notify-keyspace-events ""
hash-max-ziplist-entries 512
hash-max-ziplist-value 64
list-max-ziplist-size -2
list-compress-depth 0
set-max-intset-entries 512
zset-max-ziplist-entries 128
zset-max-ziplist-value 64
hll-sparse-max-bytes 3000
activerehashing yes
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit slave 512mb 218mb 60
client-output-buffer-limit pubsub 32mb 8mb 60
hz 50
aof-rewrite-incremental-fsync yes
masterauth 123456#
requirepass 123456#
EOF

# Node 3: port 10003
cat > /etc/redis-cluster/10003/redis.conf << EOF
bind 0.0.0.0
protected-mode yes
port 10003
tcp-backlog 511
timeout 0
tcp-keepalive 300
daemonize no
supervised no
pidfile /var/run/redis_10003.pid
loglevel notice
logfile /var/log/redis/redis_10003.log
databases 16
save ""
stop-writes-on-bgsave-error yes
rdbcompression yes
rdbchecksum yes
dbfilename dump-10003.rdb
dir /var/lib/redis
slave-serve-stale-data no
slave-read-only yes
repl-diskless-sync yes
repl-diskless-sync-delay 5
repl-disable-tcp-nodelay no
slave-priority 100
maxclients 10000
# 4GB (4294967296 bytes)
maxmemory 4294967296
maxmemory-policy volatile-lru
appendonly no
appendfilename "appendonly.aof"
appendfsync everysec
no-appendfsync-on-rewrite no
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
aof-load-truncated yes
lua-time-limit 5000
cluster-enabled yes
cluster-config-file nodes-10003.conf
cluster-node-timeout 15000
cluster-require-full-coverage no
slowlog-log-slower-than 15000
slowlog-max-len 1000
latency-monitor-threshold 0
notify-keyspace-events ""
hash-max-ziplist-entries 512
hash-max-ziplist-value 64
list-max-ziplist-size -2
list-compress-depth 0
set-max-intset-entries 512
zset-max-ziplist-entries 128
zset-max-ziplist-value 64
hll-sparse-max-bytes 3000
activerehashing yes
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit slave 512mb 218mb 60
client-output-buffer-limit pubsub 32mb 8mb 60
hz 50
aof-rewrite-incremental-fsync yes
masterauth 123456#
requirepass 123456#
EOF
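Since the three node configs above differ only in the port-derived values, nodes 2 and 3 could equally be generated from the node 1 file; a minimal sketch, assuming the node 1 config has already been written as shown:

# the node-1 config serves as the template; only the port-derived values
# (port, pidfile, logfile, dbfilename, cluster-config-file) differ per node
for port in 10002 10003; do
  mkdir -p /etc/redis-cluster/${port}
  sed "s/10001/${port}/g" /etc/redis-cluster/10001/redis.conf > /etc/redis-cluster/${port}/redis.conf
done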

Add systemd services

#10001
cat >/usr/lib/systemd/system/redis-10001.service << EOF
[Unit]
Description=Redis persistent key-value database
After=network.target

[Service]
LimitNOFILE=65535
ExecStart=/usr/local/bin/redis-server /etc/redis-cluster/10001/redis.conf --daemonize no
ExecStop=/usr/local/bin/redis-cli -h 127.0.0.1 -p 10001 shutdown
User=redis
Group=redis
RuntimeDirectory=redis
RuntimeDirectoryMode=0755

[Install]
WantedBy=multi-user.target
EOF

#10002
cat >/usr/lib/systemd/system/redis-10002.service << EOF
[Unit]
Description=Redis persistent key-value database
After=network.target

[Service]
LimitNOFILE=65535
ExecStart=/usr/local/bin/redis-server /etc/redis-cluster/10002/redis.conf --daemonize no
ExecStop=/usr/local/bin/redis-cli -h 127.0.0.1 -p 10002 shutdown
User=redis
Group=redis
RuntimeDirectory=redis
RuntimeDirectoryMode=0755

[Install]
WantedBy=multi-user.target
EOF

#10003
cat >/usr/lib/systemd/system/redis-10003.service << EOF
[Unit]
Description=Redis persistent key-value database
After=network.target

[Service]
LimitNOFILE=65535
ExecStart=/usr/local/bin/redis-server /etc/redis-cluster/10003/redis.conf --daemonize no
ExecStop=/usr/local/bin/redis-cli -h 127.0.0.1 -p 10003 shutdown
User=redis
Group=redis
RuntimeDirectory=redis
RuntimeDirectoryMode=0755

[Install]
WantedBy=multi-user.target
EOF

Starting and stopping the services

# reload systemd so it picks up the new unit files, then enable/start as needed
systemctl daemon-reload
systemctl {start|stop|restart|enable|status} redis-10001.service
systemctl {start|stop|restart|enable|status} redis-10002.service
systemctl {start|stop|restart|enable|status} redis-10003.service
# multiple services can also be operated on at once, e.g.
systemctl start redis-10001 redis-10002 redis-10003
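Once the three instances are started, a quick liveness check (the password is the requirepass value from the configs above):

for p in 10001 10002 10003; do
  redis-cli -p "$p" -a 123456# ping   # each instance should answer PONG
done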

Create the cluster

# run this on any one of the machines
# if your topology is different, simply list all of the Redis nodes here; this step normally
# finishes in 1-2 minutes; if it hangs, delete the data and start over (see "Rebuild the cluster" below)
redis-cli -a 123456# --cluster create 192.168.10.100:10001 192.168.10.100:10002 192.168.10.100:10003 192.168.10.101:10001 192.168.10.101:10002 192.168.10.101:10003 --cluster-replicas 1
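After the create command finishes, the slot coverage and replica assignment can be verified from any node, for example:

redis-cli -a 123456# --cluster check 192.168.10.100:10001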

Rebuild the cluster

# if the previous step hung or did not succeed, the cluster has to be rebuilt
# stop Redis and delete everything under /var/lib/redis/ on every node
systemctl stop redis-10001 redis-10002 redis-10003 && rm -fr /var/lib/redis/*
# restart Redis and run the cluster-create command again

Common Redis cluster commands

# check the cluster state
redis-cli -p 10001 -a 123456# cluster info
# check node information
redis-cli -p 10001 -a 123456# cluster nodes
# queries
Pass the -c flag when connecting with redis-cli to connect in cluster mode; queries are then redirected to the correct node automatically, so there is no need to query each master separately.
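For example:

# -c follows MOVED/ASK redirects automatically
redis-cli -c -p 10001 -a 123456#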

Notes

slave-serve-stale-data no

With this setting in the configuration file, an error is reported while the cluster is being created (it does not affect normal operation):

[ERR] Unable to load info fo node xxx

Decide for yourself whether to set it (the default is yes). The official explanation:

When a slave loses its connection with the master, or when the replication
is still in progress, the slave can act in two different ways:

  1. if slave-serve-stale-data is set to 'yes' (the default) the slave will
    still reply to client requests, possibly with out of date data, or the
    data set may just be empty if this is the first synchronization.

  2. if slave-serve-stale-data is set to 'no' the slave will reply with
    an error "SYNC with master in progress" to all the kind of commands
    but to INFO and SLAVEOF.

With slave-serve-stale-data set to yes, a replica keeps answering client requests during replication, possibly with stale data.
With slave-serve-stale-data set to no, a replica rejects all requests except INFO and SLAVEOF while replication is in progress, returning "SYNC with master in progress".
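To confirm the value a running node is actually using (Redis 5 also exposes this setting under the newer replica-* name, so a glob pattern is the simplest way to match it):

redis-cli -p 10001 -a 123456# config get '*serve-stale-data*'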
