
We currently run a persistence-enabled redis-ha ( https://github.com/helm/charts/tree/master/stable/redis-ha v4.4.4) with 3 replicas on Kubernetes (RKE, on-prem) backed by a Longhorn storage class. For some unknown reason the master and the replicas stop being able to synchronize. It can happen 30 minutes after a restart or a day after a restart, and in the end we get the errors below.

Slave 1 error

redis-cli role
1) "slave"
2) "10.43.6.52"
3) (integer) 6379
4) "connect"
5) (integer) -1

1:S 22 Sep 2020 09:53:22.843 * Master replied to PING, replication can continue... 
1:S 22 Sep 2020 09:53:22.858 * Partial resynchronization not possible (no cached master) 
1:S 22 Sep 2020 09:53:28.189 * Full resync from master: bd87e85aa41950b9844c1bcb29a7870b96b53f79:804594411 
1:S 22 Sep 2020 09:53:33.204 # Opening the temp file needed for MASTER <-> REPLICA synchronization: I/O error 
1:S 22 Sep 2020 09:53:33.905 * Connecting to MASTER 10.43.6.52:6379 
1:S 22 Sep 2020 09:53:33.905 * MASTER <-> REPLICA sync started 
1:S 22 Sep 2020 09:53:33.906 * Non blocking connect for SYNC fired the event. 
1:S 22 Sep 2020 09:53:33.906 * Master replied to PING, replication can continue... 
1:S 22 Sep 2020 09:53:33.907 * Partial resynchronization not possible (no cached master) 
1:S 22 Sep 2020 09:53:36.150 * Full resync from master: bd87e85aa41950b9844c1bcb29a7870b96b53f79:804599892 
1:S 22 Sep 2020 09:53:41.163 # Opening the temp file needed for MASTER <-> REPLICA synchronization: I/O error 
1:S 22 Sep 2020 09:53:41.864 * Connecting to MASTER 10.43.6.52:6379 
1:S 22 Sep 2020 09:53:41.864 * MASTER <-> REPLICA sync started 
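
The recurring "Opening the temp file needed for MASTER <-> REPLICA synchronization: I/O error" line means the replica cannot write the temporary RDB file on its /data volume while receiving the full sync. A quick way to confirm whether the volume itself is the problem is to write to /data from inside the replica pod; this is only a sketch, and the pod/container names below are placeholders for whatever your release actually created:

# Assumed pod/container names from the redis-ha StatefulSet; adjust to your release.
kubectl exec -it redis-ha-server-1 -c redis -- sh -c \
  'dd if=/dev/zero of=/data/iotest bs=1M count=64 conv=fsync && rm -f /data/iotest'

If dd fails or stalls, the problem sits below Redis (the Longhorn volume or its attachment), not in replication itself.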

Slave 2 error

redis-cli role
1) "slave"
2) "10.43.6.52"
3) (integer) 6379
4) "connected"
5) (integer) 809074465

6049:C 22 Sep 2020 09:55:55.091 # Failed opening the RDB file dump.rdb (in server root dir /data) for saving: I/O error 
1:S 22 Sep 2020 09:55:55.188 # Background saving error 
1:S 22 Sep 2020 09:56:01.002 * 1 changes in 30 seconds. Saving... 
1:S 22 Sep 2020 09:56:01.002 * Background saving started by pid 6050 
6050:C 22 Sep 2020 09:56:01.004 # Failed opening the RDB file dump.rdb (in server root dir /data) for saving: I/O error 
1:S 22 Sep 2020 09:56:01.102 # Background saving error 
1:S 22 Sep 2020 09:56:07.013 * 1 changes in 30 seconds. Saving... 
1:S 22 Sep 2020 09:56:07.014 * Background saving started by pid 6051 
6051:C 22 Sep 2020 09:56:07.016 # Failed opening the RDB file dump.rdb (in server root dir /data) for saving: I/O error 
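
The "Failed opening the RDB file dump.rdb (in server root dir /data) for saving: I/O error" entries are the same symptom on the scheduled-save path, which again points at the PVC backing /data rather than at Redis. A hedged way to check the Longhorn side (assuming Longhorn runs in the default longhorn-system namespace and exposes its usual CRDs):

kubectl get pvc -n <redis-namespace>
kubectl -n longhorn-system get volumes.longhorn.io
kubectl -n longhorn-system get replicas.longhorn.io | grep <volume-name>

Degraded or faulted Longhorn volumes/replicas here would explain intermittent I/O errors on /data.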

Master error

redis-cli role
1) "master"
2) (integer) 809012256
3) 1) 1) "10.43.254.123"
      2) "6379"
      3) "809011980"
   2) 1) "10.43.229.244"
      2) "6379"
      3) "0"

3944:C 22 Sep 2020 09:57:16.102 * RDB: 0 MB of memory used by copy-on-write 
1:M 22 Sep 2020 09:57:16.176 * Background saving terminated with success 
1:M 22 Sep 2020 09:57:16.176 * Starting BGSAVE for SYNC with target: replicas sockets 
1:M 22 Sep 2020 09:57:16.177 * Background RDB transfer started by pid 3945 
1:M 22 Sep 2020 09:57:21.283 # Connection with replica 10.43.229.244:6379 lost. 
1:M 22 Sep 2020 09:57:21.286 # Background transfer error 
1:M 22 Sep 2020 09:57:21.601 * Replica 10.43.229.244:6379 asks for synchronization 
1:M 22 Sep 2020 09:57:21.601 * Full resync requested by replica 10.43.229.244:6379 
1:M 22 Sep 2020 09:57:21.601 * Delay next BGSAVE for diskless SYNC 
1:M 22 Sep 2020 09:57:27.241 * Starting BGSAVE for SYNC with target: replicas sockets 
1:M 22 Sep 2020 09:57:27.243 * Background RDB transfer started by pid 3946 
1:M 22 Sep 2020 09:57:32.254 # Connection with replica 10.43.229.244:6379 lost. 
1:M 22 Sep 2020 09:57:32.266 # Background transfer error 
1:M 22 Sep 2020 09:57:32.563 * Replica 10.43.229.244:6379 asks for synchronization 
1:M 22 Sep 2020 09:57:32.563 * Full resync requested by replica 10.43.229.244:6379 
1:M 22 Sep 2020 09:57:32.563 * Delay next BGSAVE for diskless SYNC 
1:M 22 Sep 2020 09:57:38.304 * Starting BGSAVE for SYNC with target: replicas sockets 
1:M 22 Sep 2020 09:57:38.304 * Background RDB transfer started by pid 3947 
1:M 22 Sep 2020 09:57:43.315 # Connection with replica 10.43.229.244:6379 lost. 
1:M 22 Sep 2020 09:57:43.476 # Background transfer error 
1:M 22 Sep 2020 09:57:43.517 * Replica 10.43.229.244:6379 asks for synchronization 
1:M 22 Sep 2020 09:57:43.517 * Full resync requested by replica 10.43.229.244:6379 
1:M 22 Sep 2020 09:57:43.517 * Delay next BGSAVE for diskless SYNC 
1:M 22 Sep 2020 09:57:47.098 * 1 changes in 30 seconds. Saving... 
1:M 22 Sep 2020 09:57:47.098 * Background saving started by pid 3948 
3948:C 22 Sep 2020 09:57:47.124 * DB saved on disk 
3948:C 22 Sep 2020 09:57:47.124 * RDB: 0 MB of memory used by copy-on-write 
1:M 22 Sep 2020 09:57:47.199 * Background saving terminated with success 
1:M 22 Sep 2020 09:57:47.199 * Starting BGSAVE for SYNC with target: replicas sockets 
1:M 22 Sep 2020 09:57:47.199 * Background RDB transfer started by pid 3949 
3949:C 22 Sep 2020 09:57:47.255 * RDB: 1 MB of memory used by copy-on-write 
1:M 22 Sep 2020 09:57:47.299 * Background RDB transfer terminated with success 
1:M 22 Sep 2020 09:57:47.299 # Slave 10.43.229.244:6379 correctly received the streamed RDB file. 
1:M 22 Sep 2020 09:57:47.299 * Streamed RDB transfer with replica 10.43.229.244:6379 succeeded (socket). Waiting for REPLCONF ACK from slave to enable streaming 
1:M 22 Sep 2020 09:57:52.214 # Connection with replica 10.43.229.244:6379 lost. 
1:M 22 Sep 2020 09:57:52.517 * Replica 10.43.229.244:6379 asks for synchronization 
1:M 22 Sep 2020 09:57:52.517 * Full resync requested by replica 10.43.229.244:6379 
1:M 22 Sep 2020 09:57:52.518 * Delay next BGSAVE for diskless SYNC 
1:M 22 Sep 2020 09:57:58.355 * Starting BGSAVE for SYNC with target: replicas sockets 
1:M 22 Sep 2020 09:57:58.357 * Background RDB transfer started by pid 3950 
1:M 22 Sep 2020 09:58:03.422 # Connection with replica 10.43.229.244:6379 lost. 
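
From the master's point of view this is the same failure seen from the other side: each diskless full sync starts, the replica connection is lost a few seconds later (presumably when the replica fails to write the received RDB to /data), and the master reschedules the next BGSAVE for SYNC. To capture both ends while it is happening, a simple watch loop like the following can be used (pod names are placeholders):

while true; do
  date
  kubectl exec redis-ha-server-0 -c redis -- redis-cli info replication
  kubectl exec redis-ha-server-1 -c redis -- redis-cli info persistence | grep -E 'rdb_last_bgsave_status|rdb_bgsave_in_progress'
  sleep 10
done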

Redis configuration:

dir "/data"
port 6379
maxmemory 0
maxmemory-policy volatile-lru
min-replicas-max-lag 5
min-replicas-to-write 1
rdbchecksum yes
rdbcompression yes
repl-diskless-sync yes
save 30 1
timeout 1000
slaveof 10.43.254.123 6379
slave-announce-ip 10.43.6.52
slave-announce-port 6379
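
A note on how these settings interact, worth keeping in mind when reading the logs above: repl-diskless-sync yes makes the master stream the RDB straight to the replica sockets, but the replica still writes what it receives to a temp file under dir "/data" (diskless loading on the replica side does not exist before Redis 6 and is off by default even there), and save 30 1 additionally forces a dump.rdb write whenever there was a change in the last 30 seconds, so every sync and save path touches the Longhorn volume. With min-replicas-to-write 1 and min-replicas-max-lag 5, the master will also start refusing writes if no replica is connected with a lag under 5 seconds. The effective values and the last save status can be confirmed at runtime with plain redis-cli (nothing here is chart-specific):

redis-cli config get repl-diskless-sync
redis-cli config get save
redis-cli info persistence | grep -E 'rdb_last_bgsave_status|rdb_last_save_time'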

My thoughts so far:

  • The keys come from RabbitMQ, and sometimes developers stop a consumer so that messages stack up; releasing the backlog could put a heavy load on Redis, but I haven't found any logs supporting this (a quick load check is sketched after this list).
  • The Longhorn storage class might be broken, but I haven't found any logs pointing to that either.
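
For the first point, the load theory is easy to check directly against the master rather than from the application side; a few plain redis-cli probes (run inside the master pod or via kubectl exec):

redis-cli info memory | grep used_memory_human
redis-cli info stats | grep -E 'instantaneous_ops_per_sec|total_net_input_bytes'
redis-cli --latency

Nothing in the logs above looks like a memory or throughput problem, though; the 0-1 MB copy-on-write figures during BGSAVE suggest a nearly idle dataset.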

I'm open to any suggestions.


1 Answer


An update on this: it was caused by the Longhorn storage class. After migrating to SSD-backed storage everything works fine.
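
For anyone hitting the same thing, the fix amounted to putting the Redis PVCs on an SSD-backed storage class. A hedged sketch of what that could look like with this chart; the release name, the storage class name, and the exact value key are assumptions, so check the values.yaml of your chart version:

# Existing PVCs keep their old storage class, so the StatefulSet's volumes
# usually have to be deleted and recreated for the change to take effect.
helm upgrade redis-ha stable/redis-ha --version 4.4.4 \
  --set persistentVolume.enabled=true \
  --set persistentVolume.storageClass=longhorn-ssd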

Answered 2021-03-31T21:33:02.527