Redis Benchmark leading to synchronization issues between master and slaves | Redis 7.0.5

Ankit Gupta

Nov 16, 2022, 5:14:33 AM11/16/22
to Redis DB
Hello Experts,

We are testing Redis 7.0.5 before rolling out this version to production. For the testing we are using the command below:

  • redis-benchmark -p 6382 -c 1000 -r 1700000 -n 1700000 -t hset -d 18000 -P 750
This command inserted around 21 GB of data into Redis, but we observed that data syncing between the master and slave nodes didn't complete even after 20 minutes. Please see the logs below:

11:M 16 Nov 2022 09:17:19.695 * Replica 10.3.50.102:6379 asks for synchronization
11:M 16 Nov 2022 09:17:19.695 * Unable to partial resync with replica 10.3.50.102:6379 for lack of backlog (Replica request was: 3866175099).
11:M 16 Nov 2022 09:17:19.695 * Can't attach the replica to the current BGSAVE. Waiting for next BGSAVE for SYNC
11:M 16 Nov 2022 09:17:19.695 * Replica 10.3.52.107:6381 asks for synchronization
11:M 16 Nov 2022 09:17:19.695 * Unable to partial resync with replica 10.3.52.107:6381 for lack of backlog (Replica request was: 3446617532).
11:M 16 Nov 2022 09:17:19.695 * Can't attach the replica to the current BGSAVE. Waiting for next BGSAVE for SYNC
11:M 16 Nov 2022 09:17:19.806 # Background saving terminated by signal 10
11:M 16 Nov 2022 09:17:19.806 * Starting BGSAVE for SYNC with target: disk
11:M 16 Nov 2022 09:17:20.144 * Background saving started by pid 101
101:C 16 Nov 2022 09:20:10.362 * DB saved on disk
101:C 16 Nov 2022 09:20:10.462 * Fork CoW for RDB: current 621 MB, peak 621 MB, average 621 MB
11:M 16 Nov 2022 09:20:10.742 * Background saving terminated with success
11:M 16 Nov 2022 09:22:55.841 # Write error sending DB to replica: Connection reset by peer
11:M 16 Nov 2022 09:22:55.841 # Connection with replica client id #1099 lost.
11:M 16 Nov 2022 09:22:55.850 * Replica 10.3.50.102:6379 asks for synchronization
11:M 16 Nov 2022 09:22:55.850 * Unable to partial resync with replica 10.3.50.102:6379 for lack of backlog (Replica request was: 3866175099).
11:M 16 Nov 2022 09:22:55.850 * Starting BGSAVE for SYNC with target: disk
11:M 16 Nov 2022 09:22:56.168 * Background saving started by pid 104
11:M 16 Nov 2022 09:22:56.523 * Synchronization with replica 10.3.52.107:6381 succeeded
104:C 16 Nov 2022 09:31:08.475 * DB saved on disk
104:C 16 Nov 2022 09:31:08.568 * Fork CoW for RDB: current 1 MB, peak 1 MB, average 1 MB
11:M 16 Nov 2022 09:31:10.804 * Background saving terminated with success
11:M 16 Nov 2022 09:35:57.878 # Connection with replica client id #1101 lost.
11:M 16 Nov 2022 09:35:57.879 * Replica 10.3.50.102:6379 asks for synchronization
11:M 16 Nov 2022 09:35:57.879 * Unable to partial resync with replica 10.3.50.102:6379 for lack of backlog (Replica request was: 3866175099).
11:M 16 Nov 2022 09:35:57.879 * Starting BGSAVE for SYNC with target: disk
11:M 16 Nov 2022 09:35:58.191 * Background saving started by pid 105
105:C 16 Nov 2022 09:45:13.829 * DB saved on disk
105:C 16 Nov 2022 09:45:13.925 * Fork CoW for RDB: current 7 MB, peak 7 MB, average 7 MB
11:M 16 Nov 2022 09:45:14.124 * Background saving terminated with success
11:M 16 Nov 2022 09:47:59.335 # Connection with replica client id #1102 lost.
11:M 16 Nov 2022 09:47:59.337 * Replica 10.3.50.102:6379 asks for synchronization
11:M 16 Nov 2022 09:47:59.337 * Unable to partial resync with replica 10.3.50.102:6379 for lack of backlog (Replica request was: 3866175099).
11:M 16 Nov 2022 09:47:59.337 * Starting BGSAVE for SYNC with target: disk
11:M 16 Nov 2022 09:47:59.656 * Background saving started by pid 139
11:M 16 Nov 2022 09:54:31.039 # Connection with replica 10.3.52.107:6381 lost.
11:M 16 Nov 2022 09:54:31.316 # Connection with replica 10.3.50.102:6379 lost.
139:signal-handler (1668592471) Received SIGUSR1 in child, exiting now.
11:M 16 Nov 2022 09:54:31.818 # Background saving terminated by signal 10
11:S 16 Nov 2022 10:01:42.719 * Before turning into a replica, using my own master parameters to synthesize a cached master: I may be able to synchronize with the new master with just a partial transfer.
11:S 16 Nov 2022 10:01:42.719 * Connecting to MASTER 10.3.52.107:6381
11:S 16 Nov 2022 10:01:42.719 * MASTER <-> REPLICA sync started
11:S 16 Nov 2022 10:01:42.719 * REPLICAOF 10.3.52.107:6381 enabled (user request from 'id=1120 addr=10.3.50.102:41422 laddr=172.17.0.3:6382 fd=12 name=sentinel-58a5b870-cmd age=401 idle=0 flags=x db=0 sub=0 psub=0 ssub=0 multi=4 qbuf=226 qbuf-free=20248 argv-mem=4 multi-mem=205 rbs=2048 rbp=1024 obl=45 oll=0 omem=0 tot-mem=23625 events=r cmd=exec user=default redir=-1 resp=2')
11:S 16 Nov 2022 10:01:42.749 # CONFIG REWRITE executed with success.
11:S 16 Nov 2022 10:01:42.749 * Non blocking connect for SYNC fired the event.
11:S 16 Nov 2022 10:01:42.749 * Master replied to PING, replication can continue...
11:S 16 Nov 2022 10:01:42.750 * Trying a partial resynchronization (request bd3d8dd8abf63a071a75ded6bce1c0f406bdbb83:34125929899).
11:S 16 Nov 2022 10:01:42.750 * Full resync from master: 5ff5e4f82dc501c9c5fb3a1a4619d11b245cbd0e:34125844776
11:S 16 Nov 2022 10:03:15.777 * MASTER <-> REPLICA sync: receiving 21054514277 bytes from master to disk
11:S 16 Nov 2022 10:06:03.335 * Discarding previously cached master state.
11:S 16 Nov 2022 10:06:03.335 * MASTER <-> REPLICA sync: Flushing old data
11:S 16 Nov 2022 10:06:06.070 * MASTER <-> REPLICA sync: Loading DB in memory
11:S 16 Nov 2022 10:06:06.080 * Loading RDB produced by version 7.0.5
11:S 16 Nov 2022 10:06:06.080 * RDB age 348 seconds
11:S 16 Nov 2022 10:06:06.080 * RDB memory usage when created 21682.32 Mb
11:S 16 Nov 2022 10:06:36.085 * Done loading RDB, keys loaded: 1, keys expired: 0.
11:S 16 Nov 2022 10:06:36.085 * MASTER <-> REPLICA sync: Finished with success

Logs from another node:
9:S 16 Nov 2022 10:05:43.344 * MASTER <-> REPLICA sync started
9:S 16 Nov 2022 10:05:43.344 # Error condition on socket for SYNC: Connection refused


Is there any way to optimize the sync process? We write to the masters and read from the slaves, so if data isn't present on the slave nodes we risk reloading it from the database.
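
The "lack of backlog" and "Connection reset by peer" lines in the logs point, as an assumption rather than a confirmed diagnosis, at a replication backlog and replica output buffer that are too small for this write volume. A minimal redis.conf tuning sketch — the values are illustrative placeholders, not tested recommendations, and should be sized to the actual write rate and RDB size:

  # Illustrative values only. A larger backlog lets replicas partially
  # resync after a brief disconnect instead of forcing a full resync.
  repl-backlog-size 1gb
  # Raise the replica output buffer so the master doesn't drop replicas
  # mid full sync (format: <hard limit> <soft limit> <soft seconds>).
  client-output-buffer-limit replica 4gb 2gb 120
  # Allow more time for the bulk RDB transfer before the sync times out.
  repl-timeout 300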

