Galera Cluster as slave of MySQL/MariaDB master-master cluster

Sergio Charrua

Apr 26, 2021, 3:07:27 PM
to codership
Hello,

I have a 2-node Master-Master cluster on MySQL 5.6 that has been running for about 2.5 years, where both nodes replicate data to each other. The nodes are identified as Node1 and Node2, and each has a different server_id.

Due to some dependencies, we need to move to MariaDB 10.5, so I created a new 3-node Galera cluster. The nodes are identified as Node3, Node4 and Node5, and they all have the same server_id.

After dumping the data from Node2 to a .SQL file, I loaded the .SQL into Node3 without any problem, and then set Node3 up as a slave of Node2.
While Node4 and Node5 are stopped, Node3 replicates all the transactions from Node2 without any issues.
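
For reference, setting Node3 up as a slave was done with the usual CHANGE MASTER TO / START SLAVE statements, roughly like this (host, user, password and binlog coordinates below are placeholders, not the real values):

-- on Node3, after loading the dump taken from Node2
CHANGE MASTER TO
    MASTER_HOST='node2.example.com',
    MASTER_USER='repl_user',
    MASTER_PASSWORD='***',
    MASTER_LOG_FILE='mysql-bin.000123',
    MASTER_LOG_POS=4;
START SLAVE;
SHOW SLAVE STATUS\G   -- Slave_IO_Running and Slave_SQL_Running both say Yes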

But if I start Node4 (while Node5 is still offline), it starts replicating data from Node3 (while Node3 is still receiving data from Node2), and after a while the Galera cluster gets lost: on Node4 I get the following logs:

Apr 26 10:43:38 SIPDB04 mariadbd: 2021-04-26 10:43:38 2 [ERROR] Error in Log_event::read_log_event(): 'Found invalid event in binary log', data_len: 42, event_type: -94
Apr 26 10:43:38 SIPDB04 mariadbd: 2021-04-26 10:43:38 2 [ERROR] WSREP: applier could not read binlog event, seqno: 1232279, len: 115
Apr 26 10:43:38 SIPDB04 mariadbd: 2021-04-26 10:43:38 0 [Note] WSREP: Member 1(node2) initiates vote on 9ce34c76-a38f-11eb-8225-02b9187a0281:1232279,984da6543308f296:
Apr 26 10:43:38 SIPDB04 mariadbd: 2021-04-26 10:43:38 0 [Note] WSREP: Member 0(node1) responds to vote on 9ce34c76-a38f-11eb-8225-02b9187a0281:1232279,0000000000000000: Success
Apr 26 10:43:38 SIPDB04 mariadbd: 2021-04-26 10:43:38 0 [Warning] WSREP: Received bogus VOTE message: 1232279.0, from node 826d13f0-a672-11eb-b56b-8b095d1d2aa6, expected > 1232293. Ignoring.

After that I can't query Node4 anymore.
I also can't shut Node4 down cleanly, as it takes ages for MariaDB to stop (in fact, I had to kill the process).
On Node3 the slave status looks OK and the server seems to be up without any error, but it doesn't replicate anymore. I have to kill MariaDB on Node3 (a systemctl stop doesn't work) and restart it, and then the slave starts replicating from Node2 once again.

What am I doing wrong here? Is MariaDB Galera not supposed to support replicating from MySQL while propagating the changes to the other Galera nodes?

Here is my MariaDB Galera configuration (/etc/my.cnf.d/server.conf):
[mysqld]
log_slave_updates=1

[mariadb-10.5]
bind-address=0.0.0.0
binlog_format=ROW
default-storage-engine=innodb
innodb_autoinc_lock_mode=2
innodb_locks_unsafe_for_binlog=1
wsrep_on=ON
query_cache_size=0
query_cache_type=0
datadir=/var/lib/mysql
innodb_log_file_size=100M
innodb_file_per_table
innodb_flush_log_at_trx_commit=2
wsrep_provider=/usr/lib64/galera-4/libgalera_smm.so
wsrep_cluster_address="gcomm://10.19.139.10,10.19.139.11,10.19.139.12"
#wsrep_cluster_address="gcomm://"
wsrep_cluster_name='galera_cluster'
wsrep_node_address='10.19.139.10'
wsrep_node_name='node1'
wsrep_sst_method=rsync
wsrep_sst_auth=db_user:admin
server_id=3

This configuration is identical on Node4 and Node5; the only differences are the wsrep_node_address and wsrep_node_name parameters.
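
For example, on Node4 the only lines that differ are these two (the values shown here are just illustrative, following the addressing scheme from the cluster address list above):

wsrep_node_address='10.19.139.11'
wsrep_node_name='node2'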

Any clue?
Does anyone have a Galera cluster running in production as a slave of a MySQL/MariaDB master, replicating data from it?

Thanks in advance!

Sergio
