node1.galera - 10.10.5.105 - wsrep.cnf :
wsrep_cluster_address="gcomm://"
node2.galera - 10.10.5.106 - wsrep.cnf :
wsrep_cluster_address="gcomm://10.10.5.105"
node3.galera - 10.10.5.107 - wsrep.cnf :
wsrep_cluster_address="gcomm://10.10.5.106"
node4.galera - 10.10.5.108 - wsrep.cnf :
wsrep_cluster_address="gcomm://10.10.5.107"
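For reference, a common alternative to the daisy-chain above is to list all cluster members on every node, so any node can rejoin through any surviving peer. This is a sketch, assuming a Galera version that accepts comma-separated address lists in the gcomm URL:

```ini
# Same wsrep.cnf fragment on every node (10.10.5.105-108).
# A restarting node will try each address in turn until one answers.
wsrep_cluster_address="gcomm://10.10.5.105,10.10.5.106,10.10.5.107,10.10.5.108"
```

The first node still has to be bootstrapped explicitly; after that, this setting lets any node come back without being pinned to a single neighbour.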
Is the above correct, or do I need to change anything?
Also, can anyone suggest good my.cnf configuration parameters for a
50 GB database? Each node has 20 GB RAM (I can increase it if needed).
Thanks in advance.
Waiting for reply.
Thanks
Munazir
> node2.galera - 10.10.5.106 - wsrep.cnf :
> wsrep_cluster_address="gcomm://10.10.5.105"
> node3.galera - 10.10.5.107 - wsrep.cnf :
> wsrep_cluster_address="gcomm://10.10.5.106"
> node4.galera - 10.10.5.108 - wsrep.cnf :
> wsrep_cluster_address="gcomm://10.10.5.107"
>
> Is the above correct, or do I need to change anything?
>
> Also, can anyone suggest good my.cnf configuration parameters for a
> 50 GB database? Each node has 20 GB RAM (I can increase it if needed).
innodb_buffer_pool_size=17G
innodb_log_file_size=2G
wsrep_sst_method=rsync
You also might need a serious number of wsrep_slave_threads.
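Pulling the suggestions above together, a my.cnf sketch for a 20 GB RAM node might look like this; the wsrep_slave_threads value is an assumption and should be tuned for your workload:

```ini
[mysqld]
# ~85% of the 20 GB RAM for the buffer pool, as suggested above
innodb_buffer_pool_size = 17G
innodb_log_file_size    = 2G
# rsync SST: simple and fast, but blocks the donor during transfer
wsrep_sst_method        = rsync
# "a serious number" of applier threads; 16 is a placeholder,
# often sized relative to CPU core count and write concurrency
wsrep_slave_threads     = 16
```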
Regards,
Alex
NEVER-EVER leave "gcomm://" in my.cnf after node startup. In this case
it could be changed to gcomm://10.10.5.108
If this node is restarted for whatever reason, you'll end up with two
independent clusters.
> On another hand, using "gcomm://local_ip/local_dns_name" of the 1st
> node used to work in 0.8 but not anymore.
Instructing the node to connect to itself is pointless and suggests a
typo in the configuration, so it makes sense to stop right away. I don't
think it was ever supported, and even if it was accepted, it was just
ignored.
Regards,
Alex
It won't be a split brain; it will just be a non-primary component. If
at some point all cluster nodes are down, the cluster in its current
incarnation is gone, and you need to bootstrap it again. In a sense it
will be a new cluster instance consisting of new node instances.
yes
> But in normal operation, having 2 nodes pointing to each other is
> fine? And
It is not just fine; it enables automatic restart (say, after a crash
or machine reboot) and prevents a cluster split in such cases.
> the arbiter can point any of these?
yes.
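A minimal sketch of the setup discussed above (two data nodes pointing at each other); the addresses are placeholders, not from the thread:

```ini
# node A (192.0.2.1) - wsrep.cnf
wsrep_cluster_address="gcomm://192.0.2.2"

# node B (192.0.2.2) - wsrep.cnf
wsrep_cluster_address="gcomm://192.0.2.1"
```

The arbitrator (garbd) would then be pointed at either of the two addresses; since it holds no data, it only participates in quorum voting.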
Hi Alexey,
Is it possible to use a load balancer VIP address in gcomm://? Let me
explain: I have RHEL Piranha LVS configured, and I associate an IP
address with it to load-balance MySQL connections with failover. After
starting the cluster with the first node on gcomm://, I configure the
other nodes with gcomm://<node1 address>. But is it OK to configure
gcomm:// with the VIP address used by the load balancer?