my.cnf and wsrep.cnf


Munazir

Dec 18, 2011, 7:26:37 AM12/18/11
to codership
I have installed MySQL Galera on 4 nodes. In wsrep.cnf I put this:

node1.galera - 10.10.5.105 - wsrep.cnf :
wsrep_cluster_address="gcomm://"
node2.galera - 10.10.5.106 - wsrep.cnf :
wsrep_cluster_address="gcomm://10.10.5.105"
node3.galera - 10.10.5.107 - wsrep.cnf :
wsrep_cluster_address="gcomm://10.10.5.106"
node4.galera - 10.10.5.108 - wsrep.cnf :
wsrep_cluster_address="gcomm://10.10.5.107"

Is the above correct, or do I have to change anything?

And can anyone provide me with good my.cnf configuration parameters for
use with a 50 GB database? Each node has 20 GB of RAM (I can increase
it further).

Thanks in advance.

Waiting for reply.

Thanks
Munazir

Alex Yurchenko

Dec 18, 2011, 12:49:49 PM12/18/11
to codersh...@googlegroups.com
On 18.12.2011 15:26, Munazir wrote:
> I have installed MySQL Galera on 4 nodes. In wsrep.cnf I put this:
>
> node1.galera - 10.10.5.105 - wsrep.cnf :
> wsrep_cluster_address="gcomm://"

NEVER-EVER leave "gcomm://" in my.cnf after node startup. In this case
it could be changed to gcomm://10.10.5.108
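Concretely, once the cluster is up, node1's wsrep.cnf might be edited along these lines (a sketch using the addresses from this thread):

```ini
# node1.galera (10.10.5.105) - wsrep.cnf, edited AFTER the cluster is running.
# Pointing at another node (here node4) means a restarted node1 rejoins
# the existing cluster instead of bootstrapping a new, independent one.
wsrep_cluster_address="gcomm://10.10.5.108"
```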

> node2.galera - 10.10.5.106 - wsrep.cnf :
> wsrep_cluster_address="gcomm://10.10.5.105"
> node3.galera - 10.10.5.107 - wsrep.cnf :
> wsrep_cluster_address="gcomm://10.10.5.106"
> node4.galera - 10.10.5.108 - wsrep.cnf :
> wsrep_cluster_address="gcomm://10.10.5.107"
>
> Is the above correct, or do I have to change anything?
>
> And can anyone provide me with good my.cnf configuration parameters
> for use with a 50 GB database? Each node has 20 GB of RAM (I can
> increase it further).

innodb_buffer_pool_size=17G
innodb_log_file_size=2G
wsrep_sst_method=rsync

You also might need a serious number of wsrep_slave_threads.
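Put together, a starting point for the [mysqld] section might look like this (the wsrep_slave_threads value is illustrative only, not from this thread; tune it to your workload):

```ini
[mysqld]
# ~85% of the 20 GB RAM for the InnoDB buffer pool, as suggested above
innodb_buffer_pool_size=17G
# larger redo logs reduce checkpoint pressure with a 50 GB data set
innodb_log_file_size=2G
# rsync SST copies the data directory directly between nodes
wsrep_sst_method=rsync
# illustrative value only; raise it if applying replicated writesets lags
wsrep_slave_threads=16
```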

Regards,
Alex

SyRenity

Dec 28, 2011, 6:51:01 AM12/28/11
to codersh...@googlegroups.com
Hi.


> NEVER-EVER leave "gcomm://" in my.cnf after node startup. In this case
> it could be changed to gcomm://10.10.5.108

Does this apply to 2 nodes as well? I usually leave the 1st node as "gcomm://", and have never had any trouble with this approach.

On the other hand, using "gcomm://local_ip" or "gcomm://local_dns_name" of the 1st node used to work in 0.8, but not anymore.

Regards.

Alex Yurchenko

Dec 28, 2011, 7:48:20 AM12/28/11
to codersh...@googlegroups.com
On 28.12.2011 14:51, SyRenity wrote:
> Hi.
>
>> NEVER-EVER leave "gcomm://" in my.cnf after node startup. In this
>> case it could be changed to gcomm://10.10.5.108
>
> Does this apply to 2 nodes as well? I usually leave the 1st node as
> "gcomm://", and have never had any trouble with this approach.

If this node is restarted for whatever reason, you'll end up with two
independent clusters.

> On the other hand, using "gcomm://local_ip" or "gcomm://local_dns_name"
> of the 1st node used to work in 0.8, but not anymore.

Instructing the node to connect to itself is pointless and suggests a
typo in the configuration, so it makes sense to stop right away. I don't
think it was ever supported, and even if it passed, it was just ignored.

Regards,
Alex

SyRenity

Dec 28, 2011, 8:01:28 AM12/28/11
to codersh...@googlegroups.com
Wouldn't two nodes pointing to one another (without an arbitrator) run a risk of split-brain (in case both nodes restart)?

Alex Yurchenko

Dec 28, 2011, 9:40:50 AM12/28/11
to codersh...@googlegroups.com

It won't be a split-brain; it will just be a non-primary component. If
at some point all cluster nodes are down, that means the cluster in its
current incarnation is no more. You need to bootstrap it again. In a
sense it will be a new cluster instance consisting of new node
instances.
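A sketch of that rebootstrap sequence, expressed as wsrep.cnf edits (the peer address is illustrative, reusing this thread's subnet):

```ini
# 1. On the node chosen to come up first, temporarily set:
wsrep_cluster_address="gcomm://"
# 2. Start mysqld, then revert my.cnf to point at a peer again, e.g.:
#    wsrep_cluster_address="gcomm://10.10.5.106"
# 3. Start the remaining nodes with their usual peer addresses.
```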

SyRenity

Dec 28, 2011, 2:36:09 PM12/28/11
to codersh...@googlegroups.com
I see, so I will need to decide which comes first and which comes next.

But in normal operation, having 2 nodes pointing to each other is fine? And the arbiter can point to either of these?

Alex Yurchenko

Dec 28, 2011, 3:21:01 PM12/28/11
to codersh...@googlegroups.com
On 28.12.2011 22:36, SyRenity wrote:
> I see, so I will need to decide which comes first and which comes
> next.

yes

> But in normal operation, having 2 nodes pointing to each other is
> fine? And

It is not just fine; it allows automatic restart (say, in case of a
crash or machine reboot) and prevents cluster splits in such cases.

> the arbiter can point to either of these?

yes.

Gessy Junior

May 22, 2012, 10:25:42 AM5/22/12
to codersh...@googlegroups.com
Hi Alexey,


Is it possible to use a load balancer VIP address in gcomm:// ? Let me explain... I have a RHEL Piranha LVS configured, and I associate an IP address to load-balance MySQL connections with failover. After starting the cluster with the first node using gcomm://, I configure the other nodes with gcomm://<node1 address>. But is it OK to configure gcomm:// with the VIP address used by the load balancer?


Thanks a lot
Gessy Jr.

Alexey Yurchenko

May 25, 2012, 2:02:55 PM5/25/12
to codersh...@googlegroups.com


On Tuesday, May 22, 2012 5:25:42 PM UTC+3, Gessy Junior wrote:
> Hi Alexey,
>
> Is it possible to use a load balancer VIP address in gcomm:// ? Let me
> explain... I have a RHEL Piranha LVS configured, and I associate an IP
> address to load-balance MySQL connections with failover. After starting
> the cluster with the first node using gcomm://, I configure the other
> nodes with gcomm://<node1 address>. But is it OK to configure gcomm://
> with the VIP address used by the load balancer?
Hi,

Not sure what you mean by that. The VIP address is obviously for clients to connect to and is supposed to abstract the cluster, i.e. it does not identify any node in particular. Using it as a wsrep_cluster_address might have some purpose, but I don't think it is what you want. The primary reason is that LVS in your setup most probably works asynchronously from Galera, so there is no guarantee that it points to a live node. Moreover, I guess there is no guarantee that it points to ANOTHER node, or to any node at all. What you want when you (re)start a node is for it to connect first and foremost to ANOTHER node.

If you have only 2 nodes, node1 and node2, it is very simple: you start node1 with wsrep_cluster_address=gcomm://, but then change it to gcomm://node2 in my.cnf. Node2 should have wsrep_cluster_address=gcomm://node1. The empty gcomm:// address should be used only to bootstrap a new cluster.
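As a sketch, the two configs would end up like this (node1 and node2 are the hostnames used in the text above):

```ini
# node1 - bootstrap with gcomm://, then change my.cnf to:
wsrep_cluster_address="gcomm://node2"

# node2 - from the start:
wsrep_cluster_address="gcomm://node1"
```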

Regards,
Alex