----- Original Message -----
> Hi Damien,
>
> Sorry for the delay in my response, I've been to hospital with my wife.
>
> I have set master-max=100 and ordered=true
>
> I startup ONLY first node vdiccs01:
>
> [root@vdiccs01 ~]# crm_mon -1A
> Last updated: Tue Oct 25 20:44:39 2016 Last change: Tue Oct 25
> 20:40:31 2016 by root via crm_attribute on vdiccs02
> Stack: corosync
> Current DC: vdiccs01 (version 1.1.13-10.el7_2.4-44eb2dd) - partition
> WITHOUT quorum
> 3 nodes and 15 resources configured
>
> Online: [ vdiccs01 ]
> OFFLINE: [ vdiccs02 vdiccs03 ]
>
> Clone Set: nfs_setup-clone [nfs_setup]
> Started: [ vdiccs01 ]
> Clone Set: nfs-mon-clone [nfs-mon]
> Started: [ vdiccs01 ]
> Clone Set: nfs-grace-clone [nfs-grace]
> Started: [ vdiccs01 ]
> Clone Set: vdic-nfs-cluster-clone [vdic-nfs-cluster]
> Started: [ vdiccs01 ]
> Master/Slave Set: vdic-galera-cluster-master [vdic-galera-cluster]
> *Slaves: [ vdiccs01 ]*
>
> Node Attributes:
> * Node vdiccs01:
> + ganesha-active : 1
> + grace-active : 1
> + vdic-galera-cluster-last-committed : 30844
>
>
> Now, I start second node vdiccs02:
>
> [root@vdiccs01 ~]# crm_mon -1A
> Last updated: Tue Oct 25 20:50:23 2016 Last change: Tue Oct 25
> 20:48:39 2016 by root via crm_attribute on vdiccs02
> Stack: corosync
> Current DC: vdiccs01 (version 1.1.13-10.el7_2.4-44eb2dd) - partition with
> quorum
> 3 nodes and 15 resources configured
>
> Online: [ vdiccs01 vdiccs02 ]
> OFFLINE: [ vdiccs03 ]
>
> Clone Set: nfs_setup-clone [nfs_setup]
> Started: [ vdiccs01 vdiccs02 ]
> Clone Set: nfs-mon-clone [nfs-mon]
> Started: [ vdiccs01 vdiccs02 ]
> Clone Set: nfs-grace-clone [nfs-grace]
> Started: [ vdiccs01 vdiccs02 ]
> Clone Set: vdic-nfs-cluster-clone [vdic-nfs-cluster]
> Started: [ vdiccs01 vdiccs02 ]
> Master/Slave Set: vdic-galera-cluster-master [vdic-galera-cluster]
> *Slaves: [ vdiccs01 vdiccs02 ]*
>
> Node Attributes:
> * Node vdiccs01:
> + ganesha-active : 1
> + grace-active : 1
> + vdic-galera-cluster-last-committed : 30844
> * Node vdiccs02:
> + ganesha-active : 1
> + grace-active : 1
> + vdic-galera-cluster-last-committed : 30844
>
> In this scenario both nodes are expected to start, aren't they?
>
> In logs:
>
> galera(vdic-galera-cluster)[13209]: 2016/10/25_20:52:04 INFO: *Waiting
> on node <vdiccs03> to report database status before Master instances can
> start.*
> *My conclusion:*
>
> If I set master-max=1, the system is able to start the mysqld daemon on
> vdiccs01, leaving vdiccs02 and vdiccs03 as slaves. If I power off that node
> (vdiccs01), the remaining nodes vdiccs02 and vdiccs03 require node vdiccs01
> to be started in order to get the most recent commit.
>
master-max should be set to 3; it's just a hint to pacemaker for how many
galera servers it's allowed to spawn. In your case, you want 1 galera server
to be spawned per host, hence master-max=3.
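For reference, meta attributes like master-max can be changed with pcs. A
minimal sketch, assuming the master/slave resource is named
vdic-galera-cluster-master as shown in your crm_mon output (adjust the name
if yours differs):

```shell
# Allow up to 3 promoted (master) galera instances, i.e. one per node.
pcs resource meta vdic-galera-cluster-master master-max=3
```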
> That is, the way the cluster works, master-max=1 can never work; it must be
> set to at least master-max=2 so that the system can rely on a running mysql
> daemon holding the last commit number.
>
> *My question:*
>
> Is this correct?
Well, you have it correct that in order to bootstrap the galera cluster with
pacemaker, the galera resource agent expects all nodes to be available for
fetching the last seqno before determining which node to bootstrap the cluster
from.
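To illustrate the idea (this is not the resource agent's actual code, just a
sketch with hypothetical node/seqno data): the agent waits until every node
has reported its last-committed seqno, then bootstraps from the node with the
highest one.

```python
def pick_bootstrap_node(reported, all_nodes):
    """Return the node to bootstrap from, or None while any node
    has not yet reported its last-committed seqno."""
    # Refuse to choose while a node's seqno is still unknown:
    # bootstrapping early could discard the most recent transactions.
    if set(reported) != set(all_nodes):
        return None
    # Otherwise bootstrap from the node with the highest seqno.
    return max(reported, key=reported.get)

nodes = ["vdiccs01", "vdiccs02", "vdiccs03"]
# vdiccs03 has not reported yet, so no master can be promoted:
print(pick_bootstrap_node({"vdiccs01": 30844, "vdiccs02": 30844}, nodes))
# Once all three report, the most recent node wins:
print(pick_bootstrap_node(
    {"vdiccs01": 30844, "vdiccs02": 30844, "vdiccs03": 30850}, nodes))
```

This is exactly why your two-node scenario stalls with "Waiting on node
<vdiccs03> to report database status".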
> Is there any way to configure system with master-max=1 ?
You don't want that; it would mean only one galera node is allowed to run.
> Is there any way to force stickiness of master in any special node?
You don't want that either: should you ever need to re-bootstrap, you want to
make sure you start the cluster from the most up-to-date node.
What you may want, however, is a means to override the automatic boot
sequence, say when you know that a node is out for maintenance and it's not
the most recent node. In that case you can follow this procedure to force
bootstrap manually:
http://damien.ciabrini.name/posts/2015/10/galera-boot-process-in-open-stack-ha-and-manual-override.html
However, please make sure you understand how pacemaker's boot process works so
you don't risk losing data when forcing bootstrap manually :)
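For instance, before forcing a bootstrap you can check each node's
last-committed seqno yourself. A sketch, assuming a standard galera data
directory under /var/lib/mysql:

```shell
# On each node, read the seqno recorded at the last clean shutdown...
cat /var/lib/mysql/grastate.dat
# ...or, if the node crashed (seqno shows as -1 above), recover it from InnoDB:
sudo -u mysql mysqld_safe --wsrep-recover
```

Only force the bootstrap on the node reporting the highest seqno.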
>
> *thanks a lot*