MariaDB

jefferson...@gmail.com

Jan 23, 2014, 9:56:38 AM
to prm-d...@googlegroups.com
Hi,

Sorry for my English.

What is the best way to use Pacemaker with MariaDB/Galera?

Regards,

Yves Trudeau

Jan 23, 2014, 11:19:25 AM
to prm-d...@googlegroups.com
I haven't created a specific PXC agent, but that would be fairly easy.

Regards,

Yves

jefferson...@gmail.com

Jan 23, 2014, 2:19:13 PM
to prm-d...@googlegroups.com
Hi,

Today I am using rsync; do I need to change to XtraDB?
Could you send me an example my.cnf?

Regards,

Yves Trudeau

Jan 23, 2014, 3:34:20 PM
to prm-d...@googlegroups.com
Hi,
   Not sure I am following you with rsync. Are you talking about the Galera SST? Percona XtraDB Cluster (PXC) is a MySQL database that uses the Galera library, like MariaDB. I was not talking about Percona XtraBackup.
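
To illustrate, a minimal sketch of the Galera-related my.cnf settings where the SST method is chosen (the library path, node IPs and SST credentials below are placeholders, not taken from your setup):

[mysqld]
binlog_format            = ROW
default_storage_engine   = InnoDB
innodb_autoinc_lock_mode = 2
wsrep_provider           = /usr/lib64/libgalera_smm.so                # placeholder path, depends on the distribution
wsrep_cluster_address    = gcomm://10.10.2.91,10.10.2.92,10.10.2.93   # placeholder node IPs
wsrep_sst_method         = rsync                                      # or xtrabackup / xtrabackup-v2
wsrep_sst_auth           = sstuser:sstpass                            # only needed for the xtrabackup SST methods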

Regards,

Yves

jefferson...@gmail.com

Feb 10, 2014, 8:18:39 AM
to prm-d...@googlegroups.com
Hi Yves,

I have restarted this work and now I understand the process.
But I cannot connect to the VIP; it must be something related to iptables.
I have rechecked all the config and information, and to me everything looks fine.
Do you know how I can debug this?

Regards,

Yves Trudeau

Feb 10, 2014, 8:49:21 AM
to prm-d...@googlegroups.com
Hi,
   Can you provide the output of these commands:

crm configure show

crm_mon -A1

Regards,

Yves

jefferson...@gmail.com

Feb 11, 2014, 9:49:08 AM
to prm-d...@googlegroups.com
Hi,

I am using pcs.

[root@teste ~]# pcs config
Cluster Name: 
Corosync Nodes:
 
Pacemaker Nodes:
 teste.results.intranet teste2.results.intranet 

Resources: 
 Clone: ClusterIP-clone
  Meta Attrs: clusterip_hash=sourceip 
  Resource: ClusterIP (class=ocf provider=percona type=IPaddr3)
   Attributes: ip=10.10.2.96 nic=eth1 
   Meta Attrs: resource-stickiness=0 
   Operations: monitor interval=10s (ClusterIP-monitor-interval-10s)

Stonith Devices: 
Fencing Levels: 

Location Constraints:
Ordering Constraints:
Colocation Constraints:

Cluster Properties:
 cluster-infrastructure: classic openais (with plugin)
 dc-version: 1.1.10-14.el6_5.2-368c726
 expected-quorum-votes: 2
 stonith-enabled: false


[root@teste ~]# 
[root@teste ~]# crm_mon -A1
Last updated: Tue Feb 11 12:42:00 2014
Last change: Sun Feb  9 14:38:57 2014 via cibadmin on teste.results.intranet
Stack: classic openais (with plugin)
Current DC: teste2.results.intranet - partition with quorum
Version: 1.1.10-14.el6_5.2-368c726
2 Nodes configured, 2 expected votes
2 Resources configured


Online: [ teste.results.intranet teste2.results.intranet ]

 Clone Set: ClusterIP-clone [ClusterIP]
     Started: [ teste.results.intranet teste2.results.intranet ]

Node Attributes:
* Node teste.results.intranet:
    + ClusterIP_clone_count           : 1         
* Node teste2.results.intranet:
    + ClusterIP_clone_count           : 1         
[root@teste ~]# 

Regards,

Yves Trudeau

Feb 11, 2014, 3:13:56 PM
to prm-d...@googlegroups.com
You are missing the meta attributes for the clone set; in my 3-node examples it was 'meta clone-max="3" clone-node-max="3" globally-unique="true"'. There is also no location rule using the ClusterIP_clone_count attribute, like this:

location loc-distrib-cluster-vip cl_cluster_vip \
        rule $id="loc-distrib-cluster-vip-rule" -1: p_cluster_vip_clone_count gt 1

which is mandatory to get the clones to redistribute themselves after they have all ended up on the same host. Start with these changes; it should help a lot.
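
With the resource names from your pcs output and your two nodes, that would look roughly like this in crm syntax (a sketch only; adjust clone-max/clone-node-max to your node count, the constraint id is just an example):

clone ClusterIP-clone ClusterIP \
        meta clone-max="2" clone-node-max="2" globally-unique="true" clusterip_hash="sourceip"
location loc-distrib-cluster-vip ClusterIP-clone \
        rule $id="loc-distrib-cluster-vip-rule" -1: ClusterIP_clone_count gt 1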

Regards,

Yves

jefferson...@gmail.com

Feb 17, 2014, 1:09:32 PM
to prm-d...@googlegroups.com
Hi,

I installed crmsh on my Linux box.
But the IP does not work yet; to me the config looks OK. Could you please check it for me? ;)

[root@srvmysqlm ~]# crm configure show
node srvmysql0.results.intranet \
        attributes standby="off"
node srvmysql1.results.intranet \
        attributes standby="off"
node srvmysql2.results.intranet \
        attributes standby="off"
node srvmysql3.results.intranet \
        attributes standby="off"
node srvmysqlm.results.intranet \
        attributes standby="off"
node srvmysqlm2.results.intranet \
        attributes standby="off"
primitive p_cluster_vip ocf:percona:IPaddr3 \
        params ip="10.10.2.99" nic="eth0" \
        meta resource-stickiness="0" \
        op monitor interval="10s"
primitive p_mysql_monit ocf:percona:mysql_monitor \
        params reader_attribute="readable_monit" writer_attribute="writable_monit" user="mysql" password="" pid="/banco/mysql2/mysqld.pid" socket="/var/lib/mysql/mysql.sock" max_slave_lag="5" cluster_type="pxc" \
        op monitor interval="15s" timeout="30s" OCF_CHECK_LEVEL="1"
clone cl_cluster_vip p_cluster_vip \
        meta clone-max="6" clone-node-max="6" globally-unique="true"
clone cl_mysql_monitor p_mysql_monit \
        meta clone-max="6" clone-node-max="1"
location loc-distrib-cluster-vip cl_cluster_vip \
        rule $id="loc-distrib-cluster-vip-rule" -1: p_cluster_vip_clone_count gt 1
location loc-enable-cluster-vip cl_cluster_vip \
        rule $id="loc-enable-cluster-vip-rule" 2: writable_monit eq 1
location loc-no-cluster-vip cl_cluster_vip \
        rule $id="loc-no-cluster-vip-rule" -inf: writable_monit eq 0
property $id="cib-bootstrap-options" \
        expected-quorum-votes="6" \
        stonith-enabled="false" \
        no-quorum-policy="ignore" \
        maintenance-mode="off" \
        dc-version="1.1.10-14.el6_5.2-368c726" \
        cluster-infrastructure="classic openais (with plugin)"

[root@srvmysqlm ~]# crm_mon -A1
Last updated: Mon Feb 17 15:04:05 2014
Last change: Mon Feb 17 14:20:50 2014 via cibadmin on srvmysqlm.results.intranet
Stack: classic openais (with plugin)
Current DC: srvmysqlm2.results.intranet - partition with quorum
Version: 1.1.10-14.el6_5.2-368c726
6 Nodes configured, 6 expected votes
12 Resources configured


Online: [ srvmysql0.results.intranet srvmysql1.results.intranet srvmysql2.results.intranet srvmysql3.results.intranet srvmysqlm.results.intranet srvmysqlm2.results.intranet ]

 Clone Set: cl_cluster_vip [p_cluster_vip] (unique)
     p_cluster_vip:0 (ocf::percona:IPaddr3): Started srvmysql2.results.intranet 
     p_cluster_vip:1 (ocf::percona:IPaddr3): Started srvmysql0.results.intranet 
     p_cluster_vip:2 (ocf::percona:IPaddr3): Started srvmysql3.results.intranet 
     p_cluster_vip:3 (ocf::percona:IPaddr3): Started srvmysql1.results.intranet 
     p_cluster_vip:4 (ocf::percona:IPaddr3): Started srvmysqlm2.results.intranet 
     p_cluster_vip:5 (ocf::percona:IPaddr3): Started srvmysqlm.results.intranet 
 Clone Set: cl_mysql_monitor [p_mysql_monit]
     Started: [ srvmysql0.results.intranet srvmysql1.results.intranet srvmysql2.results.intranet srvmysql3.results.intranet srvmysqlm.results.intranet srvmysqlm2.results.intranet ]

Node Attributes:
* Node srvmysql0.results.intranet:
    + p_cluster_vip_clone_count       : 1         
    + readable_monit                   : 1         
    + writable_monit                   : 1         
* Node srvmysql1.results.intranet:
    + p_cluster_vip_clone_count       : 1         
    + readable_monit                   : 1         
    + writable_monit                   : 1         
* Node srvmysql2.results.intranet:
    + p_cluster_vip_clone_count       : 1         
    + readable_monit                   : 1         
    + writable_monit                   : 1         
* Node srvmysql3.results.intranet:
    + p_cluster_vip_clone_count       : 1         
    + readable_monit                   : 1         
    + writable_monit                   : 1         
* Node srvmysqlm.results.intranet:
    + p_cluster_vip_clone_count       : 1         
    + readable_monit                   : 1         
    + writable_monit                   : 1         
* Node srvmysqlm2.results.intranet:
    + p_cluster_vip_clone_count       : 1         
    + readable_monit                   : 1         
    + writable_monit                   : 1         
[root@srvmysqlm ~]# 

Ping from outside the cluster:

[root@srvradius0 ~]# ping -c 3 10.10.2.99
PING 10.10.2.99 (10.10.2.99) 56(84) bytes of data.

--- 10.10.2.99 ping statistics ---
3 packets transmitted, 0 received, 100% packet loss, time 12010ms


Ping from a cluster node:

[root@srvmysqlm ~]# ping -c 3 10.10.2.99
PING 10.10.2.99 (10.10.2.99) 56(84) bytes of data.
64 bytes from 10.10.2.99: icmp_seq=1 ttl=64 time=0.073 ms
64 bytes from 10.10.2.99: icmp_seq=2 ttl=64 time=0.063 ms
64 bytes from 10.10.2.99: icmp_seq=3 ttl=64 time=0.069 ms

--- 10.10.2.99 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2001ms
rtt min/avg/max/mdev = 0.063/0.068/0.073/0.007 ms
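
(A guess at where to look, assuming IPaddr3 builds the shared VIP on an iptables CLUSTERIP rule: that mechanism answers ARP for the VIP with a multicast MAC, which some routers and switches refuse to handle, so it usually only works for clients on the same L2 segment. Two quick checks:)

[root@srvmysqlm ~]# iptables -L INPUT -n -v | grep -i CLUSTERIP   # a rule for 10.10.2.99 should exist on every node
[root@srvradius0 ~]# arp -n 10.10.2.99                            # from the client, the VIP should map to a multicast MAC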


Regards, 