mmm_agent not able to connect to localhost.

Gurvinder

Jun 7, 2011, 7:17:39 AM
to MySQL Multi Master Manager Development, gurv...@techblue.co.uk
Dear Reader,
I am facing a small issue with my mmm-mysql setup. Let me explain
the configuration first. I have a master-master replication setup
with three virtual IPs and two real IPs for the MySQL nodes:
Monitoring Node >> 192.168.100.96
DB1 >> 192.168.100.97 >> Node 1 >> Slave of DB2
DB2 >> 192.168.100.98 >> Node 2 >> Slave of DB1
Virtual IP >> 192.168.100.196 >> Writer
Virtual IP >> 192.168.100.197 >> Reader
Virtual IP >> 192.168.100.198 >> Reader

The monitor configuration file mmm_mon.conf is as follows:
include mmm_common.conf

<monitor>
ip 192.168.100.96
pid_path /var/run/mysql-mmm/mmm_mond.pid
bin_path /usr/libexec/mysql-mmm
status_path /var/lib/mysql-mmm/mmm_mond.status
ping_ips 192.168.100.97, 192.168.100.98
auto_set_online 60
# The kill_host_bin does not exist by default, though the monitor will
# throw a warning about it missing. See the section 5.10 "Kill Host
# Functionality" in the PDF documentation.
#
# kill_host_bin /usr/libexec/mysql-mmm/monitor/kill_host
#

</monitor>

# Ping checker
<check ping>
check_period 3
trap_period 5
timeout 2
</check>

# Mysql checker
<check mysql>
check_period 3
trap_period 2
timeout 2
</check>

# Mysql replication backlog checker
<check rep_backlog>
check_period 5
trap_period 10
max_backlog 60
timeout 2
</check>

# Mysql replication threads checker
<check rep_threads>
check_period 3
trap_period 5
timeout 2
</check>

<host default>
monitor_user mmm_monitor
monitor_password monitor_password
</host>

debug 0
# File Ends here

The common configuration file mmm_common.conf is as follows:
active_master_role writer

<host default>
cluster_interface eth0
pid_path /var/run/mysql-mmm/mmm_agentd.pid
bin_path /usr/libexec/mysql-mmm/
replication_user replicant
replication_password mypwd
agent_user mmm_agent
agent_password agent_password
</host>

<host db1>
ip 192.168.100.97
mode master
peer db2
</host>

<host db2>
ip 192.168.100.98
mode master
peer db1
</host>

#<host db3>
# ip 192.168.100.51
# mode slave
#</host>

<role writer>
hosts db1, db2
ips 192.168.100.196
mode exclusive
</role>

<role reader>
hosts db1, db2
ips 192.168.100.197, 192.168.100.198
mode balanced
</role>
# File ends Here.

I have the same copy of the mmm_common.conf file on all three nodes
(192.168.100.96, 192.168.100.97 and 192.168.100.98), and I have also
configured the three virtual IPs on the monitoring node.
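
When everything is healthy, the role assignment can be inspected on
the monitoring node with mmm_control. Assuming this layout, the output
should look roughly like the following (the exact writer/reader
distribution between db1 and db2 may differ):

mmm_control show
  db1(192.168.100.97) master/ONLINE. Roles: writer(192.168.100.196), reader(192.168.100.197)
  db2(192.168.100.98) master/ONLINE. Roles: reader(192.168.100.198)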

I have also created the following user credentials on both MySQL
nodes:
GRANT REPLICATION SLAVE ON *.* TO 'replicant'@'%'
    IDENTIFIED BY 'mypwd';

GRANT REPLICATION CLIENT ON *.* TO 'mmm_monitor'@'%'
    IDENTIFIED BY 'monitor_password';

GRANT SUPER, REPLICATION CLIENT, PROCESS ON *.* TO 'mmm_agent'@'%'
    IDENTIFIED BY 'agent_password';
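
A quick way to double-check that the grants actually took effect on
each node (just a sanity check, run as a privileged user):

mysql -u root -p -e "SHOW GRANTS FOR 'mmm_agent'@'%';"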

I have used the % wildcard in all three statements to make sure the
accounts are accessible from any host. I have also edited the
mmm_agent.conf files on DB1 and DB2 as follows.
>> File from DB1

#File starts here
include mmm_common.conf

# The 'this' variable refers to this server. Proper operation requires
# that 'this' server (db1 by default), as well as all other servers,
# have the proper IP addresses set in mmm_common.conf.
this db1
debug 1
#File Ends here

>> File from DB2

#File starts here
include mmm_common.conf

# The 'this' variable refers to this server. Proper operation requires
# that 'this' server (db1 by default), as well as all other servers,
# have the proper IP addresses set in mmm_common.conf.
this db2
debug 1
#File Ends here

That is all the configuration I have. Now, when I start the monitor,
the agents, and MySQL on both nodes, everything works fine. But while
everything is running, I occasionally get a message in one of the
mmm_agent log files stating:

2011/06/07 14:12:40 FATAL Couldn't allow writes: ERROR: Can't connect
to MySQL (host = 192.168.100.97:3306, user = mmm_agent)! Can't connect
to MySQL server on '192.168.100.97' (4)

This error message is not permanent; it shows up after, let's say,
15 minutes or half an hour. It sometimes appears on DB1 and sometimes
on DB2.
I also checked the user credentials on both DB nodes to make sure
that I am able to log in with them. I tried logging in to MySQL on
Node 1 and Node 2 using the following commands:
mysql -u mmm_agent -h 192.168.100.98 -p
and
mysql -u mmm_agent -h 192.168.100.97 -p
I am able to log in with the above credentials, so mmm_agent has
access to MySQL on both nodes.
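
A one-off login proves the grants work, but not that connections stay
reliable over time. A small loop like this (a rough sketch; adjust the
password, host and interval to taste) would log the exact moment any
connect attempt fails:

while true; do
    mysql -u mmm_agent -p'agent_password' -h 192.168.100.97 \
        --connect-timeout=2 -e 'SELECT 1' >/dev/null \
        || echo "$(date) connect to 192.168.100.97 failed"
    sleep 3
done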

So I want to know why I am getting the error message
2011/06/07 14:12:40 FATAL Couldn't allow writes: ERROR: Can't connect
to MySQL (host = 192.168.100.97:3306, user = mmm_agent)! Can't connect
to MySQL server on '192.168.100.97' (4)
in the middle of the process. How come mmm_agent is not able to
connect to MySQL after 15 or 30 minutes of running?
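
For reference, the trailing (4) in that message is an OS error code,
which MySQL's perror utility can decode:

perror 4
OS error code   4:  Interrupted system call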

Please advise if I am doing anything wrong.

Manuel

Jun 10, 2011, 1:33:14 PM
to mmm-...@googlegroups.com, gurv...@techblue.co.uk

Hi Gurvinder,

If I were you, I'd try to use the VIPs for the slaves. That is to
say, 100.197 and 100.198 instead of 100.97 and 100.98. That doesn't
explain your problem, though.

Can you try one thing? Once you get the errors on either node1 or
node2, run "ip addr" on both machines.
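
For example, something like this on each node, to see which of the
virtual IPs are currently bound (eth0 being the cluster_interface
from mmm_common.conf):

ip addr show eth0
# check whether 192.168.100.196/.197/.198 appear as
# secondary addresses on the interface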

Cheers,
Manuel


--
Manuel Arostegui



Gurvinder

Jun 13, 2011, 3:43:37 AM
to MySQL Multi Master Manager Development, Gurvinder Dadyala TBS
Hi Manuel,
Thanks for replying to my post; I have been waiting for someone to
answer my query. OK, let me explain once again.
According to you, I should have used virtual IPs for the slaves. I
have a master-master configuration (Node 1 and Node 2), and
192.168.100.97 and 192.168.100.98 are the actual IP addresses
assigned to the two machines; they are not virtual IPs. I am also
using three VIPs: 192.168.100.196 (Writer), 192.168.100.197 (Reader)
and 192.168.100.198 (Reader). According to the MMM documentation my
configuration is absolutely correct, and it is working too. However,
the node that gets the writer role assigned intermittently starts
throwing an error message saying that mmm-agent is not able to
connect to MySQL. Let's say I start all the agents and the monitor;
everything starts working the way it should. Then, after 15-20
minutes, the mmm agent on the node holding the writer role starts
throwing the error message below:
2011/06/07 14:12:40 FATAL Couldn't allow writes: ERROR: Can't connect
to MySQL (host = 192.168.100.97:3306, user = mmm_agent)! Can't connect
to MySQL server on '192.168.100.97' (4)
As I said, this is not a permanent error; it appears occasionally and
fixes itself automatically. I configured all the users on all nodes
according to the documentation. If there were some issue with the
username or password of any user, whether for the monitor or the
agents, the application would start throwing this message as soon as
I started the agents. But that is not what happens: when I start the
monitor and the agents, everything works well. My application is also
able to read and write perfectly, but after 15-20 minutes I start
getting the message that the mmm agent on the writer node is not able
to connect to MySQL:
2011/06/07 14:12:40 FATAL Couldn't allow writes: ERROR: Can't connect
to MySQL (host = 192.168.100.97:3306, user = mmm_agent)! Can't connect
to MySQL server on '192.168.100.97' (4)
And then after 2-3 minutes it fixes itself again. So I want to know
why the mmm agent throws this message when it is properly configured
on both nodes. I am not sure if anyone else has had this issue, but
the situation I have is very strange. Can you help with this?

Regards
Gurvinder