WSREP: last inactive check more than PT1.5S ago (PT2.72057S), skipping check - google-clock-sync


Marko Sutic

Jul 23, 2015, 9:18:26 AM
to Percona Discussion
Hello all!

We're running Percona XtraDB Cluster with 3 nodes in Google Cloud.

UBUNTU 14.04
Percona XtraDB Cluster 5.6.24-25.11


In the error log on NODE1 I noticed:
2015-07-22 17:57:33 26831 [Warning] WSREP: last inactive check more than PT1.5S ago (PT2.72057S), skipping check

At the same time in syslog:
Jul 22 17:57:33 node-1 google-clock-sync: INFO Clock drift token has changed: 7252954419244247687
Jul 22 17:57:33 node-1 google-clock-sync: INFO Syncing system time with hardware clock...
Jul 22 17:57:33 node-1 google-clock-sync: INFO Synced system time with hardware clock.
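A quick way to line the two logs up by timestamp (a sketch — the paths below assume the default Ubuntu locations for syslog and the Percona error log):

```shell
# Pick out both the wsrep warning and the google-clock-sync events so
# matching timestamps are easy to spot side by side.
match_clock_lines() {
  grep -E 'google-clock-sync|last inactive check' "$@"
}

for log in /var/log/syslog /var/log/mysql/error.log; do
  if [ -f "$log" ]; then
    match_clock_lines "$log" || true   # no matches is fine
  fi
done
```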


Has anyone experienced such behaviour?
Can we avoid this happening in the future?

Thanks for any answers.


Regards,
Marko

Jaime Crespo

Jul 24, 2015, 7:06:06 AM
to percona-d...@googlegroups.com
I have no experience with Google Cloud, but one common problem I've found in the past on PXC was caused by the servers not being NTP-synced (clocks differing between nodes).
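A rough way to check that on each node — a sketch, assuming ntpd is installed and the usual `ntpq -pn` column layout (the `*`-marked line is the selected peer; column 9 is the offset in milliseconds). The 1500 ms threshold comes from the PT1.5S slack in the warning above:

```shell
# check_drift OFFSET_MS: warn when the NTP offset (in ms) exceeds the
# 1.5 s slack (PT1.5S) mentioned in the wsrep warning.
check_drift() {
  offset_ms=$1
  # awk does the comparison; POSIX sh has no float arithmetic
  if awk -v o="$offset_ms" 'BEGIN { exit !(o > 1500 || o < -1500) }'; then
    echo "WARN: clock offset ${offset_ms} ms exceeds 1500 ms"
  else
    echo "OK: clock offset ${offset_ms} ms"
  fi
}

# Feed it the offset column of ntpd's currently selected peer, if ntpd
# is present:
if command -v ntpq >/dev/null 2>&1; then
  check_drift "$(ntpq -pn | awk '/^\*/ {print $9}')"
fi
```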




--
Jaime Crespo

Marko Sutic

Jul 24, 2015, 6:31:31 PM
to Percona Discussion, jy...@jynus.com
I think I've found the cause of the warning...

Every node has these files:
/etc/init/google-clock-sync-manager.conf
/usr/share/google/google_daemon/manage_clock_sync.py
/usr/share/google/google_daemon/manage_clock_sync.pyc

If you open manage_clock_sync.py, there is a comment:

"""Manages clock syncing after migration on GCE instances."""


So this was probably the clock being re-synced after a live migration — a short blackout during Google maintenance on the servers.
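If that periodic re-sync keeps causing clock jumps, one option would be to stop the daemon and let ntpd keep the clocks steady instead. A sketch, assuming the Upstart job is named after the .conf file above (Ubuntu 14.04 uses Upstart); the trade-off is that the clock would no longer be corrected automatically after a live migration:

```shell
# INIT_DIR is parameterised only so the snippet can be exercised safely;
# on a real node it is /etc/init.
INIT_DIR=${INIT_DIR:-/etc/init}

disable_clock_sync() {
  # An Upstart override file containing "manual" stops the job from
  # auto-starting at boot without touching the original .conf.
  echo manual > "$INIT_DIR/google-clock-sync-manager.override"
}

# Stop the running instance too, where the Upstart `stop` tool exists:
if command -v stop >/dev/null 2>&1; then
  stop google-clock-sync-manager 2>/dev/null || true
fi
```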

Regards,
Marko