No valid servers in cluster 'MariaDB-Monitor'


Dmitry Podshivalov

Mar 1, 2023, 4:31:45 AM
to MaxScale
Hi :)

I'm using two MaxScale 22.08.3 instances in a cluster (the configuration is below). Everything seems to be working fine: traffic is routed properly, I can switch master/slave, and the GTID is the same on both DB servers.

But the command maxctrl show maxscale on db1 shows an error in the Sync section, even though the checksums are the same:

db1:
{
    "checksum": "3ed123cc4400120c992a540745d3de675c6d4575",
    "nodes": {
        "0cca53d01c64": "OK",
        "51d4030d7309": "OK"
    },
    "origin": "",
    "status": "No valid servers in cluster 'MariaDB-Monitor'.",
    "version": 1
}

db2:
{
    "checksum": "3ed123cc4400120c992a540745d3de675c6d4575",
    "nodes": {
        "0cca53d01c64": "OK",
        "51d4030d7309": "OK"
    },
    "origin": "",
    "status": "OK",
    "version": 1
}

maxctrl list servers shows that replication is OK and both database servers are healthy:
[Attachment: listservers.png — maxctrl list servers output]
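(The screenshot isn't reproduced here; the output would have looked roughly like the sketch below. The server names come from the configuration that follows; addresses, connection counts, and GTID positions are illustrative only:)
-------------------------------
┌────────┬──────────┬──────┬─────────────┬─────────────────┬────────┐
│ Server │ Address  │ Port │ Connections │ State           │ GTID   │
├────────┼──────────┼──────┼─────────────┼─────────────────┼────────┤
│ db1    │ 10.0.0.1 │ 3306 │ 0           │ Master, Running │ 0-1-42 │
├────────┼──────────┼──────┼─────────────┼─────────────────┼────────┤
│ db2    │ 10.0.0.2 │ 3306 │ 0           │ Slave, Running  │ 0-1-42 │
└────────┴──────────┴──────┴─────────────┴─────────────────┴────────┘
-------------------------------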

What could be wrong?
Thanks a lot!


The following configuration is used:
-------------------------------
# Global parameters
[maxscale]
# Substitute environment variables
substitute_variables=1
threads=1

config_sync_cluster=MariaDB-Monitor
config_sync_user=$MARIADB_MAXSCALE_USER
config_sync_password=$MARIADB_MAXSCALE_PASSWORD
admin_secure_gui=false
admin_host=127.0.0.1
max_auth_errors_until_block=0
# Extended logging for troubleshooting. It can be activated dynamically using "maxctrl alter maxscale log_info true"
#log_info=true

# Server definitions
[db1]
type=server
address=$MAXSCALE_DB1_ADDRESS
# we use the internal port of the db1 server, not the one exposed by Docker
port=$MAXSCALE_DB1_PORT
protocol=MariaDBBackend
proxy_protocol=1

[db2]
type=server
address=$MAXSCALE_DB2_ADDRESS
# we use the internal port of the db2 server, not the one exposed by Docker
port=$MAXSCALE_DB2_PORT
protocol=MariaDBBackend
proxy_protocol=1


# Monitor for the servers
[MariaDB-Monitor]
type=monitor
module=mariadbmon
servers=db1,db2
user=$MARIADB_MAXSCALE_USER
password=$MARIADB_MAXSCALE_PASSWORD
monitor_interval=2s
enforce_read_only_slaves=on


# Service definitions
# https://mariadb.com/kb/en/mariadb-maxscale-2208-readconnroute/
# Readconnroute
[Read-Conn-Service]
type=service
router=readconnroute
router_options=master
servers=db1,db2
user=$MARIADB_MAXSCALE_USER
password=$MARIADB_MAXSCALE_PASSWORD
# Allow login as root
enable_root_user=1

# Listener definitions for the services
[Read-Con-Listener]
type=listener
service=Read-Conn-Service
protocol=MariaDBClient
port=3306
-------------------------------
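(For reference: with config_sync_cluster enabled, MaxScale persists the synchronized configuration in the mysql.maxscale_config table on the primary, so the sync state can also be checked from the database side. A minimal sketch, assuming the sync user is allowed to SELECT from that table; the exact columns may vary between MaxScale versions, and the host is a placeholder:)
-------------------------------
# Run against the current primary server:
mariadb -h <primary-host> -u "$MARIADB_MAXSCALE_USER" -p \
  -e 'SELECT * FROM mysql.maxscale_config\G'
-------------------------------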

Markus Mäkelä

Mar 1, 2023, 5:05:52 AM
to maxs...@googlegroups.com

Hi,

This looks like it might be a bug in what the status string shows when there has only been one change to the cluster and the initial connection attempt failed. Can you try to make a change with maxctrl and see if the state changes to the correct one?
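Something like the following round trip should do; it reuses the log_info switch already mentioned in your configuration comments, so it would be a harmless change that still bumps the config version:
-------------------------------
# Toggle a runtime parameter to force a configuration change:
maxctrl alter maxscale log_info true
maxctrl alter maxscale log_info false

# Then re-check the Sync section on both instances:
maxctrl show maxscale
-------------------------------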

Regardless of this result, can you also open a bug report about this on the MariaDB Jira under the MaxScale project? Here's a link you can use: https://jira.mariadb.org/browse/MXS

Markus

-- 
Markus Mäkelä, Senior Software Engineer
MariaDB Corporation

Dmitry Podshivalov

Mar 1, 2023, 6:06:13 AM
to MaxScale
Hi!
Thanks a lot for the prompt reply! I'll try your suggestion and report the issue in Jira. 


Dmitry Podshivalov

Mar 2, 2023, 4:55:26 PM
to MaxScale
The Jira ticket has been created: https://jira.mariadb.org/browse/MXS-4538
As you suggested there (I guess it was you :)), I restarted the MaxScale instance that showed the error, and the error disappeared :)
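(For anyone who lands here with the same symptom: this is a plain restart of the affected instance. In a Docker-based setup like this one it would be along these lines, with the container name being hypothetical:)
-------------------------------
docker restart maxscale1      # Docker deployment; container name is hypothetical
# or, on a systemd-managed host:
systemctl restart maxscale
-------------------------------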

Thanks a lot

Markus Mäkelä

Mar 3, 2023, 3:01:44 AM
to maxs...@googlegroups.com

Hi,

Thanks for reporting it; it was indeed me who suggested the restart. I think this further supports my theory that it's a display bug in how the status is tracked internally. I suspect it only happens when there have been no changes to the cluster and the cluster goes through some period of unavailability. Luckily it doesn't seem to have any functional effects, so the configuration synchronization should still work.

Markus


Dmitry Podshivalov

Mar 3, 2023, 3:07:26 AM
to Markus Mäkelä, MaxScale
Cool, once again, thanks a lot!
