But I am unsure about the best file system strategy.
The project uses local Docker volumes. That means each MariaDB node will create its own file system on its localhost. But what happens if a Docker Swarm node fails and I have to recreate it on a completely new host? Will the data be copied over from the remaining healthy MariaDB nodes? And what happens performance-wise if we are talking about very large databases (200 GB and more)?
The problem is that I can't scale the MariaDB Cluster across different nodes, as I can only attach the block storage to a single node at a time.
So this doesn't work:
```yaml
services:
  [… seed service]
  node:
    image: colinmollenhour/mariadb-galera-swarm:10.1
    [… networks, environment, secrets, commands etc.]
    volumes:
      - database:/var/lib/mysql
    deploy:
      replicas: 3
volumes:
  database:
    driver: rexray/dobs
```
It would only work if all three replicas sat on the same Docker Swarm node, which makes no sense, as I would have no failover if that swarm node fails for some reason.
Apparently, locking doesn't work either when all MariaDB nodes use the exact same volume (I wasn't able to start the swarm with a single shared volume).
If you are not familiar with Docker Swarm: it automatically load-balances all requests to a service across all of its replicas.
Another approach could be to create three different `node` services that each use a different rexray (network) volume. But then I would lose the automatic load balancing that Docker Swarm mode provides when multiple replicas of the same service are run.
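To illustrate what I mean by three separate services, here is a rough sketch (service and volume names are my own placeholders, everything else abbreviated as above); each service is a single replica pinned to its own rexray volume:

```yaml
services:
  [… seed service]
  node1:
    image: colinmollenhour/mariadb-galera-swarm:10.1
    [… networks, environment, secrets, commands etc.]
    volumes:
      - database1:/var/lib/mysql
    deploy:
      replicas: 1
  node2:
    image: colinmollenhour/mariadb-galera-swarm:10.1
    [… networks, environment, secrets, commands etc.]
    volumes:
      - database2:/var/lib/mysql
    deploy:
      replicas: 1
  node3:
    image: colinmollenhour/mariadb-galera-swarm:10.1
    [… networks, environment, secrets, commands etc.]
    volumes:
      - database3:/var/lib/mysql
    deploy:
      replicas: 1
volumes:
  database1:
    driver: rexray/dobs
  database2:
    driver: rexray/dobs
  database3:
    driver: rexray/dobs
```

This way a rescheduled service reattaches its own block storage volume, but clients would have to pick one of `node1`/`node2`/`node3` themselves instead of hitting a single load-balanced service name.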
Is there something I could put in front of MariaDB to load balance the requests across all healthy nodes?
Or would a shared file system like GlusterFS make sense?
Thanks