too bad...
after that, I can see the cluster being started. However, writing to it does not result in syncing to the 2nd node in replicated mode.
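One thing worth ruling out first: writes have to go through a mounted GlusterFS client, not directly into the brick directory, because files written straight to a brick are not replicated. Assuming a placeholder volume name gv0, the peer and volume state can be checked with:

gluster peer status
gluster volume info gv0
gluster volume status gv0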
[2014-11-27 08:00:52.259196] E [rpcsvc.c:1314:rpcsvc_program_register_portmap] 0-rpc-service: Could not register with portmap
[2014-11-27 08:00:52.259207] E [nfs.c:332:nfs_init_versions] 0-nfs: Program MOUNT3 registration failed
[2014-11-27 08:00:52.259214] E [nfs.c:1312:init] 0-nfs: Failed to initialize protocols
[2014-11-27 08:00:52.259220] E [xlator.c:403:xlator_init] 0-nfs-server: Initialization of volume 'nfs-server' failed, review your volfile again
[2014-11-27 08:00:52.259227] E [graph.c:307:glusterfs_graph_init] 0-nfs-server: initializing translator failed
[2014-11-27 08:00:52.259233] E [graph.c:502:glusterfs_graph_activate] 0-graph: init failed
I did nothing with NFS myself; possibly this is used for sharing between the CoreOS system and the systemd-nspawn container.
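As far as I can tell, these messages come from Gluster's built-in NFS server failing to register with portmap/rpcbind; they should not affect the native (FUSE) client. Since nothing here is exported over NFS, it can probably just be disabled per volume (gv0 again being only a placeholder name):

gluster volume set gv0 nfs.disable on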
My idea was something like this:
- have several servers with the same architecture, applications in containers (Docker would be best; if not, I'll stick with OpenVZ) and shared replicated storage. A separate storage system would be overkill for my applications.
so:
- mount a disk on the CoreOS system (call it /shared)
- have some kind of container (Docker, but your way with systemd-nspawn is just as good) acting as a Gluster node, using this /shared as its storage brick
- publish a share (/mnt/gluster) from the container to the CoreOS system (via a second mount) as the actual interface to the replicated storage
- share /mnt/gluster with the Docker containers, which use it as a "normal" file system (rough commands for this flow are sketched below)
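Roughly, in commands (the volume name gv0, the peer hostnames node1/node2 and the application image name are only placeholders), the flow I have in mind would be:

# inside the Gluster container on each node, with the host's /shared bind-mounted in:
gluster peer probe node2
gluster volume create gv0 replica 2 node1:/shared/brick node2:/shared/brick
gluster volume start gv0

# on the CoreOS host: mount the volume through the GlusterFS FUSE client
mount -t glusterfs node1:/gv0 /mnt/gluster

# application containers only get a plain bind mount of that directory
docker run -v /mnt/gluster:/data my-app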
The Docker containers would not need to know about GlusterFS then? Or would they?