Failing to mount NFS because of multiple IP addresses


Alain Richard

May 18, 2015, 11:49:55 AM
to vagra...@googlegroups.com
Hi,

I am trying to build a CoreOS cluster in Vagrant for my testing, and I'm stuck when I try to mount my Dockerfiles directory on the guest machines.

I use the default Vagrantfile provided by CoreOS (attached to this email, renamed to .txt so that the Google Groups UI doesn't complain).

Here is the error I get:
greg@skynet:~/coreos-vagrant$ vagrant up
Bringing machine 'core-01' up with 'virtualbox' provider...
Bringing machine 'core-02' up with 'virtualbox' provider...
Bringing machine 'core-03' up with 'virtualbox' provider...
==> core-01: Importing base box 'coreos-stable'...
==> core-01: Matching MAC address for NAT networking...
==> core-01: Checking if box 'coreos-stable' is up to date...
==> core-01: Setting the name of the VM: coreos-vagrant_core-01_1431963017338_96941
==> core-01: Fixed port collision for 22 => 2222. Now on port 2202.
==> core-01: Clearing any previously set network interfaces...
==> core-01: Preparing network interfaces based on configuration...
    core-01: Adapter 1: nat
    core-01: Adapter 2: hostonly
==> core-01: Forwarding ports...
    core-01: 22 => 2202 (adapter 1)
==> core-01: Running 'pre-boot' VM customizations...
==> core-01: Booting VM...
==> core-01: Waiting for machine to boot. This may take a few minutes...
    core-01: SSH address: 127.0.0.1:2202
    core-01: SSH username: core
    core-01: SSH auth method: private key
    core-01: Warning: Connection timeout. Retrying...
==> core-01: Machine booted and ready!
==> core-01: Setting hostname...
==> core-01: Configuring and enabling network interfaces...
==> core-01: Exporting NFS shared folders...
==> core-01: Preparing to edit /etc/exports. Administrator privileges will be required...
nfs-kernel-server.service - LSB: Kernel NFS server support
   Loaded: loaded (/etc/init.d/nfs-kernel-server)
   Active: active (running) since lun. 2015-05-18 16:36:06 CEST; 54min ago
  Process: 6647 ExecStop=/etc/init.d/nfs-kernel-server stop (code=exited, status=0/SUCCESS)
  Process: 6655 ExecStart=/etc/init.d/nfs-kernel-server start (code=exited, status=0/SUCCESS)
   CGroup: /system.slice/nfs-kernel-server.service
           └─6680 /usr/sbin/rpc.mountd --manage-gids
==> core-01: Mounting NFS shared folders...
The following SSH command responded with a non-zero exit status.
Vagrant assumes that this means the command failed!

mount -o 'nolock,vers=3,udp' 172.17.8.1:'/home/greg/Dockerfiles' /mnt/Dockerfiles

Stdout from the command:



Stderr from the command:

mount.nfs: access denied by server while mounting 172.17.8.1:/home/greg/Dockerfiles

After a little investigation, I found that my machine has an extra IP address that is used to connect to my host, which obviously makes everything fail because it doesn't match the one listed in /etc/exports.

$ cat /etc/exports
# /etc/exports: the access control list for filesystems which may be exported
#               to NFS clients.  See exports(5).
#
# Example for NFSv2 and NFSv3:
# /srv/homes       hostname1(rw,sync,no_subtree_check) hostname2(ro,sync,no_subtree_check)
#
# Example for NFSv4:
# /srv/nfs4        gss/krb5i(rw,sync,fsid=0,crossmnt,no_subtree_check)
# /srv/nfs4/homes  gss/krb5i(rw,sync,no_subtree_check)
# VAGRANT-BEGIN: 1001 b088e07b-39c6-4014-9bee-201347ffdf54
"/home/greg/Dockerfiles" 172.17.8.101(rw,no_subtree_check,all_squash,anonuid=1001,anongid=1001,fsid=3565376335)
# VAGRANT-END: 1001 b088e07b-39c6-4014-9bee-201347ffdf54
$
$
$ vagrant ssh core-01
CoreOS stable (647.0.0)
core@core-01 ~ $ sudo mount -o 'nolock,vers=3,udp' 172.17.8.1:'/home/greg/Dockerfiles' /mnt/Dockerfiles
mount.nfs: access denied by server while mounting 172.17.8.1:/home/greg/Dockerfiles
core@core-01 ~ $ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 08:00:27:65:44:4f brd ff:ff:ff:ff:ff:ff
    inet 10.0.2.15/24 brd 10.0.2.255 scope global dynamic eth0
       valid_lft 85934sec preferred_lft 85934sec
    inet6 fe80::a00:27ff:fe65:444f/64 scope link
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 08:00:27:15:c4:e7 brd ff:ff:ff:ff:ff:ff
    inet 172.17.8.126/24 brd 172.17.8.255 scope global dynamic eth1
       valid_lft 724sec preferred_lft 724sec
    inet 172.17.8.101/24 brd 172.17.8.255 scope global secondary eth1
       valid_lft forever preferred_lft forever
    inet6 fe80::a00:27ff:fe15:c4e7/64 scope link
       valid_lft forever preferred_lft forever
core@core-01 ~ $ sudo ip a del 172.17.8.126/24 dev eth1
core@core-01 ~ $ sudo mount -o 'nolock,vers=3,udp' 172.17.8.1:'/home/greg/Dockerfiles' /mnt/Dockerfiles
core@core-01 ~ $

So, yeah, it works this way now, but this is a manual step and therefore not satisfying. Moreover, since I'm just getting started, I frequently destroy everything and rebuild from a clean environment, and I would like this to happen automatically.
config.rb.txt
Vagrantfile.txt

Alvaro Miranda Aguilera

May 18, 2015, 7:41:06 PM
to vagra...@googlegroups.com
On Tue, May 19, 2015 at 3:49 AM, Alain Richard <gre...@gmail.com> wrote:
> sudo mount -o 'nolock,vers=3,udp' 172.17.8.1:'/home/greg/Dockerfiles' /mnt/Dockerfiles

As a workaround, would you be happy with a shell provisioner that runs that command?

sudo mount -o 'nolock,vers=3,udp' 172.17.8.1:'/home/greg/Dockerfiles' /mnt/Dockerfiles
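For illustration, such a provisioner might look like this in the Vagrantfile. This is an untested sketch: the host IP and paths are simply the ones from the log above, and since Vagrant normally mounts NFS synced folders before provisioning runs, the NFS synced-folder entry itself would presumably have to be disabled for this workaround to take over:

```ruby
# Untested sketch of the suggested workaround. The host IP (172.17.8.1) and
# paths are hardcoded from the log above; deriving them would be better.
config.vm.provision :shell, run: "always", inline: <<-SHELL
  mkdir -p /mnt/Dockerfiles
  mount -t nfs -o nolock,vers=3,udp 172.17.8.1:/home/greg/Dockerfiles /mnt/Dockerfiles
SHELL
```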

Alain Richard

May 19, 2015, 4:10:11 AM
to vagra...@googlegroups.com
Not really.

I don't want to hardcode the physical machine's IP address in my Vagrantfile (what if it changes tomorrow?). Plus, I would have to manually create the mount point beforehand, and worse, manually adjust /etc/exports with every address I might give to my virtual machines.
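(For the /etc/exports part, exports(5) does accept a subnet in place of a single host, so a hand-maintained entry such as the following, given only as an illustration and not what Vagrant generates, would match any guest address in the host-only range:

```
/home/greg/Dockerfiles 172.17.8.0/24(rw,no_subtree_check,all_squash,anonuid=1001,anongid=1001)
```

That still leaves the hardcoded host IP and mount-point problems, of course.)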

Does anybody know where this 172.17.8.126/24 address comes from ?

dragon788

May 20, 2015, 7:14:46 PM
to vagra...@googlegroups.com
Alain, that sounds a lot like the default Docker IP address range. You need to set a new --bip (bridge IP) in your /etc/default/docker file if you want the containers to live in another subnet.
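That might look something like the following in /etc/default/docker; the subnet here is only an example, the point is to pick a range that can't collide with the 172.17.8.0/24 host-only network:

```
# /etc/default/docker (example value; choose any free private range)
DOCKER_OPTS="--bip=10.10.0.1/24"
```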