eth1 private ip not set up for OpenStack VM


Kevin Heatwole

Nov 11, 2016, 11:43:55 PM
to CoreOS User
I am having a problem booting an OpenStack VM at OVH Public Cloud where the VM is configured with both a public and a private IP.

Last month, when I created a new VM on two networks (private and public), the VM would not be accessible via the public IP unless I rebooted the instance. I figured it was an Ignition bug that caused the initial configuration issue.

Now, I am running the latest Alpha channel version and I am seeing different behavior. This time, the public IP is correctly set up on eth0, but eth1 doesn't get the private IP configured at all.

Here is the journalctl -u systemd-networkd output after reboot:

-- Reboot --
Nov 12 04:21:23 localhost systemd[1]: Starting Network Service...
Nov 12 04:21:23 localhost systemd-networkd[216]: Enumeration completed
Nov 12 04:21:23 localhost systemd[1]: Started Network Service.
Nov 12 04:21:24 localhost systemd-networkd[216]: lo: Configured
Nov 12 04:21:24 localhost systemd-networkd[216]: eth0: IPv6 enabled for interface: Success
Nov 12 04:21:24 localhost systemd-networkd[216]: eth1: IPv6 enabled for interface: Success
Nov 12 04:21:24 localhost systemd-networkd[216]: eth1: Gained carrier
Nov 12 04:21:24 localhost systemd[1]: Stopping Network Service...
Nov 12 04:21:24 localhost systemd[1]: Stopped Network Service.
Nov 12 04:21:27 server-2 systemd[1]: Starting Network Service...
Nov 12 04:21:27 server-2 systemd-networkd[914]: Enumeration completed
Nov 12 04:21:27 server-2 systemd-networkd[914]: eth1: IPv6 enabled for interface: Success
Nov 12 04:21:27 server-2 systemd-networkd[914]: eth0: IPv6 enabled for interface: Success
Nov 12 04:21:27 server-2 systemd-networkd[914]: lo: Configured
Nov 12 04:21:27 server-2 systemd-networkd[914]: eth1: Gained carrier
Nov 12 04:21:27 server-2 systemd-networkd[914]: eth0: Gained carrier
Nov 12 04:21:27 server-2 systemd-networkd[914]: eth0: DHCPv4 address 158.69.69.159/32 via 158.69.64.1
Nov 12 04:21:27 server-2 systemd[1]: Started Network Service.
Nov 12 04:21:28 server-2 systemd-networkd[914]: eth1: Gained IPv6LL
Nov 12 04:21:28 server-2 systemd-networkd[914]: eth0: Gained IPv6LL
Nov 12 04:21:29 server-2 systemd-networkd[914]: docker0: IPv6 enabled for interface: Success
Nov 12 04:21:29 server-2 systemd-networkd[914]: docker0: Could not append VLANs: Operation not permitted
Nov 12 04:21:29 server-2 systemd-networkd[914]: docker0: Failed to assign VLANs to bridge port: Operation not permitted
Nov 12 04:21:29 server-2 systemd-networkd[914]: docker0: Could not set bridge vlan: Operation not permitted
Nov 12 04:21:29 server-2 systemd-networkd[914]: docker0: Gained carrier
Nov 12 04:21:29 server-2 systemd-networkd[914]: docker0: Lost carrier
Nov 12 04:21:29 server-2 systemd-networkd[914]: br-a28690be6c02: IPv6 enabled for interface: Success
Nov 12 04:21:29 server-2 systemd-networkd[914]: br-a28690be6c02: Could not append VLANs: Operation not permitted
Nov 12 04:21:29 server-2 systemd-networkd[914]: br-a28690be6c02: Failed to assign VLANs to bridge port: Operation not permitted
Nov 12 04:21:29 server-2 systemd-networkd[914]: br-a28690be6c02: Could not set bridge vlan: Operation not permitted
Nov 12 04:21:31 server-2 systemd-networkd[914]: docker0: Gained IPv6LL
Nov 12 04:21:41 server-2 systemd-networkd[914]: eth0: Configured
Nov 12 04:21:44 server-2 systemd-networkd[914]: docker0: Configured

Note that I see DHCP setup of eth0, but no DHCP setup of eth1.

Is there something I should be doing to force DHCP setup of eth1?
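
(For reference, the kind of explicit networkd unit I would expect to force DHCP on eth1 looks like the following — a sketch only, assuming the interface really is named eth1 and a hypothetical file name:)

```ini
# Hypothetical /etc/systemd/network/10-eth1.network
[Match]
Name=eth1

[Network]
DHCP=yes
```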

If I remember correctly, the problem before was that DHCP setup only ran on eth1 during the initial boot (and not on eth0), but on a subsequent reboot it set up both eth1 and eth0 via DHCP. Now, it only ever does DHCP on eth0.

Is this a bug in CoreOS? 

Kevin Heatwole

Nov 12, 2016, 9:54:42 AM
to CoreOS User
Looks like this bug might be the same one I ran into a month ago and might not have been fixed. I started a new instance on both a public and a private network this morning and couldn't connect to it. But this time I set a password on the core user so I could access the VM from the VNC console. It turns out that, at least this time, both eth0 and eth1 were configured with their respective IPs, but the route table had multiple default routes:

Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.2.1     0.0.0.0         UG    1024   0        0 eth1
0.0.0.0         192.168.2.1     0.0.0.0         UG    1024   0        0 eth1
0.0.0.0         158.69.64.1     0.0.0.0         UG    1024   0        0 eth0
158.69.64.1     0.0.0.0         255.255.255.255 UH    1024   0        0 eth0
169.254.169.254 192.168.2.3     255.255.255.255 UGH   1024   0        0 eth1
172.17.0.0      0.0.0.0         255.255.0.0     U     0      0        0 docker0
172.18.0.0      0.0.0.0         255.255.0.0     U     0      0        0 br-a28690be6c02
192.168.2.0     0.0.0.0         255.255.255.0   U     0      0        0 eth1
192.168.2.1     0.0.0.0         255.255.255.255 UH    1024   0        0 eth1

I manually removed the two redundant eth1 default routes and the VM is now accessible from my Mac using the public IP.
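
(The removal is just the standard route deletion; here is a dry-run sketch, with the gateway 192.168.2.1 taken from the table above — drop the `echo` and run as root to actually delete the routes:)

```shell
# Print the two deletions needed for the duplicate eth1 default routes shown above.
# Remove the leading `echo` (and run as root) to actually delete them.
for i in 1 2; do
  echo ip route del default via 192.168.2.1 dev eth1
done
```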

Rebooting the VM still results in the two redundant eth1 default routes, so there is a bug somewhere. Here is the journalctl output for systemd-networkd after reboot. Note that it took about 30 seconds for the DHCPv4 responses (but this time they do come through).

-- Reboot --
Nov 12 14:26:50 localhost systemd[1]: Starting Network Service...
Nov 12 14:26:50 localhost systemd-networkd[216]: Enumeration completed
Nov 12 14:26:50 localhost systemd[1]: Started Network Service.
Nov 12 14:26:51 localhost systemd-networkd[216]: lo: Configured
Nov 12 14:26:51 localhost systemd-networkd[216]: eth0: IPv6 enabled for interface: Success
Nov 12 14:26:51 localhost systemd-networkd[216]: eth1: IPv6 enabled for interface: Success
Nov 12 14:26:51 localhost systemd-networkd[216]: eth1: Gained carrier
Nov 12 14:26:51 localhost systemd[1]: Stopping Network Service...
Nov 12 14:26:51 localhost systemd-networkd[216]: eth0: Removing non-existent address: fe80::f816:3eff:fef4:ff6/64 (valid forever)
Nov 12 14:26:51 localhost systemd-networkd[216]: eth1: Removing non-existent address: fe80::f816:3eff:fe41:57bb/64 (valid forever)
Nov 12 14:26:51 localhost systemd-networkd[216]: lo: Lost carrier
Nov 12 14:26:51 localhost systemd-networkd[216]: eth1: Lost carrier
Nov 12 14:26:51 localhost systemd[1]: Stopped Network Service.
Nov 12 14:26:54 server-3 systemd[1]: Starting Network Service...
Nov 12 14:26:54 server-3 systemd-networkd[678]: Enumeration completed
Nov 12 14:26:54 server-3 systemd-networkd[678]: eth1: IPv6 enabled for interface: Success
Nov 12 14:26:54 server-3 systemd-networkd[678]: eth0: IPv6 enabled for interface: Success
Nov 12 14:26:54 server-3 systemd-networkd[678]: lo: Configured
Nov 12 14:26:54 server-3 systemd-networkd[678]: eth1: Gained carrier
Nov 12 14:26:54 server-3 systemd-networkd[678]: eth0: Gained carrier
Nov 12 14:26:54 server-3 systemd[1]: Started Network Service.
Nov 12 14:26:56 server-3 systemd-networkd[678]: eth0: Gained IPv6LL
Nov 12 14:26:56 server-3 systemd-networkd[678]: br-a28690be6c02: IPv6 enabled for interface: Success
Nov 12 14:26:56 server-3 systemd-networkd[678]: br-a28690be6c02: Could not append VLANs: Operation not permitted
Nov 12 14:26:56 server-3 systemd-networkd[678]: br-a28690be6c02: Failed to assign VLANs to bridge port: Operation not permitted
Nov 12 14:26:56 server-3 systemd-networkd[678]: br-a28690be6c02: Could not set bridge vlan: Operation not permitted
Nov 12 14:26:56 server-3 systemd-networkd[678]: docker0: IPv6 enabled for interface: Success
Nov 12 14:26:56 server-3 systemd-networkd[678]: docker0: Could not append VLANs: Operation not permitted
Nov 12 14:26:56 server-3 systemd-networkd[678]: docker0: Failed to assign VLANs to bridge port: Operation not permitted
Nov 12 14:26:56 server-3 systemd-networkd[678]: docker0: Could not set bridge vlan: Operation not permitted
Nov 12 14:26:56 server-3 systemd-networkd[678]: eth1: Gained IPv6LL
Nov 12 14:27:25 server-3 systemd-networkd[678]: eth1: DHCPv4 address 192.168.2.4/24 via 192.168.2.1
Nov 12 14:27:25 server-3 systemd-networkd[678]: eth1: Configured
Nov 12 14:27:26 server-3 systemd-networkd[678]: eth0: DHCPv4 address 158.69.72.198/32 via 158.69.64.1
Nov 12 14:27:26 server-3 systemd-networkd[678]: eth0: Configured

Alex Crawford

Nov 12, 2016, 11:31:06 AM
to Kevin Heatwole, CoreOS User
This is expected behavior given the default network configs. In order to
get this working in your environment, you'll need to use Ignition to lay
down some custom network configs. I don't know anything about OVH, but
if both interfaces have internet availability, you'll need to give one
of them a lower metric than the other. I suspect, given the names, that
one of them has access to the internet while the other only has access
to an intranet. If that's the case, you can adjust the destination range
for that private interface (e.g. maybe it's just 192.168.1.0/24). If
this is indeed the case, the DHCP offer is erroneous and you may need to
contact their support for assistance.
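
As a sketch of the metric approach (the file name and value are illustrative, not specific to OVH): a networkd unit matching the private interface can raise its DHCP route metric so the eth0 default route wins:

```ini
# Hypothetical /etc/systemd/network/20-eth1.network
[Match]
Name=eth1

[Network]
DHCP=yes

[DHCP]
# Higher than eth0's default metric (1024), so eth0 is preferred for 0.0.0.0/0.
RouteMetric=2048
```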

-Alex

Kevin Heatwole

Nov 12, 2016, 1:45:42 PM
to CoreOS User, ktwa...@gmail.com
The private networks for OVH Public Cloud do not have access to the public internet. They only have access to the IP range I specify when I create the private network (e.g., 192.168.1.0/24). I create each private network in the OVH Manager CP which allows me to set the VLAN ID and IP range and whether I want DHCP for the network (or just a static network). Regardless of whether the network has DHCP or is static, the OVH Manager shows a private IP (and public IP) assigned to each newly created VM. I am creating these VMs without specifying any user-data file (this is OpenStack).

What I don't understand is what part of the CoreOS default network configuration is deciding that eth1 needs a default route created for it. In fact, CoreOS seems to be creating two identical default routes for eth1. Is OVH Public Cloud providing a default user-data file that attempts to give both eth0 and eth1 default routes? Or is this coming from CoreOS?

I did find that /usr/share/oem/cloud-config.yml contains the EC2-compatible YAML file from the coreos-overlay GitHub repo.

I suppose I could work around this issue by creating a oneshot systemd service that runs 'route del -net 0.0.0.0 dev eth1' twice after the systemd-networkd service runs, but I would prefer that CoreOS initialize these private networks correctly and not add default routes for them.

Do you have any further suggestions for me? OVH Public Cloud has only supported Private Networks for a few months (previously, they could only be used on dedicated OVH servers and on OVH's VMware-based Private Cloud), and I'd like to start using them in production.

Alex Crawford

Nov 12, 2016, 3:09:03 PM
to Kevin Heatwole, CoreOS User
On 11/12, Kevin Heatwole wrote:
> What I don't understand is what part of CoreOS default network
> configuration is deciding that eth1 needs to have default route created for
> it. In fact, it seems CoreOS is creating 2 identical default routes for
> eth1. Is OVH Public Cloud providing a default user-data file that attempts
> to give both eth0 and eth1 default routes? Or, is this coming from CoreOS?

This is coming from OVH via DHCP. CoreOS enables DHCP on all interfaces
by default (since we cannot know how any individual network is set up).

> I did find that /usr/share/oem/cloud-config.yml has the ec2-compatible yml
> file from the coreos-overlay github repo.
>
> I suppose I could work-around this issue by creating a oneshot systemd
> service that does the 'route del -net 0.0.0.0 dev eth1' twice after the
> systemd-networkd service runs, but I would prefer that CoreOS initialize
> these private networks correctly and not try to add default routes for the
> private networks.

You'll need to provide network configs to properly configure your
network. Since OVH isn't sending the right offer via DHCP, you should
provide a config for eth1 which statically assigns the address and sets
up routes.
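
For example (addresses taken from the earlier output; a sketch, not a tested OVH config), a static unit for eth1 that deliberately sets no default route, only the subnet and metadata routes:

```ini
# Hypothetical /etc/systemd/network/20-eth1.network
[Match]
Name=eth1

[Network]
# Static private address instead of DHCP; the /24 provides the subnet route.
Address=192.168.2.4/24

[Route]
# Metadata service reachable via the private network (per the route table above).
Destination=169.254.169.254/32
Gateway=192.168.2.3
```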

> Do you have any further suggestions for me? OVH Public Cloud has only
> supported Private Networks for a few months (previously, these Private
> Networks could only be used on dedicated OVH servers and OVH's Private
> Cloud (vmware based) and I'd like to start using them in production.

It sounds like OVH is providing the wrong image. If these VMs are
running in OpenStack, they should be using the OpenStack image; not EC2.
If you have the ability to choose a different image and are able to use
the OpenStack image, you can provide an Ignition config that will
configure the network properly. Some examples can be found in our
documentation [1].

-Alex

[1]: https://coreos.com/ignition/docs/latest/network-configuration.html

Kevin Heatwole

Nov 12, 2016, 6:48:07 PM
to CoreOS User, ktwa...@gmail.com
> It sounds like OVH is providing the wrong image. If these VMs are
> running in OpenStack, they should be using the OpenStack image; not EC2.
> If you have the ability to choose a different image and are able to use
> the OpenStack image, you can provide an Ignition config that will
> configure the network properly. Some examples can be found in our
> documentation [1].

OVH does not provide the CoreOS image I'm using. I uploaded the image myself using glance:

$ wget https://alpha.release.core-os.net/amd64-usr/current/coreos_production_openstack_image.img.bz2
$ bunzip2 coreos_production_openstack_image.img.bz2
$ glance image-create --name "CoreOS Alpha 1221.0.0" --container-format bare --disk-format qcow2 --file coreos_production_openstack_image.img 

To work around the problem, I added a systemd service to remove the bogus routes after the fact:

[Unit]
Description=Remove default routes for eth1
After=systemd-networkd.service
Requires=systemd-networkd.service
 
[Service]
Type=oneshot
ExecStart=/bin/route del -net 0.0.0.0 dev eth1
ExecStart=/bin/route del -net 0.0.0.0 dev eth1

[Install]
WantedBy=multi-user.target 

This does work, but I would still like to understand what the OVH DHCP server for the eth1 network is doing to cause CoreOS to add the two default routes for eth1. If I understand how this works, I can make a better case when I contact OVH for a possible fix on their side.

Just to test this out a bit more, I launched a new Ubuntu 16.10 image (an image in the OpenStack public repo) on the same private network. This VM left the private network interface (called ens4) down and did not attempt to use DHCP to set the IP on ens4. I ssh'd into the VM and added the following to /etc/network/interfaces:

auto ens4 
iface ens4 inet dhcp

And brought the interface up:
root@server-3:/etc/network# ifup ens4
Internet Systems Consortium DHCP Client 4.3.3
Copyright 2004-2015 Internet Systems Consortium.
All rights reserved.
For info, please visit https://www.isc.org/software/dhcp/
Listening on LPF/ens4/fa:16:3e:78:f9:95
Sending on   LPF/ens4/fa:16:3e:78:f9:95
Sending on   Socket/fallback
DHCPDISCOVER on ens4 to 255.255.255.255 port 67 interval 3 (xid=0xf8492c71)
DHCPREQUEST of 192.168.2.6 on ens4 to 255.255.255.255 port 67 (xid=0x712c49f8)
DHCPOFFER of 192.168.2.6 from 192.168.2.3
DHCPACK of 192.168.2.6 from 192.168.2.3
bound to 192.168.2.6 -- renewal in 42326 seconds.
root@server-3:/etc/network# ip addr list
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether fa:16:3e:52:0a:cd brd ff:ff:ff:ff:ff:ff
    inet 158.69.75.148/32 brd 158.69.75.148 scope global ens3
       valid_lft forever preferred_lft forever
    inet6 fe80::f816:3eff:fe52:acd/64 scope link
       valid_lft forever preferred_lft forever
3: ens4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether fa:16:3e:78:f9:95 brd ff:ff:ff:ff:ff:ff
    inet 192.168.2.6/24 brd 192.168.2.255 scope global ens4
       valid_lft forever preferred_lft forever
    inet6 fe80::f816:3eff:fe78:f995/64 scope link
       valid_lft forever preferred_lft forever
root@server-3:/etc/network# route -n

Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         158.69.64.1     0.0.0.0         UG    0      0        0 ens3
158.69.64.1     0.0.0.0         255.255.255.255 UH    0      0        0 ens3
169.254.169.254 192.168.2.3     255.255.255.255 UGH   0      0        0 ens4
192.168.2.0     0.0.0.0         255.255.255.0   U     0      0        0 ens4

As you can see, Ubuntu didn't add any default routes for ens4 (only the two routes specific to the private network).

So, everything looks good with the Ubuntu VM.

Just to see if I could tell what is going wrong on CoreOS, I took eth1 down and brought it back up (see below). CoreOS adds five routes for eth1 while Ubuntu only added two. Why?

server-2 core # ip link set eth1 down
server-2 core # ip addr flush dev eth1
server-2 core # ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether fa:16:3e:b9:20:e4 brd ff:ff:ff:ff:ff:ff
    inet 158.69.74.243/32 brd 158.69.74.243 scope global dynamic eth0
       valid_lft 77896sec preferred_lft 77896sec
    inet6 fe80::f816:3eff:feb9:20e4/64 scope link
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST> mtu 1500 qdisc pfifo_fast state DOWN group default qlen 1000
    link/ether fa:16:3e:17:91:fb brd ff:ff:ff:ff:ff:ff
server-2 core # route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         158.69.64.1     0.0.0.0         UG    1024   0        0 eth0
158.69.64.1     0.0.0.0         255.255.255.255 UH    1024   0        0 eth0
server-2 core # ip link set eth1 up
server-2 core # route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         158.69.64.1     0.0.0.0         UG    1024   0        0 eth0
0.0.0.0         192.168.2.1     0.0.0.0         UG    1024   0        0 eth1
0.0.0.0         192.168.2.1     0.0.0.0         UG    1024   0        0 eth1
158.69.64.1     0.0.0.0         255.255.255.255 UH    1024   0        0 eth0
169.254.169.254 192.168.2.3     255.255.255.255 UGH   1024   0        0 eth1
192.168.2.0     0.0.0.0         255.255.255.0   U     0      0        0 eth1
192.168.2.1     0.0.0.0         255.255.255.255 UH    1024   0        0 eth1
server-2 core # ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether fa:16:3e:b9:20:e4 brd ff:ff:ff:ff:ff:ff
    inet 158.69.74.243/32 brd 158.69.74.243 scope global dynamic eth0
       valid_lft 77748sec preferred_lft 77748sec
    inet6 fe80::f816:3eff:feb9:20e4/64 scope link
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether fa:16:3e:17:91:fb brd ff:ff:ff:ff:ff:ff
    inet 192.168.2.5/24 brd 192.168.2.255 scope global dynamic eth1
       valid_lft 86283sec preferred_lft 86283sec
    inet6 fe80::f816:3eff:fe17:91fb/64 scope link
       valid_lft forever preferred_lft forever

Why does the Ubuntu VM work with DHCP for this private network while CoreOS adds the extra routes?

Do you really think the problem is with the OVH DHCP server?

Thanks for your help, Alex. I'm starting to understand how the networking works, but I still think it might be a bug in CoreOS here and not in OpenStack.

You could take a look at https://www.ovh.co.uk/g2162.use_vrack_and_private_networks_with_public_cloud_instances to see how OVH documents using private networks on its Public Cloud, in case that helps point to where the bug lies.

Kevin

Alex Crawford

Nov 12, 2016, 7:10:35 PM
to Kevin Heatwole, CoreOS User
On 11/12, Kevin Heatwole wrote:
> OVH does not provide the CoreOS image I'm using. I uploaded the image
> myself using glance:
>
> $ wget https://alpha.release.core-os.net/amd64-usr/current/coreos_production_openstack_image.img.bz2
> $ bunzip2 coreos_production_openstack_image.img.bz2
> $ glance image-create --name "CoreOS Alpha 1221.0.0" --container-format bare --disk-format qcow2 --file coreos_production_openstack_image.img

Okay, so that means you can use Ignition to configure the network if we
can't figure out what is going on with DHCP.

> As you can see, Ubuntu didn't add 2 default routes for ens4 (only the 2
> specific private network routes).

That's interesting. Can you try running systemd-networkd with debugging
enabled so we can see what it's doing? You can add the following to a
networkd drop-in and restart networkd:

[Service]
Environment=SYSTEMD_LOG_LEVEL=debug
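
Concretely, something like this lays the drop-in down (sketched against a scratch prefix so it can be tried without root; set DEST=/ on the real host, then run `systemctl daemon-reload && systemctl restart systemd-networkd` as root):

```shell
# Stage a networkd drop-in that enables debug logging. DEST defaults to a
# scratch prefix; set DEST=/ (and run as root) to install it for real.
DEST="${DEST:-/tmp/networkd-demo}"
DIR="$DEST/etc/systemd/system/systemd-networkd.service.d"
mkdir -p "$DIR"
cat > "$DIR/10-debug.conf" <<'EOF'
[Service]
Environment=SYSTEMD_LOG_LEVEL=debug
EOF
cat "$DIR/10-debug.conf"
```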

-Alex

Kevin Heatwole

Nov 12, 2016, 8:35:59 PM
to CoreOS User, ktwa...@gmail.com
Okay. I added the debug flag and attached the log file here.

The only thing that looks suspicious to me is the following lines:
Nov 13 00:39:57 server-2 systemd-networkd[1575]: eth1: DHCPv4 address 192.168.2.5/24 via 192.168.2.1
Nov 13 00:39:57 server-2 systemd-networkd[1575]: eth1: Setting transient hostname: 'host-192-168-2-5'
Nov 13 00:39:57 server-2 systemd-networkd[1575]: Sent message type=method_call sender=n/a destination=org.freedesktop.hostname1 object=/org/freedesktop/hostname1 interface=org.freedesktop.hostname1 member=SetHostname cookie=22 reply_cookie=0 error=n/a
Nov 13 00:39:57 server-2 systemd-networkd[1575]: Got message type=method_return sender=:1.124 destination=:1.125 object=n/a interface=n/a member=n/a cookie=7 reply_cookie=22 error=n/a
Nov 13 00:39:57 server-2 systemd-networkd[1575]: eth1: Updating address: 192.168.2.5/24 (valid for 1d)
 
Could it be that the method_call to org.freedesktop.hostname1 is causing the three extra routes to be created? It looks like eth0 hasn't gotten an IP yet (since it is configured just after eth1), so there really aren't any routes to the internet yet (via eth0) for setting a transient hostname.

The 3 extra routes are:

0.0.0.0         192.168.2.1     0.0.0.0         UG    1024   0        0 eth1
0.0.0.0         192.168.2.1     0.0.0.0         UG    1024   0        0 eth1
192.168.2.1     0.0.0.0         255.255.255.255 UH    1024   0        0 eth1

I see that it added eth1 (link 3) before eth0 (link 2) and started a DHCP client on eth1 before eth0.

This is way beyond me. I found a page that may help (https://www.freedesktop.org/wiki/Software/systemd/hostnamed/), but I don't understand why this method call would cause these extra routes to be created.

Of course, the problem may be elsewhere.

What do you think?
networkd.log

Kevin Heatwole

Nov 12, 2016, 10:01:48 PM
to CoreOS User, ktwa...@gmail.com
Okay. I found documentation on systemd that seems to indicate that if UseHostname=false is set in the [DHCP] section, networkd will not attempt to set the transient hostname.

So, I copied zz-default.network into /etc/systemd/network and added UseHostname=false, but it didn't fix the problem. No more calls to set the transient hostname, but I still get the extra routes added for eth1.

Then I noticed in the journalctl logs that systemd-timesyncd was running in the middle of the systemd-networkd restart. So I stopped systemd-timesyncd and restarted systemd-networkd.

But, this didn't prevent the extra routes for eth1.

I now have no clue why these routes are being added. I looked at the source for systemd-networkd, and the code does appear to set some routes, but I didn't spend enough time to figure out whether this might be a systemd bug.

It may simply be that because eth1 is added first, the extra routes are added then, even though there is no route to the internet on eth1 and we only need the eth0 default route.

Anyway, I'll keep looking a bit more tomorrow...

Kevin Heatwole

Nov 12, 2016, 10:58:13 PM
to CoreOS User, ktwa...@gmail.com
I think I figured this out.

I added a file to /etc/systemd/network that matches eth1 and sets UseRoutes=false. After a reboot, I no longer get those three extra routes:

CoreOS alpha (1221.0.0)
core@server-2 ~ $ route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         158.69.64.1     0.0.0.0         UG    1024   0        0 eth0
158.69.64.1     0.0.0.0         255.255.255.255 UH    1024   0        0 eth0
192.168.2.0     0.0.0.0         255.255.255.0   U     0      0        0 eth1
core@server-2 ~ $ ip addr list
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether fa:16:3e:b9:20:e4 brd ff:ff:ff:ff:ff:ff
    inet 158.69.74.243/32 brd 158.69.74.243 scope global dynamic eth0
       valid_lft 86353sec preferred_lft 86353sec
    inet6 fe80::f816:3eff:feb9:20e4/64 scope link
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether fa:16:3e:17:91:fb brd ff:ff:ff:ff:ff:ff
    inet 192.168.2.5/24 brd 192.168.2.255 scope global dynamic eth1
       valid_lft 86353sec preferred_lft 86353sec
    inet6 fe80::f816:3eff:fe17:91fb/64 scope link
       valid_lft forever preferred_lft forever
core@server-2 ~ $ cat /etc/systemd/network/*
[Match]
Name=eth1

[Network]
DHCP=yes

[DHCP]
UseMTU=true
UseDomains=true
UseHostname=false
UseRoutes=false

Everything looks good to me.
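
For new instances, the same unit could be baked in at provision time via Ignition rather than edited by hand; here is a sketch (field names per the Ignition v2 spec's networkd section; the unit name is arbitrary):

```json
{
  "ignition": { "version": "2.0.0" },
  "networkd": {
    "units": [
      {
        "name": "10-eth1.network",
        "contents": "[Match]\nName=eth1\n\n[Network]\nDHCP=yes\n\n[DHCP]\nUseMTU=true\nUseDomains=true\nUseHostname=false\nUseRoutes=false\n"
      }
    ]
  }
}
```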

Final question: Should I report this to OVH just in case there is something they are doing in OpenStack to cause default routes on private networks that go nowhere? Or is this working as you would expect?

Alex Crawford

Nov 14, 2016, 4:28:15 PM
to Kevin Heatwole, CoreOS User
On 11/12, Kevin Heatwole wrote:
> Final question: Should I report this to OVH just incase there is something
> they are doing in OpenStack to cause default routes on private networks
> that go nowhere? Or, is this working as you would expect it to?

I'd still report this to OVH since this isn't working the way I would
expect.

-Alex

Kevin Heatwole

Nov 14, 2016, 4:50:59 PM
to CoreOS User, ktwa...@gmail.com
On Monday, November 14, 2016 at 4:28:15 PM UTC-5, Alex Crawford wrote:
> I'd still report this to OVH since this isn't working the way I would
> expect.

It turns out that the OVH API has an option to specify that a subnet of a private network should not have an associated gateway IP. Their GUI does not expose this option and always creates the network with a gateway enabled. I'm not sure where this gateway routes traffic, but from what I can tell, the gateway associated with the private network has no connectivity to the internet. Perhaps it routes to the other private networks in my VLANs? I can create up to 4,000 private networks, so maybe these gateways only provide routing between all of my defined private networks. Each private network has a different VLAN ID.

Anyway, after I created a private network without a gateway (using the OVH API rather than their GUI), the VMs started on that private network all have proper routes: I can ssh into the VMs using the public IP and between the VMs using the private IPs.

Thanks for your help in tracking this down.