Facter returns IPv4 address for IPv6 when IPv6 is disabled


James Perry

Oct 5, 2017, 1:59:07 PM
to Puppet Developers
I was generating a report from Foreman using the Hammer CLI to list my hosts for management on my Dev box. I started noticing that I wasn't seeing any IP addresses for SUSE 11 SP3 hosts. Thinking it was an OS-specific issue, I checked my PROD environment, which has a large number of SUSE 11 SP1 - SP4 hosts under management. All of the SUSE 11 SP1 and SP2 hosts had IPs.

Parsing out only the SUSE 11 SP3 and SP4 hosts, I noticed that some had IPs and others did not. When I ran /etc/puppetlabs/puppet/node.rb <host>, I got both IPv4 and IPv6 addresses for the servers showing an IP in the Hammer CLI report.

In the output below, servera is the test box and serverb is the production box.  

Both are configured exactly the same, have the same OS patches, and are using puppet-agent-1.8.0-1.sles11. The only difference is that the PROD server doesn't have IPv6 enabled.

Is this a bug or by design? I don't want to go modifying node.rb to make this work correctly with Foreman if this happens to be a bug.

Thanks! 

-----------------------------------------------------------------------------------------

Working Host:
----|--------------------------------|-----------------------|------------------------------------|-----------------|------------------
ID  | NAME                           | OPERATING SYSTEM      | HOST GROUP                         | IP              | MAC
----|--------------------------------|-----------------------|------------------------------------|-----------------|------------------
15  | servera                        | SLES 11 SP3           | Linux_Default                      | 10.118.84.22    | 00:50:56:a6:77:9a

  foreman_interfaces:
  - ip: 10.118.84.22
    ip6: fe80::250:56ff:fea6:779a
    mac: 00:50:56:a6:77:9a
    name: servera
    attrs:
      mtu: 1500
      netmask6: 'ffff:ffff:ffff:ffff::'
      netmask: 255.255.255.0
      network6: 'fe80::'
      network: 10.118.84.0
    virtual: false
    link: true
    identifier: eth0

# facter networking.interfaces.eth0
{
  bindings => [
    {
      address => "10.118.84.22",
      netmask => "255.255.255.0",
      network => "10.118.84.0"
    }
  ],
  bindings6 => [
    {
      address => "fe80::250:56ff:fea6:779a",
      netmask => "ffff:ffff:ffff:ffff::",
      network => "fe80::"
    }
  ],
  ip => "10.118.84.22",
  ip6 => "fe80::250:56ff:fea6:779a",
  mac => "00:50:56:a6:77:9a",
  mtu => 1500,
  netmask => "255.255.255.0",
  netmask6 => "ffff:ffff:ffff:ffff::",
  network => "10.118.84.0",
  network6 => "fe80::"
}



Non-working Host:

----|--------------------------------|-----------------------|------------------------------------|-----------------|------------------
ID  | NAME                           | OPERATING SYSTEM      | HOST GROUP                         | IP              | MAC
----|--------------------------------|-----------------------|------------------------------------|-----------------|------------------
103 | serverb                        | SLES 11 SP3           | Linux_Default                      |                 | 00:50:56:a6:50:da

  foreman_interfaces:
  - ip:
    ip6: ''
    mac: 00:50:56:a6:50:da
    name: serverb
    attrs: {}
    virtual: false
    link: true
    identifier: eth0


# facter networking.interfaces.eth0
{
  bindings => [
    {
      address => "10.118.66.67",
      netmask => "255.255.255.0",
      network => "10.118.66.0"
    }
  ],
  bindings6 => [
    {
      address => "10.118.66.67"
    }
  ],
  ip => "10.118.66.67",
  ip6 => "10.118.66.67",
  mac => "00:50:56:a6:50:da",
  mtu => 1500,
  netmask => "255.255.255.0",
  network => "10.118.66.0"
}


Branan Riley

Oct 5, 2017, 2:05:47 PM
to puppe...@googlegroups.com
This looks like https://tickets.puppetlabs.com/browse/FACT-1475. We're aware of it, but it hasn't been a priority to fix. The Facter team has grown a bit recently, though, so I'm hopeful that we'll be able to fix things like this more quickly in the future. Unfortunately, I still can't say for certain when we'll be able to prioritize this.

--
Regards,

Branan Riley
Senior Software Engineer, Puppet, Inc.

James Perry

Oct 5, 2017, 2:37:05 PM
to Puppet Developers
Thanks. Now I know this is a known issue, and I can work with that to find a temporary workaround. As this is for a management report, I will just drop the IP field from the Hammer CLI output and do a host lookup to populate it.

At least this issue is not why I'm going crazy! :) 

James Perry

Oct 9, 2017, 3:55:38 PM
to Puppet Developers
Just some additional details I found when comparing the debug output against the code.

I don't yet know how the Facter code in networking_resolver.cc (networking_resolver::read_routing_table() and networking_resolver::populate_from_routing_table) needs to be modified. What I do see is that the call to networking_resolver::associate_src_with_iface seems to be the likely culprit.

It doesn't seem to distinguish between IPv4 and IPv6 routes. This is visible in the output for HOST A below; HOST B shows it correctly, as that host has IPv6 enabled.

While the code may well be past my rudimentary C/C++ (it has been 20+ years since I wrote in C), how hard would it be to check sysctl to see whether IPv6 is even enabled? Running sysctl -a | grep ipv6 on the hosts that show IPv4 addresses as IPv6 returns no values, since IPv6 is disabled in the kernel on those hosts.

As I don't have any IPv6-only machines, I can't test the converse. It seems easy enough to do the sysctl check in networking_resolver::read_routing_table() before the call to lth_exe::each_line(ip_command, {"-6","route","show"}... and then use an if to test for a return of 0 bytes or something similar.

I don't know what this will do further down the line if there is no IPv6 data for an interface, i.e. null where the code expects data.

Does this sound like I'm even on the right track?  
HOST A

2017-10-09 15:29:05.473574 DEBUG leatherman.execution:92 - executing command: /sbin/ip route show
2017-10-09 15:29:05.474408 DEBUG | - default via 10.118.108.1 dev eth0
2017-10-09 15:29:05.474553 DEBUG | - 10.118.108.0/24 dev eth0  proto kernel  scope link  src 10.118.108.38
2017-10-09 15:29:05.474622 DEBUG | - 127.0.0.0/8 dev lo  scope link
2017-10-09 15:29:05.474681 DEBUG | - 169.254.0.0/16 dev eth0  scope link
2017-10-09 15:29:05.474817 DEBUG leatherman.execution:556 - process exited with status code 0.
2017-10-09 15:29:05.474876 DEBUG leatherman.execution:92 - executing command: /sbin/ip -6 route show
2017-10-09 15:29:05.475665 DEBUG | - default via 10.118.108.1 dev eth0
2017-10-09 15:29:05.475787 DEBUG | - 10.118.108.0/24 dev eth0  proto kernel  scope link  src 10.118.108.38
2017-10-09 15:29:05.475855 DEBUG | - 127.0.0.0/8 dev lo  scope link
2017-10-09 15:29:05.475913 DEBUG | - 169.254.0.0/16 dev eth0  scope link
2017-10-09 15:29:05.476047 DEBUG leatherman.execution:556 - process exited with status code 0.


HOST B
2017-10-09 15:36:32.331306 DEBUG leatherman.execution:92 - executing command: /sbin/ip route show
2017-10-09 15:36:32.332170 DEBUG | - default via 10.118.108.1 dev eno16777984 proto static metric 100
2017-10-09 15:36:32.332269 DEBUG | - 10.118.108.0/24 dev eno16777984 proto kernel scope link src 10.118.108.26 metric 100
2017-10-09 15:36:32.332352 DEBUG leatherman.execution:556 - process exited with status code 0.
2017-10-09 15:36:32.332411 DEBUG leatherman.execution:92 - executing command: /sbin/ip -6 route show
2017-10-09 15:36:32.333430 DEBUG | - unreachable ::/96 dev lo metric 1024 error -113
2017-10-09 15:36:32.333511 DEBUG | - unreachable ::ffff:0.0.0.0/96 dev lo metric 1024 error -113
2017-10-09 15:36:32.333588 DEBUG | - unreachable 2002:a00::/24 dev lo metric 1024 error -113
2017-10-09 15:36:32.333657 DEBUG | - unreachable 2002:7f00::/24 dev lo metric 1024 error -113
2017-10-09 15:36:32.333727 DEBUG | - unreachable 2002:a9fe::/32 dev lo metric 1024 error -113
2017-10-09 15:36:32.333800 DEBUG | - unreachable 2002:ac10::/28 dev lo metric 1024 error -113
2017-10-09 15:36:32.333868 DEBUG | - unreachable 2002:c0a8::/32 dev lo metric 1024 error -113
2017-10-09 15:36:32.333935 DEBUG | - unreachable 2002:e000::/19 dev lo metric 1024 error -113
2017-10-09 15:36:32.334000 DEBUG | - unreachable 3ffe:ffff::/32 dev lo metric 1024 error -113
2017-10-09 15:36:32.334069 DEBUG | - fe80::/64 dev eno16777984 proto kernel metric 256
2017-10-09 15:36:32.334208 DEBUG leatherman.execution:556 - process exited with status code 0.
