dynamic /etc/hosts


Peter Romfeld

Jan 20, 2014, 1:52:44 AM
to puppet...@googlegroups.com
Hi,

I just started learning Puppet and still have problems understanding the more advanced parts of the Puppet language.

I am using the Puppet example42/hosts module to have a dynamic /etc/hosts file.
It's working fine so far, but I have one more requirement: I need to force an update on all other nodes if one node changes or gets added.

Thanks,
Peter

Dirk Heinrichs

Jan 20, 2014, 1:57:28 AM
to puppet...@googlegroups.com
On 20.01.2014 07:52, Peter Romfeld wrote:

I am using the Puppet example42/hosts module to have a dynamic /etc/hosts file.
It's working fine so far, but I have one more requirement: I need to force an update on all other nodes if one node changes or gets added.

This is what it is: an example. In the real world, you'd rather set up a DNS server instead of messing with /etc/hosts.

HTH...

    Dirk
--

Dirk Heinrichs, Senior Systems Engineer, Engineering Solutions
Recommind GmbH, Von-Liebig-Straße 1, 53359 Rheinbach
Tel: +49 2226 1596666 (Ansage) 1149
Email: d...@recommind.com
Skype: dirk.heinrichs.recommind
www.recommind.com

Felix Frank

Jan 20, 2014, 8:40:14 AM
to puppet...@googlegroups.com
Hi,

On 01/20/2014 07:57 AM, Dirk Heinrichs wrote:
>> I am using the Puppet example42/hosts module to have a dynamic
>> /etc/hosts file.
>> It's working fine so far, but I have one more requirement: I need to
>> force an update on all other nodes if one node changes or gets added.
>
> This is what it is: an example. In the real world, you'd rather set up a
> DNS server instead of messing with /etc/hosts.

you forgot the mandatory '</troll>' trailing the last statement (I hope :-)

Peter, what do you mean by "forcing" an update? Do you need to trigger
an immediate puppet run?

Felix

Peter Romfeld

Jan 20, 2014, 9:37:29 PM
to puppet...@googlegroups.com
Yes.
I have a cluster where all nodes must use hostnames, so we use /etc/hosts. I want to configure autoscaling for this cluster, but my biggest problem is the hosts file: when a new node comes up, all other nodes need to be notified that the hosts file changed so that they pull the new file.

Thanks for help,
Peter

Dirk Heinrichs

Jan 21, 2014, 1:41:22 AM
to puppet...@googlegroups.com
On 21.01.2014 03:37, Peter Romfeld wrote:

I have a cluster where all nodes must use hostnames, so we use /etc/hosts. I want to configure autoscaling for this cluster, but my biggest problem is the hosts file: when a new node comes up, all other nodes need to be notified that the hosts file changed so that they pull the new file.

Contrary to what Felix thinks, my previous message was meant seriously. I don't see a trivial solution to your problem that involves Puppet and /etc/hosts. You'd need to:

1) Have the new node update the hosts file that Puppet delivers. What if two nodes come up at the same time? You'd need proper locking.
2) Puppet agents only run every 30 minutes, so you'll have a delay until all other nodes know about the new one.
3) What happens when a node goes down?

OTOH, what you want to do can be achieved with DHCP and DNS, such that the DHCP server updates DNS when a new client comes up.

Bye...

Peter Romfeld

Jan 21, 2014, 2:07:34 AM
to puppet...@googlegroups.com
Well, in the end I would have to add DNS health checks to the node spawning as well. The Puppet /etc/hosts approach just looked nice and easy.
I am on Amazon AWS, but I should be able to do it with Nagios and some scripts.


--
You received this message because you are subscribed to the Google Groups "Puppet Users" group.
To unsubscribe from this group and stop receiving emails from it, send an email to puppet-users...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/puppet-users/52DE1692.1060703%40recommind.com.

For more options, visit https://groups.google.com/groups/opt_out.


Jose Luis Ledesma

Jan 21, 2014, 2:25:00 AM
to puppet...@googlegroups.com
I think this could be accomplished with exported resources: every node exports a file with its IP and collects the ones from the other nodes. Then, with a custom script, you could verify whether the entry is in the hosts file.


About the off-topic DHCP vs. hosts file: most clusters like to have the IPs defined in the hosts file; e.g., in an HACMP cluster it is mandatory. Also, IMHO, it is not a good idea to use DHCP for cluster nodes.

Jose Luis Ledesma

Jan 21, 2014, 2:31:27 AM
to puppet...@googlegroups.com
In fact, I think it is even easier with exported host resources. See: https://groups.google.com/forum/m/#!topic/puppet-users/uAxbiIYH6Q4
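For illustration, the exported-host-resources pattern looks roughly like this; a minimal sketch assuming storeconfigs/PuppetDB is enabled and using the standard $::fqdn, $::ipaddress and $::hostname facts (the class name cluster::hosts is made up):

```puppet
# Include this class on every node in the cluster.
class cluster::hosts {
  # Export this node's own /etc/hosts entry so that peers can collect it.
  @@host { $::fqdn:
    ensure       => present,
    ip           => $::ipaddress,
    host_aliases => [$::hostname],
  }

  # Collect the host entries exported by all nodes, including this one.
  Host <<| |>>
}
```

Each agent run then picks up any newly exported entries, so a freshly spawned node shows up in every peer's /etc/hosts on their next run.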

Dirk Heinrichs

Jan 21, 2014, 2:44:42 AM
to puppet...@googlegroups.com
On 21.01.2014 08:25, Jose Luis Ledesma wrote:

I think this could be accomplished with exported resources: every node exports a file with its IP and collects the ones from the other nodes. Then, with a custom script, you could verify whether the entry is in the hosts file.

But still, how do you cope with nodes going away?


About the off-topic DHCP vs. hosts file: most clusters like to have the IPs defined in the hosts file; e.g., in an HACMP cluster it is mandatory. Also, IMHO, it is not a good idea to use DHCP for cluster nodes.

Why is it a bad idea? DHCP can give each node the same address and name every time, based on the MAC address. It's like a static IP, with the benefit of automatic DNS updates.

Dick Davies

Jan 21, 2014, 5:59:18 AM
to puppet...@googlegroups.com
The fundamental problem is that such information is only going to be accurate as of the last Puppet run; it's not realtime. Hence the need for some sort of external orchestration (which you'd get for free with DNS/DHCP data). Other options are of course available.

On 21 January 2014 07:25, Jose Luis Ledesma <joseluis...@gmail.com> wrote:
> I think this could be accomplished with exported resources: every node exports a file with its IP and collects the ones from the other nodes. Then, with a custom script, you could verify whether the entry is in the hosts file.
>
> About the off-topic DHCP vs. hosts file: most clusters like to have the IPs defined in the hosts file; e.g., in an HACMP cluster it is mandatory. Also, IMHO, it is not a good idea to use DHCP for cluster nodes.

Felix Frank

Jan 21, 2014, 7:08:55 AM
to puppet...@googlegroups.com
On 01/21/2014 07:41 AM, Dirk Heinrichs wrote:
> Contrary to what Felix thinks, my previous message was meant

I thought so, but have issues with the notion that /etc/hosts is to be
ignored in favor of DNS in any production environment.

> seriously. I don't see a trivial solution to your problem that involves
> Puppet and /etc/hosts. You'd need to:

Agreed.

> 1) Have the new node update the hosts file that Puppet delivers. What if
> two nodes come up at the same time? You'd need proper locking.

I don't know how that module works. However, in vanilla puppet, each
machine would export a host{} resource to be collected by all peers.

> 2) Puppet agents only run every 30 minutes, so you'll have a delay until
> all other nodes know the new one.

More or less. There are ways to work around this limitation, e.g. agent
orchestration using mco etc.

> 3) What happens when a node goes down?

Good point, I'm not sure about the purging of stale exports.

In the case of a planned shutdown, the node could have a catalog compiled
without the export, so that Puppet would take care of the cleanup.
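If stale entries are a concern, Puppet's resources metatype can also purge host entries it doesn't manage; a sketch (note this is an aggressive setting: it deletes every /etc/hosts line Puppet knows nothing about, including localhost entries unless those are also managed, so verify before using):

```puppet
# Purge /etc/hosts entries that are not managed by Puppet.
# Combined with the collected exports, entries from departed nodes
# disappear once their exports are deactivated on the master.
resources { 'host':
  purge => true,
}
```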

> OTOH, what you want to do can be achieved with DHCP and DNS, such that
> the DHCP server updates DNS when a new client comes up.

That, or a custom (peer-to-peer) host file management harness (which is
likely the worse choice).

I concur that this is not in Puppet's ballpark.

Cheers,
Felix