how do i place a unique id for each minion in a file?


UnlimitedMoops

Mar 28, 2015, 2:41:21 PM3/28/15
to salt-...@googlegroups.com
hi all..

I'm trying to place a unique id, from 1 to 255, in two text files on a minion. 

My first try uses a role defined in /etc/salt/minion on each minion:

grains:
  roles:
    - wwwserver

On the master, /srv/salt/top.sls:

base:
  '*':
    - common_packages

and /srv/salt/common_packages.sls contains:

/etc/id.txt
  file:
    - managed
    - source: salt://id
    - template: jinja
    - context:
      ip: {{ salt['network.interfaces']()['eth0']['inet'][0]['address'] }}

this is /srv/salt/id:

{% for server, addrs in salt['mine.get']['roles:wwwserver', 'network.ip_addrs', expr_form='grain').items() %}
{% if {{ ip }} == {{ addrs[0] }} %}
{{ loop.index }}
{% endif %}
{% endfor %}

There seems to be something wrong with /srv/salt/id. When I run
sudo salt '*' state.highstate
it returns
Data failed to compile
-------------
Rendering SLS 'base:common_packages' failed: Unknown yaml render error; line 2

I suspect there is a problem with the Jinja 'if' statement. Is there a good way to reference a variable inside the 'if' statement? It seems like it might be handy for another issue I'm tackling as well.

This approach will run into problems anyway: there is no guarantee that loop.index will stay consistent if minions are added or removed, or if multiple files reference it. I ran grains.items but didn't see anything that looked like it might be a unique ID. Does someone know of a good way to do this with Salt?

thanks!

UnlimitedMoops

Mar 28, 2015, 3:28:24 PM3/28/15
to salt-...@googlegroups.com
Using ext_pillar to maintain a dictionary of node-ID-to-number mappings seems like a possible way to handle it... I'm open to suggestions.

UnlimitedMoops

Mar 28, 2015, 11:43:36 PM3/28/15
to salt-...@googlegroups.com
It's still unclear to me how to do this properly within the Salt infrastructure. It needs to be done in two passes: first, the IP addresses of all of the minions in the 'server' role have to be mapped to unique IDs; then the resulting dictionary has to be used to append the IP addresses and/or IDs to files on each of the minions. A persistent dictionary on the Salt master (e.g. a JSON file) should be sufficient for storage.

Florian Ermisch

Mar 29, 2015, 4:02:59 AM3/29/15
to salt-...@googlegroups.com

The render error is because of a missing colon:

On 28 March 2015 at 19:41:21 CET, UnlimitedMoops <david....@gmail.com> wrote:
>[…]
>and /srv/salt/common_packages.sls contains:
>
>/etc/id.txt
>  file:
>    - managed
>    - source: salt://id
>    - template: jinja
>    - context:
>      ip: {{ salt['network.interfaces']()['eth0']['inet'][0]['address'] }}
>[…]

It has to start with:

/etc/id.txt:
  file:
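For completeness, here is how the whole pair might look with that colon fix, plus the two Jinja problems in /srv/salt/id corrected (the stray '[' after mine.get should be '(', and variables inside {% if %} are written bare, without the {{ }} delimiters). A sketch, not tested:

```yaml
# /srv/salt/common_packages.sls
/etc/id.txt:
  file:
    - managed
    - source: salt://id
    - template: jinja
    - context:
        ip: {{ salt['network.interfaces']()['eth0']['inet'][0]['address'] }}
```

```jinja
{# /srv/salt/id #}
{% for server, addrs in salt['mine.get']('roles:wwwserver', 'network.ip_addrs', expr_form='grain').items() %}
{% if ip == addrs[0] %}
{{ loop.index }}
{% endif %}
{% endfor %}
```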

Regards, Florian

UnlimitedMoops

Mar 29, 2015, 11:32:07 AM3/29/15
to salt-...@googlegroups.com, florian...@alumni.tu-berlin.de
Thanks again, Florian. I think the salt mine may be the best way to pool the IP addresses of each minion; that is how it is done in this hosts-file formula: https://github.com/saltstack-formulas/hostsfile-formula.
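For the record, mine.get only returns data the minions have been told to publish; the hostsfile-formula approach relies on a mine_functions entry along these lines in the minion config or pillar (the interface name here is an assumption):

```yaml
# /etc/salt/minion (or pillar)
mine_functions:
  network.ip_addrs:
    - eth0
```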

I think I need to persistently associate a unique ID with each IP address before updating the minion files. A rough straw-man approach, in Python-ish pseudocode:

def nextID(ids, lo, hi):
    # return the smallest unused ID in [lo, hi]
    for i in range(lo, hi + 1):
        if i not in ids:
            return i
    raise RuntimeError('no free IDs left')

def updateMinionID():
    # loadIDMap/saveIDMap/minionIPsFromMine/updateMine are placeholders
    # for persistent storage on the salt master and the salt mine
    idDict = loadIDMap()  # {IP address: ID} dictionary

    ids = set()
    for ip, nodeID in idDict.items():
        if nodeID in ids:
            print('this should never happen: duplicate ID', nodeID)
            idDict[ip] = None
        else:
            ids.add(nodeID)

    newDict = {}
    for ip in minionIPsFromMine():  # IP addresses pooled in the salt mine
        if idDict.get(ip) is None:
            newID = nextID(ids, 1, 255)
            ids.add(newID)
            newDict[ip] = newID
        else:
            newDict[ip] = idDict[ip]

    saveIDMap(newDict)   # replace the old dict in persistent storage
    updateMine(newDict)  # update the salt mine with the newDict entries
    # ...
    # state.highstate then updates files on minions using salt mine data

def onMinionAddRemove():
    # ...
    updateMinionID()

It's assumed that entry to updateMinionID() is atomic. Perhaps the salt mine can be used for both dict and newDict without additional persistent storage.

BTW, this is all to populate the zookeeper myid file. Steffen Roegner has written a formula for this at https://github.com/saltstack-formulas/zookeeper-formula/blob/master/zookeeper/settings.sls, but a comment in settings.sls acknowledges that using the node_count to determine the unique ID is not 'pretty'. I suspect that, if a node is automatically added or removed by autoscaling or some failover mechanism, the zookeeper ID could change, and zookeeper might not react well. The formula is highly declarative, so my pseudocode may well be unnecessary. I've emailed Steffen in the hope that he may be able to help if he has time.
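Once a unique number is exposed in pillar (however it ends up being generated), writing myid itself is a one-stanza state - a sketch assuming a pillar key named node_id and the usual data directory:

```yaml
# e.g. zookeeper/myid.sls
/var/lib/zookeeper/myid:
  file:
    - managed
    - contents_pillar: node_id
    - makedirs: True
```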

UnlimitedMoops

Mar 29, 2015, 11:47:07 AM3/29/15
to salt-...@googlegroups.com, florian...@alumni.tu-berlin.de
BTW, I'm a bit suspicious of this need to populate myid with a hardwired integer from an external source. It would be nice if zookeeper could handle this transparently, since it knows the context better than external configuration-management programs. I have looked at other possibilities, like Consul, but the stack is apparently tightly integrated with zookeeper, so it seems a solution is needed.

UnlimitedMoops

Mar 29, 2015, 1:51:38 PM3/29/15
to salt-...@googlegroups.com, florian...@alumni.tu-berlin.de
Just heard back from Steffen. If there is a better way to handle zookeeper nodes in SaltStack then I'll revisit this, but I think that for now I'll adopt Steffen's approach. Zookeeper should be handling changes to node topology in any case, so it's perhaps not worth spending too much time on this. Consul seems to be better in this regard, based on a quick look at its install procedure. Thanks again for everyone's help.

Eugene Chepurniy

Mar 30, 2015, 3:45:53 AM3/30/15
to salt-...@googlegroups.com, florian...@alumni.tu-berlin.de
Hi, thanks for the zookeeper thread - I'm having exactly the same problem with the ID file.
Have you found any solution?
BTW, if you use the IP address as a dependency in ID generation, what will happen in the case of an IP change (a very common case in cloud environments)?

On Sunday, 29 March 2015 at 20:51:38 UTC+3, UnlimitedMoops wrote:

UnlimitedMoops

Mar 31, 2015, 10:03:09 AM3/31/15
to salt-...@googlegroups.com, florian...@alumni.tu-berlin.de
I'm using the approach Steffen adopted for now. Hardwiring the topology based on deployment significantly simplifies the consensus algorithm, so I can see why Zookeeper is doing it. I would like to manage the IDs better, but it's on the TBD list for now.

Consul avoids the hardwiring by using network hostnames as default IDs, but the user can also specify their own ID (https://www.consul.io/intro/getting-started/join.html). If a network is failing badly, then fixing the root cause is likely to be the top priority. As long as the consensus algorithm isn't corrupting data or magnifying the effects of the failure, using hostnames may be OK. This would obviously make deployment simpler.

Nicholas Capo

Mar 31, 2015, 10:34:56 AM3/31/15
to salt-...@googlegroups.com, florian...@alumni.tu-berlin.de

I haven't used zookeeper, but I would heartily recommend Consul. We are using it for service discovery, internal leader election, and aggressive load balancing (haproxy/nginx/consul-template) for all our services.

Nicholas

