Complex setup for pillars, using pillarstack


Eric Veiras Galisson

Jul 2, 2018, 10:17:16 AM
to salt-...@googlegroups.com
Hello everyone,

I'm working in a team where we use SaltStack successfully to manage our infrastructure. Over the past two years we have done some refactoring to adapt the code to our needs. Some of the changes involved moving from custom states/formulas to community formulas where possible, extracting configuration values hard-coded in states/templates into pillar, switching to pillarstack to get 'pillar in pillar' functionality, etc.

But our pillarstack setup is quite complex/complicated and we are running into problems we don't know how to solve, except by piling hacks on hacks.
I feel there must be a 'clean' solution but we can't find it, so I'm asking here in the hope that someone can help us or share some tips from their own setup.


First, our current setup:
- using salt 2018.3 (recently switched)
- two git repositories:
  - 'main repo': states, formulas and 'general' pillar data, managed by the team responsible for the infrastructure
  - 'host repo': host specific pillar data, managed by a different team responsible for provisioning and bootstrapping servers
- we use pillarstack
(- formulas are cloned into our repo, but could be managed by gitfs; I think it doesn't change our current problem)

Pillarstack loading of data follows this logic:
step 1) load all common values from 'main repo'
step 2) load environment specific data from 'main repo'
step 3) load host specific data from 'host repo'
step 4) load more data (we name it 'post_include') from 'main repo' depending on the previously loaded host specific data -> this is part of the 'pillar in pillar' need we have, and the tricky part.
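
For illustration, a simplified sketch of a stack config implementing these four steps could look like this (the paths, the `environment` grain and the `post_include` pillar key are placeholders, not our exact layout):

```jinja
{#- stack.cfg: each rendered line names a YAML file to merge, in order #}
{%- set environment = __grains__.get('environment', 'prod') %}  {#- placeholder: however the env is derived #}
main/common/*.yml                                  {#- step 1: common values #}
main/environments/{{ environment }}/*.yml          {#- step 2: environment data #}
hosts/{{ minion_id }}.yml                          {#- step 3: host data #}
{#- step 4: 'post_include' driven by the data merged in the previous steps #}
{%- for inc in __stack__['traverse'](stack, 'post_include', []) %}
main/post_include/{{ inc }}.yml
{%- endfor %}
```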


This describes our setup but not our objectives:

1. we want to have in our 'main repo' all our common infrastructure and logic
2. we want to have host specific data in our 'host repo' which is managed by another team responsible for day-to-day servers operations
3. we want to be able to define 'services' in our host pillar
4. we want to allow the possibility of adding or overriding config data in the host pillar


1 and 2 are, I think, easily understood, but may not be the most critical needs: if we can't find another solution they could be changed, but we prefer to keep them that way.


3. is an attempt to use the 'roles and profiles' pattern from Puppet (which I used before), adapted to SaltStack.
The host pillar contains something like

services:
  - nginx
  - mysql

and then in step 4 we load pillars depending on this 'services' list (here the nginx and mysql pillars).

I think this need is quite standard; the only other possibility I see would be to store this value in grains, which I don't like because anyone who gains control of the server could then access all of those services' pillar data.


4. is where our code is becoming quite complex, and this needs some explanation.

We want to be able to add data (e.g. define a new apt repo or some specific packages), override data (like nginx:port: 8080 instead of nginx:port: 443) or completely remove previous data and add new data (e.g. remove all common apt repos and add a specific one).

In theory all of this is already possible with pillarstack using the merge-last, merge-first, remove and overwrite strategies. But it adds a lot of complexity to the code (both in the main repo and the host repo) and greatly reduces readability.
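
For reference, the strategies are expressed with a `__` key in the YAML. With made-up key names, a host pillar override looks roughly like:

```yaml
# host pillar (key names illustrative)
nginx:            # dicts use merge-last by default, so this just overrides the port
  port: 8080

apt_repos:        # drop all common repos and keep only this one
  __: overwrite
  internal: 'deb https://apt.internal.example stable main'
```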



So... this e-mail is getting quite long already, but some more information:

- there seems to be a tendency in other config management tools (Puppet, Chef, Ansible) to use a 'hierarchical datastore' like Hiera or the newer Jerakia. The latter seems quite nice, and I think it could be interesting to use, but
  - I don't know if it interconnects well with Salt, as Salt is never explicitly named. (I vaguely remember having heard/read that the difference between Salt and Puppet/Chef is that Salt expects all its pillars to be available in one step, while those tools tend to answer a specific request for a specific key. Are those tools compatible with Salt or not?)
  - I don't know if it really fits what we want to do


OK, I will stop here as I think most of the information is present.

I would really like (and would benefit) to know what more experienced Salt users think about our setup, what improvements could be made, and possibly any solution to simplify it.

I can give more details if needed. 


--
Eric Veiras Galisson

Sjoerd Oostdijck

Jul 4, 2018, 3:40:12 AM
to salt-...@googlegroups.com

On 2 Jul 2018, at 16:00, Eric Veiras Galisson wrote:

Pillarstack loading of data follows this logic:

step 1) load all common values from 'main repo'
step 2) load environment specific data from 'main repo'
step 3) load host specific data from 'host repo'
step 4) load more data (we name it 'post_include') from 'main repo' depending on previous host specific data -> this is part of the 'pillar in pillar' need we have, and the tricky part.


This describes our setup but not our objectives:

Hi Eric,

I think I’ve got something quite similar to what you’re after, the difference being that I’ve been able to get here without pillarstack (I’m still planning to look at it for other reasons, though). I’ll explain a bit of what I’ve got with inline replies.

1. we want to have in our '*main repo*' all our common infrastructure and
logic

I’ve got a separate git repo that has all my formulas. This is pretty much exactly like the formulas you can find at https://github.com/saltstack-formulas
The only thing to note is that my repo is used over GitFS from my salt master. The beauty is that all the default settings live together with your formulas inside defaults.yaml.

2. we want to have host specific data in our '*host repo*' which is managed


by another team responsible for day-to-day servers operations

My server specific settings are in a subdir in my pillar repo. I include a file with the minion specific variables via a small trick in my pillar top.sls, which is this:

base:
  '*':
    # translate the "." in the minion id to "_"
    - server_vars.{{ grains['id']|replace('.', '_') }}

In this folder, called server_vars, there are files matching the minion id with dashes and dots translated to _ (because you are not allowed to have '-' in state file names), for example hostname_example_com.sls.

3. we want to be able to define '*services*' in our host pillar

Inside my state top.sls I’ve used pillar PCRE targeting to apply certain formulas to minions that have a certain pillar value set. It goes something like this:

My pillar:server_vars/some_host.sls has this:

# What services are allowed on this machine
services:
  tomcat: True
  ssl-certs: True

And my state top.sls has this:

base:
  'J@services:tomcat':
    - tomcat
  'J@services:ssl-certs':
    - ssl-certs

p.s. You could make services in my pillar a YAML list instead of booleans /shrug…

4. we want to allow possibility of adding config data or overriding config
data in the host pillar

That’s what the map.jinja takes care of, which you should have in each of your formulas from point 1, in combination with importing the result of map.jinja everywhere. See the import statements at the top of most .sls files in the formulas on GitHub.
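
For example, the states in an nginx formula typically start with something like this (the exact variable and key names depend on the formula):

```jinja
{# top of an .sls file: map.jinja merges defaults.yaml, os grains and pillar #}
{% from "nginx/map.jinja" import nginx with context %}

nginx:
  pkg.installed:
    - name: {{ nginx.package }}   {# pillar can override this through map.jinja #}
```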

Good luck!

Sjoerd Oostdijck
Senior Systems Engineer - RIPE NCC

Vasiliy Tolstov

Jul 4, 2018, 6:07:22 AM
to salt-...@googlegroups.com
On Mon, 2 Jul 2018 at 17:17, Eric Veiras Galisson
<eric.veir...@gmail.com> wrote:
>
> [...]
>
> 1. we want to have in our 'main repo' all our common infrastructure and logic
> 2. we want to have host specific data in our 'host repo' which is managed by another team responsible for day-to-day servers operations
> 3. we want to be able to define 'services' in our host pillar
> 4. we want to allow possibility of adding config data or overriding config data in the host pillar
>
>

I use pillarstack and have something like what you need:
I have environments that represent datacenters, systems that represent the OS, projects that represent dedicated company stuff, and roles that can be attached to servers.
My states top.sls looks like:
{%- set archs = {'amd64': 'x86_64', 'i386': 'x86_32'} %}
{%- set system = '-'.join([salt['grains.get']('os'),
                           salt['grains.get']('osrelease').split('.', 1)[0],
                           archs.get(salt['grains.get']('osarch'), salt['grains.get']('osarch'))]).lower() %}
{%- set hostname, domainname = grains['id'].split('.', 1) %}
{%- set project = '_'.join(domainname.split('.')[-2:]) %}
{%- set environment = domainname.replace('.', '_') %}
{%- set roles = salt['pillar.get']('nodes:' + grains['id'] + ':roles', []) %}
{%- from "macros.jinja" import include with context %}
base:
  '*':
{%- for role in roles %}
    {{ include("systems/defaults/{0}".format(role)) }}
    {{ include("systems/{0}/{1}".format(system, role)) }}
{%- endfor %}
{%- for role in roles %}
    {{ include("projects/defaults/{0}".format(role)) }}
    {{ include("projects/{0}/{1}".format(project, role)) }}
{%- endfor %}
{%- for role in roles %}
    {{ include("environments/defaults/{0}".format(role)) }}
    {{ include("environments/{0}/{1}".format(environment, role)) }}
{%- endfor %}
    {{ include("nodes/{0}/{1}".format(environment, hostname)) }}


include is a macro that skips absent state files (for older SaltStack releases):
{%- macro include(path) -%}
  {%- set includes = [] -%}
  {%- if salt['cp.list_master'](prefix=path) -%}
    {%- set inc = path.replace('/','.') + '.*' %}
    {%- if inc not in includes -%}
      {%- do includes.append(inc) -%}
    {%- endif -%}
  {%- endif -%}
  {%- for inc in includes -%}
    - {{ inc }}
  {%- endfor -%}
{%- endmacro -%}


and the pillarstack config:
{%- set archs = {'amd64': 'x86_64', 'i386': 'x86_32'} -%}
{%- set system = '-'.join([__salt__['grains.get']('os'),
                           __salt__['grains.get']('osrelease').split('.', 1)[0],
                           archs.get(__salt__['grains.get']('osarch'), __salt__['grains.get']('osarch'))]).lower() %}
{%- set hostname, domainname = minion_id.split('.', 1) -%}
{%- set project = '_'.join(domainname.split('.')[-2:]) -%}
{%- set environment = domainname.replace('.', '_') -%}
{%- set roles = __stack__['traverse'](stack, 'nodes:' + minion_id + ':roles', []) -%}
{%- set includes = __stack__['traverse'](stack, 'nodes:' + minion_id + ':includes', []) -%}
{%- if not __stack__['traverse'](stack, 'nodes:' + minion_id, False) %}
nodes/{{ environment }}/{{ hostname }}.sls
nodes/{{ environment }}/{{ hostname }}/*.sls
{%- else %}
{%- for path in includes %}
{{ path }}
{%- endfor %}
{%- for role in roles %}
systems/defaults/{{ role }}/*.sls
systems/{{ system }}/{{ role }}/*.sls
projects/defaults/{{ role }}/*.sls
projects/{{ project }}/{{ role }}/*.sls
environments/defaults/{{ role }}/*.sls
environments/{{ environment }}/{{ role }}/*.sls
{%- endfor %}
{%- endif %}


In this case, as you can see, I have a Hiera-like tree:
first pillarstack includes the node data, from which I get the roles; after that
I include everything specified in the node file as includes (for
example, data for other nodes),
and then data for the specific OS, project and environment.
So I can have different states for different OS versions, and different
pillar data for different OS and environments.

> [...]



--
Vasiliy Tolstov,
e-mail: v.to...@selfip.ru

Max Arnold

Jul 5, 2018, 10:13:03 AM
to salt-...@googlegroups.com
Take a look at Reclass https://reclass.pantsfullofunix.net/operations.html and specifically this fork https://github.com/salt-formulas/reclass/ which is actively maintained. To use it you need to:

1. Remove your top.sls files (for both pillar and state trees)
2. Create a hierarchy of classes (each class can have a list of states, pillar keys and also include other classes)
3. Create one yaml file per node, and in each file assign classes and optional pillars

This way you can design your class hierarchy with arbitrary granularity and apply/override it if necessary.
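
A node file might then look like this (class and key names are made up for illustration):

```yaml
# nodes/web01.example.com.yml
classes:
  - roles.webserver      # the class pulls in states and default pillar keys
applications:
  - nginx                # extra state(s) applied to just this node
parameters:
  nginx:
    port: 8080           # node-level pillar override
```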

eric.veir...@gmail.com

Jul 5, 2018, 10:58:27 AM
to Salt-users
On Wednesday, July 4, 2018 at 9:40:12 AM UTC+2, sjoerd oostdijck wrote:

On 2 Jul 2018, at 16:00, Eric Veiras Galisson wrote:

[...]

Hi Eric,

I think I’ve got something quite similar to what you’re after. The difference being that I’ve been able to get here without pillarstack (I’m still planning to look at it for other reasons though). I’ll explain a bit what I’ve got where with inline replies.

1. we want to have in our '*main repo*' all our common infrastructure and
logic

I’ve got a separate git repo that has all my formulas. This is pretty much exactly like the formulas that you can find here https://github.com/saltstack-formulas
The only thing to note is that my repo is being used over GitFS from my salt master. The beauty is that you have all default settings together with your formulas inside your defaults.yaml.

2. we want to have host specific data in our '*host repo*' which is managed
by another team responsible for day-to-day servers operations

My server specific settings are in a subdir in my pillar repo. I include a file with the minion specific variables via a small trick in my pillar top.sls which is this:

  '*':
    # translate the "." in the minionid to "_"
    - server_vars.{{ grains['id']|replace('.', '_') }}

In this folder called server_vars there are files that match the minion id with - and dots translated to _ because you are not allowed to have - in state filenames. for example hostname_example_com.sls



I think this is equivalent to what we do in pillarstack: we load common pillars, then the host specific pillar, then the 'post-include' pillars.

 

3. we want to be able to define '*services*' in our host pillar

Inside my state top.sls I’ve used pillar PCRE targeting to apply certain formulas to minions that have a certain pillar value set. It goes something like this:

My pillar:server_vars/some_host.sls has this:

# What services are allowed on this machine
services:
  tomcat: True
  ssl-certs: True

And my state top.sls has this:

base:
  'J@services:tomcat':
    - tomcat
  'J@services:ssl-certs':
    - ssl-certs

p.s. You could make service in my pillar a yaml list instead of booleans /shrug…


This is quite the same too, except the pillarstack syntax is different.

We have something like this:

{% set services = stack['services'] %}

{% if 'nginx' in services %}
services/nginx.yml
{% endif %}


 

4. we want to allow possibility of adding config data or overriding config
data in the host pillar

That’s what the map.jinja takes care of, which you should have in each of your formulas from point 1, in combination with importing the result of map.jinja everywhere. See the import statements at the top of most .sls files in the formulas on GitHub.



Yes, I know.

 

Good luck!


Thank you.

eric.veir...@gmail.com

Jul 6, 2018, 3:52:35 AM
to Salt-users
I understand the logic; it seems good to me.
I imagine you can also include some common states before the 'for' loop.

nodes:<server>:roles contains your server roles.

 
include is a macro that skips absent state files (for older SaltStack releases):
{%- macro include(path) -%}
  {%- set includes = [] -%}
  {%- if salt['cp.list_master'](prefix=path) -%}
    {%- set inc = path.replace('/','.') + '.*' %}
    {%- if inc not in includes -%}
      {%- do includes.append(inc) -%}
    {%- endif -%}
  {%- endif -%}
  {%- for inc in includes -%}
    - {{ inc }}
  {%- endfor -%}
{%- endmacro -%}


Nice trick :)

What is 'traverse' in __stack__['traverse']? Is it something custom for your own pillars or something from pillarstack? I don't see it in the docs.

You try to load the pillar for the minion, then the roles and includes.
If you don't find anything, you load the host-specific data;
else, you only load the roles and includes pillars.

Some questions:
- why don't you load host-specific data if you already found minion data?
- where is your nodes:<minion> data stored? In the 'hostname.sls' file, or in another pillar file? (What I want is to store it in hostname.sls.)

It seems this code is meant to be processed twice: once to load the hostname data, and then again to load the roles and includes data (the else branch). But in pillarstack it will only be executed/processed once, no?


I'm not sure I completely get it; I'll need to make a PoC to see if it solves my complexity problem.

Petr Michalec

Jul 25, 2018, 10:45:29 AM
to Salt-users
Hi, I have been using reclass for the past two years at Mirantis, for really big models and hundreds of nodes, from bare metal to VMs, containers, and the applications on top of that. Actually, as of now I am considering a major update of how I use Salt and model pillar data, as I need to manage 1000 nodes with a lot of dynamic/static data, etc.

I have been playing a bit with the idea of reclass+pillarstack+etcd (ext_pillars), but it gets way too complex. I will try to sum up what was already said in this thread, provide additional requirements, and I would like to hear your ideas.

To begin, I would differentiate the needs of two types of Salt users:
  1. small setups, possibly point and shoot (a few formulas)
  2. mid-size and big, complex infrastructures where life-cycle management, even for the Salt code itself, plays an important role

(1) is very similar to the Ansible way (on a hype now ;)) thanks to its simple setup and minimal bootstrap requirements. I was missing an easy way to start with Salt, so I recently wrote a prototype of a minimalist salt-ssh/masterless/agentless setup (https://github.com/epcim/salt-gun/blob/master/Formulafile); now I am looking for a good model structure for it.

For this case I would basically vote for the minimalist approach: pick classic pillars (Eric's example above is a pretty good one to start with), unless you have a good reason for a more complex setup.

(2) is what I am going to talk about in this thread.

Topics:
- what way to structure pillar data
- what way to feed them
- what tools to use to stay DRY and do more modeling
- how to generate some data
- DC/cluster/role/minion mapping, etc.
- how to test/validate the model

First of all, pillars were designed for static data. In reality we abuse this frequently, as with the mentioned 'host specific' pillar data (which is in fact static: IPs, MACs, interface names, etc.), whose character is that you could collect it dynamically, on the host or from external systems.

Obviously, we have a reason why we do that. I don't have good experience with salt.mine, and even though grains work just fine, someone still has to write functions for them (and ideally push them upstream so everyone can reuse them). So instead of dynamically feeding Salt with dynamic/discovered data, we tend to 'hardcode' it in a kind of pillar that holds the 'static' description of the node/minion and is later used to control its configuration.

Certainly there were such needs, and that led to ext_pillars. Today a Salt user can use ext_pillars (possibly more than one) to collect data from external systems, but just one is in many cases not enough. Also, ext_pillars have some cons; for example, you can't use SDB in your ext_pillar model to ad-hoc query external services like Vault, as ext_pillars are expected to return fully rendered YAML. So if you plan to use Vault, for example, you have two choices (update all your formulas, or load the data first with classic Salt pillars).

Two were mentioned:
  • reclass - I use it (gh:salt-formulas/reclass); it's great for structuring your data and for readability, and in the recent version we have made great improvements. Cons: complex source code, no additional Jinja2 rendering, can't refer to grains.
  • pillarstack - I don't use it but I am familiar with it. For big infrastructures (2), it obviously does a better job than classic Salt pillars. It probably has all the features you need (merging strategies, referencing grains), and vs. reclass its code base is like 100x smaller: https://github.com/bbinet/pillarstack/blob/master/stack.py (bbinet did a great job)
Regarding pillarstack, I basically like the concept used by SUSE for Ceph: https://github.com/SUSE/DeepSea/blob/master/srv/pillar/ceph/stack/stack.cfg - it's not that complex, right? The structure Vasiliy T. mentioned has its reasoning for complex deployments with many different HW components. For a single app the SUSE approach is probably better and more readable.

One comment here: reclass allows me to validate my models before I use them (as it doesn't rely on any grain (dynamic) data). If you are interested, check this: https://github.com/salt-formulas/salt-formulas/tree/master/deploy/model. If the model refers to grains, validation will not be possible unless I have some mock-up (pillarstack, classic pillars).

With reclass, we have tried 2-3 model structures and now I am looking for a new one. What we use now is documented here: https://salt-formulas.readthedocs.io/en/latest/intro/metadata.html
We have metadata on multiple layers.

We also use something like `${_param:cluster_node:address}` references at the formula and system level to generalize variables (the _param pillar here is something like 'global' configuration variables, later overridden by specific values for a given service/system).

So as of today, our reclass repository has this structure:

/classes/system (sub-repository linked to https://github.com/Mirantis/reclass-system-salt-model)
/classes/service (with symlinks to metadata/service, for example: `ln -svf $FORMULAS_BASE/$repo/metadata/service $RECLASS_BASE/classes/service/$name`)
/classes/cluster/NAME/
  - infra/
  - cicd/
  - ceph/
  - opencontrail/
  - openstack/
    - compute/
      - init.yml
    - init.yml
    - control.yml
    - proxy.yml
    - database.yml
    - ...
  - monitoring/
  - kubernetes/
  - kubernetes-staging/
  init.yml

(These we call products; it's the final composition, so cluster/NAME/openstack/proxy.yml loads reclass classes from /classes/system/keepalived, nginx, haproxy, openstack horizon.)

With this structure you are not overloaded with hundreds of lines of pillar data; on any level (service, system, cluster) you usually only have to add around ten lines to add a feature you need.
We are free to compose any topology we like.

Well, there are some downsides (or things I miss in the current setup):
  • I like the shared system/service levels, but I would like to work with them much more easily than via sub-repositories, or collect them from formulas. The issue is that they fit me but might not fit you (the community)? And you always get the full stack instead of only the piece you need.
  • We also use a subscription model (far from perfect) that classifies minions with pillar data when they register to the master (https://github.com/salt-formulas/salt-formula-reclass/blob/master/tests/pillar/node_classify.sls)
    • it can feed reclass with information like node interface names and make it more dynamic; anyway, I would like to avoid it in favor of another tool.
    • THE NEED HERE is actually to
      • generate some static information for a group of nodes (say, computes 240-552): what IPs they will have, what the disk layout is, what the physical NIC names are, etc.
      • integrate with other systems to get information from them (a CMDB, or even an existing running system).
      • check what the Cloud Foundry folks use in the spruce tool: https://github.com/geofffranks/spruce/blob/master/doc/operators.md#-static_ips-
  • Controlling which classes to load, which products to mix, and in what setup/version becomes hell (especially because we use the cookiecutter tool to generate the 'cluster' level)
  • The control pillar data `_param` is turning into yet another metadata layer, for example:
    keepalived_vip_virtual_router_id: 180
    keepalived_vip_password: ${_param:cicd_keepalived_vip_password_generated}
    keepalived_vip_interface: ens3
    cluster_vip_address: ${_param:control_vip_address}
    control_vip_address: ${_param:cicd_control_address}
  • Honestly, it's easy on day 0 to deploy such a model; it's not that bright when you have to upgrade a site to a new release. The service level gets updated with a new version of the formula; the system level is updated by just checking out a new tag; the cluster level is harder: if it has manual changes to customize it to the infrastructure, you can't just regenerate it.

So the more I know, the harder it is to answer the original question. For now, I would say we have these needs:
  1. simplify the service, system, cluster level usage/setup. Ideally shared with the community: a tool to grab/generate an instance of, say, a 'kubernetes' setup based on shared best-practice reclass pillar data, with the expectation that one day I will upgrade it.
  2. keep all site/host specific data, and all customizations, in a separate layer
  3. some kind of rendering that would help me generate, e.g., static IPs for nodes (this partially goes hand in hand with the point above)
  4. collect as much information as possible from the infrastructure, rather than keeping it static

While you probably do not share the problems of the first point, as it's mostly due to my current reclass setup, the other points are, I would say, 'hot' topics for everyone.

As I mentioned at the beginning, I plan to resolve these needs with:
  • an updated reclass model
  • another ext_pillar (etcd) plus tools that give me a way to loop over/query external systems to map 'host site' data to a specific group of minions (no idea which one yet)
    • possibly this can be in files, or in an etcd database (etcd would let me update many records with one command, vs. the file approach)
    • note: using etcd basically requires a good structure (to describe a given topology setup). This could even lead to some infrastructure designer and a description specification in a YAML/tree structure.
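
The etcd variant would be wired in via the master config, roughly like this (profile name, host and root path are just placeholders, following the shape of Salt's etcd ext_pillar docs):

```yaml
# /etc/salt/master.d/pillar.conf - illustrative names only
my_etcd_config:
  etcd.host: 127.0.0.1
  etcd.port: 2379

ext_pillar:
  - etcd: my_etcd_config root=/salt/pillar
```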

Additionally, I have this to share:

A friend of mine did a nice job on a new Salt UI (it's not just an attempt; it's early stage).
In a totally different area, but worth mentioning, the folks at DeepMind used reclass in their Kapitan tool (https://kapitan.dev/) and discovered dozens of interesting patterns for how some things can be resolved. BTW, it's for K8s, but thanks to it I even tend to use the same reclass repository for Salt and Kubernetes metadata. For example, the 'targets' hold just the pillars they need, or slowly collect which files to finally render.

Finally, gitfs etc. were mentioned here as a way to collect all pillars and formulas in one repo (for a better lifecycle). I played with this idea as well and came up with a containerized salt-master with many flavors, one with preinstalled formulas from salt-formulas and saltstack-formulas, and one even with the full reclass model. Check it out here: https://hub.docker.com/r/epcim/salt/tags/ and https://github.com/epcim/docker-salt-formulas.

I would like to hear what your approach is, and your feedback. I hope the discussion here will now turn into even more brainstorming and deeper thoughts. We have developed many interesting ways to use SaltStack in past years; it would be nice to come up with a recapitulation, suggestions for new Salt users, and best practices for modeling metadata for Salt configuration.

Petr


On Friday, July 6, 2018 at 9:52:35 AM UTC+2, eric.veir...@gmail.com wrote: