Salt on multi-tenant environment


blazey

Mar 18, 2020, 4:02:53 PM
to Salt-users
Hello,

I'm very new to the world of config management and everything else that Salt offers, so please forgive my question if it's dumb! I'm looking for some advice/guidance around running Salt in a multi-tenant environment, in relation to permissions.

We plan on rolling out Salt in a managed, multi-tenant environment. Grains could let us target states per client, but they could be subject to abuse: each client retains admin/root on their own minions, and anyone with root on a minion can change its grains.

Are there any general guidelines or best practices around managing minions in terms of access to the salt-minion itself, and around how best to split minions up based on factors other than grains?

blazey

Mar 31, 2020, 5:11:29 PM
to Salt-users
What I've decided to go with, though it needs some work, is a Pillar environment for each client. I may eventually use environments in general, but I'd like to keep most states held on base, with pillar configuration doing the rest.

My remaining hurdle is getting the pillarenv value populated with the right client information in an automated fashion. I'm thinking of some sort of base state that would set it based on IP or hostname, but I'd like to avoid ongoing upkeep of that process if possible, such as adding new subnets.
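For what it's worth, a minimal sketch of what such a base state might look like, assuming a hypothetical mapping of /24 subnets to client names (the map, file path, and grain lookup are illustrative, not a tested recipe):

```sls
# set-pillarenv.sls - hypothetical sketch: derive a client pillarenv from
# the minion's subnet and persist it in the minion config.
# Assumes grains['ipv4'] lists a routable address first, which is not
# guaranteed on every box.
{% set subnet_map = {'10.1.0': 'clientA', '10.2.0': 'clientB'} %}
{% set prefix = grains['ipv4'][0].rsplit('.', 1)[0] %}

/etc/salt/minion.d/pillarenv.conf:
  file.managed:
    - contents: "pillarenv: {{ subnet_map.get(prefix, 'base') }}"

salt-minion:
  service.running:
    - watch:
      - file: /etc/salt/minion.d/pillarenv.conf
```

One caveat in line with your original concern: a pillarenv stored in the minion config can be edited by anyone with root on the minion, so for true tenant isolation the targeting ultimately has to be enforced master-side.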

Christopher Hawkins

Apr 2, 2020, 2:24:44 PM
to Salt-users
I'm in the early stages too as far as Salt is concerned, but for what it's worth, I had a similar issue. My problem was not multi-tenancy but automatically assigning nodegroups to a large number of minions so they could be managed separately. Seems to me this might be the same problem as yours, just for a different reason. I ended up doing two things.

First - the solution you mentioned. Using a top file in pillar, look at a grain (or grains) and include additional pillar data based on what is found. For me this helps with dev/staging/prod separation. Say the dev salt master has IP 1.1.1.1 and prod has 2.2.2.2 (you could segregate clients here using different DNS names for the same IP, too); I can provide different variables to each (like AWS credentials) while they use the same state files. top.sls:
base:
  '*':
    - set-env

set-env.sls:

{% if grains['master'] == '1.1.1.1' %}
include:
  - dev-stuff
{% elif grains['master'] == '2.2.2.2' %}
include:
  - prod-stuff
{% else %}
env: ''
{% endif %}

Second - I wrote some Python code that connects to an internal database. The db already knows which nodegroup each node should be in, but Salt doesn't, and this changes often. So once a day I read the db and regenerate the tail end of the master config file, so that every group in the database gets created as a Salt nodegroup and each minion is listed in whatever nodegroup it should belong to. This way, as long as my database is right, Salt takes care of itself: I can issue commands to a nodegroup and whatever minions are supposed to get the command, get it. If anyone knows a better way I'd love to hear it, but this was the simplest solution I found that kept everything on the master, where I wanted it.
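The regeneration step above can be sketched roughly like this (a minimal sketch, assuming a hypothetical `nodes(nodegroup, minion_id)` table; the schema, group names, and output path are illustrative, and an in-memory SQLite db stands in for the internal database):

```python
import sqlite3
from collections import defaultdict

def render_nodegroups(rows):
    """Render a 'nodegroups' block for the master config from
    (nodegroup, minion_id) pairs, using list-style L@ compound targets."""
    groups = defaultdict(list)
    for group, minion in rows:
        groups[group].append(minion)
    lines = ["nodegroups:"]
    for group in sorted(groups):
        lines.append("  {}: L@{}".format(group, ",".join(sorted(groups[group]))))
    return "\n".join(lines) + "\n"

# Stand-in for the internal inventory database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE nodes (nodegroup TEXT, minion_id TEXT)")
conn.executemany("INSERT INTO nodes VALUES (?, ?)",
                 [("clientA", "web1"), ("clientA", "web2"), ("clientB", "db1")])

rows = conn.execute("SELECT nodegroup, minion_id FROM nodes").fetchall()
config = render_nodegroups(rows)
print(config)
# nodegroups:
#   clientA: L@web1,web2
#   clientB: L@db1
```

In the daily job, the rendered block would be written to something like /etc/salt/master.d/nodegroups.conf and the master restarted or reloaded so the new groups take effect.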

blazey

Apr 22, 2020, 11:46:23 AM
to Salt-users
Nodegroups are how I ended up going too, though I haven't implemented them yet; I can't see another way of doing it. And obviously the hurdle of keeping them updated with new minions was on my mind. If I need to use an internal DB then so be it, but I'll keep exploring and will post back whenever I finalise things.

Christian McHugh

Apr 23, 2020, 5:13:52 AM
to Salt-users
In the past, I've had a good experience with pillarstack (or its git-stored equivalent, gitstack). It lets you set up an easier-to-manage dynamic pillar environment, break data out into sub-files as needed, and control how the data is merged.

Read over the docs and try it out to see if it might work for your environment, but we had a pillarstack config that worked like:
common/*.yml
{# AWS/accounts/123456_us-east1.yml #}
AWS/accounts/{{ __salt__['grains.get']('ec2:account_id') }}_{{ __salt__['grains.get']('ec2:region') }}.yml
{# Role targeting
#
# Target based on
#   - role
#   - ApplicationName/role
#   - ApplicationName/Cluster/role
#   - Environment_role
#   - Cluster_role
#   - Team_role
#}
roles/{{ __salt__['grains.get']('ec2_tags:Cluster') }}.yml
{%- for role in __grains__.get('ec2_roles', []) %}
roles/{{ role }}.yml
roles/{{ role }}/default.yml
roles/{{ role }}/{{ __grains__['os_family'] }}.yml
roles/{{ __salt__['grains.get']('ec2_tags:ApplicationName') }}/{{ role }}.yml
roles/{{ __salt__['grains.get']('ec2_tags:ApplicationName') }}/{{ __salt__['grains.get']('ec2_tags:Cluster') }}_{{ role }}.yml
roles/{{ __grains__['ec2_tags']['Environment'] }}_{{ role }}.yml
roles/{{ __grains__['ec2_tags']['Cluster'] }}_{{ role }}.yml
roles/{{ __grains__['ec2_tags']['Team'] }}_{{ role }}.yml

minions/{{ minion_id }}.yml

This allowed defaults to be placed into common/security_thing.yml, with each later file getting more and more specific until you reach the individual minion.
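To illustrate the merge order with hypothetical values: when two files in the stack define the same key, the later, more specific file wins, so a role file quietly overrides the common default:

```yaml
{# common/defaults.yml - site-wide default #}
ntp_server: pool.ntp.org

{# roles/web/default.yml - evaluated later in the stack, so web-role
   minions end up with this value instead #}
ntp_server: ntp.internal.example.com
```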

Gitstack does not yet support merging multiple git repos, which would probably be desirable for delegating control of some files that get merged into pillars. But I figured it might be worth pointing out in case it's helpful.

Cheers

blazey

Apr 27, 2020, 7:10:23 AM
to Salt-users
That's new to me. I'll take a look, thanks.