Is there a simple solution to pre-compiling selected pillar so that the rest of pillar and state can reference it?


sandy

Mar 13, 2017, 1:59:44 PM
to Salt-users
Hi,

Searched the documentation and this list and tried a few things with no real success.  Hoping for some help / course corrections :-)

We have a subsection of our pillar data which is a function of some state obtained via Jinja code.  Call this phase 1 pillar - there is a small amount of Jinja that probes the system and returns high-level configuration-type data.

In addition we have a tad more pillar, call this phase 2 pillar data, that really wants to be a function of the phase 1 pillar.  Neither piece really wants to be in grains.

The saltstack documentation seems to imply that if we move the phase 1 pillar to ext_pillar (somehow - TBD), then that will be compiled first and as such the phase 2 pillar can reference that via jinja.  TaDa.

Is this true?  If yes, is there a simple way of configuring ext_pillar so that the pillar files located there (phase 1 pillar) are consumed as normal (sic) pillar which is then accessible via normal jinja when the phase 2 pillar is being compiled?

If there is a complicated way of doing this - say, by writing custom Python modules and distributing them - is there a pointer to an applicable example?

Thanks much in advance,
-sandy

Daniel Wallace

Mar 13, 2017, 2:36:02 PM
to Salt-users
You should be able to set `ext_pillar_first` to `True`, which lets the ext_pillar data be used to target minions in the top file.


I do not know whether this makes those values available inside Jinja in the pillar files, but it should allow values from ext_pillar to be used for targeting in the pillar top.sls, so it is worth testing whether they are available when rendering pillar files.
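As a sketch of the master-side wiring (the paths are hypothetical; `file_tree` is just one of the built-in ext_pillar modules):

# /etc/salt/master
ext_pillar_first: True
ext_pillar:
  - file_tree:
      root_dir: /srv/ext_pillar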

Daniel


Seth House

Mar 13, 2017, 3:01:58 PM
to salt users list
Another possible option that doesn't involve ext_pillar is to lean on
Jinja imports for the "phase 1" data. E.g.:

# /srv/pillar/phase1.jinja
{% load_yaml as phase_1_data %}
phase1:
  foo: Foo
  bar: Bar
  baz: {{ salt.grains.get('os_family', 'Unknown') }}
{% endload %}


# /srv/pillar/phase2.sls
{% from "phase1.jinja" import phase_1_data with context %}
phase1: {{ phase_1_data.phase1 | json() }}  # Make phase1 data available as-is.

phase2:  # Use phase1 data in phase2.
  is_os_family_known: {{ phase_1_data.phase1.baz != 'Unknown' }}

If you use the `with context` keyword you can make use of all the
normal Salt variables and function calls in phase1, and you can import
phase1 from multiple files in phase2.
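Once compiled, the phase1/phase2 keys are ordinary pillar data, so states can consume them like any other pillar value. A sketch (the state file and its contents are made up for illustration):

# /srv/salt/example.sls
{% if salt['pillar.get']('phase1:baz', 'Unknown') != 'Unknown' %}
note_os_family:
  file.managed:
    - name: /etc/motd
    - contents: "os_family is {{ salt['pillar.get']('phase1:baz') }}"
{% endif %}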

sandy

Mar 13, 2017, 3:04:40 PM
to Salt-users
Thanks.  Though we are on 2016.3.3 (pre the 'bug fix' for ext_pillar_first), I have tried setting this to both True and False with no luck.  Also, there are no overlapping pillar keys between phase 1 and phase 2, so in theory it should not matter which pillar (external pillar or pillar_roots) wins on a conflict.

sandy

Mar 14, 2017, 4:18:51 PM
to Salt-users, se...@eseth.com
Thanks for the reply!

Interesting - it never occurred to me that something like this could be done.  (Here is another shameless plug for a short description of the high-level data model and/or transaction timing model of SaltStack.  For example, write a version of this page - https://docs.saltstack.com/en/latest/topics/development/architecture.html - in terms of when and where pillar/state code is Jinja-rendered and evaluated as a function of master/minion location, on a timeline, for each of `salt`, `salt-call`, and `salt-run`, including the relevant data/pillar contexts. :-) )

In any case, I tried converting the code from our original 'pillar' design model to the 'jinja' design model as described and ran into performance issues, although the initial prototypes did seem to work.  For example, the current complete pillar implementation takes about 3 seconds to run on ten nodes - `time salt '*' pillar.items`.  A partial implementation of the jinja design (about 2/3 of the pillar data converted) takes 18+ seconds.  On a single node the jinja design takes about 4.5 seconds.

Unfortunately, as mentioned, I don't quite understand the pillar/state evaluation design, but it is not too surprising that converting to the jinja model requires more time to evaluate pillar.  Unfortunately the jinja model does not seem to scale well as a function of the number of nodes.  Is that expected with this design?  Or am I doing something fundamentally wrong?

For what it is worth, I also tried the ext_pillar methodology but ran into two issues.  First, our pillar contains jinja, and it seems the only available way to evaluate jinja and yaml as external pillar is by using PillarStack.  Is that true?  But when trying to get that to work on 2016.3.3 there were strange low-level errors from various salt.cmd.run calls (in the original pillar code) when using PillarStack.  I became a contortionist getting it to work and ended up with gross code.  And PillarStack `__salt__` commands apparently only run on the salt-master, which breaks the design of some of the original pillar data that is trying to be 'pre-pillar'ed (apologies for the verb-ification) because it needs to be minion-specific.

So, both paths seem dubious to continue to invest in.  Yes/no?  Are these the only two ways to be able to reference what normally is pillar data at pillar eval time?

Thanks again.
-sandy

Seth House

Mar 14, 2017, 7:06:58 PM
to salt users list
On Tue, Mar 14, 2017 at 2:18 PM, sandy <windov...@gmail.com> wrote:
> when and where pillar/state code is jinja-afied and evaluated as
> a function of the master/minion location, on a time-line, more or less for
> each of the following - salt. salt-call, and salt-run. Including relevant
> data/pillar contexts. :-)

This would indeed be a great addition to the docs. CliffsNotes version:

1. A Minion requests its Pillar from the Master.

This is triggered by calling state.apply, state.highstate,
saltutil.refresh_pillar, & pillar.items. It does not matter where or
what calls any of those functions on the Minion (`salt`, `salt-call`,
Orchestrate, etc), the result is the same.

2. The Master generates Pillar for that requesting Minion.

It uses the following data: the Minion ID, the Minion's Grains
(already cached on the Master), the Minion's opts (the parsed Minion
config file).

3. Salt's normal Renderer pipeline generates the Pillar.

In short: the default pipeline is `#!jinja|yaml`, which means a
`foo.sls` is first run through the Jinja templating engine, then the
result of that is run through the YAML parser producing the final
Pillar data structure. There's a video on the SaltStack YouTube
channel that goes into Renderers in the context of the State system
but they work the same on the Master for Pillar (sorry, the screen is
hard to see):

https://youtu.be/s967lYS_nd4?t=18m48s
https://docs.saltstack.com/en/latest/ref/renderers/

4. The generated Pillar is returned to the requesting Minion.

The data structure from the previous step is shipped over the wire and
kept in-memory on the Minion for quick lookup when, say, referencing
Pillar values from within Salt States. Salt States are generated on
the Minion using the same Renderer system.
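As a tiny illustration of step 3 (a sketch, not from the thread): under the default `#!jinja|yaml` pipeline, a pillar file like this is first rendered by Jinja on the Master - with the requesting Minion's cached grains available - and the output is then parsed as YAML:

#!jinja|yaml
# 'grains' here are the requesting Minion's grains, cached on the Master.
minion_os_family: {{ grains['os_family'] }}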

> In any case, I tried converting the code from our original 'pillar' design
> model to the 'jinja' design model as described and ran into performance
> issues

It sounds like you're doing interesting things with Pillar! There are
a few tips and tricks for when you're dealing with Pillar at large
scale or when processing many files to generate Pillar:

1. Jinja is quite fast but YAML is very slow.

YAML is nice to read but it is amazingly, astoundingly, mind-numbingly
slow to parse. If you have many SLS files using the default YAML
renderer, or if you're making frequent use of {% load_yaml %} from my
example, that alone could explain the slowness.

If you need the best possible performance don't use YAML anywhere.
Change the shebang at the top of your .sls files to use other
Renderers instead like `#!jinja|json` or even just `#!py`. If you need
to load data in from external sources stick to JSON -- or better yet
MsgPack if you can.

A highly performant version of my example from before might be:

{# /srv/pillar/phase1.jinja #}
{% set phase_1_data = {
    'phase1': {
        'foo': 'Foo',
        'bar': 'Bar',
        'baz': salt.grains.get('os_family', 'Unknown'),
    },
} %}


#!jinja|json
{# /srv/pillar/phase2.sls #}
{% from "phase1.jinja" import phase_1_data with context %}

{# Stick with Jinja data structures as long as possible. #}
{% set ret = {
    'phase1': phase_1_data['phase1'],
    'phase2': {
        'is_os_family_known': phase_1_data['phase1']['baz'] != 'Unknown',
    },
} %}

{# Make the final result available as JSON. #}
{{ ret | json() }}
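In the same spirit, if the phase-1 data lives in an external file, Salt's `import_json` Jinja helper keeps the YAML parser out of the loop entirely. A sketch (the file names are hypothetical):

#!jinja|json
{# /srv/pillar/phase1_from_file.sls #}
{% import_json 'data/phase1.json' as phase_1_data %}
{{ {'phase1': phase_1_data} | json() }}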

If you set the logging level to 'debug' on the Master it will output
log entries for how long each Renderer takes:

[PROFILE ] Time (in seconds) to render '/srv/pillar/phase2.sls' using 'jinja' renderer: 0.00707197189331

2. Pillar is generated on the Master by request for each Minion separately.

If you have many Minions you may want to stagger when each one makes
that request to avoid overloading the Master all at once.
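For scheduled refreshes, one way to stagger those requests is the minion scheduler's `splay` option. A sketch (the interval values are arbitrary):

# /etc/salt/minion
schedule:
  highstate:
    function: state.apply
    minutes: 60
    splay: 600  # add a random 0-600 second offset per minion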

3. If you must perform heavy operations, consider caching the result.

If your Pillar generation is doing slow things like calling out to an
API or calling slow-running functions, for example, shelling out using
`cmd.run`, you may want to cache the result of that operation locally.
Another option is to move that work down to the individual Minions to
spread out the work by putting those calls in State files or using
sdb.

https://docs.saltstack.com/en/latest/topics/sdb/
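As a sketch of the sdb route (the sdb profile and key names here are made up): a pillar or state file can pull a pre-computed value through sdb instead of computing it inline:

{# anywhere Jinja is rendered, e.g. a pillar .sls #}
api_token: {{ salt['sdb.get']('sdb://myprofile/api_token') }}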

> Unfortunately the jinja
> model does not seem to scale well as a function of the number of nodes. Is
> that expected with this design? Or am I doing something fundamentally
> wrong?

Hopefully the above thoughts will steer you in the right direction.
Jinja by itself can easily scale to many, many Minions but if Jinja is
doing something slow like parsing inline YAML or shelling out then all
bets are off.

sandy

Mar 15, 2017, 9:18:52 PM
to Salt-users, se...@eseth.com
Oh that is great - that alone helps a lot!  The existing doc really should be tweaked - if I get the time, I'll try a pull request :-)

A few extra-credit questions on the CliffsNotes - answers are extra credit - we seem to have a working solution at this point:

Regarding #2 - does that imply that the PillarStack doc, which says `__salt__` calls are executed on the master, describes nothing different from the salt.cmd.run calls executed in pillar files during normal pillar compilation?  All such system probing happens on the master.  Something else must be done to probe the minions at highstate pillar time (if it is possible at all), right?  Note - we do not want to do this - just making sure we understand.

Is there no step #0 where one can evaluate some pillar once on the master and not have it re-eval'ed per minion?  I guess I was hoping that ext_pillar was just that - eval'ed once on the master and then merged from scratch into each minion's pillar.

Regarding sdb, is it possible, underneath salt functions (for example when calling highstate from the command line), to store a jinja-produced dictionary in sdb once and only once, and then have each minion get that pillar dictionary from there at pillar time and have it merged, so that minion pillar eval time can be a function of the sdb-stored pillar?  I can imagine some work-arounds for this but am wondering if it is natively possible regardless.

FWIW, with your response above we figured out a reasonable UX way of extending the existing solution, which is less work and also seems faster than the ext_pillar and pillar-as-jinja solutions (need more timing tests).  For the record, the existing solution involves creating a chain of jinja 'include:' directives using 'defaults:' to pass the relevant jinja variables downstream, which make up the pillar data of interest.  So pillar fileA includes pillar fileB which includes pillar fileC, and each passes down the jinja values of interest (used at pillar eval time), which in fact also create the actual pillar data (used later at state eval time).  There are only a few jinja variables that need to be passed down in this manner, but need them we do in a big way.  The created dependency chain is not great, but it seems fast and the explicit defaults: values seem to keep the design somewhat clean.
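For reference, the include-with-defaults chain described above looks roughly like this (the file names and the `role` value are made up for illustration):

# /srv/pillar/fileA.sls
include:
  - fileB:
      defaults:
        role: webserver

# /srv/pillar/fileB.sls
{# 'role' arrives as a Jinja variable via defaults: #}
app_role: {{ role }}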

Would not have been able to work out the issues without this conversation.  Thanks!
-sandy

Seth House

Mar 16, 2017, 12:04:45 PM
to Salt-users, sandy
On Wed, Mar 15, 2017 at 7:18 PM, sandy <windov...@gmail.com> wrote:
> __salt__ calls are executed on the master is no different then salt.cmd.run
> that is executed in pillar files during normal pillar?

Correct.

> Is there no step #0 where one can evaluate some pillar once on the master
> and not have it re-eval'ed per minion?

Correct. It's a single compilation step.

> Regarding sdb, is it possible underneath salt functions (for example when
> calling highstate from the command line) to store a jinja produced
> dictionary in sbd once and only once and then have each minion get that
> pillar dictionary from there at pillar time and have it merged, so that
> minion pillar eval-time can be a function of the sdb stored pillar?

It would take some manual wiring and choosing the right sdb backend
but this sounds completely plausible.

> For the record the existing solution involves creating a chain of jinja
> 'include:' directives using 'defaults:' to pass the relevant jinja variables
> downstream which make up the pillar data of interest.

Sounds like a reasonable approach. I'm glad to hear you found a solution. :+1: