Store Pillars in DB, not YAML


Thomas Güttler

Feb 8, 2016, 4:11:30 PM
to Salt-users
I like creating the dependency net in Salt with YAML.

But I don't like creating the input data (pillars) in YAML: there is no fixed schema, and typos will
happen sooner or later.

I like to specify data structures in relational databases.

I saw that there is a mysql module to define pillars: https://docs.saltstack.com/en/latest/ref/pillar/all/salt.pillar.mysql.html

I am unsure how it works; maybe it does not do what I want at all.

Here is my idea. The following data is from the pillar docs:

users:
  thatch: 1000
  shouse: 1001
  utahdave: 1002
  redbeard: 1003

What does this look like? To me, it looks like a database dump.

There is a table users with columns: username, user_id.

Since the above data is only the data from one host, we need to add a column "host_id"
which is a ForeignKey to a table of all hosts.

The most basic implementation would be to dump the data from the DB
to YAML. That way, not a single line in Salt would need to change.
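As a sketch of that "dump the DB to YAML" idea (sqlite3 and the table names here are stand-ins purely for illustration; a real setup would point at PostgreSQL or MySQL):

```python
import sqlite3

# Hypothetical schema: a hosts table plus a users table with the
# "host_id" foreign key described above.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE hosts (host_id INTEGER PRIMARY KEY, fqdn TEXT UNIQUE);
CREATE TABLE users (
    username TEXT,
    user_id  INTEGER,
    host_id  INTEGER NOT NULL REFERENCES hosts(host_id)
);
""")
conn.execute("INSERT INTO hosts VALUES (1, 'minion1.example.com')")
conn.executemany("INSERT INTO users VALUES (?, ?, 1)",
                 [("thatch", 1000), ("shouse", 1001),
                  ("utahdave", 1002), ("redbeard", 1003)])

def dump_users_as_yaml(conn, fqdn):
    """Render one host's users as a pillar YAML snippet."""
    rows = conn.execute(
        "SELECT u.username, u.user_id FROM users u"
        " JOIN hosts h ON h.host_id = u.host_id"
        " WHERE h.fqdn = ? ORDER BY u.user_id",
        (fqdn,)).fetchall()
    lines = ["users:"]
    lines += ["  %s: %d" % (name, uid) for name, uid in rows]
    return "\n".join(lines)

print(dump_users_as_yaml(conn, "minion1.example.com"))
```

This reproduces exactly the `users:` mapping from the pillar docs above, but the schema (and its typo-catching constraints) lives in the database.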

What do you think?

Tell me what's wrong and why this is a bad idea :-)

For those unfamiliar with relational databases: it is not as hard as it was
years ago. It is fun and professional. I personally like to specify my data structures with the Django ORM.
This gives you an admin interface and a schema-migration tool for free :-)

Regards,
  Thomas Güttler

Seth House

Feb 9, 2016, 12:34:42 AM
to salt users list
Salt's external Pillar system is flexible enough to do what you want.
The docs are at the link below but a short description of the basic
functionality is:

An external Pillar module is a Python function that takes a minion ID
as the first argument. Each minion will request its Pillar from the
master and the master will invoke this function on behalf of the
minion. The function can do anything: query a database, call an API,
read files off the file system, etc. The only contract is that the
function returns a dictionary. That dictionary is then merged into the
main Pillar dictionary. That's it!

https://docs.saltstack.com/en/latest/topics/development/external_pillars.html

You can load your custom module in the Salt master by defining the
following setting in your master config:

extension_modules: /srv/modules

Make a directory:

/srv/modules/pillar

Add the configuration for your custom module as well, then restart the Salt master.
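A minimal sketch of such a module might look like this (the file name `db_pillar.py` and the `users_for()` lookup are hypothetical; the only contract, as described above, is that the function returns a dictionary):

```python
# /srv/modules/pillar/db_pillar.py -- minimal external pillar sketch.

def users_for(minion_id):
    """Placeholder for the real lookup: a DB query, API call, etc."""
    fake_db = {"minion1": {"thatch": 1000, "shouse": 1001}}
    return fake_db.get(minion_id, {})

def ext_pillar(minion_id, pillar, *args, **kwargs):
    """Invoked by the master for each minion; the returned dict is
    merged into the minion's main Pillar dictionary."""
    return {"users": users_for(minion_id)}

print(ext_pillar("minion1", {}))
```

To activate it you would also reference the module under the `ext_pillar` key in the master config; the linked docs show the exact syntax.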

Thomas Güttler

Feb 9, 2016, 2:50:05 AM
to Salt-users, se...@eseth.com
External pillars. This sounds good. Thank you for your feedback.

Regards,
  Thomas Güttler

Florian Ermisch

Feb 10, 2016, 3:41:26 AM
to salt-...@googlegroups.com, Thomas Güttler
Hi Thomas,

If I just had the time, I'd build something like this based on PostgreSQL.
Using PostgreSQL's JSON features you could have a (materialized) view where you just select by the minion ID and get the minion's whole pillar as JSON. This would make the ext_pillar's implementation trivial.
The database's schema surely won't be simple, but once you get that right, the view for the JSON shouldn't be too hard to figure out.

Of course you could also take a topfile like approach, make multiple queries and place the JSON from the "users" view in `{"users": …}` and so on.
You could have a "topfile" table! :D

Hm, might be useful to pass some grains to the ext_pillar, too.

Regards, Florian

Thomas Güttler

Feb 10, 2016, 3:33:43 PM
to Salt-users, guet...@thomas-guettler.de, florian...@alumni.tu-berlin.de
Hi Florian,

I guess you are too fast here. My topic is about the great fixed schema definition which you get when you use a relational database.

Storing JSON in Postgres is, in my eyes, the same fuzzy data soup as YAML in a file system.

Or maybe I have not understood what you mean by "PostgreSQL's JSON features".

What do you mean by "database's schema surely won't be simple"?

AFAIK there is no common schema definition for pillar data. Our custom pillar data schema is
very simple so far. I guess we would need 5 tables. But those are our custom tables. AFAIK everybody
is on their own in this context, at least up to now.

What do you mean by a "topfile" table? I read the top of this: https://docs.saltstack.com/en/latest/ref/states/top.html
I think the top file is not part of the pillar; it is part of the salt directory.

But maybe you are right. If it groups the infrastructure, it could be a good candidate for a database.

My background:

 - Code needs to live in version control (I use git)
 - Data belongs in the database (I use Postgres and the Django ORM)

So far nothing new.

 - Config is data, and belongs in the database. I came to this conclusion over the last few years. This thinking is unfortunately not widespread.

With Salt I am unsure: what is code and what is config?

Pillars look like config, like something that belongs in a database.

Regards,
  Thomas Güttler

viq

Feb 10, 2016, 10:47:02 PM
to salt-...@googlegroups.com, guet...@thomas-guettler.de, florian...@alumni.tu-berlin.de

But I'm also thinking "config needs to be versioned", which means git, not a DB.

--
viq

Florian Ermisch

Feb 11, 2016, 2:38:16 AM
to salt-...@googlegroups.com
Wikipedia stores its data in MySQL ;P

Yes, they're using it in a rather strange
way for an RDBMS, but they do get versioning.

Florian

Florian Ermisch

Feb 11, 2016, 3:52:15 AM
to Thomas Güttler, Salt-users
Hi Thomas,


On February 10, 2016 at 9:33:43 PM CET, "Thomas Güttler" <guet...@thomas-guettler.de> wrote:
> Hi Florian,
>
> I guess you are too fast here.

Oops, sorry. I've had this topic in the back of my head for two or
three years now, so I have a lot of thoughts on it ^^"

> My topic is about the great fixed schema definition
> which you get when you use a relational database.

See a few comments below.

> Storing JSON in postgres is in my eyes the same
> fuzzy-data-soup like YAML in a file system.
>
> Or I have not understood what you mean with
> "PostgreSQL's JSON features".

PostgreSQL can store JSON, evaluate attributes,
and index them. I think it can also validate the structure.

*But the important part here is:*
you can make a query and ask to get JSON back.

You can have a nice relational schema and let PostgreSQL
handle the data manipulation to fit whatever structure Salt
expects. Hence the view I mentioned.

> What do you mean with "database's schema surely
> won't be simple"?
>
> AFAIK there is no common schema definition for pillar data.
> Our custom pillar data schema is up to now very simple.
> I guess we would need 5 tables. But that are our
> custom tables. AFAIK everybody is on his own in this context.
> At least up to now.

When you're using Salt to manage users, Salt itself, SSH,
monitoring tools, dev tools on workstations, and databases,
backup clients, webservers, and all the little infrastructure
bits on servers, and you utilize half a dozen formulas,
your pillar starts to get complicated.
Defining the schema you then have to map onto a YAML/JSON
tree structure won't be too simple.

One could see the pillar-data format that formulas expect
as a common schema. But of course that's the resulting JSON.

Based on that, some SQL schemas may show up,
but the default will stay YAML+Jinja.

> What do you mean with "topfile" table? I read the top of this:
> https://docs.saltstack.com/en/latest/ref/states/top.html
> I think topfile is not part of pillar. It is part of the
> salt-directory.

A way of grouping the minions based on different attributes
while still allowing you to target single hosts.
Everyone matching 'web*' gets the webserver stuff, 'xsql*'
gets the database stuff, here you get our defaults for RedHat systems, …
A kind of meta-table, like what `top.sls` provides for `file_roots`
and `pillar_roots`.
>
> But maybe you are right. If it groups the infrastructure it
> could be a good candidate for a database.
>
> My background:
>
> - Code needs to live in version control (I use git)
> - Data belongs into the database (I use postgres and
> django-ORM)
>
You're halfway there. Use Django-CRM to model your
DB-schemas and then add a view returning JSON.

> Up to now nothing new.
>
> - Config is data, and belongs into the database. I came
> across this during the last years. This thinking is unfortunately
> not wide spread.
>
> With salt I am unsure. What is code and what is config? Up
> to now I am unsure.

States are code; pillar consists of specific config values,
thus data. Its structure might be code-ish, but it just
defines what goes where.

> Pillars look like config, like something that belongs into a
> database.

And it's way easier to have input validation on those than
on a bunch of plain text files! A missing '.' (or ':') in an IP?
PostgreSQL will be like "dude, that's no `inet`!" :D
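The same kind of check is easy to have in application code, too; here is a tiny sketch using Python's stdlib `ipaddress` module to mimic what a column of type `inet` would reject (the function name is made up for illustration):

```python
import ipaddress

def validate_ip(value):
    """Reject malformed IPs, roughly the way PostgreSQL's inet type would."""
    try:
        return ipaddress.ip_address(value)
    except ValueError:
        # Re-raise with a message in the spirit of the thread :)
        raise ValueError("dude, that's no inet: %r" % value)

print(validate_ip("192.168.1.1"))   # accepted
# validate_ip("192.1681.1") would raise ValueError (missing '.')
```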

Has anyone had to restructure half their YAML files to fit a new
pillar structure? Like when you dump a custom state to use
a more feature-rich formula?
Not fun, even with the more cumbersome structures
wrapped in macros.

And yes, you can add commit hooks checking that you've written
proper YAML. But checking that IPs fit a certain subnet,
or validating usernames defined somewhere else?
I'd rather write the schema for a relational DB I can later
build a frontend for. Even if it's just curses providing some
checkboxes.

And for versioning the actual data: Just look at all the wikis
with a RDBMS backend.

> Regards,
> Thomas Güttler

Regards, Florian
*turning down rantiness, going for breakfast*


Thomas Güttler

Feb 12, 2016, 2:38:36 AM
to Salt-users, guet...@thomas-guettler.de, florian...@alumni.tu-berlin.de


On Thursday, February 11, 2016 at 4:47:02 AM UTC+1, vic viq wrote:

But I'm also thinking "config needs to be versioned" which means git, not DB.



The goal is versioning. There are several strategies to reach
that goal; using git is one of them.


Thomas Güttler

Feb 12, 2016, 2:50:06 AM
to Salt-users, guet...@thomas-guettler.de, florian...@alumni.tu-berlin.de


On Thursday, February 11, 2016 at 9:52:15 AM UTC+1, Florian Ermisch wrote:
Hi Thomas,


On February 10, 2016 at 9:33:43 PM CET, "Thomas Güttler" <guet...@thomas-guettler.de> wrote:
> Hi Florian,
>
> I guess you are too fast here.

Oops, sorry. I've had this topic in the back of my head for two or
three years now, so I have a lot of thoughts on it ^^"

> My topic is about the great fixed schema definition
> which you get when you use a relational database.

See a few comments below.

> Storing JSON in postgres is in my eyes the same
> fuzzy-data-soup like YAML in a file system.
>
> Or I have not understood what you mean with
> "PostgreSQL's JSON features".

PostgreSQL can store JSON, evaluate attributes,
and index them. I think it can also validate the structure.

*But the important part here is:*
You can make a query and ask to get JSON back.

You can have a nice relational schema and let PostgreSQL
handle the data manipulation to fit whatever structure Salt
expects. Hence the view I mentioned.

OK, I see. If I understood you correctly, you prefer
a DB schema to JSON.

In the previous post I thought you wanted to store JSON in the DB.


 
> What do you mean with "database's schema surely
> won't be simple"?
>
> AFAIK there is no common schema definition for pillar data.
> Our custom pillar data schema is up to now very simple.
> I guess we would need 5 tables. But that are our
> custom tables. AFAIK everybody is on his own in this context.
> At least up to now.

When you're using Salt to manage users, Salt itself, SSH,
monitoring tools, dev tools on workstations, and databases,
backup clients, webservers, and all the little infrastructure
bits on servers, and you utilize half a dozen formulas,
your pillar starts to get complicated.
Defining the schema you then have to map onto a YAML/JSON
tree structure won't be too simple.


We are just starting to use Salt, so our DB model would be
simple.

I just ask myself: why does everyone run their own schema here?

Yes, there are several different environments, and every environment
has its own special cases. But the basics are the same.

 
One could see the pillar-data format that formulas expect
as a common schema. But of course that's the resulting JSON.

Based on that, some SQL schemas may show up,
but the default will stay YAML+Jinja.

A schema for data is an abstract thing. The format (JSON, YAML, DB)
does not matter in this context. For me it would be easy
to define a schema in SQL.



 
> What do you mean with "topfile" table? I read the top of this:
> https://docs.saltstack.com/en/latest/ref/states/top.html
> I think topfile is not part of pillar. It is part of the
> salt-directory.

A way of grouping the minions based on different attributes
while still allowing you to target single hosts.
Everyone matching 'web*' gets the webserver stuff, 'xsql*'
gets the database stuff, here you get our defaults for RedHat systems, …
A kind of meta-table, like what `top.sls` provides for `file_roots`
and `pillar_roots`.
>
> But maybe you are right. If it groups the infrastructure it
> could be a good candidate for a database.
>
> My background:
>
>  - Code needs to live in version control (I use git)
>  - Data belongs into the database (I use postgres and
>    django-ORM)
>
You're halfway there. Use Django-CRM to model your
DB-schemas and then add a view returning JSON.


There is a typo: I meant ORM: Object-Relational Mapping.
 
> Up to now nothing new.
>
> - Config is data, and belongs into the database. I came
>   across this during the last years. This thinking is unfortunately
>   not wide spread.
>
> With salt I am unsure. What is code and what is config? Up
> to now I am unsure.

States are code; pillar consists of specific config values,
thus data. Its structure might be code-ish, but it just
defines what goes where.

> Pillars look like config, like something that belongs into a
> database.

And it's way easier to have input validation on those than
on a bunch of plain text files! A missing '.' (or ':') in an IP?
PostgreSQL will be like "dude, that's no `inet`!" :D


Yes, this input-validation is what I like.

 
Has anyone had to restructure half their YAML files to fit a new
pillar structure? Like when you dump a custom state to use
a more feature-rich formula?
Not fun, even with the more cumbersome structures
wrapped in macros.


Database schema migration with the Django ORM is fun,
at least if you have done it without such a great tool before.
If you are completely new to this, you might not realize
that it feels like flying. The good thing is that it does not matter
much whether you upgrade one database or several hundred.

 
And yes, you can add commit hooks checking that you've written
proper YAML. But checking that IPs fit a certain subnet,
or validating usernames defined somewhere else?
I'd rather write the schema for a relational DB I can later
build a frontend for. Even if it's just curses providing some
checkboxes.

And for versioning the actual data: Just look at all the wikis
with a RDBMS backend.


Are you sure? AFAIK only git can do versioning :-) ... this was a joke!

Regards,
  Thomas

Thomas Güttler

Feb 12, 2016, 2:51:08 AM
to Salt-users, guet...@thomas-guettler.de, florian...@alumni.tu-berlin.de
I like this thread, unfortunately, I will be offline for about ten days.

Regards,
  Thomas

viq

Feb 13, 2016, 5:32:27 PM
to salt-...@googlegroups.com
I admit I don't know much about databases. How would you go about ensuring proper versioning in a database? I know that with git/mercurial you have no way around it, especially if Salt is told to talk to the VCS directly instead of using files on disk, but I don't have enough background to think of a workflow involving a database that would give similar assurance.
--
viq

Florian Ermisch

Feb 20, 2016, 5:13:05 AM
to salt-...@googlegroups.com
On February 13, 2016 at 11:32:21 PM CET, viq <vic...@gmail.com> wrote:
My SQL-fu isn't that strong, but you could have a
non-optional foreign key [1] referencing a changeset
table to group changes on different tables into
"commits".

Handling a tree-like structure like git branches would
be difficult, but a dev -> test -> prod staging should
be easy to add to those changesets. You could then
also enforce that no references point to "lower"
stages, like adding a user from dev to a system in prod.
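A rough sketch of that changeset idea (sqlite3 here purely as a stand-in for PostgreSQL; the table and column names are made up):

```python
import sqlite3

# Every data row must point at a changeset; the non-optional foreign key
# groups edits across tables into "commits".
conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")
conn.executescript("""
CREATE TABLE changesets (
    id      INTEGER PRIMARY KEY,
    message TEXT NOT NULL,
    stage   TEXT NOT NULL CHECK (stage IN ('dev', 'test', 'prod'))
);
CREATE TABLE users (
    username     TEXT NOT NULL,
    user_id      INTEGER NOT NULL,
    changeset_id INTEGER NOT NULL REFERENCES changesets(id)  -- non-optional
);
""")
conn.execute("INSERT INTO changesets VALUES (1, 'add thatch', 'dev')")
conn.execute("INSERT INTO users VALUES ('thatch', 1000, 1)")

# A row without a valid changeset is rejected by the constraint:
try:
    conn.execute("INSERT INTO users VALUES ('shouse', 1001, 999)")
except sqlite3.IntegrityError as exc:
    print("rejected:", exc)
```

Enforcing "no references to lower stages" would need an extra trigger or check on top of this, as suggested above.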

But for those things you should find an experienced
DBA who writes properly normalized SQL schemas
and knows which constraints [1] to apply* ;)

Regards, Florian

[1] http://www.postgresql.org/docs/9.4/static/ddl-constraints.html

*) Yes, I have high standards for RDBMS schemas.
I have seen ugly things (improperly normalized
tables containing 20+ columns), stupid things
(filtering in the application code instead of just
asking the DBMS to do it, and thus breaking the
service every two weeks *cough* keystone *cough*)
and horrible things (using MySQL as a key-value store
with values containing what looked like base64-encoded
binary data between chunks of plain text).

Thomas Güttler

Mar 1, 2016, 12:10:10 PM
to Salt-users, florian...@alumni.tu-berlin.de
How to version data in an RDBMS is not part of this thread.

Yes, using git seems to be a faster way to your goal at first, but
in the long run a well-defined DB schema is a very solid foundation.

Florian Ermisch

Mar 2, 2016, 5:41:35 PM
to salt-...@googlegroups.com, Thomas Güttler

Welcome back, Thomas!

Just related stuff:

I'll have to set up a PostgreSQL job cache soonish, so I'll have PostgreSQL hooked up to one of my masters anyway.
This will also give me a load of JSON to play with, even if I'm chopping up JSON to fill its values into proper tables instead
of taking data from tables to generate JSON ;)

For those who want their database's content versioned check
out https://github.com/jasonk/postgresql-versioning.

For versioning the schema I'll look at Pyrseas [0] or maybe
SQLAlchemy Migrate [1]. The first one seems a good fit as
it stores the schema descriptions as YAML or JSON.

Maybe I can post something to build tables with data scraped
from the job cache. A trigger on the job-cache table would
do nicely to update more specific tables.
And based on (experience with) that, tables for a PostgreSQL ext_pillar with the JSON conversion done in the DBMS.
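That trigger idea can be sketched like this (again sqlite3 as a stand-in for PostgreSQL; the `job_cache` and `failed_states` tables are invented, not Salt's actual job-cache schema):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE job_cache (jid TEXT, minion TEXT, fun TEXT, success INTEGER);
CREATE TABLE failed_states (minion TEXT, jid TEXT);

-- Whenever a failed job lands in the cache, record it in a specific table.
CREATE TRIGGER track_failures AFTER INSERT ON job_cache
WHEN NEW.success = 0
BEGIN
    INSERT INTO failed_states (minion, jid) VALUES (NEW.minion, NEW.jid);
END;
""")
conn.execute("INSERT INTO job_cache VALUES ('20160306', 'web1', 'state.highstate', 0)")
conn.execute("INSERT INTO job_cache VALUES ('20160307', 'web2', 'state.highstate', 1)")
print(conn.execute("SELECT minion, jid FROM failed_states").fetchall())
# prints [('web1', '20160306')]
```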

Just don't expect too much. If it actually happens, it will
probably stall as soon as it does the most basic things
I need.

Regards, Florian

[0] https://github.com/perseas/Pyrseas
[1] https://github.com/openstack/sqlalchemy-migrate


Thomas Güttler

Mar 4, 2016, 1:30:57 AM
to Salt-users, guet...@thomas-guettler.de, florian...@alumni.tu-berlin.de
On 02.03.2016 at 23:41, Florian Ermisch wrote:
> -----BEGIN PGP SIGNED MESSAGE-----
> Hash: SHA256
>
> Welcome back, Thomas!
>
> Just related stuff:
>
> I'll have to set up a PostgreSQL job-cache soonish so I'll have PostgreSQL hooked up to one of my masters anyway.
> This will also give a load of JSON to play with, even when I'm chopping up JSON to fill its values into proper tables instead
> of taking data from tables to generate JSON ;)

I am interested. You take the "fuzzy" JSON and store it in a solid DB schema. Nice.

BTW, "job cache": does this wording match the use case (in general)?
I have never worked with it, but in my eyes it is a "result store".

What do you think?




> For those who want their database's content versioned check
> out https://github.com/jasonk/postgresql-versioning.
>
> For versioning the schema I'll look at Pyrseas [0] or maybe
> SQLAlchemy Migrate [1]. The first one seems a good fit as
> it stores the schema descriptions as YAML or JSON.

Ah, SQLAlchemy has a migration tool like Django's, too. Nice.



> Maybe I can post something to build tables with data scraped
> from the job-cache. A trigger on the job-cache table would
> to nicely to update more specific tables.

What kind of updates do you want?


> And based on (experiences from) that tables for a PostgreSQL ext_pillar with the JSON conversion done in the DBMS.
>
> Just don't expect too much. When it actually happens it will
> probably get stuck as soon as it does the most basic things
> I need.

What is your high level use case?


Regards,
Thomas Güttler

Florian Ermisch

Mar 6, 2016, 5:29:51 PM
to salt-...@googlegroups.com, Thomas Güttler
On March 4, 2016 at 7:30:57 AM CET, "Thomas Güttler" <guet...@thomas-guettler.de> wrote:
> On 02.03.2016 at 23:41, Florian Ermisch wrote:
> >
> > Welcome back, Thomas!
> >
> > Just related stuff:
> >
> > I'll have to set up a PostgreSQL job-cache soonish so I'll have
> > PostgreSQL hooked up to one of my masters anyway.
> > This will also give a load of JSON to play with, even when I'm
> > chopping up JSON to fill its values into proper tables instead
> > of taking data from tables to generate JSON ;)
>
> I am interested. You take the "fuzzy" json and store it in a
> solid db schema. Nice.
>
> BTW "Job Cache". Does this wording match to the use case (in general)
>
> I have never worked with it. But in my eyes it is a "Result Store".
>
> What do you think?
>
Yes, it is a result store, but right now I need to collect some
data from/about our minions. So I'll use PostgreSQL as a job
cache to gather returned data, which I can have PostgreSQL
transform for me (taking certain attributes to create rows
in my tables).
>
>
> > For those who want their database's content versioned check
> > out https://github.com/jasonk/postgresql-versioning.
> >
> > For versioning the schema I'll look at Pyrseas [0] or maybe
> > SQLAlchemy Migrate [1]. The first one seems a good fit as
> > it stores the schema descriptions as YAML or JSON.
>
> Ah, SQLAlchemy has a migration tool like django, too. Nice.
>

Yeah, I would rather stay closer to the DB and not use a full
ORM like Django's. Then I would have to run Python code to
load the objects and dump their data as JSON, instead of having
PostgreSQL turn the data into JSON to pass to Salt directly.

>
>
> > Maybe I can post something to build tables with data scraped
> > from the job-cache. A trigger on the job-cache table would
> > to nicely to update more specific tables.
>
> What kind of updates do you want?

Like having associated service tags and MAC addresses
updated when you move a salted installation to new hardware.
Or taking a highstate's returns and tracking failed states.

This might need a procedural language [0] because of its conditional
behavior. PL/Python [1] seems to be available only as an "untrusted"
language, so executed code would always run as administrator,
like a setuid(0) binary on Unix-like systems.

[0]: http://www.postgresql.org/docs/9.4/static/xplang.html
[1]: http://www.postgresql.org/docs/9.4/static/plpython.html
>
> > And based on (experiences from) that tables for a
> > PostgreSQL ext_pillar with the JSON conversion done in
> > the DBMS.
> >
> > Just don't expect too much. When it actually happens it will
> > probably get stuck as soon as it does the most basic things
> > I need.
>
> What is your high level use case?

As mentioned above: collecting data about the minions.
I try to do new stuff with Salt and also try to make old stuff
manageable via Salt. But I still have a lot of stuff I haven't had
time to salt yet, so visibility into many of the existing
setups is pretty bad.
And, of course, there's the "I need this up & running soon,
you can do it properly next time"…

Thomas Güttler

Mar 7, 2016, 5:00:28 AM
to Salt-users, guet...@thomas-guettler.de, florian...@alumni.tu-berlin.de


On Sunday, March 6, 2016 at 11:29:51 PM UTC+1, Florian Ermisch wrote:
On March 4, 2016 at 7:30:57 AM CET, "Thomas Güttler" <guet...@thomas-guettler.de> wrote:
> On 02.03.2016 at 23:41, Florian Ermisch wrote:
> >
> > Welcome back, Thomas!
> >
> > Just related stuff:
> >
> > I'll have to set up a PostgreSQL job-cache soonish so I'll have
> > PostgreSQL hooked up to one of my masters anyway.
> > This will also give a load of JSON to play with, even when I'm
> > chopping up JSON to fill its values into proper tables instead
> > of taking data from tables to generate JSON ;)
>
> I am interested. You take the "fuzzy" json and store it in a
> solid db schema. Nice.
>
> BTW "Job Cache". Does this wording match to the use case (in general)
>
> I have never worked with it. But in my eyes it is a "Result Store".
>
> What do you think?
>
Yes, it is a result store, but right now I need to collect some
data from/about our minions. So I'll use PostgreSQL as a job
cache to gather returned data, which I can have PostgreSQL
transform for me (taking certain attributes to create rows
in my tables).

My issue proposing to rename the "job cache" was closed :-(

 
>
>
> > For those who want their database's content versioned check
> > out https://github.com/jasonk/postgresql-versioning.
> >
> > For versioning the schema I'll look at Pyrseas [0] or maybe
> > SQLAlchemy Migrate [1]. The first one seems a good fit as
> > it stores the schema descriptions as YAML or JSON.
>
> Ah, SQLAlchemy has a migration tool like django, too. Nice.
>

Yeah, I would rather stay closer to the DB and not use a full
ORM like Django's. Then I would have to run Python code to
load the objects and dump their data as JSON, instead of having
PostgreSQL turn the data into JSON to pass to Salt directly.


ok

 
>
>
> > Maybe I can post something to build tables with data scraped
> > from the job-cache. A trigger on the job-cache table would
> > to nicely to update more specific tables.
>
> What kind of updates do you want?

Like having associated service tags and MAC addresses
updated when you move a salted installation to new hardware.
Or taking a highstate's returns and track failed states.


I am interested in tracking failed states, too.

 
This might need a procedural language [0] because of its conditional
behavior. PL/Python [1] seems to be available only as an "untrusted"
language, so executed code would always run as administrator,
like a setuid(0) binary on Unix-like systems.

[0]: http://www.postgresql.org/docs/9.4/static/xplang.html
[1]: http://www.postgresql.org/docs/9.4/static/plpython.html

I have worked with plpythonu before. It works nicely. Yes, it can be compared to setuid,
but I see no problem since you don't execute the data :-) In most cases you
just transform the data somehow.

 