On Tue, May 1, 2012 at 09:31, Kelsey Hightower <kel...@puppetlabs.com> wrote:
> I'm thinking of adding a new save API to Hiera. The idea is that Hiera
> should provide an interface for saving data, which should make it easy for
> front-end tools to interact with backends that support saving data.
Why does it make anything easier than having tools which already know
their back-end semantics manage that data directly? It has
substantial limitations (e.g. no user concept, no credentials, and no
way to determine the appropriate set based on the back-end).
It doesn't document anything of substance about the API: will save be
fast or slow? Can save deadlock? How does it differentiate different
operations on the same key? How do I determine the hierarchy - or do
I need to implicitly know that to use this?
I have no idea how it works across machines. Can I use this from the
dashboard when that is installed on a different machine to the master?
How do changes propagate after `save` is called when I have multiple
masters?
It also makes it impossible to use this in any meaningful UI: there is
absolutely no mechanism to determine what the failure was. Did we
fail because we got the hierarchy wrong, or the backend wrong, or
something else failed? Should I just retry, or give up?
> Do people think this is a good idea? I see this as a foundational bit for
> building UI's on top of Hiera.
The principle is reasonable, but this isn't even close to a proposal
for a save API that works in the real world.
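To make the objection concrete, here is a purely hypothetical sketch of about as much as the proposal pins down. Nothing here is Hiera's actual code; the module name, signature, and SaveError class are all invented for illustration:

```ruby
# Hypothetical sketch only -- no concrete interface was actually proposed.
# Writing even a minimal signature down makes the open questions visible:
# every caller must already know the hierarchy level and the backend,
# and a bare exception cannot say *which* part failed.
module HypotheticalHiera
  class SaveError < StandardError; end

  def self.save(key, value, options = {})
    level   = options[:level]   or raise SaveError, "no hierarchy level given"
    backend = options[:backend] or raise SaveError, "no backend given"
    # Nothing here expresses users, credentials, locking, retry semantics,
    # or how the change propagates to other masters -- the objections above.
    backend.save(level, key, value)
  end
end
```

A UI built on this can only catch SaveError and shrug; it cannot tell a wrong hierarchy from a broken backend.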
----- Original Message -----
> From: "Daniel Pittman" <dan...@puppetlabs.com>
> To: puppe...@googlegroups.com
> Sent: Tuesday, May 1, 2012 6:17:53 PM
> Subject: Re: [Puppet-dev] Hiera should have an save API
>
> On Tue, May 1, 2012 at 09:31, Kelsey Hightower
> <kel...@puppetlabs.com> wrote:
>
> > I'm thinking of adding a new save API to Hiera. The idea is that
> > Hiera
> > should provide an interface for saving data, which should make it
> > easy for
> > front-end tools to interact with backends that support saving data.
>
> Why does it make anything easier than having tools which already know
> their back-end semantics manage that data directly? It has
> substantial limitations (e.g. no user concept, no credentials, and no
> way to determine the appropriate set based on the back-end).
This has been my main concern too, and it's why I never implemented anything like
this in the first place - I think the data being queried is best modelled elsewhere.
The data is best created at the time when you classify a node, in that same UI -
Hiera should query that data but not know too much about the visual aspects of
it.
This would only be usable for small installs that just use the JSON/YAML backends
and have no node classification system (other than maybe hand-editing these files
and using hiera_include or something) - people who are already happy to
hand-hack JSON/YAML anyway.
Having to know the hierarchy on the CLI isn't that great an experience, and neither
is typing complex data like hashes and arrays.
In mcollective I can type complex data on the CLI because the DDL describes the
data - I know when you typed "1" whether it should be a number or a boolean, and I
convert that for you. Hiera has no data description; it's free-form, so even with
a face or whatever it would be of limited use - soon you'd be editing JSON
or YAML again to represent arrays of hashes, and that's wrong.
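To illustrate the difference (a hand-rolled sketch, not mcollective's actual DDL code): with a declared type, a CLI can coerce the string for you; without one, it cannot.

```ruby
# Hand-rolled illustration -- not mcollective's real DDL implementation.
# With a declared type, a CLI can turn the string "1" into the right
# value; Hiera declares nothing, so the string stays a string.
def coerce(input, declared_type)
  case declared_type
  when :integer then Integer(input)
  when :boolean then %w[1 true yes y].include?(input.downcase)
  when :float   then Float(input)
  else input # Hiera's situation: free-form, no conversion possible
  end
end

coerce("1", :integer)  # => 1
coerce("1", :boolean)  # => true
coerce("1", nil)       # => "1"
```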
>
> It doesn't document anything of substance about the API: will save be
> fast or slow? Can save deadlock? How does it differentiate
> different operations on the same key? How do I determine the hierarchy - or do
> I need to implicitly know that to use this?
This is impossible to answer - the save API has no idea about the backends.
We *could*, in theory, extend backends to provide all these answers through some
kind of flag on the backend, but I do not think we should.
Backends are easy to write and understand, so people actually do write them,
unlike some other plugins we have, like providers or types. It's a pretty thin
line: a case of could, but IMO should not.
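That thinness is easy to see on the read side. The sketch below mimics only the shape of a Hiera 1.x backend's lookup method (simplified: in-memory data and a hard-coded hierarchy instead of real datasource files, and only first-match resolution):

```ruby
# Simplified sketch of the *shape* of a Hiera 1.x read backend -- real
# backends resolve datasources from the configured hierarchy and parse
# files on disk; this one is in-memory so it stays self-contained.
class Memory_backend
  HIERARCHY = ["node/web01", "common"] # highest priority first

  DATA = {
    "node/web01" => { "ntp_server" => "ntp1.example.com" },
    "common"     => { "ntp_server" => "pool.ntp.org", "timezone" => "UTC" },
  }.freeze

  # The entire read contract: first match down the hierarchy wins.
  def lookup(key, scope, order_override, resolution_type)
    HIERARCHY.each do |source|
      value = DATA.fetch(source, {})[key]
      return value unless value.nil?
    end
    nil
  end
end
```

A write contract has no equally obvious single shape: should save target the first level, every level, or one the caller names? That is the per-backend question a generic capability flag can't really answer.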
>
> I have no idea how it works across machines. Can I use this from the
> dashboard when that is installed on a different machine to the
> master?
> How do changes propagate after `save` is called when I have multiple
> masters?
>
> It also makes it impossible to use this in any meaningful UI: there
> is
> absolutely no mechanism to determine what the failure was. Did we
> fail because we got the hierarchy wrong, or the backend wrong, or
> something else failed? Should I just retry, or give up?
>
> > Do people think this is a good idea? I see this as a foundational
> > bit for
> > building UI's on top of Hiera.
>
> The principle is reasonable, but this isn't even close to a proposal
> for a save API that works in the real world.
I would love to see a solution for this, but it's deceptively hard to do,
and I think it's ultimately better solved by exposing a REST API into your
dashboard/foreman/etc., where you have RBAC and the other points you
raised.
> The idea is that this simple save function would be behind a REST API like
> the one you mention. Do the hard work of modeling and capturing data then
> make a call to Hiera#save. If a REST API for Hiera is needed we can build
> one.

...but the save function proposed is too abstract from the reality of
data storage to be able to do that. Each backend needs additional
context - or someone to write a custom back-end for their site, every
time - to be effective.
This is a good idea, but at the wrong level of abstraction.
--
You received this message because you are subscribed to the Google Groups "Puppet Developers" group.
To post to this group, send email to puppe...@googlegroups.com.
To unsubscribe from this group, send email to puppet-dev+...@googlegroups.com.
For more options, visit this group at http://groups.google.com/group/puppet-dev?hl=en.
I'll chime in on this now, I suppose. You are right that both read and write
operations are good for abstraction. The problem that comes into play is that
read and write operations usually end up with completely different needs for
their abstractions, so combining them in a single system can be problematic
(this is the basis for the CQRS architectural design). So although you can
combine the write model and the read model in the same application, they often
will have little to do with each other, and so you might as well keep them
separate.
On May 8, 2012, at 4:04 PM, Jeff McCune wrote:
> On Tue, May 8, 2012 at 2:59 PM, Daniel Pittman <dan...@puppetlabs.com> wrote:
>
> >> The idea is that this simple save function would be behind a REST API like
> >> the one you mention. Do the hard work of modeling and capturing data then
> >> make a call to Hiera#save. If a REST API for Hiera is needed we can build
> >> one.
> >
> > ...but the save function proposed is too abstract from the reality of
> > data storage to be able to do that. Each backend needs additional
> > context - or someone to write a custom back-end for their site, every
> > time - to be effective.
>
> What additional context is necessary? Why would custom back-ends be
> necessary if the default one we use supports writability?
>
> > This is a good idea, but at the wrong level of abstraction.
>
> I'm not yet convinced this is the wrong level of abstraction. If I
> understand your original email, you mentioned building tools that understand
> the semantics of specific back-end storage systems in order to write data
> into the system. That seems to defeat the whole point of a robust plugin
> system. If read operations are good enough to warrant abstraction, surely
> write operations are too. Right?
>
> -Jeff
On Tuesday, May 8, 2012 7:17:33 PM UTC-4, Andy Parker wrote:

> I'll chime in on this now, I suppose. You are right that both read and
> write operations are good for abstraction. The problem that comes into
> play is that read and write operations usually end up with completely
> different needs for their abstractions, so combining them in a single
> system can be problematic (this is the basis for the CQRS architectural
> design). So although you can combine the write model and the read model
> in the same application, they often will have little to do with each
> other, and so you might as well keep them separate.

Can you clarify "separate"? Hiera is the thing that has a plugin system and
delegates lookups to the backend (plugin). The plugin returns a response.
The plugin can be simple or very complex in how it goes about fetching the
data. The only thing Hiera provides is a common interface for doing lookups.
Based on your response, I'm still not clear why we cannot do the same thing
for save. I can see your argument for why this is a bad thing in general,
but why is it a bad thing for Hiera?
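Roughly, the symmetry in question looks like this (a toy sketch of backend dispatch, not Hiera's real internals): if the front door can hand lookup to the first backend that answers, it could just as mechanically hand save to the first backend that accepts writes.

```ruby
# Toy sketch of the dispatch symmetry -- not Hiera's actual internals.
class ToyHiera
  def initialize(backends)
    @backends = backends
  end

  # Read: ask each backend in turn; first answer wins.
  def lookup(key)
    @backends.each do |b|
      value = b.lookup(key)
      return value unless value.nil?
    end
    nil
  end

  # The proposed symmetric write: hand save to the first backend that
  # supports it. The dispatch itself is trivial; the unanswered questions
  # (which level? whose credentials? what exactly failed?) all live
  # inside the backend, which is the crux of the disagreement.
  def save(key, value)
    writable = @backends.find { |b| b.respond_to?(:save) }
    raise "no writable backend" unless writable
    writable.save(key, value)
  end
end
```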