Thanks for coming up with such elaborate ideas; your input adds a lot of
meat to many of this group's discussions.
I can agree with a lot of what you wrote, barring the following remarks:
On 01/26/2012 06:00 PM, jcbollinger wrote:
> Modules provide definitions of resources that they own. For the most
> part, those definitions should be virtual to avoid unnecessary inter-
> module coupling, but some resources are reasonable to define
> concretely.
Jeff has made a strong point against using virtual resources in modules
at all, causing me to shift my own views as well.
If I understand him correctly, one of the chief problems is the high
probability of accidental collection/realisation of such resources by
the end user's manifest.
On 01/26/2012 06:48 PM, jcbollinger wrote:
> I can imagine many -- perhaps most -- resource definitions being
> replaced or supplemented by constraint declarations.
The model is intriguing, but gives me another usability headache.
Wouldn't this put an end to self-contained modules?
I wrote in another mail (is this the same thread? Sorry, I use this
only through Thunderbird and get confused sometimes) how I see a need
for explicit module dependencies and a system that can automatically
download required modules from the Forge. I can see this supplementing
your idea of constraints nicely, but without it, downloading modules
could quickly become a nightmare for users.
Cheers,
Felix
On Fri, Jan 27, 2012 at 15:20, Felix Frank
<felix...@alumni.tu-berlin.de> wrote:
> how I see need for
> explicit module dependencies and a system that can automatically
> download required modules from the forge. I can see this supplementing
> your idea of constraints nicely, but without it, downloading modules
> could quickly become a nightmare for users.
There's something else we need to think about here. Some modules have
a soft/conditional requirement for other modules. What I mean is that
if you don't use certain parts of a module, you don't need the module
that that part of the code refers to. The only decent way I can come
up with to solve that is to use what, for instance, C does with
#ifdef. That way the module could just ignore modules that it doesn't
_really_ require.
I for instance have modules that allow you to use different backends
for monitoring or backups. If requirements were done automatically
based on the whole module, it would need a myriad of other modules,
only one of which is ever used.
cheers,
--
Walter Heck
--
follow @walterheck on twitter to see what I'm up to!
--
Check out my new startup: Server Monitoring as a Service @ http://tribily.com
Follow @tribily on Twitter and/or 'Like' our Facebook page at
http://www.facebook.com/tribily
Yes.
Also: currently in Puppet one cannot say anything about a resource without a
declaration that you will "manage" it. (Unless perhaps that state happens to be
encapsulated by a Fact, which it typically isn't and in many cases couldn't
feasibly be - as with file attributes.)
Therefore many "dependencies" are created only because of a need to check some
state of a resource - which one may not want or need to manage.
> Consider, then, a new metaresource type, Constraint. The purpose of the
> Constraint resource type would be to allow multiple unrelated classes to
> collaborate on defining the properties of a single resource, and it would do
> so by allowing classes to limit the values that chosen resource properties
> may have.
>
> At compilation time, Puppet would collect and combine all the constraints on
> any particular resource, and use the results to set unspecified property
> values and validate specified ones. Usage might look something like this:
>
> constraint { 'webserver-httpd_package-present':
>   resource        => Package['httpd'],
>   property        => 'ensure',
>   forbidden_value => [ 'absent', 'purged' ],
>   # also available: allowed_value
>   # maybe: default_value
> }
>
> Not only would this nicely meet the needs of different modules to express
> their requirements on shared resources, it would also make it much easier to
> recognize resource conflicts. If Puppet automatically generated empty
> resource definitions to constrain when it discovered constraints on
> otherwise-undefined resources, then that would also overcome the problem of
> deciding where to define particular resources.
>
> I can imagine many -- perhaps most -- resource definitions being replaced or
> supplemented by constraint declarations.
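To make the quoted proposal concrete, the compile-time check it describes might be modelled roughly like this. This is only a sketch in Python, not actual Puppet code; the names `Constraint` and `check_constraints` are invented for illustration:

```python
# Each Constraint narrows the acceptable values of one property of one
# resource; the compiler checks every constraint against the assembled
# catalog and rejects the catalog on any violation.

class Constraint:
    def __init__(self, resource, prop, allowed=None, forbidden=None):
        self.resource = resource   # e.g. "Package[httpd]"
        self.prop = prop           # e.g. "ensure"
        self.allowed = set(allowed) if allowed is not None else None
        self.forbidden = set(forbidden or [])

def check_constraints(catalog, constraints):
    """Return a list of violations; an empty list means the catalog is valid."""
    errors = []
    for c in constraints:
        res = catalog.get(c.resource)
        if res is None:
            # The proposal suggests auto-generating an empty resource here
            errors.append("%s is constrained but not declared" % c.resource)
            continue
        value = res.get(c.prop)
        if c.allowed is not None and value not in c.allowed:
            errors.append("%s: %s => %r not in allowed set" % (c.resource, c.prop, value))
        if value in c.forbidden:
            errors.append("%s: %s => %r is forbidden" % (c.resource, c.prop, value))
    return errors

catalog = {"Package[httpd]": {"ensure": "present"}}
cs = [Constraint("Package[httpd]", "ensure", forbidden=["absent", "purged"])]
print(check_constraints(catalog, cs))  # [] - the catalog satisfies the constraint
```

Several modules could each contribute their own `Constraint` on `Package[httpd]` without any of them owning the declaration, which is the point of the proposal.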
Here's a slightly different angle.
(Note, I'll use capitalisation to distinguish between the "resources" that exist
outside Puppet, and the "Resource" instances inside, which model them.)
I think there *is* a case to be made that Puppet needs a new "kind" of Resource
declaration. One which promises never to change the state of anything.
Immutability is, I think, the key to allowing multiple declarations of this sort
to co-exist. Resources currently have to uniquely "own" a resource so that they
can safely change it. As I said, one doesn't always need or want that ownership:
we know the kind of baggage it carries.
Ideally we'd be able to separate out the aspects of a Resource which merely
assert what *should* be the case (ensure => present etc.) from those bits which
would then change the state of the resource if it deviates.
For the sake of discussion I'll call that former kind of declaration an "Assertion".
To briefly address the points from earlier email: Nan asked how would one
address merging, with respect to the following aspects of a Resource?
a) unifying before/requires attributes
b) unifying if/then/else constructs
c) auditing changes back to their source
d) unifying hash/array attributes
Well, yes: when using Resources, these aspects make them very hard to unify in general.
However - although this needs more thought - if we could invent some way to
declare mere Assertions, they might do instead of Resources for many cases, and
it might be possible to unify them more simply, because:
a) Problems related to ordering mostly disappear,
...since nothing is being changed. (Of course, external things which might
change things could be a problem.)
b) If/then/else clauses can still make sense within definitions of custom
Assertions,
...if they can compose only other Assertions, and the conditions don't change.
c) Auditing is not a problem: since nothing is changed, there is nothing to
audit.
d) Hash/array attribute values would need to be resolved on a case-by-case
basis,
...depending on the semantics. When hashes and arrays semantically represent
sets, this should be a straightforward "and" or "or" (intersection or union)
operation. When the order matters, as in a search path, or there is no obvious
way to unify two attributes, then this is an unresolvable contradiction and
should generate an error. There may be cases in between.
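The case analysis in (d) can be sketched briefly. This is a Python model, not Puppet; `unify_attr` is an invented name, and the set-vs-ordered distinction is assumed to be known per attribute:

```python
# Unify two attribute values from separate Assertions on the same resource:
# set-like values merge with a union; order-sensitive values must match
# exactly, otherwise it's an unresolvable contradiction.

def unify_attr(a, b, set_semantics):
    if set_semantics:
        # e.g. two assertions about a user's supplementary groups
        return sorted(set(a) | set(b))
    if a == b:
        return a
    # e.g. two different search paths: no obvious way to merge
    raise ValueError("unresolvable contradiction: %r vs %r" % (a, b))

print(unify_attr(["wheel"], ["adm"], set_semantics=True))  # ['adm', 'wheel']
```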
Possibly this doesn't fit all the use-cases which run into cross-module
dependency problems, but might significantly reduce the need to create the
dependencies in the first place.
Anyway, I need to get back to work, I'll try to say more in a later email.
Cheers,
N
On 01/27/2012 02:52 PM, Walter Heck wrote:
> There's something else we need to think about here. Some modules have
> a soft/conditional requirement for other modules. What I mean is that
> if you don't use certain parts of a module, you don't need the module
> that that part of the code refers to. the only decent way I can come
> up with to solve that is to use what for instance in C is done with
> #IFDEF. That way the module could just ignore modules that it doesn't
> _really_ require.
thanks for pointing this out, but it has been covered (I think) in
another thread already:
On 01/19/2012 09:17 PM, Nick Fagerlund wrote:
> So, you can conditionally declare the rule if the defined type is
> available to the autoloader, and otherwise you don't attempt to manage
> the firewall and expect that the user has read the documentation and
> will make a hole for the service themselves.
>
> if defined(firewall::iptables::rule) {
>   firewall::iptables::rule { 'mysql_server':
>     ...etc. etc.
>   }
> }
>
> See? It's just a way to peek around at what the user has installed.
Thanks again to Nick for this quote, it keeps proving useful ;-)
Cheers,
Felix
On 01/27/2012 04:22 PM, jcbollinger wrote:
> From a usability perspective, I think this is a far better proposal
> than anything else on the table:
I've thought of another plus. Even though the design proposal adds to
the DSL (and complexity is generally to be avoided), it does so in a
manner that will not make it necessary for novices (or any end user) to
deal with the particular featureset, but instead limits its target
audience to developers of public modules.
>> Wouldn't this put an end to self-contained modules?
>
> No. What makes you think it would? I think it allows modules to be
> *more* self-contained than they currently are.
I was thinking along the lines of "currently modules will just install
the packages they need", for instance. But you're right of course -
currently modules can break each other because of this, so a way for the
compiler to clearly pinpoint reasons and locations of mismatches would
indeed be superior.
> I do not mean to say that a tool that automatically downloaded and
> installed modules from the Forge would be useless -- far from it. I
> just don't think that it would adequately address the inter-module
> dependency issue, and therefore I would recommend that such a tool not
> even try to do so. It solves an altogether different problem.
Again, I'm inclined to concur. The issue at hand is to stop modules from
breaking each other horribly. Once that's off the table, a module
management system is a whole new issue in and of itself.
Cheers,
Felix
I've been mulling this over and wanted to get opinions on an ugly, but completely functional, approach.
We've been talking about people supplying metadata to describe inter-class dependencies (inter-module dependencies really, but hear me out). With the
advent of the parameterized class, you could simply write all ifdef-style dependencies directly into a parameter, as follows:
# Get this for the 'contains' function
include 'stdlib'

class foo (
  # No requirements by default
  $reqs = ['']
) {
  if contains($reqs, 'bar') {
    include 'bar'
  }

  ...some stuff...

  if contains($reqs, 'bar') {
    bar::baz { 'something': ... }
  }
}
It's not elegant by any means, but it is functional, and since (in theory) Puppet only includes a class once, all of the various includes would be
completely skipped.
It would be nice if, in this example, $reqs were actually a class metaparameter and Puppet would automatically try to include the classes passed
into that variable.
Benefits:
* Works with the current language structure
* Does what you want it to in terms of not needing defined
Drawbacks:
* Requires the user to have an explicit working knowledge of all modules and namespaces
* Adds a lot of random logic to the code (unless it becomes a metaparam of some sort)
Thanks,
Trevor
On 01/27/2012 08:52 AM, Walter Heck wrote:
> Hello,
>
> On Fri, Jan 27, 2012 at 15:20, Felix Frank
> <felix...@alumni.tu-berlin.de> wrote:
>> how I see need for
>> explicit module dependencies and a system that can automatically
>> download required modules from the forge. I can see this supplementing
>> your idea of constraints nicely, but without it, downloading modules
>> could quickly become a nightmare for users.
> There's something else we need to think about here. Some modules have
> a soft/conditional requirement for other modules. What I mean is that
> if you don't use certain parts of a module, you don't need the module
> that that part of the code refers to. the only decent way I can come
> up with to solve that is to use what for instance in C is done with
> #IFDEF. That way the module could just ignore modules that it doesn't
> _really_ require.
>
> I for instance have modules that allow you to use different backends
> for monitoring or backups. If requirements were done automatically
> based on the whole module, it would need a myriad of other modules,
> only one of which is ever used.
>
> cheers,
--
Trevor Vaughan
Vice President, Onyx Point, Inc.
email: tvau...@onyxpoint.com
phone: 410-541-ONYX (6699)
pgp: 0x6C701E94
-- This account not approved for unencrypted sensitive information --
--
You received this message because you are subscribed to the Google Groups "Puppet Users" group.
To post to this group, send email to puppet...@googlegroups.com.
To unsubscribe from this group, send email to puppet-users...@googlegroups.com.
For more options, visit this group at http://groups.google.com/group/puppet-users?hl=en.
Yep, that's it.
-Jeff
On 01/28/2012 04:35 PM, Trevor Vaughan wrote:
> Drawbacks:
>
> * Requires the user to have an explicit working knowledge of all
>   modules and namespaces
> * Adds a lot of random logic to the code (unless it becomes a
>   metaparam of some sort)
You skipped the most important drawback: Commitment to parameterized
classes. The fact that there can be only one place that includes those
classes, and that this singular place must have the whole picture of
what requirements are met, is conceivably a show stopper from my point
of view.
This will work for people that have a functional ENC, I guess, but
should that be a requirement for using Forge modules?
Furthermore, how can modules hope to ever interoperate like this? If all
module classes get parameterized, it will be outright impossible for one
module to ever include another module's classes.
Say module A includes class B::C. As soon as a user installs module A in
addition to B, they have to clean their manifests of inclusions of B::C.
On 01/29/2012 07:39 AM, Brian Gupta wrote:
> It frightens me a bit that I think the "correct" solution, will be to
> replicate what the distros are doing in Puppetforge. Basically turning
> puppetforge into a massive cross distro metadata repo, with very strict
> contribution standards and rules. This would involve strong rules for
> curated modules that would require manpower to vet (and to
> contribute the modules).
I honestly don't see the problem. Imagine CPAN was limited to downloads
of tarballs from the website (or even source control checkouts). I
doubt it would be as significant today as it has become.
The same goes for Ruby Gems and all such systems.
As this seems to be a recurring theme: am I wrong to compare these to
the Forge?
Sincerely,
Felix
This seems like a massive undertaking from where we are now, but it
would in the end make all of our lives a ton easier (one trusted
source for good high quality modules) and reduce the 'problem' of
inter-module dependencies to a minimum. Of course it still exists for
in-house applications that are being puppetised, but it would already
mean the world if they would be able to depend on what the public
trusted modules define.
I personally like the way the drupal module projects work: anyone can
start a project, but they are all hosted on the drupal.org site within
drupal.org version control, and they have teams of code reviewers
maintaining integrity of the module base that lives on drupal.org.
cheers,
Walter
It is annoying to have everything in a single place that defines the state of your nodes but, as you point out, this seems to be the model if you're
using an ENC and that seems to be the recommended practice across the mailing list for any sort of scale.
But, you don't need a functional ENC to make this work, you simply need to have everything defined at the top level whether it be via node, higher
level class, or ENC.
The main issue here seems to be modules that are trying to be "too helpful" from my reading of the mailing list.
It seems that many would like this to be an anti-pattern:
class foo {
  if ! defined(Class['bar']) {
    include 'bar'
    bar::baz { ... }
  }
}

include 'foo'
Instead, you should be less helpful, and do the following:
include 'foo'
include 'bar'
bar::baz { ... }
So, instead of doing something like, say, setting up IPTables in your module (thus creating a cross-module dependency), you should do all of this in
one monolithic place at the node level or a higher level aggregation class level.
While this keeps your modules clean, it seems like a lot more effort to maintain since the module for nginx should really know what ports it's using
and know how to set up its own firewall rules.
So, the tradeoff is an ENC vs. a large collection of cluttered classes at the top level to make sure you don't have cross-module dependencies.
I'm not sure if either is better (or if either is any good at all) but they're both functional.
The ability to tag modules as requiring other modules of a particular version (ala CPAN, Gems, everything else....) would solve this issue as Puppet
would be able to check to make sure that you have the correct version of the modules installed prior to compiling the catalog.
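The version check described could be modelled along these lines. A Python sketch only, with invented names; a real implementation would need proper semantic-version parsing and range operators rather than a bare minimum version:

```python
# Before compiling a catalog, compare each module's declared requirements
# against the versions actually installed, and refuse to proceed on mismatch.

def parse(v):
    """Turn a dotted version string into a comparable tuple, e.g. '2.3.1' -> (2, 3, 1)."""
    return tuple(int(x) for x in v.split("."))

def check_requirements(installed, requirements):
    """installed/requirements: dicts of module name -> version string."""
    errors = []
    for name, minimum in requirements.items():
        if name not in installed:
            errors.append("missing module %s" % name)
        elif parse(installed[name]) < parse(minimum):
            errors.append("%s %s is older than required %s"
                          % (name, installed[name], minimum))
    return errors

installed = {"stdlib": "2.3.1", "firewall": "0.0.4"}
print(check_requirements(installed, {"stdlib": "2.0.0", "firewall": "0.1.0"}))
# ['firewall 0.0.4 is older than required 0.1.0']
```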
Trevor
--
Trevor Vaughan
Vice President, Onyx Point, Inc.
email: tvau...@onyxpoint.com
phone: 410-541-ONYX (6699)
pgp: 0x6C701E94
-- This account not approved for unencrypted sensitive information --
Nigel,
It frightens me a bit that I think the "correct" solution, will be to replicate what the distros are doing in Puppetforge. Basically turning puppetforge into a massive cross distro metadata repo, with very strict contribution standards and rules. This would involve strong rules for curated modules that would require manpower to vet (and to contribute the modules).
Ok, here's a couple of examples. Apologies for the length.
1. Packages
Let's say I'm writing a module called foo.
It uses a package called 'libfoo'.
Therefore I want to make sure this package is installed before the module's
configuration is applied. I use the pattern where I define separate classes to
handle install and config, and chain them together:
class foo::install {
  # Make sure libfoo is installed
  package { 'libfoo': ensure => present }
}

class foo::config {
  # adjusts foo.conf
  # ....
}

class foo {
  include foo::install
  include foo::config
  Class['foo::install'] -> Class['foo::config']
}
Bingo! Works fine. Until I want to add another 3rd party module which needs
libfoo. Now I have to move the package declaration out to a shared class and
update both modules to fit: maybe the other module wants a specific version,
other install_options, or whatever.
Sounds simple in a short example like this, except, that:
- I've been forced to customise what should be an 'off the shelf' module,
- I have to figure out what the shared class/module should say
- and fix an arbitrary name for it
- I've therefore hard-wired strong coupling in both to my shared class
- I've added potential for refactoring breakage
- and more of the same sort of problems when scaling up
Genuinely reusable modules seem nearly impossible to write as it stands. If I
want to publish my module on Puppet Forge, then the shared::libfoo module must
be published too. Except it might not agree with other published modules' ideas
about what a shared::libfoo should declare, or even be called, and so it is not
typically re-usable without refactoring.
Or, I don't publish it and leave a dangling dependency on a class called
shared::libfoo. I am still hard-wiring a name, but not what it does. I have to
tell users the name of the module they must define, and give them a list of
requirements to put in it on our behalf.
Or, I just don't define anything about libfoo except in the documentation.
Which seems the most practical thing to do, assuming that things break
intelligibly if libfoo is absent, but this really amounts to giving up and
moving on.
Or, maybe there currently are better ways than all the above - but if so I'm
unclear what.
Now imagine we could simply assert a requirement on a package, without actually
managing it. For the sake of this example, I'll invent a syntax for
"Assertions" similar to that used for Virtual Resources which use an '@' sigil
prefix. I'll arbitrarily use a '+' instead:
+package { 'libfoo': ensure => present }
This just means "ensure libfoo is installed". It changes nothing about the
target system. It does not mean "go and install the 'libfoo' package". It does
not mean "I own the 'libfoo' package resource, and I'll fight anyone who tries
to say otherwise".
Therefore, this type of assertion can be repeated multiple times in different
modules. Possibly in slightly different ways - with extra attributes, etc.
Puppet should just check they are all consistent, and fail if they aren't, or if
the net requirements are not met. I don't know enough about Puppet internals to
say for sure, but as described in my previous email: because the Assertion
changes nothing, I hope this would be relatively easy to implement.
Now I can write my module using an Assertion instead:
class foo::install {
# Make sure libfoo is installed
+package { 'libfoo': ensure => present }
# ...
}
...and I no longer have to find the common ground between modules which use
libfoo, and/or modify the modules to use the shared declaration.
Also, we have lost an explicit dependency on a shared module arbitrarily called
'shared::libfoo' which merely declared:
package { 'libfoo': ensure => present }
So I no longer need to publish this shared module and either dictate to, or
negotiate with, potential users of my module about the intersection of our
requirements. Nor do I need to omit this requirement entirely (which might be
the only practical alternative).
Yet I am still checking the prerequisites are there.
Of course, I may still have to declare something which actually performs the
appropriate package install. Or maybe not? Perhaps my provisioning system does
that for me, and I can skip that step? Either way, I know that my system is
still checking the prerequisites are there.
If my prerequisites are missing, I would hope Puppet would give helpful errors
showing what needed what, and I could add a declaration to install the right
packages in a top-level "glue" class. But this means we can avoid hard-wiring
arbitrary module names into the component modules.
In summary, this would be simpler and more effective than any existing Puppet
pattern I know about.
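The merging behaviour I'm relying on above can be sketched briefly. Python again, with invented names; the point is just that repeated `+package` Assertions on the same resource unify when consistent and fail when they contradict:

```python
# Merge repeated Assertions on resources of one type: identical attributes
# unify, a new attribute is added, and a conflicting attribute is a
# compile-time error.

def merge_assertions(assertions):
    """assertions: list of (title, attrs) pairs for one resource type."""
    merged = {}
    for title, attrs in assertions:
        target = merged.setdefault(title, {})
        for key, value in attrs.items():
            if key in target and target[key] != value:
                raise ValueError(
                    "conflicting assertions on %s: %s => %r vs %r"
                    % (title, key, target[key], value))
            target[key] = value
    return merged

# Two modules assert libfoo independently; one adds an extra attribute.
merged = merge_assertions([
    ("libfoo", {"ensure": "present"}),
    ("libfoo", {"ensure": "present", "provider": "apt"}),
])
print(merged)  # {'libfoo': {'ensure': 'present', 'provider': 'apt'}}
```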
2. Creating user accounts
Another example, which was the topic of my earlier post "constraint checking".
Say I want to create a custom resource which sets up user accounts for me in a
manner of my choice.
define user_account (
  # $name is supplied automatically as the resource title
  $home,
  $shell,
  $passwd,
  $uid,
  $gid,
  $groups,
) {
  # I want to validate $home, $shell, $groups exist and are usable...
  # This is a classic case where one is tempted to use this anti-pattern
  # to define something usable if it doesn't exist:
  if ! defined(Group[$gid]) {
    group { $gid: ensure => present }
  }

  # If I can't do that, perhaps I can just depend and hope it's picked up
  # elsewhere?
  require File[$shell]
  # ... except this can't say anything about $shell, like
  # "it must be executable".

  # .... do other stuff here ....

  # Now define the user resource which creates the user
  # (but in my tests, does not seem to check the requirements
  # exist.)
  user { $name:
    ensure => present,
    home   => $home,
    shell  => $shell,
    passwd => $passwd,
    uid    => $uid,
    gid    => $gid,
    groups => $groups,
  }
}
Imagine we could use Assertions as described above. Validating the parameters
is now straightforward:
define user_account (
  # $name is supplied automatically as the resource title
  $home,
  $shell,
  $passwd,
  $uid,
  $gid,
  $groups,
) {
  # Ensure $home, $shell, $gid, $groups exist

  # We want to own this one
  group { $gid: ensure => present }

  # These are shared
  +group { $groups: ensure => present }

  # This is shared
  +file { $shell:
    ensure => present,
    mode   => '0744', # ...if only I could merely say "a+x" here
  }

  # ... do other stuff ...

  # Now define the resource:
  user { $name:
    ensure => present,
    home   => $home,
    shell  => $shell,
    passwd => $passwd,
    uid    => $uid,
    gid    => $gid,
    groups => $groups,
  }
}
And as above:
- only User[$name] or Group[$gid] will ever conflict (which is what we want)
- there are no shared module dependencies forced into existence
- we are not risking silly mistakes like shell => '/bin/bqsh' or
groups => ['weel']
There's more I could say, but I hope this gives the basic idea.
Cheers,
N
Thanks for your elaborate design sketch.
Sorry for limiting my quote severely.
On 01/30/2012 06:28 PM, Nick wrote:
> +package { 'libfoo': ensure => present }
Is this different from John's "constraint" proposal?
One thing didn't become clear to me: does the manifest still need to declare
an actual package { 'libfoo' } somewhere, or is this implied by at least
one assertion regarding any of its parameters?
If the latter is the case, then this is no different from just allowing
Puppet to consume multiple declarations of the same resource, along with
all the oft-discussed difficulties.
If instead there still is that one central resource declaration
somewhere, I'm pretty sure this is the same as constraints.
Which is probably a really neat idea.
Cheers,
Felix
It did sound similar, yes - but unless I misunderstand it, not identical. For
example, I don't understand how Constraints would avoid the problems with
unifying resources that Nan mentioned.
John's example appeared to be wrapping an existing Resource with something which
puts constraints on it, i.e. a layer on top of "Resources". It implies a regular
Resource to wrap somewhere.
Whereas what I had in mind was something which in principle at least, was more
basic than a Resource. With an "Assertion" there is nothing being managed, or
mutated, by definition. It defines conditions on a resource (lower-case 'r')
which can be checked, and merged, but doesn't imply that any Resource
(upper-case 'R') need to be declared. It's quite possible that one wouldn't
bother, if you don't need to manage or mutate anything.
So Resources (upper case 'R') could be thought of as extensions to Assertions
which also supply rules to mutate a system's state, should the conditions of the
Assertion not be met, so that the conditions *are* met.
> To me this didn't become clear: Does the manifest still need to declare
> an actual package { "libfoo" } somewhere, or is this implied by at least
> one assertion regarding any of its parameters?
To be explicit: if an Assertion "+package { libfoo: }" is declared, it just
means "libfoo must be installed for this manifest to work". I don't think it
needs to mandate a declaration of a full-blown "package { libfoo: }" somewhere.
In fact, I can probably imagine circumstances when something might be invoked
which indirectly takes care of the "libfoo" package (or file, or whatever) - and
then being forced to manage the "libfoo" package in Puppet just because you want
to assert its presence could be a liability.
N
On 01/30/2012 10:28 PM, Nick wrote:
> It did sound similar, yes - but unless I misunderstand it, not identical. For
> example, I don't understand how Constraints would avoid the problems with
> unifying resources that Nan mentioned.
As far as I understand, there is no need to merge anything. The catalog
will or will not contain a certain resource. If it exists, the resource
will have a certain set of parameters and metaparameters. Each
constraint can be individually compared to this state. If a constrained
resource does not exist, or any of its (meta)parameters deviate from
what the constraint defines, the catalog is no longer valid.
The beauty of this design is that the language is very expressive and
all validation can be done by the master.
Err, right, John? :-)
> John's example appeared to be wrapping an existing Resource with something which
> puts constraints on it, i.e. a layer on top of "Resources". It implies a regular
> Resource to wrap somewhere.
>
> Whereas what I had in mind was something which in principle at least, was more
> basic than a Resource. With an "Assertion" there is nothing being managed, or
> mutated, by definition. It defines conditions on a resource (lower-case 'r')
> which can be checked, and merged, but doesn't imply that any Resource
> (upper-case 'R') need to be declared. It's quite possible that one wouldn't
> bother, if you don't need to manage or mutate anything.
Ah, so you'd have the agent verify that all assertions (which would need
to appear as first-class citizens in the catalog) hold true, and
otherwise fail the catalog?
That strikes me as very elegant indeed.
How will situations be handled where assertions won't hold true until
parts of the catalog have been applied?
> So Resources (upper case 'R') could be thought of as extensions to Assertions
> which also supply rules to mutate a system's state, should the conditions of the
> Assertion not be met, so that the conditions *are* met.
Let's not alienate the community by declassing the proven and beloved
Resources ;-) but I've got to say, this idea does hold merit.
So does the constraint idea. Something tells me that both might be of
benefit, but I'm afraid of years of user confusion to come when everyone
is tasked with understanding the difference between the two and with
deciding when to use which.
If we need to take a pick, there are two things I'd say for
constraints:
1. They're more closely tailored to the problem at hand.
2. They're more in keeping with what Puppet is today.
Assertions would probably widen the borders of what's possible with
puppet (and how easy it is), and they would allow/require us to part
with some paradigms. I'm torn whether we want this sooner than seeing
the multiple declaration problem sorted out in a less intrusive way.
Cheers,
Felix
I'm not familiar enough with Puppet's internals to answer that very confidently.
My guess is that one might be able to express requires/before relationships
between Assertions and Resources in order to enforce things like this.
The main implication of that would be to restrict the freedom to assume that
assertions can be applied in any order, because the agent's application of the
catalog would need to be split into a sequence of "assertion" and "mutation"
steps. An Assertion must then not be moved outside the part of the sequence
assigned to it.
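To illustrate what I mean by that split, here's a toy model (Python, all names invented): the catalog becomes an ordered sequence of assertion and mutation steps, and each Assertion is checked only at the position its relationships assign to it.

```python
# Apply a catalog as an ordered sequence of steps. A 'mutate' step changes
# system state; an 'assert' step checks a predicate and fails the run if it
# doesn't hold at that point in the sequence.

def apply_catalog(steps, system_state):
    """steps: ordered list of ('assert', predicate) or ('mutate', fn) pairs."""
    for kind, action in steps:
        if kind == "mutate":
            action(system_state)
        else:  # 'assert'
            if not action(system_state):
                raise RuntimeError("assertion failed mid-catalog")
    return system_state

state = {"libfoo": "absent"}
steps = [
    ("mutate", lambda s: s.__setitem__("libfoo", "present")),
    # This assertion only holds because it is ordered after the mutation:
    ("assert", lambda s: s["libfoo"] == "present"),
]
print(apply_catalog(steps, state))  # {'libfoo': 'present'}
```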
N