RFC2 - Resource Defaults


Henrik Lindberg

Jul 6, 2014, 9:26:28 PM
to puppe...@googlegroups.com
Thank you, everyone who commented on the first RFC for Resource Defaults
(and Collection) - some very good ideas came up. I am creating a new
thread because I wanted to take the Collection part into a separate
discussion (honestly, I would like to burn the entire implementation -
but that is a separate topic), and I want to present some new ideas
about defaults, triggered by David Schmitt's idea about defaults
applicable to only a single resource body.

Let me back up and first describe a problem we have in the grammar.

The egrammar (as well as the current grammar in 3x) tries to be helpful
by recognizing certain combinations as illegal: an override that is
virtual or exported, or a resource default or override that specifies a
title. This unfortunately means that the grammar has to recognize
sequences of tokens in a way that makes the grammar ambiguous, which has
to be solved via operator precedence tricks (which in turn make the
problem show up in other corner cases of the grammar). (This is a
classic mistake of trying to implement too much semantics in the
grammar / parser.)

So...

What if we simply made the three resource expressions (create resource,
set resource defaults, and resource override) have exactly the same
grammar, and pushed the validation off to the static validation that
takes place after parsing, and to the runtime?

Basically the grammar would be (I am cheating just a little here to
avoid irrelevant details):

ResourceExpression
: At? left_expr = Expression '{' ResourceBodies ';'? '}'
;

ResourceBodies
: ResourceBody (';' ResourceBody)*
;

ResourceBody
: title = Expression ':' AttributeOperations ','?
;

AttributeOperations
: AttributeOperation (',' AttributeOperation)*
;

AttributeOperation
: AttributeName ('=>' | '+>') Expression
;

AttributeName
: NAME | KeywordsAcceptableAsAttributeName
;

# Details here irrelevant, meaning is: virtual or exported resource
# AT is the '@' token
At
: AT
| AT AT
| ATAT
;

So, how are the three kinds expressed? Notice that a title is required
for each ResourceBody. So we are basically going to handle different
combinations of left_expr and titles. We simply evaluate the left_expr
at runtime and treat the combinations of the *resulting* type and type
of title:

If left (result) is a String, it must be the name of a Resource Type.
The title works as it does now in 3x. In addition the title (of one
resource body) may be Default, which sets the defaults for this resource
expression only (all bodies in the same resource expression) - i.e.
"Schmitt style".

notify { hi: message => 'hello there' }

file {
  default:
    mode  => '0331',
    owner => 'weird';
  '/tmp/foo':
    content => 'content 1'
}

If the left is the keyword 'class', it works the same way as when the
left is a String, but defaults can now only be set for metaparameters,
since there are no other attributes that work safely across all classes.
(Yes, you can do this today with Class { } in 3x.)

class { 'a::b': param => value }
class { default: audit => true }

If the left is Type[CatalogEntry] (i.e. a resource or class reference),
the meaning changes to either a default or an override. An example is
probably easier than lots of words to describe this:

Define the defaults for all instances of the resource type Notify:

    Notify { default:
      message => 'the default message'
    }

Override the message for the notify resource hi:

    Notify { hi:
      message => 'ciao'
    }

If the left type is instance specific, it is now an error (since a
title followed by ':' is required to create a less ambiguous grammar).
If we allowed it, a user could write:

    Notify[hi] { bye: message => 'adieu' }

which is a very odd statement (and the reason why resource overrides
currently do not allow a title in their "resource body"). So, an error
for this case.
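The dispatch described above could be sketched roughly like this (a
Python sketch with invented names - this is not Puppet's implementation,
just an illustration of the decision table on the evaluated left
expression and each body's title):

```python
# A sketch of the proposed runtime dispatch (invented names, not Puppet's
# actual implementation): the left expression is evaluated first, and the
# combination of its *resulting* type and each body's title decides what
# the body means.

DEFAULT = 'default'  # stands in for the 'default' literal title

class ResourceType:
    """A type reference such as Notify (Type[CatalogEntry])."""
    def __init__(self, name):
        self.name = name

class ResourceInstance:
    """An instance-specific reference such as Notify[hi]."""
    def __init__(self, name, title):
        self.name, self.title = name, title

def dispatch(left, title):
    """Classify one resource body given the evaluated left expression."""
    if isinstance(left, ResourceInstance):
        # Notify[hi] { bye: ... } would be very odd -- an error.
        raise TypeError('left expression must not be instance specific')
    if isinstance(left, ResourceType):
        return 'type_default' if title == DEFAULT else 'override'
    if left == 'class':
        return 'class_default' if title == DEFAULT else 'class_params'
    if isinstance(left, str):  # a bare word is just a string, e.g. 'notify'
        return 'body_default' if title == DEFAULT else 'create'
    raise TypeError('left expression does not name a resource type')
```

Note how all of the validation that used to live in the grammar has
become plain type checks on already-evaluated values.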

Since what I propose simply evaluates the left expression, there is no
reason to deny certain expressions in this position, and it is possible
to use, say, a variable as indirection to the actual type.

$a = Notify
$a { hi: message => 'hello there' }

(Which is *very* useful to alias types). Strings can also be used - e.g.

'notify' { hi: message => 'hello there'}

which also makes the grammar more symmetric (a bare word like notify is
just a string value, i.e. 'notify'). (We would still not allow types to
have funny characters, spaces etc., but it is at least symmetrical.)

The current combination of (peculiar) grammar and static analysis of the
left expression to determine which "shape" the resource expression has
means that the parser cannot differentiate between a reference to a
type and a reference to a specific instance (in general), since all it
sees is

Capitalized { ... }
Capitalized[ ... ] { ... }

and thinks the first is a default, and the second an override. In the
new type system, however, Resource['notify'] is the same as just
Notify, and thus these longer references simply do not parse (they look
like Capitalized[...] to the parser / static "shape" detector).
(This could be fixed the ugly way, by making the "shape detector" aware
of the built-in type names - but that is... just ugly.)

I think it is possible to do this in a way that makes it possible to
deprecate the 3x style in 3.7 and change it in 4.0. There is breakage
here for sure, but we are already breaking the scoping of the defaults
(it was even suggested to remove them completely, as they are typically
used as syntactic sugar), and we are removing the more "powerful" (and
confusing) ways defaults work.

Anyway, a few corner cases exist. A +> is only allowed if it is a
default or override expression. This means that we can only statically
check (or rather, not check) a default resource body for use of +>
(there it is allowed). The other forms depend on the type of the result
of evaluating the left_expr, and thus that check must be done at runtime
(unless the expression is a literal - which is what is in all existing
3x code).

Summary:

By treating all three kinds of resource expressions the same way
grammatically, we get a cleaner grammar with fewer corner cases (and
fewer strange errors). The use of 'default' becomes symmetric; when
applied to a type it sets the defaults for that type, and when applied
to a resource body (or bodies) it applies to those bodies. Overriding
something still requires the title; it just moves from the left side to
become a title, as in a regular resource expression, only the left is
now an upper-cased type reference (e.g. notify { hi: } = regular,
Notify { hi: } = override).

As the implementation of this requires a number of iterations of grammar
changes I have not tried out what I propose in practice (but
I think it should work as it makes the grammar much simpler).

Do you like these ideas? Is it worth trying to make this work in
practice? (It will take 1-2 days for the implementation, and a bit more
to fix all breaking tests.)

My own main issue with the idea is that it makes code backwards
incompatible; you cannot write a manifest that uses defaults and
overrides in a way that works both in 3.x and 4.x. (Or, I have not
figured out a way yet at least).

(This is a cleanup that I have wanted to do for quite some time, and
David Schmitt's proposal triggered all of this - YES, we do have a
'default' literal that we can use!)

Looking forward to comments, and welcome back, all you 4th of July
celebrating peeps.

And finally, an alternative regarding overrides: if we want to keep the
left side resource-instance specific (i.e. no title), we could simply
change it to an assignment of a hash. I.e. instead of

Notify[hi] { message => 'overridden message' }

you write:

Notify[hi] = { message => 'overridden message' }

And now the right-hand side is simply a hash. The evaluator gets a
specific reference to a resource instance and knows what to do.
(We could also allow both: the type-plus-title-in-body way, and the
assignment way.)

Now I have typed too much already...

Regards
- henrik
--

Visit my Blog "Puppet on the Edge"
http://puppet-on-the-edge.blogspot.se/

Henrik Lindberg

Jul 8, 2014, 7:06:22 AM
to puppe...@googlegroups.com
Oh, and another insight - there is a desire to be able to specify the
same resource multiple times; if the values are the same, or augment
each other, then it is not a conflict. This will require the new catalog
builder we have been sketching on, since it has a richer catalog model.

It just struck me that if we do that, then there is really no
difference between a Resource Override and specifying values more than
once, and the Resource Override syntax can be removed.

- henrik



Trevor Vaughan

Jul 8, 2014, 8:31:27 AM
to puppe...@googlegroups.com
there is a desire to be able to specify the same resource multiple times - if the values are the same, or augmenting values then it is not a conflict

Yes, please! I've wanted this for years!

Trevor






--
You received this message because you are subscribed to the Google Groups "Puppet Developers" group.
To unsubscribe from this group and stop receiving emails from it, send an email to puppet-dev+unsubscribe@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/puppet-dev/lpgjan%24thf%241%40ger.gmane.org.

For more options, visit https://groups.google.com/d/optout.



--
Trevor Vaughan
Vice President, Onyx Point, Inc
(410) 541-6699
tvau...@onyxpoint.com

-- This account not approved for unencrypted proprietary information --

Ashley Penney

Jul 8, 2014, 10:55:56 AM
to puppe...@googlegroups.com
I've skipped everything else about these messages to beg for this too;
this is the single most important change we can make to the language to
allow users to use modules off the Forge more easily.





--
Ashley Penney
Module Engineer

Join us at PuppetConf 2014, September 23-24 in San Francisco - http://puppetconf.com

Henrik Lindberg

Jul 8, 2014, 11:51:08 AM
to puppe...@googlegroups.com
On 2014-07-08 16:55, Ashley Penney wrote:
> I've skipped everything else about these messages to beg for this too,
> this is the single most important change we can make to the language to
> allow users to use modules off the forge more easily.
>

Hear ya, loud and clear. The first cut of the new catalog model has
features that make it possible to implement this (basically keeping
track of multiple places, or "sources"). Lots of work remains; expect
this to arrive in Puppet 5.

The first hurdle, dynamic scope, has already been fixed - it would
otherwise have made the feature impossible.

We also have to define how defaults work (whose defaults win).

John Bollinger

Jul 8, 2014, 12:09:11 PM
to puppe...@googlegroups.com


On Sunday, July 6, 2014 8:26:28 PM UTC-5, henrik lindberg wrote:
Thank you everyone that commented on the first RFC for Resource Defaults
(and Collection) - some very good ideas came up. I am creating a new
thread because I wanted to take the Collection part in a separate
discussion (honestly I would like to burn the entire implementation -
but that is separate topic), and I want to present some new ideas about
defaults triggered by David Schmitts idea about defaults applicable to
only a resource body.

Let me back up and first describe a problem we have in the grammar.

The egrammar (as well as the current grammar in 3x) tries to be helpful
by recognizing certain combinations as illegal (an override that is
virtual or exported), a resource default or override cannot specify a
title. This unfortunately means that the grammar has to recognize
sequences of tokens that makes this grammar ambiguous and it has to be
solved via operator precedence tricks (that makes the problem show up as
other corner cases of the grammar).


I agree that that is a problem.  Are you saying that the 3.x grammar also has such ambiguities, or just that it also rejects some token sequences (as any grammar must do)?

 


I'm not fully up to speed on the type system, but surely if the left side evaluates to a resource or class *reference* then the statement can only be an override.  Right?  For it to express resource defaults, the left side must evaluate to a type -- either Class or a resource type.

 
An example is
probably easier that lots of words to describe this:

Define the defaults for all instances of the resource type Notify:

    Notify { default:
      message => 'the default message'
    }

Override the message for the notify resource hi.

    Notify { hi:
      message => 'ciao'
    }

If the left type is instance specific it is now an error (since a title
followed by ':' is required to create a less ambiguous grammar) - if we
allowed this, a user could write:

   Notify[hi] { bye: message => 'adieu' }

we allow a very odd statement (and the reason why resource overrides
currently does not allow a title in its "resource body"). So, an error
for this case.


I tried hard to not like this, but then I recognized how congruent it is -- or could be -- with one of my pet ideas: resource constraints.  Felix implemented part of that idea, but as I understand it, he handled only the diagnostic part (recognizing whether constraints were satisfied) and not the prescriptive part (updating the catalog, if possible, to ensure that declared constraints are satisfied).

I don't much like the idea of "overriding" resource declarations as such, but it's not such a big leap to reposition that as simply declaring additional requirements (i.e. constraints) on resources that (may) be declared elsewhere.  We get the rest of the way to a limited form of prescriptive constraints by collapsing this variety of resource overrides with resource declarations.  In other words, consider these two expressions:

file { '/tmp/hello': ensure => 'file' }
File { '/tmp/hello': ensure => 'file' }

What if they meant exactly the same thing?  Specifically, what if they meant "there must be a resource in the catalog of type File and name '/tmp/hello', having the value 'file' for its 'ensure' property"?  Either one would then insert such a resource into the catalog if necessary, and afterward attempt to set the (then) existing resource's 'ensure' parameter.

I say "attempt to set" the existing resource's parameter, because I think we still need to forbid modifying a value that was set (to a different value) somewhere else.  And that's more complicated than just checking for undef, because it may be that undef is itself the intentionally assigned parameter value, so that it would be an error to change it.  It also might need to be more permissive for subclasses and collectors.
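As a rough sketch of that rule (hypothetical semantics in Python, for
brevity - not anything Puppet implements today): setting an unset
parameter succeeds, re-stating the same value is fine, and stating a
different value is a conflict, with "unset" kept distinct from an
explicitly assigned undef.

```python
# A sketch (an assumed semantics, not an implemented Puppet feature) of
# the "declare additional requirements" reading. UNSET means the parameter
# was never declared; None models an explicitly assigned undef, which is a
# real assignment and therefore must not be changed either.

UNSET = object()   # parameter never declared at all

def constrain(resource, param, value):
    current = resource.get(param, UNSET)
    if current is UNSET or current == value:
        resource[param] = value   # first declaration, or agreement
        return resource
    raise ValueError('conflicting values for %r: %r vs %r'
                     % (param, current, value))
```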

We could go farther with that.  If it seems wasteful to have two different forms of the same statement, then we could apply additional semantics to one or the other.  For example, perhaps the lowercase form could implicitly declare all unmentioned parameters as undef, with the effect that they could not be overridden to anything different.  I don't know whether that would be useful, or whether there is some other behavior that would be more useful.  Perhaps it would be best to just let the two forms be equivalent, or maybe even to deprecate one.


Anyway, a few corner cases exist. A +> is only allowed if it is a
default or override expression.


+> is a bit of a dark horse in the regime of overrides, as modifying a previously-declared parameter value is inherent in its design.  On the other hand, it is currently useful for overrides only in subclasses and collectors, where modifying a declared value is considered acceptable.  I don't think it's a major issue that evaluating a statement containing a plussignment may yield an error, as long as the meaning of the statement is not itself in question.


Do you like these ideas? Is it worth trying to make this work to try it
in practice? (it will take 1-2 days for the implementation, and a bit
more to fix all breaking tests).


As I said before, I tried to dislike them, but I like them despite myself.  Especially if you're willing to accept my extension.

 

My own main issue with the idea is that it makes code backwards
incompatible; you cannot write a manifest that uses defaults and
overrides in a way that works both in 3.x and 4.x. (Or, I have not
figured out a way yet at least).



That's the main reason I tried to dislike the idea, and it's not negligible.  It seems like the best alternative for that would be to select an alternative syntax for type expressions that cannot be confused with resource references.  Doing so would remove some of the reason for the above idea, but would not render it moot.

Henrik Lindberg

Jul 8, 2014, 4:16:46 PM
to puppe...@googlegroups.com
On 2014-07-08 18:09, John Bollinger wrote:
>
>
> On Sunday, July 6, 2014 8:26:28 PM UTC-5, henrik lindberg wrote:
>
> Thank you everyone that commented on the first RFC for Resource
> Defaults
> (and Collection) - some very good ideas came up. I am creating a new
> thread because I wanted to take the Collection part in a separate
> discussion (honestly I would like to burn the entire implementation -
> but that is separate topic), and I want to present some new ideas about
> defaults triggered by David Schmitts idea about defaults applicable to
> only a resource body.
>
> Let me back up and first describe a problem we have in the grammar.
>
> The egrammar (as well as the current grammar in 3x) tries to be helpful
> by recognizing certain combinations as illegal (an override that is
> virtual or exported), a resource default or override cannot specify a
> title. This unfortunately means that the grammar has to recognize
> sequences of tokens that makes this grammar ambiguous and it has to be
> solved via operator precedence tricks (that makes the problem show
> up as
> other corner cases of the grammar).
>
>
>
> I agree that that is a problem. Are you saying that the 3.x grammar
> also has such ambiguities, or just that it also rejects some token
> sequences (as any grammar must do)?
>
The 3x grammar does not have this specific problem; instead it has a
dozen other problems, because people worked around ambiguities by
creating sub-trees of the expression tree for different kinds of
expressions. It is just horrible.
The problem is that this happens before anything is evaluated; the
grammar is making decisions on tokens, not on what the result evaluates
to.

What I want to do is to push the validation off to the runtime (where
it is known what the expressions evaluate to). Doing so requires that
the resource expression is unambiguous - now I have to treat the
following { } body as a kind of operator having a particular precedence.
To be able to do this, I would like all resource expressions to have a
title, as that makes the "expr { expr :" part unique, so it cannot be
confused with an expression followed by a hash.

A resource reference is essentially expr [ expr ]; the grammar
currently considers any such expression as one that should result in a
type reference, but it cannot know for sure statically - it must
evaluate the expression to see what it gets (yet the grammar contains
checks that cannot answer this correctly). (It is the remaining messy
part of the egrammar.)


> An example is
> probably easier that lots of words to describe this:
>
> Define the defaults for all instances of the resource type Notify:
>
> Notify { default:
> message => 'the default message'
> }
>
> Override the message for the notify resource hi.
>
> Notify { hi:
> message => 'ciao'
> }
>
> If the left type is instance specific it is now an error (since a title
> followed by ':' is required to create a less ambiguous grammar) - if we
> allowed this, a user could write:
>
> Notify[hi] { bye: message => 'adieu' }
>
> we allow a very odd statement (and the reason why resource overrides
> currently does not allow a title in its "resource body"). So, an error
> for this case.
>
>
>
> I tried hard to not like this, but then I recognized how congruent it is
> -- or could be -- with one of my pet ideas: resource constraints. Felix
> implemented part of that idea, but as I understand it, he handled only
> the diagnostic part (recognizing whether constraints were satisfied) and
> not the prescriptive part (updating the catalog, if possible, to ensure
> that declared constraints /are/ satisfied).
>
> I don't much like the idea of "overriding" resource declarations as
> such, but it's not such a big leap to reposition that as simply
> declaring additional requirements (i.e. constraints) on resources that
> (may) be declared elsewhere. We get the rest of the way to a limited
> form of prescriptive constraints by collapsing this variety of resource
> overrides with resource declarations. In other words, consider these
> two expressions:
>
> file { '/tmp/hello': ensure => 'file' }
> File { '/tmp/hello': ensure => 'file' }
>
> What if they meant exactly the same thing? Specifically, what if they
> meant "there must be a resource in the catalog of type File and name
> '/tmp/hello', having the value 'file' for its 'ensure' property"?
> Either one would then insert such a resource into the catalog if
> necessary, and afterward attempt to set the (then) existing resource's
> 'ensure' parameter.
>
> I say "attempt to set" the existing resource's parameter, because I
> think we still need to forbid modifying a value that was set (to a
> different value) somewhere else. And that's more complicated than just
> checking for undef, because it may be that undef is itself the
> intentionally assigned parameter value, so that it would be an error to
> change it. It also might need to be more permissive for subclasses and
> collectors.
>
Note that collection can currently modify any already set value on all
kinds of resources (regular, virtual and exported) at any point
throughout the evaluation. How is it that these "rules" are given such
mighty powers, when a rule such as "File['/tmp/foo'] { owner => x }" is
not allowed to override a set mode of the same file? (I understand the
need to guard against typos and unintentional changes.) Basically I see
File[id] { x => y } as the same expression as
File <| title == id |> { x => y }.

> We could go farther with that. If it seems wasteful to have two
> different forms of the same statement, then we could apply additional
> semantics to one or the other. For example, perhaps the lowercase form
> could implicitly declare all unmentioned parameters as undef, with the
> effect that they could not be overridden to anything different.. I
> don't know whether that would be useful, or whether there is some other
> behavior that would be more useful. Perhaps it would be best to just
> let the two forms be equivalent, or maybe even to deprecate one.
>
>
> Anyway, a few corner cases exist. A +> is only allowed if it is a
> default or override expression.
>
>
>
> +> is a bit of a dark horse in the regime of overrides, as modifying a
> previously-declared parameter value is inherent in its design. On the
> other hand, it is currently useful for overrides only in subclasses and
> collectors, where modifying a declared value is considered acceptable.
> I don't think it's a major issue that evaluating a statement containing
> a plussignment may yield an error, as long as the meaning of the
> statement is not itself in question.
>

I like the direction this is going! You are absolutely right that all
these expressions (resource attribute settings, "overrides", defaults)
basically define a set of rules that together should define the
resulting resources and the values of their attributes (call them
constraints or rules).

This would mean that +> would be perfectly valid even in definitions
that do not modify something inherited; instead it basically says "use
both what someone else said, and what I said" (if the parameter can
hold multiple values).
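A rough sketch of that reading of +> (hypothetical semantics, Python
used only for illustration): rather than replacing, it unions the new
values with whatever anyone else already said.

```python
# A sketch of '+>' under the constraint reading (hypothetical semantics,
# not Puppet's current behavior): union the new values with the existing
# ones, for parameters that can hold multiple values.

def plus_assign(resource, param, values):
    existing = resource.get(param, [])
    if not isinstance(existing, list):
        existing = [existing]           # promote a single value to a list
    merged = existing + [v for v in values if v not in existing]
    resource[param] = merged
    return resource
```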


>
> Do you like these ideas? Is it worth trying to make this work to try it
> in practice? (it will take 1-2 days for the implementation, and a bit
> more to fix all breaking tests).
>
>
>
> As I said before, I tried to dislike them, but I like them despite
> myself. Especially if you're willing to accept my extension.
>
>
Yes, I think your extension was that there is no difference between a
LHS that is a type (e.g. Notify), and a lower case name of a type (e.g.
notify). I agree completely.

> My own main issue with the idea is that it makes code backwards
> incompatible; you cannot write a manifest that uses defaults and
> overrides in a way that works both in 3.x and 4.x. (Or, I have not
> figured out a way yet at least).
>
>
There is one way that works across all versions: calling functions -
e.g. something like set_defaults(Type[CatalogEntry] type, Hash[String,
Any] values) and override(CatalogEntry resource_ref, Hash[String, Any]
values). This obviously works because it does not rely on a particular
syntax.
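As a sketch of what such functions could do (set_defaults and override
are only proposed names here, not existing Puppet functions; Python used
for illustration): because a plain function call parses identically in
3.x and 4.x, defaults and overrides expressed this way would survive the
syntax change.

```python
# Hypothetical sketch of the function-based workaround (invented names):
# defaults and overrides become data operations instead of syntax.

def set_defaults(defaults, type_name, values):
    """Record default attribute values for a resource type."""
    defaults.setdefault(type_name, {}).update(values)
    return defaults

def override(catalog, resource_ref, values):
    """Apply attribute values to one already-declared resource."""
    catalog[resource_ref].update(values)
    return catalog
```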

>
> That's the main reason I tried to dislike the idea, and it's not
> negligible. It seems like the best alternative for that would be to
> select an alternative syntax for type expressions that cannot be
> confused with resource references. Doing so would remove some of the
> reason for the above idea, but would not render it moot.
>

Even with functions added, it still means a change, and since, from the
point of view of a user, it does not provide immediate value (just
translation, a.k.a. pain), the entire idea may be a hard sell. I do
think we can put this and other ideas regarding queries (collection)
together in such a way that it justifies the required changes. I'm just
not sure yet on the entire package...

John Bollinger

Jul 9, 2014, 2:56:45 PM
to puppe...@googlegroups.com


Well, they are manifestly different at least in that the collector version will realize the selected resource if need be.  I anyway agree that there is an issue there, but I'm not confident that I characterize it the same way you do.  That is, I don't think File[id] { x => y } is too weak; rather, I think File<| title == id |> { x => y } is too strong.  If one part of my manifest set declares a property value for some resource, then it is wrong for a different part to change it, because the result no longer satisfies the original requirement.

There's some room to moderate that position a bit, of course.  For example, I have less objection to these kinds of changes being performed via subclasses -- after all, that's the whole point of subclasses.  One could also argue, for instance, about whether it should be ok to override parameters of virtual or exported resources.  I think even these cases are uncomfortable, but I'm not prepared to advocate for their foreclosure against the opposition that would surely arise.

 


Yes, that's a good reinterpretation.  One of Puppet's weaknesses in this area is that it currently has few good ways to distinguish parameter values that are hard requirements from those that are preferences, nor to distinguish inclusive requirements from exclusive ones.  Using +> to designate inclusive requirements would help that out some, at least for multi-valued parameters.

 

>
>     Do you like these ideas? Is it worth trying to make this work to try it
>     in practice? (it will take 1-2 days for the implementation, and a bit
>     more to fix all breaking tests).
>
>
>
> As I said before, I tried to dislike them, but I like them despite
> myself.  Especially if you're willing to accept my extension.
>
>
Yes, I think your extension was that there is no difference between a
LHS that is a type (e.g. Notify), and a lower case name of a type (e.g.
notify). I agree completely.



Yes, that's basically it, but don't overlook that making it so requires adjustments to the semantics of both kinds of statements.


John

Henrik Lindberg

Jul 9, 2014, 4:51:48 PM
to puppe...@googlegroups.com
> version will realize the selected resource if need be.. I anyway agree
> that there is an issue there, but I'm not confident that I characterize
> it the same way you do. That is, I don't think File[id] { x => y } is
> too weak; rather, I think File<| title == id |> { x => y } is too
> strong. If one part of my manifest set declares a property value for
> some resource, then it is *wrong* for a different part to change it,
> because the result no longer satisfies the original requirement.
>
I can agree with that too. That was my first reaction - overriding
anything is just horrible.

> There's some room to moderate that position a bit, of course. For
> example, I have less objection to these kinds of changes being performed
> via subclasses -- after all, that's the whole point of subclasses.

Yeah, but that is like allowing it if you add "please" to the request,
since there would be nothing stopping you from subclassing and overriding.

> One
> could also argue, for instance, about whether it should be ok to
> override parameters of virtual or exported resources.

I have less qualms about the virtual - they are sort of not baked yet.
It is when they are realized that it gets prickly IMO... also when
mutating something that was exported from elsewhere (rewriting history).
Yes, there needs to be a way to tell puppet how strongly something is
desired... - we could end lines with 'please', 'pretty please' and 'or
be killed', the more expletives the higher the desire...

That model breaks down too when used wrong (or facing pathological
cases) - look at the number of !important that people have to put into a
CSS to get styling right...

There are a number of different cases here:

- I don't have a value, you must supply one
- I have a default value, you may change it
- Once I am given a non-default value, I may not be changed
- I have a constant value
- I can answer if my (current) value is a default value
- You can change my value back to the default (without knowing what that
value is)
- I get my value from a provider of a value, you can supply the provider
- You can add values, but not remove any
- Values you set can be changed only by you
- My value is derived from other values
- etc... etc...

And that needs to be joined with a mechanism deciding whose statements
about these things have precedence:

- by weight
- by time
- by composition (strata/layers/hierarchy)
- by additional meta rules (rules that describe the importance of the
other rules)
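As a toy illustration of just the first mechanism, precedence by weight
(purely hypothetical - no such mechanism exists in Puppet; Python used
for brevity):

```python
# Toy sketch of precedence "by weight" (invented semantics): every
# statement about a parameter carries a weight, the heaviest wins, and
# distinct values at equal top weight are a conflict.

def resolve(statements):
    """statements: list of (weight, value) pairs; returns the winner."""
    top = max(weight for weight, _ in statements)
    winners = {value for weight, value in statements if weight == top}
    if len(winners) > 1:
        raise ValueError('conflicting statements at equal weight')
    return winners.pop()
```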

>
> >
> > Do you like these ideas? Is it worth trying to make this work
> to try it
> > in practice? (it will take 1-2 days for the implementation,
> and a bit
> > more to fix all breaking tests).
> >
> >
> >
> > As I said before, I tried to dislike them, but I like them despite
> > myself. Especially if you're willing to accept my extension.
> >
> >
> Yes, I think your extension was that there is no difference between a
> LHS that is a type (e.g. Notify), and a lower case name of a type (e.g.
> notify). I agree completely.
>
>
>
> Yes, that's basically it, but don't overlook that making it so requires
> adjustments to the semantics of /both/ kinds of statements.
>
Oh, yeah.

David Schmitt

Jul 11, 2014, 8:47:20 AM
to puppe...@googlegroups.com
Hi *,
[...]
> Since what I propose simply evaluates the left expression there is no
> reason to deny certain expression in this place, and it is possible to
> use say a variable as indirection to the actual type.
>
> $a = Notify
> $a { hi: message => 'hello there' }
>
> (Which is *very* useful to alias types). Strings can also be used - e.g.
>
> 'notify' { hi: message => 'hello there'}
>
> which also makes the grammar more symmetric (a bare word like notify is
> just a string value i.e. 'notify'). (We still would not allow types
> to have funny characters, spaces etc. but it is at least symmetrical).


I really dig this idea. Reading it sparked a crazy idea in the
language-designer part of my brain: What about going even further and
making the RHS also an Expression?

In the grammar basically everything would become a function call or just
a sequence of expressions. For the expressiveness of the language it
might do wonders:

$ref = File[id]
$select = File<|title == id|>
$ref == $select # true
$type = File

$values = { id => { mode => 0664, owner => root } }
# equivalent hash shortcut notation for backwards compat and
# keystroke reduction
$values = { id: mode => 0664, owner => root }
$defaults = { owner => def, group => def }
$overrides = { mode => 0 }

$final = hash_merge($values, { default: $defaults })

# old style
create_resources($type, $values, $defaults)
# basic resource statement
$type $final
# interpreted as function call
$type($final)
# override chaining
$ref $overrides
$select $overrides

# if create_resources would return the created resources:
$created = create_resources($type, $values, $defaults)
$created $overrides

# replace create_resources
File hiera('some_files')

# different nesting
file { "/tmp/foo": $value_hash }

# extreme override chaining
File['/tmp/bar']
{ mode => 0644 }
{ owner => root }
{ group => root }

# inverse defaulting
file { [ '/tmp/1', '/tmp/2' ]: } { mode => 0664, owner => root }

# define defined()
defined(File['/tmp/bar']) == !empty(File<|title == '/tmp/bar'|>)

This would require unifying the attribute overriding semantics, as
almost everything would become an override.

It would also lift set-of-resources as currently used in simple
collect-and-override statements to an important language element as
almost everything touching resources would "return" such a set.

Formalizing this a little bit:

* 'type' is a type reference.
* 'Type' is the list of resources of type 'type' in the current
catalog (compilation).
* 'Type[expr]' is the resource of type 'type' and the title equal
to the result of evaluating 'expr'
* 'Type<| expr |>' is the list of local resources of type 'type' in
the current compilation where 'expr' evaluates true. As a
side-effect, it realizes all matched virtual resources.[1]
* 'Type<<| expr |>>' is the list of local and exported resources of
type 'type' where 'expr' evaluates true. As a side-effect,
it realizes all matched exported resources.[2]
* '{ key => value, }' is a simple hash ('hash')
  * '{ title: key => value, }' is a hash-of-hashes. Let's call this an
untyped resource ('ur') due to its special syntax[3].
* 'type ur' now syntactically matches what puppet3 has and evaluates
to the set of resources ('resset') created by
create_resources('type', 'ur').
* '[Type1[expr1], Type2[expr2]]' is the resset containing
'Type1[expr1]' and 'Type2[expr2]'.
* 'resset hash' (e.g. 'File { mode => 0 }') is an override expression.
It sets all values from 'hash' on all resources in 'resset'.
* 'resset -> resset' (and friends) define resource relationships
between sets of resources.
'Yumrepo -> Package' would be a nice example, also avoiding
premature realization.
* 'create_resource(type, ur)' returns a resset containing resources
of type 'type' with the values from 'ur'. Written differently,
'create_resource' becomes a cast-and-realize operator.[4]
- This allows things like 'create_resource(...) -> resset' and
'create_resource(...) hash'
* 'include someclass' returns the resset of all resources included in
'someclass'. Note that 'included' is a very weakly defined concept
in puppet, see Anchor Pattern.
* Instances of user-defined types might also be seen as heterogeneous
ressets.
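The core 'resset hash' operation above can be sketched as a toy Python model, where a resource is just a dict and a resset is a list of them (names here are illustrative, not Puppet API):

```python
def apply_hash(resset, overrides):
    """Model of 'resset hash': set all values from the hash on every
    resource in the set; returning the set allows chaining."""
    for resource in resset:
        resource.update(overrides)
    return resset

files = [
    {"type": "file", "title": "/tmp/1"},
    {"type": "file", "title": "/tmp/2"},
]
# the "inverse defaulting" example: one hash applied to both resources
apply_hash(files, {"mode": "0664", "owner": "root"})
print(files[0]["owner"])  # root
```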


[1] It might be worthwhile to start requiring that
'realize(Type<| expr |>)' always be written for this side-effect. This
looks annoying.
[2] Unintentionally realized exported resources seem to be a much less
frequent problem than the same side-effect causes on virtual resources.
It might make sense to avoid [1] and instead introduce something like
'Type[|expr|]' and 'Type[[|expr|]]' to select without realizing.
[3] Note that this is really only syntactic. { title => { key => value
}} would evaluate to the equivalent untyped resource.
[4] I'm beginning to get an uncanny XPath/PuppetSQL vibe here.

Up until now, this is MOSTLY syntactic sugar to massively improve the
flexibility of the language. To avoid the most egregious abuses and
traps of this flexibility we have to take a good look at the underlying
datamodel, how evaluating puppet manifests changes this model and what
the result should be.

The result is very simple: the compiled catalog is a heterogeneous set
of resources. In an ideal world, the contents of this resset are
independent of the evaluation order of the source files (and also of
the order of the statements within).

Unifying all kinds of overrides, defaults and "normal" parameter setting
into a single basic operation opens the way to discuss this on a
different level: for an evaluation-order-independent result, it is not
important how or when a value is set, only that it is set at most once.
That is a condition that is easily checked and enforced if we accept
that the evaluator may reject some complex manifests that could
theoretically be evaluated, but not with a given implementation.

The alert reader rightly complains that defaults and overrides have
different precedences. To make a strict evaluation possible I'd suggest
creating multiple "value slots" on a property: a default, a normal and
an override slot. The property's value is the highest-priority value
available.

To avoid write/write conflicts in the evaluation, each slot may be
changed only once. This follows directly from the eval-order
independence requirement: when there are two places trying to set the
same property to different values with the same precedence it cannot
work. The argument is the same as for disallowing duplicate resources
currently.
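The multi-slot scheme with write-once slots can be sketched in Python (a minimal model of the proposal, with made-up names, not an actual implementation):

```python
class Property:
    """One write-once slot per precedence level; the effective value is
    the highest-precedence slot that has been set."""
    PRECEDENCE = ("default", "normal", "override")  # low to high

    def __init__(self):
        self._slots = {}

    def set(self, slot, value):
        if slot in self._slots:
            # write/write conflict: this precedence level already claimed
            raise ValueError(f"slot {slot!r} already set")
        self._slots[slot] = value

    def value(self):
        for slot in reversed(self.PRECEDENCE):
            if slot in self._slots:
                return self._slots[slot]
        return None

mode = Property()
mode.set("default", "0644")  # e.g. from File { mode => 0644 }
mode.set("normal", "0600")   # e.g. from file { '/tmp/foo': mode => 0600 }
print(mode.value())          # 0600: the normal slot outranks the default
```

A second write to the same slot raises, which is the direct analogue of today's duplicate-resource error.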

To avoid read/write conflicts in the evaluation, each property may be
sealed to the currently available value(s) when reading from it. This
allows detecting write-after-read situations. At this point the
evaluator has enough information to decide whether the write is safe
(the value doesn't change) or not (the eval-order independence is
violated). In a future version, the evaluator could be changed to
return promises instead of values and to evaluate promises lazily. That
way it would be possible to evaluate all manifests that have an
eval-order-independent result (that is, all that are
reference-loop-free).
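The seal-on-read rule can be sketched the same way (again an illustrative Python model, not Puppet internals): reading freezes the property, and a later write is permitted only when it leaves the observed value unchanged.

```python
class SealedValue:
    """Reading seals the value; a later write is safe only if it does
    not change what was already observed, otherwise eval-order
    independence is violated and evaluation must abort."""
    def __init__(self):
        self._value = None
        self._sealed = False

    def write(self, value):
        if self._sealed and value != self._value:
            raise RuntimeError("write after read would change the value")
        self._value = value

    def read(self):
        self._sealed = True
        return self._value

v = SealedValue()
v.write("0644")
assert v.read() == "0644"  # someone now depends on "0644"
v.write("0644")            # safe: the observed value does not change
# v.write("0755")          # would raise: write-after-read conflict
```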

The case of +>: the write/write conflict is irrelevant up to the order
of the resulting list. The read/write conflict can be checked like any
other case.

A more subtle problem with this approach is resset-based assignments.
Some examples:

File { mode => 0644 } # wrong precedence
file { '/tmp/foo': mode => 0600 }

File['/tmp/foo'] { mode => 0644 }
file { '/tmp/foo': mode => 0600 }

File<| title == '/tmp/foo' |> { mode => 0644 }
file { '/tmp/foo': mode => 0600 }

File <| owner == root |> { mode => 0644 }
file { '/tmp/foo': mode => 0600 }

The solution to this lies in deferring evaluation of all dynamic (Type
and Type<||>) ressets to the end of the compilation. While that would
not influence write/write conflicts, it would force most read/write
conflicts to surface consistently.

Another ugly thing would be detecting this nonsense:

File <| mode == 0600 |> { mode => 0644 }


The same read/write conflict detection logic could be re-used for
variables, finally making it possible to detect use of not-yet-defined
variables.

> My own main issue with the idea is that it makes code backwards
> incompatible; you cannot write a manifest that uses defaults and
> overrides in a way that works both in 3.x and 4.x. (Or, I have not
> figured out a way yet at least).

Even if you skip the resources-as-hashes idea, I think most of the
defaults and overrides precedence and eval-order confusion can be
mitigated by a multi-slot implementation for properties as described above.

> And finally, an alternative regarding Overrides, if we want to keep the
> left side to be resource instance specific, (i.e no title), we could
> simply change it to an assignment of a hash. I.e. instead of
>
> Notify[hi] { message => 'overridden message' }
>
> you write:
>
> Notify[hi] = { message => 'overriden message' }
>
> And now, the right hand side is simply a hash. The evaluator gets a
> specific reference to a resource instance, and knows what to do.
> (We could also allow both; the type + title in body way, and the
> assignment way).

This is what actually triggered my first idea. Also because I really
dislike the assignment there.

> Now I have typed too much already...

Me too ;-)


Regards, David


Henrik Lindberg

Jul 11, 2014, 10:50:47 PM
to puppe...@googlegroups.com
Yes, that is how everything else works, but it cannot here because of
the ambiguity between a hash and resource body/bodies (in the two
different shapes for regular vs. override/defaults).

> $ref = File[id]
> $select = File<|title == id|>
> $ref == $select # true
> $type = File
>
Yes, except we will have issues with the query (it is lazy now). We
either need to make it evaluate immediately, or make the return value
a Future.

> $values = { id => { mode => 0664, owner => root } }
> # equivalent hash shortcut notation for backwards compat and
> # keystroke reduction
> $values = { id: mode => 0664, owner => root }

Ah, neat idea to make { x: y => z } mean the same as { x => { y => z } }!

> $defaults = { owner => def, group => def }
> $overrides = { mode => 0 }
>
> $final = hash_merge($values, { default: $defaults })
>
Did you mean?
hash_merge($defaults, $overrides)

(which btw is the same as $defaults + $overrides).
Or was that an example of a special instruction to hash_merge to treat
a key of literal 'default' as values that do not override (i.e. all
other keys override, but 'default' defines what to pick if nothing else
is defined)? If so, this is easily expressed directly in the language
like this:

$final = $defaults + $values + $overrides
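For readers unfamiliar with Puppet's + on hashes: it merges left to right with right-hand keys winning, which is the same precedence as Python's dict-unpacking merge (values taken from David's example above):

```python
defaults  = {"owner": "def", "group": "def"}
values    = {"mode": "0664", "owner": "root"}
overrides = {"mode": "0"}

# $defaults + $values + $overrides: later hashes shadow earlier keys
final = {**defaults, **values, **overrides}
print(final)  # {'owner': 'root', 'group': 'def', 'mode': '0'}
```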

> # old style
> create_resources($type, $values, $defaults)
> # basic resource statement
> $type $final
This is problematic: we cannot make any sequence a function call without
requiring that every expression is terminated with punctuation (e.g.
';'), but that must then be applied everywhere.

> # interpreted as function call
> $type($final)

This is problematic because:
- selecting what to call via a general expression has proven in several
languages to be a source of thorny bugs in user code.

> # override chaining
> $ref $overrides
> $select $overrides
>
> # if create_resources would return the created resources:
It should.

> $created = create_resources($type, $values, $defaults)
> $created $overrides
>
> # replace create_resources
> File hiera('some_files')
>
> # different nesting
> file { "/tmp/foo": $value_hash }
>
> # extreme override chaining
> File['/tmp/bar']
> { mode => 0644 }
> { owner => root }
> { group => root }
>
> # inverse defaulting
> file { [ '/tmp/1', '/tmp/2' ]: } { mode => 0664, owner => root }
>
> # define defined()
> defined(File['/tmp/bar']) == !empty(File<|title == '/tmp/bar'|>)
>
> This would require unifying the attribute overriding semantics as almost
> everything would become a override.
>
> It would also lift set-of-resources as currently used in simple
> collect-and-override statements to an important language element as
> almost everything touching resources would "return" such a set.
>
> Formalizing this a little bit:
>
> * 'type' is a type reference.
> * 'Type' is the list of resources of type 'type' in the current
> catalog (compilation).
This is actually a reference to the set that includes all instances of
that type (irrespective of whether they actually exist anywhere).
Something needs to narrow that set to "in the catalog" (which is
actually several sets: realized, virtual, exported from here, imported
to here, ...). To get such a set, there should be a query operator (it
operates on a container and takes a type and predicates for that type).

> * 'Type[expr]' is the resource of type 'type' and the title equal
> to the result of evaluating 'expr'
yes
> * 'Type<| expr |>' is the list of local resources of type 'type' in
> the current compilation where 'expr' evaluates true. As a
> side-effect, it realizes all matched virtual resources.[1]
Some sort of query operator. If we keep the <| |>, it could mean
selection of the "virtual here" container/subset; currently it is
"everything defined here".

> * 'Type<<| expr |>>' is the list of local and exported resources of
> type 'type' where 'expr' evaluates true. As a side-effect,
> it realizes all matched exported resources.[2]
Same comment as above, but using a different container.

> * '{ key => value, }' is a simple hash ('hash')
> * '{ title: key => value, }' is a hash-of-hashes. Let's call this a
> untyped resource ('ur') due to its special syntax[3].
> * 'type ur' now syntactically matches what puppet3 has and evaluates
> to the set of resources ('resset') created by
> create_resources('type', 'ur').
> * '[Type1[expr1], Type2[expr2]]' is the resset containing
> 'Type1[expr1]' and 'Type2[expr2]'.
That is what you get now. (or rather you get a set of references to the
resource instances, not the instances themselves).

> * 'resset hash' (e.g. 'File { mode => 0 }') is an override expression.
> It sets all values from 'hash' on all resources in 'resset'.
> * 'resset -> resset' (and friends) define resource relationships
> between sets of resources.
> 'Yumrepo -> Package' would be a nice example, also avoiding
> premature realization.
The relations are recorded as being between references.

> * 'create_resource(type, ur)' returns a resset containing resources
> of type 'type' with the values from 'ur'. Written differently,
> 'create_resource' becomes a cast-and-realize operator.[4]
> - This allows things like 'create_resource(...) -> resset' and
> 'create_resource(...) hash'
> * 'include someclass' returns the resset of all resources included in
> 'someclass'. Note that 'included' is a very weakly defined concept
> in puppet, see Anchor Pattern.
Hm, intriguing idea.

> * Instances of user-defined types might also be seen as heterogeneous
> ressets.
>
Yes.

>
> [1] It might be worthwhile to start requiring to always write
> 'realize(Type<| expr |>)' for this side-effect. This looks annoying.

it could be

Type <| expr |>.realize

> [2] Unintentionally realized exported resources seem a much less
> frequent problem than the same side-effect on virtual resources causes.
> It might make sense to avoid [1] and instead introduce something like
> 'Type[|expr|]' and 'Type[[|expr|]]' to select without realizing.

I would rather go in the other direction, with fewer special operators.

> [3] Note that this is really only syntactic. { title => { key => value
> }} would be the evaluate to the equivalent untyped resource.
> [4] I'm beginning to get an uncanny XPath/PuppetSQL vibe here.
>
:-)

> Up until now, this is MOSTLY syntactic sugar to massively improve the
> flexibility of the language. To avoid the most egregious abuses and
> traps of this flexibility we have to take a good look at the underlying
> datamodel, how evaluating puppet manifests changes this model and what
> the result should be.
>
> The result is very simple: the compiled catalog is a heterogeneous set
> of resources. In an ideal world is that the contents of this resset is
> independent of the evaluation order of the source files (and also the
> order of the statements within).
>
Yes. A Catalog is basically Array[CatalogEntry]

> Unifying all kinds of overrides, defaults and "normal" parameter setting
> into a single basic operation opens the way to discuss this on a
> different level: for a evaluation order independent result, it's not
> important how or when a value is set, but it's only important that it is
> only set once at most. That is a condition that is easily checked and
> enforced if we accept that the evaluator may reject some complex
> manifests that could be evaluated theoretically but not with a given
> implementation.
>
yes.

> The alert reader rightly complains that defaults and overrides have
> different precedences. To make a strict evaluation possible I'd suggest
> to create multiple "value slots" on a property. A default, normal and
> override slot. The properties' value is the highest priority value
> available.
>
That is one way, yes.

> To avoid write/write conflicts in the evaluation, each slot may be
> changed only once. This follows directly from the eval-order
> independence requirement: when there are two places trying to set the
> same property to different values with the same precedence it cannot
> work. The argument is the same as for disallowing duplicate resources
> currently.
>
I think this may just move the problem to dueling defaults, dueling
values, and dueling overrides. This problem occurs in the binder, where
it is solved by the following rules, expressed in the terms we use here
(except the term 'layer', which I will come back to):
- if two defaults are in conflict, a set value wins
- if two values are in conflict, an override wins
- if two overrides are in conflict, then the one made in the highest
layer wins
- a layer must be conflict free

The highest (most important) layer is "the environment", and below it
"all modules". This means that conflicts bubble to the top, where a
user must resolve the conflict by making the final decision.
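These binder-style rules can be sketched as a resolution function (illustrative Python; the kind and layer names are assumptions made up for the example):

```python
KIND = {"default": 0, "value": 1, "override": 2}
LAYER = {"modules": 0, "environment": 1}

def resolve(statements):
    """statements: list of (kind, layer, value). A set value beats a
    default, an override beats a value, and among equals the highest
    layer wins; a remaining tie is an unresolved conflict."""
    ranked = sorted(statements,
                    key=lambda s: (KIND[s[0]], LAYER[s[1]]),
                    reverse=True)
    best = ranked[0]
    ties = [s for s in ranked
            if (KIND[s[0]], LAYER[s[1]]) == (KIND[best[0]], LAYER[best[1]])]
    if len({s[2] for s in ties}) > 1:
        raise ValueError("conflict must be resolved in a higher layer")
    return best[2]

# two dueling module values, settled by an environment-level override
print(resolve([("value", "modules", "installed"),
               ("value", "modules", "2.0"),
               ("override", "environment", "2.1")]))  # 2.1
```

Without the environment-level override, the two dueling module values would raise, which is the "conflicts bubble to the top" behavior.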

The environment level can be thought of as what is expressed in
"site.pp", global or expressed for a "node" (if we forget for a while
about all the crazy things puppet allows you to do with global scope;
open and redefine code etc).

> To avoid read/write conflicts in the evaluation, each property may be
> sealed to the currently available value(s) when reading from it. This
> allows detecting write-after-read situations. At this point the
> evaluator has enough information to decide whether the write is safe
> (the value doesn't change) or not (the eval-order independence is
> violated). In a future version, the evaluator could be changed to return
> promises instead of values and to lazy evaluation of promises. That way
> it would be possible to evaluate all manifests that have a eval-order
> independent result (that is, all that are reference-loop-free).
>
yes, and now, basically, the catalog is produced using a production
system that was populated by the puppet logic.

> The case of +>: the write/write conflict is irrelevant up to the order
> of the resulting list. The read/write conflict can be checked like any
> other case.
>
> A more subtle problem with this approach are resset-based assignments.
> Some examples:
>
> File { mode => 0644 } # wrong precedence
> file { '/tmp/foo': mode => 0600 }
>
> File['/tmp/foo'] { mode => 0644 }
> file { '/tmp/foo': mode => 0600 }
>
> File<| title == '/tmp/foo' |> { mode => 0644 }
> file { '/tmp/foo': mode => 0600 }
>
> File <| owner == root |> { mode => 0644 }
> file { '/tmp/foo': mode => 0600 }
>
> The solution to this lies in deferring evaluation of all dynamic (Type
> and Type<||>) ressets to the end of the compilation. While that would
> not influence write/write conflicts, it would force most read/write
> conflicts to happen always.
>
> Another ugly thing would be detecting this nonsense:
>
> File <| mode == 0600 |> { mode => 0644 }
>
>
> The same read/write conflict detection logic could be re-used for
> variables, finally being able to detect use of not-yet-defined variables.
>
Here we have another problem: variables defined in classes are very
different from those defined elsewhere - they are really
attributes/parameters of the class. All other variables follow the
imperative flow. That has always bothered me, and it causes leakage
from classes (all the temporary variables, those used for internal
purposes, etc). This is also the source of "immutable variables"; they
really do not have to be immutable (except in this case).

If we make variables part of the lazy logic, you would be able to write:

$a = $b + 2
$b = 2

I think this will confuse people greatly.

>> My own main issue with the idea is that it makes code backwards
>> incompatible; you cannot write a manifest that uses defaults and
>> overrides in a way that works both in 3.x and 4.x. (Or, I have not
>> figured out a way yet at least).
>
> Even if you skip the resources-as-hashes idea, I think most of the
> defaults and overrides precedence and eval-order confusion can be
> mitigated by a multi-slot implementation for properties as described above.
>
>> And finally, an alternative regarding Overrides, if we want to keep the
>> left side to be resource instance specific, (i.e no title), we could
>> simply change it to an assignment of a hash. I.e. instead of
>>
>> Notify[hi] { message => 'overridden message' }
>>
>> you write:
>>
>> Notify[hi] = { message => 'overriden message' }
>>
>> And now, the right hand side is simply a hash. The evaluator gets a
>> specific reference to a resource instance, and knows what to do.
>> (We could also allow both; the type + title in body way, and the
>> assignment way).
>
> This is what actually triggered my first idea. Also because I really
> dislike the assignment there.
>

The crux here is that one expression may simply be followed by
another, e.g.:

1 1

is this a call to 1 with 1 as an argument, or the production of the
value 1, followed by another?

This general problem is solved by stating that for this to be a call,
the first expression must be special; a NAME token that is followed by a
general expression (or a list of expressions: e.g. NAME e,e,e)

We cannot turn a hash into an operator since that would make it close to
impossible to write a literal hash.

Hence... for the resource expressions we need an operator that operates
on three things: type, id, and named arguments (plus, via the operator
or through other means, the extra information of whether each value is
a default, a value, or an override, and whether it is an addition or a
subtraction).

We can solve this by making the data structure special (the {: }),
using an operator, or using a more complex but generic data structure
(a hash with particular keys). If we use : in hashes to mean
hash-of-hash, then we make it easier to encode things like defaults,
values and overrides, but we lack type and id.

You could read the
notify { hi: message => hello }
as:
Notify.new(hi, {message=>hello})

As I see it, the main grammar problem is that there is no "new
operator". Hence my attempt:

Notify[hi] = {message => hello}
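The "no new operator" observation can be made concrete with a small model: the resource expression behaves like a constructor taking the three operands listed above (type, id, named arguments). Everything below is illustrative Python, not Puppet internals:

```python
class Resource:
    """Toy resource: a type, a title, and a bag of attributes."""
    def __init__(self, type_, title, attrs):
        self.type = type_
        self.title = title
        self.attrs = dict(attrs)

def new_resource(type_, title, attrs):
    """The missing 'new operator': notify { hi: message => hello }
    read as Notify.new(hi, {message => hello})."""
    return Resource(type_, title, attrs)

r = new_resource("notify", "hi", {"message": "hello"})
print(r.attrs["message"])  # hello
```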


>> Now I have typed too much already...
>
> Me too ;-)
>
Dueling ramblers?

To Summarize

I think it will be hard to change the core expression that creates a
resource - i.e.

notify { hi : ...}

and then we are back at where I started:
- we can play tricks with the title (using a literal default there)
- we can generalize the LHS since {: is an operator (i.e. differentiate
between the LHS being a name or a type (notify vs Notify), being a
resource-set, say from a query like Notify <| |>, or indeed any
expression such as a variable reference). The main problem here is
being able to infer the correct type (when that is not possible we end
up with late evaluation errors if there are mistakes, and those are
hard to deal with), so we may want to restrict the type of expression
to those where the type is easily inferred.

David Schmitt

Jul 17, 2014, 8:57:06 AM
to puppe...@googlegroups.com
Ah, nice! Actually I was thinking of something completely different, but
this is better.

>> # old style
>> create_resources($type, $values, $defaults)
>> # basic resource statement
>> $type $final
> This is problematic, we cannot make any sequence a function call without
> requiring that every expression is terminated with punctuation (e.g.
> ';') - but that must then be applied everywhere.


You are right, having that in the grammar makes no sense. I still think
it is a neat detail to keep in mind when thinking about the underlying
structure of what we're building.

>> # interpreted as function call
>> $type($final)
>
> This is problematic because:
> - selecting what to call via a general expression has proven in several
> languages to be a source of thorny bugs in user code.

Conceded.
If functions can return sets of resources that can be manipulated, the
special query operator syntax can be abolished - or at least
de-emphasised. See Eric's puppetdbquery for an example.

>> * 'Type<<| expr |>>' is the list of local and exported resources of
>> type 'type' where 'expr' evaluates true. As a side-effect,
>> it realizes all matched exported resources.[2]
> Same comment as above, but using a different container.
>
>> * '{ key => value, }' is a simple hash ('hash')
>> * '{ title: key => value, }' is a hash-of-hashes. Let's call this a
>> untyped resource ('ur') due to its special syntax[3].
>> * 'type ur' now syntactically matches what puppet3 has and evaluates
>> to the set of resources ('resset') created by
>> create_resources('type', 'ur').
>> * '[Type1[expr1], Type2[expr2]]' is the resset containing
>> 'Type1[expr1]' and 'Type2[expr2]'.
> That is what you get now. (or rather you get a set of references to the
> resource instances, not the instances themselves).

Is there a distinguishable difference for the language user?

>> * 'resset hash' (e.g. 'File { mode => 0 }') is an override expression.
>> It sets all values from 'hash' on all resources in 'resset'.
>> * 'resset -> resset' (and friends) define resource relationships
>> between sets of resources.
>> 'Yumrepo -> Package' would be a nice example, also avoiding
>> premature realization.
> The relations are recorded as being between references.
>
>> * 'create_resource(type, ur)' returns a resset containing resources
>> of type 'type' with the values from 'ur'. Written differently,
>> 'create_resource' becomes a cast-and-realize operator.[4]
>> - This allows things like 'create_resource(...) -> resset' and
>> 'create_resource(...) hash'
>> * 'include someclass' returns the resset of all resources included in
>> 'someclass'. Note that 'included' is a very weakly defined concept
>> in puppet, see Anchor Pattern.
> Hm, intriguing idea.
>
>> * Instances of user-defined types might also be seen as heterogeneous
>> ressets.
>>
> Yes.
>
>>
>> [1] It might be worthwhile to start requiring to always write
>> 'realize(Type<| expr |>)' for this side-effect. This looks annoying.
>
> it could be
>
> Type <| expr |>.realize

Ugh.

>> [2] Unintentionally realized exported resources seem a much less
>> frequent problem than the same side-effect on virtual resources causes.
>> It might make sense to avoid [1] and instead introduce something like
>> 'Type[|expr|]' and 'Type[[|expr|]]' to select without realizing.
>
> I like to go in the other direction with fewer special operators.

As said above, functions returning resource sets might be the way to go
then. It's not like we need to design the next APL ;-)

Don't you mean "the highest layer with a value must be conflict free"?

> Highest (most important) layer is "the environment", secondly "all
> modules" - this means that conflicts bubble to the top, where a user
> must resolve the conflict by making the final decision.
>
> The environment level can be thought of as what is expressed in
> "site.pp", global or expressed for a "node" (if we forget for a while
> about all the crazy things puppet allows you to do with global scope;
> open and redefine code etc).

If you mean what I think you mean, I think I like it.

Another example to try to understand this:

class somemodule { package { "git": ensure => installed } }

class othermodule { package { "git": ensure => '2.0' } }

node 'developer-workstation' {
  # force conflict on Package[git]#ensure here: installed != '2.0'
  include somemodule
  include othermodule

  # conflict resolved: higher layer saves the day
  Package[git] { ensure => '2.1' }
}

How would the parser/grammar/evaluator understand which manifests are
part of what layer?

Yeah, not being able to calculate and reset values in parameters (or
class vars) is a PITA, leading to all sorts of $managed_ and $real_
variables for little gain. Having proper futures, or at least r/w
conflict detection, might fix that instead of making them immutable.

> If we make variables be part of the lazy logic you would be able to write:
>
> $a = $b + 2
> $b = 2
>
> I think this will confuse people greatly.

Hehe, I can imagine that. When accessing variables across files/classes
I do not see that as a big problem, though. Within a single file/scope
it can be forbidden, or at least warned/linted.
Are the parser and the evaluator so intertwined that this cannot be
interpreted in context? "1" is not a callable, therefore it cannot be a
function call.

> This general problem is solved by stating that for this to be a call,
> the first expression must be special; a NAME token that is followed by a
> general expression (or a list of expressions: e.g. NAME e,e,e)
>
> We cannot turn a hash into an operator since that would make it close to
> impossible to write a literal hash.
>
> Hence... for the resource expressions we need an operator that operates
> on three things, type, id, and named arguments (plus, via the operator,
> or through other means) the extra information if each value is a
> default, a value, or an override, if it is an addition or a subtraction).
>
> We can solve this by making the data structure special (the {: }),
> using an operator, or using more complex but generic data structure
> (hash with particular keys). If we use : in hashes to mean hash of hash,
> then we made it easier to encode things like defaults, values and
> overrides but we lack type and id.
>
> You could read the
> notify { hi: message => hello }
> as:
> Notify.new(hi, {message=>hello})
>
> As I see it, the main grammar problem is that there is no "new
> operator". Hence my attempt:
>
> Notify[hi] = {message => hello}
>
>
>>> Now I have typed too much already...
>>
>> Me too ;-)
>>
> Dueling ramblers?

I think "Ideating" is the proper jargon here ;-)

> To Summarize
>
> I think it will be hard to change the core expression that creates
> resource - i.e.
>
> notify { hi : ...}
>
> and then we are back at where I started;
> - we can play tricks with the titel (using a literal default there)
> - we can generalize the LHS since {: is an operator (i.e. differentiate
> between LHS being name, and a type (notify vs Notify), or being a
> resource-set, say from a query like Notify <| |>, or indeed any
> expression such as a variable reference. The main problem here is being
> able to infer the correct type (when that is not possible we end up with
> late evaluation errors if there are mistakes, and they are hard to deal
> with), so we may want to restrict the type of expression to those where
> type is easily inferred.

:-/


Regards, David

Reid Vandewiele

Jul 17, 2014, 2:53:28 PM
to puppe...@googlegroups.com
On Friday, July 11, 2014 7:50:47 PM UTC-7, henrik lindberg wrote:

Here we have another problem; variables defined in classes are very
different from those defined elsewhere - they are really
attributes/parameters of the class. All other variables follow the
imperative flow. That has always bothered me and causes leakage from
classes (all the temporary variables, those used for internal purposes
etc). This is also the source of "immutable variables", they really do
not have to be immutable (except in this case).

If we make variables be part of the lazy logic you would be able to write:

   $a = $b + 2
   $b = 2

I think this will confuse people greatly.

Slightly off-topic so I'll keep it short.

I have a huge appreciation for immutable variables in the Puppet language as I think it helps keep people centered in the mindset of declarative configuration and not procedural programming. The fact that variable values are parse-order dependent is detrimental in that it forces users to hold and visualize a more complex model in order to not get tripped up by parse-order dependencies. Resources can only be declared once and can be referred to before they've been hit by the parser. I would strongly support variables being the same. Today they are immutable and so have one foot in that door. Making them part of the lazy logic sounds like it could get them the rest of the way.

Outside of technical implementation challenges, it would be a good thing if variables were immutable and lazily evaluated in such a way as to make the example given above work.

Is there an existing thread or Jira ticket that would be a more appropriate place to discuss further?

Henrik Lindberg

unread,
Jul 17, 2014, 5:41:46 PM7/17/14
to puppe...@googlegroups.com
On 2014-17-07 14:56, David Schmitt wrote:
> On 2014-07-12 04:50, Henrik Lindberg wrote:
>> On 2014-11-07 10:55, David Schmitt wrote:
[...snip...]

>>> # old style
>>> create_resources($type, $values, $defaults)
>>> # basic resource statement
>>> $type $final
>> This is problematic, we cannot make any sequence a function call without
>> requiring that every expression is terminated with punctuation (e.g.
>> ';') - but that must then be applied everywhere.
>
>
> You are right, having that in the grammar makes no sense. I still think
> it is a neat detail to keep in mind when thinking about the underlying
> structure of what we're building.
>
yes, basically an expression that is a kind of join between a Puppet
Type[Resource] and the application of one or more sets of data.

[...snip...]

> If functions can return sets of resources that can be manipulated, the
> special query operator syntax can be abolished - or at least
> de-emphasised. See Eric's puppetdbquery for an example.
>
yes, that is the direction this is going.

>>> * 'Type<<| expr |>>' is the list of local and exported resources of
>>> type 'type' where 'expr' evaluates true. As a side-effect,
>>> it realizes all matched exported resources.[2]
>> Same comment as above, but using a different container.
>>
>>> * '{ key => value, }' is a simple hash ('hash')
>>> * '{ title: key => value, }' is a hash-of-hashes. Let's call this a
>>> untyped resource ('ur') due to its special syntax[3].
>>> * 'type ur' now syntactically matches what puppet3 has and evaluates
>>> to the set of resources ('resset') created by
>>> create_resources('type', 'ur').
>>> * '[Type1[expr1], Type2[expr2]]' is the resset containing
>>> 'Type1[expr1]' and 'Type2[expr2]'.
>> That is what you get now. (or rather you get a set of references to the
>> resource instances, not the instances themselves).
>
> Is there a distinguishable difference for the language user?
>
No, not really. The type is a reference; the operations on it look it
up or create a new one. In the current implementation one and the same
class is used for both references and the real thing (this causes great
pain and confusion in the code).
A positive Ugh, or a Ugh of agonizing pain? :-)

>>> [2] Unintentionally realized exported resources seem a much less
>>> frequent problem than the same side-effect on virtual resources causes.
>>> It might make sense to avoid [1] and instead introduce something like
>>> 'Type[|expr|]' and 'Type[[|expr|]]' to select without realizing.
>>
>> I like to go in the other direction with fewer special operators.
>
> As said above, functions returning resource sets might be the way to go
> then. It's not like we need to design the next APL ;-)
>
yes, agree - functions are good; operators are ok too when they are
unambiguous and not too exotic :-)

[...snip ...]
>> I think this may just move the problem to dueling defaults, dueling
>> values, and dueling overrides. (This problem occurs in the binder and
>> there the problem is solved by the rules (expressed in the terms we use
>> here (except the term 'layer', which I will come back to):
>> - if two defaults are in conflict, a set value wins
>> - if two values are in conflict, an override wins
>> - if two overrides are in conflict, then the one made in the highest
>> layer wins.
>> - a layer must be conflict free
>
> Don't you mean "the highest layer with a value must be conflict free" ?
>
yes, that is true, since some higher layer must resolve the issue. Not
meaningful to enforce resolution in every layer.
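To make the rules concrete, here is a sketch in today's syntax (illustrative only - the first two rules roughly match current Puppet behavior, while the layer-based resolution of conflicting overrides is the proposed part, not something Puppet does today):

```puppet
File { mode => '0644' }                   # a default
file { '/etc/app.conf': mode => '0600' }  # a set value wins over the default
File['/etc/app.conf'] { mode => '0640' }  # an override wins over the set value
```

Under the proposal, two conflicting overrides would then be resolved in favor of the one made in the highest layer (e.g. the environment over a module).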

>> Highest (most important) layer is "the environment", secondly "all
>> modules" - this means that conflicts bubble to the top, where a user
>> must resolve the conflict by making the final decision.
>>
>> The environment level can be thought of as what is expressed in
>> "site.pp", global or expressed for a "node" (if we forget for a while
>> about all the crazy things puppet allows you to do with global scope;
>> open and redefine code etc).
>
> If you mean what I think you mean, I think I like it.
>
> Another example to try to understand this:
>
> class somemodule { package { "git": ensure => installed } }
>
> class othermodule { package { "git": ensure => '2.0' } }
>
> node 'developer-workstation' {
> # force conflict on Package[git]#ensure here: installed != '2.0'
> include somemodule
> include othermodule
>
> # conflict resolved: higher layer saves the day
> Package[git] { ensure => '2.1' }
> }
>
> How would the parser/grammar/evaluator understand which manifests are
> part of what layer?
>

We will have a new catalog model (the result built by the new catalog
builder slated to replace the current compiler). There we have a richer
model for defining the information about resources. We also have a new
loader system that knows where code came from. Thus, anything loaded at
the environment level (i.e. node definitions) knows it is in a higher layer.
We will see what we end up with - this is currently not a primary
concern. One neat thing you can do with the future parser (and in 4.0)
is to evaluate code in a local block using the function 'with' and pass
it arguments; the variables in the lambda given to 'with' are all
local! Thus you can assign the final value from what that lambda returns.
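For example (a sketch, assuming the 'with' function as shipped with the future parser):

```puppet
# $tmp exists only inside the lambda; only the returned value escapes
$result = with(2, 3) |$x, $y| {
  $tmp = $x + $y
  $tmp * 10
}
# $result is 50; $tmp is not visible at this scope
```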


>> The crux here is that just having one expression being followed by
>> another - e.g:
>>
>> 1 1
>>
>> is this a call to 1 with 1 as an argument, or the production of one
>> value 1, followed by another?
>
> Is the parser and the evaluator so intertwined that that cannot be
> interpreted in context? "1" is not a callable, therefore it cannot be a
> function call.
>
They are not intertwined, except that the parser builds a model that the
evaluator evaluates. The evaluator does exactly what you say: it
evaluates the LHS, and if that is not a NAME it fails, and if the NAME
is not a reference to a Function it fails. But the evaluator cannot do
that in general; there may be a sequence of, say, if statements that all
produce a value - should the second if be an argument in an attempted
call to what the first produced?

if true { teapot }
if true { 'yes I am'}

The evaluator would have no way of knowing, except doing static analysis
(which limits the expressiveness).

Currently the parser rewrites NAME expr, or NAME expr ',' expr ... into
a function call. I do not want to add additional magic of the same kind
if it can be avoided.
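For reference, the rewrite in question (using the statement function notice as an example):

```puppet
# statement-style calls the parser currently rewrites:
notice 'hello'       # becomes notice('hello')
notice 'a', 'b'      # becomes notice('a', 'b')
# whereas '1 1' has no NAME on the left, so no such rewrite applies
```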
I am hacking on ideas to fix the problematic constructs in the grammar.
I have had some success, but it is too early to write about. Will come
back when I know more.

Henrik Lindberg

unread,
Jul 17, 2014, 8:01:27 PM7/17/14
to puppe...@googlegroups.com
On 2014-17-07 20:53, Reid Vandewiele wrote:
> On Friday, July 11, 2014 7:50:47 PM UTC-7, henrik lindberg wrote:
>
>
> Here we have another problem; variables defined in classes are very
> different from those defined elsewhere - they are really
> attributes/parameters of the class. All other variables follow the
> imperative flow. That has always bothered me and causes leakage from
> classes (all the temporary variables, those used for internal purposes
> etc). This is also the source of "immutable variables", they really do
> not have to be immutable (except in this case).
>
> If we make variables be part of the lazy logic you would be able to
> write:
>
> $a = $b + 2
> $b = 2
>
> I think this will confuse people greatly.
>
>
> Slightly off-topic so I'll keep it short.
>
> I have a huge appreciation for immutable variables in the Puppet
> language as I think it helps keep people centered in the mindset of
> declarative configuration and not procedural programming. The fact that
> variable values are parse-order dependent is detrimental in that it
> forces users to hold and visualize a more complex model in order to not
> get tripped up by parse-order dependencies. Resources can only be
> declared once

This is going to change - many want to be able to declare a resource
multiple times, with the non-conflicting results merged.
The resource does not exist until it is evaluated; you can have
references to it (or to any non-existing resource) - things must
naturally resolve at the end.
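A sketch of what such merging might look like (hypothetical - this is not current Puppet behavior; today the second declaration is a duplicate-declaration error):

```puppet
package { 'git': ensure => installed }
package { 'git': install_options => ['--no-docs'] }  # non-conflicting: would merge
package { 'git': ensure => absent }                  # conflicting: would still be an error
```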

> and can be referred to before they've been hit by the
> parser.

They are never hit by "the parser" - the parser is what translates the
source text into something that can be acted on; it does not evaluate
anything or schedule lazy evaluation. Evaluation is done by the
evaluator (in the future parser), or by an invisible evaluator that is
split up between all the AST objects in the current implementation.

I think you meant that they can be referenced independently of the
evaluation order... but that is always true - you can reference
anything, even things that do not exist (you only get an error at the end).

It is the instruction/operator that has lazy evaluation; say $a -> $b,
which evaluates $a and $b to produce resource references and then
asks the compiler to "make it so that everything in $b comes after
everything in $a". It is at the point where the lazy operator is
evaluated that it is an error to reference something that does not exist.
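For illustration, a sketch of the behavior described - the chain works even when the references come before the declarations, because the edge is only checked at the end:

```puppet
Notify['a'] -> Notify['b']   # references recorded now, resolved at the end
notify { 'a': }              # the resources may be declared afterwards
notify { 'b': }
```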

> I would strongly support variables being the same. Today they
> are immutable and so have one foot in that door. Making them part of the
> lazy logic sounds like it could get them the rest of the way.
>
Well, there are different kinds of variables; those in a class are not
really variables; they are more like the class' attributes/parameters.
For a define the variables are not attributes (only the declared
parameters are).

> Outside of technical implementation challenges, it would be a good thing
> if variables were immutable and lazily evaluated in such a way as to
> make the example given above work.
>
> Is there an existing thread or Jira ticket that would be a more
> appropriate place to discuss further?
>
There have been several tickets in the past where this has been
discussed (starting with various ideas). We already have complex
dependency orders in our evaluation, adding yet another such mechanism
on top seems like more opportunities to create endless loops and
deadlocks. The capability "lazy values" must be introduced in a safe
way. On that topic we do have some ideas, but it will take a while to
write them up into a coherent proposal.

I do not think lazy evaluation is appropriate for all kinds of variables
because we have an imperative language that constructs a lazily
evaluated catalog that in turn is declarative.

As David Schmitt pointed out, having to use multiple variables is a pita
in several use cases because imperative programming (discrete steps) has
to be used to get the task done. We would eventually end up with
something F# or Haskell like, or something resembling Prolog, since doing
this for variables is just the tip of the iceberg. It would be cool to see
a fully function-oriented puppet language variant...

John Bollinger

unread,
Jul 18, 2014, 1:33:20 PM7/18/14
to puppe...@googlegroups.com


On Friday, July 11, 2014 9:50:47 PM UTC-5, henrik lindberg wrote:
On 2014-11-07 10:55, David Schmitt wrote:
>  [...]

>    * 'Type<| expr |>' is the list of local resources of type 'type' in
>      the current compilation where 'expr' evaluates true. As a
>      side-effect, it realizes all matched virtual resources.[1]
Some sort of query operator. If we keep the <| |>, it could mean
selection of the "virtual here" container/subset, currently it is
"everything defined here".

[...]
 
 
> [1] It might be worthwhile to start requiring to always write
> 'realize(Type<| expr |>)' for this side-effect.


You are by no means the first to suggest that conflating realization of virtual resources with selecting a collection of resources was a poor idea.  With the much higher profile that collections have nowadays, I think that is becoming a more pressing issue.

 
> This looks annoying.

it could be

   Type <| expr |>.realize



Couldn't it also just be

    realize Type <| expr |>

?  I mean, at least some Puppet functions already do not require parentheses around their arguments under some circumstances (e.g. include()).  It makes more sense to me to put the function name / keyword first, but the parentheses are optional as far as I am concerned, and it reads more cleanly without them.
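The two spellings side by side (the statement form already works today, since realize is a statement function):

```puppet
@user { 'deployer': ensure => present }   # virtual: not yet in the catalog

realize(User['deployer'])        # explicit realization by reference
User <| title == 'deployer' |>   # collector: selects AND realizes as a side effect
```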

Alternatively, perhaps there can be a new selection criterion that limits collections to resources that are in the catalog (at the time the collection's contents are determined).  That would allow current collector semantics to remain the same, while still affording manifest authors the ability to collect resources without realizing still-virtual ones.  The same or similar criteria could be used to narrow results to other categories of resources.


Also, long ago I made a feature request that collections be usable as r-values.  My main idea for that was to be able to assign collections to the relational metaparameters, and that purpose is now served by chain expressions instead.  Still, I suspect there are other uses for collections as r-values, and it sounds like that may be one of the directions this is going.  If that can be made to work smoothly, then it would be pretty cool.  On the other hand, I don't have specific use cases in mind, so I wouldn't personally consider this a priority, and maybe not even a real candidate for implementation.


John

Henrik Lindberg

unread,
Jul 18, 2014, 3:48:23 PM7/18/14
to puppe...@googlegroups.com
On 2014-18-07 19:33, John Bollinger wrote:
> On Friday, July 11, 2014 9:50:47 PM UTC-5, henrik lindberg wrote:
> On 2014-11-07 10:55, David Schmitt wrote:
> > [...]
> > * 'Type<| expr |>' is the list of local resources of type
> 'type' in
> > the current compilation where 'expr' evaluates true. As a
> > side-effect, it realizes all matched virtual resources.[1]
> Some sort of query operator. If we keep the <| |>, it could mean
> selection of the "virtual here" container/subset, currently it is
> "everything defined here".
>
> [...]
>
>
> > [1] It might be worthwhile to start requiring to always write
> > 'realize(Type<| expr |>)' for this side-effect.
>
> You are by no means the first to suggest that conflating realization of
> virtual resources with selecting a collection of resources was a poor
> idea. With the much higher profile that collections have nowadays, I
> think that is becoming a more pressing issue.
>
> > This looks annoying.
>
> it could be
>
> Type <| expr |>.realize
>
>
>
> Couldn't it also just be
>
> realize Type <| expr |>
>
yes, it could, realize is a function that may be called without parentheses.

> ? I mean, at least some Puppet functions already do not require
> parentheses around their arguments under some circumstances (e.g.
> include()). It makes more sense to me to put the function name /
> keyword first, but the parentheses are optional as far as I am
> concerned, and it reads more cleanly without them.
>
> Alternatively, perhaps there can be a new selection criterion that
> limits collections to resources that are in the catalog (at the time the
> collection's contents are determined). That would allow current
> collector semantics to remain the same, while still affording manifest
> authors the ability to collect resources without realizing still-virtual
> ones. The same or similar criteria could be used to narrow results to
> other categories of resources.
>
We are thinking about a new kind of query mechanism, because changing
the semantics of the current mechanism would cause lots of breakage.

>
> Also, long ago I made a feature request that collections be usable as
> r-values. My main idea for that was to be able to assign collections to
> the relational metaparameters, and that purpose is now served by chain
> expressions instead. Still, I suspect there are still other uses for
> collections as r-values, and it sounds like that may be one of the
> directions this is going. If that can be made to work smoothly, then it
> would be pretty cool. On the other hand, I don't have specific use
> cases in mind, so I wouldn't personally consider this a priority, and
> maybe not even a real candidate for implementation.
>
These are indeed the kinds of things we want to be able to do.

Priority wise, we are first going to reimplement the collection
mechanism in such a way that we can delete the old code it depends on.
Later (after 4.0) we are going to be working on the new catalog builder.

What we want to ensure now is that we make the necessary grammar changes
to the language that makes it possible for us to gradually introduce
these new features.

David Schmitt

unread,
Jul 19, 2014, 4:51:20 AM7/19/14
to puppe...@googlegroups.com
On 2014-07-17 23:41, Henrik Lindberg wrote:
> On 2014-17-07 14:56, David Schmitt wrote:
>> On 2014-07-12 04:50, Henrik Lindberg wrote:
>>> On 2014-11-07 10:55, David Schmitt wrote:

[...snip...]

>>>> [1] It might be worthwhile to start requiring to always write
>>>> 'realize(Type<| expr |>)' for this side-effect. This looks annoying.
>>>
>>> it could be
>>>
>>> Type <| expr |>.realize
>>
>> Ugh.
>>
> A positive Ugh, or a Ugh of agonizing pain? :-)

The latter.

[...snip ...]

>>> I think this may just move the problem to dueling defaults, dueling
>>> values, and dueling overrides. (This problem occurs in the binder and
>>> there the problem is solved by the rules (expressed in the terms we use
>>> here (except the term 'layer', which I will come back to):
>>> - if two defaults are in conflict, a set value wins
>>> - if two values are in conflict, an override wins
>>> - if two overrides are in conflict, then the one made in the highest
>>> layer wins.
>>> - a layer must be conflict free
>>
>> Don't you mean "the highest layer with a value must be conflict free" ?
>>
> yes, that is true, since some higher layer must resolve the issue. Not
> meaningful to enforce resolution in every layer.

Good.
Slick. The next question then is: how does the user know? ;-) And how
fine-grained are "levels"? Each entry on the module path? Will I be able
to query the puppetmaster for a report on which sources influenced (or
did not influence) a specific value?

[snip]

>>> If we make variables be part of the lazy logic you would be able to
>>> write:
>>>
>>> $a = $b + 2
>>> $b = 2
>>>
>>> I think this will confuse people greatly.
>>
>> Hehe, I can imagine that. When accessing variables across files/classes
>> I do not see that as a big problem, though. Within a single file/scope
>> it can be forbidden, or at least warned/linted.
>>
> We will see what we end up with - this is currently not a primary
> concern. One neat thing you can do with the future parser (and in 4.0)
> is to evaluate code in a local block using the function 'with', and then
> pass it arguments, the variables in the lambda given to with are all
> local! Thus you can assign the final value from what that lambda returns.

I see, I'll have to read up on the future parser...

>>> The crux here is that just having one expression being followed by
>>> another - e.g:
>>>
>>> 1 1
>>>
>>> is this a call to 1 with 1 as an argument, or the production of one
>>> value 1, followed by another?
>>
>> Is the parser and the evaluator so intertwined that that cannot be
>> interpreted in context? "1" is not a callable, therefore it cannot be a
>> function call.
>>
> They are not intertwined except that the parser builds a model that the
> evaluator evaluates. The evaluator does exactly what you say, it
> evaluates the LHS, and if that is not a NAME it fails, and if the NAME
> is not a reference to a Function it fails. But the evaluator can not do
> that in general, there may be a sequence of say if statements, they all
> produce a value, should the second if be an argument to attempting to
> call what the first produced ?
>
> if true { teapot }
> if true { 'yes I am'}
>
> The evaluator would have no way of knowing, except doing static analysis
> (which limits the expressiveness).
>
> Currently the parser rewrites NAME expr, or NAME expr ',' expr ... into
> a function call. I do not want to add additional magic of the same kind
> if it can be avoided.

Accepted.

[snip]

> I am hacking on ideas to fix the problematic constructs in the grammar.
> I have had some success, but it is to early to write about. Will come
> back when I know more.

Good hunting!


Regards, David

David Schmitt

unread,
Jul 19, 2014, 4:54:38 AM7/19/14
to puppe...@googlegroups.com
On 2014-07-18 02:01, Henrik Lindberg wrote:
[snip]
> As David Schmitt pointed out, having to use multiple variables is a pita
> in several use cases because imperative programming (discrete steps) has
> to be taken to get the task done. We eventually would end up with
> something F# or Haskell like, or something resembling Prolog, since doing
[snip]


You say that as if it were something bad.</tongue-in-cheek>



Regards, David

Henrik Lindberg

unread,
Jul 19, 2014, 11:44:00 AM7/19/14
to puppe...@googlegroups.com
On 2014-19-07 10:51, David Schmitt wrote:
>
[...snip...]
>> We will have a new catalog model (the result built by the new catalog
>> builder slated to replace the current compiler). There we have a richer
>> model for defining the information about resources. We also have a new
>> loader system that knows where code came from. Thus, anything loaded at
>> the environment level (i.e. node definitions) knows it is in a higher
>> layer.
>
> Slick. The next question then is: How does the user know? ;-) And How
> fine-grained are "levels"? Each entry on the module path? Will I be able
> to query the puppetmaster for a report on which sources influenced (or
> did not influence) a specific value?
>

The model is borrowed from OSGi. Each "component" has visibility into
its own things, and into the things it depends on. A component may have
private things that are not visible to other components. (OSGi also has
a "friend" concept that we did not implement, and we have not (yet?)
implemented "re-publish", i.e. that one module (A) has visibility into
another (B) and makes B's public loadable entities available to those
who depend only on A. For that to make sense we need a way to alias the
things in B; otherwise something in a third module (C) cannot address
them, because it only knows about things in the a:: namespace. Either
that, or a component may be allowed to publish things in other
namespaces under certain conditions.)

For the purpose of loading, view the environment as a component that all
modules have visibility into. There is also a system "component" (i.e.
the puppet runtime), and a static/internal one (functions and things
generated; like the logging functions, data types etc.) that is visible to all.


> [snip]
>
>
> Good hunting!
>

Spear in one hand, axe in the other...

Henrik Lindberg

unread,
Jul 19, 2014, 1:24:49 PM7/19/14
to puppe...@googlegroups.com
On 2014-19-07 17:43, Henrik Lindberg wrote:
> On 2014-19-07 10:51, David Schmitt wrote:
>>
> [...snip...]
>>> We will have a new catalog model (the result built by the new catalog
>>> builder slated to replace the current compiler). There we have a richer
>>> model for defining the information about resources. We also have a new
>>> loader system that knows where code came from. Thus, anything loaded at
>>> the environment level (i.e. node definitions) knows it is in a higher
>>> layer.
>>
>> Slick. The next question then is: How does the user know? ;-) And How
>> fine-grained are "levels"? Each entry on the module path? Will I be able
>> to query the puppetmaster for a report on which sources influenced (or
>> did not influence) a specific value?
>>
>
> The model is borrowed from OSGi. Each "component" has visibility into
> its own things, and on the things it depends. A component may have
> private things that are not visible to other components. (OSGi also has
> a "friend" concept that we did not implement, and we have not (yet?)
> implemented "re-publish" (i.e. that one module (A) has visibility into
> another (B) and making its public loadable entities available to those
> who depend only on A). For that to make sense we need a way to alias the
> things in (B) or something in (C) can not address them because it only
> knows about things in the a:: namespace. (Either that or that a
> component may publish things in other name spaces under certain
> conditions).
>

I should also have mentioned that in the Puppet 3.7 future parser (and
4.0), the new loaders are only used for functions using the new Functions
API. We expect to move the rest over in a stepwise fashion.

Also worth mentioning is that modules that lack metadata, or have no
dependency section in their metadata, have visibility into the environment
and every other module.
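For reference, the dependency section in question lives in the module's metadata.json; a minimal sketch with hypothetical module names:

```json
{
  "name": "example-a",
  "version": "0.1.0",
  "dependencies": [
    { "name": "example-b", "version_requirement": ">= 1.0.0" }
  ]
}
```

A module shipping this metadata would only have visibility into example-b (plus the environment and the runtime), per the loader model described above.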