Over-engineering rant


Jakov Sosic

Jan 7, 2017, 9:00:39 PM
to puppet...@googlegroups.com
Hi guys,

This is maybe a topic better suited for the -dev list, but, well, here goes.

I've been using Puppet heavily for 3-4 years, up until version 4; now
I'm mostly maintaining my own open source modules.

What stumped me lately is the amount of change that is happening.


Every week or two I make some code changes, and since I started adding
spec tests through Travis CI, I've encountered errors about
`validate_string` being deprecated.
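
For anyone who hasn't hit it yet, the change looks roughly like this (a
minimal sketch; the class name and parameter are made up):

# Puppet 3 + stdlib style: a separate validator call inside the body.
# This validate_string() is what now triggers the deprecation errors.
class myapp_old ($version = '1.0') {
  validate_string($version)
  notice("deploying myapp ${version}")
}

# Puppet 4 style: the data type on the parameter does the same check.
class myapp_new (String $version = '1.0') {
  notice("deploying myapp ${version}")
}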

Then, looking deeper down the rabbit hole, I encountered this:

https://github.com/puppetlabs/puppetlabs-ntp/blob/master/manifests/init.pp

And I was shocked... :D WTF just happened? :D



It's becoming overwhelming to follow all these changes even for a Puppet
veteran; what about newcomers?

I've been using Ansible for a year due to requirements in my new
position, and I've been gaining an understanding of why Puppet is
seemingly losing ground to other CM tools. It's like Puppet has become a
purpose of its own, and not a tool to solve actual problems...

Don't get me wrong, I did survive the migration from SysV to SMF, SysV to
systemd, mmap to WiredTiger, and a gazillion other changes, and I'm not
some old grumpy guy :) But... coming back to a tool after 6 months of
absence and finding that I have difficulties reading and understanding
code? Doesn't really make sense...

And, on the other hand - all this complexity to manage NTP?


And again - there are features that are really lacking - for example:

1) Remote agent runs, meaning:
- local puppet compiling manifest
- ssh-ing into a box, scp-ing all the needed shit
- applying catalog over there

This would eliminate the need for a master server (or for code living on
all VMs in the case of masterless puppet) for smaller installations.

2) Something like the search function in Chef instead of exported
resources and PuppetDB (the pattern I mean is sketched just after this list)

3) Simpler and more integrated orchestration - mcollective is
over-engineered and the learning curve is an almost vertical uphill battle.

I've actually been using Ansible to orchestrate Puppet-managed
environments :) And when you hear that from someone who has been a Puppet
user for 4-5 years, that means something is wrong in this ecosystem.

4) Built-in management of cloud infrastructure resources (EC2 instances,
AMI images, etc.)
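
For reference, the exported resources pattern from point 2 looks roughly
like this (a minimal sketch; the Nagios check is illustrative):

# On every monitored node: export a check (note the @@ prefix).
# Collecting these requires a master plus PuppetDB.
@@nagios_service { "check_ssh_${::fqdn}":
  check_command       => 'check_ssh',
  host_name           => $::fqdn,
  service_description => 'SSH',
  use                 => 'generic-service',
}

# On the monitoring server: collect everything exported above.
Nagios_service <<| |>>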


Sorry if I offended anybody here, it was not my intention... I hope the
community picks up some hints from my rant... Maybe I completely missed
the ball, but since I was, and still am, a passionate Puppet user, I had
to share this with the community.

Please don't be harsh in your comments ;)


--
Jakov

Fabrice Bacchella

Jan 8, 2017, 5:04:34 AM
to puppet...@googlegroups.com

> On 8 Jan 2017, at 03:00, Jakov Sosic <jso...@gmail.com> wrote:
>
> Hi guys,
>
> This is maybe a topic better suited for the -dev list, but, well, here goes.
>
> I've been using Puppet heavily for 3-4 years, up until version 4; now I'm mostly maintaining my own open source modules.
>
> What stumped me lately is the amount of change that is happening.
>
>
> Every week or two I make some code changes, and since I started adding spec tests through Travis CI, I've encountered errors about `validate_string` being deprecated.
>
> Then, looking deeper down the rabbit hole, I encountered this:
>
> https://github.com/puppetlabs/puppetlabs-ntp/blob/master/manifests/init.pp
>
> And I was shocked... :D WTF just happened? :D
>

This one is funny too:
https://github.com/puppetlabs/puppetlabs-ntp/blob/master/templates/ntp.conf.epp

> And, on the other hand - all this complexity to manage NTP?

And that's for something that, for a given environment, never changes and has no options. So dropping in a standard file that is hand-made once in a lifetime is enough for the vast majority of people.

That's why I don't use standard or reference modules. I can do in 10 lines written in 10 minutes what they did in 100 written over many days. I don't care that they don't run on some exotic platform I have never heard of, or that they are not good for stratum 1 servers. They are tailored to my needs; that's enough for me.
They never break, never warn, have worked almost unchanged since Puppet 2.7 times, and writing them took me the same amount of time it would have taken to download, understand and check the reference ones.
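
Roughly the kind of ten-liner I mean (a sketch; the file source and the
RedHat-style service name are assumptions):

# A site-specific ntp class: one package, one hand-made config file,
# one service. No options, because this environment needs none.
class site_ntp {
  package { 'ntp': ensure => installed }
  file { '/etc/ntp.conf':
    source  => 'puppet:///modules/site_ntp/ntp.conf',
    require => Package['ntp'],
    notify  => Service['ntpd'],
  }
  service { 'ntpd': ensure => running, enable => true }
}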

>

> 2) Something like the search function in Chef instead of exported resources and PuppetDB

PuppetDB is very nice and useful, but perhaps a simple custom query language would be easier to use than the strange JSON queries.

>
> 3) Simpler and more integrated orchestration - mcollective is over-engineered and the learning curve is an almost vertical uphill battle.

I don't agree with this; it does nice things and you can use it without going deep into it. If you have very special needs, a tool like Rundeck might be very helpful.
>



R.I.Pienaar

Jan 8, 2017, 7:43:33 AM
to puppet-users
Google "puppet PQL", which is that simpler language; a search function for it ships out of the box.

Jakov Sosic

Jan 8, 2017, 8:54:10 AM
to puppet...@googlegroups.com
On 01/08/2017 11:04 AM, Fabrice Bacchella wrote:

> And that's for something that, for a given environment,
> never changes and has no options. So dropping in a standard
> file that is hand-made once in a lifetime is enough for
> the vast majority of people.

Exactly my point...

I never really understood all the blog posts about people migrating to
other tools, and didn't quite understand a bunch of the remarks and
reasons given in those posts... let alone agree with them.

But Puppet becoming such a time sink even for a veteran is kinda
depressing. I can even see myself writing such a blog post in the future :D

Sure, organizations with 10+ devops engineers can afford to allocate
0.5-1.0 FTE to Puppet alone, but smaller shops with 2-5 devops engineers
just can't afford it.

The learning curve for a newcomer is steep, but even once you're a
seasoned Puppet engineer, the amount of change happening is overwhelming.
It becomes a time sink, and keeping your code base up to date wastes a
lot of your time.

And sincerely, I don't see any obvious benefit in some of these additions
(EPP, moving from loose to strict parameter/variable types, ...).

It's something to think about: is Puppet becoming its own goal and
purpose (losing sight of what it should be - a tool that solves actual
problems)?

Fabrice Bacchella

Jan 8, 2017, 11:59:05 AM
to puppet...@googlegroups.com

> On 8 Jan 2017, at 14:54, Jakov Sosic <jso...@gmail.com> wrote:
>
> On 01/08/2017 11:04 AM, Fabrice Bacchella wrote:
>
>> And that's for something that, for a given environment,
>> never changes and has no options. So dropping in a standard
>> file that is hand-made once in a lifetime is enough for
>> the vast majority of people.
>
> Exactly my point...
>
> I never really understood all the blog posts about people migrating to other tools, and didn't quite understand a bunch of the remarks and reasons given in those posts... let alone agree with them.
>
> But Puppet becoming such a time sink even for a veteran is kinda depressing. I can even see myself writing such a blog post in the future :D
>
> Sure, organizations with 10+ devops engineers can afford to allocate 0.5-1.0 FTE to Puppet alone, but smaller shops with 2-5 devops engineers just can't afford it.
>

They can, by dropping so-called 'best practice' from Puppet and instead sticking to KISS. Like: don't template files that can just be dropped in, don't automate what is done once in a lifetime, and remember that puppet modules are data, not code. So I feel happy to put code specific to my own platform in them.

Gareth Rushgrove

Jan 8, 2017, 12:52:20 PM
to puppet...@googlegroups.com
See the reasonably new PQL syntax and the puppetdb_query function
which can be used with it.

https://docs.puppet.com/puppetdb/4.3/api/query/tutorial-pql.html
https://docs.puppet.com/puppetdb/4.3/api/query/v4/pql.html

There is some prior art to this in the very popular puppetdbquery module
from Erik Dalen as well.

https://github.com/dalen/puppet-puppetdbquery

This blog post is a good place to start:
https://puppet.com/blog/introducing-puppet-query-language-pql
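
As a taste, a query from a manifest looks roughly like this (a sketch;
the fact and variable names are illustrative, and puppetdb_query needs
the PuppetDB 4.x terminus installed on the master):

# Ask PuppetDB for the certnames of all RedHat-family nodes.
$results = puppetdb_query(
  'facts[certname] { name = "osfamily" and value = "RedHat" }'
)
$redhat_nodes = $results.map |$row| { $row['certname'] }
notice("RedHat nodes: ${redhat_nodes}")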

>
> 3) Simpler and more integrated orchestration - mcollective is
> over-engineered and the learning curve is an almost vertical uphill battle.
>
> I've actually been using Ansible to orchestrate Puppet-managed
> environments :) And when you hear that from someone who has been a Puppet
> user for 4-5 years, that means something is wrong in this ecosystem.
>
> 4) Built-in management of cloud infrastructure resources (EC2 instances,
> AMI images, etc.)
>

There is some of this in the AWS module. It doesn't cover all AWS
resources yet but it covers many of the core bits (VPC, EC2,
autoscaling, etc.)

https://github.com/puppetlabs/puppetlabs-aws

There are also modules for vSphere (although this is Puppet Enterprise
only) and Azure, with a suite of modules for GCE from Google demo'd at
PuppetConf last year.

The image_build module currently facilitates building Docker images
straight from Puppet code, but I have a sketch for extending this to
build AMIs and other cloud provider images. Lots of people do this
already using Puppet with Packer, but we can wrap some of that to
provide a higher-level interface and some embedded best practices.

https://github.com/puppetlabs/puppetlabs-image_build

>
> Sorry if I offended anybody here, it was not my intention... I hope the
> community picks up some hints from my rant... Maybe I completely missed
> the ball, but since I was, and still am, a passionate Puppet user, I had
> to share this with the community.
>
> Please don't be harsh in your comments ;)
>

Not at all. As a Puppet employee this sort of feedback is great. Puppet
is definitely used by a wide range of folks with different
backgrounds, in different contexts and in different types of
organisations.

I don't personally think one size fits all when it comes to how it's
used (as an unrelated example, J2EE was fine for some organisations and
terrible for others, and led via other routes to those that preferred
Plain Old Java Objects - POJOs). The new data types (as found in the
NTP module) are additive to the language, and you don't need to use
them. They do help in some cases, in particular when writing modules
that are mainly used by others. The data types help create a clear
user interface so the end user shouldn't need to look under the hood.
But that's not a problem everyone has.

Thanks

Gareth

>
> --
> Jakov
>



--
Gareth Rushgrove
@garethr

devopsweekly.com
morethanseven.net
garethrushgrove.com

Matthew Kennedy

Jan 8, 2017, 12:54:46 PM
to puppet...@googlegroups.com

IMHO, the changes made to the language in 4.x allow for better and more complete modeling of systems. Yes, you have more 'things' to learn - the type/lookup systems, for example - but they are relatively simple to understand. Look at your ntp example: I imagine it was the specification of types that looks so different, and it is, but you get assurances that your class's parameters receive data they can use. You don't need an army of validators. This is a good thing.

The rapid development of a system and its supporting structures is not a sign of rot; it's quite the opposite.

Keep in mind as well that Puppet is NOT a scripting language that lets you set up ntp on your nodes. It's a modeling language that lets you express the important features of your system(s). At times that can look like over-engineering, and it is if your perspective is 'I just need my ntp config pushed to my box', but I'd posit that you're missing the forest for the trees. The ntp module should model the ntp system with sufficient suppleness that it is generally applicable AND able to handle more advanced use cases. If that is not an important feature of the systems you are modeling, don't use it. File[] is always there 😉



Rob Nelson

Jan 8, 2017, 2:31:33 PM
to puppet...@googlegroups.com
There are a lot of very valid issues and concerns you bring up here. I do want to start by saying, however, that puppet 4 is more than 6 months old - about 20 months to be precise - and most of the significant language changes were introduced somewhat earlier in the future parser in puppet 3. These changes should be easier to take in for sure, but that is at least 3x more to catch up on. I hope that doesn't sound like a harsh response, but I think it's more accepted that after 1.5-2 years, most moving projects will require significant re-learning.

Re: ntp module. Puppet supports a ton of operating systems and most of us only run a handful. The module is more complex than any one person usually needs, but it also addresses everyone's needs. From a user perspective, though, it's pretty simple to just `include ntp` unless you want something nonstandard. I think most modules are written this way; certainly the best ones are. Data in modules is an approach to try and reduce the complexity of the code while retaining the support for a wide array of operating systems. As a fellow module author, I am struggling somewhat with this myself. There are some good articles about this, but I would love to see even more explanations and examples of how to convert from puppet 3 style to data in modules; everyone learns differently.
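
For illustration (the server list here is made up), the user-facing surface stays small despite the internals:

# The common case: take the module's defaults for your OS.
include ntp

# OR, instead of the include, override only what you need;
# everything else still comes from the module's data.
class { 'ntp':
  servers => ['0.pool.ntp.org', '1.pool.ntp.org'],
}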

On Travis CI, are you seeing failures related to puppet or dependencies? If it's ruby, I have many feels that I can't share publicly. But the summary is, it's a crapshoot what dependency will break today. I've started setting up Travis cron jobs for this, so nightly builds catch breakage before my once-a-month PR goes red. My best suggestion is to take a look at Vox Pupuli's modulesync configs, or the dynamically generated .travis.yml in a repo, to keep apprised of what Gemfile settings work. Links:

https://rnelson0.com/2016/12/15/scheduling-regular-travis-ci-builds-with-cron-jobs/
https://github.com/voxpupuli/modulesync_config/blob/d4e999bf434dd220614b80c108f7221eb5f3c1db/config_defaults.yml#L18-L56
https://github.com/voxpupuli/puppet-archive/blob/master/Gemfile

For remote agent runs, I think that is a very interesting idea. While it has some security implications (agent->master port 8140 vs compile->agent port 22) and fact collection would not work as-is, it could be very useful. I wonder if the `puppet device` face could be either adjusted for that or serve as a base for experimentation?

I tend to agree on difficulties with mcollective as orchestration, but I haven't found an orchestration tool that has been simple. Ansible has its issues, too. I just don't think use of another tool is horrible, though. The combination of modules like puppet_agent to upgrade puppet or other components on agents and PQL to query puppet and application orchestration to direct action on PQL results looks to be an appealing combination of puppet-only tools. I've only gotten to play with the modules portion and hope to play with PQL and AO soon to see how realistic that assessment is.

If your Travis issues are with puppet itself, can you share some details?
On Sat, Jan 7, 2017 at 9:00 PM Jakov Sosic <jso...@gmail.com> wrote:
Hi guys,


This is maybe a topic better suited for the -dev list, but, well, here goes.

Ramin K

Jan 8, 2017, 4:03:09 PM
to puppet...@googlegroups.com
To be honest I never understood some of the blogs either, but this
thread has clarified it for me. To phrase it somewhat unkindly: some
sysadmins, when faced with software engineering, want to go back to
shell scripts.

Are we seriously going to complain that we can enforce input validation
of types and structures? Are you mad? I work on a 10-year-old codebase
with 100k+ LOC of damned MANIFEST code, and I daily curse every committer
who didn't think about their data types and structures. We have a joke on
our team: "true, false, and string - my least favorite data type." And
it's everywhere in the code and hard to blindly rip out.

Just last month it took us three days to sort out a define with
svc_check and svc_checks. Now we force array validation on svc_checks,
have removed svc_check, and dropped a ton of confusing code along the
way. Is someone going to get bitten when they pass a string? Yes. Will
they figure it out in a minute or two because the validation failure
will tell them exactly what to provide? Yes.
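
A sketch of the shape of that fix (the define name and bodies are
invented):

# Before: an untyped parameter, validated (if at all) inside the body.
define monitoring::host_old ($svc_checks) {
  validate_array($svc_checks)  # stdlib validator, checked at run time
}

# After: Array[String] rejects a bare string up front, with an error
# naming the parameter and the expected type.
define monitoring::host (Array[String] $svc_checks) {
  $svc_checks.each |$check| {
    notice("registering check ${check} for ${title}")
  }
}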

I would argue that our experience is working against us when it comes
to the new code. "Everything is a string and we massage the data later"
is how most of us worked. Also, we may know exactly what we want to
manage. Now Puppet has the tools to validate input and, with in-module
data, to easily support just about any config. Sure, it looks more
complex, and it's hard to tell where data is coming from if you haven't
seen the style before, but it simplifies templates, comparisons, regexes,
booleans, and everything else we were doing. This code can be USED by
anyone, but takes slightly longer to understand.

Whether the module is overwrought is certainly a conversation worth
having, but let's separate that from the upgrade in technology.

Ramin

Dirk Heinrichs

Jan 9, 2017, 1:44:43 AM
to puppet...@googlegroups.com
On 08.01.2017 at 11:04, Fabrice Bacchella wrote:

>> And, on the other hand - all this complexity to manage NTP?
>
> And that's for something that, for a given environment, never changes and has no options. So dropping in a standard file that is hand-made once in a lifetime is enough for the vast majority of people.

And it doesn't even support Windows.


> That's why I don't use standard or reference modules. I can do in 10 lines written in 10 minutes what they did in 100 written over many days. I don't care that they don't run on some exotic platform I have never heard of, or that they are not good for stratum 1 servers. They are tailored to my needs; that's enough for me. They never break, never warn, have worked almost unchanged since Puppet 2.7 times, and writing them took me the same amount of time it would have taken to download, understand and check the reference ones.

Ack.

Bye...

    Dirk
--
Dirk Heinrichs
Senior Systems Engineer, Delivery Pipeline
OpenText™ Discovery | Recommind
Email: dirk.he...@recommind.com
Website: www.recommind.de


John Gelnaw

Jan 9, 2017, 9:56:34 AM
to Puppet Users
On Sunday, January 8, 2017 at 2:31:33 PM UTC-5, Rob Nelson wrote:
There are a lot of very valid issues and concerns you bring up here. I do want to start by saying, however, that puppet 4 is more than 6 months old - about 20 months to be precise - and most of the significant language changes were introduced somewhat earlier in the future parser in puppet 3. These changes should be easier to take in for sure, but that is at least 3x more to catch up on. I hope that doesn't sound like a harsh response, but I think it's more accepted that after 1.5-2 years, most moving projects will require significant re-learning.

I've been using "future parser" in Puppet 3 for a while-- I absolutely had to have iteration, and a few other features, so I *thought* I had been keeping up with puppet development.

I had a similar reaction to the OP when I looked at the NTP code-- "eeeeek!!!".

Although knowing that it's optional is a good thing, and knowing it's available is also good-- it is something of an overwhelming example of "wall of code".  Then again, for those who say NTP is simple-- I point and laugh in your general direction.  The fact that NTP *can* be as simple as a drift file and an NTP host doesn't mean it's always that easy, and I respect the amount of effort in making that module work.

Having said that, my ntp class is a bit simpler, and resembles the classic "package / file / service" puppet class, because that's all my site requires. 

Most of my bitterness towards puppet comes from the 3.x series, where the API was a moving target, and upgrading to the "latest" puppet 3.x package could break your world.  It's gotten significantly better, but I'm still only about halfway up the puppet 3.x --> 4.x cliff.  ;)

R.I.Pienaar

Jan 9, 2017, 10:10:08 AM
to puppet-users


----- Original Message -----
> From: "John Gelnaw" <jge...@gmail.com>
> To: "puppet-users" <puppet...@googlegroups.com>
> Sent: Monday, 9 January, 2017 15:56:34
> Subject: Re: [Puppet Users] Over-engineering rant

> On Sunday, January 8, 2017 at 2:31:33 PM UTC-5, Rob Nelson wrote:
>>
>> There are a lot of very valid issues and concerns you bring up here. I do
>> want to start by saying, however, that puppet 4 is more than 6 months old -
>> about 20 months to be precise - and most of the significant language
>> changes were introduced somewhat earlier in the future parser in puppet 3.
>> These changes should be easier to take in for sure, but that is at least 3x
>> more to catch up on. I hope that doesn't sound like a harsh response, but I
>> think it's more accepted that after 1.5-2 years, most moving projects will
>> require significant re-learning.
>>
>
> I've been using "future parser" in Puppet 3 for a while-- I absolutely had
> to have iteration, and a few other features, so I *thought* I had been
> keeping up with puppet development.
>
> I had a similar reaction to the OP when I looked at the NTP code--
> "eeeeek!!!".
>

So we're on the same page: are you just saying that in general the NTP
module has too much going on and is too huge for a "simple" piece of software?

Or are you comparing the puppet 3 version:

https://github.com/puppetlabs/puppetlabs-ntp/blob/1cdff74278d2fce0f7a12100d12913c9e0c36ce8/manifests/init.pp
and its companion file https://github.com/puppetlabs/puppetlabs-ntp/blob/1cdff74278d2fce0f7a12100d12913c9e0c36ce8/manifests/params.pp

with puppet 4 version

https://github.com/puppetlabs/puppetlabs-ntp/blob/master/manifests/init.pp
and its companion file https://github.com/puppetlabs/puppetlabs-ntp/blob/master/data/common.yaml

and saying it's gotten impossible and much worse?

The main difference is lines like:

validate_bool($disable_auth)

became

Boolean $disable_auth

and data now uses Hiera. It's a LOT LESS code in Puppet 4, with fewer
dependencies, etc.

Just want to understand the actual complaint part of this distinctly from the rant
part of this mail thread.

John Gelnaw

Jan 9, 2017, 2:14:00 PM
to Puppet Users
On Monday, January 9, 2017 at 10:10:08 AM UTC-5, R.I. Pienaar wrote:

So we're on the same page: are you just saying that in general the NTP
module has too much going on and is too huge for a "simple" piece of software?

Mostly, it was the unexpected syntax.  Somehow, I completely missed any references to data typing.  I'm not even *opposed* to it-- although there's a very lazy part of me that says it needs to remain optional.  :)

While the puppet 3 version is, quite frankly, hideous, I understand why it's that way, and it's at least formatted nicely, so it's easily read.

The puppet 4 version looks cluttered (even though it's much simpler, it APPEARS more cluttered because it's not a table any longer), and was a paradigm shift I was unprepared for.

But reading through it makes sense.

Although-- I think I'd consider (optionally) moving the params to an external file, for readability, if nothing else. 

In both the puppet 3.x and 4.x examples, you've got a whole lot of information jammed into the "first line" (that has 50+ parameters) that can overwhelm a novice user.

Whatever happened to yaml-in-modules as a concept?  I'd think using something like that for parameter definitions would be a much cleaner approach.

Maybe something like:

params.yaml:
classes:
  ntp:
    config_epp:
      type: string
      required: false


... but that may be too much caffeine talking.  ;)
 
and data now uses Hiera. It's a LOT LESS code in Puppet 4, with fewer
dependencies, etc.

Just want to understand the actual complaint part of this distinctly from the rant
part of this mail thread.

Not even sure it was a complaint-- Just a bit of culture shock as an unknown feature crept up on me.

Heck, you should have seen me trying to find out what the "@@" syntax meant (puppetdb has been somewhat unstable until recently in my environment, so I've never spent much time on it, and didn't have a need for exported resources).

Puppet, as a language, however, has been a moving target for years-- at one point in the 3.x days, I had to switch to a fixed version to keep my puppet server from becoming incompatible with my existing code-- but that also meant I couldn't easily get security updates, because puppet doesn't understand "update to latest version below version 'x'".  Things have improved considerably, but it's still a full-time job keeping up with the changes.

But if you want rants:

  * why won't my puppet agents download a new CA from the puppet master when I update it?  Why do I have to manually delete the "cached" /var/lib/puppet/ssl/certs/ca.pem file in order to get the new ca.pem file downloaded?  That's not cached, that's stored.  ;)
  * similarly, having to manually delete / renew agent certs is painful because you have to be logged in on both the agent and the master-- an auto-renew feature would be nice.


R.I.Pienaar

Jan 9, 2017, 2:39:38 PM
to puppet-users


----- Original Message -----
> From: "John Gelnaw" <jge...@gmail.com>
> To: "puppet-users" <puppet...@googlegroups.com>
> Sent: Monday, 9 January, 2017 20:14:00
> Subject: Re: [Puppet Users] Over-engineering rant

> On Monday, January 9, 2017 at 10:10:08 AM UTC-5, R.I. Pienaar wrote:
>>
>>
>> So we're on the same page: are you just saying that in general the NTP
>> module has too much going on and is too huge for a "simple" piece of
>> software?
>>
>
> Mostly, it was the unexpected syntax. Somehow, I completely missed any
> references to data typing. I'm not even *opposed* to it-- although there's
> a very lazy part of me that says it needs to remain optional. :)
>
> While the puppet 3 version is, quite frankly, hideous, I understand why
> it's that way, and it's at least formatted nicely, so it's easily read.
>
> The puppet 4 version looks cluttered (even though it's much simpler, it
> APPEARS more cluttered because it's not a table any longer), and was a
> paradigm shift I was unprepared for.
>

yes, today we have:

class foo(
String $thing,
Boolean $other_thing,
Variant[Boolean, String] $yet_another_thing
) { }

My early feedback on this was that, from a readability perspective, the
thing I care about most is the variable name, and in a class like this
the var names are obscured: you're immediately drawn to the var types
and not the names.

Indentation would have helped, so there is a column of var names all below
each other, but better would be:

class foo(
$thing String,
$other_thing Boolean,
$yet_another_thing Variant[Boolean, String]
) { }

Here I can see what the variables are and care about the rest later.
There is a ticket, and I know it was planned to support both; not sure
where that stands.

From a system perspective we can have MUCH better auto-generated docs now
that it knows what type a variable is, and user interfaces like the
console can eventually become much better because they can prompt for
things intelligently.

> But reading through it makes sense.
>
> Although-- I think I'd consider (optionally) moving the params to an
> external file, for readability, if nothing else.
>
> In both the puppet 3.x and 4.x examples, you've got a whole lot of
> information jammed into the "first line" (that has 50+ parameters) that can
> overwhelm a novice user.
>
> Whatever happened to yaml-in-modules as a concept? I'd think using
> something like that for parameter definitions would be a much cleaner
> approach.

Look at data/* in the module: you still have to define the parameters so
it knows to look them up. I think having the parameters in code is best,
but keeping their data and configuration (how to merge etc) in data is
most usable, especially as the data aspect is optional.
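
Schematically (module name and keys invented):

# manifests/init.pp declares only the typed parameters...
class mymod (
  String  $package_name,
  Boolean $service_enable,
) {
  package { $package_name: ensure => installed }
}

# ...while the defaults live in the module's data/common.yaml and are
# bound by automatic parameter lookup, e.g.:
#
#   mymod::package_name: ntp
#   mymod::service_enable: true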
Puppet 4 has been a sea change from the old ad hoc approach to a whole
standardisation and rationalisation effort. The outcome is much better:
actually standard, documented, with a proper language specification, and
you can find out how things work - a HUGE improvement. But yes, it's
practically a brand new language.

Puppet 5, which isn't that far off, will be very very smooth sailing as
the language changes are done - apparently :) Time will tell.

> But if you want rants:
>
> * why won't my puppet agents download a new CA from the puppet master
> when I update it? Why do I have to manually delete the "cached"
> /var/lib/puppet/ssl/certs/ca.pem file in order to get the new ca.pem file
> downloaded? That's not cached, that's stored. ;)

Because if I can convince your client to connect to $evil_ca, then what?
How is it to know it's a new legit CA and not a new bad CA?

> * similarly, having to manually delete / renew agent certs is painful
> because you have to be logged in on both the agent and the master-- an
> auto-renew feature would be nice.

likewise.

John Gelnaw

Jan 9, 2017, 7:17:59 PM
to Puppet Users
On Monday, January 9, 2017 at 2:39:38 PM UTC-5, R.I. Pienaar wrote:

Because if I can convince your client to connect to $evil_ca, then what?
How is it to know it's a new legit CA and not a new bad CA?

The same way it "knew" when you originally provisioned it-- It didn't.  In fact, the agent, by default, displays the *request* fingerprint-- but never the server fingerprint, and doesn't give me a chance to verify it.

So how many times have you verified you didn't talk to an evil CA when you originally connected an agent?

And the thing is, if I delete that cached file, it promptly (and as near as I can tell, blindly) downloads the ca.pem file anyway.

The entire point of a public/private key system is the ability to trust.  The agent can trust the server, the server can trust the agent.

The lack of ability to renew that trust *before* it expires is a serious failure-- Recently, my initial 5 year CA expired.  The "conventional wisdom" was to REBUILD MY ENTIRE ENTERPRISE.  If I'd had to do that, there's a good chance I'd have reevaluated my 5 year old decision to go with puppet-- not saying I wouldn't have wound up with puppet anyway, but I'd have looked much, much harder at competing products, which have made huge progress in 5 years.

Fortunately, based on a suggestion here, I was able to sign a new CA with the same private key used to create the original CA, and replace that CA before everything stopped working.  Then, using mcollective, I removed the cached ca.pem file, and let puppet download the new ca.pem.  Of course, the workstations that were off for the month of July came back and couldn't do anything, because the original CA had expired, and the only way to fix them at that point was to manually log in and clean up the mess.

If the CA is valid, and the client cert is valid, there's no reason on earth why the agent and CA shouldn't be able to renegotiate a certificate.  There's no reason why the CA shouldn't be able to tell the client "Oh, you have the old CA, here, have a new one", since the agent has, in theory, a valid copy of the original CA which it can use to validate the connection.

Otherwise, you have to delete the certificates from the master and the agent, regenerate the request from the agent, and re-sign the cert on the master-- and you can't tell me that's a more secure process than a negotiated, verified renewal / update-- not to mention a massive time waster that goes completely against the philosophy of *having* centralized management.  It's too much dev, not enough ops.  :)

I've written a script to automate renewing the agent cert-- but it's ugly, and as you point out, it opens up the possibility for someone to impersonate my existing CA.

Rob Nelson

Jan 9, 2017, 9:05:15 PM
to puppet...@googlegroups.com
I think certificate handling is a valid critique of puppet's security implementation. Running a public key infrastructure of any sort is difficult. Things like expired CAs and a lack of intermediate signing CAs do expose puppet administrators who lack those fairly rare skill sets to some difficult potential issues. I don't want to run a CA, mostly because I've had to run one before. Many people would also like to extend the expiration to more than 5 years, but don't find out about this issue until 4.5 years in. Whoops :)

It's just that the fix isn't agents automatically accepting new CAs. In the example given of bringing a new CA online, the issue isn't that the client would be missing a copy of the original CA signatures, but that there's no way to verify the new CA is related to the old CA. This constitutes a pretty high security risk with a decent probability of exploitation - and not just by external parties; it would be easy to DoS your agents during a failed migration, or when testing with Vagrant or additional VMs, by forgetting to change DNS/IPs or a dozen other simple things to miss. Any improvement here probably ends up being relatively complex to ensure risks remain low.

It would be much more reasonable to have an extremely long-lived CA and some intermediate CAs. This is supported by puppet, but I believe only with an external CA setup (https://docs.puppet.com/puppet/latest/config_ssl_external_ca.html) - again, not something most of us should probably be doing. I don't know that there's a great way to handle this for the masses, unless Puppet wants to become a CA and sign intermediates for us ;)

On Mon, Jan 9, 2017 at 7:18 PM John Gelnaw <jge...@gmail.com> wrote:
since the agent has, in theory, a valid copy of the original CA which it can use to validate the connection.
--
Rob Nelson

R.I.Pienaar

Jan 10, 2017, 12:37:14 AM
to puppet-users


----- Original Message -----
> From: "John Gelnaw" <jge...@gmail.com>
> To: "puppet-users" <puppet...@googlegroups.com>
> Sent: Tuesday, 10 January, 2017 01:17:58
> Subject: Re: [Puppet Users] Over-engineering rant

> On Monday, January 9, 2017 at 2:39:38 PM UTC-5, R.I. Pienaar wrote:
>>
>>
>> Because if I can convince your client to connect to $evil_ca, then what?
>> How is it to know it's a new legit CA and not a new bad CA?
>>
>
> The same way it "knew" when you originally provisioned it-- It didn't. In
> fact, the agent, by default, displays the *request* fingerprint-- but never
> the server fingerprint, and doesn't give me a chance to verify it.
>
> So how many times have you verified you didn't talk to an evil CA when you
> originally connected an agent?

Every time? I logged into my known CA using non-Puppet means; I know it's
the known CA because of SSH safety checks, and I sign the client I expect
to sign on this known CA using the information at hand - the client
fingerprint, which I visually confirm.

> And the thing is, if I delete that cached file, it promptly (and as near as
> I can tell, blindly) downloads the ca.pem file anyway.

But this is not enough: the new ca.pem isn't all you need, you need certs
signed by the new CA too.

Today the trust system is such that the master has to know it's handing
code out to trusted clients - hence giving you the info you need to
establish client trust. Yes, more is needed; see below.

> The entire point of a public/private key system is the ability to trust.
> The agent can trust the server, the server can trust the agent.
>
> The lack of ability to renew that trust *before* it expires is a serious
> failure-- Recently, my initial 5 year CA expired. The "conventional
> wisdom" was to REBUILD MY ENTIRE ENTERPRISE. If I'd had to do that,
> there's a good chance I'd have reevaluated my 5 year old decision to go
> with puppet-- not saying I wouldn't have wound up with puppet anyway, but
> I'd have looked much, much harder at competing products, which have made
> huge progress in 5 years.

Yes, as I said to you before, the Puppet CA system needs a lot of work.
It was designed ~10 years ago and much has changed; it's had some organic
growth, but a huge fixup is needed. I bet 10 years ago Luke did not
expect the CA to be around 10 years on and did not imagine we would be
where we are, so it's understandable how it is - but it should have been
redesigned a while ago already.

It turns out it's not news to anyone that this is needed; if you look in
Jira there is a whole group of tickets covering exactly that, and AFAIK
it's quite high priority. I am sure constructive input on those will be
appreciated.

This is why, when you contacted me off list, I previously asked you the
same question: have you filed any tickets, or are you just ranting to
make yourself feel better?

To expand on the issue with redownloading the CA and blind trust, let's
consider a situation I am often in.

My laptop laptop1.mycorp.local is Puppet managed, has a cert and
a CA. It uses DHCP because I travel a lot, and it uses
the default 'puppet' name for the master.

I go to evilcorp.local, which gives me a DHCP hostname
sucker1.evilcorp.local; my Puppet agent automagically makes a new cert
for this name and sends it off to be signed by puppet1.evilcorp.local,
which in turn auto-signs it; I cache the new ca.pem and we're off. It
runs an exec{} that rsyncs my whole ~ off to its NAS, neatly bypassing
any disk encryption I might have, and so steals all my other clients'
code and secrets I happen to have on my laptop.

Except this doesn't happen, because it doesn't redownload the CA.
Not redownloading the CA is CRITICAL. And yes, naming things is still
hard: calling it a cache is a mistake; not treating it like a cache
is not.

Today you can mitigate this even in the case where ca.pem somehow goes
missing - like when it ends up in lost+found - by setting a specific
certname that is related to your HARDWARE and not your SOFTWARE, and by
not using 'puppet' as the master name. That's still not enough, though,
since my whole SSL dir and config might go missing, and this is the
problem with Puppet and its annoying purely local-disk-based approach
to this.

Local disk issues aside, ideally I want:

# puppet agent --waitforcert 10 --ca_finger_print xx:xx:xx

It should send off the CSR and only accept a signed cert back from
that CA; it should only store that CA; it should store that fingerprint
in a lock file and ONLY EVER talk to that CA or do anything
as a result of that CA.

If it has any private key at all, it should never automatically
generate a new private key regardless of certname. Automatic
bootstrapping should only happen when the whole ssl setup is
completely missing.

The process of reissuing expiring certs or cycling your client
certs should be done ONLY under the management of the existing
trust relationship and never automatically.

* A week/month before the CA expires, the master CA makes a new one
* New connections come in; they are trusted by the not-yet-expired
CA, and this already-trusted CA instructs the agent to either
redo its certificates or redo certs and CA. The latter
would update the lock file with the new CA fingerprint in
addition to the old one
* The agent goes through the cert/CA dance, but now, as it's already
under the existing CA trust relationship, it can easily be auto
signed - made safe by CSR extended attributes that contain a
single-use shared secret the CA gave it via the SSL connection
* On CA expiry, start using the new credentials.

Now we can set much shorter cert lifetimes - I am especially interested
in short agent cert lifetimes - and the same process can be used to
move agents to a new CA, etc.

Tools should exist to reset an agent back to factory default, where it
will start a whole fresh bootstrap as on day one. There can
be no safe middle ground other than making new certs as part of existing
trust, as above, or establishing new trust from scratch. You cannot
safely just download new CAs (and anyway it's not enough, as mentioned).

This is very different from "WHY AREN'T YOU DOWNLOADING JUST ANY RANDOM
ca.pem YOU ARE GIVEN, THIS WILL FIX ALL THE PROBLEMS, WHY ARE YOU SUCH
IDIOTS?", as per your emails.

It's an ongoing trust relationship that is extended under control. The
basic design isn't new or anything; the same happens in IKE, TLS,
etc., except there it's in memory, which is very different.

Trevor Vaughan

Jan 10, 2017, 9:16:43 AM
to puppet...@googlegroups.com
Actually, from an automation point of view, this is pretty trivial.

Step 1) Create new CA (preserving old CA trust) X number of days prior to expiration
Step 2) Pass out both CA trust roots to all systems
Step 3) Start a re-signing party using the fact that you already have a bi-directional trust in place
Step 4) Let the old CA certificate gracefully expire and remove it whenever you like (nothing will trust it anyway)

That's pretty much it.

For traditional *NIX applications using a CAPath approach, this is trivial.

As long as you have a valid CA in your CAPath, you can roll over certificates with ease.

Trevor




--
Trevor Vaughan
Vice President, Onyx Point, Inc

-- This account not approved for unencrypted proprietary information --

Rob Nelson

Jan 10, 2017, 9:28:17 AM
to puppet...@googlegroups.com
I would argue that it's when you break steps 1-3 down into implementation details that it becomes confusing for many. If you've done it before, it's trivial; if it's your first time, it can be hairy.

R.I.Pienaar

Jan 10, 2017, 9:42:29 AM
to puppet-users


----- Original Message -----
> From: "Rob Nelson" <rnel...@gmail.com>
> To: "puppet-users" <puppet...@googlegroups.com>
> Sent: Tuesday, 10 January, 2017 15:28:07
> Subject: Re: [Puppet Users] Over-engineering rant

> I would argue that it's when you break steps 1-3 down into implementation
> details that it becomes confusing for many. If you've done it before, it's
> trivial; if it's your first time, it can be hairy.

This really is the same flow I highlighted and, as shown, puppet could
just do it for us.

John Gelnaw

Jan 10, 2017, 11:54:45 AM
to Puppet Users
On Tuesday, January 10, 2017 at 12:37:14 AM UTC-5, R.I. Pienaar wrote:
 
> So how many times have you verified you didn't talk to an evil CA when you
> originally connected an agent?

Every time? I logged into my known CA using non-Puppet means; I know it's
the known CA because of SSH safety checks, and I sign the client I expect
to sign on this known CA using the information at hand - the client
fingerprint, which I visually confirm.

But when you connect agent "A" to server "B", unlike SSH, there's no option of confirming the server's identity.

If I've connected to a rogue CA, I would expect it to autosign my cert request as quickly as possible, and start spamming bad catalogs at my agent.

Verifying the correct agent fingerprint when you sign the cert is not the time to be paranoid, merely a time to be cautious (in case you hand out any restricted information such as passwords in your catalogs).

> And the thing is, if I delete that cached file, it promptly (and as near as
> I can tell, blindly) downloads the ca.pem file anyway.

But this is not enough: the new ca.pem isn't all you need, you need certs
signed by the new CA too.

Now, that's where things get interesting.  I mentioned before that I generated a new CA, signed with the old private key.  All of my existing agent certs kept working.

When it comes to certificate wizardry, I'm not a master-- mid-level apprentice would be a better description-- but the certificate wizard in my office was unsurprised that I didn't need to generate new certs.
 
It turns out it's not news to anyone that this is needed; if you look in
Jira there is a whole group of tickets covering exactly that, and AFAIK
it's quite high priority. I am sure constructive input on those will be
appreciated.

This is why, when you contacted me off list, I previously asked you the
same question: have you filed any tickets, or are you just ranting to
make yourself feel better?

I didn't actually see a response to my offline comments-- I assumed they got bit bucketed, so I came back here.

Historically, I contributed to one issue a long time ago, and I haven't filed new issues because, frankly, there was nothing in the discussion I felt I could contribute, other than "current system bad, pls fix!".

As for ranting, I have had two major complaints about puppet, and I expressed both here-- hopefully in a civil fashion.  Neither has been a show stopper, but both have been a source of frustration.
 
To expand on the issue with redownloading the CA and blind trust, let's
consider a situation I am often in.

My laptop laptop1.mycorp.local is Puppet managed, has a cert and
a CA. It uses DHCP because I travel a lot, and it uses
the default 'puppet' name for the master.

Now, I don't want to be misunderstood here, so I'll speak plainly:

Non-FQDNs are the work of Satan.  If you can't be bothered to specify the fully-qualified domain name for your One True puppet master, that's your fault.
 
I go to evilcorp.local, which gives me a DHCP hostname
sucker1.evilcorp.local; my Puppet agent automagically makes a new cert
for this name and sends it off to be signed by puppet1.evilcorp.local,
which in turn auto-signs it; I cache the new ca.pem and we're off. It
runs an exec{} that rsyncs my whole ~ off to its NAS, neatly bypassing
any disk encryption I might have, and so steals all my other clients'
code and secrets I happen to have on my laptop.

Except this doesn't happen, because it doesn't redownload the CA.
Not redownloading the CA is CRITICAL. And yes, naming things is still
hard: calling it a cache is a mistake; not treating it like a cache
is not.

What you've described is your laptop blindly trusting the DHCP server and the local DNS server, and not properly using DNS (let alone DNSSEC) to verify it's talking to the same puppet master it registered with.
 
The whole point of SSL is that I have a certificate that proves to the server I am who I claim to be.  The server ALSO has a certificate that proves it's who it claims to be.

If your only safeguard against getting hijacked by evil puppet masters is to not renegotiate a soon-to-be-expired CA, then puppet has a flawed security model.

This is very different from "WHY AREN'T YOU DOWNLOADING JUST ANY RANDOM
ca.pem YOU ARE GIVEN, THIS WILL FIX ALL THE PROBLEMS, WHY ARE YOU SUCH
IDIOTS?", as per your emails.

If you interpreted my emails (posts) that way, I'm very sorry-- My original complaint was that there was no functionality to update a valid-but-about-to-expire CA with a new one, without manually deleting the existing CA, and blindly trusting the new CA.

The fact that I was able to generate a valid new CA that was still recognized by my existing agent certificates apparently escaped you.

At no time did I intend to suggest that any operation (including renewing the agent cert) should be carried out in a state of non-trust.

And anyone who has their puppet server name on their laptop set to "puppet" is not allowed to yell about security.  EVER.

R.I.Pienaar

Jan 10, 2017, 12:57:23 PM
to puppet...@googlegroups.com

> And anyone who has their puppet server name on their laptop set to "puppet" is not allowed to yell about security. EVER.

The scenario I showed is how puppet works by default, by design. You can be sure that most people deploy it that way. In most cases they certainly cannot make informed decisions about this aspect of puppet, which is why the current behaviour needs a huge overhaul. For which there are tickets - though real-world input will be useful, no doubt.

Personally I use puppet apply on portable machines because the current situation is just impossible for that kind of machine.

Trevor Vaughan

Jan 10, 2017, 1:55:16 PM
to puppet...@googlegroups.com
"puppet can just do it for us"

This * 1000





--
Trevor Vaughan
Vice President, Onyx Point, Inc

Eric Sorenson

Jan 10, 2017, 3:19:45 PM
to Puppet Users

On Monday, January 9, 2017 at 6:56:34 AM UTC-8, John Gelnaw wrote:
On Sunday, January 8, 2017 at 2:31:33 PM UTC-5, Rob Nelson wrote:
There are a lot of very valid issues and concerns you bring up here. I do want to start by saying, however, that puppet 4 is more than 6 months old - about 20 months to be precise - and most of the significant language changes were introduced somewhat earlier in the future parser in puppet 3. These changes should be easier to take in for sure, but that is at least 3x more to catch up on. I hope that doesn't sound like a harsh response, but I think it's more accepted that after 1.5-2 years, most moving projects will require significant re-learning.

I've been using "future parser" in Puppet 3 for a while-- I absolutely had to have iteration, and a few other features, so I *thought* I had been keeping up with puppet development.

I had a similar reaction to the OP when I looked at the NTP code-- "eeeeek!!!".

Although knowing that it's optional is a good thing, and knowing it's available is also good-- it is something of an overwhelming example of "wall of code".  Then again, for those who say NTP is simple-- I point and laugh in your general direction.  The fact that NTP *can* be as simple as a drift file and an NTP host doesn't mean it's always that easy, and I respect the amount of effort in making that module work.

Having said that, my ntp class is a bit simpler, and resembles the classic "package / file / service" puppet class, because that's all my site requires. 

I'd like to point out that this ntp module is also deliberately a test case for *all* of the puppet 4 language features, and as such is kind of a "reference module", so it certainly could be simpler but is intended to both do something useful and provide a working example of things like EPP and the type system. Helen Campbell wrote up a walk-through of the features that she and David Schmitt implemented in it here:  https://puppet.com/blog/ntp-puppet-4-language-update
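
For anyone who hasn't met EPP yet, the Puppet-side call looks roughly like this (a sketch; module and parameter names are invented). Unlike ERB, an EPP template receives an explicit parameter hash rather than inheriting scope:

# Render templates/ntp.conf.epp from a hypothetical mymod module;
# inside the template the values are available as $servers etc.
file { '/etc/ntp.conf':
  content => epp('mymod/ntp.conf.epp', {
    'servers' => ['0.pool.ntp.org', '1.pool.ntp.org'],
  }),
}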


Most of my bitterness towards puppet comes from the 3.x series, where the API was a moving target, and upgrading to the "latest" puppet 3.x package could break your world.  It's gotten significantly better, but I'm still only about halfway up the puppet 3.x --> 4.x cliff.  ;)

Can you give me an example of backwards-incompatible API changes in the 3.x series? I'm not being snarky; we had long debates (way too long, in some cases) about semantic versioning and did extra work to not introduce breaking changes into the 3.x. The goal was rebuilding trust that new versions behave like you'd expect given the version number, so I'm dismayed to hear that those efforts failed and things broke for you anyway :(

--eric0

John Gelnaw

Jan 10, 2017, 8:50:56 PM
to Puppet Users
On Tuesday, January 10, 2017 at 3:19:45 PM UTC-5, Eric Sorenson wrote:

I'd like to point out that this ntp module is also deliberately a test case for *all* of the puppet 4 language features, and as such is kind of a "reference module", so it certainly could be simpler but is intended to both do something useful and provide a working example of things like EPP and the type system. Helen Campbell wrote up a walk-through of the features that she and David Schmitt implemented in it here:  https://puppet.com/blog/ntp-puppet-4-language-update

Understood.
 
Most of my bitterness towards puppet comes from the 3.x series, where the API was a moving target, and upgrading to the "latest" puppet 3.x package could break your world.  It's gotten significantly better, but I'm still only about halfway up the puppet 3.x --> 4.x cliff.  ;)

Can you give me an example of backwards-incompatible API changes in the 3.x series? I'm not being snarky; we had long debates (way too long, in some cases) about semantic versioning and did extra work to not introduce breaking changes into the 3.x. The goal was rebuilding trust that new versions behave like you'd expect given the version number, so I'm dismayed to hear that those efforts failed and things broke for you anyway :(

Unfortunately, I don't remember specifics-- looking in my git log, I had to freeze the version at 3.4.3.  Some feature (not mentioned in my git log) went from "deprecated" to "fail" in the jump from 3.4.3 to 3.5, and upgrading to 3.5.x or later caused my puppet master to stop working.

I'm thinking it might have actually been a change in the future parser, which I was (and still am) using fairly heavily in my AD-based user management.  Is it possible that the future parser stopped accepting hyphens around then?

Henrik Lindberg

Jan 11, 2017, 12:25:22 PM
to puppet...@googlegroups.com
:-)
As you were probably aware, the future parser was marked as experimental
and it was explicitly pointed out that it could break API even in
minor releases. If we had not taken that approach, you would probably
still be waiting for iteration...

You were courageous to start using the future parser so early. It
stabilized and had most kinks worked out around 3.7, and around that time
it also became a supported option.

A big thank you for being an early user and being willing to expose
yourself to breakages in an experimental version. Without test pilots we
would not have been able to reach the high quality we now enjoy.
Were you unaware that your choice was to take the rocky road?

At the point in time when the future parser work started, there were
several hundred bugs reported against features of the language that were
strange and to a large degree unsolvable per ticket. There was also a
steady stream of incoming tickets. The situation today is very different
- there are hardly any language-related issues; when we get one it is
usually about improving error messages, the ability to detect and report
corner cases that otherwise result in hard-to-understand subsequent
errors, and the like.

There is still a substantial cleanup and rewrite to be done in the
Compiler (and related language constructs, such as resource defaults,
the inability to declare an identical resource more than once, the
inflexibility and lack of semantic power in the space ship operators, etc.)

We also have internal APIs that needs to evolve, and here we must move
slowly as changes cause breakage and it takes a long time before the
majority of the puppet ecosystem has migrated/upgraded.

Specific to your question about future parser 3.4 -> 3.5:
Future Parser in Puppet 3.5.0 was a big change as it was the first
version of the reimplemented evaluator. Up to that point, the future
parser transformed everything to the old AST format before evaluation.

- henrik



--

Visit my Blog "Puppet on the Edge"
http://puppet-on-the-edge.blogspot.se/