Puppet 5 release planning


Eric Sorenson

Feb 27, 2017, 6:59:59 PM
to puppe...@googlegroups.com
Hi all - we're nearing the end of the Puppet 4.x series feature development. It's been almost two years since Puppet 4.0 dropped and it seems like an opportune time to start thinking about the next semver major.

There was some discussion last year[0], but the development work is truly rolling forward now, so I wanted to restart the conversation about Puppet 5 to elicit feedback and make sure to incorporate the community's needs into the plan. 

The headline here is that the core open-source "Puppet Platform" (puppet-agent, puppet-server, puppetdb) is moving to a more coordinated release model, with compatibility guarantees and consistent versioning among the components. The first release of this "Puppet Platform 5", currently targeted at May, will bring these components' major versions together and provide some nice features without a huge backwards-incompatible break.

A couple of FAQs, or rather questions I imagine will be frequently asked:

Q: Puppet 5, what the hell eric0?! I just spent a month updating my code to run under Puppet 4. 
A: No Puppet code that works under Puppet 4 needs changing[1] to work under 5. This is a semver major to release some backwards-incompatible changes that have stacked up, plus some additional feature work, but does not affect the language. Puppet 4 won't be EOL any time soon (and we're guaranteeing commercial customer support until 2018) but we've got to keep the platform moving forward. Plus, it seems like a good opportunity to eliminate the confusion caused by "Puppet 4" being delivered in packages, split between puppet-agent-1.x and puppet-server-2.x .... 

Q: So what *is* in it? Why should I upgrade?
A: Lots of good stuff. Hiera 5 with eyaml is built-in; it's UTF-8 clean; network comms are pure, sweet, fast JSON. Our current Ruby versions are EOL'ed, so we're moving to MRI Ruby 2.4 on the agent and jruby9k on the server. The PE-only puppet-server metrics service is getting some enhancements and will be open-sourced.  
 
Q: How's it going to be delivered? Are Puppet Collections still a thing?
A: Funny you should ask. As we kicked around a couple of months ago[3], it's been two years and the collections idea just hasn't worked out in practice, so it seems wise to iterate and keep evolving. The current plan is to create a new repo, parallel with the existing PC1 repos, simply named 'puppet'. The platform components will roll into it and future semver-majors will be coordinated across the components, hopefully leading to smaller, easily digestible chunks of change.

You can see the complete list of changes (which will evolve as we gather feedback and adjust scope) at this JIRA query[2]. If there's anything on the roster that looks like it'll break your world — or, conversely, if you want to nominate a change that's important to you but isn't currently on the list — this thread is the place to do that. 

--eric0

[1] I'm reserving a tiny, tiny asterisk for some Ruby extensions that use internal APIs that may change, like pre-Puppet 4.9 lookup extensions.

Eric Sorenson - eric.s...@puppet.com 
director of product, ecosystem and platform

Trevor Vaughan

Feb 28, 2017, 9:40:26 AM
to puppe...@googlegroups.com
Hi Eric,

All of this sounds good (particularly not breaking Puppet 4 code).

Has there been any thought toward integrating Beaker into the release process as a first-class citizen? The fact that Beaker and PE don't use the same Ruby version is...highly irritating during development.

Also, while integrating hiera-eyaml (great idea), would it be possible to integrate node_encrypt (https://github.com/binford2k/binford2k-node_encrypt) into the stack? I really like the idea of keeping my protected information out of PuppetDB and other logging destinations.

Finally, in the same vein, would it be possible to have optionally enabled full catalog encryption? Basically, encrypt the catalog with the client cert, pass it over, and have it sit on disk encrypted. I understand that this will cause additional load (thus the optional nature), but it would help solve concerns about sensitive information sitting freely on disk in highly compliance-focused environments.
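
To make that concrete, here is a rough, purely illustrative sketch of the idea; this is not an existing Puppet feature, and the certname and file paths below are made up for the example. The master encrypts the compiled catalog to the agent's certificate, so only the holder of the agent's private key can recover it and the cached copy on disk stays encrypted.

    # Illustrative only -- not an existing Puppet feature. Certname and paths
    # are hypothetical examples.
    require 'openssl'
    require 'json'

    agent_cert = OpenSSL::X509::Certificate.new(
      File.read('/etc/puppetlabs/puppet/ssl/certs/agent01.pem'))
    agent_key = OpenSSL::PKey::RSA.new(
      File.read('/etc/puppetlabs/puppet/ssl/private_keys/agent01.pem'))

    catalog_json = JSON.generate('resources' => [
      { 'type' => 'File', 'title' => '/etc/secret.conf',
        'parameters' => { 'content' => 's3kr1t' } }
    ])

    # Master side: encrypt to the agent's cert (PKCS#7); the result can sit on
    # disk as-is without exposing the catalog contents.
    encrypted = OpenSSL::PKCS7.encrypt([agent_cert], catalog_json,
                                       OpenSSL::Cipher.new('aes-256-cbc')).to_pem

    # Agent side: decrypt in memory just before applying the catalog.
    plaintext = OpenSSL::PKCS7.new(encrypted).decrypt(agent_key, agent_cert)
    puts plaintext == catalog_json  # => true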

Thanks,

Trevor




--
Trevor Vaughan
Vice President, Onyx Point, Inc

-- This account not approved for unencrypted proprietary information --

Miguel Di Ciurcio Filho

Mar 3, 2017, 12:54:09 PM
to puppe...@googlegroups.com
On Mon, Feb 27, 2017 at 8:59 PM, Eric Sorenson <eric.s...@puppet.com> wrote:
> The headline here is that the core open-source "Puppet Platform"
> (puppet-agent, puppet-server, puppetdb) are moving to a more coordinated
> release model, with compatibility guarantees and consistent versioning among
> the components. The first release of this "Puppet Platform 5", currently
> targeted at May, will bring these components' major versions together and
> provide some nice features without a huge backwards-incompatible break.

Good news!

Would it be possible to also version the components that go inside the
puppet-agent package in the same way? For example: facter, hiera, and
mcollective.

Speaking of mcollective, it is well known that Puppet Inc. stopped
further development and put mcollective into maintenance mode quite
a while ago, in favor of the Orchestrator, back in 2015.

Fast forward to 2017 and there is a lot of useful functionality in
mcollective that is still not available in the Orchestrator.

It is also known that R.I. Pienaar is trying to work with Puppet Inc. to
maintain mcollective. I also consider his work on choria.io
remarkable; it has made mcollective powerful and useful once again.

At last year's PuppetConf we were told that the server side of the
Orchestrator would be merged into Puppet Server.

Looking toward "Puppet Platform 5", what does the orchestration
option look like?


> Q: How's it going to be delivered? Are Puppet Collections still a thing?
> A: Funny you should ask. As we kicked around a couple of months ago[3], it's
> been two years and the collections idea just hasn't worked out in practice,
> so it seems wise to iterate and keep evolving. The current plan is to create
> a new repo, parallel with the existing PC1 repos, simply named 'puppet'. The
> platform components will roll into it and future semver-majors will be
> coordinated across the components, hopefully leading to smaller, easily
> digestible chunks of change.

Sounds reasonable.

Would this be the time to also use /etc/puppet and /opt/puppet?

--
http://instruct.com.br
11 3230-6506
61 4042-2250

Eli Young

Mar 3, 2017, 4:13:24 PM
to Puppet Developers
Rather than calling the new package repository "puppet", it might make more sense to call it "puppet5". That way, when Puppet 6 rolls around, it can go into its own repository ("puppet6") and people can change the repository over once they've tested that their code works with the new version.

Rob Nelson

Mar 3, 2017, 4:55:17 PM
to puppe...@googlegroups.com
I second this. While PC1 didn't quite work out the way many expected, it made it impossible to accidentally pull in the whole Puppet 4 bottle when it came to updating machines still running Puppet 3.

Josh Cooper

Mar 6, 2017, 6:01:08 PM
to puppe...@googlegroups.com
On Mon, Feb 27, 2017 at 3:59 PM, Eric Sorenson <eric.s...@puppet.com> wrote:
> [...]
>
> Q: So what *is* in it? Why should I upgrade?
> A: Lots of good stuff. Hiera 5 with eyaml is built-in; it's UTF-8 clean; network comms are pure, sweet, fast JSON.

For Puppet 5, we want to make JSON the default serialization format for communication between puppet agent <-> server and server <-> puppetdb, while providing a migration path so older agents (v3/4) can continue communicating with Puppet 5 masters using PSON. This should improve performance for compile masters, provide better internationalization support, and ensure JSON interoperability.

Some background. In Puppet 3.2.2, we switched from YAML to PSON as the default serialization format for network communication due to security issues with YAML. PSON is a 7+ year old version of pure_json plus puppet patches. This results in a number of problems:

1. PSON is slow - The PSON parser and generator are implemented in pure Ruby code, and pure_json benchmarks show parsing in native code is 26.9 times faster than in Ruby, and generation in native code is 12.2 times faster than in Ruby. (A rough benchmark sketch follows this list.)

2. PSON doesn't conform to RFC7159 - Puppet added patches that diverge from the specification, e.g. see commit 3c56705a. Also, the JSON specification has evolved since RFC4627.

3. Incomplete Unicode support - pure_json 1.1.9 was released at a time when ruby barely supported string encodings (1.8.6 and 1.9.1 had just been released). Since then, the upstream pure_json library evolved and added unicode support. We backported some unicode fixes to our vendored implementation, e.g. see commit 8306c5, but I don't know that it's the complete set of changes necessary for internationalization.

4. Lossy conversions - due to our non-compliant implementation, puppetdb sometimes receives invalid UTF-8 content. Puppetdb will coerce the data using the Unicode replacement character, but it is a lossy conversion.

5. Duplicated code - Ruby 1.9.3 and up vendors pure_json with native libraries! So, somewhat ironically, puppet is using a slow, outdated, and non-compliant JSON implementation when the better replacement is already in the Ruby bundled in our AIO packages.
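
As a rough illustration of point 1 above, a micro-benchmark along these lines shows the gap; it assumes `require 'puppet'` exposes the vendored PSON constant (true on Puppet 4.x), and absolute numbers will vary with hardware and Ruby version:

    # Rough micro-benchmark sketch: vendored pure-Ruby PSON vs. stdlib JSON
    # (a C extension on MRI). Payload size and iteration count are arbitrary.
    require 'benchmark'
    require 'json'    # stdlib JSON, native extension on MRI
    require 'puppet'  # pulls in the vendored pure-Ruby PSON

    payload = JSON.generate('resources' => Array.new(5_000) { |i|
      { 'type' => 'File', 'title' => "/tmp/f#{i}", 'ensure' => 'present' }
    })

    Benchmark.bm(12) do |bm|
      bm.report('PSON.parse') { 50.times { PSON.parse(payload) } }
      bm.report('JSON.parse') { 50.times { JSON.parse(payload) } }
    end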

Proposal

1. Puppet 5 agents should accept and prefer JSON content, identified by the "application/json" content type. Agents should continue to accept PSON when talking to older masters.

2. Puppetserver 5 should accept requests with "application/json" and "pson" content types, and return responses in the appropriate format. Puppetserver needs to continue accepting PSON so that older agents (v3/4) can communicate.

3. Puppetdb terminus and puppetdb 5 should use JSON instead of PSON.

4. It should be possible to configure a Puppet 5 agent to use PSON when talking to an older puppetmaster, most likely using the existing "preferred_serialization_format" setting. This is primarily needed when sending reports, similar to what we did when switching from YAML to PSON in Redmine 21427 and PR 1869.

5. The agent currently PSON encodes facts, CGI escapes them in the body of the catalog request, and sets the content-type to application/x-www-form-urlencoded. Puppet 5 agents should inline the facts as is, set `facts_format => identity`, and generate the catalog request body as JSON with content-type application/json.

6. Modify "console" format to use JSON instead of PSON, but preserve existing pretty-print formatting behavior.

7. In a future major release (6 or later), remove PSON. Alternatively, alias PSON as JSON so that any modules relying on PSON directly don't break (a minimal sketch follows below).
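
For item 7, the alias could be as thin as delegating the old constant to the stdlib library. A hypothetical sketch, not an actual Puppet patch:

    # Hypothetical sketch of aliasing PSON to stdlib JSON: modules that call
    # PSON.parse / PSON.dump keep working without modification.
    require 'json'

    module PSON
      ParseError = JSON::ParserError

      def self.parse(text)
        JSON.parse(text)
      end

      def self.dump(object, *)
        JSON.generate(object)
      end

      def self.load(text)
        JSON.parse(text)
      end
    end

    PSON.parse('{"ensure":"present"}')  # => {"ensure"=>"present"}
    PSON.dump('mode' => '0644')         # => "{\"mode\":\"0644\"}"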

Alternatives

We considered making MessagePack the default. However, MessagePack is a binary protocol, which could be an issue for interoperability, e.g. with curl. And if MessagePack can't be used, then we have to fall back to PSON, with all of its issues. Also, MessagePack only understands bytes, not characters, so it's easier for non-compliant clients to send invalid UTF-8 data. Finally, MessagePack is known for being "space-efficient", e.g. for storing data in memcached, which isn't a problem we're trying to optimize for. Most likely JSON combined with gzip compression on the wire will provide sufficient performance.
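
As a quick illustration of the "JSON plus gzip" point (sizes are illustrative and depend on the catalog; Zlib.gzip is available from Ruby 2.4, the agent Ruby discussed in this thread):

    # Sketch: a JSON body compressed with gzip for transport. An HTTP client
    # would send this with Content-Type: application/json and
    # Content-Encoding: gzip.
    require 'json'
    require 'zlib'

    catalog = { 'resources' => Array.new(1_000) { |i| { 'type' => 'File', 'title' => "/tmp/f#{i}" } } }

    json    = JSON.generate(catalog)
    gzipped = Zlib.gzip(json)

    puts "json:        #{json.bytesize} bytes"
    puts "json + gzip: #{gzipped.bytesize} bytes"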

What We're Not Doing (Yet)

Years ago, we talked about switching everything in puppet from YAML to JSON. While it's attractive from a simplicity/consistency perspective, we don't want to break compatibility. So for now, we're going to continue using YAML for files that the agent stores locally on disk, e.g. last_run_report.

We're not removing PSON any time soon, as we'll need to support old agents talking to newer masters for "a while".

Supporting JSON won't solve the "binary data in the catalog" problem. However, there are two current options: enable MessagePack or use the Binary type recently added to the Puppet language. In the future, puppet's "rich data" feature will allow transferring binary data in the catalog.
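
A small sketch of the underlying constraint, making no claims about Puppet internals: JSON strings must be valid UTF-8, so raw binary content has to be wrapped (e.g. Base64), which is roughly what the Puppet language's Binary type provides.

    # Sketch: raw bytes that aren't valid UTF-8 can't ride in a JSON string
    # directly, so binary content gets Base64-wrapped for transport.
    require 'json'
    require 'base64'

    raw = Random.new(42).bytes(64)       # arbitrary binary content
    # JSON.generate('content' => raw)    # raises JSON::GeneratorError if the bytes aren't valid UTF-8

    encoded  = JSON.generate('content' => Base64.strict_encode64(raw))
    restored = Base64.strict_decode64(JSON.parse(encoded)['content'])
    puts restored == raw                 # => true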

 
> [...]




--
Josh Cooper
Developer, Puppet

Eric Sorenson

Mar 24, 2017, 6:09:31 PM
to puppe...@googlegroups.com
Well, that was the whole collections idea in a nutshell, but every one of those new repos would inevitably leave some people stranded on an old one. Terrifyingly, there are still something like 100,000 hosts[1] hitting the EOL'ed 3.x repos, which will never get any updates... there's no clearly great answer here but optimizing to protect people who have 'ensure => latest' against upstream repos doesn't seem like the right thing.

--eric0

[1] Big error bars on this number, but we do get ~5M hits per day for the old repository metadata across the deb and yum repos, which works out to roughly 100K hits per 30-minute interval (5M / 48)...


Eric Sorenson - er...@puppet.com 

John Bollinger

Mar 27, 2017, 9:12:41 AM
to Puppet Developers


On Friday, March 24, 2017 at 5:09:31 PM UTC-5, Eric Sorenson wrote:
> Well, that was the whole collections idea in a nutshell, but every one of those new repos would inevitably leave some people stranded on an old one. Terrifyingly, there are still something like 100,000 hosts[1] hitting the EOL'ed 3.x repos, which will never get any updates... there's no clearly great answer here but optimizing to protect people who have 'ensure => latest' against upstream repos doesn't seem like the right thing.

We've had this discussion before.  Nevertheless, I submit that the proposition is not to "optimize" for people who rely on the upstream repos, but rather to faithfully fulfill the responsibilities that many people presume you undertake by providing software repos in the first place.  In this case that also carries the benefit of avoiding torpedoing some 100K or so Puppet installations, and regardless of technical considerations, I question the wisdom of such a move on business and community relations grounds.

If you don't want to maintain repos for the EOL software versions (which is reasonable), then it would be much better to simply remove those repos than to drop incompatible package versions into them.


John

Trevor Vaughan

Mar 28, 2017, 7:31:22 AM
to puppe...@googlegroups.com
+1 for just removing old repos.

I would do something like CentOS where you have an archive or 'unsupported' space for people that simply can't upgrade for whatever reason.

yum.puppetlabs.com/unsupported for the RPM users, for example.

Trevor

Erik Dalén

Mar 28, 2017, 7:40:39 AM
to puppe...@googlegroups.com
Doing it like Debian, with named releases (squeeze, jessie, etc.) but also symbolic repositories like "stable", would be a nice option. That way users can choose whether they want a repo that only contains Puppet 4.x packages, or prefer to use package pinning and have the repo contain all packages.

I would assume the majority of users pin their package versions, so the current solution, where you need to start mirroring a new repo, switch your hosts to it, and update the pinning, is a bit of a hassle with no benefit compared to just updating the pinning, which was all that was needed before the PC1 repo.

On Tue, 28 Mar 2017 at 13:31 Trevor Vaughan <tvau...@onyxpoint.com> wrote:
> [...]

Trevor Vaughan

Mar 28, 2017, 8:36:49 AM
to puppe...@googlegroups.com
I actually don't pin my package versions (too much micro-management and fat fingering of global updates) but I also never point at Internet repos since you have no idea what magic upstream will do to you on a daily basis.

On my development systems, I do update from the OS vendors nightly but that's mainly to find out what breaks before anyone else does.

Trevor

On Tue, Mar 28, 2017 at 7:40 AM, Erik Dalén <erik.gus...@gmail.com> wrote:
> [...]

Eric Sorenson

Apr 6, 2017, 7:14:56 PM
to Puppet Developers
OK, I'm convinced. We'll make sure there's a versioned repo that the packages actually flow into. I'd like to additionally have a non-versioned symlink repo and release package so people can track the current set if they want.

John, thanks for the persistence.

--eric0


On Tuesday, March 28, 2017 at 4:40:39 AM UTC-7, Erik Dalén wrote:
> [...]

Miguel Di Ciurcio Filho

Apr 7, 2017, 9:47:09 AM
to eric.s...@puppet.com, puppe...@googlegroups.com
Hello Eric, just a heads up about my questions :-D

Eric Sorenson

Apr 7, 2017, 6:25:55 PM
to Miguel Di Ciurcio Filho, Eric Sorenson, puppe...@googlegroups.com
On Apr 7, 2017, at 6:46 AM, Miguel Di Ciurcio Filho <mig...@instruct.com.br> wrote:

> Hello Eric, just a heads up about my questions :-D
>
> On Fri, Mar 3, 2017 at 2:53 PM, Miguel Di Ciurcio Filho
> <mig...@instruct.com.br> wrote:
>> On Mon, Feb 27, 2017 at 8:59 PM, Eric Sorenson <eric.s...@puppet.com> wrote:
>>> The headline here is that the core open-source "Puppet Platform"
>>> (puppet-agent, puppet-server, puppetdb) are moving to a more coordinated
>>> release model, with compatibility guarantees and consistent versioning among
>>> the components. The first release of this "Puppet Platform 5", currently
>>> targeted at May, will bring these components' major versions together and
>>> provide some nice features without a huge backwards-incompatible break.
>>
>> Good news!
>>
>> Would it be possible to also version the components that go inside the
>> puppet-agent package in the same way? For example: facter, hiera, and
>> mcollective.

Hi Miguel - We're not planning to move facter and the stand-alone hiera package to v5.0. The new environment-aware Hiera is "5" internally, but it is part of Puppet; the previous standalone gem/tarball needs to stay at its current version to avoid any more confusion.

mcollective is on its own release scheme, so it's not moving to 5 either.




>> Speaking of mcollective, it is well known that Puppet Inc. stopped
>> further development and put mcollective into maintenance mode quite
>> a while ago, in favor of the Orchestrator, back in 2015.
>>
>> Fast forward to 2017 and there is a lot of useful functionality in
>> mcollective that is still not available in the Orchestrator.

Yes, this is unfortunately true.

>> It is also known that R.I. Pienaar is trying to work with Puppet Inc. to
>> maintain mcollective. I also consider his work on choria.io
>> remarkable; it has made mcollective powerful and useful once again.

We are including the NATS gem necessary to run choria agents in the puppet-agent 5 package, to make it easy to set up.

>> At last year's PuppetConf we were told that the server side of the
>> Orchestrator would be merged into Puppet Server.

We should be more transparent on this - it is still in the plan but has been delayed due to priority changes on the project. I'm still hopeful this will be out in the next few months but won't be in the 5.0 release.

>> Looking toward "Puppet Platform 5", what does the orchestration
>> option look like?

I wish I had a better answer, but you are correct that we are between old and new tools. We will still include mco 2.x, pxp agent and nats to make different options possible, but have not conclusively settled on a unified orchestration solution.



>>> Q: How's it going to be delivered? Are Puppet Collections still a thing?
>>> A: Funny you should ask. As we kicked around a couple of months ago[3], it's
>>> been two years and the collections idea just hasn't worked out in practice,
>>> so it seems wise to iterate and keep evolving. The current plan is to create
>>> a new repo, parallel with the existing PC1 repos, simply named 'puppet'. The
>>> platform components will roll into it and future semver-majors will be
>>> coordinated across the components, hopefully leading to smaller, easily
>>> digestible chunks of change.
>>
>> Sounds reasonable.

One comment here from the rest of the thread: the plan is now to make a new repo named "puppet5", with a "puppet" symlink and symbolic release package pointing at it. Then when there are new major versions, they will go into "puppet6" and the symlink will be updated. So you can either pick the numbered major version and stick with it until you are ready to opt in to the new major series, or use the symbolic name and stay current.

>> Would this be the time to also use /etc/puppet and /opt/puppet?

Good question. I am afraid it would break too many things, so /opt/puppetlabs will be around for a while longer.

Eric Sorenson - er...@puppet.com 

Thomas Mueller

Apr 8, 2017, 3:07:53 AM
to puppe...@googlegroups.com

>>> Would this be the time to also use /etc/puppet and /opt/puppet?
>
> Good question. I am afraid it would break too many things, so
> /opt/puppetlabs will be around for a while longer.
>
>
Please, no new default paths again.

- Thomas

Martin Alfke

Apr 8, 2017, 4:49:31 AM
to puppe...@googlegroups.com
+1

Martin

Trevor Vaughan

Apr 8, 2017, 1:09:01 PM
to puppe...@googlegroups.com
+100

