Tiering platforms and providers in Puppet's core


Andy Parker

Jan 8, 2014, 4:09:35 PM
to puppe...@googlegroups.com
During today's PR triage we spent a long time going over PRs 2227, 2226, 2225, 2034, and 2130. These are all related, and together they combine two different issues. One is that the package type needs some way of passing arbitrary, unmodelled parameters through to the provider (so that you can have modifications specific to one provider that have no meaning for other providers). The other is updating the FreeBSD package provider to support a new packaging system that apparently came out in FreeBSD 10. The first change we want to get into the core. The second change pushes to the forefront a problem that has been plaguing us for a long time: there are a lot of systems that Puppet can run on and has providers for, but that we can't maintain inside the core very well (we don't have the expertise, time, or, sometimes, desire), while there are other things that we care about quite a lot and can try to take an active role in maintaining.
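For concreteness, the kind of pass-through the first change is after might look something like this in a manifest (the parameter name and flag below are invented for illustration, not what the PRs actually use):

```puppet
# Hypothetical: a catch-all parameter whose value is handed straight to
# the chosen provider, with no meaning for any other provider.
package { 'zsh':
  ensure           => installed,
  provider         => pkgng,
  provider_options => ['--no-repo-update'],  # invented name and flag
}
```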

So here is the proposal: let's split things into two tiers:

  * Tier 1: things that the core team will actively keep working well, maintain compatibility for between Puppet releases, and review changes to with a critical eye
  * Tier 2: things that are shipped as part of the standard Puppet system, but are more of a "contrib" status. These won't be required to abide by the same versioning semantics, and each will have a defined maintainer (or be clearly marked unmaintained). The exact details of how contrib is structured and shipped still need to be worked out, however.

So why have a Tier 2 and not just shove everything into modules? One reason would be to keep things that are "core", but not maintained by the core developers, visible and close at hand. It would give us a little more visibility into what is happening without our being a bottleneck. Another would be for maintainers who don't want to deal with releasing changes, thinking about version numbers, etc.; their code would just be rolled into the next Puppet release and go along for the ride.

So the next questions to answer on this are:

  * What would be tier 1 and tier 2?
  * How should the separation be made evident?
  * How should contrib be structured?
  * What is the process for gaining or losing a maintainer?
  * Who should be the initial maintainers? I think we already have some people who might want to step up for some things (ptomulik for FreeBSD perhaps?)

I know that this has been talked about for a long time and that we already have a lot of projects in flight (and have dropped the ball on so many), but if we get some consensus on this, I think we can make some good progress toward getting this all in place.

--
Andrew Parker
Freenode: zaphod42
Twitter: @aparker42
Software Developer

Join us at PuppetConf 2014, September 23-24 in San Francisco

Daniel Pittman

Jan 8, 2014, 6:20:30 PM
to puppe...@googlegroups.com
I think the best thing would be that tier 1 types and providers are
part of the Puppet core, and tier 2 ones are modules that are bundled
into the core and shipped with it.

> * How should the separation be made evident?

By putting tier 2 code into modules -- and, optimally, making it
sensible to upgrade them separately -- the division is very clear.
Core code is core, and non-core code is in modules.

Users who want it to "just work" get a set of modules vetted, tested,
and shipped that work out of the box. Transparently, no less, because
we do support types and providers in modules fairly well.

It makes it clear looking at the code, too: core things are in core,
and non-core things get pulled in from a different source when the
product ships.

> * How should contrib be structured?
> * Process for gaining or losing a maintainer?

Like any module: put it on the forge, have someone own it. Those
people could very well be...

> * Who should be the initial maintainers? I think we already have some
> people who might want to step up for some things (ptomulik for FreeBSD
> perhaps?)

...the Puppet Labs paid staff, who work on "Puppet", and now also work
on things that Puppet ships. If you take the approach of ensuring
that your team can ship new versions of those modules, you resolve the
problem of control.

It also means that, eg, ptomulik can ship improvements for the FreeBSD
tooling ahead of Puppet, and the newer versions will get rolled back
into core when they are tested, vetted, and ready to be used without
effort by folks who don't want the complexity of learning our module
ecosystem -- they just want to get things done.

> I know that this has been talked about for a long time and that we already
> have a lot of projects in flight (and have dropped the ball on so many), but
> if we get some consensus on this, I think we can make some good progress
> toward getting this all in place.

I suspect your first reaction to this will be "no", or perhaps even
"hell, no!"; overall, I think that shipping modules with the core is
actually a good step forward. Many languages -- Ruby, Perl, Python,
Java, Clojure -- have found this an effective way to manage their core
and library separation.

I think there is substantial evidence that this is a good, supportable
and effective approach to solving exactly this problem, as well as to
reducing the coupling between "core" and "non-core" modules, and their
release.

--
Daniel Pittman
⎋ Puppet Labs Developer – http://puppetlabs.com
♲ Made with 100 percent post-consumer electrons

Andy Parker

Jan 9, 2014, 2:30:37 PM
to puppe...@googlegroups.com
I agree that more of what is in core needs to be moved out into modules. I think having a tier 2 inside the main repo would provide the path to do that. We can first move some parts into the tier 2 area, and then over time they might move entirely to separate modules.

But the question here was actually more of "What actual parts of the system should be classified as tier 1 and what parts are tier 2?"
 
>   * How should the separation be made evident?

> By putting tier 2 code into modules -- and, optimally, making it
> sensible to upgrade them separately -- the division is very clear.
> Core code is core, and non-core code is in modules.
>
> Users who want it to "just work" get a set of modules vetted, tested,
> and shipped that work out of the box.  Transparently, no less, because
> we do support types and providers in modules fairly well.


"vetted" and "tested" are the key things here. These parts of the system aren't really vetted, nor are they tested. We don't check that the FreeBSD package providers work, for instance.
I think that aiming for that is a noble goal, but it's not where we are right now. There was one experiment with pulling some things into a module (nagios, I believe) but that backfired. Wouldn't putting a clear delineation in place inside the existing codebase be a good first step in that direction?
 
> I think there is substantial evidence that this is a good, supportable
> and effective approach to solving exactly this problem, as well as to
> reducing the coupling between "core" and "non-core" modules, and their
> release.


--
You received this message because you are subscribed to the Google Groups "Puppet Developers" group.
To unsubscribe from this group and stop receiving emails from it, send an email to puppet-dev+...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/puppet-dev/CAFAW1H%3DHyzVnUaLeBu8ZHbMEKtRq2bW3b_avZzGGT0BLoSOd6Q%40mail.gmail.com.
For more options, visit https://groups.google.com/groups/opt_out.

Dustin J. Mitchell

Jan 10, 2014, 9:17:08 AM
to puppe...@googlegroups.com
For what it's worth, Python at least has struggled with modules being
in and out of the Python distribution. Riding Python's trains means
stringent compatibility constraints, long support durations (many
years), and a long commit-to-ship delay. Puppet certainly moves
faster than Python, so maybe that's not so important here.

Another lesson from Python is that, in fact, everything is a module.
There are almost no "core Python" things aside from the language
itself and some builtins.

And a final lesson from Python: if it's one of the batteries that's
included, then it follows Python's shipping guidelines as far as
testing/vetting, compatibility, code style, and so on. If a module
can't make the "Tier 1" cut, it's not shipped with Python.

As for the question you want us all to answer, I think that the
delineation should be such that it is easy for a user to tell when
they cross a line, and should be based on PL's ability to adequately
test things as well as commitment to support in the future.

I think that basically boils down to platforms, which in technical
terms will mostly mean Tier-2 providers for Tier-1 types like service,
package, file, and so on. As far as commitment to support, I think
that product-specific support like the nagios_* types should be in
Tier 2 only as a way of saying that someday Puppet may ship without
them (although presumably they'd be easy to spin off into a forge
module at that time). Of course I don't know what PL's plans are, but
that's the idea.

Dustin

Erik Dalén

Jan 10, 2014, 9:37:25 AM
to Puppet Developers
I've found that when you put stuff that uses some shared code, like puppetdbquery, into modules, you either have to do a pluginsync on the master first, or do some hacks with the Ruby load path to load the code directly out of the modulepath. That could be a bit annoying for stuff like naginator or this BSD stuff.

I think there is some bug report for it but can't find it now.






--
Erik Dalén

Ken Barber

Jan 10, 2014, 9:51:55 AM
to puppe...@googlegroups.com
> I've found that when putting stuff in modules that use some shared code like
> puppetdbquery, you either have to do a pluginsync on the master first, or do
> some hacks with the ruby load path to load the code directly out of the
> modulepath. That could be a bit annoying for stuff like naginator or this
> BSD stuff.
>
> I think there is some bug report for it but can't find it now.

This one?

http://projects.puppetlabs.com/issues/4248

ken.

Erik Dalén

Jan 10, 2014, 9:54:38 AM
to Puppet Developers
Hmm, I think that one is fixed. When you use pluginsync the agent can load it. But the master process can't load them unless there is an agent on the master syncing them to the libdir (which also means all environments on the master use the same code, unless you do that hack mentioned in the bug report).






--
Erik Dalén

Andy Parker

Jan 10, 2014, 12:45:10 PM
to puppe...@googlegroups.com
On Fri, Jan 10, 2014 at 6:17 AM, Dustin J. Mitchell <dus...@v.igoro.us> wrote:
> For what it's worth, Python at least has struggled with modules being
> in and out of the Python distribution.  Riding Python's trains means
> stringent compatibility constraints, long support durations (many
> years), and a long commit-to-ship delay.  Puppet certainly moves
> faster than Python, so maybe that's not so important here.
>
> Another lesson from Python is that, in fact, everything is a module.
> There are almost no "core Python" things aside from the language
> itself and some builtins.


Perl has a similar approach. The difference in release frequencies, however, causes some Perl core modules to be "dual-life" (http://search.cpan.org/dist/perl-5.16.3/pod/perlsource.pod#Core_modules). This works out to modules that are released both in the core and on CPAN, with some being actively developed in the core and others not.
 
> And a final lesson from Python: if it's one of the batteries that's
> included, then it follows Python's shipping guidelines as far as
> testing/vetting, compatibility, code style, and so on.  If a module
> can't make the "Tier 1" cut, it's not shipped with Python.
>
> As for the question you want us all to answer, I think that the
> delineation should be such that it is easy for a user to tell when
> they cross a line, and should be based on PL's ability to adequately
> test things as well as commitment to support in the future.


If you take a look at the tests that we run (https://jenkins.puppetlabs.com/view/Puppet%20FOSS/view/Master/) you can see that we test on several flavors of Linux, one version of Solaris, and many different Windows versions. PE covers more platforms than the FOSS tests cover, but it would be completely reasonable for them to get support for those extra platforms by using modules.

As far as future concerns, I don't think the PL FOSS maintenance of platforms will substantially increase in the future. I think where we are right now is about where we'll be for a while. We'd keep the modules to support some of the platforms open and on the forge, of course.
 
> I think that basically boils down to platforms, which in technical
> terms will mostly mean Tier-2 providers for Tier-1 types like service,
> package, file, and so on.  As far as commitment to support, I think
> that product-specific support like the nagios_* types should be in
> Tier 2 only as a way of saying that someday Puppet may ship without
> them (although presumably they'd be easy to spin off into a forge
> module at that time).  Of course I don't know what PL's plans are, but
> that's the idea.


Ok, let me take a stab at this:
 
Tier-1 types: user, service, file, group, package, host, cron, exec, stage, tidy
Tier-1 providers: dpkg, apt, gem, msi, rpm, windows, yum, useradd, windows_adsi, groupadd, crontab

Everything else is tier 2.
 



Andy Parker

Jan 10, 2014, 12:47:05 PM
to puppe...@googlegroups.com
On Fri, Jan 10, 2014 at 6:37 AM, Erik Dalén <erik.gus...@gmail.com> wrote:
> I've found that when putting stuff in modules that use some shared code like puppetdbquery, you either have to do a pluginsync on the master first, or do some hacks with the ruby load path to load the code directly out of the modulepath. That could be a bit annoying for stuff like naginator or this BSD stuff.
>
> I think there is some bug report for it but can't find it now.


The problem comes down to managing the LOAD_PATH while Puppet is running and loading code. Moving Puppet to use modules, even internally, would bring this problem more to the fore and maybe push us to find a solution.
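The workaround Erik describes could be sketched roughly like this (a hedged sketch, assuming each module keeps shared Ruby code under a `<module>/lib` directory; the function name is invented):

```ruby
# Push each module's lib/ directory onto Ruby's $LOAD_PATH, so that
# plain `require` calls resolve shared code straight out of the
# modulepath instead of waiting for pluginsync.
def add_module_libs_to_load_path(modulepath)
  modulepath.each do |dir|
    Dir.glob(File.join(dir, '*', 'lib')).each do |lib|
      # Prepend so module code wins, but avoid duplicate entries.
      $LOAD_PATH.unshift(lib) unless $LOAD_PATH.include?(lib)
    end
  end
end
```

The awkward part, as noted above, is that this is global per-process state, which is exactly what makes per-environment code on the master hard.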
 


Dustin J. Mitchell

Jan 10, 2014, 12:55:24 PM
to puppe...@googlegroups.com
On Fri, Jan 10, 2014 at 12:45 PM, Andy Parker <an...@puppetlabs.com> wrote:
> Tier-1 types: user, service, file, group, package, host, cron, exec, stage,
> tidy
> Tier-1 providers: dpkg, apt, gem, msi, rpm, windows, yum, useradd,
> windows_adsi, groupadd, crontab
>
> everything else is tier-2.

I'd like to see augeas be Tier-1. I think of that as a more powerful
version of file, really. But if that's not how it's maintained, c'est
la vie. In other words, if I'm asking for PL to take on more
maintenance burden than currently, then my request is out of scope for
this conversation.

Also, it seems like 'notify' is simple enough to include, and
'resources' and 'schedule' are pretty core to the language.

Other than that, I agree.

Dustin

Ashley Penney

Jan 10, 2014, 1:17:43 PM
to puppe...@googlegroups.com
I'm not sure I'd stick 'mount' in tier2, but that's the only other thing I can think of.  I'm late to the party but I want to throw my support behind Daniel's plan to move all tier2 stuff into modules, right from the start, and ship known tested versions with Puppet.

As part of the module team I think we'd have an easier time pitching in to help with the maintenance of these (such as helping with PRs and testing) if they were in modules like the rest of our stuff.  We're building out internal infrastructure to acceptance test our "supported modules" on our PE platforms and it wouldn't be much of a stretch to eventually bring these into that concept and have us specifically work on improving the test situation around these.

Thanks, 

Eric Sorenson

Jan 10, 2014, 2:24:01 PM
to puppe...@googlegroups.com
Erik Dalén wrote:
> hmm, I think that is fixed. When you use pluginsync it can load it. But
> the master process can't load them unless there is a agent on the master
> syncing it to the libdir (which also means all environments on the
> master use the same code unless you do that hack mentioned in the bug
> report).

I think you're talking about:

http://projects.puppetlabs.com/issues/18461

which I just migrated over to JIRA:

https://tickets.puppetlabs.com/browse/PUP-1416

--
Eric Sorenson - eric.s...@puppetlabs.com - freenode #puppet: eric0
puppet platform // coffee // techno // bicycles

Kylo Ginsberg

Jan 12, 2014, 5:38:06 PM
to puppe...@googlegroups.com
On Fri, Jan 10, 2014 at 10:17 AM, Ashley Penney <ape...@gmail.com> wrote:
> I'm late to the party but I want to throw my support behind Daniel's plan to move all tier2 stuff into modules, right from the start, and ship known tested versions with Puppet.

Even later to the party, but I agree :) The alternative of a contrib directory could muddy the waters so that there were 3 locations a given type/provider could land (core/contrib/module), when the current 2 locations (core/module) suffice. Easy to imagine extra bike-shedding on where something lands and/or the contrib directory becoming a failed experiment wasteland.

However, one question I have about shipping modules with puppet as discussed in this thread: are people thinking this means modules pre-installed in /usr/share/puppet/modules, or that the packaging step would merge/patch the tier2 modules into puppet proper?

If the former, is that overly disruptive to sites that specify modulepath? If the latter, does that complicate sites that want to upgrade one of the packaged-in modules using pmt? I haven't thought this through, so there may be a perfectly simple answer.

Kylo

--
Kylo Ginsberg

Join us at PuppetConf 2014, September 23-24 in San Francisco - http://bit.ly/pupconf14
Register now and save 40%! Offer expires January 31st.

Andy Parker

Jan 13, 2014, 7:20:24 PM
to puppe...@googlegroups.com
On Sun, Jan 12, 2014 at 2:38 PM, Kylo Ginsberg <ky...@puppetlabs.com> wrote:
> On Fri, Jan 10, 2014 at 10:17 AM, Ashley Penney <ape...@gmail.com> wrote:

>> I'm late to the party but I want to throw my support behind Daniel's plan to move all tier2 stuff into modules, right from the start, and ship known tested versions with Puppet.


I take this to mean that we shouldn't bother trying to start with a middle ground of having things live inside the puppet codebase as modules. I fear that skipping that step would cause a lot of churn and/or pain before we get the modules ironed out.
 
> Even later to the party, but I agree :) The alternative of a contrib directory could muddy the waters so that there were 3 locations a given type/provider could land (core/contrib/module), when the current 2 locations (core/module) suffice. Easy to imagine extra bike-shedding on where something lands and/or the contrib directory becoming a failed experiment wasteland.


Ok, so the initial idea of keeping a "contrib" inside the puppet codebase for some things under active development seems to be a losing one. What about the trimmed down idea of having it be a staging ground for pulling things out (in which case "contrib" is a terrible name for it)?
 
> However, one question I have about shipping modules with puppet as discussed in this thread: are people thinking this means modules pre-installed in /usr/share/puppet/modules, or that the packaging step would merge/patch the tier2 modules into puppet proper?


I'm interested in this as well.
 
> If the former, is that overly disruptive to sites that specify modulepath? If the latter, does that complicate sites that want to upgrade one of the packaged-in modules using pmt? I haven't thought this through, so there may be a perfectly simple answer.





--
Andrew Parker
Freenode: zaphod42
Twitter: @aparker42
Software Developer

Dustin J. Mitchell

Jan 13, 2014, 8:13:20 PM
to puppe...@googlegroups.com
How about something as simple as a top-level "modules" directory in
the puppet source, which is installed separately and dynamically
appended to the modulepath at runtime? That avoids any problems for
users who set modulepath, allows modules in users' modulepaths to
override the built-in modules, and doesn't use the inaccurate name
"contrib". It also makes it easy to move modules in and out of core.

Probably a formal way to look at this is to define the
`effective_modulepath` as the concatenation of `modulepath` and
`system_modulepath`, where the former is based on user configuration
and the latter determined when Puppet is installed. Then just replace
uses of `modulepath` with `effective_modulepath` in the loader.
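The formulation above is small enough to sketch directly (a minimal sketch; the function and variable names are Dustin's proposed names, not anything that exists in Puppet today):

```ruby
# The effective modulepath is the user-configured modulepath with the
# install-time system modulepath appended.
def effective_modulepath(user_modulepath, system_modulepath)
  # User entries come first, so modules on the user's modulepath can
  # override the bundled ones.
  user_modulepath + system_modulepath
end

effective_modulepath(['/etc/puppet/modules'], ['/usr/share/puppet/modules'])
# => ["/etc/puppet/modules", "/usr/share/puppet/modules"]
```

The ordering is the whole design: because lookup scans left to right, a site-installed module shadows the bundled copy without any configuration change.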

Dustin

Nan Liu

Jan 13, 2014, 9:56:48 PM
to puppet-dev
It's great that the core types/providers are getting a serious review.

On Mon, Jan 13, 2014 at 4:20 PM, Andy Parker <an...@puppetlabs.com> wrote:
> On Sun, Jan 12, 2014 at 2:38 PM, Kylo Ginsberg <ky...@puppetlabs.com> wrote:
>> Even later to the party, but I agree :) The alternative of a contrib directory could muddy the waters so that there were 3 locations a given type/provider could land (core/contrib/module), when the current 2 locations (core/module) suffice. Easy to imagine extra bike-shedding on where something lands and/or the contrib directory becoming a failed experiment wasteland.


> Ok, so the initial idea of keeping a "contrib" inside the puppet codebase for some things under active development seems to be a losing one. What about the trimmed down idea of having it be a staging ground for pulling things out (in which case "contrib" is a terrible name for it)?

The less the better, since this could get pretty confusing to troubleshoot. Maybe a mechanism which collapses the providers to avoid a large module sprawl. At minimum, a tool to track everything:

puppet resource_types
- package core
- service core
- database /etc/puppetlabs/puppet/modules/mysql 
...

puppet resource_providers package
- package:
  |- apt /etc/puppetlabs/puppet/modules/apt/ v1.0
  |- gem /usr/share/puppet/modules/gem v1.0
...
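The bookkeeping behind a tool like that could be sketched roughly as follows (a hypothetical sketch: the core-type list, the module layout, and the function name are all invented for illustration):

```ruby
# Report where a type's definition comes from: the core, a module on
# the modulepath, or nowhere we can find.
CORE_TYPES = %w[package service file user group exec cron host].freeze

def type_origin(type_name, modulepath)
  return 'core' if CORE_TYPES.include?(type_name)
  modulepath.each do |dir|
    # Invented layout: a module directory named after the type.
    candidate = File.join(dir, type_name, 'lib', 'puppet', 'type',
                          "#{type_name}.rb")
    return candidate if File.exist?(candidate)
  end
  'unknown'
end
```

The hard part, which the sketch glosses over, is that a real tool would also have to report a version per module, which only works if bundled modules carry version metadata.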

>> However, one question I have about shipping modules with puppet as discussed in this thread: are people thinking this means modules pre-installed in /usr/share/puppet/modules, or that the packaging step would merge/patch the tier2 modules into puppet proper?


> I'm interested in this as well.

Maybe merging would be better, at least to force detection of colliding providers (you can't install two versions of the yum provider). 
 
>> If the former, is that overly disruptive to sites that specify modulepath? If the latter, does that complicate sites that want to upgrade one of the packaged-in modules using pmt? I haven't thought this through, so there may be a perfectly simple answer.

Installing to /usr/share would be a pain for things like Vagrant (which assumes a single Puppet module path). I can see other issues with testing in Vagrant, and there would be quite an increase in .fixtures.yml just to do something basic.

For puppet upgrades there's no assumption that modules are compatible, and I think handling upgrades of type/provider modules would be a similar process (Puppetfile/librarian-puppet or r10k).

Nan

Jason Antman

Jan 14, 2014, 10:07:29 AM
to puppe...@googlegroups.com
I thought I'd throw in my 2 cents, as a long-time puppet user, current PE customer, and community member trying to make more code contributions...

First off, this thread has been great. I was going to quote a few replies, but there have been so many good ideas, that's sort of pointless. I fully support Daniel's plan to push tier2 directly to modules. More than that, I'd like to see it implemented in a way that I (an "advanced user") can easily opt-out of a given tier2 module (did someone say Nagios?) and replace it with something external.

I'd like to share a realization that I recently had, which could perhaps be an aid in delineating what's tier1 vs tier2: I'd always assumed that everything that shipped with Puppet was tested. Period. It was unclear to me until I started trying to use puppetlabs' forge modules with PE (and found that one or two in particular didn't work), and started actually submitting some PRs against core, that there were varying levels of support, and that just because Puppet might ship with a provider for X doesn't mean that it's fully validated and tested against that (i.e. Andy's comments about FreeBSD). (As an aside, I'd also assumed that what I remember hearing years ago was true, and there was no internal split between PE and FOSS - that PE was "just FOSS in a prettier box, with support and some value-adds", presumably that the only testing done to PE and not FOSS was around Console and packaging. Andy's comment that PE is tested on more platforms than FOSS was something I'd always written off as anti-Puppet conspiracy theory.)

As such, for the benefit of the community, I'd suggest that anything that (a) isn't fully tested and vetted by PL (whatever that means) or (b) is known to be broken (i.e. naginator) be split out into tier2, as modules, with a clear delineation to explain to users that these are essentially sub-par and warranty-free. (I suppose this largely falls in line with Dustin's comment about Python core vs modules).

I can't say I have a clear picture of how this would work... but as a probably 'more advanced' user of Puppet, I'd like to see this happen in a way that makes it easy to not only run a new version of a tier2 module, but also perform a wholesale replacement of it with something from the community (once again, reference to the nagios types). As such, I guess I'd be in favor of installing them *somewhere* outside of the core and adding a config directive (true by default) to automatically append that path to modulepath. That would be transparent to users who don't care about it, and for people like me, allow us to cherry-pick specific modules to append to our modulepath, and ignore others. Ideally the Modulefile format would be updated to understand this, so it would be easier to specify requirements for things that might no longer be present in a given puppet install.
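The opt-out directive described above might look something like this in puppet.conf (a sketch only: both the setting name and its default are hypothetical, and no such setting exists today):

```ini
[main]
modulepath = /etc/puppet/modules
# Hypothetical setting: when true, the bundled tier-2 module directory
# is appended to the effective modulepath. Advanced users set it to
# false and cherry-pick replacements from the Forge instead.
use_bundled_modules = true
```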

Versioning and dependencies are another strong argument in favor of moving directly to modules. If tier2 "things", i.e. the FreeBSD provider, are maintained and versioned separately but included in the "puppet" distribution proper, how does a Forge module or arbitrary piece of code declare that it needs a specific version of the provider? If I pull in the latest git version but am still running "Puppet 3.5.0" how is that communicated to modules? We know how to do this with puppet as a whole ($::puppetversion) or with modules (Modulefile, and the various tools that support it), but it's unclear to me how this would work if, for example, the FreeBSD package provider version wasn't inextricably tied to the puppet version.

Just some thoughts. I'm very excited to see this change, both for the implications it has around nagios, and to possibly throw my name in the hat as a maintainer for the `pip` package provider.
-Jason Antman

John Bollinger

Jan 14, 2014, 1:52:58 PM
to puppe...@googlegroups.com


On Monday, January 13, 2014 7:13:20 PM UTC-6, Dustin J. Mitchell wrote:
> How about something as simple as a top-level "modules" directory in
> the puppet source, which is installed separately and dynamically
> appended to the modulepath at runtime?  That avoids any problems for
> users who set modulepath, allows modules in users' modulepaths to
> override the built-in modules, and doesn't use the inaccurate name
> "contrib".  It also makes it easy to move modules in and out of core.



I like the idea of a system modulepath wherein reside modules that are accounted in some way as part of Puppet itself, with that path being appended automatically to the user modulepath. I think that would make the change fairly transparent to users, even if it occurs over several releases. It would also make it convenient to package the tier-2 stuff separately from the Puppet core, if so desired, and I think it would be easy to maintain (as much so as any alternative I can think of, anyway).


John

Kylo Ginsberg

Jan 14, 2014, 4:13:36 PM
to puppe...@googlegroups.com
On Tue, Jan 14, 2014 at 7:07 AM, Jason Antman <ja...@jasonantman.com> wrote:
> (As an aside, I'd also assumed that what I remember hearing years ago was true, and there was no internal split between PE and FOSS - that PE was "just FOSS in a prettier box, with support and some value-adds", presumably that the only testing done to PE and not FOSS was around Console and packaging. Andy's comment that PE is tested on more platforms than FOSS was something I'd always written off as anti-Puppet conspiracy theory.)

Quick comment wrt this aside, so you don't need to watch the Zapruder film ;>

* PE tests against the Operating System grid here: http://docs.puppetlabs.com/pe/latest/install_system_requirements.html

Both test platforms the other doesn't (e.g. PE tests AIX, FOSS tests Fedora, etc.). And, notably for this thread, neither tests, say, *BSD.

Kylo

Andy Parker

Jan 15, 2014, 3:57:36 PM
to puppe...@googlegroups.com
On Mon, Jan 13, 2014 at 6:56 PM, Nan Liu <nan...@gmail.com> wrote:
It's great the core type/provider is getting a serious review.

On Mon, Jan 13, 2014 at 4:20 PM, Andy Parker <an...@puppetlabs.com> wrote:
On Sun, Jan 12, 2014 at 2:38 PM, Kylo Ginsberg <ky...@puppetlabs.com> wrote:
Even later to the party, but I agree :) The alternative of a contrib directory could muddy the waters so that there were 3 locations a given type/provider could land (core/contrib/module), when the current 2 locations (core/module) suffice. Easy to imagine extra bike-shedding on where something lands and/or the contrib directory becoming a failed experiment wasteland.


Ok, so the initial idea of keeping a "contrib" inside the puppet codebase for some things under active development seems to be a losing one. What about the trimmed down idea of having it be a staging ground for pulling things out (in which case "contrib" is a terrible name for it)? 

The less the better, since this could get pretty confusing to troubleshoot. Maybe a mechanism which collapse the providers to avoid a large module sprawl. At minimum a tool to track everything:

puppet resource_types
- package core
- service core
- database /etc/puppetlabs/puppet/modules/mysql 
...

puppet resource_providers package
- package:
  |- apt /etc/puppetlabs/puppet/modules/apt/ v1.0
  |- gem /usr/share/puppet/modules/gem v1.0
...
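A tool like that could start as a simple scan of the modulepath. A rough sketch, assuming the conventional `<module>/lib/puppet/provider/<type>/*.rb` layout inside each modulepath entry; the function name and output format are invented:

```shell
# Sketch only: report where each provider for a given type comes from,
# assuming providers live at <module>/lib/puppet/provider/<type>/*.rb
# in each modulepath entry. Output format is illustrative.
list_providers() {
  type="$1"; shift
  for dir in "$@"; do                      # each modulepath entry
    for mod in "$dir"/*; do                # each module in it
      pdir="$mod/lib/puppet/provider/$type"
      [ -d "$pdir" ] || continue
      for p in "$pdir"/*.rb; do
        [ -e "$p" ] || continue
        echo " |- $(basename "$p" .rb) $mod"
      done
    done
  done
}
```

This only covers providers shipped in modules; core providers would need a second, hard-coded source.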


That brings up a good point. In the perl world you have corelist (http://stackoverflow.com/questions/2049735/how-can-i-tell-if-a-perl-module-is-core-or-part-of-the-standard-install), which becomes invaluable for writing portable CPAN modules. It helps answer the question, which gets harder and harder as time goes on, of "what version of Foo was shipped in core version X?"
 
However, one question I have about shipping modules with puppet as discussed in this thread: are people thinking this means modules pre-installed in /usr/share/puppet/modules, or that the packaging step would merge/patch the tier2 modules into puppet proper?


I'm interested in this as well.

Maybe merging would be better, at least to force detection of colliding providers (you can't install two versions of the yum provider). 

I'm not clear on what you mean. Does installing two versions of the yum provider not work, or are you saying that this would be a desirable outcome?
 
 
If the former, is that overly disruptive to sites that specify modulepath? If the latter, does that complicate sites that want to upgrade one of the packaged-in modules using pmt? I haven't thought this through, so there may be a perfectly simple answer.

Installing to /usr/share would be a pain for things like Vagrant (which assumes a single puppet module path). I can see other issues with testing in Vagrant, and there would be quite an increase in .fixtures.yml just to do something basic.

For puppet upgrades there's no assumption that modules are compatible and I think handling upgrades of type/provider modules would be similar process (Puppetfile/librarian-puppet or r10k). 
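For reference, pinning split-out type/provider modules with those tools would look something like the following Puppetfile; the module names and versions here are invented for illustration:

```ruby
forge 'https://forge.puppetlabs.com'

# Hypothetical split-out provider modules; names/versions are made up.
mod 'puppetlabs/nagios_core', '1.0.0'
mod 'puppetlabs/freebsd',
  :git => 'https://github.com/puppetlabs/puppetlabs-freebsd.git'
```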

Nan

--
You received this message because you are subscribed to the Google Groups "Puppet Developers" group.
To unsubscribe from this group and stop receiving emails from it, send an email to puppet-dev+...@googlegroups.com.

For more options, visit https://groups.google.com/groups/opt_out.

Andy Parker

unread,
Jan 15, 2014, 4:20:17 PM1/15/14
to puppe...@googlegroups.com
Ok, let me try to summarize the discussion so far:

  * Tier1/Tier2 as a basic premise seems to be accepted as a good idea.
  * Tier2 code ideally won't live inside the puppet repo at all
  * Tier2 code should be packaged up as modules
  * Make the separation based on what we (PL) actually test
  * OR make *everything* Tier2 (no such thing as core providers)
  * the puppet packages should pull in a select set of modules (and specific versions) and ship those in a vendor modulepath

I think I can be on board with this as an end goal. And I lean toward making everything Tier2. My only concern is the overhead of managing all of those dependencies; it seems like it could quickly lead to a place where we are spending a huge amount of our time just dealing with version numbers.

Now for a proposal on how to get there (order might be a little wrong):

  1. create a "modules" directory that is a peer of "lib" in the puppet repo
  2. select a section of functionality to pull out (nagios might be the first good candidate since we've already tried it once)
  3. create a puppet module in the modules directory and move the code and tests to the module
  4. Update the rake tasks to run all of the spec tests as well as the spec tests of each module
  5. Plumb in a "build" rake task (right now we don't have one). This will be a step that merges the module back into the lib code as part of packaging.
  6. Extend puppet's support for modulepath to include a static vendored modules section
  7. Change the build/packaging/install scripts to move the modules into the vendored directory instead of merging it into the puppet code
  8. Repeat steps 2 and 3 until happy

After that is all in place (or just after the first one plumbs in all of the functionality) I think we can then start moving things off to the forge and pulling them in a different way.
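Step 6 is the interesting plumbing. As a sketch of the intended lookup order (puppet has no vendored-modules setting today, and the directory names here are hypothetical): a module is taken from the user's modulepath first, falling back to the vendored copy only if no user copy exists.

```shell
# Sketch of the proposed precedence: user modulepath entries are
# searched before the vendored directory. Paths are illustrative.
find_module() {
  module="$1"; shift
  for dir in "$@"; do
    if [ -d "$dir/$module" ]; then
      printf '%s\n' "$dir/$module"
      return 0
    fi
  done
  return 1
}
# e.g. find_module apt /etc/puppet/modules /usr/share/puppet/vendor_modules
```

That ordering is what lets an advanced user shadow a vendored module wholesale with their own.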



Dustin J. Mitchell

unread,
Jan 15, 2014, 4:21:48 PM1/15/14
to puppe...@googlegroups.com
On Wed, Jan 15, 2014 at 4:20 PM, Andy Parker <an...@puppetlabs.com> wrote:
> * Tier2 code ideally won't live inside the puppet repo at all
..
> 1. create a "modules" directory that is a peer of "lib" in the puppet repo

These seem contradictory..

Dustin

Andy Parker

unread,
Jan 15, 2014, 4:37:07 PM1/15/14
to puppe...@googlegroups.com
They are. I'm thinking that the modules directory would live only as long as it takes to extract them out in a way that produces reasonable modules for publishing on the forge. The reason for the modules step is so that we can keep shipping a working system as the work is going on without having to keep the changes on a long lived branch.

The contradiction is resolved after we complete the final step "After that is all in place (or just after the first one plumbs in all of the functionality) I think we can then start moving things off to the forge and pulling them in a different way."
 
Dustin



Dustin J. Mitchell

unread,
Jan 15, 2014, 5:16:52 PM1/15/14
to puppe...@googlegroups.com
Thanks for the clarification.

Dustin

James Turnbull

unread,
Jan 15, 2014, 7:01:38 PM1/15/14
to puppe...@googlegroups.com
Andy Parker wrote:
> On Wed, Jan 15, 2014 at 1:21 PM, Dustin J. Mitchell <dus...@v.igoro.us
> <mailto:dus...@v.igoro.us>> wrote:
>
> On Wed, Jan 15, 2014 at 4:20 PM, Andy Parker <an...@puppetlabs.com
> <mailto:an...@puppetlabs.com>> wrote:
> > * Tier2 code ideally won't live inside the puppet repo at all
> ..
> > 1. create a "modules" directory that is a peer of "lib" in the
> puppet repo
>
> These seem contradictory..
>
>
> They are. I'm thinking that the modules directory would live only as
> long as it takes to extract them out in a way that produces reasonable
> modules for publishing on the forge. The reason for the modules step is
> so that we can keep shipping a working system as the work is going on
> without having to keep the changes on a long lived branch.
>
> The contradiction is resolved after we complete the final step "After
> that is all in place (or just after the first one plumbs in all of the
> functionality) I think we can then start moving things off to the forge
> and pulling them in a different way."
>

I think this is a broadly good idea but I've got one concern about the
on ramp to using Puppet. Whatever is done, pull out some/pull out all,
the user experience of getting started with Puppet should remain
seamless or at least as good as it is now. For example, if there's
suddenly another step to get started with Puppet, i.e.:

1. Install Puppet.
2. Add resources.
3. See Puppet in action.

Then I think the getting started user experience suffers. Especially
with so many tutorials out there that just assume various resources will
be available. If the user gets some esoteric error (Dog forbid Puppet
having an esoteric error :)) when they try to run a local .pp file or do
a puppet resource then that's a big turn-off.

Puppet's learning curve can be steep for many users. Let's not make it
any harder.

Cheers

James

--
* The Docker Book (http://dockerbook.com)
* The LogStash Book (http://logstashbook.com)
* Pro Puppet (http://tinyurl.com/ppuppet2 )
* Pro Linux System Administration (http://tinyurl.com/linuxadmin)
* Pro Nagios 2.0 (http://tinyurl.com/pronagios)
* Hardening Linux (http://tinyurl.com/hardeninglinux)

Deepak Giridharagopal

unread,
Jan 15, 2014, 8:06:38 PM1/15/14
to puppe...@googlegroups.com
On Wed, Jan 15, 2014 at 2:20 PM, Andy Parker <an...@puppetlabs.com> wrote:
Ok, let me try to summarize the discussion so far:

  * Tier1/Tier2 as a basic premise seems to be accepted as a good idea.
  * Tier2 code ideally won't live inside the puppet repo at all
  * Tier2 code should be packaged up as modules
  * Make the separation based on what we (PL) actually test
  * OR make *everything* Tier2 (no such thing as core providers) 
  * the puppet packages should pull in a select set of modules (and specific versions) and ship those in a vendor modulepath

I think I can be on board with this as an end goal. And I lean toward making everything Tier2. My only concern is the overhead of managing all of those dependencies; it seems like it could quickly lead to a place where we are spending a huge amount of our time just dealing with version numbers.

Now for a proposal on how to get there (order might be a little wrong):

  1. create a "modules" directory that is a peer of "lib" in the puppet repo
  2. select a section of functionality to pull out (nagios might be the first good candidate since we've already tried it once)
  3. create a puppet module in the modules directory and move the code and tests to the module
  4. Update the rake tasks to run all of the spec tests as well as the spec tests of each module
  5. Plumb in a "build" rake task (right now we don't have one). This will be a step that merges the module back into the lib code as part of packaging.
  6. Extend puppet's support for modulepath to include a static vendored modules section
  7. Change the build/packaging/install scripts to move the modules into the vendored directory instead of merging it into the puppet code
  8. Repeat steps 2 and 3 until happy

After that is all in place (or just after the first one plumbs in all of the functionality) I think we can then start moving things off to the forge and pulling them in a different way.


This is a great thread. So as I've been reading through this and talking with people on #puppet-dev, I've come around to thinking about it this way:

* there's code for which Puppet Labs is the maintainer along with the community
* there's code for which there are only community maintainers
* there's code that's effectively unmaintained
* there's code that's currently in core, but probably shouldn't be (like the nagios stuff)

For things that really shouldn't have been in core in the first place, I think we should just move that stuff out. I'm really just thinking about the nagios types here, but maybe there are others.

For things that are effectively unmaintained, like platforms that nobody is willing to step up and own...I think we should put those on a path to be moved out. We're not doing anyone any favors by having that stuff in core and bit-rotting.

The other two buckets are the most important ones in my mind. This isn't really about tiers, but instead about maintained/unmaintained code. As long as code is maintained, it should be a first-class citizen regardless of whether or not it's maintained by the community or by puppet labs. The community is not second-class, which is what I think the word "tier" implies.

Unmaintained code, though, is definitely second-class. :)

So really what I'm talking about is actively seeking out community maintainers for certain platforms, and giving them commit access. They handle pull requests for that part of the tree, and generally act as good stewards (tests pass, obey semver, packaging works, etc).

I think in order to get there, we need to do a few things:

1. Inventory what we've got in terms of platforms/types/providers
2. Figure out what subset of those are things Puppet Labs helps maintain (see Kylo's link)
3. Figure out what subset of those are like the nagios types in that they really make sense as external modules
4. For the rest, begin looking for community maintainers. We can look at people who have made commits, we can ask on this list, IRC, etc.

I think once we do that exercise, we can start thinking about the mechanics of reorganizing the source tree accordingly. I'd suggest that we reorganize things so that maintainers manage a subtree.

--
Deepak Giridharagopal / Puppet Labs

Deepak Giridharagopal

unread,
Jan 15, 2014, 8:08:06 PM1/15/14
to puppe...@googlegroups.com
+100

deepak

Pawel Tomulik

unread,
Jan 15, 2014, 8:59:14 PM1/15/14
to puppe...@googlegroups.com, ja...@lovedthanlost.net


I believe the separation may be done without raising the learning curve. Once puppet is split into a (little) core and a gazillion modules, a (sub)set of modules may be identified to be packaged and distributed as deb, rpm, and so on. Let's say there could be a `puppet-modules` or `puppet-standard-modules` package and `puppet` could just depend on it (or at least recommend it - some packaging systems such as Debian's apt have such functionality and sometimes install recommended packages automatically). So, basically you install puppet as always and you get puppet and all of its "standard" types/providers as it is currently.
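As a rough illustration of that packaging shape (the package names are invented, and real Debian packaging would differ in detail):

```
Package: puppet
Depends: ruby, ${misc:Depends}
Recommends: puppet-standard-modules
Description: configuration management system (core only)

Package: puppet-standard-modules
Depends: puppet
Description: standard puppet types/providers, split out as modules
```

With Recommends, a default apt configuration pulls the modules in automatically, while advanced users can still install the bare core.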
 

Puppet's learning curve can be steep for many users. Let's not make it
any harder.

Cheers

James

--
* The Docker Book (http://dockerbook.com)
* The LogStash Book (http://logstashbook.com)
* Pro Puppet (http://tinyurl.com/ppuppet2 )
* Pro Linux System Administration (http://tinyurl.com/linuxadmin)
* Pro Nagios 2.0 (http://tinyurl.com/pronagios)
* Hardening Linux (http://tinyurl.com/hardeninglinux)



--
Pawel Tomulik

Pawel Tomulik

unread,
Jan 15, 2014, 9:12:09 PM1/15/14
to puppe...@googlegroups.com
On Tuesday, January 14, 2014 at 4:07:29 PM UTC+1, Jason Antman wrote:
I thought I'd throw in my 2 cents, as a long-time puppet user, current PE customer, and community member trying to make more code contributions...

First off, this thread has been great. I was going to quote a few replies, but there have been so many good ideas, that's sort of pointless. I fully support Daniel's plan to push tier2 directly to modules. More than that, I'd like to see it implemented in a way that I (an "advanced user") can easily opt-out of a given tier2 module (did someone say Nagios?) and replace it with something external.

I'd like to share a realization that I recently had, which could perhaps be an aid in delineating what's tier1 vs tier2: I'd always assumed that everything that shipped with Puppet was tested. Period. It was unclear to me until I started trying to use puppetlabs' forge modules with PE (and found that one or two in particular didn't work), and started actually submitting some PRs against core, that there were varying levels of support, and that just because Puppet might ship with a provider for X doesn't mean that it's fully validated and tested against that (i.e. Andy's comments about FreeBSD). (As an aside, I'd also assumed that what I remember hearing years ago was true, and there was no internal split between PE and FOSS - that PE was "just FOSS in a prettier box, with support and some value-adds", presumably that the only testing done to PE and not FOSS was around Console and packaging. Andy's comment that PE is tested on more platforms than FOSS was something I'd always written off as anti-Puppet conspiracy theory.)

As such, for the benefit of the community, I'd suggest that anything that (a) isn't fully tested and vetted by PL (whatever that means) or (b) is known to be broken (i.e. naginator) be split out into tier2, as modules, with a clear delineation to explain to users that these are essentially sub-par and warranty-free. (I suppose this largely falls in line with Dustin's comment about Python core vs modules).

I can't say I have a clear picture of how this would work... but as a probably 'more advanced' user of Puppet, I'd like to see this happen in a way that makes it easy to not only run a new version of a tier2 module, but also perform a wholesale replacement of it with something from the community (once again, reference to the nagios types). As such, I guess I'd be in favor of installing them *somewhere* outside of the core and adding a config directive (true by default) to automatically append that path to modulepath. That would be transparent to users who don't care about it, and for people like me, allow us to cherry-pick specific modules to append to our modulepath, and ignore others. Ideally the Modulefile format would be updated to understand this, so it would be easier to specify requirements for things that might no longer be present in a given puppet install.

Versioning and dependencies are another strong argument in favor of moving directly to modules. If tier2 "things", i.e. the FreeBSD provider, are maintained and versioned separately but included in the "puppet" distribution proper, how does a Forge module or arbitrary piece of code declare that it needs a specific version of the provider? If I pull in the latest git version but am still running "Puppet 3.5.0" how is that communicated to modules? We know how to do this with puppet as a whole ($::puppetversion) or with modules (Modulefile, and the various tools that support it), but it's unclear to me how this would work if, for example, the FreeBSD package provider version wasn't inextricably tied to the puppet version.
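If the provider did ship as a separate module, the existing module metadata could carry that declaration. A hypothetical Modulefile for a module that needs the split-out provider (the 'puppetlabs/freebsd' name and version are invented for illustration):

```ruby
name    'example-myapp'
version '0.1.0'

# Invented dependency on a hypothetical split-out FreeBSD provider module.
dependency 'puppetlabs/freebsd', '>= 1.0.0'
```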

Just some thoughts. I'm very excited to see this change, both for the implications it has around nagios, and to possibly throw my name in the hat as a maintainer for the `pip` package provider.
-Jason Antman



It seems like a prerequisite for the above is a decent, feature reach packaging system for puppet modules. It should provide a way to describe complex dependencies, including expressions such as "or" (e.g. foo >=1.2.3 | bar >= 4.5.6), tools for conflict resolutions, options to hold installed versions, smart upgrades etc. Only then you could ensure that users can be happy mixing custom versions of custom modules and their systems could evolve smoothly in time. Maybe another idea would be to split out the module packager and make it a separate project? :)

Pawel Tomulik

unread,
Jan 16, 2014, 5:16:33 AM1/16/14
to puppe...@googlegroups.com


On Thursday, January 16, 2014 at 3:12:09 AM UTC+1, Pawel Tomulik wrote:


It seems like a prerequisite for the above is a decent, feature reach packaging system for puppet modules. [...]

Sorry for my English, I meant "feature-rich" :)

Jason Antman

unread,
Jan 17, 2014, 8:48:03 PM1/17/14
to puppe...@googlegroups.com
Re: Deepak's message about community maintainers... that sounds
wonderful to me. I'm not sure they'd even need commit access, perhaps it
would be feasible to operate on a model where all PRs against a given
provider are handled by a maintainer, and once they sign off a PL
employee does the merge. That would allow the burden of triage and
review to be handled by a community maintainer, while still allowing
someone @puppetlabs.com to have the final approval/commit.

On 01/15/2014 07:01 PM, James Turnbull wrote:
> I think this is a broadly good idea but I've got one concern about the
> on ramp to using Puppet. Whatever is done, pull out some/pull out all,
> the user experience of getting started with Puppet should remain
> seamless or at least as good as it is now. For example, if there's
> suddenly another step to get started with Puppet, i.e.:
>
> 1. Install Puppet.
> 2. Add resources.
> 3. See Puppet in action.
>
> Then I think the getting started user experience suffers. Especially
> with so many tutorials out there that just assume various resources will
> be available. If the user gets some esoteric error (Dog forbid Puppet
> having an esoteric error :)) when they try to run a local .pp file or do
> a puppet resource then that's a big turn-off.
>
> Puppet's learning curve can be steep for many users. Let's not make it
> any harder.
>
> Cheers
>
> James
>
This also speaks directly to something that Pawel said... if PMT were a
feature-complete tool, understanding things other than just the Forge
(i.e. GitHub, arbitrary git URIs, internal Forge mirrors, etc.) and
or'ed/fork requirements (i.e. "puppetlabs-stdlib >= 3.2.0 *or*
github.com/jantman/puppetlabs-stdlib >= 3.2.1") ... and it was the
de-facto method of managing modules, then I'd say (hand-waving
implementation) there should be a list of modules that are "standard"
(i.e. the previously-core parts), and if they're missing at puppetmaster
start (or perhaps even at install-time), or puppet starts with an empty
moduledir, the user is prompted (or a similar message is logged) to run
"puppet module bootstrap" which would install a default list of modules.
I'll admit I'm not sure how that would work with a simple one-off .pp
file, but then again, I think the Forge is becoming ubiquitous enough
that standalone .pp files with no external modules (aside from testing)
should, hopefully, be less and less common.
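Hand-waving a bit further, such a "puppet module bootstrap" might amount to little more than walking a shipped default list and installing whatever is missing. Everything here (the list format, the paths, and the subcommand itself) is hypothetical:

```shell
# Hypothetical "bootstrap": install each module named in a shipped
# default list unless it is already present in the module directory.
# Only prints the commands it would run, for illustration.
bootstrap_modules() {
  list="$1" moduledir="$2"
  while read -r mod; do
    name=${mod#*-}                     # e.g. puppetlabs-stdlib -> stdlib
    if [ ! -d "$moduledir/$name" ]; then
      echo "would run: puppet module install $mod"
    fi
  done < "$list"
}
```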

I certainly agree with James' standpoint. However, as the set of people
at $work who commit to our internal puppet modules expands, I'm
constantly battling the vast amount of outdated information/tutorials on
the 'net. Keeping backwards compatibility with tutorials and blog posts
that never get updated (yes, I'm guilty of this myself) was already
broken by deprecating dynamic variable lookups, puppetd, and a handful
of other changes.

I'll raise a counterpoint that, instead of trying to maintain backwards
compatibility with third-party docs, we should try to (a) make
docs.puppetlabs.com such an authoritative and complete source that
future tutorials will begin "first follow the Getting Puppet Setup doc
at <url> and then....", and (b) try to do the right thing at install
time and startup to detect this situation and offer the user a simple
one-command method of installing a default/base module set.

-Jason

James Turnbull

unread,
Jan 17, 2014, 9:32:41 PM1/17/14
to puppe...@googlegroups.com
> I certainly agree with James' standpoint. However, as the set of people
> at $work who commit to our internal puppet modules expands, I'm
> constantly battling the vast amount of outdated information/tutorials on
> the 'net. Keeping backwards compatibility with tutorials and blog posts
> that never get updated (yes, I'm guilty of this myself) was already
> broken by deprecating dynamic variable lookups, puppetd, and a handful
> of other changes.
>
> I'll raise a counterpoint that, instead of trying to maintain backwards
> compatibility with third-party docs, we should try to (a) make
> docs.puppetlabs.com such an authoritative and complete source that
> future tutorials will begin "first follow the Getting Puppet Setup doc
> at <url> and then....", and (b) try to do the right thing at install
> time and startup to detect this situation and offer the user a simple
> one-command method of installing a default/base module set.

I don't much care if Puppet as described in a tutorial (or, theoretically, a book)
written several years ago doesn't work anymore. That's just one entry point
to the issue. More important, and the key issue, is whether a user can
puzzle out how to use Puppet upon first touch, especially if it doesn't
"just work" out of the box.

Regards

Felix Frank

unread,
Jan 18, 2014, 12:20:16 PM1/18/14
to puppe...@googlegroups.com
On 01/16/2014 01:01 AM, James Turnbull wrote:
> I think this is a broadly good idea but I've got one concern about the
> on ramp to using Puppet. Whatever is done, pull out some/pull out all,
> the user experience of getting started with Puppet should remain
> seamless or at least as good as it is now. For example, if there's
> suddenly another step to get started with Puppet, i.e.:
>
> 1. Install Puppet.
> 2. Add resources.
> 3. See Puppet in action.
>
> Then I think the getting started user experience suffers. Especially
> with so many tutorials out there that just assume various resources will
> be available. If the user gets some esoteric error (Dog forbid Puppet
> having an esoteric error :)) when they try to run a local .pp file or do
> a puppet resource then that's a big turn-off.
>
> Puppet's learning curve can be steep for many users. Let's not make it
> any harder.

I feel that this is a very valid point. As Pawel pointed out, this can
be approached with package dependencies.

Would it make sense to take this a step further and rename the remainder
of puppet to puppet-core and supplement it with, say,
puppet-core-modules? Puppet proper would comprise both.

Just a thought.

Pawel Tomulik

unread,
Jan 18, 2014, 1:06:56 PM1/18/14
to puppe...@googlegroups.com


You may wish to look at https://groups.google.com/forum/#!topic/puppet-bsd/g5DDPd3PL-U. These people seem to think about similar approach to packaging puppet for FreeBSD.

Andy Parker

unread,
Jan 21, 2014, 1:30:42 PM1/21/14
to puppe...@googlegroups.com
I agree with that thread. In general I think it is a good idea for platform-specific packages to include some platform-specific modules as part of the system out of the box.


Andy Parker

unread,
Jan 21, 2014, 1:43:00 PM1/21/14
to puppe...@googlegroups.com
I mostly agree with that. We move it out (nagios is the most obvious example, I think), but what do we do with it? Just dump it on the forge and leave it?
 

For things that are effectively unmaintained, like platforms that nobody is willing to step up and own...I think we should put those on a path to be moved out. We're not doing anyone any favors by having that stuff in core and bit-rotting.

The other two buckets are the most important ones in my mind. This isn't really about tiers, but instead about maintained/unmaintained code. As long as code is maintained, it should be a first-class citizen regardless of whether or not it's maintained by the community or by puppet labs. The community is not second-class, which is what I think the word "tier" implies.

Unmaintained code, though, is definitely second-class. :)

So really what I'm talking about is actively seeking out community maintainers for certain platforms, and giving them commit access. They handle pull requests for that part of the tree, and generally act as good stewards (tests pass, obey semver, packaging works, etc).


So this would be to keep the code in the puppet repo, which is definitely much less work up front. Although I think we should still try to split out the code into modules that live inside the main repo. If nothing else this helps to make the boundaries clearer.
 
I think in order to get there, we need to do a few things:

1. Inventory what we've got in terms of platforms/types/providers

Types/Providers:
augeas
 +- augeas
component
  no providers
computer
 +- computer
cron
 +- crontab
exec
 +- posix
 +- shell
 +- windows
file
 +- posix
 +- windows
file
 +- posix
 +- windows
filebucket
  no providers
group
 +- aix
 +- directoryservice
 +- groupadd
 +- ldap
 +- pw
 +- windows_adsi
host
 +- parsed
interface
 +- cisco
k5login
  no providers
macauthorization
 +- macauthorization
mailalias
 +- aliases
maillist
 +- mailman
mcx
 +- mcxcontent
mount
 +- parsed
nagios_command
  no providers
nagios_contact
  no providers
nagios_contactgroup
  no providers
nagios_host
  no providers
nagios_hostdependency
  no providers
nagios_hostescalation
  no providers
nagios_hostextinfo
  no providers
nagios_hostgroup
  no providers
nagios_service
  no providers
nagios_servicedependency
  no providers
nagios_serviceescalation
  no providers
nagios_serviceextinfo
  no providers
nagios_servicegroup
  no providers
nagios_timeperiod
  no providers
notify
  no providers
package
 +- aix
 +- appdmg
 +- apple
 +- apt
 +- aptitude
 +- aptrpm
 +- blastwave
 +- dpkg
 +- fink
 +- freebsd
 +- gem
 +- hpux
 +- macports
 +- msi
 +- nim
 +- openbsd
 +- opkg
 +- pacman
 +- pip
 +- pkg
 +- pkgdmg
 +- pkgin
 +- pkgutil
 +- portage
 +- ports
 +- portupgrade
 +- rpm
 +- rug
 +- sun
 +- sunfreeware
 +- up2date
 +- urpmi
 +- windows
 +- windows
 +- yum
 +- yumhelper.py
 +- zypper
port
 +- parsed
resources
  no providers
router
  no providers
schedule
  no providers
scheduled_task
 +- win32_taskscheduler
selboolean
 +- getsetsebool
selmodule
 +- semodule
service
 +- base
 +- bsd
 +- daemontools
 +- debian
 +- freebsd
 +- gentoo
 +- init
 +- launchd
 +- openbsd
 +- openrc
 +- openwrt
 +- redhat
 +- runit
 +- service
 +- smf
 +- src
 +- systemd
 +- upstart
 +- windows
ssh_authorized_key
 +- parsed
sshkey
 +- parsed
stage
  no providers
tidy
  no providers
user
 +- aix
 +- directoryservice
 +- hpux
 +- ldap
 +- pw
 +- user_role_add
 +- useradd
 +- windows_adsi
vlan
 +- cisco
whit
  no providers
yumrepo
  no providers
zfs
 +- zfs
zone
 +- solaris
zpool
 +- zpool

This list was created by:

for t in $(ls lib/puppet/type); do
  base=$(basename "$t" .rb)
  echo "$base"
  if [ -d "lib/puppet/provider/$base" ]; then
    for p in $(ls "lib/puppet/provider/$base"); do
      echo " +- $(basename "$p" .rb)"
    done
  else
    echo "  no providers"
  fi
done
 
2. Figure out what subset of those are things Puppet Labs helps maintain (see Kylo's link)
3. Figure out what subset of those are like the nagios types in that they really make sense as external modules
4. For the rest, begin looking for community maintainers. We can look at people who have made commits, we can ask on this list, IRC, etc.

I think once we do that exercise, we can start thinking about the mechanics of reorganizing the source tree accordingly. I'd suggest that we reorganize things so that maintainers manage a subtree.

Exactly
 

--
Deepak Giridharagopal / Puppet Labs


Pawel Tomulik

unread,
Jan 21, 2014, 4:59:04 PM1/21/14
to puppe...@googlegroups.com


I think it may be quite a bit harder to identify the specs that are related to these types/providers.
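As a first pass, the conventional layout makes the obvious candidates mechanical to find. This sketch only catches specs that mirror lib/puppet under spec/unit; shared examples and integration specs would still need hand-tracing:

```shell
# List the spec files that conventionally correspond to a type and its
# providers, assuming puppet's spec/unit tree mirrors lib/puppet.
specs_for() {
  base="$1"
  [ -f "spec/unit/type/${base}_spec.rb" ] && echo "spec/unit/type/${base}_spec.rb"
  if [ -d "spec/unit/provider/$base" ]; then
    ls "spec/unit/provider/$base"
  fi
}
```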
 

Michael Stahnke

unread,
Jan 22, 2014, 11:51:08 PM1/22/14
to puppe...@googlegroups.com
On Tue, Jan 14, 2014 at 7:07 AM, Jason Antman <ja...@jasonantman.com> wrote:
I thought I'd throw in my 2 cents, as a long-time puppet user, current PE customer, and community member trying to make more code contributions...

First off, this thread has been great. I was going to quote a few replies, but there have been so many good ideas, that's sort of pointless. I fully support Daniel's plan to push tier2 directly to modules. More than that, I'd like to see it implemented in a way that I (an "advanced user") can easily opt-out of a given tier2 module (did someone say Nagios?) and replace it with something external.

I'd like to share a realization that I recently had, which could perhaps be an aid in delineating what's tier1 vs tier2: I'd always assumed that everything that shipped with Puppet was tested. Period. It was unclear to me until I started trying to use puppetlabs' forge modules with PE (and found that one or two in particular didn't work), and started actually submitting some PRs against core, that there were varying levels of support, and that just because Puppet might ship with a provider for X doesn't mean that it's fully validated and tested against that (i.e. Andy's comments about FreeBSD). (As an aside, I'd also assumed that what I remember hearing years ago was true, and there was no internal split between PE and FOSS - that PE was "just FOSS in a prettier box, with support and some value-adds", presumably that the only testing done to PE and not FOSS was around Console and packaging. Andy's comment that PE is tested on more platforms than FOSS was something I'd always written off as anti-Puppet conspiracy theory.)

Just for clarity here: there is more testing around PE than around Puppet. However, there aren't really additional tests in the PE system that aren't also run on Puppet itself (other than some extra platform coverage, e.g., AIX, at least to the best of my knowledge). Most of the additional testing comes from exercising Puppet working with a specific version of Facter, with PuppetDB, with a UI, with Passenger and a specific version of Ruby, etc.



As such, for the benefit of the community, I'd suggest that anything that (a) isn't fully tested and vetted by PL (whatever that means) or (b) is known to be broken (i.e. naginator) be split out into tier2, as modules, with a clear delineation to explain to users that these are essentially sub-par and warranty-free. (I suppose this largely falls in line with Dustin's comment about Python core vs modules).

Just for clarity here: it's all warranty free. See section 7 of the Apache License. http://www.apache.org/licenses/LICENSE-2.0.txt

Jason Antman

unread,
Jan 24, 2014, 12:54:00 PM1/24/14
to puppe...@googlegroups.com
Thanks for the clarifications.


On 01/22/2014 11:51 PM, Michael Stahnke wrote:

As such, for the benefit of the community, I'd suggest that anything that (a) isn't fully tested and vetted by PL (whatever that means) or (b) is known to be broken (i.e. naginator) be split out into tier2, as modules, with a clear delineation to explain to users that these are essentially sub-par and warranty-free. (I suppose this largely falls in line with Dustin's comment about Python core vs modules).

Just for clarity here: it's all warranty free. See section 7 of the Apache License. http://www.apache.org/licenses/LICENSE-2.0.txt
I suppose, given its use in the licensing world, I should've picked a word other than "warranty". By "warranty-free", I was referring to things that PL doesn't test, vet, or claim to know whether they're working or broken. Specifically re: "clear delineation", I was referring to the current state of the PL forge modules, where some of them (puppetlabs-puppet) are just totally ancient, and some of them (puppetlabs-postgres and -apache, IIRC) don't work with PE. Anything that's split out into a module published by PL but is known to be in a poor state (or in an unknown state) should clearly indicate that; absent such a disclaimer, the instinctive assumption (at least mine) is "oh, this is puppetlabs-*, it must be best-of-breed and will work well in my puppet environment."

-Jason

Andy Parker

unread,
Jan 29, 2014, 2:36:59 PM1/29/14
to puppe...@googlegroups.com
Sorry for falling behind on this thread. I've been working away on items for PUP-536 (PUP-1118 specifically) and am now incredibly behind on email.

On Fri, Jan 17, 2014 at 5:48 PM, Jason Antman <ja...@jasonantman.com> wrote:
Re: Deepak's message about community maintainers... that sounds
wonderful to me. I'm not sure they'd even need commit access, perhaps it
would be feasible to operate on a model where all PRs against a given
provider are handled by a maintainer, and once they sign off a PL
employee does the merge. That would allow the burden of triage and
review to be handled by a community maintainer, while still allowing
someone @puppetlabs.com to have the final approval/commit.


I'd be fine with non-PL committers. Partly this is selfish because we struggle to keep up with all of the changes that are wanted, and it would free us up to work on some more radical changes that we've had ideas about for a while (several have shown up in threads here).

I think if someone is a committer they should be trusted enough to make appropriate changes and get the necessary review. If it becomes a problem we can always revoke access (although that would be a pretty extreme action, I think).
Yes, docs.puppetlabs.com is already very good. We just need to keep working on it and make it the most commonly found source, so that as changes happen, the documentation people find reflects the current state of affairs. This doesn't stop people from putting together guides and blog posts about ideas they try out and good practices they find, but it might reduce confusion from finding out-of-date information.
 
-Jason


--
You received this message because you are subscribed to the Google Groups "Puppet Developers" group.