State of Facter development / bug fixing

Peter Meier

Oct 17, 2012, 8:32:52 AM
to puppe...@googlegroups.com
Hi all,

I'm wondering what the state of development, and especially bug
fixing, of Facter is:

Currently I have on my (very own) list three really nasty bugs that
make it really hard to rely on the values of core facts (e.g. virtual,
is_virtual, ipaddress) within manifest/module development: they are
currently broken, and we have had to roll out hot-fixes or give
special guidelines to puppet users.

All of these bugs were reported months ago and patches have been
proposed, but as far as I personally can see, nothing has been done to
review the pull requests or to fix the bugs, even though these values
are, in my opinion, crucial to the usefulness of facter and hence
puppet.

I'm speaking of these three bugs:

https://projects.puppetlabs.com/issues/8210 - KVM guests are detected
as physical -> Without using our own patched version of facter, we
currently cannot reliably detect what kind of system we are running
on. -> Install smartd on virt-guests?!
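
(To illustrate the kind of heuristic involved - this is only a sketch, not the patch attached to the ticket: the fact name virtual_guess is made up, and it assumes a Linux guest whose /proc/cpuinfo shows the QEMU CPU model or whose DMI product name reads "KVM".)

    # Hypothetical sketch only; the fact name and checks are illustrative.
    Facter.add(:virtual_guess) do
      confine :kernel => 'Linux'
      setcode do
        cpuinfo = File.read('/proc/cpuinfo') rescue ''
        product = File.read('/sys/class/dmi/id/product_name') rescue ''
        if cpuinfo =~ /QEMU Virtual CPU/ || product =~ /KVM/
          'kvm'
        else
          'physical'
        end
      end
    end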

http://projects.puppetlabs.com/issues/10625 -> Xen is not reported
properly -> Same issue as above

http://projects.puppetlabs.com/issues/10278 -> facter reports
different facts depending on the locale of the current environment ->
manifests might use the values of certain facts to determine how a
host needs to be configured (quite a common pattern) -> due to this
bug it might happen that, if a puppet engineer with a locale other
than C (or en_US) runs puppet via the CLI, things might change
drastically, and so might how a node is configured -> you can break
the host just by re-applying unchanged manifests on an unchanged node!
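
(Again only a sketch of the kind of workaround involved, not the actual proposed patch: the fact name is hypothetical, and it assumes a Linux box where /sbin/ifconfig prints "inet addr:" lines once it is forced into the C locale.)

    # Hypothetical sketch: pin the locale so parsing does not depend on the
    # caller's LANG/LC_ALL (the root cause of issue 10278).
    Facter.add(:ipaddress_locale_safe) do
      confine :kernel => 'Linux'
      setcode do
        output = Facter::Util::Resolution.exec('env LC_ALL=C /sbin/ifconfig 2>/dev/null')
        if output && output =~ /inet addr:\s*((?:\d{1,3}\.){3}\d{1,3})/
          $1 unless $1 =~ /^127\./
        end
      end
    end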

The goal of facter is to provide values for manifest/module
developers. Based on these values, developers can then
programmatically decide how things should be configured. But how
should one rely on these values if they are not reliable and nobody
(excuse me) cares to fix them?

Or is the idea really that in the future everybody runs their very
own patched version of facter, and with each new (minor) release
everybody has to check whether it breaks their whole infrastructure?

And to close my rant: why should one be interested in shiny new
inventory tools if the tools that provide their values are broken and
not fixed, and hence provide inaccurate information? Or in other
words: do you really want to build new things while not being
interested in fixing the groundwork?

Thanks for listening!

~pete

PS: Although things might sound harsh, I'm not pissed off. I really
value the work you do. I'm just trying to bring the questions I have
to your attention. They come from a user's point of view, but this is
how I currently see things.

Andy Parker

Oct 17, 2012, 12:48:14 PM
to puppe...@googlegroups.com
Hi Peter,

On Wed, Oct 17, 2012 at 5:32 AM, Peter Meier <peter...@immerda.ch> wrote:
> Hi all,
>
> I'm wondering what the state of the development and especially bug fixing of
> Facter is:

As for the state of puppet development, we've been trying to keep the
tickets in redmine as up to date as possible. We are using the target
version field as a way of communicating what we are hoping to get into
a release, but it isn't a guarantee that it will actually make it in.
We have been getting a bit of work on Puppet done, but Facter has been
falling by the wayside.

>
> Currently I have on my (very own) list 3 really nasty bugs, that make it
> really hard to rely on values of core facts (e.g.: virtual, is_virtual,
> ipaddress etc.) within manifest/module development, as currently they are
> broken and we had to rollout hot-fixes or give special guidelines to puppet
> users.
>
> All of these bugs have been reported months ago, patches have been proposed,
> but as how I - personally - see things nothing have been done to review the
> pull requests nor to fix the bugs. Although, these values are - in my
> opinion - very crucial for the usefulness of facter and hence puppet.
>

This is an area that we've been really weak in: keeping on top of
patches from people outside of Puppet Labs. We've been trying to have
a (rotating) person dedicated to responding to those, but I'll admit
that I've often had to pull them off in order to work on other things.
Without that person looking at pull requests, they end up falling by
the wayside. This is really not a good situation, because a huge
amount of what makes puppet work in all of these environments is the
contributions from others.

So yeah...this is a big problem right now. I'm open to suggestions
about how we could work differently to fix it. Maybe giving out commit
access to more people outside of Puppet Labs?

> I'm speaking of these 3 Bugs:
>
> https://projects.puppetlabs.com/issues/8210 - kvm guests are detected as
> physical -> Without using our own patched version of facter, we can
> currently not detect reliably on what kind of system we are running. ->
> Install smartd on virt-guests?!
>
> http://projects.puppetlabs.com/issues/10625 -> xen is not reported properly
> -> Same issue as above
>
> http://projects.puppetlabs.com/issues/10278 -> facter reports different
> facts depending on the locale of the current environment -> manifests might
> use values of certain facts to determine how a host needs to be configured
> (quite a common pattern) -> due to this bug it might happen, that if a
> puppet engineer with a locale different than C (or en_US) runs puppet via
> cli, things might drastically change and so also how a node is configured ->
> break the host by just re-applying unchanged manifests on an unchanged node!
>
> The goal of facter is to provide values for manifest/module developers.
> Based on these values developers can then programatically decide how things
> should be configured. But how should one rely on these values if they are
> not reliable and nobody (excuse me) cares to fix them?
>

Yep

> Or is it really the idea that in the future everybody runs their very own
> patched version of facter and each time there is a new (minor) release
> everybody has to check whether this does not break their whole
> infrastructure?
>

Oh god no. I don't want to be in that situation. I think that is
almost the situation we are in right now, though.

> And to close my rant: Why should one be interested in new shiny inventory
> tools if the tools that provide the values for them are broken and not
> fixed, hence provide inaccurate information? Or in other words: Do you
> really want to build new things, although you're not interested in fixing
> the groundwork?
>

I agree completely. I've been trying to shift focus to shoring up
the groundwork, but the lure of shiny new things is hard to resist.

> Thanks for listening!
>
> ~pete
>
> PS: Although things might sound harsh, I'm not pissed off. I really value
> the work you do. I'm more trying to bring the questions I have to your
> attention. Which is from a user's point of view, but this is how I currently
> see things.
>

Hey, I think you have some good reasons to be pissed off.

Daniel Pittman

Oct 17, 2012, 3:28:26 PM
to puppe...@googlegroups.com
On Wednesday, October 17, 2012 9:48:16 AM UTC-7, Andy Parker wrote:
On Wed, Oct 17, 2012 at 5:32 AM, Peter Meier <peter...@immerda.ch> wrote:  
> Currently I have on my (very own) list 3 really nasty bugs, that make it
> really hard to rely on values of core facts (e.g.: virtual, is_virtual,
> ipaddress etc.) within manifest/module development, as currently they are
> broken and we had to rollout hot-fixes or give special guidelines to puppet
> users.

For what it is worth, I think part of the challenge that Andy and his team face is that many of those core facts are widely used, but broken by design.

`virtual` and `ipaddress` are classic examples of that:

`virtual` is "what virtualization technology is in use on this machine", which becomes ... complex when, for example, I have a VMware-hosted VM that runs as an OpenVZ host.  Technically it is both `virtual = vmware` *and* `virtual = openvzhn` at the same time, but the single-valued virtual fact can't reflect that.
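
(Purely to make that concrete, here is a sketch of what a richer fact could return. The name virtual_layers and the two detection checks are hypothetical stand-ins for Facter's real per-technology probes.)

    # Hypothetical sketch: collect every virtualization layer we can see
    # instead of forcing a single winner into one string.
    Facter.add(:virtual_layers) do
      confine :kernel => 'Linux'
      setcode do
        layers = []
        product = Facter::Util::Resolution.exec('dmidecode -s system-product-name 2>/dev/null').to_s
        layers << 'vmware'   if product =~ /VMware/
        layers << 'openvzhn' if File.exist?('/proc/vz/veinfo')  # assumed marker for an OpenVZ hardware node
        layers.empty? ? 'physical' : layers.join(',')
      end
    end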

`ipaddress` is almost as bad; it is the "primary" IP address of the server, which means "the best guess at which address might matter".  There isn't a meaningful and universal definition of that, and it often defaults to "take a guess", or "the first", or whatever.  (The OS X version gets this more right, by using "the address attached to the interface with the default route."  As long as there is *the* default route, this is closer to user expectations.  It breaks down in any complex case (eg: multiple default routes, two /1 routes for VPN magic, etc.).)
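
(For illustration only - this is not how Facter computes `ipaddress` today: a sketch of the "interface with the default route" heuristic on Linux, assuming the iproute2 `ip` command is available; the fact name is made up.)

    # Hypothetical sketch: ask the kernel which source address it would use to
    # reach a public IP, i.e. follow the default route.
    Facter.add(:ipaddress_defaultroute) do
      confine :kernel => 'Linux'
      setcode do
        route = Facter::Util::Resolution.exec('env LC_ALL=C ip route get 8.8.8.8 2>/dev/null')
        route[/src\s+((?:\d{1,3}\.){3}\d{1,3})/, 1] if route
      end
    end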

We can't just ditch those facts, because that would break a lot of people.  Things don't really work if we start returning multiple, packed values in the `virtual` string; code that depends on `virtual == vmware` will break given `virtual = vmware,openvzhn`.

You can't just ditch the attractive nuisance of "the primary" ipaddress of a machine, because so many people insist that this is a meaningful and universal property of machines, when it is really just a "best guess" no matter which way you slice it.  (the "DNS lookup my hostname" approach is just as prone to guessing wrong. :)

None of that is about bugs being ignored, but it might help explain why some of those bugs are so sticky...

Peter Meier

Oct 17, 2012, 5:01:18 PM
to puppe...@googlegroups.com

> None of that is about bugs being ignored, but it might help explain
> why some of those bugs are so sticky...

I'm all for a new way of providing facts, like a more nested data
structure (e.g. a hash). And I think the point at which this is
introduced would also be a good point to change the meaning of their
values, because if people want to use the new structure, they have to
adapt their manifests anyway.

However, we are more or less still in the 1.6.x series; the virtual
value used to work in 1.6.1 and has been broken since 1.6.2. 1.6 is
now at 1.6.13 and the bugs reporting these issues are roughly 12
months old. And virtual is really one of the facts where people's eyes
begin to shine with joy when they see the possibilities they gain by
using it within their manifests.

And the issue I referred to with ipaddress is that two people logged
into the same machine as root might get totally different facts based
on their locale. Yes, locales are hard to deal with, and at this point
I'm actually not 100% sure, but as far as I recall this wasn't the
case in some of the early 1.6 releases; at least we never had the
problem until we went to newer 1.6 releases. So I see that as a
regression as well.
Even if it isn't a regression: I still don't think that a core tool
such as facter should behave differently based on such hidden and not
very well-known environment variables as LANG/LC_ALL. You really have
to know facter in some detail to get an idea of what might go wrong
when you get different output for two (nearly) identical sessions, and
to combine that with the knowledge that your co-worker prefers to read
German rather than English (for whatever reason). This is not that
obvious, and hence people will see facter as unpredictable and
unstable.

In the past few months I have bootstrapped a couple of new puppet
environments, and each time I had to tell people that they can't use
the official facter release and should take my patched RPM instead,
because things are broken, have been reported, and will be fixed (at
some point...). That was also the case yesterday, which annoyed me and
made me complain.

So while I totally support your vision of a better data structure and
more meaningful values within facter, I don't really see that as a
reason not to fix things that used to work and got broken, especially
things that are crucial to the value of facter as a tool: providing
reliable and consistent values about a system.

~pete

Miguel Di Ciurcio Filho

Oct 17, 2012, 10:14:12 PM
to puppe...@googlegroups.com
On Wed, Oct 17, 2012 at 1:48 PM, Andy Parker <an...@puppetlabs.com> wrote:
>
> So yeah...this is a big problem right now. I'm open to suggestions
> about how we could work differently to fix it. Maybe giving out commit
> access to more people outside of Puppet Labs?
>

Puppet usage in general has been growing at a fast pace in the last
year or so; just look at the talks at PuppetConf. No doubt about that.

So this huge growth will naturally put more pressure on solving bugs,
getting rid of old stuff, updating documentation, etc, because more
and more people are using Puppet.

Now my main point.

In my really humble opinion, PuppetLabs needs to decide what it wants to be.

a) PuppetLabs wants to be the sole benevolent dictator of the project,
keeping the keys to the kingdom. It decides who gets commit access. It
decides which bugs will be left behind, because developers are working
on Puppet Enterprise and customers who put money into the company will
always come before non-paying users. Code contributions from outside
the company will happen, but they will always be small, thanks to the
CLA. Basically being more of a cathedral than a bazaar.

b) Make Puppet a real community project, where the "Puppet Community
Project" (maybe a different name) is the upstream of Puppet Enterprise
or other PuppetLabs projects. Like Citrix did to Xen and CloudStack,
Red Hat does with many other projects KVM, Linux, oVirt, OpenStack is
the upstream for many companies, Samba, Apache HTTP server is part of
many proprietary solutions. The list could go on and on.

If PuppetLabs continues to juggle the community's expectations, like
Peter's, against those of paying customers, because you guys control
almost everything, this type of tension will keep happening and people
might get seriously frustrated. History has shown that.

So as I said, just my 0.2 cents.

Peter Meier

Oct 18, 2012, 3:14:25 AM
to puppe...@googlegroups.com

Hi Andy,

Thanks for your comments on my mail.

>> I'm wondering what the state of the development and especially
>> bug fixing of Facter is:
>
> For the state of puppet development we've been trying to keep the
> tickets in redmine as up to date as possible. We are using the
> target version field as a way of communicating what we are hoping
> to get in a release, but it isn't a guarantee that it will actually
> make it in. We have been getting a bit of work on Puppet done, but
> Facter has been falling by the wayside.

Yes, this is also more or less the impression that I have.

>> Currently I have on my (very own) list 3 really nasty bugs, that
>> make it really hard to rely on values of core facts (e.g.:
>> virtual, is_virtual, ipaddress etc.) within manifest/module
>> development, as currently they are broken and we had to rollout
>> hot-fixes or give special guidelines to puppet users.
>>
>> All of these bugs have been reported months ago, patches have
>> been proposed, but as how I - personally - see things nothing
>> have been done to review the pull requests nor to fix the bugs.
>> Although, these values are - in my opinion - very crucial for the
>> usefulness of facter and hence puppet.
>>
>
> This is an area that we've been really week in: keeping on top of
> patches from people outside of Puppet Labs. We've been trying to
> have a (rotating) person dedicated to responding to those, but I'll
> admit that I've often had to pull them off in order to work on
> other things. Without that person looking at pull requests they end
> up falling by the wayside. This is really not a good situation,
> because a huge about of what makes puppet work in all of these
> environments is the contributions from others.
>
> So yeah...this is a big problem right now. I'm open to suggestions
> about how we could work differently to fix it. Maybe giving out
> commit access to more people outside of Puppet Labs?

This is definitely not an easy problem to solve and I don't really
have a good proposal to fix it.

What I wanted to add is that having pull requests go unreviewed and
unmerged for a long period also makes it hard to contribute back.

Because usually, after months, a person is way out of the context of
the specific bug they tried to fix back then. So if there are minor
things that need to be discussed or improved in their patch, people
need to a) find time to look at your comments in detail, but even more
importantly b) find their way back into the bug's context.

Just as your dedicated person is less productive if she/he keeps
being pulled off the tickets to review, it is even more severe for a
person whose daily job is not to dig around in puppet's/facter's
codebase. Also keep in mind that probably (my guess) most people
outside of Puppet Labs who are contributing are not professional
software developers; they are usually sysadmins trying to fix a
problem that keeps them from working productively. It might have taken
them quite some effort to dig into the codebase and make that fix.

I know that it often took me, too, some time to respond to
comments/requests made on pull requests/reports/whatever that I
submitted. But by the time I got feedback I was already quite far away
from that particular problem, maybe even in a different project from
the one where I encountered it. For me, it is then usually quite some
effort to get back into that context to provide a proper answer. And
for such a bigger step I need to find time and maybe also an
environment where I can reproduce the problem again, which usually
takes even more time, so the whole feedback loop takes even longer,
from my side as well. And hey, I had internally rolled out a patch
that kept me from being bugged by the problem.
All of this would certainly be easier if I were still in that
context/project/...

And in my opinion it is even worse for first-time contributors if
their fixes/pull requests or even bug reports are lying around for
months without a comment/review: Is Puppet Labs actually interested in
my contribution? Was it worth making? Should I do it again next time,
or just roll my own patched version?

So personally I think what is missing is that people get feedback
while things are still fresh and present in their minds. This would
probably reduce the round-trip time within the feedback loops.
Also, I would not be sad if people just took the idea of a proposed
pull request and implemented a more proper solution that gets merged
right away, rather than pushing things back to me. I know that others
might see that differently; that's also why I'm writing my personal
opinion here. But I'm more interested in things getting fixed than in
having exactly my commits in the codebase.

~pete

Alex Harvey

Oct 18, 2012, 8:10:41 AM
to puppe...@googlegroups.com
On Thursday, October 18, 2012 3:48:16 AM UTC+11, Andy Parker wrote:

This is an area that we've been really weak in: keeping on top of
patches from people outside of Puppet Labs. We've been trying to have
a (rotating) person dedicated to responding to those, but I'll admit
that I've often had to pull them off in order to work on other things.
Without that person looking at pull requests, they end up falling by
the wayside. This is really not a good situation, because a huge
amount of what makes puppet work in all of these environments is the
contributions from others.

So yeah...this is a big problem right now. I'm open to suggestions
about how we could work differently to fix it. Maybe giving out commit
access to more people outside of Puppet Labs?
 
If there were an experienced developer outside of Puppet Labs who was willing to be a sort of volunteer team leader, with commit rights, dedicated to liaising between the community contributors and Puppet Labs - maybe that could be a solution.  Or, failing that, if we at least had a way of getting in touch with the right person at Puppet Labs to get our issues looked at, that might help.  It doesn't seem ideal to be spamming the puppet developers mailing list just to get someone to look at an outstanding pull request.

-Alex

Ashley Penney

Oct 18, 2012, 9:58:41 AM
to puppe...@googlegroups.com
On Wed, Oct 17, 2012 at 10:14 PM, Miguel Di Ciurcio Filho
<miguel...@gmail.com> wrote:

> b) Make Puppet a real community project, where the "Puppet Community
> Project" (maybe a different name) is the upstream of Puppet Enterprise
> or other PuppetLabs projects. Like Citrix did to Xen and CloudStack,
> Red Hat does with many other projects KVM, Linux, oVirt, OpenStack is
> the upstream for many companies, Samba, Apache HTTP server is part of
> many proprietary solutions. The list could go on and on.

I think this is probably the only way to stop things from collapsing
under the weight of community expectations at this point. I think
opening up commit access to outside developers would be an enormously
dangerous, but potentially extremely rewarding, way to go. I know
that I've been discouraged in my attempts to fix things in facter by
the difficulty of getting large-scale changes reviewed and merged in.

Obviously, if this is the route things go, I think the addition of
developers would have to be carefully controlled at the beginning, in
order not to end up with chaos and a blob of code that Puppetlabs
themselves can no longer use productively; but it's clear that
Puppetlabs simply cannot hire enough developers internally to improve
things at the rate that the community wishes for.

Thanks,

Jeff McCune

Oct 18, 2012, 12:31:54 PM
to puppe...@googlegroups.com
On Wed, Oct 17, 2012 at 5:32 AM, Peter Meier <peter...@immerda.ch> wrote:
Hi all,

I'm wondering what the state of development, and especially bug fixing, of Facter is:

Currently I have on my (very own) list three really nasty bugs that make it really hard to rely on the values of core facts (e.g. virtual, is_virtual, ipaddress) within manifest/module development: they are currently broken, and we have had to roll out hot-fixes or give special guidelines to puppet users.

All of these bugs were reported months ago and patches have been proposed, but as far as I personally can see, nothing has been done to review the pull requests or to fix the bugs, even though these values are, in my opinion, crucial to the usefulness of facter and hence puppet.

I'm speaking of these three bugs:

https://projects.puppetlabs.com/issues/8210 - KVM guests are detected as physical -> Without using our own patched version of facter, we currently cannot reliably detect what kind of system we are running on. -> Install smartd on virt-guests?!

http://projects.puppetlabs.com/issues/10625 -> Xen is not reported properly -> Same issue as above

http://projects.puppetlabs.com/issues/10278 -> facter reports different facts depending on the locale of the current environment -> manifests might use the values of certain facts to determine how a host needs to be configured (quite a common pattern) -> due to this bug it might happen that, if a puppet engineer with a locale other than C (or en_US) runs puppet via the CLI, things might change drastically, and so might how a node is configured -> you can break the host just by re-applying unchanged manifests on an unchanged node!

Peter,

Thanks for pointing these out.  I'll be working through these issues starting today.

-Jeff

Jeff McCune

Oct 18, 2012, 1:26:43 PM
to puppe...@googlegroups.com
Do you happen to have a KVM guest I could log into and work on the patch for 8210?

-Jeff 

Luke Kanies

Oct 18, 2012, 1:33:31 PM
to puppe...@googlegroups.com
I agree with all of this. We've done a great job of building a self-sustaining user community, but we clearly have not delivered that on the development side.

There are outside contributors with commit access, but not many, and AFAIK they aren't able to spend much time on the project.

I would *love* to have more work on Puppet coming from outside of our organization. I've always wanted that, and it's always pained me that we never really figured it out.

How do we do this? It's not as simple as just giving a bunch of people commit access, is it?

--
Luke Kanies | http://about.me/lak | http://puppetlabs.com/ | +1-615-594-8199

Ashley Penney

Oct 18, 2012, 4:28:33 PM
to puppe...@googlegroups.com
On Thu, Oct 18, 2012 at 1:33 PM, Luke Kanies <lu...@puppetlabs.com> wrote:

> I agree with all of this. We've done a great job of building a self-sustaining user community, but we clearly have not delivered that on the development side.
>
> There are outside contributors with commit access, but not many, and AFAIK they aren't able to spend much time on the project.
>
> I would *love* to have more work on Puppet coming from outside of our organization. I've always wanted that, and it's always pained me that we never really figured it out.
>
> How do we do this? It's not as simple as just giving a bunch of people commit access, is it?

It could be that simple! Especially if it were combined with a move
towards developing on master, with Puppetlabs merging commits off
master into release branches. I think you could easily give an
experiment like this a try, with the understanding that Puppetlabs can
revert commits if they feel they are damaging and they can't work with
the author to resolve the problem.

That and strict control over merging into release branches feels like
a solid enough model that it could be tried without an enormous risk.
Worst case you could just scrap the idea and revert all the commits
that caused the experiment to fail.

Thanks,

Miguel Di Ciurcio Filho

Oct 18, 2012, 7:18:54 PM
to puppe...@googlegroups.com
On Thu, Oct 18, 2012 at 2:33 PM, Luke Kanies <lu...@puppetlabs.com> wrote:
>
> I would *love* to have more work on Puppet coming from outside of our organization. I've always wanted that, and it's always pained me that we never really figured it out.
>
> How do we do this? It's not as simple as just giving a bunch of people commit access, is it?

No, that is not going to help at all. Want to really have more
contributions? Short answer: _get rid of the bureaucracy_

Now the long answer, based on what is written in CONTRIBUTING.md.

1) Drop the CLA.

- http://www.flamingspork.com/blog/2012/05/28/contributor-agreements-kill-contributions/

"A low barrier to entry is what has made the largest, most successful
free software projects what they are today. If you’re wanting your
project to be an open source project and not an open source product –
then you too must set a low barrier to entry. Contributor Agreements
significantly raise the barrier to entry. Suddenly my 10 line patch to
fix a bug turns into a discussion with my company lawyers, your
company lawyers and goes from taking an extra 5 minutes to send an
email with a patch to a mailing list into something that takes hours
and hours of my time."

- http://blogs.computerworlduk.com/simon-says/2010/11/contributor-agreements-say-your-contribution-is-unwelcome/index.htm

" If you're a company hoping to create a new open source project, take
heed; that advice you are getting to have a contributor agreement may
well lead to you getting no co-developers. As long as that's what you
want - well, I suppose that's OK, but intentionally discouraging
community is hardly the open source way."

- https://lwn.net/Articles/414051/ (LPC: Michael Meeks on LibreOffice
and code ownership)

This LWN article is a must read, including the comments. But I would
highlight this:
"Copyright assignment does not normally deprive a contributor of the
right to use the contributed software as he or she may wish. But it
reserves to the corporation receiving the assignments the right to
make decisions regarding the complete work. We as a community have
traditionally cared a lot about licenses, but we have been less
concerned about the conditions that others have to accept. Copyright
assignment policies are a barrier to entry to anybody else who would
work with the software in question. These policies also disrupt the
balance between developers and "suit wearers," and it creates FUD
around free software license practices."


2) Drop the requirement for accounts on GitHub and Redmine.

Why can't I just `git send-email` patches to the mailing list, receive
feedback on the mailing list, `git send-email` again, and have the
maintainer apply the patch?

Requiring a ticket/issue for every single code contribution does not
make sense.

Look at all the steps and requirements for submitting a simple patch.
There is a lot of back and forth, copying and pasting URLs on the
ticket, etc.

3) Develop and discuss stuff in public

You guys said a few days ago that Dashboard is getting axed and a new
ENC is in the works. Well, would you mind sharing what is being worked
on? Design ideas? You might want to get some feedback early.

How about, before doing anything new, sending some simple RFCs to
this list? Look back at the first link in point 1; there is a great
question written there: are you an open source _project_ or an open
source _product_?

Bottom line:

Summing up points 1 and 2: it will continue to be hard to attract new
volunteers. New volunteers begin by fixing typos, and all these
requirements will never motivate new blood to come along and hack. I
include myself in this, mostly because of the CLA.

Luke Kanies

Oct 18, 2012, 11:03:58 PM
to puppe...@googlegroups.com
I'm not a big fan of a complete wild-west with anyone having commit rights, but I agree that it's better to allow failure and revert than to work toward perfection up front.

I know Andy and Deepak are working furiously on this right now, even while I make their lives more complicated by emailing the world. :)

Luke Kanies

Oct 18, 2012, 10:53:20 PM
to puppe...@googlegroups.com
Note that we don't require copyright assignment and never have. We have a nearly-standard Apache CLA, which just confirms that we have the right to distribute the code you're contributing.

We're actually in the process of figuring out whether we can remove these. Every lawyer we're talking to is saying no, but every one of us wants to get rid of them, so I expect us all to end up ignoring the lawyers. The only reason we instituted them in the first place is because we were GPL'd, and we expected to either make commercial forks (which we didn't do) or switch licenses (which we did).

> 2) Drop the requirement for accounts on GitHub and Redmine.
>
> Why can't I just `git send-email` patches to the mailing list, receive
> feedback on the mailing list, `git send-email` again, maintainer
> applies the patch.
>
> Requiring to have a ticket/issue for every single code contribution
> does not make sense.
>
> Look at all the steps and requirements for submitting a simple patch.
> There is a lot of back and forth, copying and pasting URLs on the
> ticket, etc.

I'm surprised to hear this. I would expect that github would be dramatically easier for contribution than emailing or whatever.

I know that I personally never was able to extract patches from a mailing list, because I use IMAP, not mboxes. Yes, I know I could do filtering, set up mutt explicitly for this, and relearn how to use it just for patches, but… I could also just go to github and see a clean, clear list with support for comments, email, and everything else I want.

I could see reducing to one account type, but I couldn't see us ever switching to email for patch management unless git changes its expectations of how people do email to include the actual internet. No, I'm not bitter. :/

I'd prefer to have tools to automatically sync redmine and the pull requests, rather than getting rid of one. We've done a lot of work there for syncing redmine and trello, for instance, but I don't think we've done much with github yet.

> 3) Develop and discuss stuff in public
>
> You guys said a few days ago that Dashboard is getting axed and a new
> ENC is on the works. Well, would you mind to share what is being
> worked on? Design ideas? You might want to get some feedback early.

We do a lot of this, but we're mixed in how consistent we are. Part of the reason is just the mixed bag of lots of us sitting in the same room - it's more expensive to communicate with people outside the building, so we tend to suck at it more than we should.

I will say that I'm still a bit gun-shy of posting all of our ideas publicly, at least anything other than stuff we're basically ready to work on now. I'm a bit gun-shy for 2 reasons: First, I do have some legitimate competitive concerns, but most importantly, it's a lot of work to get it all out there and there's questionable value.

That being said, we should obviously be more open about what we know we're going to work on and what we think we're going to build. We just recently split dev teams so that our OSS developers are much more independent of the commercial teams, which means that they'll naturally start looking outward more.

In the short term, I'll commit that we'll publish the ENC stuff ASAP. Note that I'm stepping out as a rogue CEO here and I've no idea of the actual consequences of my statement here, but it should clearly be open from the beginning. Note, though, that we've been doing a lot of work already talking to as many people as possible about this, so it's not like we've been just doing all of the thinking inside our walls.

> How about before doing anything new, send to this list some simple
> RFCs. Look back at the first link of point 1, it is written there a
> great question: are you an open source _project_ or an open source
> _product_?

We should do way more here, I agree.

I don't actually know the difference between an open source project and an open source product. It's all just free code on the internet, right?

> Bottom line:
>
> Summing up points 1 and 2, it will continue to be hard to have new
> volunteers. New volunteers begin fixing typos, and all these
> requirements will never motivate new blood to come along and hack I
> include myself in this, mostly because of the CLA.

Just so I understand, is the problem with the CLA just a too-high barrier of entry, or is it that you don't agree to the terms?

Miguel Di Ciurcio Filho

Oct 19, 2012, 4:02:32 PM
to puppe...@googlegroups.com, da...@puppetlabs.com
On Thu, Oct 18, 2012 at 11:53 PM, Luke Kanies <lu...@puppetlabs.com> wrote:
>
> Note that we don't require copyright assignment and never have. We have a nearly-standard Apache CLA, which just confirms that we have the right to distribute the code you're contributing.
>

There, right there, that is the problem with the CLA. Do you see how
it is pointless? I'm already giving you code under the Apache License.
PuppetLabs and any other entity or person on this planet have all the
rights and obligations stated in the Apache License. There is no need
for a second document; PuppetLabs has the right to distribute my
contributions just as everybody else does under the Apache License.

> We're actually in the process of figuring out whether we can remove these. Every lawyer we're talking to is saying no, but every one of us wants to get rid of them, so I expect us all to end up ignoring the lawyers. The only reason we instituted them in the first place is because we were GPL'd, and we expected to either make commercial forks (which we didn't do) or switch licenses (which we did).
>

There, right there too. The reason why you instituted the CLA is the
same reason that it fundamentally keeps people away from seriously
contributing to the project: PuppetLabs gets an extra privilege beyond
the licensing of the code being contributed.

The license of the project is a pact about how a community wants to
share its work, and every single member of the community _must not_
have any extra privilege.

>
> I know that I personally never was able to extract patches from a mailing list, because I use IMAP, not mboxes. Yes, I know I could do filtering, set up mutt explicitly for this, and relearn how to use it just for patches, but… I could also just go to github and see a clean, clear list with support for comments, email, and everything else I want.
>

About git send-email: maybe it is just me. I've contributed to
other projects where there is no need to open tickets; you just send
the patch to the mailing list and take the heat of the feedback, plain
and simple.

>
> We do a lot of this, but we're mixed in how consistent we are. Part of the reason is just the mixed bag of lots of us sitting in the same room - it's more expensive to communicate with people outside the building, so we tend to suck at it more than we should.
>

That is definitely understandable. When there is a mix like this, it
is hard to keep the outside community in the loop.

> I will say that I'm still a bit gun-shy of posting all of our ideas publicly, at least anything other than stuff we're basically ready to work on now. I'm a bit gun-shy for 2 reasons: First, I do have some legitimate competitive concerns, but most importantly, it's a lot of work to get it all out there and there's questionable value.
>
> That being said, we should obviously be more open about what we know we're going to work on and what we think we're going to build. We just recently split dev teams so that our OSS developers are much more independent of the commercial teams, which means that they'll naturally start looking outward more.
>

That is a very nice change.

>
> I don't actually know the difference between an open source project and an open source product. It's all just free code on the internet, right?
>

It is more than just free code on the internet, IMHO. Quoting Peter,
from the message that started all this:

"Or is it really the idea that in the future everybody runs their very
own patched version of facter and each time there is a new (minor)
release everybody has to check whether this does not break their whole
infrastructure?

We all have his code; it is free and on the internet. As he points
out, do we have to maintain our own branches with fixes that don't get
merged? And they might not get merged, or there is a lack of manpower,
for reasons that I'm trying to explain, especially the CLA.

The reasons you gave for being gun-shy about posting ideas, IMHO, show
that you are more concerned about the product than the project.

Open Source/Free Software projects must not be afraid of competition
and must not be afraid of failing.

I don't want to be picky, but it is the example that comes to mind
because it concerned me. In Puppet 3.0 `puppet kick` now shows a
deprecation warning. On other projects, changes like that are
presented to the project and discussed. PuppetLabs' developers opened
the ticket, committed the code, and it was done. I noticed the change
after the fact and by accident.

This is product behavior, not project behavior. My expectation for a
project is that:

1) Someone wants to deprecate/change/remove/create something, for any reason.
2) They send an RFC to the mailing list and get feedback.
3) They work on the feedback received.
4) Repeat from 2 until there is a consensus.

I don't know if I'm the only one who has all these concerns or if
most people are OK with how things are, but now I've got it off my
chest :-)

Regards,

Miguel

Luke Kanies

Oct 21, 2012, 5:21:14 PM
to puppe...@googlegroups.com
On Oct 19, 2012, at 1:02 PM, Miguel Di Ciurcio Filho <miguel...@gmail.com> wrote:

> On Thu, Oct 18, 2012 at 11:53 PM, Luke Kanies <lu...@puppetlabs.com> wrote:
>>
>> Note that we don't require copyright assignment and never have. We have a nearly-standard Apache CLA, which just confirms that we have the right to distribute the code you're contributing.
>>
>
> There, right there, that is the problem with the CLA. You see how it
> is pointless? I'm already giving you code under the Apache License.
> PuppetLabs and any other entity or person on this planet have all the
> rights and obligations stated in the Apache License. There is no need
> for a second document, PuppetLabs have the right to distribute my
> contributions as everybody else does under the Apache License.

That's actually not necessarily true. Without a clear document somewhere, the legal standing of your contributions is technically unclear. Or rather, it is clear, and what's clear is that we don't automatically have a right to them, and they do not automatically acquire an Apache license.

One could certainly argue that if you contribute to an Apache-licensed project you are submitting that contribution under an Apache license, but unless every patch you send includes an Apache license with it, you're in questionable territory, and I'm confident you could find a lawyer willing to take the opposing position.

It's annoying, but that's the way the law works.

>> We're actually in the process of figuring out whether we can remove these. Every lawyer we're talking to is saying no, but every one of us wants to get rid of them, so I expect us all to end up ignoring the lawyers. The only reason we instituted them in the first place is because we were GPL'd, and we expected to either make commercial forks (which we didn't do) or switch licenses (which we did).
>>
>
> There, right there too. The reason why you instituted the CLA is the
> same reason that it fundamentally keeps people away from seriously
> contributing to the project: PuppetLabs gets an extra privilege other
> then the licensing of code being contributed.
>
> The license of the project is a pact of how a community whats to share
> its production and every single member of the community _must_ not
> have any extra privilege.

Meh. I'd have a lot more sympathy for this position if every single member of the community put the same effort into the project. Long before I was able to hire developers to work on Puppet full time, I was the single largest contributor by a country mile, and after doing almost no work on it for at least 3 years, I'm *still* the largest contributor, and the next 10 or so are almost all paid to work on it by me.

Yes, a big part of that is because we've failed to build as much of a developer community as we could and should have, but I worked really hard at this in the early days and still didn't make a lot of progress. We're taking another crack at it now, and that's specifically one of Dawn's mandates, but I haven't run into a lot of people who think it's unreasonable that we try to find a way to make a living doing what we're doing.

And, of course, it's all moot, because that only mattered when the project was GPL'd. Now that it's Apache-licensed, Puppet Labs is not special in any way when it comes to licensing. We're only special because we employ tons of people who are paid to work on and care about Puppet. :)

>> I know that I personally never was able to extract patches from a mailing list, because I use IMAP, not mboxes. Yes, I know I could do filtering, set up mutt explicitly for this, and relearn how to use it just for patches, but… I could also just go to github and see a clean, clear list with support for comments, email, and everything else I want.
>>
>
> About the git send-email maybe it is just me. I've contributed to
> other projects where there is no need to open tickets, just send the
> patch to the mailing list and take the heat of the feedback, plain
> simple.

We used to do that for code review, but it got too overwhelming on the lists. I know some lists can survive with this, but we got a lot of feedback that ours wasn't working well. If that's not the case, I'm sure people would be willing to revisit, and, of course, you're always welcome to send your patches to the list if you prefer.

>> We do a lot of this, but we're mixed in how consistent we are. Part of the reason is just the mixed bag of lots of us sitting in the same room - it's more expensive to communicate with people outside the building, so we tend to suck at it more than we should.
>>
>
> That is definitely understandable. When there is a mix like this, it
> is hard to keep the outside community on the loop.
>
>> I will say that I'm still a bit gun-shy of posting all of our ideas publicly, at least anything other than stuff we're basically ready to work on now. I'm a bit gun-shy for 2 reasons: First, I do have some legitimate competitive concerns, but most importantly, it's a lot of work to get it all out there and there's questionable value.
>>
>> That being said, we should obviously be more open about what we know we're going to work on and what we think we're going to build. We just recently split dev teams so that our OSS developers are much more independent of the commercial teams, which means that they'll naturally start looking outward more.
>>
>
> That is a very nice change.
>
>>
>> I don't actually know the difference between an open source project and an open source product. It's all just free code on the internet, right?
>>
>
> It is more than just free code on the internet, IMHO. Quoting Peter,
> from the message that started all this:
>
> "Or is it really the idea that in the future everybody runs their very
> own patched version of facter and each time there is a new (minor)
> release everybody has to check whether this does not break their whole
> infrastructure?
>
> We all have his code, it is free and on the internet. As he points
> out, do we have to maintain our on branches with fixes that doesn't
> get merged? And they might not get being merged or there is a lack of
> man power thanks to reasons that I'm trying explain, especially the
> CLA.
>
> Your reasons you said about being gun-shy of posting ideas, IMHO, show
> that you are more concerned about the product than the project.
>
> Open Source/Free Software projects must not be afraid of competition
> and must not be afraid of failing.

I'm not convinced. Everyone attaches ego and pride to work they do, and they don't want that work to fail regardless of whether their jobs are on the line.

I could say a lot about this topic, but I'd prefer to do it over beers rather than publicly. Suffice it to say that I believe openness and transparency are the best behaviors, but there are some reasons for not being completely open about everything. Yes, those reasons reduce if I don't have to worry about what my company looks like in 3 years, but realistically, not by much.

> I don't what to be picky, but it is the example that comes to mind
> because I was concerned. On Puppet 3.0 `puppet kick` now shows a
> deprecation warning. On other projects, changes like that are
> presented to the project and discussed. PuppetLabs' developers opened
> the ticket, committed the code, and it is done. I notice the thing
> after the fact and by accident.
>
> This is a product behavior, not project behavior. My expectation as a
> project is that:
>
> 1) Someone wants to deprecate/change/remove/create something, for any reason.
> 2) Sends an RFC do the mailing list and gets feedback
> 3) Work on the feedback received
> 4) Go to 2 until it there is a consensus.
>
> I don't know if I'm the only one that have all this concerns or most
> people are OK with how things are, but now I got it out of my chest
> :-)

You're definitely not the only one. I actually thought we did do an RFC on that. I know we've done it on a bunch of other ones. If we missed it on that, I think it's basically an oversight.

You're right, though, that getting this wrong looks bad. To be clear, though, this wasn't done for business reasons, it was done because the team thought that few, if any, people were actually using it, and it made the whole system much simpler.

And, of course, lots and lots of projects do a lot of their work essentially behind closed doors, even if they aren't backed by a company. I don't want to work that way unless we can't avoid it, but I think it's more the rule than the exception, counter to the myth of how open source works.

Erik Dalén

Oct 23, 2012, 11:57:18 AM
to puppe...@googlegroups.com


On Thursday 18 October 2012 at 13:33, Luke Kanies wrote:
I think trying to be extra speedy with reviewing and giving feedback on external pull requests would be a great start for this. It might be more time consuming and slow down development in the short run, but I think it would give more external contributions and speed up development in the long run. Basically regarding them as a VIP lane compared to internal ones or something.

--
Erik Dalén


Alex Harvey

Oct 23, 2012, 9:00:20 PM
to puppe...@googlegroups.com


On Wednesday, October 24, 2012 2:57:08 AM UTC+11, Erik Dalén wrote:

I think trying to be extra speedy with reviewing and giving feedback on external pull requests would be a great start for this. It might be more time consuming and slow down development in the short run, but I think it would give more external contributions and speed up development in the long run. Basically regarding them as a VIP lane compared to internal ones or something.

I'm new here, but I agree with this.

I recently signed the CLA in my personal capacity, without any discussion with company lawyers (I don't think I need their permission to sign as myself, do I?), and got myself up to speed on the Puppet Labs development processes.  I can understand why some people would find this a lot to learn just to submit a patch, but I can also see why Puppet Labs would expect people who plan to make regular code submissions to follow the same processes as everyone else.  I would imagine that it would be even more work for the Puppet Labs developers if everyone weren't following the same process, and thus the real problem - which I think is allowing a backlog of pull requests to build up - would probably be exacerbated.

I think it's fairly obvious though that if pull requests don't get looked at in a reasonable time frame then people just won't bother making them in the future.  And some will get sufficiently annoyed that they'll use Chef or something other than Puppet, if they've got bugs that they've had to fix, and they feel no one can even be bothered reviewing them. 

I think Andy's idea of making the team's priorities public is also a great one - although external contributors may find that, almost by definition, their personal interests don't align with Puppet Labs'.  External contributors are likely to be contributing in the first place because they have a very specific need that isn't otherwise likely to be looked at by Puppet Labs.  And that's my case, of course - fixing puppet & facter for versions of commercial Unix that I need them to work on but that you guys don't have access to.

So I vote strongly for insisting that no internal Puppet Labs priorities can be allowed to divert the person assigned to looking at pull requests away from this greater priority.

Daniel Pittman

Oct 24, 2012, 12:02:04 PM
to puppe...@googlegroups.com
On Tue, Oct 23, 2012 at 6:00 PM, Alex Harvey <alexh...@gmail.com> wrote:
> On Wednesday, October 24, 2012 2:57:08 AM UTC+11, Erik Dalén wrote:
>>
>> I think trying to be extra speedy with reviewing and giving feedback on
>> external pull requests would be a great start for this. It might be more
>> time consuming and slow down development in the short run, but I think it
>> would give more external contributions and speed up development in the long
>> run. Basically regarding them as a VIP lane compared to internal ones or
>> something.
>
> I'm only new here but I agree with this.
>
> I recently signed the CLA in my personal capacity without any discussion
> with company lawyers (I don't think I need their permission to sign as
> myself do I?) and I got myself across the Puppet Labs development processes.

The answer to that is the classic legal "it depends".

At least in Australia, many companies own anything you do during work
hours, and it was common for a while to claim ownership of anything in
"related areas of business". (That second claim was in a legal grey
area, possibly unenforceable, but likely to lead to a court battle if
you pushed it.)

Especially in the former case, you do need their permission to
contribute code back to Puppet. Australian law held that because you
were working for them, *the company* held the copyright on what you
produced, so they were the folks who had to agree to release it.

In other parts of the world this varies more because, hey, legal
question. If you really want to know you should consult an IP lawyer
in your country.

--
Daniel Pittman
⎋ Puppet Labs Developer – http://puppetlabs.com
♲ Made with 100 percent post-consumer electrons

Andy Parker

Oct 24, 2012, 12:37:22 PM
to puppe...@googlegroups.com
On Tue, Oct 23, 2012 at 6:00 PM, Alex Harvey <alexh...@gmail.com> wrote:
>
>
> I'm only new here but I agree with this.
>
> I recently signed the CLA in my personal capacity without any discussion
> with company lawyers (I don't think I need their permission to sign as
> myself do I?) and I got myself across the Puppet Labs development processes.
> I can understand why some people would find this a lot to learn just to
> submit a patch but I can also see why Puppet Labs would expect people who
> plan to make regular code submissions to follow the same processes as
> everyone else. I would imagine that it would be even more work for the
> Puppet Labs developers if everyone isn't following the same process and thus
> the real problem - which I think is allowing there to be a backlog of pull
> requests - would probably be exacerbated.
>

Yeah, I was skeptical of all of these little rules around how to
submit commits and pull requests when I started, but I've come to
notice over time that they help us a lot in understanding changes and
tracing things back to reasons. If I could keep all of the code and
interactions in my head, it probably wouldn't be as much of an issue,
but with the number of things in puppet that interact we almost always
seem to get knock-on effects and we need to understand what a change
was trying to achieve so that when it turns out later that we need to
change things, we understand what will be affected.

> I think it's fairly obvious though that if pull requests don't get looked at
> in a reasonable time frame then people just won't bother making them in the
> future. And some will get sufficiently annoyed that they'll use Chef or
> something other than Puppet, if they've got bugs that they've had to fix,
> and they feel no one can even be bothered reviewing them.
>
> I think Andy's idea of making public the team's priorities is also a great
> idea - although external contributors may find that, almost by definition,
> their personal interests may not align with Puppet Labs. External
> contributors are likely to be contributing in the first place because they
> have a very specific need that isn't otherwise likely to be looked at by
> Puppet Labs. And that's my case of course - fixing puppet & facter for
> versions of commercial Unix that you guys don't have access to that I need
> to make it work on.
>

This brings up something that I'd like to figure out how to get going:
platform maintainers. As you say, we don't have access to all of this
stuff, nor do we have the manpower to stay on top of all of it. Can we
get a bunch of people who own puppet's core support for various
platforms? Would anyone be interested in signing up for that?

> So I vote strongly for insisting that no internal Puppet Labs priorities can
> be allowed to divert the person assigned to looking at pull requests away
> from this greater priority.

Luke Kanies

Oct 25, 2012, 5:59:14 PM
to puppe...@googlegroups.com
It turns out that it's fantastically difficult to be extra speedy on all external pull requests.

I agree with you that if we could do it, it would eventually result in more contributors, and we're trying to get enough resources right now that we can do so. Some of the pull requests are fundamentally hard, like those for platforms like FreeBSD that we don't have good test infrastructure for or skills in, and external pull requests tend to need a lot more modification and mentoring (e.g., they often have little to no tests), so a given pull request takes a lot longer to get in.

Beyond that, we also have business goals of our own, and that requires we actually spend time on our own pull requests. We have someone dedicated each week to external pull requests, and we're looking for more people to work on it, but the needs of the community have clearly outstripped our staffing to meet them (and probably did a long time ago).

The awesome part of being a software company is that we can afford to hire developers to work on software that our users want, but the sometimes less than awesome part is that we have to make sure quite a bit of our effort is aligned with being able to actually pay our developers.

Erik Dalén

Oct 25, 2012, 11:30:08 PM
to puppe...@googlegroups.com
On Thursday 25 October 2012 at 17:59, Luke Kanies wrote:
I fully understand that external pull requests might need more mentoring and stuff than internal ones, but that's just another argument to be speedy on them IMO. If they need 3-4 iterations it is really annoying if each of them takes more than a week. Might also introduce extra work due to merge conflicts etc.
>
> Beyond that, we also have business goals of our own, and that requires we actually spend time on our own pull requests. We have someone dedicated each week to external pull requests, and we're looking for more people to work on it, but the needs of the community have clearly outstripped our staffing to meet them (and probably did a long time ago).
>
> The awesome part of being a software company is that we can afford to hire developers to work on software that our users want, but the sometimes less than awesome part is that we have to make sure quite a bit of our effort is aligned with being able to actually pay our developers.
That is fully understandable, but getting lots of contributions and community development is probably also good for your business :)

--
Erik Dalén



Jeff McCune

Oct 30, 2012, 12:38:48 PM
to puppe...@googlegroups.com
On Thu, Oct 25, 2012 at 2:59 PM, Luke Kanies <lu...@puppetlabs.com> wrote:

[snip]

Beyond that, we also have business goals of our own, and that requires we actually spend time on our own pull requests.  We have someone dedicated each week to external pull requests, and we're looking for more people to work on it, but the needs of the community have clearly outstripped our staffing to meet them (and probably did a long time ago).

To expand a bit on what Luke said about this, I'm currently dedicated to the community and my primary goal is to focus on open pull requests.  For at least the entire month of November I'll be dedicated to this task and we won't be rotating people on a weekly basis.  Our hope is that a dedicated person increases overall pull request throughput by reducing the context switching associated with rotating team members into this position.

Initially, I'm going to focus on making sure that as many open pull requests and issues as possible have clear "next actions."  If it's not clear what the next action is, then there's a good chance the pull request will get stuck in limbo.

Our intent is that someone will remain dedicated to community pull requests beyond November and that there may be more than one person dedicated to this task.  I can't really commit to this beyond November but for at least the next month there's a person dedicated to this work.

-Jeff