Chef provides an easy way to create custom resources. If you ever find
yourself doing something twice, you can wrap it in a custom resource
with your not_ifs and only_ifs embedded in it.
Check this page out here:
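As a sketch of that idea (names and paths are invented, and this uses today's custom-resource syntax rather than the LWRP style of the era), the guard travels with the resource so callers can't forget it:

```ruby
# resources/app_conf.rb -- hypothetical custom resource; the not_if
# guard is baked in, so every caller gets it for free.
property :conf_path, String, default: '/etc/myapp/app.conf'

action :deploy do
  template new_resource.conf_path do
    source 'app.conf.erb'
    # Skip when the config is managed outside Chef on this host.
    not_if { ::File.exist?('/etc/myapp/.managed-externally') }
  end
end
```

A recipe then just declares `app_conf 'default'` and inherits the guard.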
Great question. Do you have any specific examples where Puppet (or any tool) helps?
Now that you mention it, the questions primarily apply to the initial release of a "function". Maintenance over time doesn't require as much question asking.

The situation that brought this to mind was: we were writing a new recipe to upgrade Control Tier, and I had it working for the case where an upgrade needed to occur. But when we ran it on a system where an upgrade didn't need to occur, we found I had forgotten to put "not_if" in a couple of places (I had it in some). So, to get the recipe releasable, I had to go through all the questions. If I later do some refactoring, I probably don't need to spend as much time thinking through all the use-cases.

Chef has nice features like "not_if" for its Resources, but you still have to remember to use them.
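The missing guards were presumably of this shape (the path and version string here are made up for illustration):

```ruby
# Idempotent upgrade step: with a string guard, Chef skips the resource
# whenever the shell command exits 0 -- i.e. the version already matches.
execute 'upgrade-controltier' do
  command '/opt/controltier/bin/upgrade.sh'
  not_if "grep -q '3.7' /opt/controltier/VERSION"
end
```

Without the `not_if`, the upgrade script would run on every converge, including on systems already at the target version.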
Also, bcfg2 has interactive mode, where it prompts you "y/n?" before
making each change.
Mind you, that's my opinion but I honestly think trying to retrofit is
an exercise in frustration. For the first few months here, I was
trying to work around the concept of a "legacy" tag on systems before
I finally said, screw it. I was getting deep in conditional hell and
not making any real progress.
John E. Vincent
One of the fundamentals in the book is a repeatable build library. Having one means that once a system gets dorked, it's quickly thrown away and a new one brought up in its place. The book was written before AWS was commonplace or Puppet and Chef were on the scene, meaning it was all done with old-school Unix power tools.
Not always an option for everything immediately, true, but where it can be done it should be done. It's like dealing with a compromised system: you can clean it as much as you like, but the only way to trust it is to rebuild.
On Thursday, August 25, 2011 at 1:59 PM, John Vincent wrote:
> All it takes to successfully manage a brownfield system is one free
> system. If you're virtualized, more's the better. I would HIGHLY
> advise against trying to migrate an existing system to a new model.
> It's like the little puzzle games where you have one space free on the
> board to move pieces around - annoying but it works. Just rebuild the
> simplest possible component and work your way up freeing up resources
> as you go along.
> On Thu, Aug 25, 2011 at 12:44 PM, Luke Kanies <lu...@puppetlabs.com> wrote:
> > It's great to say that no one should make attempts to manage brownfield
> > systems and should just blow the whole thing away and start over every time
> > they change management practices, but that's just not practical in the vast
> > majority of cases.
> > It's a very high cost to pay for just not having decent tools. Puppet works
> > fantastically for managing teeny tiny bits of your systems, and for bringing
> > completely unmanaged systems into somewhat managed or nearly entirely
> > managed states. Yes, it also works well for green field, but even green
> > field is usually relying on recent history and tools to know what to do.
> > On Aug 25, 2011, at 9:02 AM, Nathaniel Eliot wrote:
> > +1
> > Rebuilding from scratch and importing old data is generally less error-prone
> > than upgrading existing systems. This is especially true if the systems have
> > lived for a long time (or in a high velocity development shop, where changes
> > come in all the time). You get to do dry-runs on every step, which you
> > *cannot* do reliably in altering existing systems.
> > It can't solve every problem, but it can make many of them more manageable.
> > --
> > Nathaniel Eliot
> > T9 Productions
It's a *great* and very short book, especially valuable if you're taking over a server environment that's a total mess and don't know where to start. I found it just after I needed it most, but it confirmed for me that I was on the right track and helped me formulate a roadmap forward.
> > > --
> > > The most overlooked advantage to owning a computer is that if they foul
> > > up there's no law against wacking them around a little. -- Joe Martin
> > > ---------------------------------------------------------------------
> > > Luke Kanies | http://puppetlabs.com | http://about.me/lak
> > > Join us in PDX for PuppetConf: http://bit.ly/puppetconfsig
> > --
> > John E. Vincent
> > http://about.me/lusis
Noah Campbell
When I started at $dayjob we had about 10 clusters performing
different tasks (SMTP/Web/IMAP etc) all of which had been built by
hand following documentation which included helpful hints such as:
"Copy the configuration file for this service from an existing cluster
node. If there aren't any other nodes, find someone who can write the
configuration file for you"
My first step was to introduce puppet to the team and gather their
thoughts. One member of the team liked the concept so much he
installed it on one of our clusters and configured it so that it
pushed *everything* to the servers that required it - including
pre-compiled binaries of things like PHP to /usr/local/bin etc.
I quickly pointed out that this was not the best use of puppet and we
set about writing our manifests for *one* of the clusters as we needed
to add a new node.
It used to take us approx. 10 working hours to build a server, and even
then we had absolutely no guarantee that it would work when we put it
into production.
We took those ten hours (and possibly a few more) and invested them in
creating a hierarchical set of puppet manifests similar to the following:
* "base" - configures things that are relevant to all systems either
directly or through including other modules (nrpe/mcollective etc).
* service specific class - smtp for example
* cluster specific class - shares the name with the cluster, includes
the relevant service class(es) and the base class
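A minimal sketch of that hierarchy in Puppet (class, module, and cluster names are invented to match the description):

```puppet
# "base": everything every system gets, directly or via other modules.
class base {
  include nrpe
  include mcollective
}

# Service-specific class, e.g. smtp.
class smtp {
  package { 'postfix': ensure => installed }
  service { 'postfix':
    ensure  => running,
    require => Package['postfix'],
  }
}

# Cluster-specific class: shares its name with the cluster and pulls in
# the base class plus the service class(es) that cluster runs.
class mailcluster1 {
  include base
  include smtp
}
```

A new node then only needs to be assigned its cluster class; everything else follows from the hierarchy.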
Within a very short time, we could easily recreate a virtual version
of this particular cluster using cobbler etc.
Our build times went from c. 10 hrs to around 15 minutes - *and* we
could be sure that it would work.
We built the one node, and about six months after (we were paranoid
about finding a bug in the configuration!) we rebuilt the entire
cluster using *exactly the same manifests*.
We now have a general rule "if it's new (where "new" includes hardware
refresh, new features/services etc) it gets built from puppet." -
sure, this means that we still have some clusters which are not using
puppet, however we just don't have the time to rebuild them all.
We know that at some point in the future, as we move away from the
systems I inherited, that all of our kit will be managed by puppet and
this makes us happy... :)
One more thing - if you store your puppet manifests in some form of
SCM (git/svn etc) then you also get a "free" change management system.
If you do a puppet run and something breaks, just check the git/svn
log to see who worked on that class/template/manifest last and see
what changed from the diffs - this has been a life-saver on a number
of occasions.
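The "free change management" workflow above boils down to two git commands. This throwaway-repo sketch demonstrates them (the module path and commit messages are hypothetical):

```shell
# Build a scratch repo with two revisions of a manifest, then ask git
# who touched it last and what they changed.
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email you@example.com
git config user.name "You"
mkdir -p modules/smtp/manifests
printf 'class smtp { }\n' > modules/smtp/manifests/init.pp
git add -A
git commit -qm 'add smtp class'
printf 'class smtp { include base }\n' > modules/smtp/manifests/init.pp
git commit -qam 'smtp: include base'
# Who worked on this class last, and what changed?
git log -n 1 --format='%an: %s' -- modules/smtp/manifests/init.pp
git diff HEAD~1 -- modules/smtp/manifests/init.pp
```

After a broken puppet run, the same two commands scoped to the offending class/template/manifest point straight at the culprit commit.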
If anyone is interested in knowing more about this, please feel free
to contact me off-list and I'll respond as soon as I can with as
much information as I can (commercial confidentiality being maintained).
Shameless plug: You can read about configuring some of the things
mentioned in this email on my blog at www.threedrunkensysadsonthe.net