Best practices on organizing modules


Kenneth Holter

Jul 16, 2010, 6:40:42 AM
to puppet...@googlegroups.com
Hi all.


We're building our puppet infrastructure from scratch, and need to
decide how to organize our modules. As the puppet best practices
document suggests, we're going to put all building-block modules in
the "modules" area, and use the services and clients areas to
make up server configurations.
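Concretely, I'm picturing a modulepath along these lines in puppet.conf (the paths here are just illustrative, not our final layout):

# puppet.conf
[main]
  modulepath = /etc/puppet/clients:/etc/puppet/services:/etc/puppet/modules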

But how do others build server configurations from these modules in
a dynamic and structured way? How do you handle situations where you
have multiple projects, environments (dev, test, qass, prod), and so
forth - do you go for a
"<project>::<environment>::<role>" type of module/class structure?
Before starting to model our servers in the services and clients areas,
I'd like to make sure we don't start out wrong.

Best regards,
Kenneth Holter

Christian

Jul 16, 2010, 7:46:52 AM
to Puppet Users
Hi Kenneth,

I'm also creating a setup from scratch and want to do similar things
to you, so I cannot provide much of an experience report yet.

However, we discussed internally quite a bit how to set it up
correctly.
One outcome was to create a generic node which holds all the modules
available on all machines (dns, nagios daemons, ntp, ...).
A specific node is then built from that generic node plus its
node-specific modules.
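As a rough sketch (the module names here are only examples), that could look like:

# the generic node holds what every machine gets
node basenode {
  include dns::client
  include nagios
  include ntp
}

# a specific node inherits that and adds its own modules
node 'web01.example.com' inherits basenode {
  include apache
}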

In order to define different setups for each module (test, dev, ...) we
are going to use tags.
You can then use the central puppet run and pass the list of tags you
want to deploy. You should check the tag section of the puppet
documentation.
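On a single client that would be something along the lines of:

puppetd --test --tags ntp,dns

(every resource is automatically tagged with the name of the class it was declared in, so you can select subsets of the configuration that way)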

I hope that helps a bit. I would be interested to hear whether more
experienced puppet users agree with this setup suggestion.

Christian

Kenneth Holter

Aug 2, 2010, 7:50:50 AM
to puppet...@googlegroups.com
Thank you for your reply.

It was not clear to me exactly how you will be using tags - could you
please elaborate? How will your "clients" and "services" areas be
organized?

Let me briefly mention that we're going to use external nodes to tell
which environment (prod, qass, ...) a node belongs to, and use the
built-in puppet environments to separate code between the environments
(production clients will pull modules from a different module area
than clients in, for example, qass, and so forth).
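As a rough illustration (the script path is just an example), the external nodes script would print YAML along these lines, here setting a top-scope $env variable for the node:

#!/bin/sh
# /usr/local/bin/node_classifier -- illustrative only; gets the node name as $1
cat <<EOF
classes:
  - baseclass
parameters:
  env: prod
EOF

and puppet.conf on the master would point at it with:

  node_terminus  = exec
  external_nodes = /usr/local/bin/node_classifier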


- Kenneth


Jeff McCune

Aug 3, 2010, 1:37:18 AM
to puppet...@googlegroups.com
On Mon, Aug 2, 2010 at 4:50 AM, Kenneth Holter <kenne...@gmail.com> wrote:
>
> Let me briefly mention that we're going to use external nodes to tell
> which environment (prod, qass, ...) a node belongs to, and use the
> built-in puppet environments to separate code between the environments
> (production clients will pull modules from a different module area
> than clients in, for example, qass, and so forth).

Using puppet environments whose module paths reference multiple
working copies of the same version control repository, each checked
out to a different branch, works quite well for many people.

Git is particularly good in this case since it's designed to switch
the working copy among branches quickly and easily. Doing so in git
doesn't change the filesystem path, which is convenient when setting
the modulepath for each environment.
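For example (paths and branch names purely illustrative):

# each directory is a working copy of the same repo on a different branch
git clone git://git.example.com/puppet.git /etc/puppet/env/production
git clone git://git.example.com/puppet.git /etc/puppet/env/development
(cd /etc/puppet/env/development && git checkout -b development origin/development)

# puppet.conf
[production]
  modulepath = /etc/puppet/env/production/modules
[development]
  modulepath = /etc/puppet/env/development/modules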

--
Jeff McCune
http://www.puppetlabs.com/

Kenneth Holter

Aug 3, 2010, 7:00:49 AM
to puppet...@googlegroups.com
We're a subversion shop, so we're using subversion for this type of
thing (although I'd like to try out Git). We've created a few simple
scripts for tagging modules and such, so I believe we have a neat
solution for version control of our code.

But structuring the code itself (as was the original topic for this
thread) is of course equally important. We have multiple projects, each
consisting of multiple server types/roles and environments (dev, test,
qass, prod), and need our clients and services areas to reflect this
in a dynamic and structured way. Having a base/generic class like
Christian proposed is the way to go, but I would like to hear from
others how they have organized the clients and services areas.


- Kenneth

Ohad Levy

Aug 3, 2010, 7:16:37 AM
to puppet...@googlegroups.com

Kenneth Holter

Aug 3, 2010, 9:30:33 AM
to puppet...@googlegroups.com
Thanks. So does your environment folder structure only contain modules, or have you put the clients and services module areas into the environment folder structure too? What do your clients and services areas look like?

For now I've created different environment areas only for modules, and have thought about maybe going for a "c_<project>::<environment>::<serverRole>" type of class structure, so that I can build different server configurations for the different projects, environments and server roles. In other words, my clients area would look something like this:
  • "c_projectA", in which the classes "c_projectA::prod::webserver", "c_projectA::qass::webserver" and so forth would be implemented (and added to the node definition for the relevant servers)
  • "c_projectB", in which the classes "c_projectB::dev::database", "c_projectB::testing::database" and so forth would be implemented (and added to the node definition for the relevant servers)
  • etc
I haven't thought out all the details yet, but I believe something like this would make having multiple server setups manageable. Any thoughts on this kind of setup?
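To make it concrete, one of those classes might be as simple as this (module and node names are made up):

# clients/c_projectA/manifests/prod/webserver.pp
class c_projectA::prod::webserver {
  include baseclass
  include apache
  include projectA::webapp
}

node 'web01.prod.example.com' {
  include c_projectA::prod::webserver
}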


Kenneth

Eric Sorenson

Aug 3, 2010, 1:45:04 PM
to puppet...@googlegroups.com
On Aug 3, 2010, at 6:30 AM, Kenneth Holter wrote:

> In other words, my clients area would look something like this:
> • "c_projectA", in which the classes "c_projectA::prod::webserver", "c_projectA::qass::webserver" and so forth would be implemented (and added to the node definition for the relevant servers)
> • "c_projectB", in which the classes "c_projectB::dev::database", "c_projectB::testing::database" and so forth would be implemented (and added to the node definition for the relevant servers)
> • etc
> I haven't thought out all the details yet, but I believe something like this would make having multiple server setups manageable. Any thoughts on this kind of setup?
>

This seems like it would lead to a confusing multiplicity of manifests.

Unless (in your example) webservers are completely divergent and have nothing in common between qass and prod environments, I'd put all the webserver code in one class and use decision-making conditionals inside the class to change aspects of the resources. That way, when someone else comes along and wants to modify webserver behaviour, they have only one place to look instead of four or five. You're starting out with an external node tool, so setting variables at top-level scope for projects and environments will make this easy.

I've ended up with a common structure among the different modules: typically there's a case statement at the top which sets class-scope variables based on the globals everyone on the team knows about and uses, then the actual resources below, which use those class-scope vars inline. E.g.:

class infrastructure::sudoers {
  case $env { # globally set by external node tool
    dev,qa,perf: {
      $sudoersfile = "sudoers.preprod"
    }
    prod: {
      $sudoersfile = "sudoers.prod"
    }
    default: { # always make a safe default in case $env is unset
      $sudoersfile = "sudoers.minimal"
    }
  }

  file { "sudoers":
    path => $operatingsystem ? {
      solaris => "/usr/local/etc/sudoers",
      default => "/etc/sudoers",
    },
    owner  => "root",
    group  => 0,
    mode   => "0440",
    source => "puppet:///external/sudo/$sudoersfile",
  }
}


Does that make sense? I'm not sure if the use case maps exactly onto yours, but it sounds close enough that this might work for you and end up being a lot simpler.

- Eric Sorenson - N37 17.255 W121 55.738 - http://twitter.com/ahpook -

Kenneth Holter

Oct 7, 2010, 8:01:58 AM
to puppet...@googlegroups.com

Thanks for your detailed answer. It makes perfect sense to implement things that way, by including different resources based on which environment the client is defined in. But if the different projects had different requirements for their servers, would you extend your example to check on things like "if $env == 'dev' and $project == 'projectA' then ...", or even "if $fqdn == 'client1.example.com' then ..."?

In a real request I just got from one of the projects at work, they wanted to include a couple of classes just for one specific server, so I basically did something like this:

-- code start --
class c_appserver::projectA {
   include baseclass
   include c_appserver

   if ($fqdn == 'client1.dev.example.com') {
       include some::class
   }

}
-- code end --

If the "some::class" class were to be included on all development servers for the project I'd be using "if ($env == 'dev') { ... }" instead.

Comments?

- Kenneth 


