Terraform, configuration management / CI / CD


Antony Gelberg

Oct 9, 2017, 1:59:27 PM
to Terraform
I'm just getting into Terraform and I'm at a crossroads. Let's say I need several environments, and I'm using AWS, with each environment in a separate VPC. I'm going to configure instances with Puppet and deploy with Jenkins. I see two basic design options here:

1. Puppet and Jenkins masters in one environment / VPC, responsible for configuring and deploying across all the other environments / VPCs. Implications: opening up security groups, configuring Puppet environments with something like r10k, a high dependency on that environment, and the VPCs will need different (non-overlapping) CIDRs (not sure if this is a big deal).
2. Every environment has its own Puppet and Jenkins master. Implications: more costly, seems "cleaner", environments are less likely to clash, potentially less (or more?) pain managing Puppet environments, might be overly complex.
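
For illustration, a minimal Terraform sketch of the foundation either option needs: one VPC per environment, with non-overlapping CIDRs so the VPCs could later be peered (relevant to option 1's central masters). The module path, names, and CIDR values here are hypothetical:

```hcl
# Hypothetical per-environment VPCs; non-overlapping CIDRs keep
# cross-VPC peering possible if one environment hosts the masters.
module "vpc_staging" {
  source = "./modules/vpc" # hypothetical local module
  name   = "staging"
  cidr   = "10.0.0.0/16"
}

module "vpc_production" {
  source = "./modules/vpc"
  name   = "production"
  cidr   = "10.1.0.0/16"
}
```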

In past projects I've always used (1), but this is my first project with Terraform, and as simple abstractions become more powerful, so does my ability to do something really silly, really easily. ;)

Any advice would be appreciated on whether one of the above stands out, or if both are sane.

Fernando

Oct 9, 2017, 3:08:50 PM
to Terraform
Or you can use Packer to build your images and then reference those in the Terraform config... no need for Puppet.
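
For context, a Packer-built image is typically consumed from Terraform via an AMI data source. A sketch, assuming a hypothetical `app-*` naming convention for the baked images:

```hcl
# Look up the most recent AMI produced by the (hypothetical) Packer build.
data "aws_ami" "app" {
  most_recent = true
  owners      = ["self"]

  filter {
    name   = "name"
    values = ["app-*"] # assumed naming convention from the Packer build
  }
}

resource "aws_instance" "app" {
  ami           = "${data.aws_ami.app.id}"
  instance_type = "t2.micro"
}
```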

Antony Gelberg

Oct 9, 2017, 3:26:24 PM
to Terraform
Well, I have read the Packer book, and it's on my radar, but I'd still like answers to the original question.

Also:
That takes Puppet into account, but not Jenkins (in terms of servers per VPC or one to rule them all).
It doesn't consider changes to live servers e.g. a small config change that I could roll out with Puppet, that doesn't seem to warrant building new images.

Fernando

Oct 9, 2017, 3:29:08 PM
to Terraform
On Monday, 9 October 2017 20:26:24 UTC+1, Antony Gelberg wrote:

That takes Puppet into account, but not Jenkins (in terms of servers per VPC or one to rule them all).


Jenkins will mostly call AWS API endpoints, for which it needs an IAM role in those accounts.
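
The role side of that could be sketched in Terraform like this; the account ID and role names are placeholders, assuming a hypothetical central account hosting Jenkins:

```hcl
# Hypothetical: a role in each environment's account that the central
# Jenkins role (account ID is a placeholder) is allowed to assume.
resource "aws_iam_role" "jenkins_deploy" {
  name = "jenkins-deploy"

  assume_role_policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": { "AWS": "arn:aws:iam::111111111111:role/jenkins" },
    "Action": "sts:AssumeRole"
  }]
}
EOF
}
```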

Building images is good practice because it makes everything immutable and easy to reproduce.

Antony Gelberg

Oct 9, 2017, 3:40:31 PM
to Terraform
About Packer, I hear what you're saying, and it may well be that I need to get more used to the idea. But it really seems like overkill to build new images for every little change. In my head, I'd have used Packer only to deploy new application code. Not the right way of thinking?

And it still seems to me that the Puppet / Jenkins question applies in a world with Packer, as Packer would be run by Jenkins, and use Puppet to configure the image.
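
That pipeline (Jenkins driving Packer, Packer driving Puppet) can be sketched in a Packer template. This uses Packer's newer HCL2 syntax and its masterless Puppet provisioner (Packer also has a puppet-server provisioner for master-based setups); the region, base AMI, and manifest path are placeholders:

```hcl
locals {
  # timestamp() with separators stripped, for a unique AMI name
  ts = regex_replace(timestamp(), "[- TZ:]", "")
}

source "amazon-ebs" "app" {
  ami_name      = "app-${local.ts}"
  instance_type = "t2.micro"
  region        = "eu-west-1"    # placeholder
  source_ami    = "ami-12345678" # placeholder base image
  ssh_username  = "ubuntu"
}

build {
  sources = ["source.amazon-ebs.app"]

  # Configure the image with Puppet before it is baked into an AMI
  provisioner "puppet-masterless" {
    manifest_file = "manifests/site.pp" # hypothetical manifest
  }
}
```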

Igor Cicimov

Oct 10, 2017, 11:55:45 PM
to Terraform

On Tuesday, October 10, 2017 at 6:40:31 AM UTC+11, Antony Gelberg wrote:
About Packer, I hear what you're saying, and it may well be that I need to get more used to the idea. But it really seems like overkill to build new images for every little change. In my head, I'd have used Packer only to deploy new application code. Not the right way of thinking?

This question probably calls for a whole new debate :-) For me personally it is the right way of thinking; building a new image every time you need to apply an OS security update is just crazy. You would need a person whose full-time job is creating images all the time and nothing else. In my mind this is most suitable in the case of containers, where you have a single process and a handful of libraries to worry about, but not in the case of a full-blown OS.

Fernando

Oct 11, 2017, 4:33:14 AM
to Terraform
On Wednesday, 11 October 2017 04:55:45 UTC+1, Igor Cicimov wrote:
For me personally it is the right way of thinking; building a new image every time you need to apply an OS security update is just crazy. You would need a person whose full-time job is creating images all the time and nothing else.

That's where automation comes in... your builder (Jenkins, CircleCI, whatever) will build the images for you in an automated way, so no human will spend more time on this other than maybe kicking off a build for a critical / out-of-schedule update.
 

In my mind this is most suitable in the case of containers, where you have a single process and a handful of libraries to worry about, but not in the case of a full-blown OS.

I agree, going with containers is the ideal path.

Antony Gelberg

Oct 11, 2017, 6:07:53 AM
to Terraform
I've replied to Igor and Fernando inline below. Still no nearer deciding whether the right way forward is one Puppet / Jenkins to rule them all, or one per VPC / environment. :)


On Wednesday, 11 October 2017 06:55:45 UTC+3, Igor Cicimov wrote:

On Tuesday, October 10, 2017 at 6:40:31 AM UTC+11, Antony Gelberg wrote:
About Packer, I hear what you're saying, and it may well be that I need to get more used to the idea. But it really seems like overkill to build new images for every little change. In my head, I'd have used Packer only to deploy new application code. Not the right way of thinking?

This question probably calls for a whole new debate :-) For me personally it is the right way of thinking; building a new image every time you need to apply an OS security update is just crazy. You would need a person whose full-time job is creating images all the time and nothing else. In my mind this is most suitable in the case of containers, where you have a single process and a handful of libraries to worry about, but not in the case of a full-blown OS.

Or any number of other use cases, say a typo or change in a webserver config, or a kernel parameter. We are not using containers at the moment, and we want to take the path of least resistance, which at this stage involves neither containers nor masterless Puppet.


On Wednesday, 11 October 2017 11:33:14 UTC+3, Fernando wrote:
On Wednesday, 11 October 2017 04:55:45 UTC+1, Igor Cicimov wrote:
For me personally it is the right way of thinking; building a new image every time you need to apply an OS security update is just crazy. You would need a person whose full-time job is creating images all the time and nothing else.

That's where automation comes in... your builder (Jenkins, CircleCI, whatever) will build the images for you in an automated way, so no human will spend more time on this other than maybe kicking off a build for a critical / out-of-schedule update.

Yes, but this can take a lot of time compared to modifying the configuration in Puppet and running puppet agent on the relevant servers if it's time-critical. I don't want to have to wait for a build if there's a critical bug. Or at least, I'd like to have the choice. It doesn't look fun to me when the powers that be ask "why is the site still down?" and I say "because we only do immutable updates".
  
In my mind this is most suitable in the case of containers, where you have a single process and a handful of libraries to worry about, but not in the case of a full-blown OS.

I agree, going with containers is the ideal path.

It may well be the ideal path, but it's not the path that will help us get something satisfactory up and running quickly, which we can refactor when we start using containers.

Chris Jefferies

Oct 11, 2017, 2:34:59 PM
to Terraform
In our Puppet setup we are using Hiera, which essentially encodes all our nodes for different client deployments. The encoding defines the FQDNs of each host in a given system, so while not impossible, bleed-through of code and data is unlikely.

I admit the idea of Packer is intriguing, but in discussions around our shop there's not a lot of traction for the idea. We also have services and databases in single hosts, so it's not an easy stateless approach. Deploying systems which have Puppet already in place is the way we're doing it for now. We have a single source from Jenkins, and we deploy instances of our Puppet master and yum server into each client track, mainly for proximity/performance reasons.

Also, we want to be able to deploy into AWS or OpenStack as needed by the client, so Terraform becomes the abstraction we need beyond CloudFormation or Heat.

Antony Gelberg

Oct 11, 2017, 5:02:19 PM
to Terraform
So, if I've got that right, you have a single Jenkins master deploying multiple Puppet masters for separate environments? :)

I intend to use Hiera to handle the encoding between environments, hoping to get by without r10k. Do you use r10k?

I don't understand "we have services and databases in single hosts so it's not an easy stateless approach", but I don't think it matters for the purposes of this thread. Packer kind of came in from the side. :)

When you say "client deployments", are you talking about some kind of SaaS product that you deploy for customers with different configurations?

Chris Jefferies

Oct 11, 2017, 5:16:25 PM
to Terraform
1. More or less correct. We have a Jenkins server (a few actually, for build loads, etc.), but it could be considered a single source. The output of Jenkins is mostly our business applications, which go to our yum server. We deploy the yum server into our client environments, mostly, as I mentioned, for local performance reasons. We have a Puppet master server for all development, and we deploy a client-specific version of the Puppet master into each client environment, for the same reason as above.

2. We do not use r10k.

3. Yes, that was a Packer/container comment; we're not there yet.

4. Something like that, but I can't really talk about the details: the same basic setup for each client, with significant customization for each.