How to use Terraform environments for multiple environments split between different AWS accounts?


matzuba

Jul 6, 2017, 3:30:49 AM
to Terraform
Hi

I have multiple AWS accounts, and within each account a number of environments that I am running Terraform against. I am trying to use Terraform state environments, with state stored on S3, to manage the state for the different environments across the different AWS accounts.


I can use state environments with a single AWS account/AWS provider and profile definition with no issue. This works well: each environment gets its own state, stored on S3.
However, I want to use the same setup with a different AWS account, which means a different combination of credentials and a different S3 bucket.

Something like this:

Dev
-----
1 profile - set of AWS credentials  for dev account
1 backend  - 1 S3 bucket

3+ environments



devtest
├── backend.tf
├── ec2.tf
├── environments
│   ├── dev
│   │   ├── dev-platform.tfvars
│   │   └── dev-setup.sh
└── terraform.tfstate.d
    ├── newenv1
    │   └── terraform.tfstate.backup
    └── newenv2
        └── terraform.tfstate.backup
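
(For reference, the terraform.tfstate.d entries above are what Terraform 0.9's state environment commands produce; these were renamed to "workspaces" in later releases. Roughly:)

terraform env new newenv1      # create a new state environment
terraform env select newenv1   # switch the active environment
terraform env list             # list environments; * marks the active one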


Now I want to use state environments on a different AWS account, which means different credentials/profile. How can this be achieved? Can it be achieved at all? It seems that the local Terraform state file stores the profile and bucket details for just one account, which in this case is the dev account! If I try to update these details with terraform init, Terraform notices the change and asks if I want to copy the state, which is not what I want and will break things!

like this:

.terraform/terraform.tfstate
{
    "version": 3,
    "serial": 0,
    "lineage": "c774df9c-4db5-493f-a00c-a9d7a8299a83",
    "backend": {
        "type": "s3",
        "config": {
            "bucket": "<dev.terrrraform-state>",
            "key": "terraform.tfstate",
            "profile": "<profile name>",
            "region": "<region>"
        },
        "hash": 9345827190033900984
    },
    "modules": [
        {
            "path": [
                "root"
            ],
            "outputs": {},
            "resources": {},
            "depends_on": []
        }
    ]
}
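
(Worth noting: the s3 backend supports partial configuration, so these account-specific settings can be supplied at init time rather than hard-coded; a sketch, with made-up file names:)

# environments/dev/dev.backend: plain key/value backend settings
bucket  = "dev-terraform-state"
profile = "dev"
region  = "us-east-1"

terraform init -backend-config=environments/dev/dev.backend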



I'd like to be able to support additional environment/AWS account combos, such as staging and prod.


Cheers

David Adams

Jul 6, 2017, 12:43:41 PM
to terrafo...@googlegroups.com
There are lots of ways around this problem. We have a tool that wraps our terraform runs, and rewrites the backend.tf file and clears out .terraform as necessary when switching between tfvars files.
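
A minimal sketch of that kind of wrapper (illustrative only, not the actual tool; the bucket naming, region and var-file layout are assumptions):

#!/bin/sh
# tf-env.sh <env> <aws-profile> <state-bucket> <terraform-command...>
set -e
ENV="$1"; PROFILE="$2"; BUCKET="$3"; shift 3

# Rewrite backend.tf for the target account/bucket.
cat > backend.tf <<EOF
terraform {
  backend "s3" {
    bucket  = "${BUCKET}"
    key     = "${ENV}/terraform.tfstate"
    profile = "${PROFILE}"
    region  = "us-east-1"
  }
}
EOF

# Clear the cached backend settings so init doesn't try to migrate state.
rm -rf .terraform
terraform init
terraform "$@" -var-file="environments/${ENV}/${ENV}-platform.tfvars"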

But the model I've come to think is slightly safer is to put all your actual Terraform resources, providers, etc into a module, and then for each environment (dev/prod/whatever), have a separate project that calls that module with all the parameters you need.
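
As a sketch, the layout is something like this (all names illustrative):

modules/platform/    # every resource the deployment needs: EC2, ELB, S3, ...
envs/dev/main.tf     # one small project per environment
envs/prod/main.tf

# envs/dev/main.tf
module "platform" {
  source      = "../../modules/platform"
  environment = "dev"
  aws_profile = "dev"
  # ...plus whatever other parameters the module exposes
}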

In terms of AWS and S3 themselves, we've had great luck with assuming IAM roles, which can be done across accounts, and which is integrated very very well into Terraform (better than it is with AWS's own tools).

The setup is something like this:

We have an AWS account that is _only_ for IAM users/groups/policies. All our ops team have individual IAM users in this account. Then in our other accounts for dev and our various applications, we set up IAM roles with whatever level of access we need, and grant the ability to assume those roles to the IAM-only AWS account. Then in the IAM-only account, we add users to groups that grant permission to assume the particular roles we created in other accounts. So for our core ops team, they can assume any of the remote roles. But for engineering teams, they can assume only roles which grant them access they need for their particular app.
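
As a sketch of that trust relationship (the account IDs are made up: 111111111111 is the IAM-only account, 222222222222 an application account):

# In the application account: a role that trusts the IAM-only account.
resource "aws_iam_role" "ops" {
  name = "ops-admin"

  assume_role_policy = <<POLICY
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": { "AWS": "arn:aws:iam::111111111111:root" },
    "Action": "sts:AssumeRole"
  }]
}
POLICY
}

# In the IAM-only account: a group policy letting the ops group assume it.
resource "aws_iam_group_policy" "assume_ops" {
  name  = "assume-ops-admin"
  group = "ops"

  policy = <<POLICY
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": "sts:AssumeRole",
    "Resource": "arn:aws:iam::222222222222:role/ops-admin"
  }]
}
POLICY
}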

Then in the AWS provider you can specify a role to assume before running (parameterize this if you are using the module-oriented config I mentioned), and in the remote state configuration you can specify a different role potentially in a different account, to assume when reading and writing the remote state.
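
Concretely, something like this (role ARNs, bucket and region are placeholders):

# Provider assumes a role in the target account before touching resources.
provider "aws" {
  region  = "us-east-1"
  profile = "iam-account"

  assume_role {
    role_arn = "arn:aws:iam::222222222222:role/ops-admin"
  }
}

# The remote state can assume a different role, in whichever account
# holds the state bucket.
terraform {
  backend "s3" {
    bucket   = "example-terraform-state"
    key      = "myapp/terraform.tfstate"
    region   = "us-east-1"
    profile  = "iam-account"
    role_arn = "arn:aws:iam::333333333333:role/terraform-state"
  }
}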

It's hard to explain, but it works completely transparently, and it feels pretty magical. With the modular setup described, we have some projects that actually span multiple AWS accounts: you just pass which account you want to each module invocation, and Terraform takes care of all the details of assuming the roles and using the correct credentials for each resource. Then all the state for all those resources gets written, via the different role, to the bucket in an entirely different account. It's pretty great. Takes a bit of finagling to set up, but it's worth the effort.


matzuba

Jul 7, 2017, 9:36:22 AM
to Terraform
Thanks for taking the time to share David!

I was thinking of using a wrapper script to set up the backend config each time, depending on which env a user wanted to deploy to; I just didn't feel entirely comfortable with moving/removing/recreating this each time in case something went wrong!
Although it is only a pointer to the remote state, it still doesn't seem right!

You have me interested in the modular approach. Do you mean that for the project you are deploying, all the resources are in one module, be that EC2/ELB/S3/Lambda etc., everything the deployment needs? I am using modules in a general way, whereby any project can call any module.

Would you be happy to share a bit more on this, e.g. what your provider file looks like? I am using a profile with the .aws/credentials file, and we use MFA.
What does your backend config look like?  Are you using "role_arn" for authentication in the remote state config for s3?

Thanks again for the input!


Khalid Hosein

Jul 10, 2017, 9:41:42 AM
to Terraform
Hello @matzuba - take a look at this tool: Colonize. It wraps Terraform, sets up remote state and is environment-centric.

HTH!

-- Khalid

matzuba

Jul 16, 2017, 11:43:53 PM
to Terraform
Thanks for the link! I will check it out.
Are you using Colonize? Any feedback?

I am currently just blowing away the state with a wrapper script that sets up the appropriate backend config.

cheers

Khalid Hosein

Jul 17, 2017, 3:13:03 AM
to Terraform
Yup, I am.

It has more than a few advantages over Terraform alone. For example, being able to break up your project into different parts with distinct state files, and being able to target your project's different environments (e.g. prod vs dev vs test vs qa, etc.).

Perhaps the biggest caveat (and it's really a TF thing) is this: say you've broken up your project into different sections (e.g. ec2 and security groups), and the ec2 section relies on the SG section (via a `data "aws_security_group"` lookup); then a `colonize plan` will fail on the very first run because there is no data in the SG's remote state. You'll just have to run a `colonize apply` on the very first run.
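
That caveat in code, roughly (names illustrative): the ec2 section looks up a security group that only exists once the SG section has been applied.

# ec2 section: fails to plan until the SG section has been applied,
# because the data source has nothing to find yet.
data "aws_security_group" "app" {
  filter {
    name   = "tag:Name"
    values = ["app-sg"]
  }
}

resource "aws_instance" "app" {
  ami                    = "ami-12345678"
  instance_type          = "t2.micro"
  vpc_security_group_ids = ["${data.aws_security_group.app.id}"]
}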