Terraform on different environments with S3 remote state files - help needed


Ionut Cadariu

Oct 5, 2016, 6:01:22 AM
to Terraform
Hello,

I'm trying to create a Terraform setup using modules and multiple environments (dev, staging, production, etc.), but I think I'm doing something wrong: after I upload the tfstate file to S3 using terraform remote config -backend=s3 -backend-config=bucket=$BUCKET_NAME etc., I am no longer able to run terraform plan -var-file dev.tfvars. My current setup looks like this:

├── env-dev
│   ├── dev-main.tf
│   ├── dev.tfvars
│   ├── dev.tfvars.example
│   ├── Initialize.tf -> ../Initialize.tf
│   ├── init.sh -> ../init.sh
│   └── Variables.tf -> ../Variables.tf
├── env-production
│   ├── Initialize.tf -> ../Initialize.tf
│   ├── init.sh -> ../init.sh
│   ├── production-main.tf
│   ├── production.tfvars
│   ├── production.tfvars.example
│   └── Variables.tf -> ../Variables.tf
├── Initialize.tf
├── init.sh
├── modules
│   ├── IAM
│   │   ├── iam_main.tf
│   │   ├── Iam-Outputs.tf
│   │   ├── policies
│   │   │   ├── ecs_instance_policy.json
│   │   │   ├── ecs_role.json
│   │   │   ├── ecs_service_policy.json
│   │   │   └── registry.json
│   │   └── Readme.md
│   ├── SecurityGroups
│   │   ├── Outputs.tf
│   │   ├── Readme.md
│   │   ├── SG-main.tf
│   │   └── Variables.tf
│   └── vpc
│       ├── Outputs.tf
│       ├── Readme.md
│       ├── variables.tf
│       └── vpc_main.tf
└── Variables.tf


- Initialize.tf and Variables.tf contain lines like variable "variable_name" {}
- init.sh is a bash script that pushes the tfstate file to the S3 bucket. All the variables are populated correctly from env-dev/dev.tfvars (for example), as I can see when I run it with bash -x:
  $TF remote config -backend=s3 \
                    -backend-config="bucket=$BUCKET_NAME" \
                    -backend-config="region=$AWS_DEFAULT_REGION" \
                    -backend-config="key=$TFSTATE_FILE"

- The script above does upload the state file to S3, but if I then run terraform plan -var-file dev.tfvars I get the error below. The only workaround I have found is to put the credentials in .aws/credentials, but then I have to change that file for each environment or maintain multiple profiles.

How can I use the aws_access_key/aws_secret_key that I already have defined in those tfvars files for each environment?

Error initializing remote driver 's3': 1 error(s) occurred:

* No valid credential sources found for AWS S3 remote.
Please see https://www.terraform.io/docs/state/remote/s3.html for more information on
providing credentials for the AWS S3 remote


- Here is my dev-main.tf file

# Recommended setup in each env: provider, remote_state, vpc, sg, iam

provider "aws" {
  region      = "${var.aws_region}"
  access_key = "${var.aws_access_key}"
  secret_key = "${var.aws_secret_key}"
}

/* # Tried the data "terraform_remote_state" block both enabled and disabled, but the same error occurs

data "terraform_remote_state" "env-dev" {
    backend      = "s3"
    config {
      region     = "${var.aws_region}"
      access_key = "${var.aws_access_key}"
      secret_key = "${var.aws_secret_key}"
      bucket     = "${var.tf_s3_bucket}"
      key        = "${var.tf_s3_state_file}"
    }
}

*/

Can someone point out what I am doing wrong, or explain how I can push the remote state file to S3 while still keeping the Terraform code divided into multiple modules?



Thank you,
Ionut

David Adams

Oct 5, 2016, 8:40:26 AM
to terrafo...@googlegroups.com
Ionut,
It's true that the remote state config does not use the credentials from the provider or from any terraform_remote_state data sources in your TF files.

I don't know if this works, but you could try setting `-backend-config="access_key=..." -backend-config="secret_key=..."` in your init.sh file. Or you could use a wrapper script to call terraform and export the correct AWS_* env variables in the wrapper script before calling Terraform. Since you have alternate creds specified directly using variables I suspect that might work, depending on the priority order.
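
Roughly, that first option would mean extending the call in your init.sh like this (untested on my side; $AWS_ACCESS_KEY and $AWS_SECRET_KEY are just placeholders for whatever your script calls the values it reads from the tfvars file):

  $TF remote config -backend=s3 \
                    -backend-config="bucket=$BUCKET_NAME" \
                    -backend-config="region=$AWS_DEFAULT_REGION" \
                    -backend-config="key=$TFSTATE_FILE" \
                    -backend-config="access_key=$AWS_ACCESS_KEY" \
                    -backend-config="secret_key=$AWS_SECRET_KEY"

And the wrapper-script variant is just a thin shell around the binary that exports the standard AWS environment variables first:

  #!/usr/bin/env bash
  # Export the credentials read from the env's tfvars, then hand off to terraform.
  export AWS_ACCESS_KEY_ID="$AWS_ACCESS_KEY"
  export AWS_SECRET_ACCESS_KEY="$AWS_SECRET_KEY"
  exec terraform "$@"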

If that doesn't work for you, the method we just switched to is to use user credentials from a separate account which has sts:AssumeRole permissions on roles in our other accounts with the perms we need, and in our modules we define an AWS provider that specifies `assume_role { role_arn = "arn:aws:iam::<acct-number>:role/role-name" }` -- parameterized, of course. It works entirely smoothly. Then we just give the credentials in that separate account access to the remote-state bucket.
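
Parameterized, the provider block in a module ends up looking something like this (the variable names here are just illustrative):

provider "aws" {
  region = "${var.aws_region}"

  # Credentials Terraform runs with only need sts:AssumeRole on this role
  # (plus read/write on the remote-state bucket).
  assume_role {
    role_arn = "arn:aws:iam::${var.account_id}:role/${var.cross_account_role}"
  }
}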


Ionut Cadariu

Oct 5, 2016, 2:19:13 PM
to Terraform
Hi David,

Thanks for your idea. I've managed to fix my problem by adding -backend-config="profile=$ENV_TYPE", where ENV_TYPE is the name of one of the AWS profiles I have defined.
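
For reference, the call in init.sh now looks like this (same as before, just with the extra profile argument):

  $TF remote config -backend=s3 \
                    -backend-config="bucket=$BUCKET_NAME" \
                    -backend-config="region=$AWS_DEFAULT_REGION" \
                    -backend-config="key=$TFSTATE_FILE" \
                    -backend-config="profile=$ENV_TYPE"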

Best regards,
Ionut