Linking modules with multiple providers


James Heggs
May 8, 2016, 6:06:19 PM
to Terraform
Hi all,

How does one go about linking modules that contain multiple providers? 

No matter what I try, it doesn't seem to work; the execution plan appears to evaluate the Docker provider first, and I get an error similar to the following:

* Error pinging Docker server: cannot connect to Docker endpoint

I'm having trouble getting an example working and would welcome any advice.

For example, a directory structure of:

├── modules
│   ├── aws
│   │   ├── main.tf
│   │   ├── outputs.tf
│   │   └── variables.tf
│   └── docker
│       └── main.tf
├── site.tf
└── variables.tf



AWS Module

The AWS main.tf configures the usual AWS resources (VPC, subnet, etc.), resulting in a single EC2 instance defined as:

resource "aws_instance" "my_ec2_instance" {
    connection {
      user = "ubuntu"
    }

    ami = "ami-f95ef58a"
....
}



The AWS module's outputs.tf is defined as follows:

output "ec2_instance_address" {
  value = "${aws_instance.my_ec2_instance.public_dns}"
}
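
The AWS module's variables.tf isn't shown, but it would need to declare the inputs that site.tf (further down) passes in. A minimal sketch, assuming plain string variables with no defaults:

# Inputs supplied by the root site.tf
variable "access_key" {}
variable "secret_key" {}
variable "key_name" {}
variable "public_key_path" {}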


Docker Module

The main.tf of the Docker module is pretty simple:

# Configure the Docker provider
provider "docker" {
    host = "tcp://${module.aws.ec2_instance_address}:2375/"
}

# Create a container
resource "docker_container" "jenkins_container" {
    image = "${docker_image.jenkins.latest}"
    name = "jenkins"
}

resource "docker_image" "jenkins" {
    name = "jenkins:latest"
}




The main site.tf file is as follows:

module "aws" {
    source          = "./modules/aws"

    access_key      = "${var.access_key}"
    secret_key      = "${var.secret_key}"
    key_name        = "${var.key_name}"
    public_key_path = "${var.public_key_path}"

}

module "docker" {
    source          = "./modules/docker"
}
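
One way to wire the two modules together is to pass the instance address into the Docker module as an input variable rather than referencing module.aws from inside it, since a child module can't normally see a sibling module. A minimal sketch, with an illustrative variable name:

# modules/docker/main.tf
variable "docker_host_address" {}

provider "docker" {
    host = "tcp://${var.docker_host_address}:2375/"
}

# site.tf
module "docker" {
    source              = "./modules/docker"
    docker_host_address = "${module.aws.ec2_instance_address}"
}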





James Heggs
May 9, 2016, 9:18:12 AM
to Terraform
This post might answer my question:

James Heggs
May 10, 2016, 3:26:01 AM
to Terraform
And further detail on the suggested Terraform adaptations for a n00b like me

Martin Atkins
May 12, 2016, 8:22:48 PM
to Terraform
This is not possible without a special workaround. The workaround is to use the -target argument to plan:

terraform plan -target="module.aws"

The above will generate a plan that applies only the changes for the resources in the AWS module, which presumably includes configuring the Docker daemon on the instance.

Once you've applied that partial plan, you can then do a regular plan/apply to carry out the remainder of the configuration; the EC2 instance will already be created, its output will be populated, and thus the Docker provider will configure properly on the second run.
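
Putting that together, the full sequence would look something like the following (the plan file name is arbitrary, and this assumes the usual plan-to-file/apply-from-file workflow):

# First pass: create just the AWS layer
terraform plan -target="module.aws" -out=aws.tfplan
terraform apply aws.tfplan

# Second pass: everything else, now that the instance address is known
terraform plan
terraform apply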

This problem arises in essentially any situation where multiple layers of infrastructure are being configured at once, since Terraform needs to instantiate the provider in order to plan, but on the first run the configuration is not yet complete enough to do so, and it fails.

There is a proposal under discussion to make Terraform handle this automatically, by effectively applying that -target argument for you on the first run and guiding the user through the process:

Until something like that is implemented, if you want to avoid the -target workaround described above then it's necessary to split the config into multiple separately-applied configurations, possibly using the terraform_remote_state resource to pass outputs from one layer to the next.
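
For reference, a minimal sketch of the consuming side of that approach, assuming the AWS layer's state is stored in S3 under placeholder bucket/key names and exposes the ec2_instance_address output (this uses the resource form of terraform_remote_state from this era of Terraform):

# In the Docker-layer configuration: read the AWS layer's state
resource "terraform_remote_state" "aws" {
    backend = "s3"
    config {
        bucket = "my-terraform-state"
        key    = "aws/terraform.tfstate"
        region = "us-east-1"
    }
}

provider "docker" {
    host = "tcp://${terraform_remote_state.aws.output.ec2_instance_address}:2375/"
}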