Terraform modules + consul


Nigel Magnay

Aug 19, 2016, 7:25:40 AM
to terrafo...@googlegroups.com
I'm trying to do the following - which I'd assumed would be relatively easy, but is proving rather tricky.

We use docker. I want to deploy a docker image for consul, then use that docker image in a consul provider in order to load some config into it.

I have 2 modules, a "consul" module and a "config" module.

In the config module, I have 

variable "consul_address" {}

provider "consul" {
  address    = "${var.consul_address}"
}

resource "consul_key_prefix" "myapp_config" {
 ...
}

I can't just pass "localhost:8500" (which is what I want), as it immediately causes Terraform to try to deploy the consul keys into a container that doesn't exist yet.

In my consul module, I have

output "address" {  
  value = ".....what?....."
}

The two are wired together in the calling Terraform file.
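A rough sketch of that wiring (the module source paths are just illustrative):

module "consul" {
  source = "./consul"
}

module "config" {
  source         = "./config"
  consul_address = "${module.consul.address}"
}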

The problem is: what to set the output address to!

If I set 
value = "localhost:8500"

It's immediately evaluatable, and Terraform attempts to set the keys before the container exists.

There is no real property on the docker resource I can use. The (external) address would be immediately evaluatable anyway.


I'd thought maybe something like

value = "${format("localhost:8500", docker_container.consul.address)}"


would work, but it bombs with:

* consul_key_prefix.myapp_config: Failed to get datacenter from Consul agent: Get http://$%7Bvar.consul_address%7D/v1/agent/self?dc=DC1: dial tcp: lookup ${var.consul_address}: invalid domain name


I can't make the variable depends_on anything, because depends_on only works with resources, and I can't find any way to say "set this variable only after this resource is created" (like a provisioner, perhaps).

Is there a way of doing this?

Martin Atkins

Aug 20, 2016, 1:14:23 AM
to Terraform
The general answer here is that unfortunately Terraform is right now not particularly good at configurations where you want to work with several different layers in one operation... and by "layers" in your case I mean the infrastructure Consul is running on as one layer and Consul itself as another.

A robust-but-somewhat-awkward approach is to split the docker layer and the consul layer into two separate Terraform configurations, and use the terraform_remote_state data source to access the results of the former from the latter. In this setup, you would first apply the docker layer to spin up the Consul cluster, publish its state using Terraform's remote state mechanism, and then apply the consul layer to write the keys to Consul.
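A rough sketch of that split, using the 0.7-era syntax (the backend settings and output name here are illustrative):

# In the docker-layer configuration: expose the Consul address as an output
output "consul_address" {
  value = "${docker_container.consul.ip_address}:8500"
}

# In the consul-layer configuration: read that output back via remote state
data "terraform_remote_state" "docker" {
  backend = "s3"                        # whichever backend the docker layer publishes to
  config {
    bucket = "example-terraform-state"  # illustrative
    key    = "docker-layer.tfstate"
    region = "us-east-1"
  }
}

provider "consul" {
  address = "${data.terraform_remote_state.docker.consul_address}"
}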

A clunkier solution, which is closer to what you're already trying, is to use the -target option so that Terraform works on only a subset of the configuration at a time. On the first run you'd target the docker_container resource that produces your Consul cluster, let Terraform get that set up, and then run Terraform again without -target to complete the rest.
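For example, something along these lines (the exact resource address depends on how your modules are laid out, so treat this as a sketch):

terraform apply -target=module.consul.docker_container.consul   # first run: just the Consul container
terraform apply                                                  # second run: everything else, now that Consul is reachable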

I use both of these approaches in different spots, depending on the situation. I usually tend towards the latter since the issue only really exists during an initial setup "from cold"; once you're doing incremental maintenance of a running system you already have all the parts running and you're just applying updates, and at that point I find it most convenient to have several related parts grouped together into a single configuration. The only place where we universally do the former is a hard boundary we created between "shared infrastructure" that all of our applications use together and the applications themselves; in that case the distinction is useful because the relationship is one-to-many from infrastructure to app, rather than one-to-one as in your situation.

The discussion on github issue #4149 has some additional context on this issue (see the comments below the proposal writeup).

Nigel Magnay

Aug 20, 2016, 5:31:03 AM
to terrafo...@googlegroups.com
That's extremely useful - thanks. I'd done a trawl of the issues on github, but there are rather a lot of them and I hadn't picked up that thread.

I had managed to get it to work with

provider "consul" {
  address    = "${var.consul_address}"
}

resource "template_file" "prefix" {
  template = "localhost:8500"
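  # The rendered value never actually uses this var; referencing the
  # container's IP just forces a dependency, so the "address" output
  # isn't resolved until the container has been created.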
  vars { uuid = "${docker_container.consul.ip_address}" }
}

output "address" {  
 value = "${template_file.prefix.rendered}"
}

That said, terraform apply works, but terraform destroy does not (as it tries to remove the consul keys from the Consul instance that has already been destroyed). I can live with that, but I'm a touch confused as to whether providers can or cannot rely on computed values (i.e. is my example above just going to work 'by chance'?). They clearly appear in the dependency graph - but elsewhere I read that providers must always exist so that the plan can decide, in a one-shot operation, what activities must happen.

Also, I was caught out by modules somewhat - I'd sort of expected that they were there to isolate exactly this kind of layering, but it seems they're really only a namespacing/scoping feature. It'd be nice to be able to declare some sort of sealed/independent module as a "Provider Factory", so the plan could then know that the provider doesn't exist yet and can therefore be assumed to be in a reasonable 'default' state for the purposes of bootstrapping.

Or maybe even a subtype of the consul provider that hides the docker provisioning altogether.




ja...@fpcomplete.com

Aug 20, 2016, 9:15:46 AM
to Terraform


On Friday, August 19, 2016 at 7:25:40 AM UTC-4, Nigel Magnay wrote:
I'm trying to do the following - which I'd assumed would be relatively easy, but is proving rather tricky.

We use docker. I want to deploy a docker image for consul, then use that docker image in a consul provider in order to load some config into it.


FWIW:

I generally split up operations like this into a) creating the resource and b) init/provisioning those resources. In some cases, you need the init/provisioning to be rather intelligent in how it initializes the resource (to play well with each of your valid use cases that spread across time and the lifecycle of the resource as it is used). I will use Terraform to initiate or template the high-level init/provisioning process, but I generally use specialized tools/scripts and generic configuration management tools (saltstack in my case) to ensure the details are taken care of and correct.

The other thing I'd like to point out is the concern about circular dependencies and chicken-and-egg scenarios. When defining a platform like this with Terraform, consul, docker, etc., you're dealing with an Ouroboros: you've got to pick a starting point, and it's generally best to simplify that initial step/state. The number of dependencies for that initial step greatly affects the complexity and reliability of that step/state succeeding.

For example, I picked forming a consul cluster as my initial state/step, and I maintain stability for that layer by reducing dependencies - consul runs as an executable on the system, and is configured / expected to be run on all hosts via the host OS init. No docker, no nomad, etc. One could just as easily pick nomad as an initial step/state if that makes sense for their use. In each deployment at large, the consul leaders are deployed and established first. While other systems may work to come online in parallel, node init will wait for consul to actually come online before configuring nomad and running services/tasks on the deployment. Similarly, I use configuration management tools to manage the state and contents of config files on the host (rather than doing that through Terraform), because the separation of roles and responsibilities makes it easier to work with each layer and suite of tools over time (there are fewer dependency conflicts or circular deps).

Good luck!

Martin Atkins

Aug 20, 2016, 11:38:03 AM
to Terraform

On Saturday, August 20, 2016 at 2:31:03 AM UTC-7, Nigel Magnay wrote:

That said, terraform apply works, but terraform destroy does not (as it tries to remove the consul keys from the Consul instance that has already been destroyed). I can live with that, but I'm a touch confused as to whether providers can or cannot rely on computed values (i.e. is my example above just going to work 'by chance'?). They clearly appear in the dependency graph - but elsewhere I read that providers must always exist so that the plan can decide, in a one-shot operation, what activities must happen.


This is the crux of the issue, really: in principle, providers can participate in the dependency graph, and Terraform will correctly instantiate them in the right order relative to other resources. So you could do something like this, for example:

resource "docker_container" "consul" {
    # ...
}
provider "consul" {
    address = "${docker_container.consul.ip_address}:8500"
}
# ... and then consul_key_prefix, consul_keys, etc

The issue that arises is that, unlike most other configuration objects, providers need to be instantiated in order to refresh and plan as well as to apply. When starting "from cold", Terraform wants to plan for those consul resources and so it needs to instantiate the provider, but since we're still only planning Terraform can't spin up the docker container to get that IP address to interpolate into the provider argument, and so you get an error like you mentioned in your original email where Terraform tries to access consul without actually resolving that interpolation.

The proposal I linked to is one approach to address this, where Terraform would detect the situation and essentially guide the user through the -target workaround I described before... Terraform "knows" what subset of things it can create in the first step, so it can in principle act as if you had used -target on all of those things and let you know that more things would be handled on a subsequent run.

 
Also, I was caught out by modules somewhat - I'd sort of expected that they were there to isolate exactly this kind of layering, but it seems they're really only a namespacing/scoping feature. It'd be nice to be able to declare some sort of sealed/independent module as a "Provider Factory", so the plan could then know that the provider doesn't exist yet and can therefore be assumed to be in a reasonable 'default' state for the purposes of bootstrapping.

Or maybe even a subtype of the consul provider that hides the docker provisioning altogether.


Modules too participate in the graph themselves, and their contents won't be instantiated until all of the module's variables can be populated. As you noticed, modules can't really address the specific problem here because any providers they contain still need to be instantiated during plan. Under the #4149 proposal modules themselves would never actually be "deferred", but individual providers and resources within them could well be. Like you said, modules are mainly a namespacing construct but are also useful as a way to repeat a specific pattern multiple times with slight variations. The resources within still obey the usual rules, and so the elements within a module won't necessarily be created as a single unit... if the dependency graph shows that one part of a module can be created immediately due to having no dependencies on the calling module, Terraform will do that.


 

Nigel Magnay

Aug 26, 2016, 9:57:37 AM
to terrafo...@googlegroups.com
It feels like the more I attempt to use terraform, the more disheartened I get.

Splitting up configuration into modules seems to yield all sorts of unexpected behaviour -- such as 'apply' asking for parameters multiple times. 'terraform graph' shows provider.consul relying on "provider.consul (disabled)" - I can't find any documentation that explains what that means (or why I now seem to have 2 provider.docker items in the graph). Is it inheritance? Defaults? Scoping rules?

Contrary to the docs, my graphs also don't ever seem to show modules in boxes.

It also feels a shame that it still can't pull from or authenticate to private docker registries (though there seem to be PRs languishing that fix this) - and I was also surprised that a docker container with links = ["consul:consul"] didn't figure out there was a dependency, so I had to declare it manually. links = ["consul:${docker_container.consul.name}"] seems to work (though I'm nervous that, of course, that expression can be evaluated successfully without actually starting the container).







Michael Corum

Aug 26, 2016, 10:28:14 AM
to terrafo...@googlegroups.com

I was also disappointed. It seems too many things were started and not finished. The providers idea is a good one, but only if enough of them actually work. The lack of authentication to a Docker registry is a pretty big missed requirement, and I dug in a little and found it was unlikely the PR was ever going to land, at least not in a timeframe we could use. I had to switch to another tool. I'll take a look in another year to see if some of the gaps are fixed. Some of the best ideas I've seen are in this tool - the implementation just needs to happen.

 

Mike


David Adams

Aug 26, 2016, 10:29:05 AM
to terrafo...@googlegroups.com
For what it's worth, Nigel, each module needs to define its provider(s) over again. The providers aren't shared between modules. That would probably explain the parameter prompts.
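In other words, something along these lines (the module path and variable name are just illustrative):

# root configuration: pass the address into the module as a plain variable
module "config" {
  source         = "./config"
  consul_address = "${module.consul.address}"
}

# inside ./config: declare the provider again, fed from that variable
variable "consul_address" {}

provider "consul" {
  address = "${var.consul_address}"
}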

Nigel Magnay

Aug 26, 2016, 10:52:15 AM
to terrafo...@googlegroups.com
On Fri, Aug 26, 2016 at 3:29 PM, David Adams <dave...@gmail.com> wrote:
For what it's worth, Nigel, each module needs to define its provider(s) over again. The providers aren't shared between modules. That would probably explain the parameter prompts.

Ah! That's really useful information, thanks.

So providers are entirely module-local? Can modules use providers declared in their 'parent'? (I assume that the 'parent' / sibling modules can't use providers declared in the module?)



 

Terrac Skiens

Nov 5, 2017, 11:29:52 PM
to Terraform

This talk from HashiConf 2017 helped me understand module inputs and outputs better:
https://www.youtube.com/watch?v=wgzgVm7Sqlk