Terraform plan wants to add already existing resources


Franck Ratier

Oct 11, 2016, 12:00:59 PM
to Terraform
Hello,

We are running into behaviour we don't understand when running terraform plan: Terraform doesn't seem to see the existing resources and wants to (re)create almost everything:

Plan: 146 to add, 5 to change, 8 to destroy.

If we try and target one of these (existing) resources with e.g. terraform apply -target=module.logs_bucket.aws_s3_bucket.mod then AWS returns an error because the resource already exists.

Adding a totally new resource works as expected (targeting only that resource), and then immediately running terraform plan again doesn't plan anything for that resource, which is the correct behaviour.

Also, the state file refers to version 0.7.3, whereas I'm using 0.7.4 and my colleague 0.7.3. When we updated Terraform in the past we usually got an error if one of us had already updated the state with a newer version, but that's not happening at the moment: both of us can run a plan (and we get the same errors).

We are using an s3 remote state with Terragrunt but the behaviour is the same when using Terraform directly.

Before getting this error we had a bunch of the following errors:

* aws_s3_bucket_object.script: PreconditionFailed: Precondition Failed status code: 412, request id: 8AC4EE3A8101C468

and the weird plan came after we emptied the buckets that were causing these errors.

We suspect we've put our Terraform state file in a corrupted err.. state but we can't figure out how this happened.

Has anyone run into something similar? Any help would be highly appreciated!

Thanks,

Franck

Martin Atkins

Oct 13, 2016, 12:12:08 PM
to Terraform
Hi Franck,

I think there are a couple of different things going on here that I want to unpick one at a time...

First: the reason why you are both able to run terraform plan on different Terraform versions. As of 0.7, the plan command only refreshes the state in-memory, rather than persisting the refreshed state to disk and remote storage. Thus when you run on 0.7.4, Terraform is internally marking the state as being updated on 0.7.4 but this never gets written out to persistent storage. Were you to succeed in applying a change, or if you were to explicitly run terraform refresh, you would see your colleague getting the error message you were seeing before. This change was intended to allow those doing Terraform upgrades to do a "dry run" plan on the new version before upgrading for real, so the user has a chance to notice any issues caused by the upgrade.
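If you want to see this difference for yourself, here's a rough sketch (assuming a 0.7-style setup where the remote state is cached locally under .terraform/terraform.tfstate; adjust the path if you're using purely local state):

  grep terraform_version .terraform/terraform.tfstate   # version recorded in the persisted state
  terraform plan                                         # refreshes only in memory
  grep terraform_version .terraform/terraform.tfstate   # unchanged
  terraform refresh                                      # persists the refreshed state
  grep terraform_version .terraform/terraform.tfstate   # now records the version you ran refresh with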


Second: there are a few different reasons why Terraform might think it needs to re-create your resources, but they all come down to the resources no longer being tracked in Terraform's state. One way I've seen this happen before is inadvertently switching AWS regions on the provider, which makes Terraform see all of the regionalized resources as having been "deleted". Another way it can happen is if an error occurs during a create and the provider implementation doesn't correctly write out partial state, so Terraform thinks the object doesn't exist even though it actually did get created in the backend service.
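
To illustrate the region point, a minimal sketch (the region value here is just an example, not taken from your config): pinning the region explicitly in the provider block means an accidental switch (e.g. a changed AWS_DEFAULT_REGION or profile) can't silently change which set of resources Terraform is looking at:

  provider "aws" {
    region = "eu-west-1"   # example value; pin whichever region your resources actually live in
  }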

You could start debugging this by using the terraform state show command to inspect the resources that Terraform is claiming do not exist. If they do appear to exist in the state, I would compare their ids to what currently exists in AWS (or whatever other backend you are using) and make sure they match. If they don't, it's likely Terraform has ended up tracking an older instance of the resource and has lost track of the current version due to an error on a previous run, as I was describing above. If that has happened then it's almost certainly a bug, so I'd love to hear more details so we can create a GitHub issue for it if one doesn't already exist.
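
For example, something along these lines (the AWS CLI call is just one way of checking on the AWS side, assuming you have it installed):

  terraform state show module.logs_bucket.aws_s3_bucket.mod
  aws s3api head-bucket --bucket <bucket-name-from-state>   # a 404 here would suggest the bucket Terraform is tracking no longer exists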