Prevent unwanted deletion of AWS resources


egul...@gmail.com

Sep 29, 2016, 1:41:51 PM
to Terraform


Hello everyone,

I have AWS infrastructure provisioned using Terraform, and I'm looking for a way to block or limit accidental or unwanted changes and deletion of resources in a multi-user environment (meaning multiple people changing the code).


Is there a way to do this? Has anyone tried something like that?



Thanks

Andrew Langhorn

Sep 29, 2016, 2:14:11 PM
to terrafo...@googlegroups.com
Some resources, like aws_instance, can make use of API termination protection to prevent this. Aside from that, you could architect for failure by using auto-scaling groups and load balancers rather than individual instances, so that if an instance fails, a new one spins up automatically.
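
Roughly, the termination-protection part looks like this in Terraform (a minimal sketch; the AMI ID and names are placeholders, so check the aws_instance docs for your provider version):

    resource "aws_instance" "app" {
      ami           = "ami-12345678"   # placeholder AMI ID
      instance_type = "t2.micro"

      # EC2 termination protection: the API refuses termination requests
      # until this flag is turned off again.
      disable_api_termination = true
    }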

egul...@gmail.com

Sep 29, 2016, 3:34:59 PM
to Terraform, andrew....@thoughtworks.com

Thank you for your answer, but I'm not talking about instance failures. I'm talking about 'human error', where someone wipes out resources either intentionally or by mistake. I'm thinking of adding MFA to the Terraform workflow, but maybe someone has had a similar scenario with a different solution.

Andrew Langhorn

Sep 29, 2016, 5:04:15 PM
to terrafo...@googlegroups.com
The approach I suggested works equally well regardless of the failure scenario; I work on the principle that infrastructure should be rebuildable repeatedly, quickly, and consistently. Architecting with load balancers and auto-scaling groups (or the local equivalents, depending on your cloud) is the typical best practice for this.

It's the whole pets vs. cattle thing: if your stuff is so important that it can't be rebuilt easily and automatically, then it's a pet. Instead, you should aim for cattle: commodity things that you don't care about individually and which rebuild and redeploy automatically on failure.
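
As a rough sketch of the cattle approach (resource names, AMI and availability zones are placeholders; these are the classic ELB and launch-configuration resources that were current at the time):

    resource "aws_launch_configuration" "app" {
      name_prefix   = "app-"
      image_id      = "ami-12345678"   # placeholder AMI
      instance_type = "t2.micro"

      lifecycle {
        create_before_destroy = true
      }
    }

    resource "aws_elb" "app" {
      name               = "app-elb"
      availability_zones = ["us-east-1a", "us-east-1b"]

      listener {
        instance_port     = 80
        instance_protocol = "http"
        lb_port           = 80
        lb_protocol       = "http"
      }
    }

    resource "aws_autoscaling_group" "app" {
      launch_configuration = "${aws_launch_configuration.app.name}"
      availability_zones   = ["us-east-1a", "us-east-1b"]
      min_size             = 2
      max_size             = 4
      load_balancers       = ["${aws_elb.app.name}"]
    }

If an instance dies, the auto-scaling group replaces it without anyone having to do anything.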

egul...@gmail.com

Sep 29, 2016, 5:21:08 PM
to Terraform, andrew....@thoughtworks.com
OK, so here's a scenario:

1. You write code and provision infrastructure (for example: vpc.tf, rds.tf, app1.tf) with an ELB, ASG, etc.
2. Someone adds app2.tf, something goes wrong, and they run:

 $ terraform destroy

which wipes out the whole infrastructure. I'm looking for locks or something similar that will prevent this from happening by:

      - allowing users to terminate only the resources they created
      - denying termination of newly created resources at all times.


This is what I'm looking to prevent from happening.

David Maze

Sep 29, 2016, 8:37:10 PM
to Terraform, andrew....@thoughtworks.com
On Thursday, September 29, 2016 at 5:21:08 PM UTC-4, egul...@gmail.com wrote:
OK, so here's a scenario:

1. You write code and provision infrastructure (for example: vpc.tf, rds.tf, app1.tf) with an ELB, ASG, etc.
2. Someone adds app2.tf, something goes wrong, and they run:

 $ terraform destroy

For exactly this reason, I deploy each of these parts as a separate Terraform project, and the more downstream parts import remote state from the more upstream parts. So app2/main.tf gets remote state from the VPC and RDS parts, but it can't change them, and even if I do 'terraform destroy' from the app2 directory, I've only destroyed the things I meant to.
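
A rough sketch of that layout, assuming S3 remote state and made-up bucket, key and output names (this is the pre-0.12 terraform_remote_state syntax that was current at the time):

    # app2/main.tf -- can read the VPC project's outputs, but cannot change them
    data "terraform_remote_state" "vpc" {
      backend = "s3"
      config {
        bucket = "my-terraform-state"     # placeholder bucket
        key    = "vpc/terraform.tfstate"  # placeholder key
        region = "us-east-1"
      }
    }

    resource "aws_instance" "app2" {
      ami           = "ami-12345678"      # placeholder AMI
      instance_type = "t2.micro"
      subnet_id     = "${data.terraform_remote_state.vpc.private_subnet_id}"  # assumes the VPC project exports this output
    }

Running 'terraform destroy' from the app2 directory then only touches the resources declared there; the VPC and RDS state is read-only from its point of view.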

egul...@gmail.com

Sep 30, 2016, 6:50:51 PM
to Terraform, andrew....@thoughtworks.com

Is this basically using modules, or are you doing it a bit differently? Either way, would you please share a link to a post or resource that I could read and test?


Much appreciated,
Thank you

James McKay

Oct 3, 2016, 5:11:59 AM
to Terraform
Is lifecycle { prevent_destroy = true } what you are looking for? (See the Terraform documentation on lifecycle blocks for more info.)

It's not foolproof -- someone could always end up removing the directive -- but it will otherwise stop you from applying a plan that forces re-creation of a database, EBS volume or S3 bucket, for example.
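
For example, a minimal sketch on an RDS instance (the arguments shown are illustrative placeholders):

    resource "aws_db_instance" "main" {
      identifier        = "prod-db"      # placeholder
      engine            = "postgres"
      instance_class    = "db.t2.micro"
      allocated_storage = 20
      username          = "admin"
      password          = "change-me"    # placeholder; use a variable in practice

      lifecycle {
        # Any plan that would destroy or replace this resource fails with an error.
        prevent_destroy = true
      }
    }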