AWS ECS auth with AppRole and S3


Aaron Suggs

Sep 13, 2016, 11:32:12 AM
to vault...@googlegroups.com
I'm interested in having AWS ECS services authenticate to Vault (like issue #1298).

Here's an idea for how to implement ECS auth using recent features like Vault's AppRole auth backend and per-task IAM roles on ECS.

I'm eager to hear if folks think it's a reasonable approach.

(Big caveat: I'm just learning Vault, so apologies in advance for my inevitable misunderstandings.)

How it works

When you create an IAM role, say, myProductionService, you create a corresponding Vault AppRole with the same name. You then publish both the AppRole role-id and a secret-id as JSON to an S3 path named after the role, say s3://my-secrets-bucket/approles/myProductionService.json.

(In our case, we can accomplish this using CloudFormation and a lambda-backed custom resource.)
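
As a minimal sketch of the provisioning step (role names, policies, and TTLs here are assumptions, not from a real deployment), the AppRole creation and S3 publish might look like:

```shell
# Enable the AppRole auth backend (once per Vault cluster).
vault auth enable approle

# Create an AppRole whose name matches the IAM role.
vault write auth/approle/role/myProductionService \
    token_ttl=1h token_max_ttl=4h token_policies=my-production-service

# Fetch the role-id and mint an initial secret-id.
ROLE_ID=$(vault read -field=role_id auth/approle/role/myProductionService/role-id)
SECRET_ID=$(vault write -field=secret_id -f auth/approle/role/myProductionService/secret-id)

# Publish both as JSON to the per-role S3 path, with server-side encryption.
printf '{"role_id":"%s","secret_id":"%s"}\n' "$ROLE_ID" "$SECRET_ID" \
  | aws s3 cp - s3://my-secrets-bucket/approles/myProductionService.json --sse AES256
```

In the CloudFormation setup described above, the equivalent calls would live in the lambda-backed custom resource rather than a shell script.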

When an ECS task starts with the myProductionService IAM role, the container reads the role-id and secret-id from S3, and exchanges them for a Vault token.
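
The container-side exchange could be sketched like this (assuming `jq` is available in the image and `VAULT_ADDR` is set; the bucket path matches the example above):

```shell
# Read the credentials from S3 using the task's IAM role.
CREDS=$(aws s3 cp s3://my-secrets-bucket/approles/myProductionService.json -)
ROLE_ID=$(echo "$CREDS" | jq -r .role_id)
SECRET_ID=$(echo "$CREDS" | jq -r .secret_id)

# Exchange the role-id/secret-id pair for a Vault token.
VAULT_TOKEN=$(vault write -field=token auth/approle/login \
    role_id="$ROLE_ID" secret_id="$SECRID_ID" 2>/dev/null \
  || vault write -field=token auth/approle/login \
    role_id="$ROLE_ID" secret_id="$SECRET_ID")
export VAULT_TOKEN
```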

A periodic task (say, AWS Lambda function) can rotate the secret in Vault, and update the S3 file.
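
A rotation job (whether a Lambda function or a cron task) might follow this shape; destroying old secret-id accessors after a grace period is an assumption about the desired policy, not something the pattern requires:

```shell
# Mint a fresh secret-id; previously issued secret-ids remain valid until
# destroyed or expired, so running tasks keep working during rotation.
ROLE_ID=$(vault read -field=role_id auth/approle/role/myProductionService/role-id)
NEW_SECRET_ID=$(vault write -field=secret_id -f \
    auth/approle/role/myProductionService/secret-id)

# Overwrite the published JSON so newly started tasks pick up the new pair.
printf '{"role_id":"%s","secret_id":"%s"}\n' "$ROLE_ID" "$NEW_SECRET_ID" \
  | aws s3 cp - s3://my-secrets-bucket/approles/myProductionService.json --sse AES256
```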

Locking down S3 access

The main requirement of the threat model is to guarantee that only tasks running with the myProductionService IAM role can read the S3 path containing the role-id and secret-id.

We can accomplish this with an S3 bucket policy that denies access to any principal other than the myProductionService IAM role.

{
  "Version":"2012-10-17",
  "Statement":[
    {
      "Effect":"Deny",
      "Principal": "*",
      "Action":["s3:GetObject"],
      "Resource":["arn:aws:s3:::my-secrets-bucket/approles/myProductionService.json"],
      "Condition": { "StringNotEquals": { "aws:PrincipalArn": "arn:aws:iam::myaccountid:role/myProductionService" } }
    }
  ]
}

Unfortunately, we'd have to append a statement to this bucket policy for each new role and resource, and bucket policies are limited to 20 KB. While a bucket policy works for a few roles, it doesn't scale to potentially many dynamically created ones.

This AWS blog post describes some best practices for locking down S3 access.

In particular, we can:

1. Use an S3 bucket policy to restrict access to a particular VPC endpoint and CIDR
2. Require SSL & server-side encryption when uploading objects
3. Consider using AWS KMS to encrypt the contents server-side, where myProductionService has permission to decrypt. This gives us additional audit logs in CloudTrail, at extra cost & complexity.
4. Audit S3 access logs to check that only clients authenticated as myProductionService download the path.
5. Use AWS Config to audit IAM policies to make sure that other IAM roles can't access "arn:aws:s3:::my-secrets-bucket/approles/*".
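
For item 3, the SSE-KMS variant of the upload is a one-flag change (the key alias here is hypothetical); each subsequent read then requires kms:Decrypt on that key and leaves a CloudTrail entry:

```shell
# Upload with SSE-KMS instead of SSE-S3 so decryption is gated on the
# CMK's key policy and audited in CloudTrail.
aws s3 cp myProductionService.json \
  s3://my-secrets-bucket/approles/myProductionService.json \
  --sse aws:kms --sse-kms-key-id alias/myProductionService-secrets
```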

AWS Lambda

A nice side effect of this approach is that it's generic to other AWS services like AWS Lambda. A Lambda function running in your VPC with the myProductionService IAM role could also get a Vault token.

Conclusion

I know some Vault features are in the works to make Docker auth simpler & more robust (yay!), but this is a pattern that could work today, and would hopefully be easy to refactor toward future workflows.

Again, I'd love to hear what other people think about this. If it works well, I'd be glad to publish example tooling (such as the lambda function) as open source.

Cheers,
Aaron

Jeff Mitchell

Sep 13, 2016, 12:48:46 PM
to vault...@googlegroups.com
Hi Aaron,

I think this pattern looks pretty reasonable! There are a few reasons
we (HC) don't ourselves promote this kind of workflow:

1) Anything involving multiple external services kind of gets out of
the wheelhouse of Vault and more into "workflows around Vault".
2) We generally want, from Vault directly, to provide auth
controllable per-user/per-instance rather than per class of instance.
Of course, AppRole is designed to let you do either/both, but it
doesn't require a per-class approach.

That all said, the fact that you can build on what Vault provides to
come up with workable, reasonably secure flows for your particular
environment and needs is great, and means we're doing our job
decently.

Although you are interested in authenticating other services (e.g.
Lambda), it may also be of interest to follow
https://github.com/aws/amazon-ecs-agent/issues/451 -- the issue was
started by AWS personnel because they thought the way we're doing EC2
is interesting and want to look into providing enough unique details
in ECS to allow similar workflows. One hopes that if EC2 and ECS go
one way, Lambda might follow suit -- probably not with an agent, but
with a way to uniquely identify the task and auth for a token.

Best,
Jeff