A non-existent bucket, prod-registry-k8s-io-eu-west-2.s3.dualstack.eu-west-2.amazonaws.com, was incorrectly referenced by the OCI proxy, and a security bug reporter was able to create the bucket.
Because the registry routes around missing content in mirrors, and because content is synchronized between regions via S3 cross-region replication after being written to a single primary region, we did not catch that this bucket did not in fact exist.
Because of the design of the registry, only content-addressed blobs are served from mirror URLs (such as S3 buckets). When clients retrieve these blobs, they do so by the digest (typically sha256) of the content, and they check that the downloaded content matches that digest for security and integrity.
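For illustration, the check looks roughly like this (a minimal Go sketch, not the actual client or registry code; the mirror URL is a placeholder, and the digest shown is just the well-known sha256 of the empty string):

package main

import (
	"crypto/sha256"
	"fmt"
	"io"
	"net/http"
)

// fetchBlob downloads a blob from a mirror URL and verifies that its
// sha256 digest matches the digest the client asked for. A mismatch is
// rejected outright, so a bad mirror can at worst cause a failed pull.
func fetchBlob(url, expectedDigest string) ([]byte, error) {
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()

	data, err := io.ReadAll(resp.Body)
	if err != nil {
		return nil, err
	}

	got := fmt.Sprintf("sha256:%x", sha256.Sum256(data))
	if got != expectedDigest {
		return nil, fmt.Errorf("digest mismatch: got %s, want %s", got, expectedDigest)
	}
	return data, nil
}

func main() {
	// Placeholder URL and digest: a real client gets both from a
	// manifest served by the source-of-truth registry.
	const digest = "sha256:e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"
	blob, err := fetchBlob("https://example-mirror.s3.amazonaws.com/blobs/"+digest, digest)
	if err != nil {
		fmt.Println("pull failed:", err)
		return
	}
	fmt.Printf("verified %d bytes\n", len(blob))
}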
The digests to download come from the manifest API calls, which are NOT served from the S3 mirrors; those are served only by the source-of-truth Artifact Registry instances backing registry.k8s.io, which have additional security controls in place.
For more on this, see the diagram and docs in request-handling.md.
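Very roughly, the split looks like this (a simplified Go sketch of the design described in request-handling.md, not the production code; the hostnames and path pattern are illustrative):

package main

import (
	"log"
	"net/http"
	"regexp"
)

// Blob requests are content-addressed: /v2/<name>/blobs/sha256:<hex>.
var blobPath = regexp.MustCompile(`^/v2/.+/blobs/sha256:[0-9a-f]{64}$`)

func handler(w http.ResponseWriter, r *http.Request) {
	// Placeholder endpoints; the real registry selects a mirror by
	// client region and routes around mirrors missing content.
	const upstream = "https://source-of-truth.example.com"
	const mirror = "https://regional-mirror.example.com"

	if blobPath.MatchString(r.URL.Path) {
		// Safe to mirror: the client verifies the blob digest itself.
		http.Redirect(w, r, mirror+r.URL.Path, http.StatusTemporaryRedirect)
		return
	}
	// Manifests name the digests clients will trust, so they are only
	// ever served from the source of truth, never from mirrors.
	http.Redirect(w, r, upstream+r.URL.Path, http.StatusTemporaryRedirect)
}

func main() {
	http.HandleFunc("/", handler)
	log.Fatal(http.ListenAndServe(":8080", nil))
}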
Had an attacker registered this bucket, they could have created a regional denial of service by uploading content under blob digest keys that did not match those digests; clients routed to this regional mirror would then fail the digest checks on their downloads and error out. All major container registry clients perform these checks.
Thankfully, this missing bucket was caught and reported to Kubernetes' bug bounty program / the Kubernetes Security Response Committee by Nicolas Chatelain (nic...@chatelain.me), who created and held the bucket for us until we could respond.
No users should have been impacted by this.
Again, the registry is designed to limit a content-mirror compromise to a DoS vector by using content mirrors (currently S3 buckets) only for the bandwidth-intensive but relatively insensitive task of serving content blobs.
Upon receiving the report, we removed this bucket from the registry and quickly rolled the change out to both the staging and production instances.
We confirmed that ALL other buckets referenced are in fact registered under the correct project-controlled AWS account.
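For anyone wanting to run a similar audit, something along these lines works with the AWS SDK for Go v2 (a sketch; the bucket names and account ID below are placeholders, not ours). S3's HeadBucket call accepts an expected bucket owner, so it distinguishes "missing" from "owned by someone else":

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/s3"
)

func main() {
	cfg, err := config.LoadDefaultConfig(context.Background())
	if err != nil {
		log.Fatal(err)
	}
	client := s3.NewFromConfig(cfg)

	// Hypothetical values: the real bucket list and account ID differ.
	const accountID = "123456789012"
	buckets := []string{
		"prod-registry-k8s-io-us-east-2",
		"prod-registry-k8s-io-eu-west-2",
	}

	for _, b := range buckets {
		// HeadBucket with ExpectedBucketOwner fails if the bucket is
		// missing (404) or owned by another account (403).
		_, err := client.HeadBucket(context.Background(), &s3.HeadBucketInput{
			Bucket:              aws.String(b),
			ExpectedBucketOwner: aws.String(accountID),
		})
		if err != nil {
			fmt.Printf("FAIL %s: %v\n", b, err)
			continue
		}
		fmt.Printf("ok   %s\n", b)
	}
}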
Earlier in the project, very few people had access to the AWS accounts. Now everyone operating the registry has at least read-only AWS console access, so going forward anyone can manually confirm that buckets exist in the account.
We added automated tests, required on pull requests, to ensure that at least one well-known content blob is available in all referenced mirrors. When we expand to new regions and backends, these tests should catch hosts that do not yet exist.
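For a rough idea of what such a test does (a sketch, not the actual test code; the digest, key layout, and mirror list here are illustrative assumptions):

package mirrors_test

import (
	"fmt"
	"net/http"
	"testing"
)

// Placeholder digest: the sha256 of the empty string, standing in for a
// blob known to be replicated to every mirror.
const knownBlobDigest = "sha256:e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"

// Hypothetical mirror endpoints; the real list would come from the
// registry's own configuration so it cannot drift from production.
var mirrorBaseURLs = []string{
	"https://prod-mirror-region-a.s3.dualstack.region-a.amazonaws.com",
	"https://prod-mirror-region-b.s3.dualstack.region-b.amazonaws.com",
}

func TestWellKnownBlobInAllMirrors(t *testing.T) {
	for _, base := range mirrorBaseURLs {
		// The key layout is an assumption for illustration.
		url := fmt.Sprintf("%s/blobs/%s", base, knownBlobDigest)
		resp, err := http.Head(url)
		if err != nil {
			t.Errorf("%s: request failed: %v", base, err)
			continue
		}
		resp.Body.Close()
		if resp.StatusCode != http.StatusOK {
			// A 404 here means missing content, or worse, a bucket
			// that does not exist at all.
			t.Errorf("%s: got HTTP %d, want 200", base, resp.StatusCode)
		}
	}
}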
Thank you especially to Nicolas Chatelain (nic...@chatelain.me) for reporting this to the Kubernetes Security Response Committee and holding onto the bucket until we could respond.
Thank you as well to @SaranBalaji90 from the SRC for meeting with me and @dims over the weekend to assess the situation and our response, and for continuing to liaise.
If you think you've found a similar issue, please also file a report at:
https://kubernetes.io/docs/reference/issues-security/security/#report-a-vulnerability