cc @kubernetes/sig-storage-pr-reviews
—
You are receiving this because you are on a team that was mentioned.
Reply to this email directly, view it on GitHub, or mute the thread.
@gnufied Are we still waiting for someone to comment on this? Or should I have a look at how to move the code into pv_controller.go?
@scholzj pushed 1 commit.
Hi @gnufied
I adapted the PR as you suggested. I moved the tagging of the volumes into the pv_controller so that it also works outside of AWS. The new name of the annotation is `volume.beta.kubernetes.io/additional-tags`. Since I moved the tagging outside of the AWS cloud provider, I thought I should also move the utility function for parsing the tags to a higher level.
On my local host all tests are passing and it also seems to work live in AWS. I haven't tried any other providers.
I would appreciate it if you could have a look and let me know if you have any comments. It would be great to get this solved and merged.
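For illustration, assuming the annotation value uses the same comma-separated `key=value` format as the existing ELB additional-resource-tags annotation (the actual parser in this PR may differ), the shared parsing utility could look roughly like this:

```go
package main

import (
	"fmt"
	"strings"
)

// parseAdditionalTags parses an annotation value of the form
// "key1=value1,key2=value2" into a map. The format is an assumption
// mirroring the existing aws-load-balancer-additional-resource-tags
// annotation; malformed pairs are silently skipped here, though a real
// implementation might surface an event or error instead.
func parseAdditionalTags(value string) map[string]string {
	tags := map[string]string{}
	for _, pair := range strings.Split(value, ",") {
		kv := strings.SplitN(strings.TrimSpace(pair), "=", 2)
		if len(kv) == 2 && kv[0] != "" {
			tags[kv[0]] = kv[1]
		}
	}
	return tags
}

func main() {
	tags := parseAdditionalTags("team=storage, cost-center=1234")
	fmt.Println(tags["team"], tags["cost-center"])
}
```

Keeping this in a shared `pkg/util/tags`-style package (rather than inside the AWS provider) is what lets other cloud providers reuse it.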
Thanks
Jakub
I have yet to review the code in detail (I just returned from vacation). @justinsb does this meet the goals of feature request kubernetes/features#300?
I guess at some point we will have to migrate these annotations from beta to GA, but other than that the idea looks pretty sound to me.
@justinsb commented on this pull request.
In pkg/cloudprovider/providers/aws/aws_loadbalancer.go:
> @@ -29,39 +29,11 @@ import ( "k8s.io/api/core/v1" "k8s.io/apimachinery/pkg/types" "k8s.io/apimachinery/pkg/util/sets" + addtionaltags "k8s.io/kubernetes/pkg/util/tags"
nit: typo
In pkg/controller/volume/persistentvolume/pv_controller.go:
> @@ -38,6 +38,7 @@ import ( "k8s.io/kubernetes/pkg/controller/volume/events" "k8s.io/kubernetes/pkg/util/goroutinemap" "k8s.io/kubernetes/pkg/util/goroutinemap/exponentialbackoff" + addtionaltags "k8s.io/kubernetes/pkg/util/tags"
typo again
Sorry I missed this earlier.
I actually really like this, and given we do this for ELBs we should do this for EBS volumes also.
We could do this just for AWS without much further discussion, IMO. But if we want to make it cross-cloud, that opens up a huge can of worms. What do we do about clouds that don't support tags, or that have different semantics for allowed keys/values, etc.?
If this goes back to AWS-only I think we can get it in, and then we can discuss some sort of cross-cloud design for 1.9 - sig-cloud should be up and running soon.
/ok-to-test
@scholzj: The following tests failed, say /retest to rerun them all:
| Test name | Commit | Details | Rerun command |
|---|---|---|---|
| pull-kubernetes-e2e-gce-etcd3 | 4759e41 | link | /test pull-kubernetes-e2e-gce-etcd3 |
| pull-kubernetes-bazel-test | 8317f9d | link | /test pull-kubernetes-bazel-test |
Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.
@scholzj: The following tests failed, say /retest to rerun them all:
| Test name | Commit | Details | Rerun command |
|---|---|---|---|
| pull-kubernetes-bazel-test | 8317f9d | link | /test pull-kubernetes-bazel-test |
| pull-kubernetes-verify | 8317f9d | link | /test pull-kubernetes-verify |
| pull-kubernetes-e2e-gce-bazel | 8317f9d | link | /test pull-kubernetes-e2e-gce-bazel |
@scholzj pushed 1 commit.
@justinsb I fixed the typos - thanks for noticing them.
If needed, it should be easy to roll back to the original AWS-only version. If it helps, we can also use this PR to merge the more straightforward AWS-only version, and then I can open a new PR with the version for all cloud providers.
@scholzj: The following tests failed, say /retest to rerun them all:
| Test name | Commit | Details | Rerun command |
|---|---|---|---|
| pull-kubernetes-verify | 8317f9d | link | /test pull-kubernetes-verify |
| pull-kubernetes-e2e-gce-bazel | 8317f9d | link | /test pull-kubernetes-e2e-gce-bazel |
| pull-kubernetes-bazel-test | eefcff6 | link | /test pull-kubernetes-bazel-test |
@scholzj: The following tests failed, say /retest to rerun them all:
| Test name | Commit | Details | Rerun command |
|---|---|---|---|
| pull-kubernetes-bazel-test | eefcff6 | link | /test pull-kubernetes-bazel-test |
| pull-kubernetes-verify | eefcff6 | link | /test pull-kubernetes-verify |
| pull-kubernetes-kubemark-e2e-gce | eefcff6 | link | /test pull-kubernetes-kubemark-e2e-gce |
@scholzj: The following tests failed, say /retest to rerun them all:
| Test name | Commit | Details | Rerun command |
|---|---|---|---|
| pull-kubernetes-bazel-test | eefcff6 | link | /test pull-kubernetes-bazel-test |
| pull-kubernetes-verify | eefcff6 | link | /test pull-kubernetes-verify |
| pull-kubernetes-kubemark-e2e-gce | eefcff6 | link | /test pull-kubernetes-kubemark-e2e-gce |
| pull-kubernetes-e2e-kops-aws | eefcff6 | link | /test pull-kubernetes-e2e-kops-aws |
@scholzj pushed 1 commit.
@scholzj: The following tests failed, say /retest to rerun them all:
| Test name | Commit | Details | Rerun command |
|---|---|---|---|
| pull-kubernetes-verify | eefcff6 | link | /test pull-kubernetes-verify |
| pull-kubernetes-kubemark-e2e-gce | eefcff6 | link | /test pull-kubernetes-kubemark-e2e-gce |
| pull-kubernetes-e2e-kops-aws | eefcff6 | link | /test pull-kubernetes-e2e-kops-aws |
| pull-kubernetes-e2e-gce-bazel | e24bbe7 | link | /test pull-kubernetes-e2e-gce-bazel |
@scholzj: The following tests failed, say /retest to rerun them all:
| Test name | Commit | Details | Rerun command |
|---|---|---|---|
| pull-kubernetes-verify | eefcff6 | link | /test pull-kubernetes-verify |
| pull-kubernetes-kubemark-e2e-gce | eefcff6 | link | /test pull-kubernetes-kubemark-e2e-gce |
| pull-kubernetes-e2e-kops-aws | e24bbe7 | link | /test pull-kubernetes-e2e-kops-aws |
@scholzj: The following tests failed, say /retest to rerun them all:
| Test name | Commit | Details | Rerun command |
|---|---|---|---|
| pull-kubernetes-kubemark-e2e-gce | eefcff6 | link | /test pull-kubernetes-kubemark-e2e-gce |
| pull-kubernetes-e2e-gce-bazel | e24bbe7 | link | /test pull-kubernetes-e2e-gce-bazel |
| pull-kubernetes-e2e-kops-aws | e24bbe7 | link | /test pull-kubernetes-e2e-kops-aws |
| pull-kubernetes-verify | e24bbe7 | link | /test pull-kubernetes-verify |
@scholzj: The following tests failed, say /retest to rerun them all:
| Test name | Commit | Details | Rerun command |
|---|---|---|---|
| pull-kubernetes-e2e-gce-bazel | e24bbe7 | link | /test pull-kubernetes-e2e-gce-bazel |
| pull-kubernetes-e2e-kops-aws | e24bbe7 | link | /test pull-kubernetes-e2e-kops-aws |
| pull-kubernetes-verify | e24bbe7 | link | /test pull-kubernetes-verify |
| pull-kubernetes-kubemark-e2e-gce | e24bbe7 | link | /test pull-kubernetes-kubemark-e2e-gce |
@kubernetes/sig-gcp-feature-requests @kubernetes/sig-azure-misc this makes sense on AWS. But I'm less sure it makes sense on GCP / Azure (or that it should work the same way). Thoughts?
@scholzj PR needs rebase
@justinsb Since there seems to be no feedback - should I just revert this to the original AWS-only version so that we can merge it?
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
/remove-lifecycle stale
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
Closed #49390.
/remove-lifecycle rotten
/reopen
/assign @justinsb
@emmanuel I can rebase / update the PR if needed. But merging it is not up to me.
It would be great to have this functionality; it would help us categorise our resources.
A controller and mutating-webhook server that could help temporarily: https://github.com/almariah/k8s-metadata-injector
So is this dead, or is some work still going on? We have the same requirement on Azure, and in general it seems reasonable for any cloud provider.
how is this not a thing yet?
This would be a really good feature to help us with cost management
It would be great to have this for our needs as well, since it already seems possible for other resources.
Guys, the work is already done; it just needs a rebase and merge. Please have a look!
Any update on this PR? This is much needed.
@joel-schaal Not sure there is value in re-opening it if nobody reviews / merges this. I'm fine with rebasing / updating it, but it would be good to first know that it will not be closed again in another 30 days 🤷 .
@scholzj can we just ping some folks in for reviews and try to get this in? There's a lot to read through in the contributing guidelines (I'll take a look later) so I'm not sure if that's taboo, but it'd be nice to guarantee some eyes on this.
After looking through a few conflicting contributing related docs, I think our best bet is to just reopen, @ mention the reviewers, wait a week or two and either ping them on slack, or see if we can replace them with alternate relevant owners.
@scholzj Please re-open this and rebase (or update if needed). It very much seems like this just got lost after the first round of reviews. Neither of the required reviewers had blocking comments, so I won't feel too bad about pinging them again for a final readthrough/merge since it should hopefully be very quick for them. Both reviewers also seem to still be quite active with k8s, so I'd be surprised if we couldn't get ahold of them.
Just wanted to add that hopefully this feature gets added, we need to use AWS Backup Service and adding custom tags to the EBS volumes would be great for that.
Ugh, +1? :P Can someone make this work? This has been on the desired-features list for 3 years across a variety of issues.
@k8s-ci-robot Can somebody please reopen this? It's so necessary; I'm not sure how there isn't an official solution for this yet.
@gnufied @justinsb Are there any issues with this PR? Could we merge it at least for AWS? It's a pity that there's an elegant solution for adding custom tags to ELBs, but none for EBS.
Yeah, this is like a bad joke: 5 years later and it's still as frustrating as it was back then. Because there's no way to cascade tags onto objects, people can't get proper accounting automatically for the services they run in Kubernetes and the resources they use. Can someone please take ownership of this and see it through to the finish line?
These docs worked perfectly for me: https://github.com/kubernetes-sigs/aws-ebs-csi-driver/blob/master/docs/tagging.md#storageclass-tagging
@wandribeiro That auto-tags EVERY volume in that storage class. What this PR and feature are about is allowing per-volume tagging: if you annotate a PVC, it will tag that SINGLE EBS volume.
To use the above, we would need to (and I have done this) create a separate StorageClass for each individual PVC we create, so each volume can be tagged appropriately. This is crude at best, and it's almost offensive that in 2023 we still can't tag per-volume via an annotation.
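For reference, the per-StorageClass workaround described above looks roughly like this (the `tagSpecification_N` parameter names come from the aws-ebs-csi-driver tagging docs linked earlier; the class name and tag values here are made-up examples):

```yaml
# One StorageClass per distinct tag set -- every volume provisioned
# through this class receives the same tags.
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: gp3-team-storage   # hypothetical name
provisioner: ebs.csi.aws.com
parameters:
  type: gp3
  tagSpecification_1: "team=storage"      # example tag
  tagSpecification_2: "cost-center=1234"  # example tag
```

The scaling problem is clear: tagging N volumes differently requires N StorageClasses, whereas a PVC annotation would need only one.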
Unfortunately, Kubernetes doesn't provide any volume-specific information like annotations to CSI Drivers except certain "blessed" parameters (volume size, topology, name, etc). Thus, we on the CSI Driver side largely have our hands tied unless/until that happens.
There is an open feature request here to pass annotations to CSI drivers: kubernetes-csi/external-provisioner#86
@ConnorJC3 Although I see this COULD clearly be done at the external-provisioner level, I could also see bypassing that entirely if the driver were given the RBAC permissions to look up the annotations itself, which, if I'm understanding this pull request correctly, is effectively what it does.
Is there some reason we can't temporarily use something like this pull request's code, noted in the code/comments as a stopgap, to be replaced eventually with native support for passing CSI annotations through?
I suggest this because that external-provisioner issue is 6 years old and has not been solved; the pessimist in me doesn't imagine it will be solved soon.
"Temporary" features become difficult-to-remove parts of the codebase as users start to rely on them. They create a semi-permanent maintenance burden.
I'm not going to say we've 100% ruled it out, but this recently has hit our radar and we'd like to attempt to actively engage with SIG Storage around a better solution first, before resorting to hacky solutions like reading the PVC directly.