[MILESTONENOTIFIER] Milestone Issue Needs Attention
@msau42 @nfirvine @wojtek-t @kubernetes/sig-storage-misc
Action required: During code slush, issues in the milestone should be in progress.
If this issue is not being actively worked on, please remove it from the milestone.
If it is being worked on, please add the status/in-progress label so it can be tracked with other in-flight issues.
Note: If this issue is not resolved or labeled as priority/critical-urgent by Wed, Nov 22 it will be moved out of the v1.9 milestone.
sig/storage: Issue will be escalated to these SIGs if needed.
priority/important-soon: Escalate to the issue owners and SIG owner; move out of milestone after several unsuccessful escalation attempts.
kind/bug: Fixes a bug discovered during the current release.
/status in-progress
You must be a member of the kubernetes/kubernetes-milestone-maintainers github team to add status labels.
[MILESTONENOTIFIER] Milestone Issue Current
I'm still experiencing the issue on GKE 1.9.3-gke.0. I'm struggling to work out exactly which release @davidz627's PR #52322 first landed in, but surely it's in by now.
It should be in 1.9.3. Could you provide steps to repro your issue?
Ah, I bet what I'm seeing is that these are PVCs created before the upgrade to 1.9.3. Your fix wouldn't have fixed those, just prevented it from happening to new ones, right? Appears so. I've deleted and recreated those PVCs and they went to the correct zones.
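For anyone else checking the same thing, one quick way to compare the zone a PV landed in against the zones your nodes run in (a minimal sketch, assuming kubectl access and the beta failure-domain zone labels used on 1.9-era clusters):

```sh
# Zone label column on each PV (label name assumed for clusters of this era)
kubectl get pv -L failure-domain.beta.kubernetes.io/zone

# Zones the nodes actually run in, for comparison
kubectl get nodes -L failure-domain.beta.kubernetes.io/zone

# Recreating an affected claim provisions a fresh volume in a valid zone.
# <pvc-name> and pvc.yaml are placeholders; note that with a Delete reclaim
# policy this also deletes the underlying disk and its data.
kubectl delete pvc <pvc-name>
kubectl apply -f pvc.yaml
```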
Awesome, thanks for confirming! Feel free to comment on this thread if you see the behavior regress.
Still having this issue with Kubernetes 1.9.8. Currently I have only one worker node, in us-east-1c (though I set 1a, 1c, and 1d in kops), while my default gp2 StorageClass created the PV in us-east-1a.
@jackzzj are you referring to AWS General Purpose SSD (gp2)? This issue and corresponding fix are specific to GCE PD.
If you are seeing this issue with gp2, please open a new issue for it. If this is GCE, could you please provide more information, such as the StorageClass used and steps to repro?
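For reference, a minimal set of commands that usually captures that information (the class name, PVC name, and namespace below are placeholders, not anything from this thread):

```sh
# The StorageClass the claim actually used
kubectl get storageclass <class-name> -o yaml

# Events on the affected claim, showing the provisioner and any errors
kubectl describe pvc <pvc-name> -n <namespace>

# Zone of the resulting PV vs. the zones of the nodes
kubectl get pv -L failure-domain.beta.kubernetes.io/zone
kubectl get nodes -L failure-domain.beta.kubernetes.io/zone
```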
@davidz627 thanks, it's related to AWS gp2. I'll follow up in another issue.