@kubernetes/sig-storage-pr-reviews
—
[APPROVALNOTIFIER] This PR is NOT APPROVED
This pull-request has been approved by: wongma7
We suggest the following additional approver: saad-ali
Assign the PR to them by writing /assign @saad-ali in a comment when ready.
Associated issue: 52138
The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these OWNERS Files: You can indicate your approval by writing /approve in a comment
You can cancel your approval by writing /approve cancel in a comment
@jeffvance commented on this pull request.
In test/e2e/storage/volume_provisioning.go:
> // attempts may fail, as the volume is still attached to a node because
// kubelet is slowly cleaning up the previous pod, however it should succeed
// in a couple of minutes. Wait 20 minutes to recover from random cloud
// hiccups.
- By(fmt.Sprintf("deleting the claim's PV %q", pv.Name))
- framework.ExpectNoError(framework.WaitForPersistentVolumeDeleted(client, pv.Name, 5*time.Second, 20*time.Minute))
+ if pv.Spec.PersistentVolumeReclaimPolicy == v1.PersistentVolumeReclaimDelete {
Sorry for my lack of knowledge here, but isn't the default reclaim policy "Delete", with the PV automatically deleted as part of dynamic provisioning when the PVC is deleted? If that's true, the e2e test should not have to delete the PV explicitly, but just wait a bit and check that the PV was deleted (for reclaimPolicy == "Delete"). And should the if here test for reclaimPolicy == "Retain"?
> @@ -408,6 +414,39 @@ var _ = SIGDescribe("Dynamic Provisioning", func() {
}
})
+ It("should provision storage with different reclaim policy", func() {
why not be explicit about the reclaimPolicy being "Retain" in this description?
> + }
+ class := newStorageClass(test, ns, "reclaimPolicy")
+ retain := v1.PersistentVolumeReclaimRetain
+ class.ReclaimPolicy = &retain
+ claim := newClaim(test, ns, "reclaimPolicy")
+ claim.Spec.StorageClassName = &class.Name
+ pv := testDynamicProvisioning(test, c, claim, class)
+
+ By(fmt.Sprintf("waiting for the provisioned PV %q to enter phase %s", pv.Name, v1.VolumeReleased))
+ framework.ExpectNoError(framework.WaitForPersistentVolumePhase(v1.VolumeReleased, c, pv.Name, 1*time.Second, 30*time.Second))
+
+ By(fmt.Sprintf("deleting the storage asset backing the PV %q", pv.Name))
+ framework.ExpectNoError(framework.DeletePDWithRetry(pv.Spec.GCEPersistentDisk.PDName))
+
+ By(fmt.Sprintf("deleting the PV %q", pv.Name))
+ framework.ExpectNoError(framework.DeletePersistentVolume(c, pv.Name), "Failed to delete PV ", pv.Name)
It is clearer to delete the PV here, but I'm curious why you delete it here when it could have been deleted unconditionally in testDynamicProvisioning().
@wongma7: The following test failed, say /retest to rerun them all:
| Test name | Commit | Details | Rerun command |
|---|---|---|---|
| pull-kubernetes-e2e-gce-bazel | fce891e | link | /test pull-kubernetes-e2e-gce-bazel |
Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.
@wongma7 commented on this pull request.
In test/e2e/storage/volume_provisioning.go:
> // attempts may fail, as the volume is still attached to a node because
// kubelet is slowly cleaning up the previous pod, however it should succeed
// in a couple of minutes. Wait 20 minutes to recover from random cloud
// hiccups.
- By(fmt.Sprintf("deleting the claim's PV %q", pv.Name))
- framework.ExpectNoError(framework.WaitForPersistentVolumeDeleted(client, pv.Name, 5*time.Second, 20*time.Minute))
+ if pv.Spec.PersistentVolumeReclaimPolicy == v1.PersistentVolumeReclaimDelete {
the if statement is correct but reads inverted: if the reclaim policy is Delete, we expect kube to delete the PV, so we wait for it; if it's Retain, there's nothing to do or check, because the kube dynamic provisioner's job is done.
> + }
+ class := newStorageClass(test, ns, "reclaimPolicy")
+ retain := v1.PersistentVolumeReclaimRetain
+ class.ReclaimPolicy = &retain
+ claim := newClaim(test, ns, "reclaimPolicy")
+ claim.Spec.StorageClassName = &class.Name
+ pv := testDynamicProvisioning(test, c, claim, class)
+
+ By(fmt.Sprintf("waiting for the provisioned PV %q to enter phase %s", pv.Name, v1.VolumeReleased))
+ framework.ExpectNoError(framework.WaitForPersistentVolumePhase(v1.VolumeReleased, c, pv.Name, 1*time.Second, 30*time.Second))
+
+ By(fmt.Sprintf("deleting the storage asset backing the PV %q", pv.Name))
+ framework.ExpectNoError(framework.DeletePDWithRetry(pv.Spec.GCEPersistentDisk.PDName))
+
+ By(fmt.Sprintf("deleting the PV %q", pv.Name))
+ framework.ExpectNoError(framework.DeletePersistentVolume(c, pv.Name), "Failed to delete PV ", pv.Name)
imo it makes sense because the manual deletion is cloudprovider-specific, and outside of the scope of the kube dynamic provisioning process, so i kept it out of testDynamicProvisioning
@jeffvance commented on this pull request.
In test/e2e/storage/volume_provisioning.go:
> // attempts may fail, as the volume is still attached to a node because
// kubelet is slowly cleaning up the previous pod, however it should succeed
// in a couple of minutes. Wait 20 minutes to recover from random cloud
// hiccups.
- By(fmt.Sprintf("deleting the claim's PV %q", pv.Name))
- framework.ExpectNoError(framework.WaitForPersistentVolumeDeleted(client, pv.Name, 5*time.Second, 20*time.Minute))
+ if pv.Spec.PersistentVolumeReclaimPolicy == v1.PersistentVolumeReclaimDelete {
ack. makes sense and I read the code below too quickly...
> + }
+ class := newStorageClass(test, ns, "reclaimPolicy")
+ retain := v1.PersistentVolumeReclaimRetain
+ class.ReclaimPolicy = &retain
+ claim := newClaim(test, ns, "reclaimPolicy")
+ claim.Spec.StorageClassName = &class.Name
+ pv := testDynamicProvisioning(test, c, claim, class)
+
+ By(fmt.Sprintf("waiting for the provisioned PV %q to enter phase %s", pv.Name, v1.VolumeReleased))
+ framework.ExpectNoError(framework.WaitForPersistentVolumePhase(v1.VolumeReleased, c, pv.Name, 1*time.Second, 30*time.Second))
+
+ By(fmt.Sprintf("deleting the storage asset backing the PV %q", pv.Name))
+ framework.ExpectNoError(framework.DeletePDWithRetry(pv.Spec.GCEPersistentDisk.PDName))
+
+ By(fmt.Sprintf("deleting the PV %q", pv.Name))
+ framework.ExpectNoError(framework.DeletePersistentVolume(c, pv.Name), "Failed to delete PV ", pv.Name)
I agree.
@wongma7 commented on this pull request.
In test/e2e/storage/volume_provisioning.go:
> + }
+ class := newStorageClass(test, ns, "reclaimPolicy")
+ retain := v1.PersistentVolumeReclaimRetain
+ class.ReclaimPolicy = &retain
+ claim := newClaim(test, ns, "reclaimPolicy")
+ claim.Spec.StorageClassName = &class.Name
+ pv := testDynamicProvisioning(test, c, claim, class)
+
+ By(fmt.Sprintf("waiting for the provisioned PV %q to enter phase %s", pv.Name, v1.VolumeReleased))
+ framework.ExpectNoError(framework.WaitForPersistentVolumePhase(v1.VolumeReleased, c, pv.Name, 1*time.Second, 30*time.Second))
+
+ By(fmt.Sprintf("deleting the storage asset backing the PV %q", pv.Name))
+ framework.ExpectNoError(framework.DeletePDWithRetry(pv.Spec.GCEPersistentDisk.PDName))
+
+ By(fmt.Sprintf("deleting the PV %q", pv.Name))
+ framework.ExpectNoError(framework.DeletePersistentVolume(c, pv.Name), "Failed to delete PV ", pv.Name)
okay, wouldn't hurt: will add WaitForPersistentVolumeDeleted after this line
@wongma7 pushed 1 commit.
—
@wongma7: The following tests failed, say /retest to rerun them all:
| Test name | Commit | Details | Rerun command |
|---|---|---|---|
| pull-kubernetes-e2e-gce-bazel | fce891e | link | /test pull-kubernetes-e2e-gce-bazel |
| pull-kubernetes-e2e-gce-etcd3 | fce891e | link | /test pull-kubernetes-e2e-gce-etcd3 |
| pull-kubernetes-unit | 3a4f414 | link | /test pull-kubernetes-unit |
Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.
—
@wongma7: The following tests failed, say /retest to rerun them all:
| Test name | Commit | Details | Rerun command |
|---|---|---|---|
| pull-kubernetes-e2e-gce-etcd3 | fce891e | link | /test pull-kubernetes-e2e-gce-etcd3 |
| pull-kubernetes-unit | 3a4f414 | link | /test pull-kubernetes-unit |
| pull-kubernetes-e2e-gce-bazel | 3a4f414 | link | /test pull-kubernetes-e2e-gce-bazel |
@wongma7 pushed 1 commit.
—
@wongma7: The following test failed, say /retest to rerun them all:
| Test name | Commit | Details | Rerun command |
|---|---|---|---|
| pull-kubernetes-e2e-gce-etcd3 | 8cadf11 | link | /test pull-kubernetes-e2e-gce-etcd3 |
Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.
—
/retest
@wongma7: The following test failed, say /retest to rerun them all:
| Test name | Commit | Details | Rerun command |
|---|---|---|---|
| pull-kubernetes-e2e-gce-etcd3 | 69b51a6 | link | /test pull-kubernetes-e2e-gce-etcd3 |
Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.
—
@wongma7: The following tests failed, say /retest to rerun them all:
| Test name | Commit | Details | Rerun command |
|---|---|---|---|
| pull-kubernetes-e2e-gce-etcd3 | 69b51a6 | link | /test pull-kubernetes-e2e-gce-etcd3 |
| pull-kubernetes-federation-e2e-gce | 69b51a6 | link | /test pull-kubernetes-federation-e2e-gce |
| pull-kubernetes-e2e-gce-bazel | 69b51a6 | link | /test pull-kubernetes-e2e-gce-bazel |
/retest
@wongma7: The following tests failed, say /retest to rerun them all:
| Test name | Commit | Details | Rerun command |
|---|---|---|---|
| pull-kubernetes-e2e-gce-etcd3 | 69b51a6 | link | /test pull-kubernetes-e2e-gce-etcd3 |
| pull-kubernetes-federation-e2e-gce | 69b51a6 | link | /test pull-kubernetes-federation-e2e-gce |
| pull-kubernetes-e2e-gce-bazel | 69b51a6 | link | /test pull-kubernetes-e2e-gce-bazel |
Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.
—
@wongma7 FYI, the suite that doesn't pass b/c of the bug is non-blocking and the queue is not happy with the milestone
@wongma7: The following test failed, say /retest to rerun them all:
| Test name | Commit | Details | Rerun command |
|---|---|---|---|
Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.
—
@jeffvance PTAL, I've addressed your comments.
@saad-ali please approve when you get a chance, ty :)
/lgtm
[APPROVALNOTIFIER] This PR is NOT APPROVED
This pull-request has been approved by: jeffvance, wongma7
We suggest the following additional approver: saad-ali
Assign the PR to them by writing /assign @saad-ali in a comment when ready.
Associated issue: 52138
The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these OWNERS Files: You can indicate your approval by writing /approve in a comment
You can cancel your approval by writing /approve cancel in a comment
/approve
[APPROVALNOTIFIER] This PR is APPROVED
This pull-request has been approved by: jeffvance, jsafrane, wongma7
Associated issue: 52138
The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these OWNERS Files: You can indicate your approval by writing /approve in a comment
You can cancel your approval by writing /approve cancel in a comment
—
/lgtm
/approve
/test pull-kubernetes-e2e-gce-bazel
[APPROVALNOTIFIER] This PR is APPROVED
This pull-request has been approved by: jeffvance, jsafrane, saad-ali, wongma7
Associated issue: 52138
The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these OWNERS Files: You can indicate your approval by writing /approve in a comment
You can cancel your approval by writing /approve cancel in a comment
—
Current failure is a sig-instrumentation test. Re-running:
/test pull-kubernetes-e2e-gce-bazel
@wongma7: The following test failed, say /retest to rerun them all:
| Test name | Commit | Details | Rerun command |
|---|---|---|---|
| pull-kubernetes-e2e-gce-bazel | 69b51a6 | link | /test pull-kubernetes-e2e-gce-bazel |
Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.
—
/retest
This bot automatically retries jobs that failed/flaked on approved PRs (send feedback to @fejta).
Review the full test history for this PR.
Automatic merge from submit-queue (batch tested with PRs 52350, 52659). If you want to cherry-pick this change to another branch, please follow the instructions here.
Merged #52350.