@liggitt commented on this pull request.
In pkg/volume/downwardapi/downwardapi.go:
> @@ -262,6 +261,24 @@ func CollectData(items []v1.DownwardAPIVolumeFile, pod *v1.Pod, host volume.Volu
>  	return data, utilerrors.NewAggregate(errlist)
>  }
> +// TODO: unify with Kubelet.podFieldSelectorRuntimeValue
Volume setup and container start are done in separate work queues. It is possible the information you are looking for is sometimes present when volume setup runs, and sometimes not. We need feedback from @kubernetes/sig-storage-pr-reviews and @kubernetes/sig-node-pr-reviews on whether we can depend on the pod object's status at this point. See the discussion at #42717 (comment) for the types of issues encountered.
—
You are receiving this because you are on a team that was mentioned.
Reply to this email directly, view it on GitHub, or mute the thread.
@hzxuzhonghu commented on this pull request.
In pkg/volume/downwardapi/downwardapi.go:
> @@ -262,6 +261,24 @@ func CollectData(items []v1.DownwardAPIVolumeFile, pod *v1.Pod, host volume.Volu
>  	return data, utilerrors.NewAggregate(errlist)
>  }
> +// TODO: unify with Kubelet.podFieldSelectorRuntimeValue
I also have a concern about an import cycle if we use Kubelet.podFieldSelectorRuntimeValue.
@liggitt commented on this pull request.
In pkg/volume/downwardapi/downwardapi.go:
> @@ -262,6 +261,24 @@ func CollectData(items []v1.DownwardAPIVolumeFile, pod *v1.Pod, host volume.Volu
>  	return data, utilerrors.NewAggregate(errlist)
>  }
> +// TODO: unify with Kubelet.podFieldSelectorRuntimeValue
I wouldn't call it there, I would move the function to a common location and pass in the podIP/hostIP
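liggitt's suggestion — a shared helper that receives the runtime values as parameters instead of looking them up itself — could be sketched roughly like this. This is a simplified illustration with stand-in types and a hypothetical function name, not the actual Kubernetes code; the real helper would live in a common package and operate on *v1.Pod, avoiding any dependency on the Kubelet (and hence the import cycle mentioned above).

```go
package main

import "fmt"

// Pod is a stand-in for v1.Pod, reduced to the fields this sketch needs.
type Pod struct {
	NodeName           string
	ServiceAccountName string
}

// extractRuntimeFieldValue resolves a downward-API field path.
// Runtime-only values (podIP, hostIP) are passed in by the caller,
// so the helper itself needs nothing from the Kubelet or volume host.
func extractRuntimeFieldValue(pod *Pod, fieldPath, podIP, hostIP string) (string, error) {
	switch fieldPath {
	case "spec.nodeName":
		return pod.NodeName, nil
	case "spec.serviceAccountName":
		return pod.ServiceAccountName, nil
	case "status.podIP":
		return podIP, nil
	case "status.hostIP":
		return hostIP, nil
	}
	return "", fmt.Errorf("unsupported field path: %q", fieldPath)
}

func main() {
	pod := &Pod{NodeName: "node-1", ServiceAccountName: "default"}
	v, err := extractRuntimeFieldValue(pod, "status.hostIP", "10.0.0.5", "192.168.1.10")
	fmt.Println(v, err)
}
```

Both the env-var path and the volume path could then call this one function, each supplying the IPs from its own context.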
@msau42 commented on this pull request.
In pkg/volume/downwardapi/downwardapi.go:
> @@ -262,6 +261,24 @@ func CollectData(items []v1.DownwardAPIVolumeFile, pod *v1.Pod, host volume.Volu
>  	return data, utilerrors.NewAggregate(errlist)
>  }
> +// TODO: unify with Kubelet.podFieldSelectorRuntimeValue
I spoke with @Random-Liu, he says that Pod IP is allocated when we create the PodSandbox. This happens well after volume setup.
I suspect what is happening here is that the initial volume setup gets an empty IP, but sometime afterwards we periodically resync/remount the downward API volume, and it picks up the updated IPs from the pod status. If my theory is correct, a simple test would be to mount your pod IP file as a subPath volume, which does not get remounted; then you would see that the file contents are empty.
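The subPath experiment described above could look roughly like this (hypothetical names; this assumes a kubelet patched as in this PR, since stock validation rejects status.podIP in volume fieldRefs). Because subPath bind mounts are taken once at container start and are not refreshed by the periodic remount, the file would stay empty if the theory is correct:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: podip-subpath-test   # hypothetical name
spec:
  containers:
  - name: test
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo/pod_ip
      subPath: pod_ip          # subPath mounts are not refreshed on resync
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: pod_ip
        fieldRef:
          fieldPath: status.podIP
```

After the pod is running, `cat /etc/podinfo/pod_ip` in the container would show whatever the file held at initial setup — empty, if the IP was not yet allocated.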
@hzxuzhonghu commented on this pull request.
In pkg/volume/downwardapi/downwardapi.go:
> @@ -262,6 +261,24 @@ func CollectData(items []v1.DownwardAPIVolumeFile, pod *v1.Pod, host volume.Volu
>  	return data, utilerrors.NewAggregate(errlist)
>  }
> +// TODO: unify with Kubelet.podFieldSelectorRuntimeValue
Thanks @msau42, I have tried it, and your theory is correct:
> I suspect what is happening here is the initial volume setup gets an empty IP, but sometime afterwards we periodically resync/remount the downward API volume, and it gets the updated IPs from the pod status.
@hzxuzhonghu commented on this pull request.
In pkg/volume/downwardapi/downwardapi.go:
> @@ -262,6 +261,24 @@ func CollectData(items []v1.DownwardAPIVolumeFile, pod *v1.Pod, host volume.Volu
>  	return data, utilerrors.NewAggregate(errlist)
>  }
> +// TODO: unify with Kubelet.podFieldSelectorRuntimeValue
@liggitt @msau42 Then what do you suggest?
1. Periodically resync/remount the downward API volume; users would have to check the file before using it.
2. Do not support "status.podIP".
@liggitt commented on this pull request.
In pkg/volume/downwardapi/downwardapi.go:
> @@ -262,6 +261,24 @@ func CollectData(items []v1.DownwardAPIVolumeFile, pod *v1.Pod, host volume.Volu
>  	return data, utilerrors.NewAggregate(errlist)
>  }
> +// TODO: unify with Kubelet.podFieldSelectorRuntimeValue
If we can't provide it on first setup, we don't support it.
@hzxuzhonghu commented on this pull request.
In pkg/volume/downwardapi/downwardapi.go:
> @@ -262,6 +261,24 @@ func CollectData(items []v1.DownwardAPIVolumeFile, pod *v1.Pod, host volume.Volu
>  	return data, utilerrors.NewAggregate(errlist)
>  }
> +// TODO: unify with Kubelet.podFieldSelectorRuntimeValue
ok
[APPROVALNOTIFIER] This PR is NOT APPROVED
This pull-request has been approved by: hzxuzhonghu
To fully approve this pull request, please assign additional approvers.
We suggest the following additional approver: thockin
Assign the PR to them by writing /assign @thockin in a comment when ready.
The full list of commands accepted by this bot can be found here.
The pull request process is described here
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment
@hzxuzhonghu: The following tests failed, say /retest to rerun them all:
| Test name | Commit | Details | Rerun command |
|---|---|---|---|
| pull-kubernetes-local-e2e-containerized | e004da7 | link | /test pull-kubernetes-local-e2e-containerized |
| pull-kubernetes-bazel-test | 72d7a38 | link | /test pull-kubernetes-bazel-test |
Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.
@hzxuzhonghu: The following tests failed, say /retest to rerun them all:
| Test name | Commit | Details | Rerun command |
|---|---|---|---|
| pull-kubernetes-local-e2e-containerized | e004da7 | link | /test pull-kubernetes-local-e2e-containerized |
| pull-kubernetes-bazel-test | 72d7a38 | link | /test pull-kubernetes-bazel-test |
| pull-kubernetes-verify | 72d7a38 | link | /test pull-kubernetes-verify |
@hzxuzhonghu: The following tests failed, say /retest to rerun them all:
| Test name | Commit | Details | Rerun command |
|---|---|---|---|
| pull-kubernetes-local-e2e-containerized | e004da7 | link | /test pull-kubernetes-local-e2e-containerized |
| pull-kubernetes-bazel-test | 72d7a38 | link | /test pull-kubernetes-bazel-test |
| pull-kubernetes-verify | 72d7a38 | link | /test pull-kubernetes-verify |
| pull-kubernetes-e2e-kops-aws | 72d7a38 | link | /test pull-kubernetes-e2e-kops-aws |
@hzxuzhonghu: The following tests failed, say /retest to rerun them all:
| Test name | Commit | Details | Rerun command |
|---|---|---|---|
| pull-kubernetes-local-e2e-containerized | e004da7 | link | /test pull-kubernetes-local-e2e-containerized |
| pull-kubernetes-bazel-test | 72d7a38 | link | /test pull-kubernetes-bazel-test |
| pull-kubernetes-verify | 72d7a38 | link | /test pull-kubernetes-verify |
| pull-kubernetes-e2e-kops-aws | 72d7a38 | link | /test pull-kubernetes-e2e-kops-aws |
| pull-kubernetes-e2e-gce | 72d7a38 | link | /test pull-kubernetes-e2e-gce |
@hzxuzhonghu pushed 1 commit.
@hzxuzhonghu: The following tests failed, say /retest to rerun them all:
| Test name | Commit | Details | Rerun command |
|---|---|---|---|
| pull-kubernetes-bazel-test | 8e3a0df | link | /test pull-kubernetes-bazel-test |
[APPROVALNOTIFIER] This PR is NOT APPROVED
This pull-request has been approved by: hzxuzhonghu
To fully approve this pull request, please assign additional approvers.
We suggest the following additional approver: smarterclayton
Assign the PR to them by writing /assign @smarterclayton in a comment when ready.
The full list of commands accepted by this bot can be found here.
The pull request process is described here
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment
@hzxuzhonghu: The following tests failed, say /retest to rerun them all:
| Test name | Commit | Details | Rerun command |
|---|---|---|---|
| pull-kubernetes-node-e2e | 33c4acd | link | /test pull-kubernetes-node-e2e |
/test pull-kubernetes-node-e2e
@msau42 commented on this pull request.
In pkg/volume/downwardapi/downwardapi.go:
> @@ -262,6 +261,24 @@ func CollectData(items []v1.DownwardAPIVolumeFile, pod *v1.Pod, host volume.Volu
>  	return data, utilerrors.NewAggregate(errlist)
>  }
> +// TODO: unify with Kubelet.podFieldSelectorRuntimeValue
Did you check that hostIP doesn't have the same issue?
@msau42 commented on this pull request.
In pkg/volume/downwardapi/downwardapi.go:
> @@ -262,6 +261,24 @@ func CollectData(items []v1.DownwardAPIVolumeFile, pod *v1.Pod, host volume.Volu
>  	return data, utilerrors.NewAggregate(errlist)
>  }
> +// TODO: unify with Kubelet.podFieldSelectorRuntimeValue
Is there somewhere in the code and/or docs we can document the reason why we can't support podIP in downward api volume?
@hzxuzhonghu commented on this pull request.
In pkg/volume/downwardapi/downwardapi.go:
> @@ -262,6 +261,24 @@ func CollectData(items []v1.DownwardAPIVolumeFile, pod *v1.Pod, host volume.Volu
>  	return data, utilerrors.NewAggregate(errlist)
>  }
> +// TODO: unify with Kubelet.podFieldSelectorRuntimeValue

> Did you check that hostIP doesn't have the same issue?

Yes. The pod status is generated before volume setup; see pkg/kubelet/kubelet.go#L1433.

> Is there somewhere in the code and/or docs we can document the reason why we can't support podIP in downward api volume?

No, I will add a comment now.
@hzxuzhonghu: The following tests failed, say /retest to rerun them all:
| Test name | Commit | Details | Rerun command |
|---|---|---|---|
| pull-kubernetes-kubemark-e2e-gce-big | 97feaf2 | link | /test pull-kubernetes-kubemark-e2e-gce-big |
/retest
@hzxuzhonghu: The following tests failed, say /retest to rerun them all:
| Test name | Commit | Details | Rerun command |
|---|---|---|---|
| pull-kubernetes-local-e2e-containerized | e004da7 | link | /test pull-kubernetes-local-e2e-containerized |
| pull-kubernetes-kubemark-e2e-gce-big | 97feaf2 | link | /test pull-kubernetes-kubemark-e2e-gce-big |
@hzxuzhonghu: The following tests failed, say /retest to rerun them all:
| Test name | Commit | Details | Rerun command |
|---|---|---|---|
| pull-kubernetes-local-e2e-containerized | e004da7 | link | /test pull-kubernetes-local-e2e-containerized |
| pull-kubernetes-kubemark-e2e-gce-big | 97feaf2 | link | /test pull-kubernetes-kubemark-e2e-gce-big |
| pull-kubernetes-e2e-kops-aws | 97feaf2 | link | /test pull-kubernetes-e2e-kops-aws |
@hzxuzhonghu: The following tests failed, say /retest to rerun them all:
| Test name | Commit | Details | Rerun command |
|---|---|---|---|
| pull-kubernetes-kubemark-e2e-gce-big | 392b2a9 | link | /test pull-kubernetes-kubemark-e2e-gce-big |
@hzxuzhonghu: The following tests failed, say /retest to rerun them all:
| Test name | Commit | Details | Rerun command |
|---|---|---|---|
| pull-kubernetes-e2e-gce | 392b2a9 | link | /test pull-kubernetes-e2e-gce |
/retest
/approve
the volume plugin changes
[APPROVALNOTIFIER] This PR is NOT APPROVED
This pull-request has been approved by: hzxuzhonghu, jsafrane
To fully approve this pull request, please assign additional approvers.
We suggest the following additional approver: smarterclayton
Assign the PR to them by writing /assign @smarterclayton in a comment when ready.
The full list of commands accepted by this bot can be found here.
The pull request process is described here
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment
/assign @smarterclayton for the api part (fix comments)
/assign @lavalamp
@k8s-mirror-api-machinery-api-reviews for api part
API machinery doesn't own this API. This is for node and the api reviewers.
ping @kubernetes/api-reviewers @kubernetes/sig-node-bugs Since this is mostly for volume and that part has already been approved,
could you review the remaining small parts and approve them?
/assign @thockin
@feiskyer commented on this pull request.
In pkg/volume/downwardapi/downwardapi.go:
> @@ -262,6 +261,29 @@ func CollectData(items []v1.DownwardAPIVolumeFile, pod *v1.Pod, host volume.Volu
>  	return data, utilerrors.NewAggregate(errlist)
>  }
> +// TODO: support "status.podIP" once we can get pod ip before first setup volume.
Are there any use cases for status.podIP? I think containers could get their own IP easily with any networking tool or library if required.
/lgtm
/approve
[APPROVALNOTIFIER] This PR is NOT APPROVED
This pull-request has been approved by: hzxuzhonghu, jsafrane, thockin
To fully approve this pull request, please assign additional approvers.
We suggest the following additional approver: smarterclayton
If they are not already assigned, you can assign the PR to them by writing /assign @smarterclayton in a comment when ready.
The full list of commands accepted by this bot can be found here.
The pull request process is described here
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment
/test all
Tests are more than 96 hours old. Re-running tests.
@hzxuzhonghu: The following tests failed, say /retest to rerun them all:
| Test name | Commit | Details | Rerun command |
|---|---|---|---|
| pull-kubernetes-e2e-kops-aws | 392b2a9 | link | /test pull-kubernetes-e2e-kops-aws |
/assign @smarterclayton
/test pull-kubernetes-e2e-kops-aws
@hzxuzhonghu: PR needs rebase.
@hzxuzhonghu commented on this pull request.
In pkg/volume/downwardapi/downwardapi.go:
> @@ -262,6 +261,29 @@ func CollectData(items []v1.DownwardAPIVolumeFile, pod *v1.Pod, host volume.Volu
>  	return data, utilerrors.NewAggregate(errlist)
>  }
> +// TODO: support "status.podIP" once we can get pod ip before first setup volume.
yes, I can remove this TODO
rebased
New changes are detected. LGTM label has been removed.
ping @cheftako @mml @smarterclayton Since all other parts are approved and lgtm'ed, can one of you approve the test part?
@hzxuzhonghu: The following tests failed, say /retest to rerun them all:
| Test name | Commit | Details | Rerun command |
|---|---|---|---|
| pull-kubernetes-bazel-build | d63bd17 | link | /test pull-kubernetes-bazel-build |
| pull-kubernetes-e2e-gce | d63bd17 | link | /test pull-kubernetes-e2e-gce |
@hzxuzhonghu: The following tests failed, say /retest to rerun them all:
| Test name | Commit | Details | Rerun command |
|---|---|---|---|
| pull-kubernetes-local-e2e-containerized | e004da7 | link | /test pull-kubernetes-local-e2e-containerized |
| pull-kubernetes-bazel-build | d63bd17 | link | /test pull-kubernetes-bazel-build |
@hzxuzhonghu: The following tests failed, say /retest to rerun them all:
| Test name | Commit | Details | Rerun command |
|---|---|---|---|
| pull-kubernetes-local-e2e-containerized | e004da7 | link | /test pull-kubernetes-local-e2e-containerized |
| pull-kubernetes-bazel-build | d63bd17 | link | /test pull-kubernetes-bazel-build |
| pull-kubernetes-e2e-gce-device-plugin-gpu | d63bd17 | link | /test pull-kubernetes-e2e-gce-device-plugin-gpu |
@hzxuzhonghu: The following tests failed, say /retest to rerun them all:
| Test name | Commit | Details | Rerun command |
|---|---|---|---|
| pull-kubernetes-local-e2e-containerized | e004da7 | link | /test pull-kubernetes-local-e2e-containerized |
| pull-kubernetes-bazel-build | d63bd17 | link | /test pull-kubernetes-bazel-build |
| pull-kubernetes-e2e-gce | d63bd17 | link | /test pull-kubernetes-e2e-gce |
| pull-kubernetes-e2e-gce-device-plugin-gpu | d63bd17 | link | /test pull-kubernetes-e2e-gce-device-plugin-gpu |
| pull-kubernetes-node-e2e | d63bd17 | link | /test pull-kubernetes-node-e2e |
@smarterclayton commented on this pull request.
In pkg/volume/downwardapi/downwardapi.go:
> @@ -262,6 +261,28 @@ func CollectData(items []v1.DownwardAPIVolumeFile, pod *v1.Pod, host volume.Volu
>  	return data, utilerrors.NewAggregate(errlist)
>  }
> +
> +// podFieldSelectorRuntimeValue extracts the field from the pod
> +// and returns it as a string.
> +func podFieldSelectorRuntimeValue(pod *v1.Pod, fieldPath string, host volume.VolumeHost) (string, error) {
> +	switch fieldPath {
> +	case "spec.nodeName":
> +		return pod.Spec.NodeName, nil
> +	case "spec.serviceAccountName":
> +		return pod.Spec.ServiceAccountName, nil
> +	case "status.hostIP":
This is a security leak for many users. Do we expose this for env vars?
@smarterclayton requested changes on this pull request.
In pkg/volume/downwardapi/downwardapi.go:
> @@ -237,8 +237,7 @@ func CollectData(items []v1.DownwardAPIVolumeFile, pod *v1.Pod, host volume.Volu
>  		fileProjection.Mode = *defaultMode
>  	}
>  	if fileInfo.FieldRef != nil {
> -		// TODO: unify with Kubelet.podFieldSelectorRuntimeValue
> -		if values, err := fieldpath.ExtractFieldPathAsString(pod, fileInfo.FieldRef.FieldPath); err != nil {
> +		if values, err := podFieldSelectorRuntimeValue(pod, fileInfo.FieldRef.FieldPath, host); err != nil {
You removed this comment, but you didn't unify the behavior. This method doesn't report status.podIP like environment variables.
In pkg/apis/core/validation/validation.go:
> @@ -923,7 +923,10 @@ var validVolumeDownwardAPIFieldPathExpressions = sets.NewString(
>  	"metadata.namespace",
>  	"metadata.labels",
>  	"metadata.annotations",
> -	"metadata.uid")
> +	"metadata.uid",
> +	"spec.nodeName",
> +	"spec.serviceAccountName",
> +	"status.hostIP")
I expected to see status.podIP here.
In pkg/volume/downwardapi/downwardapi.go:
> @@ -237,8 +237,7 @@ func CollectData(items []v1.DownwardAPIVolumeFile, pod *v1.Pod, host volume.Volu
>  		fileProjection.Mode = *defaultMode
>  	}
>  	if fileInfo.FieldRef != nil {
> -		// TODO: unify with Kubelet.podFieldSelectorRuntimeValue
> -		if values, err := fieldpath.ExtractFieldPathAsString(pod, fileInfo.FieldRef.FieldPath); err != nil {
> +		if values, err := podFieldSelectorRuntimeValue(pod, fileInfo.FieldRef.FieldPath, host); err != nil {
I would expect this code to actually become consistent with env vars.
@smarterclayton commented on this pull request.
In pkg/volume/downwardapi/downwardapi.go:
> @@ -262,6 +261,29 @@ func CollectData(items []v1.DownwardAPIVolumeFile, pod *v1.Pod, host volume.Volu
>  	return data, utilerrors.NewAggregate(errlist)
>  }
> +// TODO: support "status.podIP" once we can get pod ip before first setup volume.
We support podIP on env vars, and we do it explicitly because networking tools and libraries are terrible at telling callers what the "correct IP" is. Having it in the downward API makes the behavior predictable and consistent with how the pod will be exposed by services.
I assume, looking at the PR, that you didn't fix the behavior. The TODO should remain until it's fixed.
@hzxuzhonghu commented on this pull request.
In pkg/volume/downwardapi/downwardapi.go:
> @@ -237,8 +237,7 @@ func CollectData(items []v1.DownwardAPIVolumeFile, pod *v1.Pod, host volume.Volu
>  		fileProjection.Mode = *defaultMode
>  	}
>  	if fileInfo.FieldRef != nil {
> -		// TODO: unify with Kubelet.podFieldSelectorRuntimeValue
> -		if values, err := fieldpath.ExtractFieldPathAsString(pod, fileInfo.FieldRef.FieldPath); err != nil {
> +		if values, err := podFieldSelectorRuntimeValue(pod, fileInfo.FieldRef.FieldPath, host); err != nil {
Ok, will revert this and try to fix it in another PR.
> @@ -262,6 +261,29 @@ func CollectData(items []v1.DownwardAPIVolumeFile, pod *v1.Pod, host volume.Volu
>  	return data, utilerrors.NewAggregate(errlist)
>  }
> +// TODO: support "status.podIP" once we can get pod ip before first setup volume.
OK
@smarterclayton addressed, ptal.
> @@ -262,6 +261,28 @@ func CollectData(items []v1.DownwardAPIVolumeFile, pod *v1.Pod, host volume.Volu
>  	return data, utilerrors.NewAggregate(errlist)
>  }
> +
> +// podFieldSelectorRuntimeValue extracts the field from the pod
> +// and returns it as a string.
> +func podFieldSelectorRuntimeValue(pod *v1.Pod, fieldPath string, host volume.VolumeHost) (string, error) {
> +	switch fieldPath {
> +	case "spec.nodeName":
> +		return pod.Spec.NodeName, nil
> +	case "spec.serviceAccountName":
> +		return pod.Spec.ServiceAccountName, nil
> +	case "status.hostIP":
Yes, exposed in env vars
@hzxuzhonghu commented on this pull request.
In pkg/apis/core/validation/validation.go:
> @@ -923,7 +923,10 @@ var validVolumeDownwardAPIFieldPathExpressions = sets.NewString(
>  	"metadata.namespace",
>  	"metadata.labels",
>  	"metadata.annotations",
> -	"metadata.uid")
> +	"metadata.uid",
> +	"spec.nodeName",
> +	"spec.serviceAccountName",
> +	"status.hostIP")
// TODO: support "status.podIP" once we can get pod ip before first setup volume.
kindly ping @smarterclayton Any other comments?
@hzxuzhonghu: PR needs rebase.
Closed #64186.