@liggitt commented on this pull request.
In pkg/kubelet/kubelet_pods.go:
> @@ -472,11 +472,17 @@ func (kl *Kubelet) GenerateRunContainerOptions(pod *v1.Pod, container *v1.Contai
glog.Errorf("Error on creating %q: %v", p, err)
} else {
opts.PodContainerDir = p
+ if err := kl.chownDirForRemappedIDs(p); err != nil {
+ return nil, nil, fmt.Errorf("error while chowning %s", p)
+ }
+ if err := kl.chownDirForRemappedIDs(filepath.Join(p, "..")); err != nil {
cc @kubernetes/sig-storage-pr-reviews @kubernetes/sig-auth-pr-reviews for eyes on path traversal possibilities
@liggitt commented on this pull request.
In pkg/kubelet/kubelet_pods.go:
> @@ -472,11 +472,17 @@ func (kl *Kubelet) GenerateRunContainerOptions(pod *v1.Pod, container *v1.Contai
glog.Errorf("Error on creating %q: %v", p, err)
} else {
opts.PodContainerDir = p
+ if err := kl.chownDirForRemappedIDs(p); err != nil {
I expected some information about the container to be input to this chown method... can you describe what is being done here?
> func hasHostNamespace(pod *v1.Pod) bool {
+ if pod.Spec.HostUser != nil {
+ return pod.Spec.HostIPC || pod.Spec.HostNetwork || pod.Spec.HostPID || *pod.Spec.HostUser
+ }
if pod.Spec.SecurityContext == nil {
why was pod.Spec.SecurityContext short-circuiting this previously and why is that no longer valid?
@vikaschoudhary16 commented on this pull request.
In pkg/kubelet/kubelet_getters.go:
> @@ -194,6 +194,11 @@ func (kl *Kubelet) getRuntime() kubecontainer.Runtime {
return kl.containerRuntime
}
+// getRemappedIDs returns the uid and gid from the host usernamespace that are mapped to container user-namespace
+func (kl *Kubelet) getRemappedIDs() (int, int) {
the function name doesn't indicate whether the returned ids are in the host or container namespace
The function comment mentions this. Will try to make it clearer.
L#197 // getRemappedIDs returns the uid and gid from the host usernamespace
@liggitt commented on this pull request.
In pkg/kubelet/kubelet_pods.go:
> func hasHostNamespace(pod *v1.Pod) bool {
+ if pod.Spec.HostUser != nil {
+ return pod.Spec.HostIPC || pod.Spec.HostNetwork || pod.Spec.HostPID || *pod.Spec.HostUser
+ }
if pod.Spec.SecurityContext == nil {
return false
}
could add HostUser to the existing list with ... || (pod.Spec.HostUser != nil && *pod.Spec.HostUser)
In pkg/kubelet/kuberuntime/helpers.go:
> @@ -190,6 +190,21 @@ func toKubeRuntimeStatus(status *runtimeapi.RuntimeStatus) *kubecontainer.Runtim
return &kubecontainer.RuntimeStatus{Conditions: conditions}
}
+// toKubeRuntimeConfig converts the runtimeapi.ActiveRuntimeConfig to kubecontainer.RuntimeConfigInfo
+func toKubeRuntimeConfig(config *runtimeapi.ActiveRuntimeConfig) *kubecontainer.RuntimeConfigInfo {
+ userNSConfig := kubecontainer.UserNamespaceConfigInfo{
+ UidMapping: kubecontainer.UserNSMapping{
+ ContainerID: config.UserNamespaceConfig.UidMappings[0].ContainerId,
can UidMappings contain 0 items? can it contain more than 1 item? This seems like it is losing info
@liggitt commented on this pull request.
In pkg/kubelet/kubelet_getters.go:
> @@ -194,6 +194,11 @@ func (kl *Kubelet) getRuntime() kubecontainer.Runtime {
return kl.containerRuntime
}
+// getRemappedIDs returns the uid and gid from the host usernamespace that are mapped to container user-namespace
+func (kl *Kubelet) getRemappedIDs() (int, int) {
would something like this be clearer?
func (kl *Kubelet) getHostUID(containerUID int) (int, error)
func (kl *Kubelet) getHostGID(containerGID int) (int, error)
currently, the inputs are implicit (your comment in #64005 (comment) explains this is returning the host uids that map to uid/gid 0, but that isn't apparent to callers of this function)
that also makes it clear when there's no host id mapped to the specified container id (rather than relying on zero values or -1 to indicate errors)
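The suggested explicit signature could be sketched like this, with a hypothetical IDMapping type standing in for the CRI mapping entries; the error return makes "no host id mapped to this container id" visible to callers instead of relying on zero values or -1:

```go
package main

import "fmt"

// IDMapping is a simplified stand-in for a single user-namespace
// mapping entry (container start, host start, length).
type IDMapping struct {
	ContainerID uint32
	HostID      uint32
	Size        uint32
}

// getHostID resolves a container-namespace ID to its host-namespace ID,
// returning an error when no mapping range covers it — the explicit
// shape the review suggests for getHostUID/getHostGID.
func getHostID(mappings []IDMapping, containerID uint32) (uint32, error) {
	for _, m := range mappings {
		if containerID >= m.ContainerID && containerID < m.ContainerID+m.Size {
			// Offset within the range carries over to the host side.
			return m.HostID + (containerID - m.ContainerID), nil
		}
	}
	return 0, fmt.Errorf("no host id mapped to container id %d", containerID)
}

func main() {
	maps := []IDMapping{{ContainerID: 0, HostID: 200000, Size: 10000}}
	host, err := getHostID(maps, 0)
	fmt.Println(host, err) // 200000 <nil>
}
```

With this shape, the "host uid for container uid 0" case becomes an explicit `getHostID(maps, 0)` call rather than an implicit contract of the function.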
@vikaschoudhary16 commented on this pull request.
In pkg/kubelet/container/runtime.go:
> @@ -479,6 +483,42 @@ type RuntimeStatus struct {
Conditions []RuntimeCondition
}
+// RuntimeConfigInfo contains runtime's configuration details, eg: user-namespaces mapping between host and container
+type RuntimeConfigInfo struct {
+ UserNamespaceConfig UserNamespaceConfigInfo
this will be runtime defined.
@vikaschoudhary16 commented on this pull request.
In pkg/kubelet/kubelet_pods.go:
> @@ -472,11 +472,17 @@ func (kl *Kubelet) GenerateRunContainerOptions(pod *v1.Pod, container *v1.Contai
glog.Errorf("Error on creating %q: %v", p, err)
} else {
opts.PodContainerDir = p
+ if err := kl.chownDirForRemappedIDs(p); err != nil {
Container information is not being passed because the non-zero host ID that is mapped to ID 0 in the container is the same for all containers on a node. We are using only this non-zero ID to chown the paths, which are otherwise owned by ID 0 on the host.
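The shape described above can be sketched as follows; this is a minimal illustration, not the PR's implementation, and the uid/gid parameters stand in for what the PR reads from the runtime via getRemappedIDs:

```go
package main

import (
	"fmt"
	"os"
)

// chownDirForRemappedIDs re-owns path to the single host uid/gid that
// maps to id 0 in containers, when node-wide remapping is enabled.
// No container-specific input is needed: the remapped id pair is the
// same for every container on the node.
func chownDirForRemappedIDs(remappingEnabled bool, path string, uid, gid int) error {
	if !remappingEnabled {
		return nil // no-op when the feature gate is off
	}
	if path == "" {
		return fmt.Errorf("path to be set up is empty")
	}
	return os.Chown(path, uid, gid)
}

func main() {
	// With the gate off this is a no-op regardless of the path.
	fmt.Println(chownDirForRemappedIDs(false, "/var/lib/kubelet/pods", 200000, 200000)) // <nil>
}
```

Note that a real call with a remapped uid like 200000 requires CAP_CHOWN (typically root), which the kubelet has.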
@krmayankk commented on this pull request.
> @@ -2608,6 +2608,12 @@ type PodSecurityContext struct {
// +k8s:conversion-gen=false
// +optional
HostIPC bool
+ // Use host's user namespace for the pods.
+ // This field is alpha-level and can be set to false on a node only if user-namespace remapping feature is enabled at runtime and kubelet.
+ // Optional: Defaults to true, if feature-gate is enabled
+ // +k8s:conversion-gen=false
+ // +optional
+ HostUser *bool
HostUser is a very generic word and can be confusing; could we even do HostUserNamespaceMapping?
@krmayankk commented on this pull request.
In pkg/kubelet/apis/cri/runtime/v1alpha2/api.proto:
> @@ -199,6 +202,13 @@ enum NamespaceMode {
// For example, a container with a PID namespace of NODE expects to view
// all of the processes on the host running the kubelet.
NODE = 2;
+ // A NODE_WIDE_REMAPPED namespace is common to all the pods on the kubernetes node
+ // and this namespace has uids/gids mapped to a different range of uids/gids on the
+ // kubernetes node namespace.
+ // For example, starting from uid/gid 0, 10000 uids/gids in this namespace are mapped to
+ // 10000 ids on node namespace,starting with id 200000. i.e uid/gid 0 in this namespace is
+ // mapped to uid/gid 200000 on node namespace.
+ NODE_WIDE_REMAPPED = 3;
is this the only mode supported? nit: the wording is confusing. Something like
@liggitt commented on this pull request.
> @@ -2608,6 +2608,12 @@ type PodSecurityContext struct {
// +k8s:conversion-gen=false
// +optional
HostIPC bool
+ // Use host's user namespace for the pods.
+ // This field is alpha-level and can be set to false on a node only if user-namespace remapping feature is enabled at runtime and kubelet.
+ // Optional: Defaults to true, if feature-gate is enabled
+ // +k8s:conversion-gen=false
+ // +optional
+ HostUser *bool
the idea of the boolean is that the uid/gids are in the host user namespace or not. adding Mapping makes it harder to understand what this means ("true" would actually mean there is no mapping... the uid/gid matches the host value)
@krmayankk commented on this pull request.
In pkg/kubelet/apis/cri/runtime/v1alpha2/api.proto:
> @@ -199,6 +202,13 @@ enum NamespaceMode {
// For example, a container with a PID namespace of NODE expects to view
// all of the processes on the host running the kubelet.
NODE = 2;
+ // A NODE_WIDE_REMAPPED namespace is common to all the pods on the kubernetes node
+ // and this namespace has uids/gids mapped to a different range of uids/gids on the
+ // kubernetes node namespace.
+ // For example, starting from uid/gid 0, 10000 uids/gids in this namespace are mapped to
+ // 10000 ids on node namespace,starting with id 200000. i.e uid/gid 0 in this namespace is
+ // mapped to uid/gid 200000 on node namespace.
+ NODE_WIDE_REMAPPED = 3;
I am assuming each pod would get a different range from the host mapped to its namespace. Is that understanding right ?
> @@ -1117,6 +1133,40 @@ message StatusResponse {
map<string, string> info = 2;
}
+// LinuxIDMapping represents a single user namespace mapping in Linux.
+message LinuxIDMapping {
+ // container_id is the starting id for the mapping inside the container.
+ uint32 container_id = 1;
+ // host_id is the starting id for the mapping on the host.
+ uint32 host_id = 2;
+ // size is the length of the mapping.
+ uint32 size = 3;
+}
can you have different mappings for each pod ?
@liggitt commented on this pull request.
In pkg/kubelet/apis/cri/runtime/v1alpha2/api.proto:
> @@ -199,6 +202,13 @@ enum NamespaceMode {
// For example, a container with a PID namespace of NODE expects to view
// all of the processes on the host running the kubelet.
NODE = 2;
+ // A NODE_WIDE_REMAPPED namespace is common to all the pods on the kubernetes node
+ // and this namespace has uids/gids mapped to a different range of uids/gids on the
+ // kubernetes node namespace.
+ // For example, starting from uid/gid 0, 10000 uids/gids in this namespace are mapped to
+ // 10000 ids on node namespace,starting with id 200000. i.e uid/gid 0 in this namespace is
+ // mapped to uid/gid 200000 on node namespace.
+ NODE_WIDE_REMAPPED = 3;
I am assuming each pod would get a different range from the host mapped to its namespace. Is that understanding right?
No. I was surprised by this as well. uid 0 inside any pod's container will map to the same host uid.
@rhatdan commented on this pull request.
In pkg/kubelet/apis/cri/runtime/v1alpha2/api.proto:
> @@ -199,6 +202,13 @@ enum NamespaceMode {
// For example, a container with a PID namespace of NODE expects to view
// all of the processes on the host running the kubelet.
NODE = 2;
+ // A NODE_WIDE_REMAPPED namespace is common to all the pods on the kubernetes node
+ // and this namespace has uids/gids mapped to a different range of uids/gids on the
+ // kubernetes node namespace.
+ // For example, starting from uid/gid 0, 10000 uids/gids in this namespace are mapped to
+ // 10000 ids on node namespace,starting with id 200000. i.e uid/gid 0 in this namespace is
+ // mapped to uid/gid 200000 on node namespace.
+ NODE_WIDE_REMAPPED = 3;
We can do this in CRI-O, but Kubernetes has no way yet of telling CRI-O to pick a different UserNS for each pod. This is the long-term goal.
@krmayankk commented on this pull request.
In pkg/kubelet/apis/cri/runtime/v1alpha2/api.proto:
> @@ -199,6 +202,13 @@ enum NamespaceMode {
// For example, a container with a PID namespace of NODE expects to view
// all of the processes on the host running the kubelet.
NODE = 2;
+ // A NODE_WIDE_REMAPPED namespace is common to all the pods on the kubernetes node
+ // and this namespace has uids/gids mapped to a different range of uids/gids on the
+ // kubernetes node namespace.
+ // For example, starting from uid/gid 0, 10000 uids/gids in this namespace are mapped to
+ // 10000 ids on node namespace,starting with id 200000. i.e uid/gid 0 in this namespace is
+ // mapped to uid/gid 200000 on node namespace.
+ NODE_WIDE_REMAPPED = 3;
If multiple pods on a given host get mapped to the same uid on the host, what is the security impact? It seems that in this mode we should add validation in the kubelet requiring all pods to run as different uids; otherwise we violate the guarantee that each pod has its own namespace, since multiple pods now belong to the same id namespace.
@rhatdan commented on this pull request.
In pkg/kubelet/apis/cri/runtime/v1alpha2/api.proto:
> @@ -199,6 +202,13 @@ enum NamespaceMode {
// For example, a container with a PID namespace of NODE expects to view
// all of the processes on the host running the kubelet.
NODE = 2;
+ // A NODE_WIDE_REMAPPED namespace is common to all the pods on the kubernetes node
+ // and this namespace has uids/gids mapped to a different range of uids/gids on the
+ // kubernetes node namespace.
+ // For example, starting from uid/gid 0, 10000 uids/gids in this namespace are mapped to
+ // 10000 ids on node namespace,starting with id 200000. i.e uid/gid 0 in this namespace is
+ // mapped to uid/gid 200000 on node namespace.
+ NODE_WIDE_REMAPPED = 3;
That is what we have now, except the ID namespace is the host's UIDs, i.e. root == real root.
For the first step of UserNS we can increase security by turning on the UserNS in the CRIs.
For example, if we turn on CRI-O support for UserNS, we end up with all the pods continuing to run in the same UserNS, except now container root != real root, and container root has no capabilities; it is only able to act within its namespace. If container root breaks out it would be able to attack other containers, depending on the UID mappings, but it would have a harder time since it is not root on the system. Of course SELinux would prevent this at a different level.
In the long run we want to support multiple UserNS, but baby steps...
@jsafrane commented on this pull request.
In pkg/kubelet/kubelet_pods.go:
> @@ -472,11 +472,17 @@ func (kl *Kubelet) GenerateRunContainerOptions(pod *v1.Pod, container *v1.Contai
glog.Errorf("Error on creating %q: %v", p, err)
} else {
opts.PodContainerDir = p
+ if err := kl.chownDirForRemappedIDs(p); err != nil {
+ return nil, nil, fmt.Errorf("error while chowning %s", p)
+ }
+ if err := kl.chownDirForRemappedIDs(filepath.Join(p, "..")); err != nil {
@liggitt, thanks for heads-up.
A lot of existing volume plugins use FSgroup to make data on a volume (local or remote) available to a pod. Kubernetes does chown and chmod with the GID specified in FSGroup.
Let's say a pod has FSGroup = 500 and uses a volume. Kubernetes chmods and chowns the files on the volume to be accessible by group 500 and run a container with supplementary group 500. How will this work with user namespace? What supplementary group will have a process in the container? Will it be able to access all data on the volume?
As a separate item, we should consider how to remove chown/chmod done for FSGroup in Kubernetes and use user namespaces instead, as it's time consuming.
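The mismatch @jsafrane describes can be shown with a small arithmetic sketch; the mapping base and FSGroup value below are illustrative, not from the PR:

```go
package main

import "fmt"

// hostGID translates a container-namespace GID to its host-namespace
// GID under a contiguous remapping that starts at mappingBase.
func hostGID(mappingBase, containerGID uint32) uint32 {
	return mappingBase + containerGID
}

func main() {
	const fsGroup = 500        // GID the kubelet chowns the volume to (host view)
	const mappingBase = 200000 // host GID mapped to container GID 0

	// The container process runs with supplementary group fsGroup inside
	// its namespace, but the kernel sees it as mappingBase+fsGroup.
	processHostGID := hostGID(mappingBase, fsGroup)

	// The volume files are owned by host GID fsGroup, so the two host
	// GIDs disagree and group-based access fails.
	fmt.Println(processHostGID, processHostGID == fsGroup) // 200500 false
}
```

In other words, the FSGroup chown and the remapped supplementary group end up on opposite sides of the mapping, which is why FSGroup does not work as intended under node-wide remapping.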
@vikaschoudhary16 commented on this pull request.
In pkg/kubelet/apis/cri/runtime/v1alpha2/api.proto:
> @@ -199,6 +202,13 @@ enum NamespaceMode {
// For example, a container with a PID namespace of NODE expects to view
// all of the processes on the host running the kubelet.
NODE = 2;
+ // A NODE_WIDE_REMAPPED namespace is common to all the pods on the kubernetes node
+ // and this namespace has uids/gids mapped to a different range of uids/gids on the
+ // kubernetes node namespace.
+ // For example, starting from uid/gid 0, 10000 uids/gids in this namespace are mapped to
+ // 10000 ids on node namespace,starting with id 200000. i.e uid/gid 0 in this namespace is
+ // mapped to uid/gid 200000 on node namespace.
+ NODE_WIDE_REMAPPED = 3;
@liggitt No, NODE_WIDE_REMAPPED makes sense only for user namespaces. Initially I was using "POD" to represent user-namespace remapping, but because of your concern that user-namespace remapping is currently not actually pod-level, which is true, I introduced this new enum value to bring clarity.
@vikaschoudhary16 commented on this pull request.
In pkg/apis/core/v1/defaults.go:
> @@ -181,6 +181,10 @@ func SetDefaults_PodSpec(obj *v1.PodSpec) {
if obj.SchedulerName == "" {
obj.SchedulerName = v1.DefaultSchedulerName
}
+ if obj.HostUser == nil && utilfeature.DefaultFeatureGate.Enabled(features.UserNamespaceRemapping) {
+ trueObj := true
+ obj.HostUser = &trueObj
Via the CRI API, this PR sends the "NODE" namespace option to the runtime, and therefore runtimes are expected to provide the host user namespace by default.
@vikaschoudhary16 commented on this pull request.
> @@ -2608,6 +2608,12 @@ type PodSecurityContext struct {
// +k8s:conversion-gen=false
// +optional
HostIPC bool
+ // Use host's user namespace for the pods.
+ // This field is alpha-level and can be set to false on a node only if user-namespace remapping feature is enabled at runtime and kubelet.
+ // Optional: Defaults to true, if feature-gate is enabled
If you mean we should keep the default as nil (irrespective of the feature gate being on or off): because the CRI passes "NODE" as the user namespace option to the runtime by default, the overall effect will be the same. Even keeping nil will provide the host user namespace in the container because of the CRI defaulting.
> @@ -2608,6 +2608,12 @@ type PodSecurityContext struct {
// +k8s:conversion-gen=false
// +optional
HostIPC bool
+ // Use host's user namespace for the pods.
+ // This field is alpha-level and can be set to false on a node only if user-namespace remapping feature is enabled at runtime and kubelet.
+ // Optional: Defaults to true, if feature-gate is enabled
+ // +k8s:conversion-gen=false
+ // +optional
+ HostUser *bool
Since for the other namespaces, i.e. network, IPC, and PID, "namespace" is omitted and just hostNetwork or hostIPC is used, I tried to keep this consistent with the existing fields by omitting "Namespace". I don't have a strong opinion though; if you have a preference for HostUserNamespace, I have no problem changing to that.
@vikaschoudhary16 commented on this pull request.
In pkg/kubelet/kubelet_getters.go:
> @@ -194,6 +194,11 @@ func (kl *Kubelet) getRuntime() kubecontainer.Runtime {
return kl.containerRuntime
}
+// getRemappedIDs returns the uid and gid from the host usernamespace that are mapped to container user-namespace
+func (kl *Kubelet) getRemappedIDs() (int, int) {
sure
@vikaschoudhary16 commented on this pull request.
In pkg/kubelet/kubelet_pods.go:
> func hasHostNamespace(pod *v1.Pod) bool {
+ if pod.Spec.HostUser != nil {
+ return pod.Spec.HostIPC || pod.Spec.HostNetwork || pod.Spec.HostPID || *pod.Spec.HostUser
+ }
if pod.Spec.SecurityContext == nil {
This pod.Spec.SecurityContext check was left in by mistake a long time ago. The check does not make sense IMO. Will remove it. Thanks!!
> func hasHostNamespace(pod *v1.Pod) bool {
+ if pod.Spec.HostUser != nil {
+ return pod.Spec.HostIPC || pod.Spec.HostNetwork || pod.Spec.HostPID || *pod.Spec.HostUser
+ }
if pod.Spec.SecurityContext == nil {
return false
}
sure
@vikaschoudhary16 commented on this pull request.
In pkg/kubelet/kuberuntime/helpers.go:
> @@ -190,6 +190,21 @@ func toKubeRuntimeStatus(status *runtimeapi.RuntimeStatus) *kubecontainer.Runtim
return &kubecontainer.RuntimeStatus{Conditions: conditions}
}
+// toKubeRuntimeConfig converts the runtimeapi.ActiveRuntimeConfig to kubecontainer.RuntimeConfigInfo
+func toKubeRuntimeConfig(config *runtimeapi.ActiveRuntimeConfig) *kubecontainer.RuntimeConfigInfo {
+ userNSConfig := kubecontainer.UserNamespaceConfigInfo{
+ UidMapping: kubecontainer.UserNSMapping{
+ ContainerID: config.UserNamespaceConfig.UidMappings[0].ContainerId,
Can contain 0 items — yes, I should have retrieved it via config.GetUserNamespaceConfig().GetUidMappings(). Will fix. Thanks!
Can contain more than 1 item — we are interested only in the host ID for ID 0 in the container, therefore we take the first ID of the first mapping.
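The nil-safe retrieval being agreed on here could look roughly like this; the types are simplified local stand-ins for the generated CRI protobuf types (which provide these Get* accessors automatically):

```go
package main

import "fmt"

// Simplified stand-ins for the generated CRI types.
type LinuxIDMapping struct{ ContainerId, HostId, Size uint32 }

type LinuxUserNamespaceConfig struct{ UidMappings []*LinuxIDMapping }

// GetUidMappings mimics the protobuf-generated nil-safe accessor.
func (c *LinuxUserNamespaceConfig) GetUidMappings() []*LinuxIDMapping {
	if c == nil {
		return nil
	}
	return c.UidMappings
}

// firstHostUID returns the host uid mapped to container uid 0, guarding
// against a nil config or empty mapping list instead of indexing
// UidMappings[0] blindly.
func firstHostUID(cfg *LinuxUserNamespaceConfig) (uint32, bool) {
	ms := cfg.GetUidMappings()
	if len(ms) == 0 {
		return 0, false
	}
	return ms[0].HostId, true
}

func main() {
	uid, ok := firstHostUID(nil)
	fmt.Println(uid, ok) // 0 false
	cfg := &LinuxUserNamespaceConfig{
		UidMappings: []*LinuxIDMapping{{HostId: 200000, Size: 10000}},
	}
	uid, ok = firstHostUID(cfg)
	fmt.Println(uid, ok) // 200000 true
}
```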
@vikaschoudhary16: PR needs rebase.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
@krmayankk commented on this pull request.
In pkg/kubelet/dockershim/docker_sandbox_test.go:
> + // Stop the sandbox.
+ expected.State = runtimeapi.PodSandboxState_SANDBOX_NOTREADY
+ _, err = ds.StopPodSandbox(getTestCTX(), &runtimeapi.StopPodSandboxRequest{PodSandboxId: id})
+ require.NoError(t, err)
+ // IP not valid after sandbox stop
+ expected.Network.Ip = ""
+ statusResp, err = ds.PodSandboxStatus(getTestCTX(), &runtimeapi.PodSandboxStatusRequest{PodSandboxId: id})
+ require.NoError(t, err)
+ assert.Equal(t, expected, statusResp.Status)
+
+ // Remove the container.
+ _, err = ds.RemovePodSandbox(getTestCTX(), &runtimeapi.RemovePodSandboxRequest{PodSandboxId: id})
+ require.NoError(t, err)
+ statusResp, err = ds.PodSandboxStatus(getTestCTX(), &runtimeapi.PodSandboxStatusRequest{PodSandboxId: id})
+ assert.Error(t, err, fmt.Sprintf("status of sandbox: %+v", statusResp))
+ }
Is that too much to test in a single unit test? I don't have full context here, but is there a possibility of breaking this into smaller tests?
@krmayankk commented on this pull request.
In pkg/kubelet/dockershim/docker_service.go:
> + gidMapping.HostId = remappedNonRootHostID
+ uidMapping.Size_ = uidMappingSize
+ gidMapping.Size_ = gidMappingSize
+ }
+
+ linuxConfig := &runtimeapi.LinuxUserNamespaceConfig{
+ UidMappings: []*runtimeapi.LinuxIDMapping{uidMapping},
+ GidMappings: []*runtimeapi.LinuxIDMapping{gidMapping},
+ }
+ activeRuntimeConfig := &runtimeapi.ActiveRuntimeConfig{UserNamespaceConfig: linuxConfig}
+ return &runtimeapi.GetRuntimeConfigInfoResponse{RuntimeConfig: activeRuntimeConfig}, nil
+}
+
+// isUserNsEnabled parses docker info. Returns true if user-namespace feature is found to enabled, otherwise false
+func isUserNsEnabled(dockerInfo *dockertypes.Info) bool {
+ var usernsEnabled bool
nit: set this to false explicitly rather than relying on the zero value?
@krmayankk commented on this pull request.
> + return fmt.Errorf("error chowning plugins directory: %v", err)
+ }
+
+ return nil
+}
+
+// chownDirForRemappedIDs change dir and path ownerships if NodeUserNamespace is enabled by feature-gate
+func (kl *Kubelet) chownDirForRemappedIDs(path string) error {
+ if !kl.nodeWideUserNamespaceRemapping {
+ // No-Op is NodeUserNamespace feature gate is not enabled
+ return nil
+ }
+ if path == "" {
+ return fmt.Errorf("path to be setup is empty.")
+ }
+ uid, gid := kl.getRemappedIDs()
@vikaschoudhary16 when a user specifies runAsUser/runAsGroup, I am assuming the uid/gid mentioned is from the container's namespace and not the pod's namespace, right?
@krmayankk commented on this pull request.
In pkg/kubelet/kubelet_pods.go:
> @@ -472,11 +472,17 @@ func (kl *Kubelet) GenerateRunContainerOptions(pod *v1.Pod, container *v1.Contai
glog.Errorf("Error on creating %q: %v", p, err)
} else {
opts.PodContainerDir = p
+ if err := kl.chownDirForRemappedIDs(p); err != nil {
is this chown happening for volumes mounted or just the container file system ?
@vikaschoudhary16 commented on this pull request.
> + return fmt.Errorf("error chowning plugins directory: %v", err)
+ }
+
+ return nil
+}
+
+// chownDirForRemappedIDs change dir and path ownerships if NodeUserNamespace is enabled by feature-gate
+func (kl *Kubelet) chownDirForRemappedIDs(path string) error {
+ if !kl.nodeWideUserNamespaceRemapping {
+ // No-Op is NodeUserNamespace feature gate is not enabled
+ return nil
+ }
+ if path == "" {
+ return fmt.Errorf("path to be setup is empty.")
+ }
+ uid, gid := kl.getRemappedIDs()
I guess you meant "container's namespace and not host namespace". Yes, runAsUser/runAsGroup mentioned in the security context will be from the container namespace.
For example, if the runtime user-namespace configuration is "testuser:80000:100" and uid 60 is provided in the security context, inside the container it will be uid 60 but on the host it will be 80060.
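The worked example above can be sketched as a small helper; the mapping format mirrors the subuid/subgid-style "name:base:size" triple mentioned in the thread:

```go
package main

import "fmt"

// mapToHost applies a single "base:size" mapping range to a
// container-namespace id. It reports false when the id falls outside
// the range (no host id exists for it).
func mapToHost(base, size, containerID uint32) (uint32, bool) {
	if containerID >= size {
		return 0, false
	}
	return base + containerID, true
}

func main() {
	// testuser:80000:100 with runAsUser 60 → host uid 80060.
	host, ok := mapToHost(80000, 100, 60)
	fmt.Println(host, ok) // 80060 true

	// uid 200 is outside a size-100 mapping.
	_, ok = mapToHost(80000, 100, 200)
	fmt.Println(ok) // false
}
```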
@vikaschoudhary16 commented on this pull request.
In pkg/kubelet/kubelet_pods.go:
> @@ -472,11 +472,17 @@ func (kl *Kubelet) GenerateRunContainerOptions(pod *v1.Pod, container *v1.Contai
glog.Errorf("Error on creating %q: %v", p, err)
} else {
opts.PodContainerDir = p
+ if err := kl.chownDirForRemappedIDs(p); err != nil {
+ return nil, nil, fmt.Errorf("error while chowning %s", p)
+ }
+ if err := kl.chownDirForRemappedIDs(filepath.Join(p, "..")); err != nil {
@jsafrane fsGroup will not work as intended. On the host, fsGroup will get remapped per the user-namespace configuration, and the remapped ID will not match the group ID on the remote volume. This is due to lack of support in the runtimes. However, we can aim to get this working before graduating the feature to beta.
@vikaschoudhary16 commented on this pull request.
In pkg/kubelet/kubelet_pods.go:
> @@ -472,11 +472,17 @@ func (kl *Kubelet) GenerateRunContainerOptions(pod *v1.Pod, container *v1.Contai
glog.Errorf("Error on creating %q: %v", p, err)
} else {
opts.PodContainerDir = p
+ if err := kl.chownDirForRemappedIDs(p); err != nil {
For the volumes as well, but only for volumes under /var/lib/kubelet/pods/pod-id/volumes/ and not for remote volumes.
@vikaschoudhary16 commented on this pull request.
In pkg/kubelet/apis/cri/runtime/v1alpha2/api.proto:
> @@ -1117,6 +1133,40 @@ message StatusResponse {
map<string, string> info = 2;
}
+// LinuxIDMapping represents a single user namespace mapping in Linux.
+message LinuxIDMapping {
+ // container_id is the starting id for the mapping inside the container.
+ uint32 container_id = 1;
+ // host_id is the starting id for the mapping on the host.
+ uint32 host_id = 2;
+ // size is the length of the mapping.
+ uint32 size = 3;
+}
This is something we are not targeting in the first phase. For now, we are only isolating the root user between the host and containers. Container-level isolation is the focus of the next phase.
@vikaschoudhary16 commented on this pull request.
> @@ -2608,6 +2608,12 @@ type PodSecurityContext struct {
// +k8s:conversion-gen=false
// +optional
HostIPC bool
+ // Use host's user namespace for the pods.
+ // This field is alpha-level and can be set to false on a node only if user-namespace remapping feature is enabled at runtime and kubelet.
+ // Optional: Defaults to true, if feature-gate is enabled
+ // +k8s:conversion-gen=false
+ // +optional
+ HostUser *bool
just saw other comments. Will update to hostUser
@vikaschoudhary16 commented on this pull request.
In pkg/kubelet/apis/cri/runtime/v1alpha2/api.proto:
> @@ -199,6 +202,13 @@ enum NamespaceMode {
// For example, a container with a PID namespace of NODE expects to view
// all of the processes on the host running the kubelet.
NODE = 2;
+ // A NODE_WIDE_REMAPPED namespace is common to all the pods on the kubernetes node
+ // and this namespace has uids/gids mapped to a different range of uids/gids on the
+ // kubernetes node namespace.
+ // For example, starting from uid/gid 0, 10000 uids/gids in this namespace are mapped to
+ // 10000 ids on node namespace,starting with id 200000. i.e uid/gid 0 in this namespace is
+ // mapped to uid/gid 200000 on node namespace.
+ NODE_WIDE_REMAPPED = 3;
No. I was surprised by this as well. uid 0 inside any pod's container will map to the same host uid.
As @rhatdan explained, this is still an improvement because it won't be the host's root, which is currently the case.
@krmayankk commented on this pull request.
In pkg/kubelet/kuberuntime/kuberuntime_manager.go:
> @@ -116,6 +116,13 @@ type kubeGenericRuntimeManager struct {
// A shim to legacy functions for backward compatibility.
legacyLogProvider LegacyLogProvider
+
+ // cache of runtime configuration eg, user-namespace configuration
+ runtimeConfig *kubecontainer.RuntimeConfigInfo
+ // GID on host usernamespace mapped to GID 0 in container user-namespace
+ firstRemappedGIDOnHost int
where is this configuration coming from ? kubelet configuration ?
@vikaschoudhary16 commented on this pull request.
In pkg/kubelet/kuberuntime/kuberuntime_manager.go:
> @@ -116,6 +116,13 @@ type kubeGenericRuntimeManager struct {
// A shim to legacy functions for backward compatibility.
legacyLogProvider LegacyLogProvider
+
+ // cache of runtime configuration eg, user-namespace configuration
+ runtimeConfig *kubecontainer.RuntimeConfigInfo
+ // GID on host usernamespace mapped to GID 0 in container user-namespace
+ firstRemappedGIDOnHost int
from runtime through CRI
@krmayankk commented on this pull request.
In pkg/kubelet/apis/cri/runtime/v1alpha2/api.proto:
> @@ -199,6 +202,13 @@ enum NamespaceMode {
// For example, a container with a PID namespace of NODE expects to view
// all of the processes on the host running the kubelet.
NODE = 2;
+ // A NODE_WIDE_REMAPPED namespace is common to all the pods on the kubernetes node
+ // and this namespace has uids/gids mapped to a different range of uids/gids on the
+ // kubernetes node namespace.
+ // For example, starting from uid/gid 0, 10000 uids/gids in this namespace are mapped to
+ // 10000 ids on node namespace,starting with id 200000. i.e uid/gid 0 in this namespace is
+ // mapped to uid/gid 200000 on node namespace.
+ NODE_WIDE_REMAPPED = 3;
Thanks for the explanation @rhatdan. I did know that currently container uids are the same as host uids and hence carry the exact same privileges. This would mean that if we are mixing pods from different customers, they must currently run with different uids.
@jsafrane commented on this pull request.
In pkg/kubelet/kubelet_pods.go:
> @@ -472,11 +472,17 @@ func (kl *Kubelet) GenerateRunContainerOptions(pod *v1.Pod, container *v1.Contai
glog.Errorf("Error on creating %q: %v", p, err)
} else {
opts.PodContainerDir = p
+ if err := kl.chownDirForRemappedIDs(p); err != nil {
+ return nil, nil, fmt.Errorf("error while chowning %s", p)
+ }
+ if err := kl.chownDirForRemappedIDs(filepath.Join(p, "..")); err != nil {
If I read this correctly, that means all pods that don't use PodSpec.HostUser can't use any volumes with FSGroup? That makes most volumes unusable in multi-tenant environment.
@derekwaynecarr commented on this pull request.
In pkg/apis/core/v1/defaults.go:
> @@ -181,6 +181,10 @@ func SetDefaults_PodSpec(obj *v1.PodSpec) {
if obj.SchedulerName == "" {
obj.SchedulerName = v1.DefaultSchedulerName
}
+ if obj.HostUser == nil && utilfeature.DefaultFeatureGate.Enabled(features.UserNamespaceRemapping) {
+ trueObj := true
+ obj.HostUser = &trueObj
i think we need to discuss that more. what happens if you run kata or gvisor in future?
@vikaschoudhary16 commented on this pull request.
In pkg/kubelet/kubelet_pods.go:
> @@ -472,11 +472,17 @@ func (kl *Kubelet) GenerateRunContainerOptions(pod *v1.Pod, container *v1.Contai
glog.Errorf("Error on creating %q: %v", p, err)
} else {
opts.PodContainerDir = p
+ if err := kl.chownDirForRemappedIDs(p); err != nil {
+ return nil, nil, fmt.Errorf("error while chowning %s", p)
+ }
+ if err := kl.chownDirForRemappedIDs(filepath.Join(p, "..")); err != nil {
@jsafrane right. If hostUsernamespace is false, that means the feature is enabled and thus FSGroup won't work, because FSGroup might be mapped to different host GIDs on different nodes. This is a limitation in alpha and will be resolved before beta.
@vikaschoudhary16 commented on this pull request.
In pkg/apis/core/v1/defaults.go:
> @@ -181,6 +181,10 @@ func SetDefaults_PodSpec(obj *v1.PodSpec) {
if obj.SchedulerName == "" {
obj.SchedulerName = v1.DefaultSchedulerName
}
+ if obj.HostUser == nil && utilfeature.DefaultFeatureGate.Enabled(features.UserNamespaceRemapping) {
+ trueObj := true
+ obj.HostUser = &trueObj
To handle runtimes like kata or gvisor, one more boolean field, something like HostUserNamespaceSharingCapability, will be required in the CRI API to differentiate between the following cases.
Complete behavior will be like this:
| Enabled at Kubelet | *hostUser | HostUserNamespace-SharingCapability | Enabled at runtime | UserNamespaceOption sent to runtime |
|---|---|---|---|---|
| false | non-nil | * | * | Validation Error |
| * | nil | true | true | NODE_WIDE_REMAPPED |
| * | nil | true | false | NODE |
| true | true | false | * | validation error |
| true | false | false | * | POD |
| true | true | true | * | NODE |
| true | false | true | false | validation error |
@giuseppe commented on this pull request.
In pkg/kubelet/dockershim/docker_service.go:
> + return &runtimeapi.GetRuntimeConfigInfoResponse{RuntimeConfig: activeRuntimeConfig}, nil
+}
+
+// isUserNsEnabled parses docker info. Returns true if user-namespace feature is found to enabled, otherwise false
+func isUserNsEnabled(dockerInfo *dockertypes.Info) bool {
+ var usernsEnabled bool
+ for _, secOpt := range dockerInfo.SecurityOptions {
+ if strings.Contains(secOpt, "userns") {
+ usernsEnabled = true
+ break
+ }
+ }
+ return usernsEnabled
+}
+
+// getRremappedNonRootHostID parses docker info to determine ID on the host usernamespace which is mapped to {U/G}ID 0 in the container user-namespace
typo in the comment: "getRremappedNonRootHostID"
In pkg/kubelet/dockershim/docker_service.go:
> + if isUserNsEnabled(dockerInfo) {
+ remappedNonRootHostID, err := getRemappedNonRootHostID(dockerInfo)
+ if err != nil {
+ return nil, fmt.Errorf("failed to get remappedNonRootHostID. err: %v", err)
+ }
+ uidMappingSize, gidMappingSize, err := getUserNsMappingSizes(remappedNonRootHostID)
+ if err != nil {
+ return nil, fmt.Errorf("failed to get user-namespace mapping sizes. err: %v", err)
+ }
+
+ uidMapping.ContainerId = uint32(0)
+ gidMapping.ContainerId = uint32(0)
+ uidMapping.HostId = remappedNonRootHostID
+ gidMapping.HostId = remappedNonRootHostID
+ uidMapping.Size_ = uidMappingSize
+ gidMapping.Size_ = gidMappingSize
this seems to assume there can be only one mapping available for the user. Is it safe to assume it?
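To illustrate the concern: on Linux, a single user can own several subordinate ID ranges (the `/etc/subuid` format is `user:start:count`, one line per range), so assuming only the first range exists would drop the rest. A minimal sketch of a parser that keeps all ranges (illustrative, not dockershim code):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

type idRange struct{ start, size uint32 }

// parseSubIDRanges collects every /etc/subuid-style range owned by user,
// rather than stopping at the first match.
func parseSubIDRanges(content, user string) []idRange {
	var ranges []idRange
	for _, line := range strings.Split(content, "\n") {
		parts := strings.Split(strings.TrimSpace(line), ":")
		if len(parts) != 3 || parts[0] != user {
			continue
		}
		start, err1 := strconv.ParseUint(parts[1], 10, 32)
		size, err2 := strconv.ParseUint(parts[2], 10, 32)
		if err1 != nil || err2 != nil {
			continue
		}
		ranges = append(ranges, idRange{uint32(start), uint32(size)})
	}
	return ranges
}

func main() {
	content := "dockremap:100000:65536\ndockremap:300000:65536\nother:1:2\n"
	fmt.Println(len(parseSubIDRanges(content, "dockremap"))) // two ranges, not one
}
```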
@derekwaynecarr commented on this pull request.
In pkg/kubelet/kubelet_pods.go:
> @@ -472,11 +472,17 @@ func (kl *Kubelet) GenerateRunContainerOptions(pod *v1.Pod, container *v1.Contai
glog.Errorf("Error on creating %q: %v", p, err)
} else {
opts.PodContainerDir = p
+ if err := kl.chownDirForRemappedIDs(p); err != nil {
+ return nil, nil, fmt.Errorf("error while chowning %s", p)
+ }
+ if err := kl.chownDirForRemappedIDs(filepath.Join(p, "..")); err != nil {
@vikaschoudhary16 can you explain how you think this limitation would be addressed? it's not immediately clear to me at this time, but it seems worth understanding at a high-level now.
@derekwaynecarr commented on this pull request.
In pkg/kubelet/kubelet_pods.go:
> @@ -472,11 +472,17 @@ func (kl *Kubelet) GenerateRunContainerOptions(pod *v1.Pod, container *v1.Contai
glog.Errorf("Error on creating %q: %v", p, err)
} else {
opts.PodContainerDir = p
+ if err := kl.chownDirForRemappedIDs(p); err != nil {
+ return nil, nil, fmt.Errorf("error while chowning %s", p)
+ }
+ if err := kl.chownDirForRemappedIDs(filepath.Join(p, "..")); err != nil {
it would help if we can flesh out https://github.com/kubernetes/community/blob/master/contributors/design-proposals/node/node-usernamespace-remapping.md#remote-volumes to know what the path is for actually graduating this feature. right now, in alpha form, without this it would be very limiting.
@liggitt commented on this pull request.
In pkg/kubelet/kuberuntime/helpers.go:
> + usernsConfig := config.GetUserNamespaceConfig()
+ if usernsConfig == nil {
+ return nil
+ }
+ uidMappings := usernsConfig.GetUidMappings()
+ if uidMappings == nil || len(uidMappings) == 0 {
+ return nil
+ }
+ gidMappings := usernsConfig.GetGidMappings()
+ if gidMappings == nil || len(gidMappings) == 0 {
+ return nil
+ }
+
+ userNSConfig := kubecontainer.UserNamespaceConfigInfo{
+ UidMapping: kubecontainer.UserNSMapping{
+ ContainerID: uidMappings[0].ContainerId,
it still seems incorrect to drop all items other than the first here. isn't this losing mapping range info provided by the runtime?
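One possible shape for the fix is a conversion that carries a slice of mappings through instead of a single entry. The struct names below are illustrative stand-ins for the CRI and kubecontainer types, not the real ones:

```go
package main

import "fmt"

// Illustrative stand-ins; the real kubecontainer.UserNSMapping holds a
// single range, which is why the quoted code can only keep index 0.
type criMapping struct{ ContainerID, HostID, Size uint32 }
type userNSMapping struct{ ContainerID, HostID, Size uint32 }

// convertAll keeps every range the runtime reports instead of only the
// first one; a sketch of what the review comment asks for.
func convertAll(in []criMapping) []userNSMapping {
	out := make([]userNSMapping, 0, len(in))
	for _, m := range in {
		out = append(out, userNSMapping{m.ContainerID, m.HostID, m.Size})
	}
	return out
}

func main() {
	in := []criMapping{{0, 100000, 65536}, {65536, 300000, 65536}}
	fmt.Println(len(convertAll(in))) // both ranges survive
}
```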
@liggitt commented on this pull request.
In pkg/kubelet/kubelet_pods.go:
> @@ -472,11 +472,17 @@ func (kl *Kubelet) GenerateRunContainerOptions(pod *v1.Pod, container *v1.Contai
glog.Errorf("Error on creating %q: %v", p, err)
} else {
opts.PodContainerDir = p
+ if err := kl.chownDirForRemappedIDs(p); err != nil {
+ return nil, nil, fmt.Errorf("error while chowning %s", p)
+ }
+ if err := kl.chownDirForRemappedIDs(filepath.Join(p, "..")); err != nil {
ascending to the parent dir via .. reminds me of the issue #57095 ... is there risk of hitting the wrong location by joining ".."?
@vikaschoudhary16 commented on this pull request.
In pkg/kubelet/kubelet_pods.go:
> @@ -472,11 +472,17 @@ func (kl *Kubelet) GenerateRunContainerOptions(pod *v1.Pod, container *v1.Contai
glog.Errorf("Error on creating %q: %v", p, err)
} else {
opts.PodContainerDir = p
+ if err := kl.chownDirForRemappedIDs(p); err != nil {
+ return nil, nil, fmt.Errorf("error while chowning %s", p)
+ }
+ if err := kl.chownDirForRemappedIDs(filepath.Join(p, "..")); err != nil {
@derekwaynecarr This behavior is runtime-dependent. Per the current behaviour of runtimes such as CRI-O and Docker, when userns is enabled, the FsGroup mentioned in the pod spec is translated by the runtime into the actual GID on the volume files: if x->f(x) is the mapping and 'x' is the FsGroup, then f(x) becomes the GID. This works fine if the volume is shared by pods that have the same userns settings, i.e. either userns is enabled for all the pods on the node with the same configuration, or all of them use the host user namespace.
A problem arises only when there is a mismatch in user-namespace configuration.
If runtimes instead interpreted the FsGroup ID as an already-translated one, i.e. the actual GID on the host, it might work in all cases. That would require some handling to be added in the runtimes, e.g. CRI-O, Docker (any runtime).
Would it be OK to go alpha with this userns-configuration-matching limitation for FsGroup and, as a roadmap item, target getting that handling changed/added in the supported runtimes before beta?
@vikaschoudhary16 commented on this pull request.
In pkg/kubelet/kubelet_pods.go:
> @@ -472,11 +472,17 @@ func (kl *Kubelet) GenerateRunContainerOptions(pod *v1.Pod, container *v1.Contai
glog.Errorf("Error on creating %q: %v", p, err)
} else {
opts.PodContainerDir = p
+ if err := kl.chownDirForRemappedIDs(p); err != nil {
+ return nil, nil, fmt.Errorf("error while chowning %s", p)
+ }
+ if err := kl.chownDirForRemappedIDs(filepath.Join(p, "..")); err != nil {
@liggitt nice catch, will fix this. Thanks!
@vikaschoudhary16: GitHub didn't allow me to request PR reviews from the following users: giuseppe.
Note that only kubernetes members and repo collaborators can review this PR, and authors cannot review their own PRs.
In response to this:
@derekwaynecarr This behavior is runtime-dependent. Per the current behaviour of runtimes such as CRI-O and Docker, when userns is enabled, the FsGroup mentioned in the pod spec is translated by the runtime into the actual GID on the volume files: if x->f(x) is the mapping and 'x' is the FsGroup, then f(x) becomes the GID. This works fine if the volume is shared by pods that have the same userns settings, i.e. either userns is enabled for all the pods on the node with the same configuration, or all of them use the host user namespace.
A problem arises only when there is a mismatch in user-namespace configuration.
If runtimes instead interpreted the FsGroup ID as an already-translated one, i.e. the actual GID on the host, it might work in all cases. That would require some handling to be added in the runtimes, e.g. CRI-O, Docker (any runtime).
Would it be OK to go alpha with this userns-configuration-matching limitation for FsGroup and, as a roadmap item, target getting that handling changed/added in the supported runtimes before beta?
—
@vikaschoudhary16 Hey there! I'm the Docs wrangler for this release. Is there any chance you could open a docs PR against the release-1.12 branch as a placeholder? That gives us more confidence in the feature shipping in this release and gives me something to work with when we start doing reviews/edits. Thanks!
past freeze with open issues, moving to the next milestone
/milestone v1.13
ping, needs rebase
/remove-priority critical-urgent
I'm downgrading this based on the lack of activity. It was bumped to critical-urgent so it would not get kicked out by a bot that we no longer run, and then it got punted from the release anyway.
/priority important-soon
This priority means we are planning to get it in the current release cycle. Is this accurate?
/milestone v1.13
I'm adding this to the v1.13 milestone since it relates to kubernetes/features#127 which is currently planned for v1.13
@spiffxp
@vikaschoudhary16
We operate k8s clusters that allow execution of third-party containers. From a security perspective, this is an important feature we have been waiting on for a while. Please consider this a high-priority security feature and make it available in the v1.13 release.
Are we confident this is going to make the v1.13 milestone? Enhancement freeze is tomorrow COB. If there is no communication on this issue or activity on the PR, this is going to be pulled from the milestone as it doesn't fit with our "stability" theme. If there is no communication after COB tomorrow, an exception will be required to add it back to the milestone. Please let me know where we stand. Thanks!
@nikopen / @spiffxp - the milestone has been pulled from kubernetes/enhancements#127 due to no update; can we stop tracking this PR for 1.13?
[APPROVALNOTIFIER] This PR is NOT APPROVED
This pull-request has been approved by: derekwaynecarr, vikaschoudhary16
To fully approve this pull request, please assign additional approvers.
We suggest the following additional approver: lavalamp
If they are not already assigned, you can assign the PR to them by writing /assign @lavalamp in a comment when ready.
The full list of commands accepted by this bot can be found here.
The pull request process is described here
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment
@vikaschoudhary16: The following tests failed, say /retest to rerun them all:
| Test name | Commit | Details | Rerun command |
|---|---|---|---|
| pull-kubernetes-cross | c6b8d2f | link | /test pull-kubernetes-cross |
| pull-kubernetes-typecheck | fb9a6d2 | link | /test pull-kubernetes-typecheck |
Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.
@vikaschoudhary16: The following tests failed, say /retest to rerun them all:
| Test name | Commit | Details | Rerun command |
|---|---|---|---|
| pull-kubernetes-cross | c6b8d2f | link | /test pull-kubernetes-cross |
| pull-kubernetes-typecheck | fb9a6d2 | link | /test pull-kubernetes-typecheck |
| pull-kubernetes-bazel-test | fb9a6d2 | link | /test pull-kubernetes-bazel-test |
@vikaschoudhary16: The following tests failed, say /retest to rerun them all:
| Test name | Commit | Details | Rerun command |
|---|---|---|---|
| pull-kubernetes-cross | c6b8d2f | link | /test pull-kubernetes-cross |
| pull-kubernetes-typecheck | fb9a6d2 | link | /test pull-kubernetes-typecheck |
| pull-kubernetes-bazel-test | fb9a6d2 | link | /test pull-kubernetes-bazel-test |
| pull-kubernetes-verify | fb9a6d2 | link | /test pull-kubernetes-verify |
@vikaschoudhary16: The following tests failed, say /retest to rerun them all:
| Test name | Commit | Details | Rerun command |
|---|---|---|---|
| pull-kubernetes-cross | c6b8d2f | link | /test pull-kubernetes-cross |
| pull-kubernetes-typecheck | fb9a6d2 | link | /test pull-kubernetes-typecheck |
| pull-kubernetes-bazel-test | fb9a6d2 | link | /test pull-kubernetes-bazel-test |
| pull-kubernetes-verify | fb9a6d2 | link | /test pull-kubernetes-verify |
| pull-kubernetes-local-e2e-containerized | fb9a6d2 | link | /test pull-kubernetes-local-e2e-containerized |
@vikaschoudhary16: The following tests failed, say /retest to rerun them all:
| Test name | Commit | Details | Rerun command |
|---|---|---|---|
| pull-kubernetes-verify | a3aded7 | link | /test pull-kubernetes-verify |
@vikaschoudhary16: The following tests failed, say /retest to rerun them all:
| Test name | Commit | Details | Rerun command |
|---|---|---|---|
| pull-kubernetes-local-e2e-containerized | a3aded7 | link | /test pull-kubernetes-local-e2e-containerized |
@vikaschoudhary16: The following tests failed, say /retest to rerun them all:
| Test name | Commit | Details | Rerun command |
|---|---|---|---|
| pull-kubernetes-cross | c6b8d2f | link | /test pull-kubernetes-cross |
| pull-kubernetes-verify | a3aded7 | link | /test pull-kubernetes-verify |
| pull-kubernetes-local-e2e-containerized | a3aded7 | link | /test pull-kubernetes-local-e2e-containerized |
| pull-kubernetes-integration | af6494f | link | /test pull-kubernetes-integration |
@vikaschoudhary16: The following tests failed, say /retest to rerun them all:
| Test name | Commit | Details | Rerun command |
|---|---|---|---|
| pull-kubernetes-local-e2e-containerized | a3aded7 | link | /test pull-kubernetes-local-e2e-containerized |
| pull-kubernetes-integration | af6494f | link | /test pull-kubernetes-integration |
| pull-kubernetes-verify | af6494f | link | /test pull-kubernetes-verify |
@vikaschoudhary16: The following tests failed, say /retest to rerun them all:
| Test name | Commit | Details | Rerun command |
|---|---|---|---|
| pull-kubernetes-cross | c6b8d2f | link | /test pull-kubernetes-cross |
| pull-kubernetes-local-e2e-containerized | a3aded7 | link | /test pull-kubernetes-local-e2e-containerized |
| pull-kubernetes-integration | af6494f | link | /test pull-kubernetes-integration |
| pull-kubernetes-verify | af6494f | link | /test pull-kubernetes-verify |
| pull-kubernetes-e2e-gce-100-performance | af6494f | link | /test pull-kubernetes-e2e-gce-100-performance |
@vikaschoudhary16: The following tests failed, say /retest to rerun them all:
| Test name | Commit | Details | Rerun command |
|---|---|---|---|
| pull-kubernetes-integration | af6494f | link | /test pull-kubernetes-integration |
| pull-kubernetes-verify | af6494f | link | /test pull-kubernetes-verify |
| pull-kubernetes-e2e-gce-100-performance | af6494f | link | /test pull-kubernetes-e2e-gce-100-performance |
| pull-kubernetes-e2e-kops-aws | af6494f | link | /test pull-kubernetes-e2e-kops-aws |
| pull-kubernetes-local-e2e-containerized | af6494f | link | /test pull-kubernetes-local-e2e-containerized |
/test pull-kubernetes-verify
@vikaschoudhary16: The following tests failed, say /retest to rerun them all:
| Test name | Commit | Details | Rerun command |
|---|---|---|---|
| pull-kubernetes-cross | c6b8d2f | link | /test pull-kubernetes-cross |
| pull-kubernetes-integration | af6494f | link | /test pull-kubernetes-integration |
| pull-kubernetes-e2e-gce-100-performance | af6494f | link | /test pull-kubernetes-e2e-gce-100-performance |
| pull-kubernetes-e2e-kops-aws | af6494f | link | /test pull-kubernetes-e2e-kops-aws |
| pull-kubernetes-local-e2e-containerized | af6494f | link | /test pull-kubernetes-local-e2e-containerized |
| pull-kubernetes-verify | af6494f | link | /test pull-kubernetes-verify |
—
[APPROVALNOTIFIER] This PR is NOT APPROVED
This pull-request has been approved by: derekwaynecarr, vikaschoudhary16
To fully approve this pull request, please assign additional approvers.
We suggest the following additional approver: thockin
If they are not already assigned, you can assign the PR to them by writing /assign @thockin in a comment when ready.
The full list of commands accepted by this bot can be found here.
The pull request process is described here
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment
—
| pull-kubernetes-bazel-test | dc1c08a | link | /test pull-kubernetes-bazel-test |
—
/test pull-kubernetes-bazel-test
—
@vikaschoudhary16: The following tests failed, say /retest to rerun them all:
| Test name | Commit | Details | Rerun command |
|---|---|---|---|
| pull-kubernetes-e2e-gce-100-performance | af6494f | link | /test pull-kubernetes-e2e-gce-100-performance |
| pull-kubernetes-e2e-kops-aws | af6494f | link | /test pull-kubernetes-e2e-kops-aws |
| pull-kubernetes-local-e2e-containerized | af6494f | link | /test pull-kubernetes-local-e2e-containerized |
| pull-kubernetes-verify | af6494f | link | /test pull-kubernetes-verify |
| pull-kubernetes-bazel-test | dc1c08a | link | /test pull-kubernetes-bazel-test |
| pull-kubernetes-e2e-gce | dc1c08a | link | /test pull-kubernetes-e2e-gce |
@vikaschoudhary16: The following tests failed, say /retest to rerun them all:
| Test name | Commit | Details | Rerun command |
|---|---|---|---|
| pull-kubernetes-cross | c6b8d2f | link | /test pull-kubernetes-cross |
| pull-kubernetes-e2e-gce-100-performance | af6494f | link | /test pull-kubernetes-e2e-gce-100-performance |
| pull-kubernetes-local-e2e-containerized | af6494f | link | /test pull-kubernetes-local-e2e-containerized |
| pull-kubernetes-verify | af6494f | link | /test pull-kubernetes-verify |
| pull-kubernetes-bazel-test | dc1c08a | link | /test pull-kubernetes-bazel-test |
| pull-kubernetes-e2e-gce | dc1c08a | link | /test pull-kubernetes-e2e-gce |
| pull-kubernetes-kubemark-e2e-gce-big | dc1c08a | link | /test pull-kubernetes-kubemark-e2e-gce-big |
| pull-kubernetes-e2e-gce-device-plugin-gpu | dc1c08a | link | /test pull-kubernetes-e2e-gce-device-plugin-gpu |
| pull-kubernetes-e2e-kops-aws | dc1c08a | link | /test pull-kubernetes-e2e-kops-aws |
@vikaschoudhary16: The following tests failed, say /retest to rerun them all:
| Test name | Commit | Details | Rerun command |
|---|---|---|---|
| pull-kubernetes-cross | c6b8d2f | link | /test pull-kubernetes-cross |
| pull-kubernetes-e2e-gce-100-performance | af6494f | link | /test pull-kubernetes-e2e-gce-100-performance |
| pull-kubernetes-local-e2e-containerized | af6494f | link | /test pull-kubernetes-local-e2e-containerized |
| pull-kubernetes-bazel-test | dc1c08a | link | /test pull-kubernetes-bazel-test |
| pull-kubernetes-e2e-gce | dc1c08a | link | /test pull-kubernetes-e2e-gce |
| pull-kubernetes-kubemark-e2e-gce-big | dc1c08a | link | /test pull-kubernetes-kubemark-e2e-gce-big |
| pull-kubernetes-e2e-gce-device-plugin-gpu | dc1c08a | link | /test pull-kubernetes-e2e-gce-device-plugin-gpu |
| pull-kubernetes-e2e-kops-aws | dc1c08a | link | /test pull-kubernetes-e2e-kops-aws |
| pull-kubernetes-verify | dc1c08a | link | /test pull-kubernetes-verify |
[APPROVALNOTIFIER] This PR is NOT APPROVED
This pull-request has been approved by: derekwaynecarr, vikaschoudhary16
To fully approve this pull request, please assign additional approvers.
We suggest the following additional approver: lavalamp
If they are not already assigned, you can assign the PR to them by writing /assign @lavalamp in a comment when ready.
The full list of commands accepted by this bot can be found here.
The pull request process is described here
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment
—
@vikaschoudhary16: The following tests failed, say /retest to rerun them all:
| Test name | Commit | Details | Rerun command |
|---|---|---|---|
| pull-kubernetes-cross | c6b8d2f | link | /test pull-kubernetes-cross |
| pull-kubernetes-e2e-gce-100-performance | af6494f | link | /test pull-kubernetes-e2e-gce-100-performance |
| pull-kubernetes-local-e2e-containerized | af6494f | link | /test pull-kubernetes-local-e2e-containerized |
| pull-kubernetes-e2e-gce | dc1c08a | link | /test pull-kubernetes-e2e-gce |
| pull-kubernetes-kubemark-e2e-gce-big | dc1c08a | link | /test pull-kubernetes-kubemark-e2e-gce-big |
| pull-kubernetes-e2e-gce-device-plugin-gpu | dc1c08a | link | /test pull-kubernetes-e2e-gce-device-plugin-gpu |
| pull-kubernetes-e2e-kops-aws | dc1c08a | link | /test pull-kubernetes-e2e-kops-aws |
| pull-kubernetes-verify | dc1c08a | link | /test pull-kubernetes-verify |
| pull-kubernetes-integration | 64491e8 | link | /test pull-kubernetes-integration |
—
@vikaschoudhary16: The following tests failed, say /retest to rerun them all:
| Test name | Commit | Details | Rerun command |
|---|---|---|---|
| pull-kubernetes-local-e2e-containerized | 64491e8 | link | /test pull-kubernetes-local-e2e-containerized |
/test pull-kubernetes-integration
/test pull-kubernetes-local-e2e-containerized
@vikaschoudhary16: The following tests failed, say /retest to rerun them all:
| Test name | Commit | Details | Rerun command |
|---|---|---|---|
—
/test pull-kubernetes-local-e2e-containerized
@vikaschoudhary16: PR needs rebase.
—
@vikaschoudhary16: The following tests failed, say /retest to rerun them all:
| Test name | Commit | Details | Rerun command |
|---|---|---|---|
| pull-kubernetes-bazel-build | 1058ae6 | link | /test pull-kubernetes-bazel-build |
—
@vikaschoudhary16: The following tests failed, say /retest to rerun them all:
| Test name | Commit | Details | Rerun command |
|---|---|---|---|
| pull-kubernetes-cross | c6b8d2f | link | /test pull-kubernetes-cross |
| pull-kubernetes-bazel-build | 1058ae6 | link | /test pull-kubernetes-bazel-build |
| pull-kubernetes-bazel-test | 1058ae6 | link | /test pull-kubernetes-bazel-test |
@vikaschoudhary16: The following tests failed, say /retest to rerun them all:
| Test name | Commit | Details | Rerun command |
|---|---|---|---|
| pull-kubernetes-cross | c6b8d2f | link | /test pull-kubernetes-cross |
| pull-kubernetes-bazel-build | 1058ae6 | link | /test pull-kubernetes-bazel-build |
| pull-kubernetes-bazel-test | 1058ae6 | link | /test pull-kubernetes-bazel-test |
| pull-kubernetes-godeps | 1058ae6 | link | /test pull-kubernetes-godeps |
@vikaschoudhary16: The following tests failed, say /retest to rerun them all:
| Test name | Commit | Details | Rerun command |
|---|---|---|---|
| pull-kubernetes-cross | c6b8d2f | link | /test pull-kubernetes-cross |
| pull-kubernetes-bazel-build | 1058ae6 | link | /test pull-kubernetes-bazel-build |
| pull-kubernetes-bazel-test | 1058ae6 | link | /test pull-kubernetes-bazel-test |
| pull-kubernetes-godeps | 1058ae6 | link | /test pull-kubernetes-godeps |
| pull-kubernetes-local-e2e-containerized | 1058ae6 | link | /test pull-kubernetes-local-e2e-containerized |