Re: [kubernetes/kubernetes] Node-Level UserNamespace implementation (#64005)


Jordan Liggitt · Jul 19, 2018, 11:06:01 AM
to kubernetes/kubernetes, k8s-mirror-storage-pr-reviews, Team mention

@liggitt commented on this pull request.


In pkg/kubelet/kubelet_pods.go:

> @@ -472,11 +472,17 @@ func (kl *Kubelet) GenerateRunContainerOptions(pod *v1.Pod, container *v1.Contai
 			glog.Errorf("Error on creating %q: %v", p, err)
 		} else {
 			opts.PodContainerDir = p
+			if err := kl.chownDirForRemappedIDs(p); err != nil {
+				return nil, nil, fmt.Errorf("error while chowning %s", p)
+			}
+			if err := kl.chownDirForRemappedIDs(filepath.Join(p, "..")); err != nil {

cc @kubernetes/sig-storage-pr-reviews @kubernetes/sig-auth-pr-reviews for eyes on path traversal possibilities



Kubernetes Submit Queue · Jul 19, 2018, 11:06:13 AM

[MILESTONENOTIFIER] Milestone Pull Request: Up-to-date for process

@dchen1107 @derekwaynecarr @sjenning @vikaschoudhary16 @yujuhong

Pull Request Labels
  • sig/auth sig/cluster-lifecycle sig/node sig/storage: Pull Request will be escalated to these SIGs if needed.
  • priority/critical-urgent: Never automatically move pull request out of a release milestone; continually escalate to contributor and SIG through all available channels.
  • kind/feature: New functionality.
Help

Jordan Liggitt · Jul 19, 2018, 11:06:50 AM

@liggitt commented on this pull request.


In pkg/kubelet/kubelet_pods.go:

> @@ -472,11 +472,17 @@ func (kl *Kubelet) GenerateRunContainerOptions(pod *v1.Pod, container *v1.Contai
 			glog.Errorf("Error on creating %q: %v", p, err)
 		} else {
 			opts.PodContainerDir = p
+			if err := kl.chownDirForRemappedIDs(p); err != nil {

I expected some information about the container to be input to this chown method... can you describe what is being done here?

Jordan Liggitt · Jul 19, 2018, 11:08:07 AM

@liggitt commented on this pull request.


In pkg/kubelet/kubelet_pods.go:

>  func hasHostNamespace(pod *v1.Pod) bool {
+	if pod.Spec.HostUser != nil {
+		return pod.Spec.HostIPC || pod.Spec.HostNetwork || pod.Spec.HostPID || *pod.Spec.HostUser
+	}
 	if pod.Spec.SecurityContext == nil {

why was pod.Spec.SecurityContext short-circuiting this previously and why is that no longer valid?

Vikas Choudhary (vikasc) · Jul 19, 2018, 11:08:50 AM

@vikaschoudhary16 commented on this pull request.


In pkg/kubelet/kubelet_getters.go:

> @@ -194,6 +194,11 @@ func (kl *Kubelet) getRuntime() kubecontainer.Runtime {
 	return kl.containerRuntime
 }
 
+// getRemappedIDs returns the uid and gid from the host usernamespace that are mapped to container user-namespace
+func (kl *Kubelet) getRemappedIDs() (int, int) {

the function name doesn't indicate whether the returned ids are in the host or container namespace

The function comment mentions this. Will try to make it clearer:
L#197 // getRemappedIDs returns the uid and gid from the host usernamespace

Jordan Liggitt · Jul 19, 2018, 11:13:37 AM

@liggitt commented on this pull request.


In pkg/kubelet/kubelet_pods.go:

>  func hasHostNamespace(pod *v1.Pod) bool {
+	if pod.Spec.HostUser != nil {
+		return pod.Spec.HostIPC || pod.Spec.HostNetwork || pod.Spec.HostPID || *pod.Spec.HostUser
+	}
 	if pod.Spec.SecurityContext == nil {
 		return false
 	}

could add HostUser to the existing list with ... || (pod.Spec.HostUser != nil && *pod.Spec.HostUser)
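The suggested single-expression form could look like this; a minimal runnable sketch using simplified stand-in types rather than the real v1.Pod:

```go
package main

import "fmt"

// Simplified stand-ins for the relevant v1.PodSpec fields (hypothetical,
// for illustration only; the real types live in k8s.io/api/core/v1).
type PodSpec struct {
	HostIPC     bool
	HostNetwork bool
	HostPID     bool
	HostUser    *bool
}

type Pod struct{ Spec PodSpec }

// hasHostNamespace folds the nil-able HostUser pointer into the existing
// boolean chain, as suggested in the review.
func hasHostNamespace(pod *Pod) bool {
	return pod.Spec.HostIPC || pod.Spec.HostNetwork || pod.Spec.HostPID ||
		(pod.Spec.HostUser != nil && *pod.Spec.HostUser)
}

func main() {
	f, tr := false, true
	fmt.Println(hasHostNamespace(&Pod{Spec: PodSpec{HostUser: &f}}))  // false
	fmt.Println(hasHostNamespace(&Pod{Spec: PodSpec{HostUser: &tr}})) // true
}
```

The nil check keeps the unset case (feature gate off, field never defaulted) from panicking on the dereference.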


In pkg/kubelet/kuberuntime/helpers.go:

> @@ -190,6 +190,21 @@ func toKubeRuntimeStatus(status *runtimeapi.RuntimeStatus) *kubecontainer.Runtim
 	return &kubecontainer.RuntimeStatus{Conditions: conditions}
 }
 
+// toKubeRuntimeConfig converts the runtimeapi.ActiveRuntimeConfig to kubecontainer.RuntimeConfigInfo
+func toKubeRuntimeConfig(config *runtimeapi.ActiveRuntimeConfig) *kubecontainer.RuntimeConfigInfo {
+	userNSConfig := kubecontainer.UserNamespaceConfigInfo{
+		UidMapping: kubecontainer.UserNSMapping{
+			ContainerID: config.UserNamespaceConfig.UidMappings[0].ContainerId,

can UidMappings contain 0 items? can it contain more than 1 item? This seems like it is losing info

Jordan Liggitt · Jul 19, 2018, 11:19:11 AM

@liggitt commented on this pull request.


In pkg/kubelet/kubelet_getters.go:

> @@ -194,6 +194,11 @@ func (kl *Kubelet) getRuntime() kubecontainer.Runtime {
 	return kl.containerRuntime
 }
 
+// getRemappedIDs returns the uid and gid from the host usernamespace that are mapped to container user-namespace
+func (kl *Kubelet) getRemappedIDs() (int, int) {

would something like this be clearer?

func (kl *Kubelet) getHostUID(containerUID int) (int, error)
func (kl *Kubelet) getHostGID(containerGID int) (int, error)

currently, the inputs are implicit (your comment in #64005 (comment) explains this is returning the host uids that map to uid/gid 0, but that isn't apparent to callers of this function)

that also makes it clear when there's no host id mapped to the specified container id (rather than relying on zero values or -1 to indicate errors)
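A sketch of the suggested signature, assuming the mapping-table shape from the LinuxIDMapping message (container-side start, host-side start, length); the real method would hang off *Kubelet:

```go
package main

import "fmt"

// IDMapping mirrors a single user-namespace mapping entry, in the same
// shape as /proc/<pid>/uid_map: container start, host start, length.
type IDMapping struct {
	ContainerID, HostID, Size uint32
}

// getHostUID resolves a container uid to the host uid through the mapping
// table, returning an error when no range covers it, so callers do not have
// to interpret zero values or -1 as failure (sketch only).
func getHostUID(containerUID uint32, mappings []IDMapping) (uint32, error) {
	for _, m := range mappings {
		if containerUID >= m.ContainerID && containerUID < m.ContainerID+m.Size {
			return m.HostID + (containerUID - m.ContainerID), nil
		}
	}
	return 0, fmt.Errorf("no host uid mapped to container uid %d", containerUID)
}

func main() {
	mappings := []IDMapping{{ContainerID: 0, HostID: 200000, Size: 10000}}
	uid, err := getHostUID(0, mappings)
	fmt.Println(uid, err) // 200000 <nil>
}
```

A getHostGID variant would be identical over the gid mapping table.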

Vikas Choudhary (vikasc) · Jul 19, 2018, 11:50:50 AM

@vikaschoudhary16 commented on this pull request.


In pkg/kubelet/container/runtime.go:

> @@ -479,6 +483,42 @@ type RuntimeStatus struct {
 	Conditions []RuntimeCondition
 }
 
+// RuntimeConfigInfo contains runtime's configuration details, eg: user-namespaces mapping between host and container
+type RuntimeConfigInfo struct {
+	UserNamespaceConfig UserNamespaceConfigInfo

this will be runtime defined.

Vikas Choudhary (vikasc) · Jul 19, 2018, 12:10:44 PM

@vikaschoudhary16 commented on this pull request.


In pkg/kubelet/kubelet_pods.go:

> @@ -472,11 +472,17 @@ func (kl *Kubelet) GenerateRunContainerOptions(pod *v1.Pod, container *v1.Contai
 			glog.Errorf("Error on creating %q: %v", p, err)
 		} else {
 			opts.PodContainerDir = p
+			if err := kl.chownDirForRemappedIDs(p); err != nil {

Container information is not being passed because the non-zero host ID mapped to ID 0 in the container is the same for all containers on a node. We use only this non-zero ID to chown the paths, which are otherwise owned by ID 0 on the host.

krmayankk · Jul 19, 2018, 4:18:21 PM

@krmayankk commented on this pull request.


In pkg/apis/core/types.go:

> @@ -2608,6 +2608,12 @@ type PodSecurityContext struct {
 	// +k8s:conversion-gen=false
 	// +optional
 	HostIPC bool
+	// Use host's user namespace for the pods.
+	// This field is alpha-level and can be set to false on a node only if user-namespace remapping feature is enabled at runtime and kubelet.
+	// Optional: Defaults to true, if feature-gate is enabled
+	// +k8s:conversion-gen=false
+	// +optional
+	HostUser *bool

HostUser is a very generic word and can be confusing, could we even do HostUserNamespaceMapping ?

krmayankk · Jul 19, 2018, 4:34:26 PM

@krmayankk commented on this pull request.


In pkg/kubelet/apis/cri/runtime/v1alpha2/api.proto:

> @@ -199,6 +202,13 @@ enum NamespaceMode {
     // For example, a container with a PID namespace of NODE expects to view
     // all of the processes on the host running the kubelet.
     NODE      = 2;
+    // A NODE_WIDE_REMAPPED namespace is common to all the pods on the kubernetes node
+    // and this namespace has uids/gids mapped to a different range of uids/gids on the
+    // kubernetes node namespace.
+    // For example, starting from uid/gid 0, 10000 uids/gids in this namespace are mapped to
+    // 10000 ids on node namespace,starting with id 200000. i.e uid/gid 0 in this namespace is
+    // mapped to uid/gid 200000 on node namespace.
+    NODE_WIDE_REMAPPED = 3;

Is this the only mode supported? Nit: the wording is confusing. Something like…

Jordan Liggitt · Jul 19, 2018, 4:34:43 PM

@liggitt commented on this pull request.


In pkg/apis/core/types.go:

> @@ -2608,6 +2608,12 @@ type PodSecurityContext struct {
 	// +k8s:conversion-gen=false
 	// +optional
 	HostIPC bool
+	// Use host's user namespace for the pods.
+	// This field is alpha-level and can be set to false on a node only if user-namespace remapping feature is enabled at runtime and kubelet.
+	// Optional: Defaults to true, if feature-gate is enabled
+	// +k8s:conversion-gen=false
+	// +optional
+	HostUser *bool

the idea of the boolean is that the uid/gids are in the host user namespace or not. adding Mapping makes it harder to understand what this means ("true" would actually mean there is no mapping... the uid/gid matches the host value)

krmayankk · Jul 19, 2018, 4:36:08 PM

@krmayankk commented on this pull request.


In pkg/kubelet/apis/cri/runtime/v1alpha2/api.proto:

> @@ -199,6 +202,13 @@ enum NamespaceMode {
     // For example, a container with a PID namespace of NODE expects to view
     // all of the processes on the host running the kubelet.
     NODE      = 2;
+    // A NODE_WIDE_REMAPPED namespace is common to all the pods on the kubernetes node
+    // and this namespace has uids/gids mapped to a different range of uids/gids on the
+    // kubernetes node namespace.
+    // For example, starting from uid/gid 0, 10000 uids/gids in this namespace are mapped to
+    // 10000 ids on node namespace,starting with id 200000. i.e uid/gid 0 in this namespace is
+    // mapped to uid/gid 200000 on node namespace.
+    NODE_WIDE_REMAPPED = 3;

I am assuming each pod would get a different range from the host mapped to its namespace. Is that understanding right ?

krmayankk · Jul 19, 2018, 4:51:16 PM

@krmayankk commented on this pull request.


In pkg/kubelet/apis/cri/runtime/v1alpha2/api.proto:

> @@ -1117,6 +1133,40 @@ message StatusResponse {
     map<string, string> info = 2;
 }
 
+// LinuxIDMapping represents a single user namespace mapping in Linux.
+message LinuxIDMapping {
+   // container_id is the starting id for the mapping inside the container.
+   uint32 container_id = 1;
+   // host_id is the starting id for the mapping on the host.
+   uint32 host_id = 2;
+   // size is the length of the mapping.
+   uint32 size = 3;
+}

Can you have different mappings for each pod?

Jordan Liggitt · Jul 19, 2018, 5:07:21 PM

@liggitt commented on this pull request.


In pkg/kubelet/apis/cri/runtime/v1alpha2/api.proto:

> @@ -199,6 +202,13 @@ enum NamespaceMode {
     // For example, a container with a PID namespace of NODE expects to view
     // all of the processes on the host running the kubelet.
     NODE      = 2;
+    // A NODE_WIDE_REMAPPED namespace is common to all the pods on the kubernetes node
+    // and this namespace has uids/gids mapped to a different range of uids/gids on the
+    // kubernetes node namespace.
+    // For example, starting from uid/gid 0, 10000 uids/gids in this namespace are mapped to
+    // 10000 ids on node namespace,starting with id 200000. i.e uid/gid 0 in this namespace is
+    // mapped to uid/gid 200000 on node namespace.
+    NODE_WIDE_REMAPPED = 3;

I am assuming each pod would get a different range from the host mapped to its namespace. Is that understanding right?

No. I was surprised by this as well. uid 0 inside any pod's container will map to the same host uid.
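The node-wide nature of the mapping can be illustrated with the numbers from the proto comment (the base 200000 is the example value there, not a real default):

```go
package main

import "fmt"

// A single node-wide range: every pod's user namespace uses the same
// mapping, so container uid 0 resolves to the same host uid for all pods,
// which is the point made above.
const (
	hostStart = 200000
	rangeSize = 10000
)

func hostUID(containerUID uint32) uint32 {
	if containerUID >= rangeSize {
		panic("uid outside the mapped range")
	}
	return hostStart + containerUID
}

func main() {
	podARoot := hostUID(0)
	podBRoot := hostUID(0)
	fmt.Println(podARoot, podBRoot, podARoot == podBRoot) // 200000 200000 true
}
```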

Daniel J Walsh · Jul 19, 2018, 5:09:50 PM

@rhatdan commented on this pull request.


In pkg/kubelet/apis/cri/runtime/v1alpha2/api.proto:

> @@ -199,6 +202,13 @@ enum NamespaceMode {
     // For example, a container with a PID namespace of NODE expects to view
     // all of the processes on the host running the kubelet.
     NODE      = 2;
+    // A NODE_WIDE_REMAPPED namespace is common to all the pods on the kubernetes node
+    // and this namespace has uids/gids mapped to a different range of uids/gids on the
+    // kubernetes node namespace.
+    // For example, starting from uid/gid 0, 10000 uids/gids in this namespace are mapped to
+    // 10000 ids on node namespace,starting with id 200000. i.e uid/gid 0 in this namespace is
+    // mapped to uid/gid 200000 on node namespace.
+    NODE_WIDE_REMAPPED = 3;

We can do this in CRI-O, But Kubernetes has no way of telling CRI-O yet to pick different UserNS for each pod. This is the long term goal.

krmayankk · Jul 19, 2018, 5:50:36 PM

@krmayankk commented on this pull request.


In pkg/kubelet/apis/cri/runtime/v1alpha2/api.proto:

> @@ -199,6 +202,13 @@ enum NamespaceMode {
     // For example, a container with a PID namespace of NODE expects to view
     // all of the processes on the host running the kubelet.
     NODE      = 2;
+    // A NODE_WIDE_REMAPPED namespace is common to all the pods on the kubernetes node
+    // and this namespace has uids/gids mapped to a different range of uids/gids on the
+    // kubernetes node namespace.
+    // For example, starting from uid/gid 0, 10000 uids/gids in this namespace are mapped to
+    // 10000 ids on node namespace,starting with id 200000. i.e uid/gid 0 in this namespace is
+    // mapped to uid/gid 200000 on node namespace.
+    NODE_WIDE_REMAPPED = 3;

If multiple pods on a given host get mapped to the same uid on the host, what is the security impact? It seems in this mode we should add validation in the kubelet requiring all pods to run as different uids; otherwise we are violating the guarantee that each pod has its own namespace. Now multiple pods belong to the same id namespace.

Daniel J Walsh · Jul 20, 2018, 4:37:40 AM

@rhatdan commented on this pull request.


In pkg/kubelet/apis/cri/runtime/v1alpha2/api.proto:

> @@ -199,6 +202,13 @@ enum NamespaceMode {
     // For example, a container with a PID namespace of NODE expects to view
     // all of the processes on the host running the kubelet.
     NODE      = 2;
+    // A NODE_WIDE_REMAPPED namespace is common to all the pods on the kubernetes node
+    // and this namespace has uids/gids mapped to a different range of uids/gids on the
+    // kubernetes node namespace.
+    // For example, starting from uid/gid 0, 10000 uids/gids in this namespace are mapped to
+    // 10000 ids on node namespace,starting with id 200000. i.e uid/gid 0 in this namespace is
+    // mapped to uid/gid 200000 on node namespace.
+    NODE_WIDE_REMAPPED = 3;

That is what we have now, except the ID namespace is the host's UIDs, i.e. root == real root.
For the first step of UserNS we can increase security by turning on UserNS in the CRIs.
For example, if we turn on CRI-O support for UserNS, we end up with all the pods continuing to run in the same UserNS, except now container root != real root, and container root has no capabilities; it is only able to act within its namespace. If container root breaks out, it would be able to attack other containers, based on UID mappings, but it would have a harder time since it is not root on the system. Of course, SELinux would prevent this at a different level.

In the long run we want to support multiple UserNS, but baby steps...

Jan Šafránek · Jul 20, 2018, 5:39:48 AM

@jsafrane commented on this pull request.


In pkg/kubelet/kubelet_pods.go:

> @@ -472,11 +472,17 @@ func (kl *Kubelet) GenerateRunContainerOptions(pod *v1.Pod, container *v1.Contai
 			glog.Errorf("Error on creating %q: %v", p, err)
 		} else {
 			opts.PodContainerDir = p
+			if err := kl.chownDirForRemappedIDs(p); err != nil {
+				return nil, nil, fmt.Errorf("error while chowning %s", p)
+			}
+			if err := kl.chownDirForRemappedIDs(filepath.Join(p, "..")); err != nil {

@liggitt, thanks for heads-up.

A lot of existing volume plugins use FSGroup to make data on a volume (local or remote) available to a pod. Kubernetes does chown and chmod with the GID specified in FSGroup.

Let's say a pod has FSGroup = 500 and uses a volume. Kubernetes chmods and chowns the files on the volume to be accessible by group 500 and runs the container with supplementary group 500. How will this work with user namespaces? What supplementary group will a process in the container have? Will it be able to access all data on the volume?

As a separate item, we should consider removing the chown/chmod that Kubernetes does for FSGroup and using user namespaces instead, as it is time consuming.
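The mismatch described here can be sketched numerically; the base 200000 is the illustrative value from the proto comment, not a real default:

```go
package main

import "fmt"

const hostBase = 200000 // assumed example base of the node-wide mapping

// hostGIDFor shows where a gid from inside the container lands on the host
// under node-wide remapping (sketch only).
func hostGIDFor(containerGID int) int {
	return hostBase + containerGID
}

func main() {
	const fsGroup = 500
	// Today the kubelet chowns volume files to fsGroup itself (500), but the
	// remapped process actually runs with gid 200500 on the host, so that
	// chown no longer grants it access.
	fmt.Println(fsGroup, hostGIDFor(fsGroup), fsGroup == hostGIDFor(fsGroup))
	// 500 200500 false
}
```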

Vikas Choudhary (vikasc) · Jul 20, 2018, 12:06:37 PM

@vikaschoudhary16 commented on this pull request.


In pkg/kubelet/apis/cri/runtime/v1alpha2/api.proto:

> @@ -199,6 +202,13 @@ enum NamespaceMode {
     // For example, a container with a PID namespace of NODE expects to view
     // all of the processes on the host running the kubelet.
     NODE      = 2;
+    // A NODE_WIDE_REMAPPED namespace is common to all the pods on the kubernetes node
+    // and this namespace has uids/gids mapped to a different range of uids/gids on the
+    // kubernetes node namespace.
+    // For example, starting from uid/gid 0, 10000 uids/gids in this namespace are mapped to
+    // 10000 ids on node namespace,starting with id 200000. i.e uid/gid 0 in this namespace is
+    // mapped to uid/gid 200000 on node namespace.
+    NODE_WIDE_REMAPPED = 3;

@liggitt No, NODE_WIDE_REMAPPED makes sense only for user namespaces. Initially I was using "POD" to represent user-namespace remapping, but because of your concern that user-namespace remapping is not currently pod-level (which is true), I introduced this new enum value to bring clarity.

Vikas Choudhary (vikasc) · Jul 20, 2018, 1:41:52 PM

@vikaschoudhary16 commented on this pull request.


In pkg/apis/core/v1/defaults.go:

> @@ -181,6 +181,10 @@ func SetDefaults_PodSpec(obj *v1.PodSpec) {
 	if obj.SchedulerName == "" {
 		obj.SchedulerName = v1.DefaultSchedulerName
 	}
+	if obj.HostUser == nil && utilfeature.DefaultFeatureGate.Enabled(features.UserNamespaceRemapping) {
+		trueObj := true
+		obj.HostUser = &trueObj

Through the CRI API, this PR sends the "NODE" namespace option to the runtime, and therefore it is expected that runtimes will provide the host user namespace by default.

Vikas Choudhary (vikasc) · Jul 20, 2018, 1:47:33 PM

@vikaschoudhary16 commented on this pull request.


In pkg/apis/core/types.go:

> @@ -2608,6 +2608,12 @@ type PodSecurityContext struct {
 	// +k8s:conversion-gen=false
 	// +optional
 	HostIPC bool
+	// Use host's user namespace for the pods.
+	// This field is alpha-level and can be set to false on a node only if user-namespace remapping feature is enabled at runtime and kubelet.
+	// Optional: Defaults to true, if feature-gate is enabled

If you mean we should keep the default as nil (irrespective of the feature gate being on or off): because the CRI passes "NODE" as the user namespace option to the runtime by default, the overall effect will be the same. Even keeping nil will provide the host user namespace in the container because of the CRI defaulting.

Vikas Choudhary (vikasc) · Jul 20, 2018, 1:50:56 PM

@vikaschoudhary16 commented on this pull request.


In pkg/apis/core/types.go:

> @@ -2608,6 +2608,12 @@ type PodSecurityContext struct {
 	// +k8s:conversion-gen=false
 	// +optional
 	HostIPC bool
+	// Use host's user namespace for the pods.
+	// This field is alpha-level and can be set to false on a node only if user-namespace remapping feature is enabled at runtime and kubelet.
+	// Optional: Defaults to true, if feature-gate is enabled
+	// +k8s:conversion-gen=false
+	// +optional
+	HostUser *bool

Since for the other namespaces, i.e. network, IPC, and PID, "namespace" is omitted and just hostNetwork or hostIPC is used, I tried to keep this consistent with the existing fields by omitting "Namespace". I don't have a strong opinion though; if you have a preference for HostUserNamespace, I have no problem changing to that.

Vikas Choudhary (vikasc) · Jul 20, 2018, 1:51:49 PM

@vikaschoudhary16 commented on this pull request.


In pkg/kubelet/kubelet_getters.go:

> @@ -194,6 +194,11 @@ func (kl *Kubelet) getRuntime() kubecontainer.Runtime {
 	return kl.containerRuntime
 }
 
+// getRemappedIDs returns the uid and gid from the host usernamespace that are mapped to container user-namespace
+func (kl *Kubelet) getRemappedIDs() (int, int) {

sure

Vikas Choudhary (vikasc) · Jul 20, 2018, 1:53:46 PM

@vikaschoudhary16 commented on this pull request.


In pkg/kubelet/kubelet_pods.go:

>  func hasHostNamespace(pod *v1.Pod) bool {
+	if pod.Spec.HostUser != nil {
+		return pod.Spec.HostIPC || pod.Spec.HostNetwork || pod.Spec.HostPID || *pod.Spec.HostUser
+	}
 	if pod.Spec.SecurityContext == nil {

This pod.Spec.SecurityContext check was left in by mistake a long time ago and does not make sense IMO. Will remove it. Thanks!

Vikas Choudhary (vikasc) · Jul 20, 2018, 1:54:02 PM

@vikaschoudhary16 commented on this pull request.


In pkg/kubelet/kubelet_pods.go:

>  func hasHostNamespace(pod *v1.Pod) bool {
+	if pod.Spec.HostUser != nil {
+		return pod.Spec.HostIPC || pod.Spec.HostNetwork || pod.Spec.HostPID || *pod.Spec.HostUser
+	}
 	if pod.Spec.SecurityContext == nil {
 		return false
 	}

sure

Vikas Choudhary (vikasc) · Jul 20, 2018, 2:05:23 PM

@vikaschoudhary16 commented on this pull request.


In pkg/kubelet/kuberuntime/helpers.go:

> @@ -190,6 +190,21 @@ func toKubeRuntimeStatus(status *runtimeapi.RuntimeStatus) *kubecontainer.Runtim
 	return &kubecontainer.RuntimeStatus{Conditions: conditions}
 }
 
+// toKubeRuntimeConfig converts the runtimeapi.ActiveRuntimeConfig to kubecontainer.RuntimeConfigInfo
+func toKubeRuntimeConfig(config *runtimeapi.ActiveRuntimeConfig) *kubecontainer.RuntimeConfigInfo {
+	userNSConfig := kubecontainer.UserNamespaceConfigInfo{
+		UidMapping: kubecontainer.UserNSMapping{
+			ContainerID: config.UserNamespaceConfig.UidMappings[0].ContainerId,

Contain 0 items: yes, I should have retrieved it with config.GetUserNamespaceConfig().GetUidMappings(). Will fix. Thanks!

Contain more than 1 item: we are interested only in the host ID for ID 0 in the container, therefore we take the first ID of the first mapping.

k8s-ci-robot · Jul 20, 2018, 6:30:40 PM

@vikaschoudhary16: PR needs rebase.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

krmayankk · Jul 20, 2018, 6:49:22 PM

@krmayankk commented on this pull request.


In pkg/kubelet/dockershim/docker_sandbox_test.go:

> +		// Stop the sandbox.
+		expected.State = runtimeapi.PodSandboxState_SANDBOX_NOTREADY
+		_, err = ds.StopPodSandbox(getTestCTX(), &runtimeapi.StopPodSandboxRequest{PodSandboxId: id})
+		require.NoError(t, err)
+		// IP not valid after sandbox stop
+		expected.Network.Ip = ""
+		statusResp, err = ds.PodSandboxStatus(getTestCTX(), &runtimeapi.PodSandboxStatusRequest{PodSandboxId: id})
+		require.NoError(t, err)
+		assert.Equal(t, expected, statusResp.Status)
+
+		// Remove the container.
+		_, err = ds.RemovePodSandbox(getTestCTX(), &runtimeapi.RemovePodSandboxRequest{PodSandboxId: id})
+		require.NoError(t, err)
+		statusResp, err = ds.PodSandboxStatus(getTestCTX(), &runtimeapi.PodSandboxStatusRequest{PodSandboxId: id})
+		assert.Error(t, err, fmt.Sprintf("status of sandbox: %+v", statusResp))
+	}

Is that too much to test in a single UT? I don't have full context here, but is there a possibility of breaking this into smaller tests?

krmayankk · Jul 20, 2018, 6:54:06 PM

@krmayankk commented on this pull request.


In pkg/kubelet/dockershim/docker_service.go:

> +		gidMapping.HostId = remappedNonRootHostID
+		uidMapping.Size_ = uidMappingSize
+		gidMapping.Size_ = gidMappingSize
+	}
+
+	linuxConfig := &runtimeapi.LinuxUserNamespaceConfig{
+		UidMappings: []*runtimeapi.LinuxIDMapping{uidMapping},
+		GidMappings: []*runtimeapi.LinuxIDMapping{gidMapping},
+	}
+	activeRuntimeConfig := &runtimeapi.ActiveRuntimeConfig{UserNamespaceConfig: linuxConfig}
+	return &runtimeapi.GetRuntimeConfigInfoResponse{RuntimeConfig: activeRuntimeConfig}, nil
+}
+
+// isUserNsEnabled parses docker info. Returns true if user-namespace feature is found to enabled, otherwise false
+func isUserNsEnabled(dockerInfo *dockertypes.Info) bool {
+	var usernsEnabled bool

Nit: set this to false explicitly rather than relying on the default value?
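A sketch of what such a check might look like; it takes the SecurityOptions slice directly instead of the full dockertypes.Info struct, and the "name=userns" entry format is an assumption about how docker reports enabled security options:

```go
package main

import (
	"fmt"
	"strings"
)

// isUserNsEnabled reports whether docker's security options mention a
// user-namespace entry (assumed to appear as "name=userns" in the Info
// SecurityOptions; sketch only, not the exact PR implementation).
func isUserNsEnabled(securityOptions []string) bool {
	for _, opt := range securityOptions {
		if strings.Contains(opt, "userns") {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(isUserNsEnabled([]string{"name=seccomp,profile=default", "name=userns"})) // true
	fmt.Println(isUserNsEnabled([]string{"name=seccomp,profile=default"}))                // false
}
```

Returning the boolean from a single expression avoids the intermediate variable the nit is about.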

krmayankk · Jul 20, 2018, 7:50:16 PM

@krmayankk commented on this pull request.


In pkg/kubelet/kubelet.go:

> +		return fmt.Errorf("error chowning plugins directory: %v", err)
+	}
+
+	return nil
+}
+
+// chownDirForRemappedIDs change dir and path ownerships if NodeUserNamespace is enabled by feature-gate
+func (kl *Kubelet) chownDirForRemappedIDs(path string) error {
+	if !kl.nodeWideUserNamespaceRemapping {
+		// No-Op is NodeUserNamespace feature gate is not enabled
+		return nil
+	}
+	if path == "" {
+		return fmt.Errorf("path to be setup is empty.")
+	}
+	uid, gid := kl.getRemappedIDs()

@vikaschoudhary16 when a user specifies runAsUser/runAsGroup, I am assuming the uid/gid mentioned is from the container's namespace and not the pod's namespace, right?

krmayankk · Jul 20, 2018, 7:57:34 PM

@krmayankk commented on this pull request.


In pkg/kubelet/kubelet_pods.go:

> @@ -472,11 +472,17 @@ func (kl *Kubelet) GenerateRunContainerOptions(pod *v1.Pod, container *v1.Contai
 			glog.Errorf("Error on creating %q: %v", p, err)
 		} else {
 			opts.PodContainerDir = p
+			if err := kl.chownDirForRemappedIDs(p); err != nil {

Is this chown happening for mounted volumes or just the container filesystem?

Vikas Choudhary (vikasc) · Jul 20, 2018, 9:01:41 PM

@vikaschoudhary16 commented on this pull request.


In pkg/kubelet/kubelet.go:

> +		return fmt.Errorf("error chowning plugins directory: %v", err)
+	}
+
+	return nil
+}
+
+// chownDirForRemappedIDs change dir and path ownerships if NodeUserNamespace is enabled by feature-gate
+func (kl *Kubelet) chownDirForRemappedIDs(path string) error {
+	if !kl.nodeWideUserNamespaceRemapping {
+		// No-Op is NodeUserNamespace feature gate is not enabled
+		return nil
+	}
+	if path == "" {
+		return fmt.Errorf("path to be setup is empty.")
+	}
+	uid, gid := kl.getRemappedIDs()

I guess you meant "container's namespace and not host namespace". Yes, the runAsUser/runAsGroup mentioned in the security context will be from the container namespace.
For example, if the runtime user-namespace configuration is "testuser:80000:100" and uid 60 is provided in the security context, it will be uid 60 in the container but uid 80060 on the host.
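The arithmetic of that example, as a tiny sketch:

```go
package main

import "fmt"

// hostUIDFor maps a container uid through the example mapping
// "testuser:80000:100" above: host range starts at 80000, length 100.
func hostUIDFor(containerUID int) int {
	const hostStart, size = 80000, 100
	if containerUID < 0 || containerUID >= size {
		panic("uid outside mapped range")
	}
	return hostStart + containerUID
}

func main() {
	// runAsUser: 60 is uid 60 inside the container, 80060 on the host.
	fmt.Println(hostUIDFor(60)) // 80060
}
```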

Vikas Choudhary (vikasc) · Jul 20, 2018, 9:06:39 PM

@vikaschoudhary16 commented on this pull request.


In pkg/kubelet/kubelet_pods.go:

> @@ -472,11 +472,17 @@ func (kl *Kubelet) GenerateRunContainerOptions(pod *v1.Pod, container *v1.Contai
 			glog.Errorf("Error on creating %q: %v", p, err)
 		} else {
 			opts.PodContainerDir = p
+			if err := kl.chownDirForRemappedIDs(p); err != nil {
+				return nil, nil, fmt.Errorf("error while chowning %s", p)
+			}
+			if err := kl.chownDirForRemappedIDs(filepath.Join(p, "..")); err != nil {

@jsafrane fsGroup will not work as intended. On the host, fsGroup will get remapped as per the user-namespace configuration, and the remapped ID will not match the group ID on the remote volume. This is because of the lack of support in runtimes. However, we can aim to get this working before graduating the feature to beta.

Vikas Choudhary (vikasc) · Jul 20, 2018, 9:07:49 PM

@vikaschoudhary16 commented on this pull request.


In pkg/kubelet/kubelet_pods.go:

> @@ -472,11 +472,17 @@ func (kl *Kubelet) GenerateRunContainerOptions(pod *v1.Pod, container *v1.Contai
 			glog.Errorf("Error on creating %q: %v", p, err)
 		} else {
 			opts.PodContainerDir = p
+			if err := kl.chownDirForRemappedIDs(p); err != nil {

For the volumes as well, but only for the volumes under /var/lib/kubelet/pods/<pod-id>/volumes/, not for remote volumes.

Vikas Choudhary (vikasc) · Jul 20, 2018, 9:09:42 PM

@vikaschoudhary16 commented on this pull request.


In pkg/kubelet/apis/cri/runtime/v1alpha2/api.proto:

> @@ -1117,6 +1133,40 @@ message StatusResponse {
     map<string, string> info = 2;
 }
 
+// LinuxIDMapping represents a single user namespace mapping in Linux.
+message LinuxIDMapping {
+   // container_id is the starting id for the mapping inside the container.
+   uint32 container_id = 1;
+   // host_id is the starting id for the mapping on the host.
+   uint32 host_id = 2;
+   // size is the length of the mapping.
+   uint32 size = 3;
+}

This is something we are not targeting in the first phase. For now, we are only isolating the root user between host and containers. Container-level isolation is the focus of the next phase.

Vikas Choudhary (vikasc) · Jul 20, 2018, 9:18:58 PM

@vikaschoudhary16 commented on this pull request.


In pkg/apis/core/types.go:

> @@ -2608,6 +2608,12 @@ type PodSecurityContext struct {
 	// +k8s:conversion-gen=false
 	// +optional
 	HostIPC bool
+	// Use host's user namespace for the pods.
+	// This field is alpha-level and can be set to false on a node only if user-namespace remapping feature is enabled at runtime and kubelet.
+	// Optional: Defaults to true, if feature-gate is enabled
+	// +k8s:conversion-gen=false
+	// +optional
+	HostUser *bool

Just saw the other comments. Will update to hostUser.

Vikas Choudhary (vikasc) · Jul 20, 2018, 9:23:13 PM

@vikaschoudhary16 commented on this pull request.


In pkg/kubelet/apis/cri/runtime/v1alpha2/api.proto:

> @@ -199,6 +202,13 @@ enum NamespaceMode {
     // For example, a container with a PID namespace of NODE expects to view
     // all of the processes on the host running the kubelet.
     NODE      = 2;
+    // A NODE_WIDE_REMAPPED namespace is common to all the pods on the kubernetes node
+    // and this namespace has uids/gids mapped to a different range of uids/gids on the
+    // kubernetes node namespace.
+    // For example, starting from uid/gid 0, 10000 uids/gids in this namespace are mapped to
+    // 10000 ids on node namespace,starting with id 200000. i.e uid/gid 0 in this namespace is
+    // mapped to uid/gid 200000 on node namespace.
+    NODE_WIDE_REMAPPED = 3;

@liggitt

No. I was surprised by this as well. uid 0 inside any pod's container will map to the same host uid.

As @rhatdan explained, this is still a level of improvement because it won't be the host's root, which is currently the case.

krmayankk

unread,
Jul 20, 2018, 9:37:47 PM7/20/18
to kubernetes/kubernetes, k8s-mirror-storage-pr-reviews, Team mention

@krmayankk commented on this pull request.


In pkg/kubelet/kuberuntime/kuberuntime_manager.go:

> @@ -116,6 +116,13 @@ type kubeGenericRuntimeManager struct {
 
 	// A shim to legacy functions for backward compatibility.
 	legacyLogProvider LegacyLogProvider
+
+	// cache of runtime configuration eg, user-namespace configuration
+	runtimeConfig *kubecontainer.RuntimeConfigInfo
+	// GID on host usernamespace mapped to GID 0 in container user-namespace
+	firstRemappedGIDOnHost int

where is this configuration coming from ? kubelet configuration ?

Vikas Choudhary (vikasc)

unread,
Jul 20, 2018, 9:50:42 PM7/20/18
to kubernetes/kubernetes, k8s-mirror-storage-pr-reviews, Team mention

@vikaschoudhary16 commented on this pull request.


In pkg/kubelet/kuberuntime/kuberuntime_manager.go:

> @@ -116,6 +116,13 @@ type kubeGenericRuntimeManager struct {
 
 	// A shim to legacy functions for backward compatibility.
 	legacyLogProvider LegacyLogProvider
+
+	// cache of runtime configuration eg, user-namespace configuration
+	runtimeConfig *kubecontainer.RuntimeConfigInfo
+	// GID on host usernamespace mapped to GID 0 in container user-namespace
+	firstRemappedGIDOnHost int

from runtime through CRI

krmayankk

unread,
Jul 20, 2018, 10:45:29 PM7/20/18
to kubernetes/kubernetes, k8s-mirror-storage-pr-reviews, Team mention

@krmayankk commented on this pull request.


In pkg/kubelet/apis/cri/runtime/v1alpha2/api.proto:

> @@ -199,6 +202,13 @@ enum NamespaceMode {
     // For example, a container with a PID namespace of NODE expects to view
     // all of the processes on the host running the kubelet.
     NODE      = 2;
+    // A NODE_WIDE_REMAPPED namespace is common to all the pods on the kubernetes node
+    // and this namespace has uids/gids mapped to a different range of uids/gids on the
+    // kubernetes node namespace.
+    // For example, starting from uid/gid 0, 10000 uids/gids in this namespace are mapped to
+    // 10000 ids on node namespace,starting with id 200000. i.e uid/gid 0 in this namespace is
+    // mapped to uid/gid 200000 on node namespace.
+    NODE_WIDE_REMAPPED = 3;

Thanks for the explanation @rhatdan. I did know that container uids are currently the same as host uids and hence carry the exact same privilege. This would mean that if we are mixing pods from different customers, they must currently run with different uids.

Jan Šafránek

unread,
Jul 23, 2018, 5:47:34 AM7/23/18
to kubernetes/kubernetes, k8s-mirror-storage-pr-reviews, Team mention

@jsafrane commented on this pull request.


In pkg/kubelet/kubelet_pods.go:

> @@ -472,11 +472,17 @@ func (kl *Kubelet) GenerateRunContainerOptions(pod *v1.Pod, container *v1.Contai
 			glog.Errorf("Error on creating %q: %v", p, err)
 		} else {
 			opts.PodContainerDir = p
+			if err := kl.chownDirForRemappedIDs(p); err != nil {
+				return nil, nil, fmt.Errorf("error while chowning %s", p)
+			}
+			if err := kl.chownDirForRemappedIDs(filepath.Join(p, "..")); err != nil {

If I read this correctly, that means all pods that don't use PodSpec.HostUser can't use any volumes with FSGroup? That makes most volumes unusable in multi-tenant environment.

Derek Carr

unread,
Jul 24, 2018, 5:43:48 PM7/24/18
to kubernetes/kubernetes, k8s-mirror-storage-pr-reviews, Team mention

@derekwaynecarr commented on this pull request.


In pkg/apis/core/v1/defaults.go:

> @@ -181,6 +181,10 @@ func SetDefaults_PodSpec(obj *v1.PodSpec) {
 	if obj.SchedulerName == "" {
 		obj.SchedulerName = v1.DefaultSchedulerName
 	}
+	if obj.HostUser == nil && utilfeature.DefaultFeatureGate.Enabled(features.UserNamespaceRemapping) {
+		trueObj := true
+		obj.HostUser = &trueObj

I think we need to discuss that more. What happens if you run kata or gvisor in the future?

Vikas Choudhary (vikasc)

unread,
Jul 26, 2018, 7:26:42 AM7/26/18
to kubernetes/kubernetes, k8s-mirror-storage-pr-reviews, Team mention

@vikaschoudhary16 commented on this pull request.


In pkg/kubelet/kubelet_pods.go:

> @@ -472,11 +472,17 @@ func (kl *Kubelet) GenerateRunContainerOptions(pod *v1.Pod, container *v1.Contai
 			glog.Errorf("Error on creating %q: %v", p, err)
 		} else {
 			opts.PodContainerDir = p
+			if err := kl.chownDirForRemappedIDs(p); err != nil {
+				return nil, nil, fmt.Errorf("error while chowning %s", p)
+			}
+			if err := kl.chownDirForRemappedIDs(filepath.Join(p, "..")); err != nil {

@jsafrane right. If hostUser is false, that means the feature is enabled, and thus FSGroup won't work, because FSGroup might be mapped to different host GIDs on different nodes. This is a limitation in alpha and will be resolved before beta.

k8s-ci-robot

unread,
Jul 28, 2018, 2:49:28 PM7/28/18
to kubernetes/kubernetes, k8s-mirror-storage-pr-reviews, Team mention

@vikaschoudhary16: PR needs rebase.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

Vikas Choudhary (vikasc)

unread,
Aug 1, 2018, 1:57:01 AM8/1/18
to kubernetes/kubernetes, k8s-mirror-storage-pr-reviews, Team mention

@vikaschoudhary16 commented on this pull request.


In pkg/apis/core/v1/defaults.go:

> @@ -181,6 +181,10 @@ func SetDefaults_PodSpec(obj *v1.PodSpec) {
 	if obj.SchedulerName == "" {
 		obj.SchedulerName = v1.DefaultSchedulerName
 	}
+	if obj.HostUser == nil && utilfeature.DefaultFeatureGate.Enabled(features.UserNamespaceRemapping) {
+		trueObj := true
+		obj.HostUser = &trueObj

To handle runtimes like kata or gvisor, one more boolean field, e.g. HostUserNamespaceSharingCapability, will be required in the CRI API. It is needed to differentiate between the following cases:

  1. docker/crio-type runtimes where user-namespace remapping is not enabled
  2. Runtimes like gvisor and katacontainers

The complete behavior will be as follows:

| Enabled at kubelet | *hostUser | HostUserNamespaceSharingCapability | Enabled at runtime | UserNamespaceOption sent to runtime |
|---|---|---|---|---|
| false | non-nil | * | * | validation error |
| * | nil | true | true | NODE_WIDE_REMAPPED |
| * | nil | true | false | NODE |
| true | true | false | * | validation error |
| true | false | false | * | POD |
| true | true | true | * | NODE |
| true | false | true | false | validation error |
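The decision logic in the table above can be sketched as a function. This is an illustrative encoding of the table, not code from the PR; the parameter and mode names follow the table's columns:

```go
package main

import (
	"errors"
	"fmt"
)

// resolveUserNsMode encodes the decision table: feature gate at the kubelet,
// the optional pod-level hostUser field, the runtime's capability to share
// the host user namespace, and whether remapping is enabled at the runtime.
func resolveUserNsMode(gateEnabled bool, hostUser *bool, sharingCap, remappingAtRuntime bool) (string, error) {
	switch {
	case !gateEnabled && hostUser != nil:
		return "", errors.New("hostUser set while the feature gate is disabled")
	case hostUser == nil && sharingCap && remappingAtRuntime:
		return "NODE_WIDE_REMAPPED", nil
	case hostUser == nil && sharingCap && !remappingAtRuntime:
		return "NODE", nil
	case gateEnabled && hostUser != nil && *hostUser && !sharingCap:
		return "", errors.New("runtime cannot share the host user namespace")
	case gateEnabled && hostUser != nil && !*hostUser && !sharingCap:
		return "POD", nil
	case gateEnabled && hostUser != nil && *hostUser && sharingCap:
		return "NODE", nil
	default:
		// Covers the remaining validation-error rows of the table.
		return "", errors.New("combination not covered by the table")
	}
}

func main() {
	mode, err := resolveUserNsMode(true, nil, true, true)
	fmt.Println(mode, err) // NODE_WIDE_REMAPPED <nil>
}
```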

Giuseppe Scrivano

unread,
Aug 6, 2018, 10:17:11 AM8/6/18
to kubernetes/kubernetes, k8s-mirror-storage-pr-reviews, Team mention

@giuseppe commented on this pull request.


In pkg/kubelet/dockershim/docker_service.go:

> +	return &runtimeapi.GetRuntimeConfigInfoResponse{RuntimeConfig: activeRuntimeConfig}, nil
+}
+
+// isUserNsEnabled parses docker info. Returns true if user-namespace feature is found to enabled, otherwise false
+func isUserNsEnabled(dockerInfo *dockertypes.Info) bool {
+	var usernsEnabled bool
+	for _, secOpt := range dockerInfo.SecurityOptions {
+		if strings.Contains(secOpt, "userns") {
+			usernsEnabled = true
+			break
+		}
+	}
+	return usernsEnabled
+}
+
+// getRremappedNonRootHostID parses docker info to determine ID on the host usernamespace which is mapped to {U/G}ID 0 in the container user-namespace

typo in the comment: "getRremappedNonRootHostID"


In pkg/kubelet/dockershim/docker_service.go:

> +	if isUserNsEnabled(dockerInfo) {
+		remappedNonRootHostID, err := getRemappedNonRootHostID(dockerInfo)
+		if err != nil {
+			return nil, fmt.Errorf("failed to get remappedNonRootHostID. err: %v", err)
+		}
+		uidMappingSize, gidMappingSize, err := getUserNsMappingSizes(remappedNonRootHostID)
+		if err != nil {
+			return nil, fmt.Errorf("failed to get user-namespace mapping sizes. err: %v", err)
+		}
+
+		uidMapping.ContainerId = uint32(0)
+		gidMapping.ContainerId = uint32(0)
+		uidMapping.HostId = remappedNonRootHostID
+		gidMapping.HostId = remappedNonRootHostID
+		uidMapping.Size_ = uidMappingSize
+		gidMapping.Size_ = gidMappingSize

this seems to assume there can be only one mapping available for the user. Is it safe to assume it?
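One way to avoid that single-mapping assumption is to scan every mapping the runtime reports, the way multi-range uid_map/gid_map configurations require. This is a sketch of the alternative handling, with illustrative type and function names:

```go
package main

import "fmt"

// idMapping stands in for one CRI LinuxIDMapping entry.
type idMapping struct {
	ContainerID uint32
	HostID      uint32
	Size        uint32
}

// lookupHostID scans every mapping rather than assuming a single range,
// returning the host ID for a container-namespace ID, or false if no
// mapping covers it.
func lookupHostID(id uint32, mappings []idMapping) (uint32, bool) {
	for _, m := range mappings {
		if id >= m.ContainerID && id < m.ContainerID+m.Size {
			return m.HostID + (id - m.ContainerID), true
		}
	}
	return 0, false
}

func main() {
	// Two disjoint ranges, as a multi-entry mapping might specify.
	ms := []idMapping{
		{ContainerID: 0, HostID: 200000, Size: 1000},
		{ContainerID: 1000, HostID: 300000, Size: 65536},
	}
	h, ok := lookupHostID(1234, ms)
	fmt.Println(h, ok) // 300234 true
}
```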

Derek Carr

unread,
Aug 16, 2018, 3:45:08 PM8/16/18
to kubernetes/kubernetes, k8s-mirror-storage-pr-reviews, Team mention

@derekwaynecarr commented on this pull request.


In pkg/kubelet/kubelet_pods.go:

> @@ -472,11 +472,17 @@ func (kl *Kubelet) GenerateRunContainerOptions(pod *v1.Pod, container *v1.Contai
 			glog.Errorf("Error on creating %q: %v", p, err)
 		} else {
 			opts.PodContainerDir = p
+			if err := kl.chownDirForRemappedIDs(p); err != nil {
+				return nil, nil, fmt.Errorf("error while chowning %s", p)
+			}
+			if err := kl.chownDirForRemappedIDs(filepath.Join(p, "..")); err != nil {

@vikaschoudhary16 can you explain how you think this limitation would be addressed? it's not immediately clear to me at this time, but it seems worth understanding at a high-level now.

Derek Carr

unread,
Aug 16, 2018, 3:51:25 PM8/16/18
to kubernetes/kubernetes, k8s-mirror-storage-pr-reviews, Team mention

@derekwaynecarr commented on this pull request.


In pkg/kubelet/kubelet_pods.go:

> @@ -472,11 +472,17 @@ func (kl *Kubelet) GenerateRunContainerOptions(pod *v1.Pod, container *v1.Contai
 			glog.Errorf("Error on creating %q: %v", p, err)
 		} else {
 			opts.PodContainerDir = p
+			if err := kl.chownDirForRemappedIDs(p); err != nil {
+				return nil, nil, fmt.Errorf("error while chowning %s", p)
+			}
+			if err := kl.chownDirForRemappedIDs(filepath.Join(p, "..")); err != nil {

it would help if we can flesh out https://github.com/kubernetes/community/blob/master/contributors/design-proposals/node/node-usernamespace-remapping.md#remote-volumes to know what the path is for actually graduating this feature. right now, in alpha form, without this it would be very limiting.

Jordan Liggitt

unread,
Aug 16, 2018, 4:09:07 PM8/16/18
to kubernetes/kubernetes, k8s-mirror-storage-pr-reviews, Team mention

@liggitt commented on this pull request.


In pkg/kubelet/kuberuntime/helpers.go:

> +	usernsConfig := config.GetUserNamespaceConfig()
+	if usernsConfig == nil {
+		return nil
+	}
+	uidMappings := usernsConfig.GetUidMappings()
+	if uidMappings == nil || len(uidMappings) == 0 {
+		return nil
+	}
+	gidMappings := usernsConfig.GetGidMappings()
+	if gidMappings == nil || len(gidMappings) == 0 {
+		return nil
+	}
+
+	userNSConfig := kubecontainer.UserNamespaceConfigInfo{
+		UidMapping: kubecontainer.UserNSMapping{
+			ContainerID: uidMappings[0].ContainerId,

it still seems incorrect to drop all items other than the first here. isn't this losing mapping range info provided by the runtime?

Jordan Liggitt

unread,
Aug 16, 2018, 4:16:18 PM8/16/18
to kubernetes/kubernetes, k8s-mirror-storage-pr-reviews, Team mention

@liggitt commented on this pull request.


In pkg/kubelet/kubelet_pods.go:

> @@ -472,11 +472,17 @@ func (kl *Kubelet) GenerateRunContainerOptions(pod *v1.Pod, container *v1.Contai
 			glog.Errorf("Error on creating %q: %v", p, err)
 		} else {
 			opts.PodContainerDir = p
+			if err := kl.chownDirForRemappedIDs(p); err != nil {
+				return nil, nil, fmt.Errorf("error while chowning %s", p)
+			}
+			if err := kl.chownDirForRemappedIDs(filepath.Join(p, "..")); err != nil {

ascending to the parent dir via .. reminds me of the issue #57095 ... is there risk of hitting the wrong location by joining ".."?
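For context on the lexical side of this: filepath.Join runs filepath.Clean on its result, so Join(p, "..") always yields the lexical parent of p; the traversal risk arises when p itself carries untrusted segments or symlinks. A minimal lexical containment check could look like the sketch below (withinRoot is a hypothetical helper, not the PR's fix, and it does not defend against symlinks):

```go
package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// withinRoot reports whether path, after lexical cleaning, still lies under
// root. Lexical only: it guards against ".." traversal in the path string,
// not against symlink-based escapes.
func withinRoot(root, path string) bool {
	rel, err := filepath.Rel(root, filepath.Clean(path))
	if err != nil {
		return false
	}
	return rel != ".." && !strings.HasPrefix(rel, ".."+string(filepath.Separator))
}

func main() {
	root := "/var/lib/kubelet/pods"
	p := "/var/lib/kubelet/pods/pod-id/containers/ctr"
	parent := filepath.Join(p, "..") // lexical parent: .../pod-id/containers
	fmt.Println(parent, withinRoot(root, parent))
}
```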

Vikas Choudhary (vikasc)

unread,
Aug 20, 2018, 10:47:48 AM8/20/18
to kubernetes/kubernetes, k8s-mirror-storage-pr-reviews, Team mention

@vikaschoudhary16 commented on this pull request.


In pkg/kubelet/kubelet_pods.go:

> @@ -472,11 +472,17 @@ func (kl *Kubelet) GenerateRunContainerOptions(pod *v1.Pod, container *v1.Contai
 			glog.Errorf("Error on creating %q: %v", p, err)
 		} else {
 			opts.PodContainerDir = p
+			if err := kl.chownDirForRemappedIDs(p); err != nil {
+				return nil, nil, fmt.Errorf("error while chowning %s", p)
+			}
+			if err := kl.chownDirForRemappedIDs(filepath.Join(p, "..")); err != nil {

@derekwaynecarr This behavior is runtime-dependent. In the current behavior of runtimes such as crio and docker, when userns is enabled, the FsGroup mentioned in the pod spec is translated by the runtime into the actual gid on the volume files. For example, if x -> f(x) is the mapping and 'x' is FsGroup, then f(x) will be the gid. This works fine if the volume is shared only by pods that have the same userns settings, i.e. either userns is enabled for all the pods (on the node) with the same configuration, or all of them use the host user namespace.

The problem arises when there is a mismatch in user-namespace configuration.
If runtimes instead interpreted the FsGroup ID as an already-translated one, i.e. the actual GID on the host, it might work in all cases. That would require some handling to be added in the runtimes, e.g. crio, docker (any runtime).
Would it be OK to go alpha with this limitation for FsGroup (matching userns configuration required) and, as a roadmap item, target getting that handling changed/added in the supported runtimes before beta?

/cc @mrunalp @giuseppe
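The x -> f(x) mismatch described above can be made concrete with a tiny sketch: under a node-wide remapping whose first remapped host GID differs per node, the same FSGroup value lands as different host GIDs on different nodes, which is why a shared remote volume breaks. remappedGID is a simplified model of f(x), not the PR's code:

```go
package main

import "fmt"

// remappedGID models f(x): with a node-wide remapping whose range starts at
// firstRemappedGID, FSGroup value x from the pod spec shows up on the volume
// files as firstRemappedGID + x. Simplified illustration only.
func remappedGID(fsGroup, firstRemappedGID uint32) uint32 {
	return firstRemappedGID + fsGroup
}

func main() {
	fsGroup := uint32(2000)
	// Two nodes with different remapping ranges write different host GIDs
	// for the same FSGroup value.
	fmt.Println(remappedGID(fsGroup, 200000)) // node A: 202000
	fmt.Println(remappedGID(fsGroup, 300000)) // node B: 302000
}
```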

Vikas Choudhary (vikasc)

unread,
Aug 20, 2018, 10:47:53 AM8/20/18
to kubernetes/kubernetes, k8s-mirror-storage-pr-reviews, Team mention

@vikaschoudhary16 commented on this pull request.


In pkg/kubelet/kubelet_pods.go:

> @@ -472,11 +472,17 @@ func (kl *Kubelet) GenerateRunContainerOptions(pod *v1.Pod, container *v1.Contai
 			glog.Errorf("Error on creating %q: %v", p, err)
 		} else {
 			opts.PodContainerDir = p
+			if err := kl.chownDirForRemappedIDs(p); err != nil {
+				return nil, nil, fmt.Errorf("error while chowning %s", p)
+			}
+			if err := kl.chownDirForRemappedIDs(filepath.Join(p, "..")); err != nil {

@liggitt nice catch, will fix this. Thanks!

k8s-ci-robot

unread,
Aug 20, 2018, 10:48:38 AM8/20/18
to kubernetes/kubernetes, k8s-mirror-storage-pr-reviews, Team mention

@vikaschoudhary16: GitHub didn't allow me to request PR reviews from the following users: giuseppe.

Note that only kubernetes members and repo collaborators can review this PR, and authors cannot review their own PRs.

In response to this:

@derekwaynecarr This behavior is runtime-dependent. In the current behavior of runtimes such as crio and docker, when userns is enabled, the FsGroup mentioned in the pod spec is translated by the runtime into the actual gid on the volume files. For example, if x -> f(x) is the mapping and 'x' is FsGroup, then f(x) will be the gid. This works fine if the volume is shared only by pods that have the same userns settings, i.e. either userns is enabled for all the pods (on the node) with the same configuration, or all of them use the host user namespace.

The problem arises when there is a mismatch in user-namespace configuration.
If runtimes instead interpreted the FsGroup ID as an already-translated one, i.e. the actual GID on the host, it might work in all cases. That would require some handling to be added in the runtimes, e.g. crio, docker (any runtime).
Would it be OK to go alpha with this limitation for FsGroup (matching userns configuration required) and, as a roadmap item, target getting that handling changed/added in the supported runtimes before beta?

/cc @mrunalp @giuseppe


Zach Arnold

unread,
Aug 20, 2018, 12:58:25 PM8/20/18
to kubernetes/kubernetes, k8s-mirror-storage-pr-reviews, Team mention

@vikaschoudhary16 Hey there! I'm the wrangler for the Docs this release. Is there any chance I could have you open up a docs PR against the release-1.12 branch as a placeholder? That gives us more confidence in the feature shipping in this release and gives me something to work with when we start doing reviews/edits. Thanks!

Jordan Liggitt

unread,
Sep 5, 2018, 11:55:14 AM9/5/18
to kubernetes/kubernetes, k8s-mirror-storage-pr-reviews, Team mention

past freeze with open issues, moving to the next milestone

/milestone v1.13

Aaron Crickenberger

unread,
Oct 16, 2018, 6:36:25 PM10/16/18
to kubernetes/kubernetes, k8s-mirror-storage-pr-reviews, Team mention

ping, needs rebase

Aaron Crickenberger

unread,
Oct 16, 2018, 6:39:04 PM10/16/18
to kubernetes/kubernetes, k8s-mirror-storage-pr-reviews, Team mention

/remove-priority critical-urgent
I'm downgrading this based on the lack of activity. It was bumped to critical-urgent so it would not get kicked out by a bot that we no longer run, and then it got punted from the release anyway.

/priority important-soon
This priority means we are planning to get it in the current release cycle. Is this accurate?

Aaron Crickenberger

unread,
Oct 16, 2018, 7:06:55 PM10/16/18
to kubernetes/kubernetes, k8s-mirror-storage-pr-reviews, Team mention

/milestone v1.13
I'm adding this to the v1.13 milestone since it relates to kubernetes/features#127 which is currently planned for v1.13

Security Sauce

unread,
Oct 22, 2018, 2:45:10 PM10/22/18
to kubernetes/kubernetes, k8s-mirror-storage-pr-reviews, Team mention

@spiffxp
@vikaschoudhary16
We operate k8s clusters that allow execution of third-party containers. From a security perspective, this is an important feature we have been waiting on for a while. Please consider this a high-priority security feature and make it available in the v1.13 release.

Kendrick Coleman

unread,
Oct 22, 2018, 4:31:07 PM10/22/18
to kubernetes/kubernetes, k8s-mirror-storage-pr-reviews, Team mention

Are we confident this is going to make the v1.13 milestone? Enhancement freeze is tomorrow COB. If there is no communication on this issue or activity on the PR, this is going to be pulled from the milestone as it doesn't fit with our "stability" theme. If there is no communication after COB tomorrow, an exception will be required to add it back to the milestone. Please let me know where we stand. Thanks!

Shubheksha

unread,
Oct 30, 2018, 12:50:08 PM10/30/18
to kubernetes/kubernetes, k8s-mirror-storage-pr-reviews, Team mention

@nikopen / @spiffxp - the milestone has been pulled from kubernetes/enhancements#127 due to no updates; can we stop tracking this PR for 1.13?

Kubernetes Prow Robot

unread,
Dec 3, 2018, 9:08:15 AM12/3/18
to kubernetes/kubernetes, k8s-mirror-storage-pr-reviews, Team mention

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: derekwaynecarr, vikaschoudhary16
To fully approve this pull request, please assign additional approvers.
We suggest the following additional approver: lavalamp

If they are not already assigned, you can assign the PR to them by writing /assign @lavalamp in a comment when ready.

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

Kubernetes Prow Robot

unread,
Dec 3, 2018, 9:14:05 AM12/3/18
to kubernetes/kubernetes, k8s-mirror-storage-pr-reviews, Team mention

@vikaschoudhary16: The following tests failed, say /retest to rerun them all:

Test name Commit Details Rerun command
pull-kubernetes-cross c6b8d2f link /test pull-kubernetes-cross
pull-kubernetes-typecheck fb9a6d2 link /test pull-kubernetes-typecheck

Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.

Kubernetes Prow Robot

unread,
Dec 3, 2018, 9:21:09 AM12/3/18
to kubernetes/kubernetes, k8s-mirror-storage-pr-reviews, Team mention

@vikaschoudhary16: The following tests failed, say /retest to rerun them all:

Test name Commit Details Rerun command
pull-kubernetes-cross c6b8d2f link /test pull-kubernetes-cross
pull-kubernetes-typecheck fb9a6d2 link /test pull-kubernetes-typecheck
pull-kubernetes-bazel-test fb9a6d2 link /test pull-kubernetes-bazel-test

Kubernetes Prow Robot

unread,
Dec 3, 2018, 9:51:40 AM12/3/18
to kubernetes/kubernetes, k8s-mirror-storage-pr-reviews, Team mention

@vikaschoudhary16: The following tests failed, say /retest to rerun them all:

Test name Commit Details Rerun command
pull-kubernetes-cross c6b8d2f link /test pull-kubernetes-cross
pull-kubernetes-typecheck fb9a6d2 link /test pull-kubernetes-typecheck
pull-kubernetes-bazel-test fb9a6d2 link /test pull-kubernetes-bazel-test
pull-kubernetes-verify fb9a6d2 link /test pull-kubernetes-verify

Kubernetes Prow Robot

unread,
Dec 3, 2018, 11:09:51 AM12/3/18
to kubernetes/kubernetes, k8s-mirror-storage-pr-reviews, Team mention

@vikaschoudhary16: The following tests failed, say /retest to rerun them all:

Test name Commit Details Rerun command
pull-kubernetes-cross c6b8d2f link /test pull-kubernetes-cross
pull-kubernetes-typecheck fb9a6d2 link /test pull-kubernetes-typecheck
pull-kubernetes-bazel-test fb9a6d2 link /test pull-kubernetes-bazel-test
pull-kubernetes-verify fb9a6d2 link /test pull-kubernetes-verify
pull-kubernetes-local-e2e-containerized fb9a6d2 link /test pull-kubernetes-local-e2e-containerized

Kubernetes Prow Robot

unread,
Dec 3, 2018, 12:27:57 PM12/3/18
to kubernetes/kubernetes, k8s-mirror-storage-pr-reviews, Team mention

@vikaschoudhary16: The following tests failed, say /retest to rerun them all:

Test name Commit Details Rerun command
pull-kubernetes-cross c6b8d2f link /test pull-kubernetes-cross
pull-kubernetes-local-e2e-containerized fb9a6d2 link /test pull-kubernetes-local-e2e-containerized
pull-kubernetes-verify a3aded7 link /test pull-kubernetes-verify

Kubernetes Prow Robot

unread,
Dec 3, 2018, 1:51:32 PM12/3/18
to kubernetes/kubernetes, k8s-mirror-storage-pr-reviews, Team mention

@vikaschoudhary16: The following tests failed, say /retest to rerun them all:

Test name Commit Details Rerun command
pull-kubernetes-cross c6b8d2f link /test pull-kubernetes-cross
pull-kubernetes-verify a3aded7 link /test pull-kubernetes-verify
pull-kubernetes-local-e2e-containerized a3aded7 link /test pull-kubernetes-local-e2e-containerized

Kubernetes Prow Robot

unread,
Dec 3, 2018, 11:59:56 PM12/3/18
to kubernetes/kubernetes, k8s-mirror-storage-pr-reviews, Team mention

@vikaschoudhary16: The following tests failed, say /retest to rerun them all:

Test name Commit Details Rerun command
pull-kubernetes-cross c6b8d2f link /test pull-kubernetes-cross
pull-kubernetes-verify a3aded7 link /test pull-kubernetes-verify
pull-kubernetes-local-e2e-containerized a3aded7 link /test pull-kubernetes-local-e2e-containerized
pull-kubernetes-integration af6494f link /test pull-kubernetes-integration

Kubernetes Prow Robot

unread,
Dec 4, 2018, 12:25:39 AM12/4/18
to kubernetes/kubernetes, k8s-mirror-storage-pr-reviews, Team mention

@vikaschoudhary16: The following tests failed, say /retest to rerun them all:

Test name Commit Details Rerun command
pull-kubernetes-cross c6b8d2f link /test pull-kubernetes-cross
pull-kubernetes-local-e2e-containerized a3aded7 link /test pull-kubernetes-local-e2e-containerized
pull-kubernetes-integration af6494f link /test pull-kubernetes-integration
pull-kubernetes-verify af6494f link /test pull-kubernetes-verify

Kubernetes Prow Robot

unread,
Dec 4, 2018, 12:31:16 AM12/4/18
to kubernetes/kubernetes, k8s-mirror-storage-pr-reviews, Team mention

@vikaschoudhary16: The following tests failed, say /retest to rerun them all:

Test name Commit Details Rerun command
pull-kubernetes-cross c6b8d2f link /test pull-kubernetes-cross
pull-kubernetes-local-e2e-containerized a3aded7 link /test pull-kubernetes-local-e2e-containerized
pull-kubernetes-integration af6494f link /test pull-kubernetes-integration
pull-kubernetes-verify af6494f link /test pull-kubernetes-verify
pull-kubernetes-e2e-gce-100-performance af6494f link /test pull-kubernetes-e2e-gce-100-performance

Kubernetes Prow Robot

unread,
Dec 4, 2018, 12:31:50 AM12/4/18
to kubernetes/kubernetes, k8s-mirror-storage-pr-reviews, Team mention
pull-kubernetes-e2e-kops-aws af6494f link /test pull-kubernetes-e2e-kops-aws

Kubernetes Prow Robot

unread,
Dec 4, 2018, 1:34:30 AM12/4/18
to kubernetes/kubernetes, k8s-mirror-storage-pr-reviews, Team mention

@vikaschoudhary16: The following tests failed, say /retest to rerun them all:

Test name Commit Details Rerun command
pull-kubernetes-cross c6b8d2f link /test pull-kubernetes-cross
pull-kubernetes-integration af6494f link /test pull-kubernetes-integration
pull-kubernetes-verify af6494f link /test pull-kubernetes-verify
pull-kubernetes-e2e-gce-100-performance af6494f link /test pull-kubernetes-e2e-gce-100-performance
pull-kubernetes-e2e-kops-aws af6494f link /test pull-kubernetes-e2e-kops-aws
pull-kubernetes-local-e2e-containerized af6494f link /test pull-kubernetes-local-e2e-containerized

Vikas Choudhary (vikasc)

unread,
Dec 4, 2018, 2:23:58 AM12/4/18
to kubernetes/kubernetes, k8s-mirror-storage-pr-reviews, Team mention

/test pull-kubernetes-verify

Kubernetes Prow Robot

unread,
Dec 4, 2018, 3:21:52 AM12/4/18
to kubernetes/kubernetes, k8s-mirror-storage-pr-reviews, Team mention

@vikaschoudhary16: The following tests failed, say /retest to rerun them all:

Test name Commit Details Rerun command
pull-kubernetes-cross c6b8d2f link /test pull-kubernetes-cross
pull-kubernetes-integration af6494f link /test pull-kubernetes-integration
pull-kubernetes-e2e-gce-100-performance af6494f link /test pull-kubernetes-e2e-gce-100-performance
pull-kubernetes-e2e-kops-aws af6494f link /test pull-kubernetes-e2e-kops-aws
pull-kubernetes-local-e2e-containerized af6494f link /test pull-kubernetes-local-e2e-containerized
pull-kubernetes-verify af6494f link /test pull-kubernetes-verify


Kubernetes Prow Robot

unread,
Dec 4, 2018, 4:22:22 AM12/4/18
to kubernetes/kubernetes, k8s-mirror-storage-pr-reviews, Team mention

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: derekwaynecarr, vikaschoudhary16
To fully approve this pull request, please assign additional approvers.

We suggest the following additional approver: thockin

If they are not already assigned, you can assign the PR to them by writing /assign @thockin in a comment when ready.

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

Kubernetes Prow Robot

unread,
Dec 4, 2018, 4:40:02 AM12/4/18
to kubernetes/kubernetes, k8s-mirror-storage-pr-reviews, Team mention
pull-kubernetes-bazel-test dc1c08a link /test pull-kubernetes-bazel-test


Vikas Choudhary (vikasc)

unread,
Dec 4, 2018, 4:40:25 AM12/4/18
to kubernetes/kubernetes, k8s-mirror-storage-pr-reviews, Team mention

/test pull-kubernetes-bazel-test

Kubernetes Prow Robot

unread,
Dec 4, 2018, 4:48:21 AM12/4/18
to kubernetes/kubernetes, k8s-mirror-storage-pr-reviews, Team mention

Kubernetes Prow Robot

unread,
Dec 4, 2018, 4:55:20 AM12/4/18
to kubernetes/kubernetes, k8s-mirror-storage-pr-reviews, Team mention

@vikaschoudhary16: The following tests failed, say /retest to rerun them all:

Test name Commit Details Rerun command
pull-kubernetes-cross c6b8d2f link /test pull-kubernetes-cross
pull-kubernetes-e2e-gce-100-performance af6494f link /test pull-kubernetes-e2e-gce-100-performance
pull-kubernetes-e2e-kops-aws af6494f link /test pull-kubernetes-e2e-kops-aws
pull-kubernetes-local-e2e-containerized af6494f link /test pull-kubernetes-local-e2e-containerized
pull-kubernetes-verify af6494f link /test pull-kubernetes-verify
pull-kubernetes-bazel-test dc1c08a link /test pull-kubernetes-bazel-test
pull-kubernetes-e2e-gce dc1c08a link /test pull-kubernetes-e2e-gce

Kubernetes Prow Robot

unread,
Dec 4, 2018, 4:58:16 AM12/4/18
to kubernetes/kubernetes, k8s-mirror-storage-pr-reviews, Team mention
pull-kubernetes-kubemark-e2e-gce-big dc1c08a link /test pull-kubernetes-kubemark-e2e-gce-big

Kubernetes Prow Robot

unread,
Dec 4, 2018, 5:01:54 AM12/4/18
to kubernetes/kubernetes, k8s-mirror-storage-pr-reviews, Team mention
pull-kubernetes-e2e-gce-device-plugin-gpu dc1c08a link /test pull-kubernetes-e2e-gce-device-plugin-gpu

Kubernetes Prow Robot

unread,
Dec 4, 2018, 5:05:01 AM12/4/18
to kubernetes/kubernetes, k8s-mirror-storage-pr-reviews, Team mention

@vikaschoudhary16: The following tests failed, say /retest to rerun them all:

Test name Commit Details Rerun command
pull-kubernetes-cross c6b8d2f link /test pull-kubernetes-cross
pull-kubernetes-e2e-gce-100-performance af6494f link /test pull-kubernetes-e2e-gce-100-performance
pull-kubernetes-local-e2e-containerized af6494f link /test pull-kubernetes-local-e2e-containerized
pull-kubernetes-verify af6494f link /test pull-kubernetes-verify
pull-kubernetes-bazel-test dc1c08a link /test pull-kubernetes-bazel-test
pull-kubernetes-e2e-gce dc1c08a link /test pull-kubernetes-e2e-gce
pull-kubernetes-kubemark-e2e-gce-big dc1c08a link /test pull-kubernetes-kubemark-e2e-gce-big
pull-kubernetes-e2e-gce-device-plugin-gpu dc1c08a link /test pull-kubernetes-e2e-gce-device-plugin-gpu
pull-kubernetes-e2e-kops-aws dc1c08a link /test pull-kubernetes-e2e-kops-aws

Kubernetes Prow Robot

unread,
Dec 4, 2018, 5:05:29 AM12/4/18
to kubernetes/kubernetes, k8s-mirror-storage-pr-reviews, Team mention

@vikaschoudhary16: The following tests failed, say /retest to rerun them all:

Test name Commit Details Rerun command
pull-kubernetes-cross c6b8d2f link /test pull-kubernetes-cross
pull-kubernetes-e2e-gce-100-performance af6494f link /test pull-kubernetes-e2e-gce-100-performance
pull-kubernetes-local-e2e-containerized af6494f link /test pull-kubernetes-local-e2e-containerized
pull-kubernetes-bazel-test dc1c08a link /test pull-kubernetes-bazel-test
pull-kubernetes-e2e-gce dc1c08a link /test pull-kubernetes-e2e-gce
pull-kubernetes-kubemark-e2e-gce-big dc1c08a link /test pull-kubernetes-kubemark-e2e-gce-big
pull-kubernetes-e2e-gce-device-plugin-gpu dc1c08a link /test pull-kubernetes-e2e-gce-device-plugin-gpu
pull-kubernetes-e2e-kops-aws dc1c08a link /test pull-kubernetes-e2e-kops-aws
pull-kubernetes-verify dc1c08a link /test pull-kubernetes-verify

Kubernetes Prow Robot

unread,
Dec 4, 2018, 5:22:26 AM
to kubernetes/kubernetes, k8s-mirror-storage-pr-reviews, Team mention

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: derekwaynecarr, vikaschoudhary16
To fully approve this pull request, please assign additional approvers.

We suggest the following additional approver: lavalamp

If they are not already assigned, you can assign the PR to them by writing /assign @lavalamp in a comment when ready.

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

Kubernetes Prow Robot

unread,
Dec 4, 2018, 5:50:48 AM
to kubernetes/kubernetes, k8s-mirror-storage-pr-reviews, Team mention

@vikaschoudhary16: The following tests failed, say /retest to rerun them all:

Test name Commit Details Rerun command
pull-kubernetes-cross c6b8d2f link /test pull-kubernetes-cross
pull-kubernetes-e2e-gce-100-performance af6494f link /test pull-kubernetes-e2e-gce-100-performance
pull-kubernetes-local-e2e-containerized af6494f link /test pull-kubernetes-local-e2e-containerized
pull-kubernetes-e2e-gce dc1c08a link /test pull-kubernetes-e2e-gce
pull-kubernetes-kubemark-e2e-gce-big dc1c08a link /test pull-kubernetes-kubemark-e2e-gce-big
pull-kubernetes-e2e-gce-device-plugin-gpu dc1c08a link /test pull-kubernetes-e2e-gce-device-plugin-gpu
pull-kubernetes-e2e-kops-aws dc1c08a link /test pull-kubernetes-e2e-kops-aws
pull-kubernetes-verify dc1c08a link /test pull-kubernetes-verify
pull-kubernetes-integration 64491e8 link /test pull-kubernetes-integration

Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.

Kubernetes Prow Robot

unread,
Dec 4, 2018, 7:24:44 AM
to kubernetes/kubernetes, k8s-mirror-storage-pr-reviews, Team mention

@vikaschoudhary16: The following tests failed, say /retest to rerun them all:

Test name Commit Details Rerun command
pull-kubernetes-cross c6b8d2f link /test pull-kubernetes-cross
pull-kubernetes-integration 64491e8 link /test pull-kubernetes-integration
pull-kubernetes-local-e2e-containerized 64491e8 link /test pull-kubernetes-local-e2e-containerized

Vikas Choudhary (vikasc)

unread,
Dec 4, 2018, 7:30:10 AM
to kubernetes/kubernetes, k8s-mirror-storage-pr-reviews, Team mention

/test pull-kubernetes-integration

Vikas Choudhary (vikasc)

unread,
Dec 4, 2018, 7:30:41 AM
to kubernetes/kubernetes, k8s-mirror-storage-pr-reviews, Team mention

/test pull-kubernetes-local-e2e-containerized

Kubernetes Prow Robot

unread,
Dec 4, 2018, 9:34:09 AM
to kubernetes/kubernetes, k8s-mirror-storage-pr-reviews, Team mention

@vikaschoudhary16: The following tests failed, say /retest to rerun them all:

Test name Commit Details Rerun command
pull-kubernetes-cross c6b8d2f link /test pull-kubernetes-cross
pull-kubernetes-local-e2e-containerized 64491e8 link /test pull-kubernetes-local-e2e-containerized

Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.

Vikas Choudhary (vikasc)

unread,
Dec 5, 2018, 2:29:43 AM
to kubernetes/kubernetes, k8s-mirror-storage-pr-reviews, Team mention

/test pull-kubernetes-local-e2e-containerized

Kubernetes Prow Robot

unread,
Dec 18, 2018, 6:11:56 AM
to kubernetes/kubernetes, k8s-mirror-storage-pr-reviews, Team mention

@vikaschoudhary16: PR needs rebase.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

Kubernetes Prow Robot

unread,
Dec 18, 2018, 6:13:39 AM
to kubernetes/kubernetes, k8s-mirror-storage-pr-reviews, Team mention

@vikaschoudhary16: The following tests failed, say /retest to rerun them all:

Test name Commit Details Rerun command
pull-kubernetes-cross c6b8d2f link /test pull-kubernetes-cross
pull-kubernetes-bazel-build 1058ae6 link /test pull-kubernetes-bazel-build

Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.

Kubernetes Prow Robot

unread,
Dec 18, 2018, 6:13:46 AM
to kubernetes/kubernetes, k8s-mirror-storage-pr-reviews, Team mention

@vikaschoudhary16: The following tests failed, say /retest to rerun them all:

Test name Commit Details Rerun command
pull-kubernetes-cross c6b8d2f link /test pull-kubernetes-cross
pull-kubernetes-bazel-build 1058ae6 link /test pull-kubernetes-bazel-build
pull-kubernetes-bazel-test 1058ae6 link /test pull-kubernetes-bazel-test

Kubernetes Prow Robot

unread,
Dec 18, 2018, 6:13:53 AM
to kubernetes/kubernetes, k8s-mirror-storage-pr-reviews, Team mention

@vikaschoudhary16: The following tests failed, say /retest to rerun them all:

Test name Commit Details Rerun command
pull-kubernetes-cross c6b8d2f link /test pull-kubernetes-cross
pull-kubernetes-bazel-build 1058ae6 link /test pull-kubernetes-bazel-build
pull-kubernetes-bazel-test 1058ae6 link /test pull-kubernetes-bazel-test
pull-kubernetes-godeps 1058ae6 link /test pull-kubernetes-godeps

Kubernetes Prow Robot

unread,
Dec 18, 2018, 6:13:55 AM
to kubernetes/kubernetes, k8s-mirror-storage-pr-reviews, Team mention

@vikaschoudhary16: The following tests failed, say /retest to rerun them all:

Test name Commit Details Rerun command
pull-kubernetes-cross c6b8d2f link /test pull-kubernetes-cross
pull-kubernetes-bazel-build 1058ae6 link /test pull-kubernetes-bazel-build
pull-kubernetes-bazel-test 1058ae6 link /test pull-kubernetes-bazel-test
pull-kubernetes-godeps 1058ae6 link /test pull-kubernetes-godeps
pull-kubernetes-local-e2e-containerized 1058ae6 link /test pull-kubernetes-local-e2e-containerized

Kubernetes Prow Robot

unread,
Dec 18, 2018, 6:14:08 AM
to kubernetes/kubernetes, k8s-mirror-storage-pr-reviews, Team mention

@vikaschoudhary16: The following tests failed, say /retest to rerun them all:

Test name Commit Details Rerun command
pull-kubernetes-e2e-gce 1058ae6 link /test pull-kubernetes-e2e-gce
pull-kubernetes-e2e-gce-device-plugin-gpu 1058ae6 link /test pull-kubernetes-e2e-gce-device-plugin-gpu
pull-kubernetes-integration 1058ae6 link /test pull-kubernetes-integration

Kubernetes Prow Robot

unread,
Dec 18, 2018, 6:14:19 AM
to kubernetes/kubernetes, k8s-mirror-storage-pr-reviews, Team mention

Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR.
