[Security Advisory] CVE-2020-8558: Kubernetes: Node setting allows for neighboring hosts to bypass localhost boundary


Joel Smith

Jul 8, 2020, 12:01:13 PM
to kubernete...@googlegroups.com, Kubernetes developer/contributor discussion, kubernetes-sec...@googlegroups.com, kubernetes-se...@googlegroups.com

Hello Kubernetes Community,

A security issue was discovered in kube-proxy which allows adjacent hosts (hosts on the same LAN or layer 2 domain) to reach TCP and UDP services that are bound to 127.0.0.1 on the node(s). For example, if a cluster administrator runs a TCP service that listens on 127.0.0.1:1234, this bug makes that service potentially reachable by other hosts on the same LAN as the node, or by containers running on the same node as the service. If the example service on port 1234 requires no additional authentication (because it assumes that only other localhost processes can reach it), it could be vulnerable to attacks that exploit this bug.
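
For illustration only, here is a stand-in for such a localhost-only service, assuming python3 is available on the node (the port and command are hypothetical, not part of the advisory):

python3 -m http.server 1234 --bind 127.0.0.1

A server started this way performs no authentication of its own; it relies entirely on the assumption that only local processes can reach 127.0.0.1, which is exactly the assumption this bug breaks.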

The Kubernetes API Server's default insecure port setting causes the API server to listen on 127.0.0.1:8080, where it accepts requests without authentication. Many Kubernetes installers explicitly disable the API Server's insecure port, but in clusters where it is not disabled, an attacker with access to another system on the same LAN, or with control of a container running on the master, may be able to reach the API server and execute arbitrary API requests against the cluster. The insecure port is deprecated and will be removed in Kubernetes v1.20.
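
As a quick illustration, assuming the default insecure port of 8080 is enabled, any process that can reach the port can query the API without credentials. Running the following on the master itself should return a JSON version payload with no authentication challenge:

curl -s http://127.0.0.1:8080/version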

This issue has been rated medium (CVSS:3.1/AV:A/AC:L/PR:N/UI:N/S:U/C:L/I:L/A:N), and assigned CVE-2020-8558.

In clusters where the API Server insecure port is not disabled, this issue has been rated high (CVSS:3.1/AV:A/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H).

Am I vulnerable?

You may be vulnerable if:

  • You are running a vulnerable version (see below)

  • Your cluster nodes run in an environment where untrusted hosts share the same layer 2 domain (i.e. same LAN) as nodes

  • Your cluster allows untrusted pods to run containers with CAP_NET_RAW (the Kubernetes default is to allow this capability).

  • Your nodes (or hostnetwork pods) run any localhost-only services which do not require any further authentication. To list services that are potentially affected, run the following commands on nodes:
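
For example, assuming lsof is installed on the node, the following lists IPv4 TCP listeners and UDP sockets bound to 127.0.0.1:

lsof +c 15 -P -n -i4TCP@127.0.0.1 -sTCP:LISTEN
lsof +c 15 -P -n -i4UDP@127.0.0.1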

On a master node, an lsof entry like the following indicates that the API server is listening on the insecure port:

COMMAND        PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
kube-apiserver 123 root 7u IPv4  26799      0t0  TCP 127.0.0.1:8080 (LISTEN)
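
As a cross-check (assuming the API server process is visible to ps on the node), you can look for the flag on the API server's command line; note that when --insecure-port is not set at all, the default of 8080 applies:

ps -ef | grep '[k]ube-apiserver' | grep -o -- '--insecure-port=[0-9]*'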

Affected Versions

  • kube-proxy v1.18.0-1.18.3

  • kube-proxy v1.17.0-1.17.6

  • kube-proxy <1.16.10

How do I mitigate this vulnerability?

Prior to upgrading, this vulnerability can be mitigated by manually adding an iptables rule on each node. This rule drops traffic destined for 127.0.0.0/8 that does not originate on the node itself.

iptables -I INPUT --dst 127.0.0.0/8 ! --src 127.0.0.0/8 \
  -m conntrack ! --ctstate RELATED,ESTABLISHED,DNAT -j DROP
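
To confirm the rule is in place (iptables -C exits with status 0 when a matching rule exists):

iptables -C INPUT --dst 127.0.0.0/8 ! --src 127.0.0.0/8 \
  -m conntrack ! --ctstate RELATED,ESTABLISHED,DNAT -j DROP && echo "rule present"

Note that a rule added this way does not persist across reboots; use your distribution's iptables persistence mechanism to make it permanent.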

Additionally, if your cluster does not already have the API Server insecure port disabled, we strongly suggest that you disable it. Add the following flag to your Kubernetes API server command line: --insecure-port=0
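
For kubeadm-based clusters, the flag typically goes in the static pod manifest at /etc/kubernetes/manifests/kube-apiserver.yaml (a sketch; the exact layout depends on your installer, and the kubelet restarts the static pod automatically when the manifest changes):

spec:
  containers:
  - name: kube-apiserver
    command:
    - kube-apiserver
    - --insecure-port=0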

Detection

Packets on the wire with an IPv4 destination in the range 127.0.0.0/8 and a layer-2 destination MAC address of a node may indicate that an attack is targeting this vulnerability.
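
One way to watch for such packets, assuming tcpdump is available and eth0 is the node's LAN-facing interface:

tcpdump -ni eth0 'dst net 127.0.0.0/8 and not src net 127.0.0.0/8'

Legitimate traffic should never carry a 127.0.0.0/8 destination address on the wire, so any match warrants investigation.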

Fixed Versions

  • kube-proxy v1.19.0+ (not yet released)

  • kube-proxy v1.18.4+

  • kube-proxy v1.17.7+

  • kube-proxy v1.16.11+

To upgrade, refer to the documentation: https://kubernetes.io/docs/tasks/administer-cluster/cluster-management/#upgrading-a-cluster
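
For example, on a kubeadm cluster a patch-release upgrade looks roughly like this (the version shown is illustrative; follow the linked documentation for your environment, and remember to upgrade the kubelet on each node as well):

kubeadm upgrade plan
kubeadm upgrade apply v1.18.4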

Additional Details

See the GitHub issue for more details: https://github.com/kubernetes/kubernetes/issues/92315

Acknowledgements

This vulnerability was reported by János Kövér of Ericsson, with additional impacts reported by Rory McCune of NCC Group and by Yuval Avrahami and Ariel Zelivansky of Palo Alto Networks.

Thank You,

Joel Smith on behalf of the Kubernetes Product Security Committee

Joel Smith

Jul 8, 2020, 1:48:14 PM
to kubernete...@googlegroups.com, Kubernetes developer/contributor discussion, kubernetes-sec...@googlegroups.com, kubernetes-se...@googlegroups.com

Apologies for the extra email, but an error was discovered in the original announcement from earlier today. The announcement makes it sound like upgrading kube-proxy alone will address the issue, but that is incorrect. While the issue is caused by a setting in kube-proxy, the current fix is in the kubelet. We recommend updating both the kubelet and kube-proxy to be sure the issue is addressed.

Please see https://github.com/kubernetes/kubernetes/issues/92315 for the most up-to-date information.

Thanks,

Joel Smith, on behalf of the Kubernetes Product Security Committee
