Hello Kubernetes Community,
A security issue was discovered in kube-proxy which allows adjacent hosts (hosts on the same LAN or layer 2 domain) to reach TCP and UDP services on the node(s) that are bound to 127.0.0.1. For example, if a cluster administrator runs a TCP service that listens on 127.0.0.1:1234, this bug makes that service potentially reachable by other hosts on the same LAN as the node, or by containers running on the same node as the service. If the example service on port 1234 requires no additional authentication (because it assumes that only other localhost processes can reach it), it could be vulnerable to attacks that exploit this bug.
The Kubernetes API Server's default insecure port setting causes the API server to listen on 127.0.0.1:8080, where it accepts requests without authentication. Many Kubernetes installers explicitly disable the API Server's insecure port, but in clusters where it is not disabled, an attacker with access to another system on the same LAN, or with control of a container running on the master, may be able to reach the API server and make arbitrary API requests against the cluster. This port is deprecated and will be removed in Kubernetes v1.20.
This issue has been rated medium (CVSS:3.1/AV:A/AC:L/PR:N/UI:N/S:U/C:L/I:L/A:N), and assigned CVE-2020-8558.
In clusters where the API Server insecure port is not disabled, this issue has been rated high (CVSS:3.1/AV:A/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H).
You may be vulnerable if:
- You are running a vulnerable version (see below).
- Your cluster nodes run in an environment where untrusted hosts share the same layer 2 domain (i.e. the same LAN) as nodes.
- Your cluster allows untrusted pods to run containers with CAP_NET_RAW (the Kubernetes default is to allow this capability; a sketch for dropping it follows the lsof output below).
- Your nodes (or hostNetwork pods) run any localhost-only services which do not require any further authentication. To list potentially affected services, run the following commands on nodes:
  lsof +c 15 -P -n -i4TCP@127.0.0.1 -sTCP:LISTEN
  lsof +c 15 -P -n -i4UDP@127.0.0.1
On a master node, an lsof entry like this indicates that the API server may be listening on its insecure port:
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
kube-apiserver 123 root 7u IPv4 26799 0t0 TCP 127.0.0.1:8080 (LISTEN)
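The CAP_NET_RAW prerequisite above can be removed for untrusted workloads by dropping the capability in the pod's securityContext. A minimal sketch, applied via a shell heredoc (the pod name and image are illustrative, not from the original announcement):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: no-net-raw-demo        # illustrative name
spec:
  containers:
  - name: app
    image: busybox:1.31        # illustrative image
    command: ["sleep", "3600"]
    securityContext:
      capabilities:
        drop: ["NET_RAW"]      # remove the capability this attack relies on
EOF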
Affected versions:
- kube-proxy v1.18.0-v1.18.3
- kube-proxy v1.17.0-v1.17.6
- kube-proxy < v1.16.10
Prior to upgrading, this vulnerability can be mitigated by manually adding an iptables rule on nodes. This rule drops traffic destined for 127.0.0.0/8 that does not originate on the node.
iptables -I INPUT --dst 127.0.0.0/8 ! --src 127.0.0.0/8 \
-m conntrack ! --ctstate RELATED,ESTABLISHED,DNAT -j DROP
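To confirm the rule is present (for example, from a boot script that re-adds it only when missing), iptables -C can be used with the same match specification; persisting the rule across reboots depends on your distribution (iptables-persistent, a systemd unit, etc.):

# -C exits non-zero when the rule is absent, so this inserts it at most once.
iptables -C INPUT --dst 127.0.0.0/8 ! --src 127.0.0.0/8 \
    -m conntrack ! --ctstate RELATED,ESTABLISHED,DNAT -j DROP \
  || iptables -I INPUT --dst 127.0.0.0/8 ! --src 127.0.0.0/8 \
    -m conntrack ! --ctstate RELATED,ESTABLISHED,DNAT -j DROP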
Additionally, if your cluster does not already have the API Server insecure port disabled, we strongly suggest that you disable it. Add the following flag to your Kubernetes API Server command line: --insecure-port=0
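On kubeadm-based clusters (an assumption; other installers configure the API server differently), the flag is typically added to the static pod manifest, and the change can be verified from the master itself:

# Add "- --insecure-port=0" to the kube-apiserver command in the
# static pod manifest (kubeadm default path); the kubelet restarts
# the API server automatically when the manifest changes.
vi /etc/kubernetes/manifests/kube-apiserver.yaml

# Once restarted, the insecure port should refuse connections:
curl -s http://127.0.0.1:8080/version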
Packets on the wire with an IPv4 destination in the range 127.0.0.0/8 and a layer 2 destination MAC address of a node may indicate an attack targeting this vulnerability.
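One hedged way to watch for such packets is to capture on a node's external interface, where legitimate loopback-destined traffic never appears (eth0 is an assumed interface name):

# Any packet arriving on the external interface with a 127.0.0.0/8
# destination is a martian and worth investigating.
tcpdump -ni eth0 'dst net 127.0.0.0/8'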
Fixed versions:
- kube-proxy v1.19.0+ (not yet released)
- kube-proxy v1.18.4+
- kube-proxy v1.17.7+
- kube-proxy v1.16.11+
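On kubeadm-provisioned clusters, kube-proxy typically runs as a DaemonSet in kube-system (an assumption; other installers vary), so the deployed version can be read from its image tag:

kubectl -n kube-system get daemonset kube-proxy \
  -o jsonpath='{.spec.template.spec.containers[0].image}'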
To upgrade, refer to the documentation: https://kubernetes.io/docs/tasks/administer-cluster/cluster-management/#upgrading-a-cluster
See the GitHub issue for more details: https://github.com/kubernetes/kubernetes/issues/92315
This vulnerability was reported by János Kövér of Ericsson, with additional impacts reported by Rory McCune of NCC Group, and by Yuval Avrahami and Ariel Zelivansky of Palo Alto Networks.
Thank You,
Joel Smith on behalf of the Kubernetes Product Security Committee
Apologies for the extra email, but an error was discovered in the original announcement from earlier today. The announcement makes it sound like upgrading kube-proxy will address the issue, but that is incorrect. While the issue is caused by a setting in kube-proxy, the current fix is in the kubelet. We recommend updating both the kubelet and kube-proxy to be sure the issue is addressed.
Please see https://github.com/kubernetes/kubernetes/issues/92315 for the most up-to-date information.
Thanks,
Joel Smith, on behalf of the Kubernetes Product Security Committee