Evicted pods policy


paolo.m...@sparkfabrik.com

Mar 21, 2017, 10:05:44 AM
to Kubernetes user discussion and Q&A
Hello,

This is my current situation:

```
❯ kubectl get pods --all-namespaces | grep -i evicted
gitlab    gitlab-runner-190353586-wnhc5                    0/1   Evicted   0   5d
gitlab    minio-966383792-kpp59                            0/1   Evicted   0   6d
gitlab    runner-a1b569a9-project-119-concurrent-0cbfrf    0/3   Evicted   0   23h
```

Is it normal that I still see evicted pods here? Is there a way to auto-purge them?
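
To be clear, by "purge" I mean something like deleting them by hand; a rough sketch of what I have in mind is below (it just greps the Evicted entries out of the listing above, so the column positions may need adjusting for other kubectl versions):

```
# List every pod, keep the Evicted ones (columns: NAMESPACE NAME READY STATUS ...),
# and delete each one in its own namespace.
kubectl get pods --all-namespaces | awk '/Evicted/ {print $1, $2}' | \
  while read ns pod; do kubectl delete pod "$pod" -n "$ns"; done
```

What I am really asking is whether the cluster can be made to do this kind of cleanup on its own.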

Thanks!

paolo.m...@sparkfabrik.com

Mar 22, 2017, 4:53:35 AM
to Kubernetes user discussion and Q&A, paolo.m...@sparkfabrik.com
Anyone ?

Thanks!

Brandon Philips

Mar 22, 2017, 4:08:44 PM
to Kubernetes user discussion and Q&A, paolo.m...@sparkfabrik.com
It is likely that no one has responded because we need more specifics on why the pods are evicted. Can you describe the pods?


Brandon


paolo.m...@sparkfabrik.com

Apr 12, 2017, 5:24:52 AM
to Kubernetes user discussion and Q&A, paolo.m...@sparkfabrik.com
This is an example of an evicted pod:

```
~ ❯ kubectl describe pod/dashboard-develop-t8r7ox-1248421821-7w8r5
Name:           dashboard-develop-t8r7ox-1248421821-7w8r5
Namespace:      default
Node:           gke-spark-op-services-gitlab-ci-0dcd135c-gcxm/
Start Time:     Thu, 06 Apr 2017 14:52:01 +0200
Labels:         app=dashboard-develop-t8r7ox
                name=dashboard-develop-t8r7ox
                pod-template-hash=1248421821
Status:         Failed
Reason:         Evicted
Message:        The node was low on resource: nodefsInodes.
IP:
Controllers:    ReplicaSet/dashboard-develop-t8r7ox-1248421821
Containers:
  app:
    Image:      gcr.io/spark-int-cloud-services/dashboard-code:develop
    Port:       80/TCP
    Readiness:  http-get http://:80/user/login delay=60s timeout=5s period=10s #success=1 #failure=60
    Volume Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-f6799 (ro)
    Environment Variables:
      CI_PIPELINE_ID:     652
      CI_BUILD_ID:        1626
      DB_HOST:            dashboard-develop-t8r7ox-mysql
      DB_NAME:            drupal
      DB_PORT:            3306
      DB_USER:            root
      DB_PASS:            root
      PHP_OPCACHE_ENABLE: 1
Volumes:
  default-token-f6799:
    Type:       Secret (a volume populated by a Secret)
    SecretName: default-token-f6799
QoS Class:      BestEffort
Tolerations:    <none>
No events.
```

Currently I have a lot of evicted pods that have been sitting there for more than 5 days.
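
Since the eviction message points at nodefsInodes, I suppose the next step is to check whether the node is still under inode pressure; a rough sketch of what I mean is below (the node name comes from the describe output above, and `df -i` would have to be run on the node itself, e.g. over SSH):

```
# Does the node that evicted the pod still report disk/inode pressure?
kubectl describe node gke-spark-op-services-gitlab-ci-0dcd135c-gcxm | grep -i pressure

# On the node itself (e.g. via "gcloud compute ssh"), check inode usage per filesystem.
df -i
```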

Joshua Sindy

Sep 14, 2017, 9:32:16 AM
to Kubernetes user discussion and Q&A
I have a similar situation where my cluster rebalances and moves containers to other nodes. This is fine, but I am curious why the Evicted pods stick around for so long. Is there a scheduled cleanup of evicted pods, or do we need to delete them manually?
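
The closest thing to automatic cleanup I have found is the pod garbage collector in the controller manager, which, if I read the docs correctly, only kicks in once the number of terminated pods crosses --terminated-pod-gc-threshold (12500 by default), so on a small cluster evicted pods can linger more or less forever:

```
# kube-controller-manager flag controlling garbage collection of terminated
# (Failed/Succeeded) pods; lowering it makes the cleanup kick in much sooner.
# On managed offerings like GKE the controller manager flags are usually not tunable.
kube-controller-manager --terminated-pod-gc-threshold=100
```

Is that really the only knob, or am I missing something?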