I deleted the CronJob with kubectl delete cronjob <name>, but the pods are still retrying. I tried creating the busybox CronJob from the documentation under the same name as my original one, hoping it would run successfully and clear things up, but that left the previous pods alone and started scheduling even more.
I currently have over 200 containers in Pending, Error, or CrashLoopBackOff state as a result. Deleting the pods just causes them to be recreated, and I'm now getting an "insufficient pods (2)" scheduling error.
The cronjob.yaml is below. Is there any way to permanently stop these from retrying? Note that even after running kubectl apply -f ..., kubectl reports cronjob <name> configured, but kubectl get cronjobs run immediately afterwards lists nothing.
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: payment-${CRON_NAME}
  namespace: payment
  labels:
    app: payment
spec:
  schedule: "${CRON_SCHEDULE}"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: payment-${CRON_NAME}
              image: "${GCP_IMAGE}:${BUILD_NUMBER}"
              args:
                - ./${CRON_NAME}
              volumeMounts:
                - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
                  name: ${DEFAULT_TOKEN_NAME}
                  readOnly: true
          restartPolicy: OnFailure
          volumes:
            - name: ${DEFAULT_TOKEN_NAME}
              secret:
                defaultMode: 420
                secretName: ${DEFAULT_TOKEN_NAME}
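
If the goal is just to stop the schedule from firing while you investigate, a CronJob can also be paused through its spec.suspend field rather than deleted. A minimal sketch, assuming the templated name expands to payment-<CRON_NAME>:

# Without -n, kubectl only looks at the "default" namespace; this lists
# CronJobs in every namespace, including payment
kubectl get cronjobs --all-namespaces

# Pause scheduling without deleting the object
kubectl patch cronjob payment-<CRON_NAME> -n payment -p '{"spec":{"suspend":true}}'

What ultimately resolved it, though, was addressing the CronJob in the namespace it actually lives in: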
kubectl get cronjobs -n payment
kubectl delete cronjob <name> -n payment
After running those commands with the -n payment flag, the delete actually took effect and the pods stopped. The earlier get and delete attempts had been silently targeting the default namespace, which is why kubectl get cronjobs listed nothing even though the manifest (with its own namespace: payment) applied successfully.
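
If orphaned pods persist even after the CronJob itself is gone, they are usually owned by Job objects the CronJob created earlier, and deleting those Jobs removes their pods with them. A sketch, assuming every Job in the payment namespace belongs to this CronJob (the --all form deletes them indiscriminately):

# List the Jobs the CronJob left behind
kubectl get jobs -n payment

# Delete one leftover Job; its pods are removed with it
kubectl delete job <job-name> -n payment

# Or clear every Job in the namespace
kubectl delete jobs --all -n payment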