Hello,
Thanks for your response.
Sure thing. Here is some additional context, along with the use cases where allowing the operator to cordon nodes without draining them would be useful for us:
- Pre-Maintenance Preparation: we are about to perform maintenance on a node but don’t want to disrupt existing workloads yet.
- Node Health Investigations: we need to look into node-level issues without affecting currently running pods.
- Graceful Decommissioning: we want to prevent new workloads from landing on a node while waiting for existing jobs to finish naturally.
- Resource Pressure: a node is low on disk, CPU, or memory, but its workloads are stable; cordoning alone gives us time to investigate or add capacity elsewhere.
Rather than simply executing `kubectl cordon <node>`, we would also like to rely on the `reason` field of the NodeMaintenance custom resource in our workflows: it clearly communicates why a node is under maintenance, which is particularly useful for automation and observability in our system.
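For illustration, a minimal NodeMaintenance object for this cordon-only workflow might look like the sketch below. The `apiVersion` and the `spec.nodeName` field are assumptions based on the medik8s node-maintenance-operator’s v1beta1 schema; the actual group/version and fields may differ in the CRD in use.

```yaml
# Hypothetical NodeMaintenance object: cordon worker-1 without draining,
# recording the motivation in spec.reason for automation/observability.
# apiVersion and field names are assumed from the medik8s v1beta1 schema.
apiVersion: nodemaintenance.medik8s.io/v1beta1
kind: NodeMaintenance
metadata:
  name: worker-1-maintenance
spec:
  nodeName: worker-1
  reason: "Pre-maintenance preparation: kernel upgrade scheduled"
```

Our tooling could then watch these objects and surface `spec.reason` in dashboards and alerts, instead of inferring intent from the bare `node.spec.unschedulable` flag that `kubectl cordon` sets.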
I hope this helps clarify things.
Best regards,
David