Thanks for bringing this up, Shane and Antonio!
There are a whole lot of "granular production readiness / conformance" questions floating around across all the SIGs these days. It seems like what we really want now is a higher-level tool than e2e.test, one that gives people a report instead of a single pass/fail signal.
One possible solution: in sig-windows, we addressed this by aggregating sets of tags into the Windows operational readiness specification: https://github.com/kubernetes-sigs/windows-operational-readiness. That gives a more granular sense of conformance than the "guestbook works, and so do EmptyDirs and the APIs..." signal we currently have.
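To make that idea concrete, here's a minimal Go sketch of what "aggregate existing Ginkgo tags into named readiness categories" can look like. The type names, category names, and regexes below are illustrative assumptions, not the actual schema of that repo:

```go
// Sketch only: group existing e2e tests (selected by Ginkgo focus/skip
// patterns) under named readiness categories that can pass or fail
// independently, in the spirit of the windows-operational-readiness approach.
package main

import "fmt"

// Category is a hypothetical grouping of e2e tests under one readiness area.
type Category struct {
	Name  string   // e.g. "Core.Network" (illustrative name)
	Focus []string // -ginkgo.focus patterns selecting the tests
	Skip  []string // patterns for tests to exclude on this platform
}

func main() {
	spec := []Category{
		{
			Name:  "Core.Network",
			Focus: []string{`\[sig-network\].*NodePort`},
		},
		{
			Name:  "Core.Storage",
			Focus: []string{`\[sig-storage\].*EmptyDir`},
			Skip:  []string{`\[LinuxOnly\]`},
		},
	}

	// A runner would turn each category into one e2e.test invocation, e.g.
	//   e2e.test -ginkgo.focus=<focus> -ginkgo.skip=<skip>
	// and report per-category results instead of a single pass/fail.
	for _, c := range spec {
		fmt.Printf("category %q -> focus=%v skip=%v\n", c.Name, c.Focus, c.Skip)
	}
}
```

The point of the sketch is just that the report becomes "which categories does this platform satisfy" rather than one boolean.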
- In sig-node, certain container runtimes behave differently wrt things like cAdvisor, kubelet statistics, and even whether they can support realtime CPU isolation, and so on... it would be nice to have higher-level ways to measure a node in terms of its conformance to a particular specification, e.g. "supports realtime" or "supports GPUs" or supports ...
- In sig-network we have L7, NetworkPolicy, and Ingress/Gateway, all totally pluggable and totally different across providers...
- And only a handful (6 or so? maybe more?) of the 300+ conformance tests (last I checked) rely specifically on kube-proxy functionality, but we have many, many tests (functioning NodePort services, mutating a Service from one type to another, and so on...) which together define a "conformant" kube-proxy implementation (see the sketch after this list).
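As a starting point, a "conformant kube-proxy" profile could simply be a named Ginkgo focus over those Service-level tests. The focus regex, skip pattern, and binary/kubeconfig paths below are assumptions for illustration, not an agreed-upon profile:

```go
// Sketch only: run the Service-level e2e tests as a stand-in for a
// "conformant kube-proxy" profile. The focus regex is an ad-hoc guess at the
// relevant tests, not a vetted list.
package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	// Select Service/NodePort behaviours; a real profile would need an
	// agreed-upon list of tests rather than this regex.
	focus := `\[sig-network\].*Services.*(NodePort|type)`

	cmd := exec.Command("./e2e.test", // hypothetical path to the e2e binary
		"--kubeconfig="+os.Getenv("KUBECONFIG"),
		"-ginkgo.focus="+focus,
		`-ginkgo.skip=\[Disruptive\]`,
	)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatalf("kube-proxy profile run failed: %v", err)
	}
}
```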
Any other ideas on how we should solve this? I wonder if it's possible to get SIGs to agree on a broad definition of granular conformance somehow...