To view this discussion on the web visit https://groups.google.com/d/msgid/kubernetes-sig-architecture/CY4PR21MB05041BE3C418C17FEF3BDC70DBCC0%40CY4PR21MB0504.namprd21.prod.outlook.com.
I would be ok with either suggestion. 2+ nodes is not unreasonable. Maybe we should consider the current profile as the normal one and have a single node profile?
Many of the scheduling tests don't fit well with a tenanted user executing conformance anyway.
I'd just like to note that there are 2 current conformant Kubernetes
distributions which target single nodes, Docker Desktop and MicroK8s.
I obviously work for Docker so I can be regarded as biased here, but
both of those (as well as Minikube which targets a single node but
isn't yet conformant, I believe?) are useful and widely used in the
Kubernetes community.
From Arun Gupta's recent survey of >1000 folks, Docker Desktop was the
most popular solution, used by ~40% of respondents.
https://twitter.com/arungupta/status/1073421250305892352
Do we have an opinion on this usecase or are we saying explicitly
"single node clusters cannot be conformant"?
I find the idea that a single node can't be conformant distressing,
but more in the words than the intention. If conformance includes
behaviors around multi-node scenarios, we simply can't confirm that a
1-node cluster is conformant, but to call it non-conformant has a
particularly negative connotation and real implications wrt trademark
and naming.
I think we can retain the meaning (can't confirm) without yanking
trademark allowances. After all, a user of a 1-node cluster can't
reasonably expect multi-node behaviors to work, so it does conform to
realistic expectations.
Do we have a list of the things that require multi-node?
On Tue, Jan 8, 2019 at 9:12 AM Brian Grant <brian...@google.com> wrote:
>
> On Tue, Jan 8, 2019 at 9:05 AM Tim Hockin <tho...@google.com> wrote:
>>
>> I find the idea that a single node can't be conformant distressing,
>> but more in the words than the intention. If conformance includes
>> behaviors around multi-node scenarios, we simply can't confirm that a
>> 1-node cluster is conformant, but to call it non-conformant has a
>> particularly negative connotation and real implications wrt trademark
>> and naming.
>
>
> I'm unconvinced that the burden of supporting multiple nodes is excessive and that the value of single-node clusters is high enough to carve out an exception for this.
>>
>>
>> I think we can retain the meaning (can't confirm) without yanking
>> trademark allowances. After all, a user of a 1-node cluster can't
>> reasonably expect multi-node behaviors to work, so it does conform to
>> realistic expectations.
>>
>>
>> Do we have a list of the things that require multi-node?
>
>
> Pod networking
I can verify pod networking on a single node. It doesn't prove that
the same config would work for multi-node, but the requirement is only
that pods can reach pods.
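For what it's worth, the check I have in mind looks something like this (a sketch only; the names, images, and Service wiring are made up for illustration): run a server pod behind a Service and have a second pod reach it over the cluster network.

```yaml
# Hypothetical single-node pod-networking check: can one pod reach another?
apiVersion: v1
kind: Pod
metadata:
  name: net-server
  labels:
    app: net-server
spec:
  containers:
  - name: server
    image: nginx
    ports:
    - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: net-server
spec:
  selector:
    app: net-server
  ports:
  - port: 80
---
# The client exits successfully only if pod-to-pod traffic works.
apiVersion: v1
kind: Pod
metadata:
  name: net-client
spec:
  restartPolicy: Never
  containers:
  - name: client
    image: busybox
    command: ["wget", "-qO-", "http://net-server"]
```

On a one-node cluster both pods land on the same node, so this proves pod-to-pod reachability there, not the cross-node path.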
> A number of DaemonSet features and behaviors
I think user expectations of DaemonSet are being met in a single node
(unless I am missing something). We just can't verify multi-node. I
have one node. Did I get a pod on every node?
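Concretely, something like the following (a sketch; the name and image are made up): on a single-node cluster the DaemonSet's desired and ready counts should both be 1, which is exactly the "a pod on every node" contract from the user's point of view.

```yaml
# Hypothetical DaemonSet check: one pod per node, which on a
# single-node cluster means exactly one pod, desired == ready == 1.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: every-node
spec:
  selector:
    matchLabels:
      app: every-node
  template:
    metadata:
      labels:
        app: every-node
    spec:
      containers:
      - name: pause
        image: k8s.gcr.io/pause:3.1
```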
> A number of scheduling features (e.g., node and pod affinity / anti-affinity)
You can verify that anti-affinity works by ensuring the second pod is
not on the same node as the first (even if that means pending).
Affinity tests can produce false positives, but that is true in any
case, isn't it?
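To make the Pending case concrete, a sketch (names made up): with required anti-affinity on the hostname topology key and two replicas, a one-node cluster schedules one pod and leaves the second Pending, and that Pending state is itself verifiable behavior.

```yaml
# Hypothetical anti-affinity check: replicas: 2 with required
# anti-affinity on kubernetes.io/hostname. On one node, the second
# pod must stay Pending rather than co-schedule with the first.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: spread-me
spec:
  replicas: 2
  selector:
    matchLabels:
      app: spread-me
  template:
    metadata:
      labels:
        app: spread-me
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchLabels:
                app: spread-me
            topologyKey: kubernetes.io/hostname
      containers:
      - name: pause
        image: k8s.gcr.io/pause:3.1
```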
> Some PV behaviors
I am trying to think of which ones, but I expect it is similar to the
pending case above. I can verify that a PV got detached, even if
there's nowhere else to re-attach it.
> Probably other things
I think I am saying that some of these tests rely on a second node
when they could instead rely on "not the first node".
If we could fix those tests, are there lingering objections based on
principles/philosophy or just based on practicality?
On Tue, 8 Jan 2019 at 09:08, Brian Grant <brian...@google.com> wrote:
>
> On Tue, Jan 8, 2019 at 5:51 AM Gareth Rushgrove <gar...@morethanseven.net> wrote:
>>
>> I'd just like to note that there are 2 current conformant Kubernetes
>> distributions which target single nodes, Docker Desktop and MicroK8s.
>
>
> I didn't see these in the list when I looked, but it doesn't change my position.
On Tue, Jan 8, 2019 at 9:35 AM Gareth Rushgrove <gar...@morethanseven.net> wrote:
> On Tue, 8 Jan 2019 at 09:08, Brian Grant <brian...@google.com> wrote:
>> I'd just like to note that there are 2 current conformant Kubernetes
>> distributions which target single nodes, Docker Desktop and MicroK8s.
> I didn't see these in the list when I looked, but it doesn't change my position.
Ah, it's listed in the spreadsheet, but not on the CNCF website, and not prominently mentioned on the Docker Desktop site.
--
You received this message because you are subscribed to the Google Groups "kubernetes-sig-architecture" group.
To unsubscribe from this group and stop receiving emails from it, send an email to kubernetes-sig-arch...@googlegroups.com.