Re: The way we discuss control plane members


Stephen Augustus

May 17, 2019, 5:53:35 PM
to Andrew Kutz, shidaqiu2018, Daniel Lipovetsky, kubernetes-sig-cluster-lifecycle, kubernete...@googlegroups.com, kubernetes-sig-architecture
(+ SIG Docs, since they're our trusted wordsmiths and SIG Arch, as they may have opinions)

Thanks so much for raising this, Andrew! Could we also open an issue to track?

I feel like in the places where "master" is not referenced, "control plane" is, and it's generally (maybe my assumption here) understood to be the set of nodes which hold the control plane components, namely the API server, scheduler, and controller manager.
Could we not opt to simply continue referencing it as such and search/replace "master", where appropriate?

Another suggestion (combined with yours) would be:
- Plural: control plane pool
- Singular: control plane node

We have the concept of node/agent pools, so this feels "natural" to extend that to control plane naming.

"Controller" feels weird, because I feel it would cause confusion between a controller (machine) and the currently accepted controller (reconciliation loop for some Kubernetes resource).

As for naming the first control plane node to come up, "primary" may cause confusion, as it implies that the first control plane node to be instantiated is also going to be the one always doing the work, which isn't necessarily true in HA scenarios.
"Initial" control plane node sounds more "correct" for that.

If we're going for shorter, I like "control node" as well.

-- Stephen

On Thu, May 16, 2019, 20:42 'Andrew Kutz' via kubernetes-sig-cluster-lifecycle <kubernetes-sig-c...@googlegroups.com> wrote:
Hi all,

I think my original recommendations were lost in the initial reply. They were:

- controller
- controller node
- control plane member
- control plane node
- primary controller

I already saw a +1 for "control node", which is similar to the "controller node" I recommended above.

Let me throw another wrinkle into the mix. A node really represents an instance of a kubelet, not the machine itself. Think of a hub/spoke CI design where you might have multiple CI agents running on a single machine. If there are three agents on a single machine, the CI deployment has a controller and three nodes.

--
-a

Andrew Kutz
Engineer, VMware Cloud-Native Apps BU
ak...@vmware.com
512-658-8368


On 5/16/19, 7:40 PM, "shidaqiu2018" <shidaq...@gmail.com> wrote:

    +1
    Sounds good!
    But "pilot" is a concept in Istio.
    Doesn't that cause confusion?



    ------------------ Original ------------------
    From: Daniel Lipovetsky <dan...@platform9.com>
    Date: Fri, May 17, 2019, 03:53
    To: Michael Taufen <mta...@google.com>
    Cc: Fabrizio Pandini <fabrizio...@gmail.com>, kubernetes-sig-cluster-lifecycle <kubernetes-sig-c...@googlegroups.com>
    Subject: Re: The way we discuss control plane members



    Typically, apiserver/scheduler/controller-manager are deployed on a single node if they are deployed as static pods. Otherwise, an existing scheduler distributes them across an existing cluster.




    I do like "control node" for its simplicity. But, when I think of "node," I think of a kubelet on a machine. And sometimes the above components run in another Kubernetes cluster, so "node" might be misleading.



    I kind of like "control replica," but it's a mouthful, as is "control node." If we're going to throw out "master," why not consider throwing out "control plane" as well? Is there a more concise alternative?


    What does Kubernetes translate to?
    Pilot, or shipmaster <http://classic.studylight.org/lex/grk/view.cgi?number=2942>. So if 'master' is short for the latter, and we don't like it, how about pilot?


    Daniel

    On Thu, May 16, 2019 at 10:49 AM 'Michael Taufen' via kubernetes-sig-cluster-lifecycle <kubernetes-sig-c...@googlegroups.com> wrote:


    +1


    Maybe "control node?"

    Thinking in terms of control-plane and data-plane is more accurate anyway; "master" is ambiguous, and also a misnomer, as it suggests the control plane components must run on the same machine.



    Whatever we pick it should be short and easy to say, and make sense the first time you hear it. I think this is half the reason people still use "master."


    From: Fabrizio Pandini
    <fabrizio...@gmail.com>
    Date: Wed, May 15, 2019 at 11:04 PM
    To: kubernetes-sig-cluster-lifecycle



    +1
    FYI, during the HA work we are trying to consistently use "control-plane node" ("bootstrap control-plane node" and "secondary control-plane nodes", or "joining control-plane node" when referring to the init/join workflow).








    --
    Michael Taufen
    Google SWE







Clayton

May 17, 2019, 6:00:56 PM
to Stephen Augustus, Andrew Kutz, shidaqiu2018, Daniel Lipovetsky, kubernetes-sig-cluster-lifecycle, kubernete...@googlegroups.com, kubernetes-sig-architecture
The last resolution on this was simply "control plane nodes", and the suggestion was not to overthink everything else. I don't remember the exact issue, but there was broad agreement at the time to go no further than "control plane" as the generic term for the cluster's brain, and "control plane nodes" when referring to nodes that may host a control plane.

We explicitly stated at the time that there is no requirement to have control plane nodes, or to enforce that they are labeled a specific way.

Maybe someone not on their phone can remember the issue # (it was when we discussed stopping use of the word “master”)

Daniel Smith

May 17, 2019, 6:09:20 PM
to Clayton Coleman, Stephen Augustus, Andrew Kutz, shidaqiu2018, Daniel Lipovetsky, kubernetes-sig-cluster-lifecycle, kubernete...@googlegroups.com, kubernetes-sig-architecture
I think the best way to think of this at the node level is that it's a security domain. "Node permitted to host control plane components." "Control plane node" is shorter and probably good enough, but possibly confusing--nodes themselves are not part of the control plane, they are just a substrate.

Aaron Crickenberger

May 17, 2019, 6:41:25 PM
to Daniel Smith, Clayton Coleman, Stephen Augustus, Andrew Kutz, shidaqiu2018, Daniel Lipovetsky, kubernetes-sig-cluster-lifecycle, kubernete...@googlegroups.com, kubernetes-sig-architecture
https://github.com/kubernetes/website/issues/6525 was the last time I recall this being raised, and it went a bit off the rails.  I am encouraged by the enthusiasm to do better this time.

- aaron

Mike Spreitzer

May 17, 2019, 11:23:33 PM
to Aaron Crickenberger, Andrew Kutz, Daniel Lipovetsky, Daniel Smith, kubernetes-sig-architecture, kubernetes-sig-cluster-lifecycle, kubernete...@googlegroups.com, shidaqiu2018, Clayton Coleman, Stephen Augustus
I have to start by complaining about the choice (made some time ago) to use the word "node" to mean worker in specific contrast to master or control.  I am glad to see that we are drifting away from that.  In the wider world, the word "node" is often used as a synonym for "machine", and so we would surprise nobody to talk about "worker nodes" and "control [plane] nodes".

I noticed the claim that "node" is actually a level of virtualization above "machine" in Kubernetes.  OK, if that's so then we should go with "control machine" or "control plane machine".  I personally do not feel the need to include "plane" in the term, it does not seem to me to add much.

Regards,
Mike



Daniel Comnea

May 18, 2019, 7:24:22 PM
to Mike Spreitzer, Aaron Crickenberger, Andrew Kutz, Daniel Lipovetsky, Daniel Smith, kubernetes-sig-architecture, kubernetes-sig-cluster-lifecycle, kubernetes-sig-docs, shidaqiu2018, Clayton Coleman, Stephen Augustus
It's great to see this topic being addressed; it was discussed earlier this week in our sig-docs meeting as part of Q2 goals to refresh the k/website content.
Today in our website docs we have a mix of everything: "master" or "control plane nodes" (only in kubeadm topics), and "nodes" or "worker nodes".

My vote/suggestion would be to go with:
  • masters / control plane nodes -> replaced by control plane machines
  • nodes / worker nodes -> replaced by compute machines
The reason for replacing "nodes" with "machines" is to be consistent with the terminology used in Cluster API, where a Machine describes a K8s node.
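For reference, a Machine in the then-current Cluster API (v1alpha1) looks roughly like this. This is a sketch only: the machine name is made up, the label used to mark control plane machines varies by provider (the one shown here is hypothetical), and the exact schema has been changing.

```yaml
# Sketch of a Cluster API v1alpha1-era Machine object (fields vary by provider).
apiVersion: cluster.k8s.io/v1alpha1
kind: Machine
metadata:
  name: controlplane-0                      # hypothetical name
  labels:
    cluster.k8s.io/control-plane: "true"    # hypothetical marker label
spec:
  versions:
    kubelet: "1.14.1"
    controlPlane: "1.14.1"                  # present only on control plane machines
```

The point being that the Machine is the infrastructure-level object, and the Node it eventually registers as is a separate thing.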


Cheers,
Dani

P.S. - I also noticed today that the same terminology is used in the beta OpenShift v4 docs, which will be a bonus for our k/website consumers in having aligned terminology.



Paris, Eric

May 21, 2019, 5:36:41 PM
to Daniel Comnea, Mike Spreitzer, Aaron Crickenberger, Andrew Kutz, Daniel Lipovetsky, Daniel Smith, kubernetes-sig-architecture, kubernetes-sig-cluster-lifecycle, kubernetes-sig-docs, shidaqiu2018, Clayton Coleman, Stephen Augustus
Doesn't the Cluster API working group suggest that you might have machines in one cluster which are actually nodes in a different cluster? If so, moving to "machine" might be strictly worse. I'm not 100% certain of that SIG's current plans, as there seem to be some major changes in progress on the design and APIs.

-Eric

Brian Grant

May 22, 2019, 8:11:44 PM
to Aaron Crickenberger, Daniel Smith, Clayton Coleman, Stephen Augustus, Andrew Kutz, shidaqiu2018, Daniel Lipovetsky, kubernetes-sig-cluster-lifecycle, kubernetes-sig-docs, kubernetes-sig-architecture
That's the issue I remember, also.

"Control plane" is the general qualifier to apply to whatever more specific aspect is under discussion.

As Clayton pointed out, reserved control-plane nodes are not required.

Tim Hockin

May 28, 2019, 11:46:26 AM
to Brian Grant, Aaron Crickenberger, Daniel Smith, Clayton Coleman, Stephen Augustus, Andrew Kutz, shidaqiu2018, Daniel Lipovetsky, kubernetes-sig-cluster-lifecycle, kubernetes-sig-docs, kubernetes-sig-architecture
Late to the chat, but I think we're circling a topic that comes up about once a year.  It's not just the words we use but the expectations they imply.

Calling a machine (virtual or physical) a "master" implies that machines are set aside for this purpose.  While this is often true, it's not necessarily so.

Calling a machine a "master node" or a "control plane node" sort of implies that the machine is simultaneously a master and a node, though tools seem very split on whether they set things up this way.

We have "node-role.kubernetes.io/master" as a label on nodes, which is used by a handful of things to switch their behaviors (e.g. should a node be considered available for external LB bounces).
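As a concrete illustration (a sketch — the node name is hypothetical, and installers differ on whether they set this label at all), the label lives in ordinary Node metadata, so anything that wants to switch behavior just matches it with a plain label selector:

```yaml
# Excerpt of a Node object as registered by an installer that marks
# control plane machines; "cp-0" is a hypothetical node name.
apiVersion: v1
kind: Node
metadata:
  name: cp-0
  labels:
    node-role.kubernetes.io/master: ""   # value is conventionally empty
```

A component that excludes such nodes from external load balancing, say, simply filters on that label — which is exactly the behavior-switching described above.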

Some environments register dedicated control-plane machines as Nodes in the controlled cluster, some do not.

Some environments run the control plane (or parts of it) on non-dedicated nodes in the controlled cluster.

Some environments run the control plane as jobs in a different cluster.


This is a mess.  


I am all for finding better words, but I think that is a second-order concern.  I'd really like us to find some consistency in how we explain and think about these various operating modes.  Can we reduce the space?  Can we get more principled?

E.g.  I would love some rules along the lines of, and as crisp as:


* Control plane components can be run on dedicated or non-dedicated machines.
* If a machine is registered as a Node in kubernetes, it is subject to "usual" kubernetes semantics - e.g. scheduling, daemonsets, load-balancers, etc.
* If you do not want "usual" kubernetes semantics on a machine, do not register it as a Node
* Whether a node is currently running components of the control plane is not semantically meaningful
* Specific Kubernetes semantics (e.g. scheduling, daemonsets, load-balancers, etc.) should be governed by specific and orthogonal controls (e.g. labels, annotations, fields)


Those are, pretty obviously, not 100% correct or sufficient, but they get to my point.


Daniel Smith

May 28, 2019, 11:52:51 AM
to Tim Hockin, Brian Grant, Aaron Crickenberger, Clayton Coleman, Stephen Augustus, Andrew Kutz, shidaqiu2018, Daniel Lipovetsky, kubernetes-sig-cluster-lifecycle, kubernetes-sig-docs, kubernetes-sig-architecture
On Tue, May 28, 2019 at 8:46 AM Tim Hockin <tho...@google.com> wrote:
Late to the chat, but I think we're circling on a topic that comes up about once a year.  It's not just the words we use but they expectations they imply.

Calling a machine (virt or phys) a "Master" implies that machines are set aside for this purpose.  while this is often true, it's not necessarily so.

Calling a machine a "master node" or a "control plane node" sort of implies that the machine is simultaneously a master and a node, though it seems very split on whether tools set things up this way.

We have "node-role.kubernetes.io/master" as a label on nodes, which is used by a handful of things to switch their behaviors (e.g. should a node be considered available for external LB bounces).

Some environments register dedicated control-plane machines as Nodes in the controlled cluster, some do not.

Some environments run the control plane (or parts of it) on non-dedicated nodes in the controlled cluster.

Some environments run the control plane as jobs in a different cluster.


This is a mess.  


I am all for finding better words, but I think that is a second-order concern.  I'd really like us to find some consistency in how we explain and think about these various operating modes.  Can we reduce the space?  Can we get more principled?

E.g.  I would love some rules along the lines of, and as crisp as:


* Control plane components can be run on dedicated or non-dedicated machines.
* If a machine is registered as a Node in kubernetes, it is subject to "usual" kubernetes semantics - e.g. scheduling, daemonsets, load-balancers, etc.
* If you do not want "usual" kubernetes semantics on a machine, do not register it as a Node
 
Or taint it etc. (Do we need an admission controller that e.g. does an RBAC check before permitting someone to make a pod with a given toleration?)
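As a sketch of the taint route (kubeadm, for example, taints control plane nodes along these lines; the pod below is purely hypothetical):

```yaml
# A node carrying the NoSchedule taint node-role.kubernetes.io/master
# repels ordinary pods; a pod must explicitly tolerate the taint to land there.
apiVersion: v1
kind: Pod
metadata:
  name: cp-debug          # hypothetical pod name
spec:
  containers:
  - name: shell
    image: busybox
    command: ["sleep", "3600"]
  tolerations:
  - key: node-role.kubernetes.io/master
    operator: Exists
    effect: NoSchedule
```

The admission-control question above then becomes: who is allowed to add that tolerations stanza?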

* Whether a node is currently running components of the control plane is not semantically meaningful

It is in terms of how severe a container escape is, which is why I maintain that the relevant concept is that of a security domain.

David Emory Watson

May 28, 2019, 11:55:37 AM
to Tim Hockin, Brian Grant, Aaron Crickenberger, Daniel Smith, Clayton Coleman, Stephen Augustus, Andrew Kutz, shidaqiu2018, Daniel Lipovetsky, kubernetes-sig-cluster-lifecycle, kubernetes-sig-docs, kubernetes-sig-architecture
I understand the desire to be precise; maybe this is too complicated, though...

> Whether a node is currently running components of the control plane is not semantically meaningful

From a Cluster API (CAPI) perspective, `Machines` matter. From a user's perspective, they don't. What if we just talk about `Clusters`/`ControlPlanes` and `Machines`?

David.

Vallery Lancey

May 28, 2019, 11:59:11 AM
to David Emory Watson, Aaron Crickenberger, Andrew Kutz, Brian Grant, Clayton Coleman, Daniel Lipovetsky, Daniel Smith, Stephen Augustus, Tim Hockin, kubernetes-sig-architecture, kubernetes-sig-cluster-lifecycle, kubernetes-sig-docs, shidaqiu2018
A thought: might this be something we could punt to a working group (comedic as it may sound)? A lot of strong points have been raised about the complexity/inaccuracies of the terminology.

David Emory Watson

May 28, 2019, 12:06:24 PM
to Tim Hockin, Brian Grant, Aaron Crickenberger, Daniel Smith, Clayton Coleman, Stephen Augustus, Andrew Kutz, shidaqiu2018, Daniel Lipovetsky, kubernetes-sig-cluster-lifecycle, kubernetes-sig-docs, kubernetes-sig-architecture
FWIW, I prefer `Cluster` over `ControlPlane` for its simplicity...

David.

Clayton Coleman

May 28, 2019, 12:28:26 PM
to Tim Hockin, Brian Grant, Aaron Crickenberger, Daniel Smith, Clayton Coleman, Stephen Augustus, Andrew Kutz, shidaqiu2018, Daniel Lipovetsky, kubernetes-sig-cluster-lifecycle, kubernetes-sig-docs, kubernetes-sig-architecture


On May 28, 2019, at 11:46 AM, 'Tim Hockin' via kubernetes-sig-architecture <kubernetes-si...@googlegroups.com> wrote:

Late to the chat, but I think we're circling on a topic that comes up about once a year.  It's not just the words we use but the expectations they imply.

Calling a machine (virt or phys) a "Master" implies that machines are set aside for this purpose.  While this is often true, it's not necessarily so.

Calling a machine a "master node" or a "control plane node" sort of implies that the machine is simultaneously a master and a node, though tools seem very split on whether they set things up this way.

We have "node-role.kubernetes.io/master" as a label on nodes, which is used by a handful of things to switch their behaviors (e.g. should a node be considered available for external LB bounces).

Some environments register dedicated control-plane machines as Nodes in the controlled cluster, some do not.

Some environments run the control plane (or parts of it) on non-dedicated nodes in the controlled cluster.

Some environments run the control plane as jobs in a different cluster.


This is a mess.  


I am all for finding better words, but I think that is a second-order concern.  I'd really like us to find some consistency in how we explain and think about these various operating modes.  Can we reduce the space?  Can we get more principled?

E.g.  I would love some rules along the lines of, and as crisp as:


* Control plane components can be run on dedicated or non-dedicated machines.
* If a machine is registered as a Node in kubernetes, it is subject to "usual" kubernetes semantics - e.g. scheduling, daemonsets, load-balancers, etc.
* If you do not want "usual" kubernetes semantics on a machine, do not register it as a Node
* Whether a node is currently running components of the control plane is not semantically meaningful
* Specific Kubernetes semantics (e.g. scheduling, daemonsets, load-balancers, etc.) should be governed by specific and orthogonal controls (e.g. labels, annotations, fields)
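A minimal sketch of how these rules might look in practice: each behavior is keyed off its own specific, orthogonal control (a label or a taint), and the role label by itself carries no semantics. All label and taint keys below are illustrative, not the project's actual ones.

```python
# Sketch of "specific semantics governed by specific and orthogonal controls".
# Label/taint keys are illustrative placeholders, not official Kubernetes keys.

def eligible_for_lb(node):
    # A load balancer checks a scoped exclusion label, not the node role.
    return "lb.example.io/exclude-from-endpoints" not in node.get("labels", {})

def schedulable(node, tolerated=()):
    # The scheduler checks taints; ordinary pods tolerate none of them.
    return all(t["key"] in tolerated for t in node.get("taints", []))

control_plane_node = {
    "labels": {"node-role.kubernetes.io/master": ""},  # informational only
    "taints": [{"key": "node-role.kubernetes.io/master", "effect": "NoSchedule"}],
}

# The role label alone decides nothing; each behavior has its own control:
assert eligible_for_lb(control_plane_node)   # no LB-exclusion label set
assert not schedulable(control_plane_node)   # taint keeps ordinary pods off
assert schedulable(control_plane_node, tolerated=("node-role.kubernetes.io/master",))
```

The point of the sketch is that dropping or renaming the role label changes nothing about scheduling or load balancing, because neither function ever looks at it.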

Yes, we should enforce this strongly going forward and start cleaning up the mistakes (Jordan has caught a couple before they could land, but it’s commonly confusing to new contributors)

 LBs should be keying off a taint or a scoped label (“lb.service.k8s.io/exclude-from-endpoints”), not coupled to role.

I don’t want us to continue coupling “type of node” across multiple concepts (lb, scheduling, volumes) either, and we need to be strict about not letting it continue.

This is something I can spend time coordinating, given that I’m spending time fixing it / shoring up the gaps when it does happen.

Tim Hockin

unread,
May 28, 2019, 1:26:36 PM5/28/19
to Daniel Smith, Brian Grant, Aaron Crickenberger, Clayton Coleman, Stephen Augustus, Andrew Kutz, shidaqiu2018, Daniel Lipovetsky, kubernetes-sig-cluster-lifecycle, kubernetes-sig-docs, kubernetes-sig-architecture
On Tue, May 28, 2019 at 8:52 AM Daniel Smith <dbs...@google.com> wrote:


On Tue, May 28, 2019 at 8:46 AM Tim Hockin <tho...@google.com> wrote:
Late to the chat, but I think we're circling on a topic that comes up about once a year.  It's not just the words we use but the expectations they imply.

Calling a machine (virt or phys) a "Master" implies that machines are set aside for this purpose.  While this is often true, it's not necessarily so.

Calling a machine a "master node" or a "control plane node" sort of implies that the machine is simultaneously a master and a node, though tools seem very split on whether they set things up this way.

We have "node-role.kubernetes.io/master" as a label on nodes, which is used by a handful of things to switch their behaviors (e.g. should a node be considered available for external LB bounces).

Some environments register dedicated control-plane machines as Nodes in the controlled cluster, some do not.

Some environments run the control plane (or parts of it) on non-dedicated nodes in the controlled cluster.

Some environments run the control plane as jobs in a different cluster.


This is a mess.  


I am all for finding better words, but I think that is a second-order concern.  I'd really like us to find some consistency in how we explain and think about these various operating modes.  Can we reduce the space?  Can we get more principled?

E.g.  I would love some rules along the lines of, and as crisp as:


* Control plane components can be run on dedicated or non-dedicated machines.
* If a machine is registered as a Node in kubernetes, it is subject to "usual" kubernetes semantics - e.g. scheduling, daemonsets, load-balancers, etc.
* If you do not want "usual" kubernetes semantics on a machine, do not register it as a Node
 
Or taint it etc. (Do we need an admission controller that e.g. does an RBAC check before permitting someone to make a pod with a given toleration?)

My point was that registering as a node enables "usual" semantics.  That includes taints/tolerations as an exclusionary mechanism that is available.
 
* Whether a node is currently running components of the control plane is not semantically meaningful

It is in terms of how severe a container escape is, which is why I maintain that the relevant concept is that of a security domain.

It is only semantically meaningful if the user decides it is.  We cannot decide that in their stead.  If we want to formalize security-something-something that is fine, but it starts to feel a lot like taints to me...

Tim Hockin

unread,
May 28, 2019, 1:29:20 PM5/28/19
to Clayton Coleman, Brian Grant, Aaron Crickenberger, Daniel Smith, Clayton Coleman, Stephen Augustus, Andrew Kutz, shidaqiu2018, Daniel Lipovetsky, kubernetes-sig-cluster-lifecycle, kubernetes-sig-docs, kubernetes-sig-architecture
On Tue, May 28, 2019 at 9:28 AM Clayton Coleman <ccol...@redhat.com> wrote:


On May 28, 2019, at 11:46 AM, 'Tim Hockin' via kubernetes-sig-architecture <kubernetes-si...@googlegroups.com> wrote:

Late to the chat, but I think we're circling on a topic that comes up about once a year.  It's not just the words we use but the expectations they imply.

Calling a machine (virt or phys) a "Master" implies that machines are set aside for this purpose.  While this is often true, it's not necessarily so.

Calling a machine a "master node" or a "control plane node" sort of implies that the machine is simultaneously a master and a node, though tools seem very split on whether they set things up this way.

We have "node-role.kubernetes.io/master" as a label on nodes, which is used by a handful of things to switch their behaviors (e.g. should a node be considered available for external LB bounces).

Some environments register dedicated control-plane machines as Nodes in the controlled cluster, some do not.

Some environments run the control plane (or parts of it) on non-dedicated nodes in the controlled cluster.

Some environments run the control plane as jobs in a different cluster.


This is a mess.  


I am all for finding better words, but I think that is a second-order concern.  I'd really like us to find some consistency in how we explain and think about these various operating modes.  Can we reduce the space?  Can we get more principled?

E.g.  I would love some rules along the lines of, and as crisp as:


* Control plane components can be run on dedicated or non-dedicated machines.
* If a machine is registered as a Node in kubernetes, it is subject to "usual" kubernetes semantics - e.g. scheduling, daemonsets, load-balancers, etc.
* If you do not want "usual" kubernetes semantics on a machine, do not register it as a Node
* Whether a node is currently running components of the control plane is not semantically meaningful
* Specific Kubernetes semantics (e.g. scheduling, daemonsets, load-balancers, etc.) should be governed by specific and orthogonal controls (e.g. labels, annotations, fields)

Yes, we should enforce this strongly going forward and start cleaning up the mistakes (Jordan has caught a couple before they could land, but it’s commonly confusing to new contributors)

 LBs should be keying off a taint or a scoped label (“lb.service.k8s.io/exclude-from-endpoints”), not coupled to role.

We have gone round-and-round on this specific one.  There is a label that is supposed to control this, but it's a very cloud-specific thing -- whether a node is needed as a second-hop at all.  I'd prefer it become more cloud-centric, I think.
 
I don’t want us to continue coupling “type of node” across multiple concepts (lb, scheduling, volumes) either, and we need to be strict about not letting it continue.

This was my main point - "type of node" should not be a thing.

David Emory Watson

unread,
May 28, 2019, 1:57:38 PM5/28/19
to Tim Hockin, Clayton Coleman, Brian Grant, Aaron Crickenberger, Daniel Smith, Clayton Coleman, Stephen Augustus, Andrew Kutz, shidaqiu2018, Daniel Lipovetsky, kubernetes-sig-cluster-lifecycle, kubernetes-sig-docs, kubernetes-sig-architecture
I get it. I don't think the majority of people think in those terms...

Your axioms would be good documentation.

David.

David Emory Watson

unread,
May 29, 2019, 6:09:08 AM5/29/19
to Tim Hockin, Aaron Crickenberger, Andrew Kutz, Brian Grant, Clayton Coleman, Clayton Coleman, Daniel Lipovetsky, Daniel Smith, Stephen Augustus, kubernetes-sig-architecture, kubernetes-sig-cluster-lifecycle, kubernetes-sig-docs, shidaqiu2018
Maybe we should drop the term node when referring to Clusters/ControlPlanes… It will match CAPI, and address the user concerns.

David.

Stephen Augustus

unread,
Jun 9, 2020, 7:14:16 PM6/9/20
to David Emory Watson, Tim Hockin, Aaron Crickenberger, Andrew Kutz, Brian Grant, Clayton Coleman, Clayton Coleman, Daniel Lipovetsky, Daniel Smith, kubernetes-sig-architecture, kubernetes-sig-cluster-lifecycle, kubernetes-sig-docs, shidaqiu2018, Stephen Augustus, georgedan...@gmail.com, cel...@cncf.io
It's been on my mind and this feels like a good time to pick this discussion back up.
A few people have reached out to me and I've seen some awesome PRs[1] and issues[2] come through discussing using more inclusive language across the board.

So...
  • Who's interested in working together on this?
  • What do we think are some good next steps?
To echo Vallery's previous email, maybe this actually does merit a Working Group, given we've had some stops and starts in the conversation?

-- Stephen

Zach Corleissen

unread,
Jun 9, 2020, 7:57:22 PM6/9/20
to kubernetes-sig-architecture
I'm interested. I agree that there's enough cross-SIG intersection to make a working group valuable and necessary.

Suggested next steps:

1. Form a WG
2. Get butts in seats
3. Survey the project for instances of hateful language
4. Identify dependencies
5. Resolve
    Cc: Fabrizio Pandini <fabrizi...@gmail.com>, kubernetes-sig-cluster-lifecycle <kubernetes-sig-cluster-life...@googlegroups.com>

    Subject: Re: The way we discuss control plane members



    Typically, apiserver/scheduler/controller-manager are deployed on a single node if they are deployed as static pods. Otherwise, an existing scheduler distributes them across an existing cluster.




    I do like "control node" for its simplicity. But, when I think of "node," I think of a kubelet on a machine. And sometimes the above components run in another Kubernetes cluster, so "node" might be misleading.



    I kind of like "control replica," but it's a mouthful, as is "control node." If we're going to throw out "master," why not consider throwing out "control plane" as well? Is there a more concise alternative?


    What does Kubernetes translate to?
    Pilot, or shipmaster <http://classic.studylight.org/lex/grk/view.cgi?number=2942>. So if 'master' is short for the latter, and we don't like it, how about pilot?


    Daniel
    On Thu, May 16, 2019 at 10:49 AM 'Michael Taufen' via kubernetes-sig-cluster-lifecycle <kubernetes-sig-cluster-life...@googlegroups.com> wrote:


    +1


    Maybe "control node?"

    Thinking in terms of control-plane and data-plane is more accurate anyway; "master" is ambiguous and also a misnomer, as it suggests the control plane components must run on the same machine.



    Whatever we pick, it should be short and easy to say, and make sense the first time you hear it. I think this is half the reason people still use "master."


    From: Fabrizio Pandini
    <fabrizi...@gmail.com>
    Date: Wed, May 15, 2019 at 11:04 PM
    To: kubernetes-sig-cluster-lifecycle



    +1
    Fyi during HA work we are trying to use consistently control-plane node ("bootstrap control-plane node" and "secondary control-plane nodes" or "joining control-plane node" when referring to the init/join workflow).


To unsubscribe from this group and stop receiving emails from it, send an email to kubernetes-sig-cluster-lifecycle+unsubscribe@googlegroups.com.
To post to this group, send email to kubernetes-sig-cluster-life...@googlegroups.com.

--
You received this message because you are subscribed to the Google Groups "kubernetes-sig-architecture" group.
To unsubscribe from this group and stop receiving emails from it, send an email to kubernetes-sig-architecture+unsub...@googlegroups.com.


Benjamin Elder

unread,
Jun 9, 2020, 11:05:20 PM6/9/20
to Zach Corleissen, kubernetes-sig-architecture
There's been a lot of discussion about how these things are run and described in various downstream situations, but as far as upstream tools are concerned, these components are pretty universally run on dedicated nodes.

This as far as I know includes at least: kops, kube-up.sh, cluster-api, kubespray, kind, kubeadm, ... with the sole exception being single (do-everything) node clusters.

I don't see any reason we can't standardize on something more precise for these and I'd like to help clean that up.
As far as I can tell we already more or less agreed to control plane / control plane node before, so let's implement this in the tools and docs we control?

Aside from the kubeadm "master" node-role taint used to implement dedicated nodes, KIND has only ever referred to these as "control-plane" nodes since the beginning ... I don't believe we've had a single complaint of confusion etc. related to this.

Count me in.


Taylor Dolezal

unread,
Jun 10, 2020, 10:54:06 AM6/10/20
to Carlos Tadeu Panato Jr, David Emory Watson, Tim Hockin, Aaron Crickenberger, Andrew Kutz, Brian Grant, Clayton Coleman, Clayton Coleman, Daniel Lipovetsky, Daniel Smith, kubernetes-sig-architecture, kubernetes-sig-cluster-lifecycle, kubernetes-sig-docs, shidaqiu2018, Stephen Augustus, georgedan...@gmail.com, cel...@cncf.io, Stephen Augustus
Ditto! Count me in. I'd love to help on this front.

Sincerely,

Taylor Dolezal


On Wed, Jun 10, 2020 at 7:44 AM, Carlos Tadeu Panato Jr <cta...@gmail.com> wrote:
I would like to join the WG if possible to help in the work



On 5/16/19, 7:40 PM, "shidaqiu2018" <shidaqiu2018@gmail.com> wrote:

    +1
    Sounds good!
    But pilot is a concept in istio.
    Doesn't that cause confusion?




Stephen Augustus

unread,
Jun 12, 2020, 11:55:36 PM6/12/20
to Nori Heikkinen, kubernetes-sig-architecture, kubernete...@googlegroups.com, kubernetes-sig-contribex, kubernetes-sig-cluster-lifecycle
Hey Nori (and everyone else who volunteered),

It's really great to see so many people step up!
I'm on an all-company day off today, but I'm planning to send an official note to kick off the Working Group formation process on Monday with more details (you'll see what we have in mind is pretty close to what you mentioned).

-- Stephen

On Fri, Jun 12, 2020 at 6:31 PM 'Nori Heikkinen' via kubernetes-sig-cluster-lifecycle <kubernetes-sig-c...@googlegroups.com> wrote:
hey folks -- new to this group; not new to k8s (i co-lead the GKE SRE team at Google).  i'd love to see this terminology change as well.

Stephen: i couldn't quite tell from your question if you're volunteering to lead/co-lead this effort, or if you'd like someone else to step up.  if the former and you'd like a co-lead, i hereby nominate myself.  if you're happy to run with this one, please add me to your list of volunteers.  if you'd like someone to lead other than you, i also nominate myself. :)

as far as next steps go: a Working Group sounds official (i told you i'm new here), and perhaps good for the amount of scrubbing this might require?  so perhaps something like:

1. Establish WG
2. Continue to put out a call for volunteers (unless the above number of folks is sufficient -- I think we have 7) 
3. Figure out how we're going to agree on new terminology
4. Figure out a plan for doing it.
5. Do it!

thoughts?  guidance?  happy to contribute however i can here, anywhere from offering a cheerleading "+1!" up through leading, as it's helpful.

-nh

On Thursday, June 11, 2020 at 5:15:05 AM UTC-7 nziada wrote:
+1, I would like to help as well

From: kubernetes-sig-c...@googlegroups.com <kubernetes-sig-c...@googlegroups.com> on behalf of Stephen Augustus <Ste...@agst.us>
Sent: Tuesday, June 9, 2020 7:13 PM

To: David Emory Watson <davide...@gmail.com>
Cc: Tim Hockin <tho...@google.com>; Aaron Crickenberger <spi...@google.com>; Andrew Kutz <ak...@vmware.com>; Brian Grant <brian...@google.com>; Clayton Coleman <smarter...@gmail.com>; Clayton Coleman <ccol...@redhat.com>; Daniel Lipovetsky <dan...@platform9.com>; Daniel Smith <dbs...@google.com>; kubernetes-sig-architecture <kubernetes-si...@googlegroups.com>; kubernetes-sig-cluster-lifecycle <kubernetes-sig-c...@googlegroups.com>; kubernetes-sig-docs <kubernete...@googlegroups.com>; shidaqiu2018 <shidaq...@gmail.com>; Stephen Augustus <steph...@agst.us>; georgedan...@gmail.com <georgedan...@gmail.com>; cel...@cncf.io <cel...@cncf.io>


wfe...@google.com

unread,
Jun 15, 2020, 12:25:07 PM6/15/20
to kubernetes-sig-architecture

One other place I can think of where we tend to use the term "master" is for the winner of leader election. This can mean something like the primary working instance of the scheduler/controller-manager in an HA cluster, or the etcd instance responsible for determining writes in an etcd cluster. For each of these, a term like "primary" would seem to work.

Daniel Smith

Jun 15, 2020, 12:53:42 PM
to wfe...@google.com, kubernetes-sig-architecture
On Mon, Jun 15, 2020 at 9:25 AM 'wfe...@google.com' via kubernetes-sig-architecture <kubernetes-si...@googlegroups.com> wrote:

One other place I can think of where we tend to use the term "master" is for the winner of leader election. This can mean something like the primary working instance of the scheduler/controller-manager in an HA cluster, or the etcd instance responsible for determining writes in an etcd cluster. For each of these, a term like "primary" would seem to work.

"lock holder" or "active" are the terms I've heard people use for controllers. I've actually never heard anyone use "master" to mean holder of the lock. And I think someone using the term that way would confuse all listeners, since people saying "master" are (always, in my experience) actually referring to the control plane or a control plane node. And, FTR, since I was there, I recall "active with passive standbys" were the actual words people used when originally discussing the controller manager design. So if you want a canonical word, I recommend that.

For etcd, I have never heard anyone use a word other than "leader".
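The "active with passive standbys" pattern described above can be sketched with a toy lease-based lock. This is purely illustrative (the `LeaseLock` class and candidate names are hypothetical, and real controller-manager leader election uses Kubernetes Lease objects via client-go): the active instance holds and renews a lease, standbys retry and only take over once the lease expires.

```python
class LeaseLock:
    """Toy in-memory lease: one holder at a time, renewable, expiring.

    Hypothetical sketch of active/passive-standby election; not the
    actual client-go leaderelection implementation.
    """

    def __init__(self, ttl):
        self.ttl = ttl
        self.holder = None
        self.expires_at = 0.0

    def try_acquire(self, candidate, now):
        # A candidate becomes (or stays) the active holder if the lease
        # is free, expired, or already held by that same candidate.
        if self.holder is None or now >= self.expires_at or self.holder == candidate:
            self.holder = candidate
            self.expires_at = now + self.ttl
            return True
        return False  # remains a passive standby

lock = LeaseLock(ttl=10)
assert lock.try_acquire("cm-a", now=0)       # cm-a becomes active
assert not lock.try_acquire("cm-b", now=5)   # cm-b stands by
assert lock.try_acquire("cm-a", now=8)       # active instance renews its lease
assert lock.try_acquire("cm-b", now=20)      # lease expired; a standby takes over
```

Note how "lock holder"/"active" fall naturally out of this model, with no need for "master" anywhere in the vocabulary.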
 

Clayton Coleman

Jun 15, 2020, 2:47:23 PM
to Daniel Smith, wfe...@google.com, kubernetes-sig-architecture
"Leader, lock owner, lease holder, active" is the full set of terms I've ever seen us agree to, and I know when I've reviewed this code if I had seen "master" show up I would have had the submitter change to use correct terminology.

Fortunately, we are well on the way to eliminating the use of the node-role label to denote "feature enablement" within Kube (https://github.com/kubernetes/enhancements/issues/1143), and the use of the "master" node role as special was the only real problem there. So deployers of Kube can, as of 1.19, start using the beta levels to break any dependency on the old problematic terminology, and ~1.21 or 1.22 will remove the coupling entirely.

Note that deployers may have integrators, existing deployments, and end-user administrative tools that assume the master node-role has meaning, with conventions for their users that depend on it. The KEP above does not fully drive through that much larger effort; it is up to each deployer to remove that dependency on their own.

As we have noted in the original issue, 'control-plane' is a more suitable role name.
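During such a migration window, tooling that selects control plane nodes by role label may need to match both the new role and the legacy one. A minimal sketch (the node data here is hypothetical; real clusters expose these labels through the Kubernetes API, and the label keys shown are the well-known `node-role.kubernetes.io/*` keys):

```python
# Hypothetical node objects; real clusters expose labels via the Kubernetes API.
nodes = [
    {"name": "node-1", "labels": {"node-role.kubernetes.io/control-plane": ""}},
    {"name": "node-2", "labels": {"node-role.kubernetes.io/master": ""}},  # legacy role
    {"name": "node-3", "labels": {}},
]

def control_plane_nodes(nodes):
    # While the old role is being phased out, a deployer may need to match
    # both the new "control-plane" role and the legacy "master" role.
    roles = (
        "node-role.kubernetes.io/control-plane",
        "node-role.kubernetes.io/master",
    )
    return [n["name"] for n in nodes if any(r in n["labels"] for r in roles)]

print(control_plane_nodes(nodes))  # ['node-1', 'node-2']
```

Once the legacy role is gone everywhere, the second entry in `roles` can simply be dropped.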



Celeste Horgan

Jun 15, 2020, 7:26:57 PM
to kubernetes-sig-architecture
I've been lurking, but let it be known: I'm in too :)

Celeste Horgan

On Friday, June 12, 2020 at 8:55:36 PM UTC-7, Stephen Augustus wrote:
Hey Nori (and everyone else who volunteered),

It's really great to see so many people step up!
I'm on an all-company day off today, but I'm planning to send an official note to kick off the Working Group formation process on Monday with more details (you'll see what we have in mind is pretty close to what you mentioned).

-- Stephen


Abhisek Purwar

Jun 16, 2020, 1:44:44 AM
to Celeste Horgan, kubernetes-sig-architecture
I would like to contribute to this.

Thanks 
Abhisek 

Sent from my iPhone

Stephen Augustus

Jun 16, 2020, 2:10:15 AM
to Andrew Kutz, Nori Heikkinen, kubernetes-sig-architecture, kubernete...@googlegroups.com, kubernetes-sig-contribex, kubernetes-sig-cluster-lifecycle
Hey everyone,

I've opened another thread to discuss Working Group formation.

-- Stephen

On Sat, Jun 13, 2020 at 10:39 PM Andrew Kutz <ak...@vmware.com> wrote:
Hi all,

I’m also more than happy to help lead this charge. I feel quite strongly about it, and I’m glad to see it is once again picking up steam. I let it wither a bit last year after I raised the issue because it is not uncontentious, and I did not want to let the roar of the conversation eclipse the reason we were having it in the first place.

-- 
-a

Andrew Kutz
Engineer

On Jun 12, 2020, at 10:55 PM, Stephen Augustus <steph...@agst.us> wrote:

