Bare Metal SIG


Joseph Jacks

Oct 7, 2016, 11:56:36 AM
to kuberne...@googlegroups.com, kubernet...@googlegroups.com, Rakesh Malhotra, Isaac Arias

Hi All,


At Apprenda, we have many large clients, OSS efforts, and product initiatives underway to improve the operational experience of running Kubernetes on bare metal. I thought it would be useful to create and start leading a SIG for this area specifically, as we are extremely interested in contributing our ideas, code, and best practices to the community: improving the usability, documentation, implementation approaches, and standards for designing, deploying, and operating Kubernetes clusters on metal, specifically in physical private data center environments.


I see a fair bit of intersection with the Cluster-Lifecycle and Cluster-Ops SIGs, but given the complexities and specific challenges here, it seemed worth proposing this separately.


A few questions:

  1. How can we outline the objectives of the SIG?
    • Use case definitions
    • Outline problem areas and challenges with the existing upstream UX
    • Major differences in deploying and running on-prem/bare metal vs. on public cloud VMs/instances
    • ...

  2. Who is interested in collaborating here? (I know CoreOS has some exciting projects in this area.)

  3. Anything I am missing?


Best,

JJ.



Tim Hockin

Oct 7, 2016, 12:02:31 PM
to kubernet...@googlegroups.com, kuberne...@googlegroups.com, Rakesh Malhotra, Isaac Arias
I have a note on my desk that simply says "sig-brownfield". The idea
was that it *might* be interesting to have a discussion forum for
people deploying Kubernetes into existing environments, which has high
correlation with on-prem. There are many issues, but as you point
out, they almost all overlap with other functional SIGs - auth,
network, storage, cluster*, etc.

Spencer Smith

Oct 7, 2016, 12:10:06 PM
to Kubernetes developer/contributor discussion, kubernet...@googlegroups.com, rmal...@apprenda.com, iar...@apprenda.com
This is an interesting point. A "sig-brownfield" or "sig-on-prem" could be a good way to tackle this problem: focusing not only on bare metal deploys, but on the full flow, from booting the infrastructure with something like MAAS all the way to ensuring deploys work as expected with private registries and internal package repos. I know we've got several customers at Solinea that are interested in those aspects.

Justin Garrison

Oct 7, 2016, 12:12:55 PM
to Kubernetes developer/contributor discussion, kubernet...@googlegroups.com, rmal...@apprenda.com, iar...@apprenda.com
Yes please! My only concern is that this could spin off into SIGs for each cloud environment. I think it would be very hard to justify why this is needed over the intricacies of running in AWS/GCE/Azure/etc., or even on-prem with VMware/OpenStack.

With that being said, we'd also need to make sure the scope is limited to Kubernetes. Bare metal environments are far too broad to try to address many of the possible topics I could see being asked. Some of the topics I would suggest we avoid are:
  • What hardware should I buy/is supported
  • What OS should I use
  • How do I provision $OS to $HARDWARE
  • What load balancer should I use
Rather, I think this SIG could benefit from people sharing experience with generic setups outside a public cloud (on-prem VMs, Raspberry Pi clusters, etc.), and it could help surface limitations and problems users should expect along the way. Most documentation targets kube-up/kops, which obviously don't apply, but some newer tools (kubeadm) do.

Maybe the SIG could be called on-prem instead of bare metal, to cover more use cases and setups that, in my experience, all have similar limitations.

Alexis Richardson

Oct 7, 2016, 12:14:51 PM
to Justin Garrison, Kubernetes developer/contributor discussion, kubernet...@googlegroups.com, rmal...@apprenda.com, iar...@apprenda.com

VMnetes?



Chris Aaron Gaun

Oct 7, 2016, 1:32:02 PM
to Kubernetes developer/contributor discussion, kubernet...@googlegroups.com, rmal...@apprenda.com, iar...@apprenda.com
On-prem can be a hornet's nest because of its broad scope. Bare metal is a constant request from large organizations seeking to spend less on virtualization technology. They want virtualization and bare metal to stand on equal footing, at the very least.

John Giffin

Oct 7, 2016, 2:36:30 PM
to Kubernetes developer/contributor discussion, kubernet...@googlegroups.com, rmal...@apprenda.com, iar...@apprenda.com
The question "how do I provision $OS to $HARDWARE" is particularly interesting, especially from a security standpoint. PXE has some powerful features for just this task, but also some very careful security considerations. If anyone else can put a machine on the same network segment, a rogue DHCP server can highjack the process. You either need a separate control network that is tightly controlled (i.e. nothing talks on that network other than PXE) or you need PXE burned into the NIC ROMs.

-gif

Connor Doyle

Oct 7, 2016, 2:40:18 PM
to Kubernetes developer/contributor discussion, kubernet...@googlegroups.com, rmal...@apprenda.com, iar...@apprenda.com
A number of groups at Intel (ours included) would be keen to participate. In addition to what's already been mentioned here, our interest is in improving support for workloads that require a private/hybrid cloud that includes bare metal. That could include things like exposing hardware heterogeneity, data-plane acceleration, and enhanced low-level isolation (pinned/exclusive cores, etc.). So, +1!
--
Connor

Reza Mohammadi

Oct 7, 2016, 2:57:54 PM
to Kubernetes developer/contributor discussion, kubernet...@googlegroups.com, rmal...@apprenda.com, iar...@apprenda.com
We're also interested in participating. We've created a "Bare-Metal CoreOS Cluster Manager" that boots CoreOS on machines through PXE, and we're using it to provision new machines and add them to our Kubernetes clusters:

https://github.com/cafebazaar/blacksmith
https://github.com/cafebazaar/blacksmith-kubernetes

David Oppenheimer

Oct 7, 2016, 3:07:21 PM
to Reza Mohammadi, Kubernetes developer/contributor discussion, kubernet...@googlegroups.com, rmal...@apprenda.com, iar...@apprenda.com
It seems that we're on a path to end up with separate sets of SIGs to cover use cases/deployment environments, vs. technologies. I'm not sure whether that's a good or bad thing.




Klaus Ma

Oct 7, 2016, 9:40:46 PM
to Kubernetes developer/contributor discussion, remoh...@gmail.com, kubernet...@googlegroups.com, rmal...@apprenda.com, iar...@apprenda.com
+1 for "sig-on-perm", we also have some user running k8s on-perm. it'll be great to have such a SIG to discuss the solution global.

Regarding "use cases/deployment environments SIG, vs. technologies SIG", IMO, use case SIG is really helpful to the k8s community.

Thanks
Klaus



On Fri, Oct 7, 2016 at 11:57 AM, Reza Mohammadi <remoh...@gmail.com> wrote:
We're also interested to participate. We've created a "Bare-Metal CoreOS Cluster Manager" which boots CoreOS on machines through PXE, and we're using it to provision new machines and add them to our kubernetes clusters:

https://github.com/cafebazaar/blacksmith
https://github.com/cafebazaar/blacksmith-kubernetes

Bests,
Reza

On Friday, October 7, 2016 at 7:26:36 PM UTC+3:30, Joseph Jacks wrote:

Hi All,


At Apprenda, we have many large clients, OSS efforts and product initiatives underway to improve the operational experience of running Kubernetes on bare metal. I thought it would make sense and be useful to create and start leading a SIG for this area specifically as we are extremely interested in contributing our ideas, code and best practices with the community to improve the usability, documentation, implementation approaches and standards around designing, deploying and operating Kubernetes clusters on metal -- specifically in physical private data center environments. 


I see a fair bit of intersection with Cluster-Lifecycle and Cluster-Ops SIGs, but given the complexities and specific challenges here, it jumped out to propose this.


A few questions:

  • How can we outline objectives of the SIG?
    • Use case definitions
    • Outline problem areas and challenges with existing upstream UX
    • Major differences in deploying and running on-prem/bare metal vs. on public cloud compute VMs/instances
    • ...

  • 2. Who is interested in collaborating here? (I know CoreOS has some exciting projects in this area)

  • 3. Anything I am missing?


Best,

JJ.



--
You received this message because you are subscribed to the Google Groups "Kubernetes developer/contributor discussion" group.
To unsubscribe from this group and stop receiving emails from it, send an email to kubernetes-de...@googlegroups.com.
To post to this group, send email to kuberne...@googlegroups.com.

Tomasz 'Zen' Napierala

Oct 8, 2016, 9:11:13 AM
to Joseph Jacks, kuberne...@googlegroups.com, kubernet...@googlegroups.com, Rakesh Malhotra, Isaac Arias
Thanks for bringing this up. We were actually discussing a similar proposal internally at Mirantis on Thursday, but we assumed that everyone was trying to use SIG-cluster-ops as a vehicle to work on this. It looks like there is some interest, so we are very happy to join.

Bare metal/on-prem is our main focus, as our customers mostly run their own clusters. We see a lot of room for improvements, or helpers, for BM environments. A couple of months ago we discussed writing a bare metal provider with some folks, similar to the AWS/GCE providers. That idea was not accepted by the community; instead there was a proposal for more modularity in how we handle storage, networking, and other components. Our goal is simple: make bare metal a first-class citizen in the Kubernetes world. I think that might be a good motto for the new SIG.

What we see as areas of work in the near future:

- Networking on BM
We would like to see an equally pleasant experience with load balancers and with how we handle external cluster IPs. UX is crucial here, and we have a couple of ideas.

- Installation
Installation on BM is much different from other environments. We contribute to Kargo a lot, but we are also looking at the more “native” kubeadm. Still, Kargo solves some underlay problems that kubeadm does not even look at. Installation is critically important, as it is the first contact with Kubernetes on BM for many customers.

- Testing
We would like to see improvements in how e2e passes on BM; currently there are many problems in this area. It would also be nice to have more tests dedicated to BM.

- Scale
Scaling on BM is another area where we invest. It would be great to have a roadmap here and to start collaborating with SIG-Scaling.

- Storage
As with networking, it would be nice to have some UX improvements here.

So, to sum up: we have dedicated teams working on bare metal, and we are all in for creating this SIG. We can also commit to working actively, and even to taking a co-lead seat if the community is OK with that.

Regards,
--
Tomasz 'Zen’ Napierala
Mirantis
Kubernetes Engineering - Poland






Tim St. Clair

Oct 10, 2016, 10:56:24 AM
to David Oppenheimer, Reza Mohammadi, Kubernetes developer/contributor discussion, kubernet...@googlegroups.com, rmal...@apprenda.com, iar...@apprenda.com
Right now this is quite confusing for folks (SIG sprawl), and at some point we need a rationalization of the SIGs to ensure that there is enough lead coverage, and that SIGs have the ability to execute against a well-established charter.

What is ambiguous is that there are already several SIGs that cross over this topic:

- Networking
- Storage
- Scale
- Scheduling
- etc.

So where exactly would the responsibilities lie, such that we can ensure timely execution and decrease overlap?

-Tim





--
Cheers,
Timothy St. Clair

“Do all the good you can. By all the means you can. In all the ways
you can. In all the places you can. At all the times you can. To all
the people you can. As long as ever you can.”

Ihor Dvoretskyi

Oct 10, 2016, 5:44:20 PM
to Tim St. Clair, David Oppenheimer, Reza Mohammadi, Kubernetes developer/contributor discussion, kubernet...@googlegroups.com, rmal...@apprenda.com, iar...@apprenda.com
Tim,

Following this logic, only SIG-Networking, SIG-Storage, SIG-Scale, and SIG-Scheduling should exist in the Kubernetes community, but we know that the number of current SIGs is much bigger.

I would suggest placing SIG-Bare-Metal (aka SIG-On-Prem) in the same line as SIG-AWS, SIG-Azure, and SIG-OpenStack: these SIGs do not directly manage the most crucial parts of Kubernetes, but their role is incredibly important in the specific areas that they cover.

Tomasz 'Zen' Napierala

Oct 11, 2016, 7:45:22 AM
to Tim St. Clair, David Oppenheimer, Reza Mohammadi, Kubernetes developer/contributor discussion, kubernet...@googlegroups.com, Rakesh Malhotra, iar...@apprenda.com
Hi,

Naturally, in such a complex ecosystem there will be some overlaps, and we already have them. SIG-Apps is a good example: there is a lot of overlap, and at the same time the group is doing a great job. There is no single component that SIG-Apps covers; it's not an “organic” SIG, but rather a group concentrated on certain use cases. Still, it's one of the most productive SIGs, in my opinion.

I see an on-prem/bare metal SIG in a similar role. We are getting feedback from many enterprises that it is extremely hard to get bare metal requirements accepted into the Kubernetes codebase. We need to remember that after the stabilisation period, the bare metal use case will be one of the biggest vehicles for Kubernetes adoption, as we've observed with other projects (e.g. OpenStack). A bare metal SIG would be here to ensure support for those particular cases. For now, the user experience of running on bare metal is far from pleasant, and we want it to be perfect.

I understand that from a business perspective, for some companies here this is the last case to support, but we need to be an open community and help others engage without hurting core functionality. We cannot discourage people; rather, we should provide a medium to get their cases covered, with proper scrutiny from experts.

How could it be organised? I think SIG-on-prem would need to take a holistic view of bare metal support, work on proposing concrete solutions, and then proceed with them through “organic” SIGs like node, storage, and networking. There have been some individual efforts already, but without SIGs backing them it is extremely hard, as I mentioned before.

To wrap up: at Mirantis we hope to have this SIG help our big customers make their requirements visible and get proper attention.

As a side note, the recent sprawl should make us think about whether the current governance model is ideal. It might be a sign of frustration that many areas are not getting proper attention; at least, this is what I hear from different people.

Regards,

--
Tomasz 'Zen' Napierala
Kubernetes Engineering - Poland






Joseph Jacks

Nov 6, 2016, 7:48:40 PM
to Kubernetes user discussion and Q&A, timo...@gmail.com, davi...@google.com, remoh...@gmail.com, kuberne...@googlegroups.com, rmal...@apprenda.com, iar...@apprenda.com
Excellent. Thanks everyone for chiming in.

I count support from the overwhelming majority of folks here from the following organizations: Apprenda, Disney, Google, Mirantis, Intel and SoundCloud. Very excited to get this off the ground. 

I will personally co-lead this SIG from Apprenda's side with maybe one other rep. If someone would like to co-lead this SIG, please speak up!

Please look out for another note in kubernetes-users announcing SIG Bare Metal, with a proposed meeting time to flesh things out further, along with goals, etc.

Thanks,
JJ.

Tomasz 'Zen' Napierala

Nov 7, 2016, 5:41:21 AM
to Joseph Jacks, Kubernetes user discussion and Q&A, timo...@gmail.com, davi...@google.com, remoh...@gmail.com, kuberne...@googlegroups.com, rmal...@apprenda.com, iar...@apprenda.com
I’m happy to help lead the SIG from the Mirantis side. I would suggest moving the conversation to kubernetes-dev.

Regards,

Sarah Novotny

Nov 7, 2016, 11:13:48 AM
to Tomasz 'Zen' Napierala, Joseph Jacks, Kubernetes user discussion and Q&A, Tim St. Clair, David Oppenheimer, remoh...@gmail.com, kubernetes-dev, Rakesh Malhotra, Isaac Arias
Hi All, 

Did I miss a proposal document, including leadership and charter?

Sarah



Joseph Jacks

Nov 7, 2016, 11:43:53 AM
to Kubernetes developer/contributor discussion, jack...@gmail.com, kubernet...@googlegroups.com, timo...@gmail.com, davi...@google.com, remoh...@gmail.com, rmal...@apprenda.com, iar...@apprenda.com
Awesome. Thanks, Tomasz! I added you to the SIG charter Google Doc: https://groups.google.com/forum/#!topic/kubernetes-dev/Uu5bWhJ23II 

Dalton Hubble

Nov 10, 2016, 3:18:09 PM
to Kubernetes user discussion and Q&A, kuberne...@googlegroups.com, jack...@gmail.com, timo...@gmail.com, davi...@google.com, remoh...@gmail.com, rmal...@apprenda.com, iar...@apprenda.com
I'm happy to help from the CoreOS side.

CoreOS docs and Tectonic make use of https://github.com/coreos/coreos-baremetal for bare-metal clusters of all sorts, but there are Kubernetes-on-bare-metal-specific topics that would be interesting to discuss in a SIG.

Joseph Jacks

Nov 10, 2016, 3:28:16 PM
to Dalton Hubble, Kubernetes user discussion and Q&A, kuberne...@googlegroups.com, timo...@gmail.com, davi...@google.com, remoh...@gmail.com, rmal...@apprenda.com, iar...@apprenda.com
Awesome! Thanks, Dalton. I'll add you to the doc. We had also previously added the CoreOS Bare Metal link you referenced.

JJ.

Sandeep Srinivasa

Nov 21, 2016, 1:46:34 AM
to Kubernetes developer/contributor discussion, kubernet...@googlegroups.com, rmal...@apprenda.com, iar...@apprenda.com
Very much needed!
I would argue that k8s is the kind of disruptor that could replace the ROI of something like AWS with cheaper hardware. I can completely see people building OVH clusters on k8s that are much cheaper, but almost as reliable as AWS.

However, bare metal has heavier lifting involved; for example, load balancers for bare metal don't exist. I have filed an issue on the documentation needed for ingress architecture on metal (https://github.com/kubernetes/ingress/issues/17).

I don't consider myself very qualified... but this is essential and blocking for us.
Thanks!
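To make the gap concrete: without a cloud provider, a Service of `type: LoadBalancer` never receives an external IP, so bare-metal users typically fall back to a NodePort Service and an external balancer they operate themselves. A minimal sketch; the app name and ports below are illustrative assumptions, not from this thread:

```yaml
# Sketch: exposing a web app on bare metal without a cloud load balancer.
# "web" and the port numbers are hypothetical placeholders.
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: NodePort        # reachable on every node's IP at nodePort
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 8080
      nodePort: 30080   # the operator must point an external haproxy/nginx
                        # at <each-node-ip>:30080 themselves
```

The manual wiring of that external haproxy/nginx is exactly the piece a cloud provider automates, and the piece the linked ingress documentation issue is asking to have documented.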

r...@rackn.com

Nov 28, 2016, 4:13:28 PM
to Kubernetes user discussion and Q&A, kuberne...@googlegroups.com, rmal...@apprenda.com, iar...@apprenda.com
I'd like to invite people interested in this topic to join the Cluster Ops SIG meeting this week, on Thursday at 1 PM PT.

These concerns are squarely in line with Cluster Ops and would be welcome on our agenda. There is plenty of time and attention to bring up these issues there in a collaborative way. I don't see a need for a dedicated SIG, and I believe the duplication is distracting for the community.

I'm at re:Invent this week if anyone wants to talk 1x1.

Rob

Sandeep Srinivasa

Nov 28, 2016, 10:58:21 PM
to r...@rackn.com, rmal...@apprenda.com, kuberne...@googlegroups.com, iar...@apprenda.com, Kubernetes user discussion and Q&A


On Nov 29, 2016 02:43, <r...@rackn.com> wrote:
> There is plenty of time and attention to bring up these issues in a collaborative way there. I don't see a need for a dedicated SIG and believe that the duplication is distracting for the community.
>
> I'm at re:Invent this week if anyone wants to talk 1x1
>
> Rob

Hi Rob,

I lean toward a separate SIG here, and concur with Joseph. While on the surface it seems there is no need for a dedicated SIG, the reality is already different.

For example, the load balancer implementation covers all the different cloud providers, but there is no metal implementation. I agree it is easier to hook into cloud providers than into haproxy/nginx, but it has been pretty hard for us to find documentation or help. Similarly, the proposal for source IP preservation does not deal with non-cloud use cases (check the load balancing umbrella issue and the source IP preservation proposal).

Additionally, I found out pretty recently that the recommended orchestration tool for metal deployments is Kargo, while for cloud it is kops. This was repeated to me many times on Slack, along with the advice that I should ask in the #kargo channel if I had questions about metal deployments.

So the SIG already exists, in a manner of speaking: it's #kargo. It's just not very easy to discover, or to create an agenda around.

Just my $0.02.

Regards 
Sandeep 

Rob Hirschfeld

Nov 28, 2016, 11:16:43 PM
to Sandeep Srinivasa, rmal...@apprenda.com, kuberne...@googlegroups.com, iar...@apprenda.com, Kubernetes user discussion and Q&A
Sandeep,

1) We are dividing efforts. Why don't you TRY joining Cluster Ops before forking it?
2) Kargo or kops is absolutely NOT the only tool. Cluster Ops specifically explores multiple approaches. There are many ways to do this, and they exist for different reasons.

I simply don't understand the push to not even try to join.

Rob
--
Rob Hirschfeld
RackN.com, CEO & Founder
@zehicle, 512-773-7522

Sandeep Srinivasa

Nov 28, 2016, 11:26:09 PM
to Rob Hirschfeld, kuberne...@googlegroups.com, rmal...@apprenda.com, iar...@apprenda.com, Kubernetes user discussion and Q&A
Hi Rob,

Thanks for trying, and I will probably join Cluster Ops.

But I hope you will ACKNOWLEDGE that this situation already exists. If you search Slack from as recently as yesterday evening, the #kargo statement was reiterated.
You may be right, technically, in claiming that other ways exist... but this is the way the community has *already* begun to shift.

Regards 
Sandeep 

Rob Hirschfeld

Nov 28, 2016, 11:29:17 PM
to Sandeep Srinivasa, kuberne...@googlegroups.com, rmal...@apprenda.com, iar...@apprenda.com, Kubernetes user discussion and Q&A
Kargo was an earlier Ansible set and did a good job; I used it for the v1.2 release.
Of course the Kargo channel will position things that way!
Are you suggesting that the Bare Metal SIG is really the Kargo SIG?

Sandeep Srinivasa

Nov 28, 2016, 11:35:34 PM
to Rob Hirschfeld, rmal...@apprenda.com, kuberne...@googlegroups.com, iar...@apprenda.com, Kubernetes user discussion and Q&A
Actually, the bigger point is that sig-cluster-lifecycle is NOT the bare metal SIG... or at least that's the general impression one gets.

We'll go anywhere we can find help ;) For now, Kargo is seemingly the place that'll take us in!

Rob Hirschfeld

Nov 28, 2016, 11:42:24 PM
to Sandeep Srinivasa, rmal...@apprenda.com, kuberne...@googlegroups.com, iar...@apprenda.com, Kubernetes user discussion and Q&A
Cluster-Lifecycle is not about ops; they are building configuration tools, and the maintainer of kops is there.
Cluster Ops is a different SIG. Is that the confusion?

I don't know how to be any more inviting: we have an existing SIG where we talk about running clusters on cloud AND metal, using multiple tools. We document architectures and best practices. We are about to focus on upgrades and conformance tests. Join if you want.

Justin Garrison

Nov 29, 2016, 12:10:57 PM
to Kubernetes developer/contributor discussion, s...@lambdacurry.com, rmal...@apprenda.com, iar...@apprenda.com, kubernet...@googlegroups.com
Everyone interested in the bare metal SIG should also join Cluster Lifecycle; no question, IMO. But I think the SIGs have different goals. Cluster Lifecycle, from my experience (I've only joined a few calls), is more abstract, oriented toward best practices and maintaining a cluster. There are occasionally topics around specific deployments or user-experience problems, but I don't think Cluster Lifecycle should try to focus on the minutiae that an environment-specific SIG should focus on. I also don't want bare metal to focus on cluster hygiene or long-term maintenance.

I think this smaller focus is why SIG-AWS and SIG-OpenStack already exist. They focus on the implementation of a specific platform, not on abstract or long-term cluster management.

Rob Hirschfeld

Nov 29, 2016, 2:59:55 PM
to kubernet...@googlegroups.com, Kubernetes developer/contributor discussion, s...@lambdacurry.com, rmal...@apprenda.com, iar...@apprenda.com
Just for clarity: there are TWO SIGs here, Cluster Lifecycle and Cluster Ops.

Lifecycle is focused on kubeadm development that impacts future releases (Tuesday meetings).
Cluster Ops is focused on operational concerns (including metal) around current releases (Thursday meetings).




Sandeep Srinivasa

Nov 30, 2016, 12:09:48 AM
to Justin Garrison, Kubernetes user discussion and Q&A, rmal...@apprenda.com, Kubernetes developer/contributor discussion, iar...@apprenda.com


On Nov 29, 2016 10:40 PM, "Justin Garrison" <justin....@disneyanimation.com> wrote:
> I think the smaller focus is why sig aws and sig openstack already exist. They are trying to focus on the implementation of that specific platform and not abstract or long term cluster management.

Precisely, and I would argue that bare metal poses an even bigger platform-specific challenge: how do you build a cluster without depending on VPC/ELB/EBS, etc.?

If sig-aws has a reason to exist, then so does sig-metal. People who use the cloud have (justifiably) no interest in mechanics that are mandatory on bare metal, like the choice of load balancers, networking across data centers, etc.

Justin Garrison

unread,
Nov 30, 2016, 10:06:58 AM11/30/16
to Kubernetes developer/contributor discussion, justin....@disneyanimation.com, kubernet...@googlegroups.com, rmal...@apprenda.com, iar...@apprenda.com
If sig-aws has a reason to exist, then so does sig-metal. People who use the cloud have (justifiably) no interest in mechanics that are mandatory on bare metal, like the choice of load balancers, networking across data centers, etc.

With that being said, I don't think this should be called sig-metal. The focus of the SIG shouldn't be explicitly about running on hardware (although that may cover many of the topics). Running Kubernetes without cloud-environment support is closer to what I would like to discuss, and it would not alienate users running on VMs behind their firewalls.

Possible name ideas
  • sig-on-prem
  • sig-no-cloud
  • sig-firewalled
  • sig-legacy
  • sig-the-hard-way
  • sig-unhosted
  • sig-from-scratch
It must be too early. I'm obviously joking about sig-no-cloud

Tomasz 'Zen' Napierala

unread,
Nov 30, 2016, 10:23:31 AM11/30/16
to Justin Garrison, Kubernetes developer/contributor discussion, kubernet...@googlegroups.com, rmal...@apprenda.com, iar...@apprenda.com


I also agree that sig-on-prem better describes the goal.


Sandeep Srinivasa

unread,
Nov 30, 2016, 10:33:34 AM11/30/16
to Tomasz 'Zen' Napierala, rmal...@apprenda.com, Kubernetes user discussion and Q&A, Kubernetes developer/contributor discussion, Justin Garrison, iar...@apprenda.com
I agree. sig-on-prem is best suited.

On Nov 30, 2016 20:53, "Tomasz 'Zen' Napierala" <tnapi...@mirantis.com> wrote:


I also agree that sig-on-prem better describes the goal.



Joseph Jacks

unread,
Nov 30, 2016, 10:51:28 AM11/30/16
to kubernet...@googlegroups.com, Sandeep Srinivasa, Tomasz 'Zen' Napierala, Kubernetes developer/contributor discussion, Justin Garrison, Sarah Novotny
I think SIG-On-Prem makes a lot of sense! +1 from me on that name. Will wait another day for feedback before getting it created hopefully with Sarah's help. 

Thanks,
JJ.


On Nov 30, 2016, at 10:37 AM, Isaac Arias <iar...@apprenda.com> wrote:

sig-datacenter

 




Jorge O. Castro

unread,
Nov 30, 2016, 5:19:52 PM11/30/16
to Sandeep Srinivasa, Justin Garrison, Kubernetes user discussion and Q&A, rmal...@apprenda.com, Kubernetes developer/contributor discussion, iar...@apprenda.com
On Wed, Nov 30, 2016 at 12:09 AM, Sandeep Srinivasa <s...@lambdacurry.com> wrote:
precisely. and i would argue that bare metal poses an even bigger platform-specific challenge.  How do you build a cluster without depending on VPC/ELB/EBS, etc. 

We do this on bare metal all the time. I'll be at the cluster-ops meeting tomorrow and would be happy to walk you through it. In my (admittedly short) time involved with both SIGs, I've not yet seen a bare metal problem or question brushed aside, but maybe that's just me.

Though I hope you understand that that would be one more meeting people have to attend, and twice a week for ops meetings already borders on too many.


Ihor Dvoretskyi

unread,
Dec 1, 2016, 11:46:43 AM12/1/16
to Joseph Jacks, kubernet...@googlegroups.com, Sandeep Srinivasa, Tomasz 'Zen' Napierala, Kubernetes developer/contributor discussion, Justin Garrison, Sarah Novotny, Steve Gordon
SIG On-Prem was one of the names initially proposed when the new SIG discussion began. I upvoted that name then, and I absolutely do so now.

The only note I have: we also have SIG-OpenStack, which covers on-prem solutions as well. We need to find a way to keep our overlapping areas of responsibility from confusing the community.

At the same time, on behalf of SIG-OpenStack, I'd like to collaborate with the newly established SIG.
