Bare Metal Hardware Recommendations

Trevor Hartman

Nov 4, 2016, 11:19:12 AM
to Kubernetes developer/contributor discussion
I was wondering if there exists any information on bare metal setups at scale?

For example, is there anywhere I can find the hardware configurations other people have successfully deployed Kubernetes on, along with the number of containers and pods they manage and the performance characteristics of those setups?

I'm looking to use CoreOS in a bare metal Kubernetes configuration at my organization, and would like some suggestions on right-sizing the hardware I need. We would need to deploy somewhere between 300 and 500 containers, mostly LAMP stack type stuff. Is there any information on bare metal hardware requirements based on this type of workload?

Justin Garrison

Nov 10, 2016, 12:55:33 PM
to Kubernetes developer/contributor discussion
Hardware should be selected based on your OS support, not Kubernetes. If you can install your preferred Linux distro on the box, then Kubernetes will run. There are examples of people running development clusters on Raspberry Pis.

Also, as a note, you should be aware of a new special interest group (SIG) that is starting specifically for running Kubernetes on bare metal. It's not focused on hardware selection but rather on provisioning and management. There have also been some recent attempts to gather statistics from clusters; it was a service you could deploy to your cluster, but I couldn't find the link. Maybe someone else knows.

With all that said, I would recommend a few things when running on bare metal:
  • You should use Ignition to deploy the OS; check out the coreos-baremetal examples: https://github.com/coreos/coreos-baremetal/tree/master/examples
  • You should have at minimum 3 nodes if you want a fault-tolerant (HA) setup.
  • You should probably have a test environment with VMs in which you can test your deployments (Vagrant, VMware, etc.). Don't put the test environment in AWS/Google, because PXE booting is not usually possible in those environments.
  • You'll want to make sure you have external/clustered persistent storage if you're planning to run databases or otherwise need persistent data (see the sketch right after this list).
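
To make that last point concrete, here's a rough sketch of claiming clustered storage for a database through the API. This assumes the official Kubernetes Python client; the claim name, namespace, and size are placeholders, and the actual backend (Ceph, NFS, whatever you run) is up to you.

    # Sketch: request a PersistentVolumeClaim for a database's data directory.
    # Assumes an external/clustered storage backend already exists in the cluster;
    # name, namespace, and size below are placeholders.
    from kubernetes import client, config

    config.load_kube_config()   # or load_incluster_config() when run inside a pod
    core = client.CoreV1Api()

    pvc = client.V1PersistentVolumeClaim(
        metadata=client.V1ObjectMeta(name="mysql-data"),
        spec=client.V1PersistentVolumeClaimSpec(
            access_modes=["ReadWriteOnce"],
            resources=client.V1ResourceRequirements(requests={"storage": "20Gi"}),
        ),
    )
    core.create_namespaced_persistent_volume_claim(namespace="default", body=pvc)
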
300-500 containers isn't that many pods for LAMP stacks. Just as a general guess (not knowing how many sites or how much traffic/data you'll be hosting), I would suggest ~6 nodes (3 masters + 3 workers) with ~24 cores each (12 physical cores with hyperthreading would probably be good enough) and 64-128 GB of RAM. You can schedule pods on the masters as well as the workers, which would give you 50-80 pods per host and should let you keep running if 1-2 nodes fail or are rebooted for maintenance.
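
As a back-of-the-envelope check on that sizing (the numbers below are just the assumptions from this reply, not measurements):

    # Rough pod-density math for a 6-node cluster that must tolerate 2 nodes down.
    pod_counts = (300, 500)   # expected container/pod range
    nodes = 6                 # 3 masters + 3 workers, all schedulable

    for pods in pod_counts:
        healthy = pods / nodes
        degraded = pods / (nodes - 2)   # 2 nodes failed or rebooting
        print(f"{pods} pods: ~{healthy:.0f} per node healthy, "
              f"~{degraded:.0f} per node with 2 nodes down")

    # Note: the kubelet's --max-pods defaults to 110, so the worst case here
    # (~125 pods/node) would need that limit raised or an extra node.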

Eric Tune

Nov 14, 2016, 10:49:40 AM
to Justin Garrison, Kubernetes developer/contributor discussion

Don't buy multiple rotating disks unless you plan to combine them into a single device using hardware or kernel RAID.

Rob Hirschfeld

Nov 15, 2016, 12:16:58 AM
to Kubernetes developer/contributor discussion, justin....@disneyanimation.com
Justin,

The Cluster Ops SIG has been building documentation for general use directly along these lines:


I suggest that we bring these items into the existing SIG, where we're actively working to create shared reference architectures and practices during our weekly meetings.

Rob