Kubernetes on Bare Metal servers!


luce...@gmail.com

Jun 27, 2017, 1:06:48 AM
to Kubernetes user discussion and Q&A
Using Kubernetes 1.6 on bare metal servers in production seems to be difficult.

Tested CoreOS and Tectonic; neither worked on our bare metal servers.
conjure-up Kubernetes (plus MAAS) was also not possible on bare metal.

Rancher works, but the whole system is based on the Docker daemon; systemd is a better choice as the base system for Kubernetes (as CoreOS does).


A possible solution is kubeadm, but kubeadm is not ready for production!

Are there other possibilities?

mrpanigale

Jun 28, 2017, 5:55:24 AM
to Kubernetes user discussion and Q&A, luce...@gmail.com
I have evaluated to the best of my ability:
- Rancher + RancherOS
- Rancher + CoreOS
- CoreOS Kubernetes (Manual installation)
- CoreOS Kubernetes (Tectonic)
- CoreOS SelfHosted (Bootkube+matchbox) 

Quick Summary:
With RancherOS my experience is that you have to be careful to install an OS version with a Docker version compatible with Kubernetes. I also don't like being locked into Docker; I prefer systemd as an init system, since the RancherOS init system is limited in comparison.

Rancher + CoreOS gives the best installation experience. However, only Kubernetes 1.5 is supported; 1.6 is not yet released, and I could not get a cluster started using their development branch. Also, for advanced K8s users, I found advanced customization of the installation process difficult (e.g. a custom CA for certificate authentication, etc.). To make the provisioning process smooth you will want to use CoreOS matchbox.

The CoreOS manual installation is excellent, but it does not explain how to provision the systemd units onto the machines. I am a big CoreOS Fleet fan, so I initially created fleet units to deploy the documented systemd units. CoreOS has now deprecated Fleet, so this approach is no longer recommended. The documentation can also sometimes be out of date.

CoreOS bootkube+matchbox
I also struggled with CoreOS Tectonic but, ironically, am very happy with bootkube+matchbox on CoreOS (which is what Tectonic is based on). I would highly recommend it. https://github.com/coreos/matchbox/blob/master/Documentation/bootkube.md
I would also recommend bootkube over the CoreOS manual installation, as the documentation is more likely to be maintained and up to date since it is the basis for Tectonic.
More interesting is that bootkube gives you access to features such as "self-hosted etcd", where etcd is deployed into Kubernetes itself.

With bootkube+matchbox I am able to iPXE boot a bare metal Kubernetes master in 5 minutes with no manual intervention. New workers also join automatically from a simple iPXE boot, with no manual intervention. Another advantage of the iPXE approach is that you can seamlessly test it using virtualization and the process is the same (there are some small issues, like network adapters taking more time to come online on bare metal, but in general I don't have to make any changes).

One strange part of the bootkube flow is having to ssh onto each machine and scp certificates and a kubeconfig.
I extended the matchbox bootkube-install example to dynamically download all required assets so that no manual intervention is required. This is one of the advantages of the CoreOS Ignition process: you can set up indirection to download all your Kubernetes assets from a remote server.
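For reference, the manual step this replaces looks roughly like the following sketch (the controller IP and the /opt/bootkube paths are assumptions matching the example scripts further down):

```shell
# Hypothetical manual flow: copy the rendered assets to the controller
# before starting bootkube (IP and paths assumed, not prescriptive).
scp -r assets core@192.168.122.10:/home/core/
ssh core@192.168.122.10 'sudo mv /home/core/assets /opt/bootkube/assets'
```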

Some example scripts below:

# generate the bootkube config
bootkube render --asset-dir=assets --api-servers=https://192.168.122.10:443 --api-server-alt-names=DNS=k8.example.com,IP=192.168.122.10 --etcd-servers=https://192.168.122.10:2379 --experimental-self-hosted-etcd

# start the matchbox iPXE server
docker run -p 8080:8080 --rm -v $PWD/examples:/var/lib/matchbox:Z -v $PWD/examples/groups/bootkube-install:/var/lib/matchbox/groups:Z quay.io/coreos/matchbox:latest -address=0.0.0.0:8080 -log-level=debug

#examples/groups/bootkube-install/node1.json extended for dynamic bootkube
{
  "id": "node1",
  "name": "Controller Node",
  "profile": "bootkube-controller",
  "selector": {
    "mac": "18:03:73:3f:6d:bf",
    "os": "installed"
  },
  "metadata": {
    "cluster_id" : "test_agent",
    "config_server": "http://172.20.20.219:1979/k8/test_agent",
    "ip_address" : "192.168.122.10",
    "ip_gateway" : "192.168.122.1",
    "net_mask" : "23",
    "domain_name": "k8.example.com",
    "etcd_initial_cluster": "k8=https://192.168.122.10:2380",
    "etcd_endpoints": "https://192.168.122.10:2379",
    "etcd_name": "k8",
    "k8s_dns_service_ip": "10.3.0.10",
    "ssh_authorized_keys": [
      "ssh-rsa ..."    ]
  }
}


#examples/ignition/bootkube-controller.yaml extended for dynamic bootkube
networkd:
  units:
    - name: 00-static.network
      contents: |
        [Match]
        Name=e*
        [Network]
        DNS=8.8.8.8
        Address={{.ip_address}}/{{.net_mask}}
        Gateway={{.ip_gateway}}
systemd:
  units:
    - name: bootkube.service
      enable: true
      contents: |
        [Unit]
        Description=Bootstrap a Kubernetes control plane with a temp api-server
        [Service]
        Type=simple
        RemainAfterExit=yes
        Restart=always
        RestartSec=20
        WorkingDirectory=/opt/bootkube
        ExecStartPre=/bin/sh -c 'wget -r -np -nH -x --cut-dirs=1 {{.config_server}}/'
        ExecStartPre=/bin/sh -c 'mv {{.cluster_id}} /opt/bootkube/assets'
        ExecStart=/opt/bootkube/bootkube-start
        [Install]
        WantedBy=multi-user.target
storage:
  files:
    - path: /etc/ssl/etcd/etcd-client.crt
      filesystem: root
      mode: 0644
      contents:
        remote:
          url: "{{.config_server}}/tls/etcd-client.crt"
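To make the {{.config_server}} indirection concrete: the rendered assets just need to be reachable over HTTP at the URL given in node1.json (http://172.20.20.219:1979/k8/test_agent). A minimal sketch, assuming python3 on the serving host; serve_assets is a hypothetical helper, and the port (1979) and k8/test_agent layout mirror the example above:

```shell
# Minimal sketch: expose the rendered bootkube assets over HTTP so the
# wget in bootkube.service can mirror {{.config_server}}/.
serve_assets() {
  # $1 is the directory containing the <cluster_id> subdirectory,
  # e.g. /srv/k8 containing test_agent/.
  cd "$1" && python3 -m http.server 1979
}

# Usage (blocks in the foreground):
#   mkdir -p /srv/k8/test_agent && cp -r assets/. /srv/k8/test_agent/
#   serve_assets /srv/k8
```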

Luc Evers

Jun 28, 2017, 11:54:31 AM
to mrpanigale, Kubernetes user discussion and Q&A
Thanks, I'll check them out. 

We are also big fans of Fleet, but it will no longer be supported in 2018.
CoreOS, Kubernetes 1.6 and etcd3 are not so easy to install.

We shall test Ubuntu MAAS now; Kubernetes updates are possible with it.

paulrab...@gmail.com

Sep 20, 2017, 10:59:59 AM
to Kubernetes user discussion and Q&A
Thank you for your post.

I too am trying different methods/stacks to PXE boot a 42U bare metal rack into a cluster, with the OS running in RAM of course. What was your experience with Rancher and CoreOS together? Do you recommend bootkube+matchbox over CoreOS matchbox + Rancher? I like Rancher's extra environments/namespaces, ACL web interface, logging and CLI access. However, I don't know how to automate creating and joining the cluster. It appears Rancher was primarily designed for a hard disk install and manual cluster creation. Could this be automated, or is it just not worth it?

(I already use PXE + matchbox / Ignition / CoreOS, so maybe I should dump Rancher and run bootkube.)

Thoughts?



On Wednesday, June 28, 2017 at 4:55:24 AM UTC-5, mrpanigale wrote:
> [quoted text clipped]

mrpanigale

Sep 21, 2017, 4:07:25 AM
to Kubernetes user discussion and Q&A
Hey z3ro,

My current position is that I would recommend PXE+matchbox+bootkube. We have been working with this deployment approach very successfully since my last post, and all our internal clusters are deployed using iPXE.

Ultimately, the following issues were deciding factors against Rancher:

- Rancher deploys Kubernetes using a technology called Cattle. This adds additional complexity and knowledge requirements, especially when troubleshooting (think having to learn Puppet/Ansible/Chef plus OpenStack Neutron networking on top).
  We preferred the bootkube "self-hosted Kubernetes" approach, as deploying and upgrading Kubernetes was a consistent experience with deploying our business applications.

- Rancher's approach to ingress, in my opinion, is inflexible. For example, it was very difficult to impossible to expose a container running with "hostNetwork: true" or to deploy a custom ingress controller. This is very important when planning for the future and reducing vendor lock-in. For example, we are already evaluating istio.io technology, and maintaining some flexibility at the networking layer is important. Ultimately, I think you want to be able to solve problems in multiple ways if you need to, and Rancher is not flexible enough. Again, all my opinion.

- We love Ceph, StorageClasses and PVCs. Unfortunately, integration with Rancher was not possible because the Rancher deployment scripts could not be extended to load the additional kernel modules required for Ceph integration. After two months, one of our engineers was able to fork the Rancher Kubernetes catalog and somehow get this working, but that means more maintenance merging updates from the Rancher community. It also highlighted the steps/complexity required to customize the deployment process if needed.

- Compared to this, deployment customization with the matchbox + bootkube approach is straightforward, and after some time you gain more confidence. We ended up pulling upstream updates before bootkube or matchbox made official releases. We simply monitor this URL for new hyperkube releases and test the new Kubernetes images as they arrive: https://quay.io/repository/coreos/hyperkube?tab=tags

- We love NetworkPolicy. Unfortunately, we were not able to deploy Project Calico within Rancher, for similar reasons to the Ceph/PVC integration. NetworkPolicy is important for us as we use its security features; luckily for us, bootkube has a flag to generate Calico deployment assets, and the deployment process is painless.

- Security patches and existing CoreOS experience. Our company has been using CoreOS in production with Fleet since 2014. Keeping things CoreOS allows us to take advantage of the automated security update operator's integration with Kubernetes (https://github.com/coreos/container-linux-update-operator). Unfortunately, we were not able to find recommendations for patching within a Rancher cluster.
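For the Calico assets mentioned above, the render step looks roughly like this sketch, adapting the earlier bootkube render example; the --network-provider=experimental-calico flag name is an assumption for the bootkube version we used, so check `bootkube render --help` for yours:

```shell
# Assumed flag name; verify against your bootkube version.
bootkube render --asset-dir=assets \
  --api-servers=https://192.168.122.10:443 \
  --etcd-servers=https://192.168.122.10:2379 \
  --network-provider=experimental-calico
```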
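The hyperkube tag monitoring mentioned above can be sketched roughly like this (the Quay v1 API path is my assumption, and parse_tags is a hypothetical helper that prints one tag name per line from Quay's JSON tag listing on stdin):

```shell
# Sketch: poll quay.io for new hyperkube image tags so new Kubernetes
# releases can be tested as they arrive (API path is an assumption).
parse_tags() {
  python3 -c 'import json, sys; print("\n".join(t["name"] for t in json.load(sys.stdin)["tags"]))'
}

# Requires network access:
#   curl -s "https://quay.io/api/v1/repository/coreos/hyperkube/tag/?limit=10" | parse_tags
```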


kind regards,

Andrew