AWS EKS support with metal worker nodes


jay

Jul 10, 2021, 4:03:41 PM
to kubevirt-dev
Hello,

I am trying to run KubeVirt on AWS EKS with a node group of metal instances, but I am having a few issues when trying to deploy version v0.43.0 or v0.42.1.

For v0.42.1, the virt-api pod is in an error state:

W0710 19:54:43.254272       1 client_config.go:614] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
W0710 19:54:43.255223       1 client_config.go:614] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
W0710 19:54:43.255307       1 client_config.go:614] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
panic: mkdir /tmp/certsdir569989491: read-only file system

goroutine 1 [running]:
pkg/virt-api/api.go:158 +0x2c9
main.main()
cmd/virt-api/virt-api.go:36 +0xb7

Please let me know whether EKS is supported with metal worker nodes.

Thank you,

dvo...@redhat.com

Jul 12, 2021, 9:29:24 AM
to kubevirt-dev
EKS isn't a part of the KubeVirt CI test suite, so there are likely some gaps in being able to run with EKS. As long as nested virtualization is enabled on the baremetal nodes, I don't see a reason why we can't aim to work on EKS.

That error you hit is a result of the virt-api component not being able to write to the /tmp directory in the virt-api pod. I bet this could be solved by having virt-operator mount an emptyDir volume at virt-api's /tmp directory.

I think this theory can be tested without needing to re-build KubeVirt. We have the ability to tell virt-operator to patch the underlying components with a specific set of JSON patches. You'd just need a patch that adds an emptyDir volume to the virt-api deployment and a VolumeMount for that emptyDir at the pod's /tmp directory.

Here's an example of how to use the KubeVirt CR's CustomizeComponents fields [1].
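As a rough, untested sketch (assuming the virt-api Deployment already defines volumes and volumeMounts arrays, since the JSON-patch "-" append path fails otherwise), the CR could look something like this:

apiVersion: kubevirt.io/v1
kind: KubeVirt
metadata:
  name: kubevirt
  namespace: kubevirt
spec:
  customizeComponents:
    patches:
    # Append an emptyDir volume named "tmp" to the virt-api Deployment.
    - resourceType: Deployment
      resourceName: virt-api
      type: json
      patch: '[{"op": "add", "path": "/spec/template/spec/volumes/-", "value": {"name": "tmp", "emptyDir": {}}}]'
    # Mount that volume at /tmp in the first virt-api container.
    - resourceType: Deployment
      resourceName: virt-api
      type: json
      patch: '[{"op": "add", "path": "/spec/template/spec/containers/0/volumeMounts/-", "value": {"name": "tmp", "mountPath": "/tmp"}}]'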

If that does get you further, then permanently addressing this would involve adding the emptyDir to the virt-api deployment generated during the release process here [2].
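The end result in the generated Deployment would be something along these lines (a fragment, not the full manifest; only the volume-related fields are shown):

spec:
  template:
    spec:
      containers:
      - name: virt-api
        volumeMounts:
        - name: tmp          # writable scratch space for the temporary certs dir
          mountPath: /tmp
      volumes:
      - name: tmp
        emptyDir: {}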

 

Thank you,

Roman Mohr

Jul 12, 2021, 9:33:23 AM
to dvo...@redhat.com, kubevirt-dev
I think the same question was also raised on Slack. I think this is caused by custom PSPs in the EKS cluster: if the default PSP in EKS [3] is replaced with a custom one that makes the container overlay read-only, one would see this error. The default PSP should, however, work with KubeVirt (looking at [3]).
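To illustrate the suspicion: a hypothetical custom policy like the one below (the name and most fields are made up for this example) would produce exactly that mkdir panic, because readOnlyRootFilesystem forces every container filesystem read-only:

apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: custom-restricted        # hypothetical name
spec:
  readOnlyRootFilesystem: true   # this is the setting that would break virt-api's mkdir under /tmp
  runAsUser:
    rule: RunAsAny
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  volumes:
  - '*'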

Best regards,
Roman
 



jay

Jul 13, 2021, 2:45:02 PM
to kubevirt-dev
Thanks Roman and David, got it working in EKS with KubeVirt v0.43.0. There were some permission issues on the EKS node, which were resolved by this:
https://github.com/kubevirt/kubevirt/issues/4303#issuecomment-749243768

Thank you,