On Saturday, July 10, 2021 at 4:03:41 PM UTC-4 jay wrote:
Hello,
I am trying to run KubeVirt on AWS EKS with a node group of metal instances, but I am hitting a few issues when trying to deploy v0.43.0 or v0.42.1.
For v0.42.1, the virt-api pod is in an error state:
W0710 19:54:43.254272 1 client_config.go:614] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
W0710 19:54:43.255223 1 client_config.go:614] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
W0710 19:54:43.255307 1 client_config.go:614] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
panic: mkdir /tmp/certsdir569989491: read-only file system
goroutine 1 [running]:
pkg/virt-api/api.go:158 +0x2c9
main.main()
cmd/virt-api/virt-api.go:36 +0xb7
Please let me know whether EKS is supported with metal worker nodes.
EKS isn't part of the KubeVirt CI test suite, so there are likely some gaps in running KubeVirt on EKS. That said, as long as nested virtualization is enabled on the bare metal nodes, I don't see a reason why we can't aim to support EKS.
The error you hit is the result of the virt-api component not being able to write to the /tmp directory in the virt-api pod. I bet this could be solved by having virt-operator mount an emptyDir volume at virt-api's /tmp directory.
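For reference, the pod spec fragment virt-operator would need to produce for virt-api looks roughly like this (a sketch only; the volume name "tmp-dir" is arbitrary):

# Sketch of the relevant virt-api Deployment fragment; the volume
# name "tmp-dir" is arbitrary.
spec:
  template:
    spec:
      containers:
      - name: virt-api
        volumeMounts:
        - name: tmp-dir
          mountPath: /tmp
      volumes:
      - name: tmp-dir
        emptyDir: {}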
I think this theory can be tested without needing to re-build KubeVirt: we have the ability to tell virt-operator to patch the underlying components with a specific set of JSON patches. You'd just need a patch that adds an emptyDir volume to the virt-api deployment, along with a volumeMount for that emptyDir at the pod's "/tmp" directory, as sketched below.
Here's an example of how to use the KubeVirt CR's customizeComponents field [1].
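Untested, but the patch could look something like this (a sketch; it assumes the virt-api Deployment already defines volumes and volumeMounts arrays, and the volume name "tmp-dir" is arbitrary):

apiVersion: kubevirt.io/v1
kind: KubeVirt
metadata:
  name: kubevirt
  namespace: kubevirt
spec:
  customizeComponents:
    patches:
    # JSON patch appending an emptyDir volume and a /tmp mount to the
    # virt-api Deployment. Assumes the volumes/volumeMounts arrays already
    # exist; if they don't, "add" the whole array instead of appending.
    - resourceType: Deployment
      resourceName: virt-api
      type: json
      patch: '[{"op": "add", "path": "/spec/template/spec/volumes/-", "value": {"name": "tmp-dir", "emptyDir": {}}}, {"op": "add", "path": "/spec/template/spec/containers/0/volumeMounts/-", "value": {"name": "tmp-dir", "mountPath": "/tmp"}}]'

If virt-operator applies that cleanly, the virt-api pod should get a writable /tmp without any rebuild.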
If that does get you further, then permanently addressing this would involve adding the emptyDir to the virt-api deployment manifest generated during the release process here [2].