Unable to SSH/connect to the kubevirt linux VM. Where are logs for VM boot located?


Rohith Vemula

Dec 20, 2021, 12:16:52 PM
to kubevirt-dev
Hi,

After migrating an on-premises Ubuntu VM to Kubernetes with KubeVirt, SSH to the VM is not working.
1. Where are the logs located for VM boot or VM creation procedure?
2. Is there something wrong with the procedure followed below?

Steps that I followed:

1. Used qemu-img to convert /dev/sda to disk.img (/dev/sda2 mounted on /). 
  Contents of /etc/fstab:
UUID=e2bf1878-575f-11ec-a4e3-005056957eee / ext4 defaults 0 1
/swap.img       none    swap    sw      0       0
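
A minimal sketch of step 1, assuming a raw source disk and that the command is run from a rescue/live environment so the source filesystem is not being written to (the exact invocation is not shown in this thread):

  # Copy the whole block device into a raw image file; -p shows progress.
  # Use "-O qcow2" instead to produce a sparse qcow2 image.
  qemu-img convert -p -f raw -O raw /dev/sda disk.img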

2. Uploaded this disk.img file to a DataVolume and created a VM from it. VM yaml:
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  generation: 1
  labels:
    kubevirt.io/os: ubuntu
    kubevirt.io/size: large
  name: onpremvm
spec:
  running: true
  template:
    metadata:
      creationTimestamp: null
      labels:
        kubevirt.io/domain: onpremvm
    spec:
      domain:
        cpu:
          cores: 1
        devices:
          blockMultiQueue: true
          disks:
          - disk:
              bus: virtio
              readonly: false
            name: disk0
            cache: none
        machine:
          type: q35
        resources:
          requests:
            memory: 2048M
      volumes:
      - name: disk0
        persistentVolumeClaim:
          claimName: onpremvm-dv
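
The upload step itself is not shown above. With CDI installed, it is typically done with virtctl image-upload, roughly like this (the DataVolume name matches the claim above; the size and the --insecure flag are assumptions for illustration):

  # Upload a local disk image into a new DataVolume named onpremvm-dv
  virtctl image-upload dv onpremvm-dv --size=20Gi --image-path=./disk.img --insecure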

SSH to the VM fails with error: 
ssh: connect to host xx.xx.xx.xx port 22: Connection timed out

I am trying to migrate the machine to a Kubernetes cluster on Azure (Azure Kubernetes Service).

Regards
Rohith.

Dan Kenigsberg

Dec 21, 2021, 2:39:31 AM
to Rohith Vemula, kubevirt-dev
It seems that you have not defined any network interface in your VM.
You then typically want to expose its port 22 as a service and SSH into the IP:port of that service.
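
A sketch of exposing SSH with virtctl (the service name and the NodePort type here are illustrative choices):

  # Create a NodePort service pointing at port 22 of the VMI
  virtctl expose vmi onpremvm --name=onpremvm-ssh --port=22 --type=NodePort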
  

Let us know how this works out for you.


Rohith Vemula

Dec 21, 2021, 3:24:14 AM
to kubevirt-dev
Hi Dan,

I have already created a service to expose port 22, and I am trying to SSH using the service's IP address.

I created the VMI with a yaml similar to the VM yaml used in the Lab 2 demo on the kubevirt.io site (https://kubevirt.io/labs/manifests/vm1_pvc.yml). There is no entry for interfaces/networks in the Lab 2 tutorial either. Still, I am able to SSH into the Lab 2 VMI (by exposing a service and SSHing in using the service IP).

Anyway, I have tried adding an interfaces entry and a networks entry to the yaml, but I am still unable to SSH into the VMI.

Is there a standard way to debug this? Are there any logs in the virt-launcher pod, or any other entity, that can confirm the VM has booted up or show any connectivity/boot issues?
Also, is there any documentation on how the disk image must be prepared so that KubeVirt can boot a VM from it?

Regards
Rohith.

Kat Morgan

Dec 22, 2021, 11:59:38 AM
to kubevirt-dev
Rohith,

If it helps, here is a gist[1], based on the official quickstart docs[2], which walks through setting up kubevirt on kind and demonstrates:
  • writing a service for the vm (nodeport in this example)
  • using virtctl to observe & interact with vnc and serial consoles
  • ssh to vm via nodeport service
  • additional troubleshooting commands
If you can get that working locally, the steps shown may translate to applicable debugging/troubleshooting steps for your migrated ubuntu vm.
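
A sketch of the kind of commands involved (the VMI name and launcher pod name here are illustrative):

  # Attach to the serial console (escape with Ctrl+])
  virtctl console onpremvm
  # Open the graphical (VNC) console
  virtctl vnc onpremvm
  # Check status and conditions of the VM and VMI
  kubectl get vm,vmi onpremvm
  kubectl describe vmi onpremvm
  # Logs of the launcher pod's compute container
  kubectl logs virt-launcher-onpremvm-xxxxx -c compute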

From there, you can share the output of the console(s) and the vm and vmi status and descriptions, and perhaps we can be of more help with that information.

Kat
gh:usrbinkat

Roman Mohr

Dec 23, 2021, 6:55:51 AM
to Dan Kenigsberg, Rohith Vemula, kubevirt-dev
Just to avoid confusion: the pod network is added by default unless an explicit `autoattachPodInterface: false` is added.
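
For completeness, that implicit default is roughly equivalent to spelling the pod network out in the VMI spec like this (a sketch; masquerade is the usual binding for the pod network):

  spec:
    domain:
      devices:
        interfaces:
        - name: default
          masquerade: {}
    networks:
    - name: default
      pod: {}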

Best regards,
Roman
 

Rohith Vemula

Jan 7, 2022, 12:48:59 AM
to kubevirt-dev
Hi,

As mentioned by Dan above, I have tried including network config in the yaml, but that did not change anything (as Roman mentioned, as long as autoattachPodInterface is true, which is the default, there is no need to set the network config).

I was able to solve my issue by getting a screenshot of the VMI's console. I could not attach a console to the running kubevirt VMI, but I could take a screenshot of its console using virsh commands, in the following way:
1. Log in to the virt-launcher pod corresponding to the running VMI.
2. Use the 'virsh screenshot <VM name>' command to take a screenshot of the VMI console.
3. Pull the saved screenshot from the pod using the 'kubectl cp' command.
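
A sketch of those three steps (the pod name and namespace are illustrative; inside virt-launcher the libvirt domain is usually named <namespace>_<vmi-name>):

  # 1. Shell into the launcher pod's compute container
  kubectl exec -it virt-launcher-onpremvm-xxxxx -c compute -- /bin/bash
  # 2. Inside the pod: find the domain and take a screenshot (PPM format)
  virsh list --all
  virsh screenshot default_onpremvm /tmp/console.ppm
  # 3. Back outside the pod: copy the screenshot out
  kubectl cp -c compute virt-launcher-onpremvm-xxxxx:/tmp/console.ppm ./console.ppm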

The issue in my case was that the VMI was not booting, because the VM I was trying to migrate boots via UEFI. In order to make this work, I included UEFI firmware details in the VMI yaml, which in turn requires the smm feature to be enabled:

features:
  smm:
    enabled: true
firmware:
  bootloader:
    efi: {}
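
For placement context, these fields go under the domain section of the VM template, roughly like this (a sketch based on the VM above; note that efi: {} defaults to secure boot enabled, which is why smm is required):

  spec:
    template:
      spec:
        domain:
          features:
            smm:
              enabled: true
          firmware:
            bootloader:
              efi: {}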

After doing this, I was able to start the VMI and ssh into the VM.

PS - I could not find a way to get the VM boot logs or redirect console output to the pod logs. It would be helpful if this could be made possible in some way.

Regards
Rohith.

Fabian Deutsch

Jan 7, 2022, 5:09:02 AM
to Rohith Vemula, kubevirt-dev
On Fri, Jan 7, 2022 at 6:49 AM Rohith Vemula <vemularoh...@gmail.com> wrote:
> I was able to solve my issue by getting a screenshot of the console of the VMI. I could not get a console to the running kubevirt VMI, but I could get a screenshot of the console using virsh commands [...]

Do you know why virtctl vnc or virtctl console did not work for you?

IIUIC, you could also try removing the graphics device from the VMI definition to force the guest to use the serial console, which should then show up as output in virtctl console.
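
A sketch of what that looks like in the VMI spec, using the standard device toggle:

  spec:
    domain:
      devices:
        autoattachGraphicsDevice: false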
 