can't boot after restart


Chonimit Eksompunthapap

Mar 27, 2019, 5:27:52 AM
to gce-discussion
After I restarted, I can't SSH to this VM.

Log from the serial port:


SeaBIOS (version 1.8.2-20190122_225043-google)
Total RAM Size = 0x0000000590000000 = 22784 MiB
CPUs found: 2     Max CPUs supported: 2
found virtio-scsi at 0:3
virtio-scsi vendor='Google' product='PersistentDisk' rev='1' type=0 removable=0
virtio-scsi blksize=512 sectors=-1 = more than 2097152 MiB
drive 0x000f2990: PCHS=0/0/0 translation=lba LCHS=1024/255/63 s=-1
Booting from Hard Disk 0...


John (Cloud Platform Support)

Mar 27, 2019, 3:29:05 PM
to gce-discussion

I have seen cases in the past where this indicates an issue with the data on the disk.

In this case I would highly recommend attaching the boot disk to another instance and running a filesystem check (e.g. ‘fsck’) against it if possible.


Let me share this documentation [1], where you can find a guide to verify that your disk has a valid file system. Another option [2] is to create a snapshot of the disk “christian-gpu-14p-disk”, create a new disk from that snapshot, and attach it as an additional disk [3] to a totally new instance (for that guide, please skip the formatting part). You can then search the logs under “/var/log” for why your instance didn’t launch at the OS level.


[1] https://cloud.google.com/compute/docs/troubleshooting/troubleshooting-instances#pdboot

[2] https://cloud.google.com/compute/docs/disks/create-snapshots#creating_snapshots

[3] https://cloud.google.com/compute/docs/troubleshooting/troubleshooting-instances#use_your_disk_on_a_new_instance
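The snapshot-and-attach procedure from [2] and [3] can be sketched with the gcloud CLI. Note the snapshot, disk, and instance names (rescue-snapshot, rescue-disk, rescue-vm) and the zone are placeholders, not values from this thread:

```shell
# Sketch of the snapshot-and-attach approach, assuming gcloud is
# authenticated; all resource names and the zone are placeholders.

# 1. Snapshot the broken boot disk.
gcloud compute disks snapshot christian-gpu-14p-disk \
    --zone=us-central1-a \
    --snapshot-names=rescue-snapshot

# 2. Create a fresh disk from that snapshot.
gcloud compute disks create rescue-disk \
    --source-snapshot=rescue-snapshot \
    --zone=us-central1-a

# 3. Attach it as a secondary (non-boot) disk to a working
#    debugging instance, so you can mount and inspect it there.
gcloud compute instances attach-disk rescue-vm \
    --disk=rescue-disk \
    --zone=us-central1-a
```

Working from a snapshot copy means the original broken disk is never modified, so you can retry the repair if a first attempt goes wrong.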


Justin Reiners

Mar 27, 2019, 4:59:07 PM
to gce-discussion
Hopefully this lays the order out better:

1.) Create a snapshot of your existing (broken) VM.
2.) Create a new VM from the snapshot and start it (it will probably still be broken, but that's OK at this point).
3.) Edit the new VM, uncheck "Delete boot disk when VM is deleted", and save.
4.) Delete the new VM so you just have its disk image lying around.
5.) Create another new Linux VM (CentOS/Ubuntu works great for this), mounting the copy of the disk left over from the deleted VM as a second hard disk.
6.) Log into the NEW VM.
7.) sudo fdisk -l (to find your disk; that's a lowercase L).
8.) sudo mkdir -p /mnt/old-drive && sudo mount /dev/sdb1 /mnt/old-drive
9.) Rescue the files, or check the logs under /var/log.
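Steps 7–9 above can be run as one short script on the rescue VM. The device name /dev/sdb1 is an assumption, so confirm the actual device from the fdisk listing before mounting:

```shell
# Inspect and mount the rescued disk on the new VM.
# /dev/sdb1 is an assumption: confirm it in the fdisk output first.
sudo fdisk -l                        # list attached disks and partitions

# Optionally check the filesystem before mounting (read-only check,
# -n makes no repairs, so the disk is not modified).
sudo fsck -n /dev/sdb1

sudo mkdir -p /mnt/old-drive         # create the mount point
sudo mount /dev/sdb1 /mnt/old-drive  # mount the old root partition

# Look for clues about the failed boot in the old system's logs.
sudo tail -n 50 /mnt/old-drive/var/log/syslog    # Debian/Ubuntu
sudo tail -n 50 /mnt/old-drive/var/log/messages  # CentOS/RHEL
```

The read-only fsck pass is a reasonable first step: if it reports filesystem corruption, that would explain the SeaBIOS output stopping at "Booting from Hard Disk 0...".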