Hello,
I am currently rebooting my Ganeti nodes, and to do that I live-migrated the instances away first. While migrating the instances back to their original nodes after the reboot, I have one instance whose disks are in *DEGRADED* status on both the primary and the secondary node, as you can see in this output from gnt-instance info:
on primary: /dev/drbd22 (147:22) in sync, status *DEGRADED*
on secondary: /dev/drbd2 (147:2) in sync, status *DEGRADED*
On the primary node, /dev/drbd22 looks like this in /proc/drbd:
22: cs:WFConnection ro:Primary/Unknown ds:UpToDate/DUnknown C r-----
ns:0 nr:0 dw:81248 dr:228021 al:262 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:79068
On the secondary node, /dev/drbd2 looks like this:
2: cs:StandAlone ro:Secondary/Unknown ds:UpToDate/DUnknown r-----
ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:48
I then tried a verify-disks and an activate-disks, but nothing works: I still can't migrate my instance back, as I get the following error message:
Sat Mar 21 15:18:52 2015 * checking disk consistency between source and target
Failure: command execution error:
Disk 0 is degraded or not fully synchronized on target node, aborting migration
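For completeness, these are roughly the commands I ran before the migration attempt (inst3.domain.com is the affected instance):

```shell
# Check all DRBD disks cluster-wide and try to fix simple problems
gnt-cluster verify-disks

# (Re-)activate the disks of the affected instance on both nodes
gnt-instance activate-disks inst3.domain.com
```

Both completed without fixing the DEGRADED status.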
As a last resort I tried a replace-disks, but that did not work either, and I got this error:
Failure: command execution error:
Node node1.domain.com has degraded storage, unsafe to replace disks for instance inst3.domain.com

What should I do now? Any ideas?
Best regards
John