Hi Team,
We have changed the Unity storage ports from 1Gig to 10Gig. After that, we deleted the old iSCSI target portals at both the storage and the compute level.
Old 1Gig portals:
170.0.0.10
170.0.0.11
New 10Gig portals:
170.0.0.20
170.0.0.21
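(For reference, the new portals can be checked from the compute host with something like the following; nc here is just a stand-in for whatever port-check tool is installed, and the output is omitted:
# confirm the 10Gig portals answer on the iSCSI port
nc -vz 170.0.0.20 3260
nc -vz 170.0.0.21 3260)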
After deleting the old portals, we were still able to access the storage at the compute level; that storage is attached to an OpenStack VM.
We then rebooted one compute host. After the reboot we can no longer access the storage, and the OpenStack VM goes into the ERROR state. According to the nova-compute logs, Nova is trying to log in to the old portals, which no longer exist; that is why the VM ends up in ERROR.
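(Our understanding is that Nova replays the connection_info recorded when the volume was first attached, so the old portals are probably still stored in the database. A quick way to confirm this, with placeholder DB credentials and instance UUID:
# inspect the stored iSCSI connection details for the instance
mysql -u <user> -p -e "SELECT connection_info FROM nova.block_device_mapping WHERE instance_uuid='<instance-uuid>' AND deleted=0\G")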
Please find the logs below and advise further.
[root@compute75 ~]# iscsiadm -m session
tcp: [1] 170.0.0.11:3260,8 iqn.1992-04.com.emc:cx.ckm00185002995.b0 (non-flash)
tcp: [2] 170.0.0.10:3260,9 iqn.1992-04.com.emc:cx.ckm00185002995.a0 (non-flash)
tcp: [3] 170.0.0.20:3260,7 iqn.1992-04.com.emc:cx.ckm00185002995.a1 (non-flash)
tcp: [4] 170.0.0.21:3260,6 iqn.1992-04.com.emc:cx.ckm00185002995.b1 (non-flash)
We then logged out of and deleted the old targets at the compute level; the storage team had already removed the 1Gig ports on the Unity side.
[root@compute75 ~]# iscsiadm -m node -T iqn.1992-04.com.emc:cx.ckm00185002995.b0 -p 170.0.0.11 -u
Logging out of session [sid: 1, target: iqn.1992-04.com.emc:cx.ckm00185002995.b0, portal: 170.0.0.11,3260]
Logout of [sid: 1, target: iqn.1992-04.com.emc:cx.ckm00185002995.b0, portal: 170.0.0.11,3260] successful.
[root@compute75 ~]# iscsiadm -m node -T iqn.1992-04.com.emc:cx.ckm00185002995.a0 -p 170.0.0.10 -u
Logging out of session [sid: 2, target: iqn.1992-04.com.emc:cx.ckm00185002995.a0, portal: 170.0.0.10,3260]
Logout of [sid: 2, target: iqn.1992-04.com.emc:cx.ckm00185002995.a0, portal: 170.0.0.10,3260] successful.
[root@compute75 ~]# iscsiadm -m node -o delete -T iqn.1992-04.com.emc:cx.ckm00185002995.b0
[root@compute75 ~]# iscsiadm -m node -o delete -T iqn.1992-04.com.emc:cx.ckm00185002995.a0
[root@compute75 ~]#
[root@compute75 ~]# systemctl restart iscsi
[root@compute75 ~]# systemctl restart multipathd
[root@compute75 ~]# iscsiadm -m node
[root@compute75 ~]#
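(If the new targets ever need to be rediscovered and logged in manually, the usual open-iscsi sequence would be along these lines, using the portals and IQNs from the session list above:
iscsiadm -m discovery -t sendtargets -p 170.0.0.20:3260
iscsiadm -m discovery -t sendtargets -p 170.0.0.21:3260
iscsiadm -m node -T iqn.1992-04.com.emc:cx.ckm00185002995.a1 -p 170.0.0.20 -l
iscsiadm -m node -T iqn.1992-04.com.emc:cx.ckm00185002995.b1 -p 170.0.0.21 -l)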
After a reboot of the compute host:
[root@compute75 ~]# iscsiadm -m session
tcp: [3] 170.0.0.20:3260,7 iqn.1992-04.com.emc:cx.ckm00185002995.a1 (non-flash)
tcp: [4] 170.0.0.21:3260,6 iqn.1992-04.com.emc:cx.ckm00185002995.b1 (non-flash)
[root@compute75 ~]# multipath -ll
mpathb (36006016029104b0084e7955d71109aa0) dm-1 DGC ,VRAID
size=1.0G features='2 queue_if_no_path retain_attached_hw_handler' hwhandler='1 alua' wp=rw
|-+- policy='service-time 0' prio=50 status=active
| `- 10:0:0:12395 sdm 8:192 active ready running
`-+- policy='service-time 0' prio=10 status=enabled
`- 11:0:0:12395 sdn 8:208 active ready running
mpatha (36006016029104b0050e3955d0d37f4ae) dm-0 DGC ,VRAID
size=10G features='2 queue_if_no_path retain_attached_hw_handler' hwhandler='1 alua' wp=rw
|-+- policy='service-time 0' prio=50 status=active
| `- 11:0:0:4390 sdl 8:176 active ready running
`-+- policy='service-time 0' prio=10 status=enabled
`- 10:0:0:4390 sdk 8:160 active ready running
[root@compute75 ~]#
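(To double-check that the two remaining sessions point only at the new portals, the verbose session view can be filtered like this; output omitted:
iscsiadm -m session -P 3 | grep -E 'Target:|Current Portal|Host Number')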
Even after removing the old paths, we can still access the storage from inside the VM:
Welcome to Ubuntu 16.04.6 LTS (GNU/Linux 4.4.0-142-generic x86_64)
sdn@ubuntu:~$ df -hT /mnt/test
Filesystem Type Size Used Avail Use% Mounted on
/dev/vdb1 ext3 991M 35M 906M 4% /mnt/test
sdn@ubuntu:~$ cd /mnt/test
sdn@ubuntu:/mnt/test$ touch bb
touch: cannot touch 'bb': Permission denied
sdn@ubuntu:/mnt/test$ sudo su -
[sudo] password for sdn:
root@ubuntu:~# cd /mnt/test
root@ubuntu:/mnt/test# touch bb
root@ubuntu:/mnt/test# ls
aa bb docs docs2 lost+found
root@ubuntu:/mnt/test#
We then rebooted the compute node, and the VM went into the ERROR state:
[root@compute75 ~]# reboot
Connection to compute75 closed by remote host.
Connection to compute75 closed.
[root@osc ~(keystone_admin)]#
[root@osc ~(keystone_admin)]# openstack server list
+--------------------------------------+-----------+--------+-----------------+------------+
| ID | Name | Status | Networks | Image Name |
+--------------------------------------+-----------+--------+-----------------+------------+
| 15c064ca-8bd0-40be-b384-c796db1da953 | test_vol3 | ERROR | net1=192.0.2.14 | ubuntu |
| afd5c571-2152-4313-988e-c74a8fc7f586 | test_vol2 | ACTIVE | net1=192.0.2.13 | ubuntu |
| 34d6b9aa-4642-461c-ba22-508de8f5ba5a | test_vol1 | ERROR | net1=192.0.2.2 | ubuntu |
+--------------------------------------+-----------+--------+-----------------+------------+
[root@osc ~(keystone_admin)]#
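(The fault field on the errored servers should carry the underlying exception; it can be pulled with, for example:
openstack server show 15c064ca-8bd0-40be-b384-c796db1da953 -c status -c fault)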
Relevant nova-compute log entries from the compute host:
2019-10-07 16:09:52.456 28347 WARNING os_brick.initiator.connectors.iscsi [req-1f65b818-108d-41cb-9ea1-ca18761a673f - - - - -] Failed to login iSCSI target iqn.1992-04.com.emc:cx.ckm00185002995.b0 on portal 170.0.0.11:3260 (exit code 8).
2019-10-07 16:09:52.458 28347 INFO os_brick.initiator.connectors.iscsi [req-1f65b818-108d-41cb-9ea1-ca18761a673f - - - - -] Trying to connect to iSCSI portal 170.0.0.10:3260
2019-10-07 16:11:52.636 28347 WARNING os_brick.initiator.connectors.iscsi [req-1f65b818-108d-41cb-9ea1-ca18761a673f - - - - -] Failed to login iSCSI target iqn.1992-04.com.emc:cx.ckm00185002995.a0 on portal 170.0.0.10:3260 (exit code 8).
2019-10-07 16:11:52.789 28347 ERROR os_brick.initiator.connectors.iscsi [req-1f65b818-108d-41cb-9ea1-ca18761a673f - - - - -] Could not login to any iSCSI portal.
2019-10-07 16:11:52.790 28347 WARNING nova.compute.manager [req-1f65b818-108d-41cb-9ea1-ca18761a673f - - - - -] [instance: 15c064ca-8bd0-40be-b384-c796db1da953] Failed to resume instance
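We suspect the fix is to regenerate the volume's connection_info. Would it be safe to reset the instance state and detach/re-attach the volume, something like the following (<volume-id> is a placeholder)?
nova reset-state --active 15c064ca-8bd0-40be-b384-c796db1da953
openstack server remove volume 15c064ca-8bd0-40be-b384-c796db1da953 <volume-id>
openstack server add volume 15c064ca-8bd0-40be-b384-c796db1da953 <volume-id>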
Could you please advise further on this?