Yeah... the likelihood of it being something unrelated to ESOS is what stopped me from asking about it before, but it was too much of a coincidence not to at least say "me too," just in case there's something to it.
I will say that in my case, I used to have a hacked-up old OpenFiler system with the FC target stuff enabled, and I was able to get my three ESX machines to talk to it, so I don't think it's my switches or the masking on the switches.
I'm a long-time Cisco network geek, so I can talk all day about Ethernet, token ring, routing protocols, etc., but I'm admittedly a novice at best at the low-level workings of FC networks. So I'll describe what I have and my theories about how it was supposed to work, and maybe there will be something fundamental that you can just point to and say, "no, it's not gonna work like that."
My config consists of two 8-port 2Gb switches I bought from eBay for about $50 each, so not the highest-end equipment here. I bought a big box of 2Gb QLogic cards from eBay and figured I'd put two ports (either one 2-port card, or two 1-port cards) into each of my ESX servers and into the OpenFiler/ESOS target, then connect each machine's port 1 to switch 1 and each port 2 to switch 2. That way, I'd have multiple paths to the storage from each host.

This part worked pretty well under OpenFiler, although I had occasional disconnects/freeze-ups with no diagnostic information, which is what sent me looking for another solution and led me to ESOS. Along the way, I started suspecting one of my switches was bad, but when I'd leave that switch off, I'd hit the same trouble with only the other one, and vice versa, so I don't THINK I have a hardware problem.

The only additional thing I wanted to do with the physical config was to connect a port on switch 1 directly to switch 2, to hopefully give me more possible paths (i.e., card 1 on the ESOS box goes down and card 2 on an ESX host goes down, but I can still get connectivity through the link between the switches). I never did this, though, because I figured it just added more complexity to a system I was already having trouble diagnosing.
I only mention it here in case there's some assumption made by the drivers that any WWN can use any path to reach any other WWN. I don't think that's the case, based on some strange masking configurations our SAN guys do at work, but we have MUCH better hardware at work, so it might not be an apples-to-apples comparison.
ANYWAY, the other reason I went into that description is to explain why I have so many initiators listed in my scst.conf. Each of my 3 ESX servers has 2 ports, and the ESX screen that identifies the ports shows 2 WWNs per port. I've never been clear on which one to use, so to be on the safe side, I'm including all four per server. In this config, I've reduced it to only 2 hosts, in hopes of simplifying the world until I get it all working, and then I'll add the other initiator(s).
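For what it's worth, my (novice, possibly wrong) understanding of those duplicate WWNs is that each HBA port has both a node name (WWNN, the ones starting with 20:) and a port name (WWPN; 10:... on the Emulex cards, 21:... on the QLogics), and I believe the target matches initiators on the port name, so the node-name entries may just be dead weight. Here's a quick throwaway sketch (the helper names are mine, not anything from ESOS) that sorts my WWNs by vendor OUI so I can keep them straight:

```python
# Throwaway sketch: group the WWNs from my config by IEEE OUI
# (bytes 3-5 of the address). The OUI-to-vendor mapping below is from
# the public IEEE registry; classify_wwn() is just my own helper name.
OUI_VENDORS = {
    "00:00:c9": "Emulex",
    "00:e0:8b": "QLogic",
}

def classify_wwn(wwn: str) -> tuple[str, str]:
    """Return (vendor, oui) for a colon-separated 8-byte FC WWN."""
    parts = wwn.lower().split(":")
    oui = ":".join(parts[2:5])
    return OUI_VENDORS.get(oui, "unknown"), oui

print(classify_wwn("10:00:00:00:c9:34:78:7c"))  # ('Emulex', '00:00:c9')
print(classify_wwn("21:00:00:e0:8b:08:51:e9"))  # ('QLogic', '00:e0:8b')
```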
So - here's my config. If you have any ideas/hints/recommendations, I'm all ears and grateful for any help! I've got a manually configured /dev/md0 that I'm just trying to share out as one big multi-access LUN. I've also got one of the ESOS cards disabled in this config, for testing.
=~=~=~=~=~=~=~=~=~=~=~= PuTTY log 2012.12.29 17:34:06 =~=~=~=~=~=~=~=~=~=~=~=
root@san39 ~ # cat /etc/scst.conf
# Automatically generated by SCST Configurator v3.0.0-pre1.
# Non-key attributes
max_tasklet_cmd 10
setup_id 0x0
threads 4
HANDLER vdisk_blockio {
    DEVICE disk_2t_1 {
        filename /dev/md0
        # Non-key attributes
        blocksize 512
        nv_cache 0
        read_only 0
        removable 0
        rotational 1
        t10_dev_id 42b6e98-disk_2t_1
        thin_provisioned 0
        threads_num 1
        threads_pool_type per_initiator
        usn 42b6e98
        write_through 0
    }
}
TARGET_DRIVER iscsi {
    enabled 0
}
TARGET_DRIVER qla2x00t {
    TARGET 21:00:00:e0:8b:86:97:d6 {
        HW_TARGET
        enabled 1
        rel_tgt_id 1
        # Non-key attributes
        addr_method PERIPHERAL
        cpu_mask ffffffff,ffffffff
        explicit_confirmation 0
        io_grouping_type auto
        node_name 20:00:00:e0:8b:86:97:d6
        GROUP myserver {
            LUN 0 disk_2t_1 {
                # Non-key attributes
                read_only 0
            }
            INITIATOR 10:00:00:00:c9:34:78:7c
            INITIATOR 10:00:00:00:c9:34:79:8b
            INITIATOR 10:00:00:00:c9:3a:47:f8
            INITIATOR 10:00:00:00:c9:40:b1:e4
            INITIATOR 20:00:00:00:c9:34:78:7c
            INITIATOR 20:00:00:00:c9:34:79:8b
            INITIATOR 20:00:00:00:c9:3a:47:f8
            INITIATOR 20:00:00:00:c9:40:b1:e4
            INITIATOR 20:00:00:e0:8b:08:51:e9
            INITIATOR 20:01:00:e0:8b:28:51:e9
            INITIATOR 21:00:00:e0:8b:08:51:e9
            INITIATOR 21:01:00:e0:8b:28:51:e9
            # Non-key attributes
            addr_method PERIPHERAL
            cpu_mask ffffffff,ffffffff
            io_grouping_type auto
        }
    }
    TARGET 21:01:00:e0:8b:a6:97:d6 {
        HW_TARGET
        enabled 0
        # Non-key attributes
        addr_method PERIPHERAL
        cpu_mask ffffffff,ffffffff
        explicit_confirmation 0
        io_grouping_type auto
        node_name 20:01:00:e0:8b:a6:97:d6
        rel_tgt_id 0
    }
}
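And in case it helps anyone sanity-check a config like mine: since I kept second-guessing which initiators ended up in the group, here's a little Python sketch (not part of ESOS, just a quick hack of mine) that pulls the INITIATOR entries out of scst.conf-style text so I can eyeball the list:

```python
import re

def list_initiators(conf_text: str) -> list[str]:
    """Collect the WWNs from INITIATOR lines in scst.conf-style text."""
    return re.findall(r"^\s*INITIATOR\s+([0-9a-fA-F:]+)", con_text := conf_text,
                      flags=re.MULTILINE)

# A tiny sample in the same shape as the GROUP block above.
sample = """
GROUP myserver {
    INITIATOR 10:00:00:00:c9:34:78:7c
    INITIATOR 21:00:00:e0:8b:08:51:e9
}
"""
print(list_initiators(sample))
# ['10:00:00:00:c9:34:78:7c', '21:00:00:e0:8b:08:51:e9']
```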