RE: iscsi through linux software bridge?


netz-haut - stephan seitz

Oct 6, 2009, 11:41:52 AM
to open-...@googlegroups.com
Hi there,

I'd recommend not exporting LUNs directly to domUs. We did this on a few dom0s with dedicated
iSCSI NICs. Running about 40-60 domUs concurrently resulted in flaky iSCSI traffic and random loss of iSCSI sessions. We're now using multipathed CLVM LVs without any problem.

Cheers,

Stephan


> -----Original Message-----
> From: open-...@googlegroups.com [mailto:open-...@googlegroups.com]
> On Behalf Of Hoot, Joseph
> Sent: Sunday, October 04, 2009 8:56 PM
> To: open-...@googlegroups.com
> Subject: Re: iscsi through linux software bridge?
>
>
> Has anyone done this? I can't seem to get it to work. The reason I
> want to plug two physical Ethernet cards into a Linux software bridge
> is that I can then have my Xen dom0 mount its own iSCSI volumes as
> well as "pass through" additional iSCSI traffic out to the guest VMs.
> This will allow me to take advantage of EqualLogic's thin
> provisioning.
>
>
>
>
>
> On Oct 1, 2009, at 1:56 PM, Hoot, Joseph wrote:
>
> >
> > I don't think I sent this properly to the list. This is my second
> > attempt :)
> >
> >
> >
> > Has anyone been successful in setting up a Linux software bridge and
> > logging into an iSCSI target?
> >
> > I've setup the following:
> >
> > Linux Target:
> > ==========
> > eth2 -> 192.168.30.1
> > eth3 -> 192.168.30.2
> >
> > Linux Client:
> > ==========
> >
> > eth2 -> BRIDGE=iscsi0
> > iscsi0 -> TYPE=Bridge
> > iscsi0:1 -> 192.168.30.3
> >
> > eth3 -> BRIDGE=iscsi1
> > iscsi1 -> TYPE=Bridge
> > iscsi1:1 -> 192.168.30.4
> >
> >
> > I can discover all targets, but when I go to log into them, it
> > fails. If I break the software bridges and just leave the IPs on
> > eth2 and eth3, discovery and login work just fine.
> >
> > Thanks
> > Joe
> >
> > >
>
> ==============================
> Joseph R. Hoot
> Lead System Programmer/Analyst
> (w) 716-878-4832
> (c) 716-759-HOOT
> joe....@itec.suny.edu
> ==============================
>
>
> >


With kind regards

--
Stephan Seitz
Senior System Administrator

*netz-haut* e.K.
multimediale kommunikation

zweierweg 22
97074 würzburg

fon: +49 931 2876247
fax: +49 931 2876248

web: http://www.netz-haut.de/

registriergericht: amtsgericht würzburg, hra 5054

Hoot, Joseph

Oct 7, 2009, 11:15:07 AM
to open-...@googlegroups.com
Thanks for the info, Stephan. I'm taking your response into
consideration and will most likely NOT go forward with this, based on
your response as well as Oracle's answer to the same type of
question. They seemed to believe that performance might be impacted.

1) However, before I completely ditch the idea, does anyone know if
this is even possible? <-- that is, using Linux software bridges via
the `brctl` command.
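For anyone unfamiliar, this is roughly the kind of bridge I mean; a manual sketch of what the ifcfg files from my earlier mail set up (same interface names and IPs, one bridge shown):

```shell
# Create the bridge, enslave the physical iSCSI NIC, and move the IP
# from the NIC onto the bridge.
brctl addbr iscsi0
brctl addif iscsi0 eth2
ip addr flush dev eth2            # the IP lives on the bridge, not eth2
ip addr add 192.168.30.3/24 dev iscsi0
ip link set eth2 up
ip link set iscsi0 up
```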

2) When you were using "dedicated iSCSI NICs", were you just connecting
to the sessions from the dom0 and then passing the raw device through
to the domUs as a physical device? I guess that is another way of
doing this, but then I have to manage which block devices are used for
physical access to the domUs and which ones are actually being mounted
on the dom0 (for Oracle VM, these are mounted under /OVS/).
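(For reference, passing a dom0 block device straight through looks roughly like this in the domU config file -- the device path here is a made-up example:)

```python
# Fragment of a Xen domU config: hand a dom0 multipath device to the
# guest as its virtual disk (device path is hypothetical).
disk = [ 'phy:/dev/mapper/mpath3,xvda,w' ]
```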

Thanks,
Joe

netz-haut - stephan seitz

Oct 8, 2009, 5:22:39 AM
to open-...@googlegroups.com
Joseph,
we're currently using the following setup without any problems:

- four Gbit NICs are attached for iSCSI (I'll name them eth4 to eth7 here)

- eth4 and eth5 are on one card, eth6 and eth7 on another

- the iSCSI switch is 802.3ad capable (a loss of redundancy, I know, but there's only one switch in service)

- eth4 + eth6 are bonded with mode=4 into bond0; eth5 + eth7 into bond1

- iSCSI target machine is offering LUNs via two portals on different subnets

- bond0 has an IP on the first subnet and bond0:1 an alias IP on the second; bond1 sits on the second subnet, bond1:1 on the first.
This results in four paths to each LUN.

- multipathd is configured to recognize each iSCSI LUN after login and offers every LUN via /dev/mapper

- the exported LUNs are pvcreate'd; the VG on top of them is clustered via openais, but this is only necessary if you're going to use the VG on different dom0s

- domUs are configured to use 'phy:/dev/VolumeGroup/domU-disk-LV'
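Roughly, the network side looks like the following fragments (device names match the list above; the IPs and bonding options are examples, not our exact production values):

```shell
# /etc/sysconfig/network-scripts/ifcfg-bond0  (bond1 is analogous,
# with its primary IP on the second subnet)
DEVICE=bond0
ONBOOT=yes
BOOTPROTO=static
IPADDR=192.168.30.10                # primary IP on first iSCSI subnet
NETMASK=255.255.255.0
BONDING_OPTS="mode=4 miimon=100"    # 802.3ad, matching the switch

# /etc/sysconfig/network-scripts/ifcfg-eth4  (eth6 the same)
DEVICE=eth4
ONBOOT=yes
MASTER=bond0
SLAVE=yes
```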


We've tried many different scenarios, but only the use of multipathd offered acceptable stability when the machines are heavily loaded. The decision to use CLVM on top was made for ease of use: we no longer need to fiddle around with LUN IDs.
If you need a raw SCSI disk in your domUs, I'd recommend connecting multipathed LUNs via SCSI pass-through.

Regarding your original question: we've never tried bridged iSCSI sessions, but technically there shouldn't be any problem. Have you tried adding a virtual NIC to your bridge and doing the discovery/login via that virtual NIC in dom0?

Mike Christie

Oct 8, 2009, 2:50:29 PM
to open-...@googlegroups.com
On 10/07/2009 10:15 AM, Hoot, Joseph wrote:
> Thanks for the info Stephan. I'm taking your response into
> consideration and will most likely NOT go forward with this based on
> your response, as well as Oracle's response to this same type of
> question. They seemed to believe that performance may be impacted.
>
> 1) However, before I completely ditch the idea, does anyone know if
> this is even possible?<-- that is, using linux software bridges via
> the `brctl` command.

I have never tested that before. I am not sure why it would not work.
In your setup, does iscsiadm fail when you try to log in? At that time,
are you also trying to use ifaces? If you do not use them, does it work?

In /var/log/messages, do you see an error message from iscsid? What is it?

Hoot, Joseph

Oct 8, 2009, 3:03:31 PM
to open-...@googlegroups.com
I'll have to look again. I remember that basically I can discover my
targets just fine with `iscsiadm -m discovery -t st -p <port_here>`,

and then when I go to log into them with `iscsiadm -m node
--loginall=all`, it tries but fails. If I reconfigure (rm -rf
/lib/iscsi/nodes/* /lib/iscsi/send_targets/* and then redo the ifaces)
so that my ifaces use the NICs and not the software bridges, it works
just fine.
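For reference, the way I set up the ifaces is roughly this (from memory, so treat it as a sketch; `iscsi0` is the bridge and the portal IP is from my earlier mail):

```shell
# Create an iface record bound to the bridge device rather than the
# physical NIC, then discover and log in through it.
iscsiadm -m iface -I iscsi0 --op=new
iscsiadm -m iface -I iscsi0 --op=update -n iface.net_ifacename -v iscsi0
iscsiadm -m discovery -t st -p 192.168.30.1 -I iscsi0
iscsiadm -m node --loginall=all
```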

Unfortunately, my test system was reclaimed, but this is the model
that we'd like to move forward with, so I will get another system going
shortly and can debug this more.

Good to hear that, in theory, it should work :)
===========================
Joseph R. Hoot
Lead System Programmer/Analyst
(w) 716-878-4832
(c) 716-759-HOOT
joe....@itec.suny.edu
GPG KEY: 7145F633
===========================
