iscsi on debian lenny and netxtreme II cards


Angel L. Mateo

Sep 4, 2009, 7:54:49 AM
to open-...@googlegroups.com
Hello everyone,

I am introducing myself to the iSCSI world, so these may be newbie
questions. I'm sorry.

My first test has been to create a LUN (on a Celerra file server) and
export it with iSCSI. I have mounted this LUN on a Debian server
(lenny), created a file system, and mounted it. The problem I have is
that performance is poorer than on an NFS filesystem (with NFS I get
~50MB/s and with iSCSI I get ~10MB/s). I am mounting both the NFS and
iSCSI filesystems over an active/passive bonding of two gigabit cards.
I am using the tcp transport for iSCSI.

Another question related to this: one of my cards is a Broadcom
NetXtreme II BCM5708S, which supports iSCSI. Although some man pages
talk about a bnx2i transport, I haven't found any kernel module for
it. Could I use this transport? How? Would it improve performance?

--
Angel L. Mateo Martínez
Sección de Telemática
Área de Tecnologías de la Información _o)
y las Comunicaciones Aplicadas (ATICA) / \\
http://www.um.es/atica _(___V
Tfo: 868887590
Fax: 868888337

Mike Christie

Sep 4, 2009, 11:46:31 AM
to open-...@googlegroups.com
On 09/04/2009 06:54 AM, Angel L. Mateo wrote:
> Hello everyone,
>
> I am introducing myself to the iSCSI world, so these may be newbie
> questions. I'm sorry.
>
> My first test has been to create a LUN (on a Celerra file server) and
> export it with iSCSI. I have mounted this LUN on a Debian server
> (lenny), created a file system, and mounted it. The problem I have is
> that performance is poorer than on an NFS filesystem (with NFS I get
> ~50MB/s and with iSCSI I get ~10MB/s). I am mounting both the NFS and
> iSCSI filesystems over an active/passive bonding of two gigabit cards.
> I am using the tcp transport for iSCSI.

Can you try different IO schedulers?

echo noop > /sys/block/sdXYZ/queue/scheduler

And for your IO test, what are you using and what IO sizes are used? For
good throughput you want lots of IOs in flight (around the queue depth
of the target, which is probably around 128 or 256) and larger IOs
(around 64 or 128K).
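Mike's two suggestions can be checked and applied through sysfs; a
minimal sketch, assuming the iSCSI disk shows up as sdX and that a
queue depth of 128 suits your target (both are placeholders to verify
against your setup):

```shell
# Show the available schedulers; the active one is in brackets
cat /sys/block/sdX/queue/scheduler
# Switch to noop for this disk (echo the old name back to revert)
echo noop > /sys/block/sdX/queue/scheduler
# Allow more requests in flight, toward the target's queue depth
echo 128 > /sys/block/sdX/queue/nr_requests
```

These settings do not survive a reboot, so on lenny you would re-apply
them from an init script or udev rule.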

>
> Another question related to this: one of my cards is a Broadcom
> NetXtreme II BCM5708S, which supports iSCSI. Although some man pages
> talk about a bnx2i transport, I haven't found any kernel module for
> it. Could I use this transport? How? Would it improve performance?
>

You cannot use this transport with the upstream open-iscsi tools. The
2.6.31 kernel will have a bnx2i driver for that card, but you will still
need a special Broadcom daemon to use the offload capabilities of the
card. Broadcom and other offload guys like Chelsio are working on
merging a common daemon/lib into the open-iscsi package right now.

I will let the Broadcom guys talk about perf more because I do not have
any numbers handy. From my testing I have seen it performs well with
smaller IOs (less than 32K), where iscsi_tcp does not. With larger IOs
and lots of them the throughput seems to be the same (at least with a 1
gig network), but the Broadcom cpu usage is much lower. iscsi_tcp has an
xmit thread which can take up almost 100% of the cpu at times; Broadcom
does not have that thread since it does the same operations in hardware.

Angel L. Mateo

Sep 7, 2009, 6:47:30 AM
to open-...@googlegroups.com
On Fri, 04-09-2009 at 10:46 -0500, Mike Christie wrote:

> Can you try different IO schedulers
>
> echo noop > /sys/block/sdXYZ/queue/scheduler
>
> And for your IO test what are you using and what IO sizes are used? For
> a good throughput you want lots of IO (around the queue depth of the
> target which is probably around 128 or 256) and larger IOs (around 64 or
> 128K).
>
I am running tests with dd, trying different block sizes:

lynx0:~# dd if=/dev/zero of=/var/LISTAS/kk bs=16k count=10k
10240+0 records in
10240+0 records out
167772160 bytes (168 MB) copied, 2,37322 s, 70,7 MB/s

lynx0:~# dd if=/dev/zero of=/var/LISTAS3/kk bs=16k count=10k
10240+0 records in
10240+0 records out
167772160 bytes (168 MB) copied, 2,42447 s, 69,2 MB/s

The first is an NFS filesystem and the second an iSCSI one. But this
time, performance is very similar. When I sent the first message, iSCSI
performance was about 10MB/s, and the only change I've made in the
meantime has been restarting the iscsi daemon :-(
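One caveat with these numbers: a 168 MB dd without any sync flag
largely measures the page cache, not the network or the target, which
may explain why both runs land near 70 MB/s. A hedged re-run that
forces the data to stable storage before dd reports a rate (the /tmp
path here is just a placeholder for the real mount point):

```shell
# conv=fdatasync makes dd call fdatasync() before printing its rate,
# so the flush to the target is included in the timing
dd if=/dev/zero of=/tmp/ddtest bs=16k count=1k conv=fdatasync
# oflag=direct would bypass the page cache entirely, if the
# filesystem supports O_DIRECT
rm -f /tmp/ddtest
```

For read tests, the cache has to be dropped first (echo 3 >
/proc/sys/vm/drop_caches as root), or the second run just reads RAM.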

Angel L. Mateo

Sep 7, 2009, 7:31:23 AM
to open-...@googlegroups.com
On Fri, 04-09-2009 at 10:46 -0500, Mike Christie wrote:

> You cannot use this transport with the upstream open-iscsi tools. The
> 2.6.31 kernel will have a bnx2i driver for that card, but you will still
> need a special broadcom daemon to use the offload capabilites of the
> card. Broadcom and other offload guys like Chelsio are working on
> merging a common daemon/lib into the open-iscsi package right now.
>
But open-iscsi does report that transport to me:

felis305:~# iscsiadm -m iface
default tcp,default,default,unknown
iser iser,default,default,unknown
bnx2i bnx2i,default,default,unknown

Mike Christie

Sep 8, 2009, 2:13:42 PM
to open-...@googlegroups.com
On 09/07/2009 06:31 AM, Angel L. Mateo wrote:
> On Fri, 04-09-2009 at 10:46 -0500, Mike Christie wrote:
>
>> You cannot use this transport with the upstream open-iscsi tools. The
>> 2.6.31 kernel will have a bnx2i driver for that card, but you will still
>> need a special broadcom daemon to use the offload capabilites of the
>> card. Broadcom and other offload guys like Chelsio are working on
>> merging a common daemon/lib into the open-iscsi package right now.
>>
> But open-iscsi does report that transport to me:
>
> felis305:~# iscsiadm -m iface
> default tcp,default,default,unknown
> iser iser,default,default,unknown
> bnx2i bnx2i,default,default,unknown
>

That should be removed now. It got in there by accident. At one point
Broadcom was hoping that you could just use the normal netdev tools for
network setup, and so for iscsi setup you could just tell it to use
bnx2i. It did not work out, so we have to set params like the IP address
ourselves.
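Once the bnx2i driver and the Broadcom daemon are in place, setting
those params would go through iscsiadm's iface database, roughly like
this (the iface name, addresses, and target IQN below are all made-up
placeholders, not values from this thread):

```shell
# Give the offload interface its own iSCSI IP address
iscsiadm -m iface -I bnx2i.00:10:18:aa:bb:cc -o update \
    -n iface.ipaddress -v 192.168.1.50
# Bind the session to that iface when logging in to the target
iscsiadm -m node -T iqn.1992-05.com.emc:celerra.example \
    -p 192.168.1.10 -I bnx2i.00:10:18:aa:bb:cc --login
```

With plain iscsi_tcp, by contrast, the default iface rides on the
normal network stack and no per-iface IP is needed.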
