[PATCH 2.6.24.4 0/4] I/OAT patches for open-iscsi


supreeth

Apr 4, 2008, 4:54:37 PM
to open-iscsi, supreeth.v...@intel.com
The following series implements Intel I/O Acceleration Technology (I/OAT)
support for open-iscsi. This series works on the 2.6.24.x kernels. We are
working on modifying the patches to work with the new tcp recv path in
open-iscsi from the 2.6.25-rcx kernels. I recommend using the latest stable
2.6.24 kernel to apply the patches. We're putting these out now to solicit
feedback from the community, and will submit new patches against the upstream
open-iscsi kernel. Please feel free to send any comments/suggestions to me or
the list. We also continue to work on performance tuning these patches. Each
patch in this series has additional descriptive notes. Please apply the
patches in the order 01-tcp_dma, 02-networking, 03-iscsibase, and
04-iscsioffload.

I have just applied these patches successfully using stg on the stable
2.6.24.4 kernel from git.kernel.org. Please let me know if for some reason
they don't apply cleanly for you.

The following additional features may need to be enabled in .config by the
user (via "make menuconfig") if not active by default:

1. Under "Device Drivers", enable DMA Engine support and Intel I/OAT.
2. Under "Device Drivers", enable SCSI low-level drivers support and the
   iSCSI initiator over TCP/IP.
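For reference, those menuconfig selections correspond roughly to the .config
symbols below. The symbol names are from memory of the 2.6.24-era Kconfig,
not taken from the patches themselves, so please verify them against your
tree:

```
# DMA engine and Intel I/OAT (Device Drivers -> DMA Engine support)
CONFIG_DMA_ENGINE=y
CONFIG_INTEL_IOATDMA=m
CONFIG_NET_DMA=y
# iSCSI initiator over TCP/IP (Device Drivers -> SCSI low-level drivers)
CONFIG_SCSI_LOWLEVEL=y
CONFIG_ISCSI_TCP=m
```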

To enable the features in the patches, please ensure that the "ioatdma"
module is loaded using "modprobe ioatdma". We noticed a marginal increase in
throughput using Intel 1Gb ethernet cards (e1000) and a significant increase
in throughput using the Intel 10Gb cards.
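As a quick sanity check before benchmarking, a small helper like the one
below can confirm ioatdma shows up in a module listing. This is only a
sketch (the sample lsmod output and module sizes are made up); on a live
initiator you would call it as `ioat_loaded "$(lsmod)"` and then check that
/sys/class/dma/ lists some channels:

```shell
#!/bin/sh
# ioat_loaded: succeed if the given lsmod-style listing contains ioatdma.
ioat_loaded() {
    printf '%s\n' "$1" | grep -q '^ioatdma'
}

# Example with canned lsmod-style output (the numbers are made up):
sample="ioatdma 24576 0
iscsi_tcp 16384 2"

if ioat_loaded "$sample"; then
    echo "ioatdma loaded"
else
    echo "run: modprobe ioatdma"
fi
```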

NOTE: THIS SERIES WILL ONLY COMPILE ON THE 2.6.24.X KERNELS AND NOT THE
LATEST (2.6.25-RCx) UPSTREAM KERNELS.


--
Supreeth Venkataraman

Pasi Kärkkäinen

Apr 5, 2008, 5:06:21 AM
to open-...@googlegroups.com, supreeth.v...@intel.com
On Fri, Apr 04, 2008 at 01:54:37PM -0700, supreeth wrote:
>
> The following series implements Intel I/O Acceleration Technology (I/OAT)
> support for open-iscsi. This series works on the 2.6.24.x kernels. We are
> working on modifying the patches to work with the new tcp recv path in
> open-iscsi from the 2.6.25-rcx kernels. I recommend using the latest stable
> 2.6.24 kernel to apply the patches. We're putting these out now to solicit
> feedback from the community, and will submit new patches against the
> upstream open-iscsi kernel. Please feel free to send any comments/suggestions
> to me or the list. We also continue to work on performance tuning these
> patches.

<snip>

> We noticed a marginal increase in throughput using Intel 1Gb ethernet
> cards (e1000) and a significant increase in throughput using the
> Intel 10Gb cards.
>

Hi!

Do you have any numbers available? It would be nice to know what kind of
significant increase you got.

Thanks!

-- Pasi

Mike Christie

Mar 3, 2011, 7:20:59 AM
to open-...@googlegroups.com, supreeth, supreeth.v...@intel.com, Pasi Kärkkäinen
On 04/04/2008 03:54 PM, supreeth wrote:
> throughput using Intel 1Gb ethernet cards (e1000) and a significant
> increase in throughput using the Intel 10Gb cards.
>

For writes I am getting 1000MB/s (netperf reports 1100), but for reads I
am getting only 600 MB/s. On the write side we use sendpage which is
zero copy. For reads though, we use memcpy. perf traces are showing we
are in that memcpy a lot. So I thought these patches might help since
Supreeth had also mentioned a significant increase in throughput.

The attached patch ports Supreeth's patches to linus's tree.

For reads I just do something like

fio --filename=/dev/sdXYZ --direct=1 --rw=randread --bs=1m --size=10G
--numjobs=4 --runtime=10 --group_reporting --name=file1

For writes I do:

fio --filename=/dev/sdXYZ --direct=1 --rw=randwrite --bs=1m --size=10G
--numjobs=4 --runtime=10 --group_reporting --name=file1

The iscsi target disks are memory backed disks, so not actually going to
real spinning disks :)

Also in the attached patch is a change to the r2t code which speeds up
writes when the IO size requires R2Ts.

With Linus's tree I had to use the "noop" io scheduler, turn off iptables
("/etc/init.d/iptables stop" on Fedora/RHEL systems), turn off irqbalance,
turn off cpuspeed, and then play around with the
/sys/block/sdXYZ/queue/rq_affinity setting. It also sometimes helped when
MaxRecvDataSegmentLength and MaxXmitDataSegmentLength were larger (128-256K)
and /sys/block/sdXYZ/queue/max_sectors_kb matched them.
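The tuning steps above can be sketched as a script. This is only an
illustration, not the exact sequence Mike ran: the device name sdXYZ, the
init-script paths, and the rq_affinity/max_sectors_kb values are assumptions
to adjust for your setup. With DRYRUN set (the default here) it only prints
the commands instead of executing them:

```shell
#!/bin/sh
# Dry-run sketch of the tuning sequence; run as root with DRYRUN= to apply.
DEV=${DEV:-sdXYZ}
DRYRUN=${DRYRUN:-1}

run() {
    if [ -n "$DRYRUN" ]; then
        echo "would run: $*"
    else
        eval "$*"
    fi
}

run "echo noop > /sys/block/$DEV/queue/scheduler"      # simple noop elevator
run "/etc/init.d/iptables stop"                        # Fedora/RHEL path
run "/etc/init.d/irqbalance stop"
run "/etc/init.d/cpuspeed stop"
run "echo 1 > /sys/block/$DEV/queue/rq_affinity"       # experiment with 0/1
run "echo 256 > /sys/block/$DEV/queue/max_sectors_kb"  # match 256K segments
```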

And make sure ioatdma is loaded and that /sys/class/dma/ has some dma
channels.

If you have a fast system with ioatdma please try it out.

ioat-and-r2t.patch

Mike Christie

Mar 3, 2011, 7:22:36 AM
to open-...@googlegroups.com, supreeth, supreeth.v...@intel.com, Pasi Kärkkäinen
On 03/03/2011 06:20 AM, Mike Christie wrote:
> If you have a fast system with ioatdma please try it out.
>

Oh yeah, starting vacation ...... Now :)

Pasi Kärkkäinen

Mar 3, 2011, 10:21:01 AM
to Mike Christie, open-...@googlegroups.com, supreeth, supreeth.v...@intel.com

Thanks for the heads up!

Actually, I have some new hardware with 10gig nics,
so I can probably try this stuff.

Enjoy your vacation!

-- Pasi

Or Gerlitz

Mar 3, 2011, 11:36:38 AM
to open-...@googlegroups.com, Mike Christie, supreeth, supreeth.v...@intel.com, Pasi Kärkkäinen
Mike Christie <mich...@cs.wisc.edu> wrote:

> For writes I am getting 1000MB/s (netperf reports 1100), but for reads I am getting only 600 MB/s.

What was the perf before the patches?

Or.

Mike Christie

Mar 8, 2011, 3:11:32 AM
to open-...@googlegroups.com

That is the perf before the patches. I do not have a box with ioatdma and
10 gig nics, so I am not able to test right now. With 1 gig, it is not
making any difference; I am getting around 113/112 MB/s read/write
throughput. I ported the patches hoping Intel or Pasi or someone on the
list would be interested in trying them out.
