Linux-iSCSI.org West goes online.


Nicholas A. Bellinger

Sep 27, 2007, 1:05:35 AM
to Linux-iSCSI.org Target Dev, H. Peter Anvin, Levine, Daniel J., Eric Hall, FUJITA Tomonori, Mike Christie, Christoph Hellwig, Matthew Wilcox, Patrick Mansfield
Greetings all,

Now that the first set of machines has stabilized, I have been going about installing the various software components
on the new machines.  Here is the current setup:

Initiators: CentOS 5U0 x86 - EVMS - coreiscsi-1.6.2 - 2.6.18-8.el5

Targets: CentOS 4U5 x86_64 - EVMS - linux-iscsi.org trunk - 2.6.9-55.ELsmp

On the target side, the setup is:  iSCSI <-> Storage Engine <-> IBLOCK <-> Volume <-> EVMS Drive Links
<-> SCSI LUNs.  There is a 2x hardware LSI MegaRAID RAID5 on ElCapitan.Linux-iSCSI.org, and a 1x hardware
plus 1x software MD RAID5 on HalfDome.Linux-iSCSI.org.  On EC, the evms_gather_output looks like:

Volume Name: /dev/evms/linux-iscsi
Major: 253
Minor: 4
Active: TRUE
Volume Size: 1.85 TB
Minor Number: 4

[root@elcapitan ~]# target-ctl listluninfo tpgt=1
-----------------------------[LUN Info for iSCSI TPG 1]-----------------------------
Status: ACTIVATED  Execute/Left/Max Queue Depth: 0/32/32  SectorSize: 512  MaxSectors: 128
        iBlock device: dm-4
        Major: 253 Minor: 4  CLAIMED: IBLOCK
        Type: Direct-Access     ANSI SCSI revision: 02  Unit Serial: sn.83a01be224e9:0_253_4  DIRECT  EXPORTED
        iSCSI Host ID: 0 iSCSI LUN: 0  Active Cmds: 0  Total Bytes: 2032984801280
        ACLed iSCSI Initiator Node(s):
                iqn.2003-01.org.linux-iscsi.sysimage.i686:sn.fad2c60e3d0  0 -> 0
                iqn.2003-01.org.linux-iscsi.mail.i686:sn.3dd42b4b2927  0 -> 0

and HalfDome.Linux-iSCSI.org looks like:

Volume Name: /dev/evms/linux-iscsi-dev
Major: 253
Minor: 7
Active: TRUE
Volume Size: 887.65 GB
Minor Number: 7

[root@halfdome target]# target-ctl listluninfo tpgt=1
-----------------------------[LUN Info for iSCSI TPG 1]-----------------------------
Status: ACTIVATED  Execute/Left/Max Queue Depth: 0/32/32  SectorSize: 512  MaxSectors: 128
        iBlock device: dm-7
        Major: 253 Minor: 7  CLAIMED: IBLOCK
        Type: Direct-Access     ANSI SCSI revision: 02  Unit Serial: sn.9a2c2bfed439:0_253_7  DIRECT  EXPORTED
        iSCSI Host ID: 0 iSCSI LUN: 0  Active Cmds: 0  Total Bytes: 953112002560
        ACLed iSCSI Initiator Node(s):
                iqn.2003-01.org.linux-iscsi.sysimage.i686:sn.fad2c60e3d0  0 -> 0
                iqn.2003-01.org.linux-iscsi.mail.i686:sn.3dd42b4b2927  0 -> 0
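
As a sanity check that IBLOCK claimed the right device-mapper nodes, the Major/Minor pairs above (253:4 on EC and 253:7 on HD) can be matched against the dm tables.  This is plain dmsetup/ls usage, nothing Linux-iSCSI specific:

[root@elcapitan ~]# dmsetup ls                   # lists dm name -> (major, minor)
[root@elcapitan ~]# ls -l /dev/evms/linux-iscsi  # block node should show 253, 4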

These EVMS target objects are imported into the Storage Engine via IBLOCK at Major/Minor 253:4 and 253:7, and then attached to Target Portal Group Objects as iSCSI LUNs.  These are attached to the default Storage Objects for EC and HD, and the iSCSI Initiators are connected over IPv4 on a single 1 Gb/sec link initially.

The first test, using EVMS, was to create drive links from the raw iSCSI logical units on the iSCSI Initiator, and then a volume on top of the EVMS drive link feature object.  Below is the final volume on top of the feature object drive link on the initiator side, with GFS2 formatted storage.  sde and sdf are the iSCSI LUNs:

Object Name: linux-iscsi-initiator
Major: 253
Minor: 2
Active: TRUE
Object Size: 2.72 TB

Parents:
  /dev/evms/linux-iscsi-initiator

Children:
  sde1
  sdf1

Volume Name: /dev/evms/linux-iscsi-initiator
Major: 253
Minor: 2
Active: TRUE
Volume Size: 2.72 TB
Minor Number: 2
Mount Point: /mnt/linux-iscsi-west
Filesystem: OGFS
Max Filesystem Size: 0.00 bytes
Min Filesystem Size: 732.76 GB

[root@sysimage ~]# df -TH
Filesystem    Type     Size   Used  Avail Use% Mounted on
/dev/md0      ext3      18G   4.0G    13G  24% /
tmpfs        tmpfs     996M      0   996M   0% /dev/shm
/dev/evms/linux-iscsi-initiator
              gfs2     3.0T    36M   3.0T   1% /mnt/linux-iscsi-west
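
For the curious, creating the GFS2 filesystem on the drive-link volume would look roughly like the sketch below.  The lock protocol and journal count are standard mkfs.gfs2 options, but the cluster:fsname pair linux-iscsi-west:gfs2vol and the journal count of 2 here are assumptions for illustration, not the exact invocation used:

# -p picks the cluster locking protocol, -t is <clustername>:<fsname>,
# and -j is the number of journals (one per node mounting the filesystem).
[root@sysimage ~]# mkfs.gfs2 -p lock_dlm -t linux-iscsi-west:gfs2vol -j 2 /dev/evms/linux-iscsi-initiator
[root@sysimage ~]# mount -t gfs2 /dev/evms/linux-iscsi-initiator /mnt/linux-iscsi-west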

Doing this same type of setup with ocfs2 is another option for production.  I also did some quick local storage benchmarks on
the target machines; the Opteron 275 CPUs are clocked down at boot by cpuspeed to 1.8 GHz for EC and
1.0 GHz for HD.  The local disktest bandwidth with O_DIRECT to the EVMS volume objects (the 2x RAID5) is pretty quick:

[root@elcapitan ~]# disktest -K64 -ID -h3 -PT -T3000 -B131072 -r /dev/evms/linux-iscsi
| 2007/09/27-04:22:01 | START | 18507 | v1.2.8 | /dev/evms/linux-iscsi | Start args: -K64 -ID -h3 -PT -T3000 -B131072 -r (-N 2000) (-c) (-p r) (-D 100:0)
| 2007/09/27-04:22:01 | INFO  | 18507 | v1.2.8 | /dev/evms/linux-iscsi | Starting pass
| 2007/09/27-04:22:04 | STAT  | 18507 | v1.2.8 | /dev/evms/linux-iscsi | Read throughput: 395182080.0B/s (376.88MB/s), IOPS 3036.3/s.
| 2007/09/27-04:22:07 | STAT  | 18507 | v1.2.8 | /dev/evms/linux-iscsi | Read throughput: 398808405.3B/s (380.33MB/s), IOPS 3053.3/s.

[root@halfdome ~]# disktest -K64 -ID -h3 -PT -T3000 -B131072 /dev/evms/linux-iscsi-dev
| 2007/09/26-21:26:15 | START | 16180 | v1.2.8 | /dev/evms/linux-iscsi-dev | Start args: -K64 -ID -h3 -PT -T3000 -B131072 (-N 2000) (-r) (-c) (-p r) (-D 100:0)
| 2007/09/26-21:26:15 | INFO  | 16180 | v1.2.8 | /dev/evms/linux-iscsi-dev | Starting pass
| 2007/09/26-21:26:18 | STAT  | 16180 | v1.2.8 | /dev/evms/linux-iscsi-dev | Read throughput: 238332586.7B/s (227.29MB/s), IOPS 1839.7/s.
| 2007/09/26-21:26:21 | STAT  | 16180 | v1.2.8 | /dev/evms/linux-iscsi-dev | Read throughput: 237284010.7B/s (226.29MB/s), IOPS 1821.0/s.
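
To confirm what cpuspeed has clocked the targets down to, the cpufreq sysfs interface is the quickest check (values are reported in kHz, so EC at 1.8 GHz should read around 1800000; this assumes the cpufreq driver is loaded):

[root@elcapitan ~]# cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq
[root@elcapitan ~]# grep MHz /proc/cpuinfo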

For performance numbers on the initiator side accessing the 3 TB Volume, the current initiators are both 32-bit x86 Xeons that are hard to push to that
level of I/O.  I think the next step will be putting in multi 1 Gb/sec port x86_64 initiators with dedicated links for the initiator machines, and starting to scale the
cluster filesystems on the shared storage fabric.  I spent some time getting the cluster manager up with GFS and was not immediately successful.  I figured that since
these are the production machines, it made sense to wait and test with dedicated (and faster) hardware.
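
Once the multi-port initiators are in place, verifying per-link negotiation is a quick ethtool check (the interface names here are placeholders):

[root@sysimage ~]# ethtool eth0 | grep -E 'Speed|Duplex'
[root@sysimage ~]# ethtool eth1 | grep -E 'Speed|Duplex'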

On the code side, the plan is to debianize the svn repository with svn-buildpackage, as well as post the RPMs for the above setup, which will be posted
on the homepage as the wiki is updated.  Also, getting SNMP up on EC and HD is my next step, as SNMP makes it much easier to debug the
initial setup for users new to SAN and iSCSI.  From there, getting the kernel code and other patches into GIT is the next step.  The status list is a work in progress,
so please stay tuned, and more documentation is quickly becoming a requirement.  There is a fair amount of the configuration that can be automated to reduce complexity for the simpler cases.  More on this part once multi-storage object SNMP is up. :-)
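
Once snmpd is up on EC and HD, a basic walk of the system subtree is enough to confirm the agent responds before digging into the storage objects (the community string and hostnames below are placeholders):

[root@sysimage ~]# snmpwalk -v 2c -c public elcapitan system
[root@sysimage ~]# snmpwalk -v 2c -c public halfdome system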

--nab

PS:  The Linux-iSCSI.org homepage and repository have been settling into their new home quite well.  A great many thanks to Mike and Mickey Mazarick for hosting Linux-iSCSI.org East for so long, and to Bryan Black and Sig Lange for all the help with the setup and migration of the current infrastructure.  Being a sysadmin is fun. ;-)

