# iscsiadm -m session -P 3
iSCSI Transport Class version 2.0-870
iscsiadm version 2.0-870
Target: iqn.1986-03.com.hp:storage.msa2312i.0919d81d3a
Current Portal: XXX.XXX.7.88:3260,2
Persistent Portal: XXX.XXX.7.88:3260,2
**********
Interface:
**********
Iface Name: default
Iface Transport: tcp
Iface Initiatorname: iqn.1994-05.com.fedora:82fd14af68f9
Iface IPaddress: XXX.XXX.20.70
Iface HWaddress: default
Iface Netdev: default
SID: 1
iSCSI Connection State: LOGGED IN
iSCSI Session State: LOGGED_IN
Internal iscsid Session State: NO CHANGE
************************
Negotiated iSCSI params:
************************
HeaderDigest: None
DataDigest: None
MaxRecvDataSegmentLength: 131072
MaxXmitDataSegmentLength: 524288
FirstBurstLength: 262144
MaxBurstLength: 2097152
ImmediateData: No
InitialR2T: Yes
MaxOutstandingR2T: 1
************************
Attached SCSI devices:
************************
Host Number: 8 State: running
scsi8 Channel 00 Id 0 Lun: 0
Attached scsi disk sdb State: running
scsi8 Channel 00 Id 0 Lun: 2
Attached scsi disk sdc State: running
scsi8 Channel 00 Id 0 Lun: 3
Attached scsi disk sdd State: running
# iscsiadm -m node -p XXX.XXX.7.88 -u
Logging out of session [sid: 1, target: iqn.1986-03.com.hp:storage.msa2312i.0919d81d3a, portal: XXX.XXX.7.88,3260]
Logout of [sid: 1, target: iqn.1986-03.com.hp:storage.msa2312i.0919d81d3a, portal: XXX.XXX.7.88,3260]: successful
# iscsiadm -m node -p XXX.XXX.7.88 -l
Logging in to [iface: default, target: iqn.1986-03.com.hp:storage.msa2312i.0919d81d3a, portal: XXX.XXX.7.88,3260]
Login to [iface: default, target: iqn.1986-03.com.hp:storage.msa2312i.0919d81d3a, portal: XXX.XXX.7.88,3260]: successful
# cat /sys/block/sdc/device/timeout
30
# vi /sys/block/sdc/device/timeout
300
(When I tried to write the file, vi showed the following:
"/sys/devices/platform/host9/session1/target9:0:0/9:0:0:2/timeout"
WARNING: The file has been changed since reading it!!!
Do you really want to write to it (y/n)?y
"/sys/devices/platform/host9/session1/target9:0:0/9:0:0:2/timeout"
E667: Fsync failed
Press ENTER or type command to continue)
# cat /sys/block/sdc/device/timeout
30
# echo 300 > /sys/block/sdc/device/timeout
# cat /sys/block/sdc/device/timeout
300
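Note that a value written to sysfs is not persistent: it is lost on reboot, and on a fresh iSCSI login the device is rediscovered with the default again. One way to reapply it automatically on udev-based distros is a rule like the following sketch (the file name and the 300-second value are only examples, not something from this thread):

```
# /etc/udev/rules.d/99-iscsi-timeout.rules  (hypothetical file name)
# Set a 300 s SCSI command timeout on disk-type (type 0) devices
# as they are added, instead of the 30 s default.
ACTION=="add", SUBSYSTEM=="scsi", ATTR{type}=="0", ATTR{timeout}="300"
```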
# dd if=/dev/zero of=/dev/sdc bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 1203.55 s, 892 kB/s
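As an aside, a plain dd of this sort goes through the page cache, so short runs can over- or under-report the device's real rate. A sketch of a variant that forces the data to stable storage before dd prints its figure (the scratch path is arbitrary; in practice you would target the iSCSI disk or a filesystem on it):

```shell
# conv=fdatasync makes dd fsync before reporting the rate, so the
# number reflects what actually reached the device, not the cache.
# /tmp/ddtest.bin is just a scratch path for illustration.
dd if=/dev/zero of=/tmp/ddtest.bin bs=1M count=16 conv=fdatasync
rm -f /tmp/ddtest.bin
```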
# cat /var/log/messages
May 25 10:51:20 milk nm-dispatcher.action: Script '/etc/NetworkManager/dispatcher.d/04-iscsi' exited with error status 1.
May 25 10:52:25 milk kernel: sd 8:0:0:0: [sdb] Synchronizing SCSI cache
May 25 10:52:25 milk kernel: sd 8:0:0:2: [sdc] Synchronizing SCSI cache
May 25 10:52:25 milk kernel: sd 8:0:0:3: [sdd] Synchronizing SCSI cache
May 25 10:52:47 milk kernel: scsi9 : iSCSI Initiator over TCP/IP
May 25 10:52:48 milk kernel: scsi 9:0:0:0: Direct-Access HP MSA2312i M110 PQ: 0 ANSI: 5
May 25 10:52:48 milk kernel: sd 9:0:0:0: Attached scsi generic sg2 type 0
May 25 10:52:48 milk kernel: sd 9:0:0:0: [sdb] 3984374976 512-byte logical blocks: (2.03 TB/1.85 TiB)
May 25 10:52:48 milk kernel: scsi 9:0:0:2: Direct-Access HP MSA2312i M110 PQ: 0 ANSI: 5
May 25 10:52:48 milk kernel: sd 9:0:0:2: Attached scsi generic sg3 type 0
May 25 10:52:48 milk kernel: sd 9:0:0:0: [sdb] Write Protect is off
May 25 10:52:48 milk kernel: scsi 9:0:0:3: Direct-Access HP MSA2312i M110 PQ: 0 ANSI: 5
May 25 10:52:48 milk kernel: sd 9:0:0:3: Attached scsi generic sg4 type 0
May 25 10:52:48 milk kernel: sd 9:0:0:2: [sdc] 97656224 512-byte logical blocks: (49.9 GB/46.5 GiB)
May 25 10:52:48 milk kernel: sd 9:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
May 25 10:52:48 milk kernel: sd 9:0:0:3: [sdd] 29296832 512-byte logical blocks: (14.9 GB/13.9 GiB)
May 25 10:52:48 milk kernel: sd 9:0:0:2: [sdc] Write Protect is off
May 25 10:52:48 milk kernel: sd 9:0:0:2: [sdc] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
May 25 10:52:48 milk kernel: sd 9:0:0:3: [sdd] Write Protect is off
May 25 10:52:48 milk kernel: sdb:
May 25 10:52:48 milk kernel: sd 9:0:0:3: [sdd] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
May 25 10:52:48 milk kernel: sdb1
May 25 10:52:48 milk kernel: sdc:
May 25 10:52:48 milk kernel: sdd: sdd1
May 25 10:52:48 milk kernel: unknown partition table
May 25 10:52:48 milk kernel: sd 9:0:0:0: [sdb] Attached SCSI disk
May 25 10:52:48 milk kernel: sd 9:0:0:3: [sdd] Attached SCSI disk
May 25 10:52:48 milk kernel: sd 9:0:0:2: [sdc] Attached SCSI disk
May 25 10:52:48 milk iscsid: connection2:0 is operational now
May 25 10:53:40 milk ntpd[1603]: synchronized to 59.124.71.8, stratum 3
May 25 10:53:40 milk ntpd[1603]: kernel time sync status change 2001
May 25 11:18:33 milk kernel: connection2:0: ping timeout of 5 secs expired, recv timeout 5, last rx 1457759, last ping 1462759, now 1467759
May 25 11:18:33 milk kernel: connection2:0: detected conn error (1011)
May 25 11:18:33 milk iscsid: Kernel reported iSCSI connection 2:0 error (1011) state (3)
May 25 11:18:51 milk iscsid: connection2:0 is operational after recovery (2 attempts)
end
I tried the dd test on CentOS 5.5 (2.6.18-194.3.1.el5 x86_64), and it can reach up to 180 MB/s (1 Gb Ethernet).
I guess it would be better to use CentOS to set up the server.
On 05/25/2010 03:40 AM, 立凡 王 wrote:
> May 25 11:18:33 milk kernel: connection2:0: ping timeout of 5 secs expired, recv timeout 5, last rx 1457759, last ping 1462759, now 1467759
> May 25 11:18:33 milk kernel: connection2:0: detected conn error (1011)
> May 25 11:18:33 milk iscsid: Kernel reported iSCSI connection 2:0 error (1011) state (3)
> May 25 11:18:51 milk iscsid: connection2:0 is operational after recovery (2 attempts)
That error will hurt performance, but I do not think it should cause the perf to drop to less than a MB/s.
Do you see the ping timeout error every time you run your write tests? If so, set
node.conn[0].timeo.noop_out_interval = 0
node.conn[0].timeo.noop_out_timeout = 0
in iscsid.conf, then rerun the iscsiadm discovery command (make sure you log out of the sessions, do discovery, then log back in).
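The logout/discovery/relogin sequence might look like the sketch below. It reuses the portal address from earlier in this thread; the commands need a live iSCSI target and root privileges, so treat it as an outline rather than something to paste blindly:

```shell
# 1. Log out of the existing session so the node record can change.
iscsiadm -m node -p XXX.XXX.7.88 -u

# 2. Rediscover the target; this rebuilds the node records, picking up
#    the new noop_out settings from iscsid.conf.
iscsiadm -m discovery -t sendtargets -p XXX.XXX.7.88

# 3. Log back in with the updated settings.
iscsiadm -m node -p XXX.XXX.7.88 -l
```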
> end
>
> I tried the dd test on CentOS 5.5 (2.6.18-194.3.1.el5 x86_64), and it
> can reach up to 180 MB/s (1 Gb Ethernet).
> I guess it would be better to use CentOS to set up the server.
>
That is strange, because the Fedora and CentOS 5.5 code is almost the same. At least there are no major changes in the data path that would improve performance.
If you do
ps -l -u root | grep iscsi_q
do you see a major difference in the PRI values for the processes between Fedora and CentOS?
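One possible variant of that check, which prints just the scheduling columns for any iSCSI worker threads (run it on both hosts and compare the PRI/NI values side by side):

```shell
# Show pid, priority, nice value and command name for iSCSI threads.
# The "|| echo" keeps the pipeline from failing when no sessions exist.
ps -eo pid,pri,ni,comm | grep -i iscsi || echo "no iscsi threads running"
```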