Ok, I am now running I/O with LIO-SE RAMDISK_DR in my x86_64 VM
with CONFIG_SLUB=y on 2.6.24. I also have CONFIG_SLUB_DEBUG=y set, am
passing slub_debug=FZPU on the kernel command line, and have not seen
any warnings or exceptions. So far I have used Open/iSCSI to move 10 GB
of R/W traffic, and I am going to let it run for a few hours to make
sure it is stable.
You should be ready to go with LIO-Target r250 and the default Ubuntu
config with SLUB enabled. Please let me know if you run into any further
issues, and I will jump on them right away.
--nab
On Wed, 2008-02-13 at 14:40 +0100, Bart Van Assche wrote:
> On Feb 13, 2008 1:24 PM, Nicholas A. Bellinger <n...@linux-iscsi.org> wrote:
> > Sorry to keep bugging you on your progress. I am eager to hear
> > your results now that we have been able to get back CONFIG_SLUB.
>
> It might help to add the following to the kernel command line:
> slub_debug=FZPU. The output that I got on my setup proves that there
> is a memory corruption triggered when performing target discovery on
> LIO-SE. I'm not going to run any further LIO-SE tests until this is
> solved:
>
> ------------------------------------------------------------------
> HeaderDigest: None
> DataDigest: None
> MaxRecvDataSegmentLength: 32768
> IFMarker: No
> OFMarker: No
> ------------------------------------------------------------------
> ------------------------------------------------------------------
> InitiatorName: iqn.1993-08.org.debian:01:e52cfb64aea
> TargetAlias: iSBE Target
> InitiatorAlias: INF010
> TargetPortalGroupTag: 0
> DefaultTime2Wait: 2
> DefaultTime2Retain: 0
> ErrorRecoveryLevel: 0
> SessionType: Discovery
> ------------------------------------------------------------------
> iSCSI Login successful on CID: 0 from 192.168.102.10 to 192.168.102.12:3260,0
> Incremented iSCSI Connection count to 1 from node:
> iqn.1993-08.org.debian:01:e52cfb64aea
> Established iSCSI session from node: iqn.1993-08.org.debian:01:e52cfb64aea
> Incremented number of active iSCSI sessions to 1 on iSCSI Target Portal Group: 0
> =============================================================================
> Cleared np->np_login_tpg
> BUG kmalloc-16: Redzone overwritten
> -----------------------------------------------------------------------------
>
> INFO: 0xffff81007b2a6380-0xffff81007b2a6380. First byte 0x0 instead of 0xcc
> INFO: Allocated in iscsi_target_rx_thread+0x7f6/0x3330
> [iscsi_target_mod] age=0 cpu=1 pid=4449
> INFO: Freed in mthca_MAD_IFC+0x1be/0x240 [ib_mthca] age=5329 cpu=0 pid=3525
> INFO: Slab 0xffff810004209370 used=10 fp=0xffff81007b2a6318
> flags=0x40000000000000c3
> INFO: Object 0xffff81007b2a6370 @offset=880 fp=0xffff81007b2a6318
>
> Bytes b4 0xffff81007b2a6360: 0d b1 00 00 01 00 00 00 5a 5a 5a 5a 5a 5a 5a 5a ........ZZZZZZZZ
> Object 0xffff81007b2a6370: 53 65 6e 64 54 61 72 67 65 74 73 3d 41 6c 6c 00 SendTargets=All.
> Redzone 0xffff81007b2a6380: 00 cc cc cc cc cc cc cc ........
> Padding 0xffff81007b2a63c0: 5a 5a 5a 5a 5a 5a 5a 5a ZZZZZZZZ
> Pid: 4449, comm: iscsi_trx/0 Not tainted 2.6.24.2-dbg #1
>
> Call Trace:
> [<ffffffff80299a69>] check_bytes_and_report+0xb9/0x100
> [<ffffffff882e3842>] :iscsi_target_mod:iscsi_target_rx_thread+0x1302/0x3330
> [<ffffffff80299d16>] check_object+0x66/0x270
> [<ffffffff8029ac13>] __slab_free+0x1d3/0x2d0
> [<ffffffff882e3842>] :iscsi_target_mod:iscsi_target_rx_thread+0x1302/0x3330
> [<ffffffff8029b402>] kfree+0xb2/0x150
> [<ffffffff882e3842>] :iscsi_target_mod:iscsi_target_rx_thread+0x1302/0x3330
> [<ffffffff80234e37>] finish_task_switch+0x57/0xe0
> [<ffffffff8020d228>] child_rip+0xa/0x12
> [<ffffffff80234e37>] finish_task_switch+0x57/0xe0
> [<ffffffff8020c93f>] restore_args+0x0/0x30
> [<ffffffff882e2540>] :iscsi_target_mod:iscsi_target_rx_thread+0x0/0x3330
> [<ffffffff8020d21e>] child_rip+0x0/0x12
>
> FIX kmalloc-16: Restoring 0xffff81007b2a6380-0xffff81007b2a6380=0xcc
>
> Decremented iSCSI connection count to 0 from node:
> iqn.1993-08.org.debian:01:e52cfb64aea
> Released iSCSI session from node: iqn.1993-08.org.debian:01:e52cfb64aea
> Decremented number of active iSCSI Sessions on iSCSI TPG: 0 to 0
>
> Bart Van Assche.
Just FYI, this LIO-Target VM w/ r250 has been up all day and has moved a
few hundred GB of traffic from two different Open/iSCSI initiators. I
tested PSCSI (to a USB drive) and RAMDISK_DR, and everything looks fine.
Please let me know if you have any further questions about your setup.
--nab
Thanks for asking. I am trying to configure a target with LIO, but
open-iscsi discovery does not report any target nodes. All that
discovery reports is the following:
# iscsi_target_ip=192.168.102.12
# rm -rf /etc/iscsi/nodes /etc/iscsi/send_targets
# iscsiadm -m discovery -t sendtargets -p ${iscsi_target_ip}
10.100.100.12:3260,1 iqn.2007-05.com.example
192.168.102.12:3260,1 iqn.2007-05.com.example
There is probably something wrong with the way I configured LIO-SE.
It's not clear to me how to configure target node names via LIO-SE:
rmmod iscsi_target_mod
modprobe iscsi_target_mod
target-ctl settargetname targetname=iqn.2007-05.com.example
target-ctl addtpg tpgt=1
target-ctl settpgattrib tpgt=1 authentication=0
target-ctl addnptotpg tpgt=1 ip=$(ip -family inet addr show dev eth0 \
| sed -n 's:.*inet \([0-9.]*\).*:\1:p') port=3260
target-ctl addnptotpg tpgt=1 ip=$(ip -family inet addr show dev ib0 \
| sed -n 's:.*inet \([0-9.]*\).*:\1:p') port=3260
target-ctl addhbatotarget hba_id=0 hba_type=5 rd_host_id=0
target-ctl createvirtdev hba_id=0 rd_device_id=0 rd_pages=$((2**14/4096))
target-ctl addluntodev tpgt=1 iscsi_lun=0 hba_id=0 rd_device_id=0
target-ctl enabletpg tpgt=1
Bart Van Assche.
This is what would be expected from your setup below (ie: a single node
name + single target portal group with two network portals) when you
use Open/iSCSI to perform SendTargets=All with SessionType=Discovery.
From here with Open/iSCSI (once the Portals and TargetNames have been
reported, and can be seen with iscsiadm -m node), I am using the
following to establish SessionType=Normal and access iSCSI LUNs, etc:
iscsiadm -m node -T $IQN --portal $IP:$PORT --login
Do these steps differ from what you have tested with other targets and
Open/iSCSI..? If so, how so..?
> There is probably something wrong with the way I configured LIO-SE.
> It's not clear to me how to configure target node names via LIO-SE ?
>
> rmmod iscsi_target_mod
> modprobe iscsi_target_mod
> target-ctl settargetname targetname=iqn.2007-05.com.example
Btw, for these first three steps you really should be using
'/etc/init.d/target start'.
This script will generate /etc/targetname.iscsi from iscsi-name, and
load this IQN with settargetname after modprobe.
Also, you can set up and manage additional target node names (you don't
need this for your test, but it is handy to know) with target-ctl
coreaddtiqn and target-ctl coredeltiqn.
By default, when you issue any target-ctl command with 'tpgt=', it will
assume you are referencing the _DEFAULT_ target IQN that was set with
target-ctl settargetname. You can explicitly pass 'targetname=' to
reference a non-default target node name in any target-ctl command that
accepts 'tpgt='. Again, this is something that you do not have to worry
about now, but it allows the admin to take advantage of the full iSCSI
addressing model as described in "What does the conceptual model of
iSNS and iSCSI look like..?"
http://linux-iscsi.org/index.php/ISNS
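As a concrete sketch of the above (the IQN value below is made up for
illustration; the invocations follow the pattern described here, not a
verified transcript):

```shell
# Create a second, non-default target IQN alongside the default one
# that was set via 'target-ctl settargetname':
target-ctl coreaddtiqn targetname=iqn.2007-05.com.example:array2
# Subsequent commands reference it by passing targetname= explicitly;
# without targetname=, tpgt=1 would refer to the _DEFAULT_ IQN instead:
target-ctl addtpg targetname=iqn.2007-05.com.example:array2 tpgt=1
# Remove the extra IQN when it is no longer needed:
target-ctl coredeltiqn targetname=iqn.2007-05.com.example:array2
```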
> target-ctl addtpg tpgt=1
> target-ctl settpgattrib tpgt=1 authentication=0
> target-ctl addnptotpg tpgt=1 ip=$(ip -family inet addr show dev eth0 \
> | sed -n 's:.*inet \([0-9.]*\).*:\1:p') port=3260
> target-ctl addnptotpg tpgt=1 ip=$(ip -family inet addr show dev ib0 \
> | sed -n 's:.*inet \([0-9.]*\).*:\1:p') port=3260
> target-ctl addhbatotarget hba_id=0 hba_type=5 rd_host_id=0
> target-ctl createvirtdev hba_id=0 rd_device_id=0 rd_pages=$((2**14/4096))
> target-ctl addluntodev tpgt=1 iscsi_lun=0 hba_id=0 rd_device_id=0
> target-ctl enabletpg tpgt=1
>
This all looks fine, and can go in your /etc/iscsi/install.target, which
will be run by '/etc/init.d/target start' after modprobe, once
settargetname has been set from /etc/targetname.iscsi.
--nab
> Bart Van Assche.
>
Ok, I see what you mean.
The SendTargets=All payload will contain TargetName= values identical to
whatever you pass in via target-ctl settargetname or target-ctl
coreaddtiqn.
The list of exactly what will be returned during SendTargets can be
viewed with target-ctl listgninfo.
Btw, adding additional storage naming information (say, if you wanted to
use a portal group per storage object in your setup, as compared to
multiple LUNs per target portal group) is something that I can easily
add and enable if it helps the admin.
As I mentioned, LIO-Target implements the advanced method of
TargetName+TPGT+NP mappings, and you can create any arbitrary
configuration using target-ctl.
--nab
> Bart Van Assche.
>
Just for clarity, I was expecting something like this:
10.100.100.12:3260,1 iqn.2007-05.com.example:storage.disk2.sys1.xyz
192.168.102.12:3260,1 iqn.2007-05.com.example:storage.disk2.sys1.xyz
Bart Van Assche.
Apparently I also have to issue an 'addnodetotpg' command? With the
commands below I can log in from the initiator:
[ target ]
/bin/bash -c 'source /etc/init.d/target stop'
/bin/bash -c 'source /etc/init.d/target start'
target-ctl addtpg tpgt=1
target-ctl settpgattrib tpgt=1 authentication=0
target-ctl addnptotpg tpgt=1 ip=$(ip -family inet addr show dev eth0 \
| sed -n 's:.*inet \([0-9.]*\).*:\1:p') port=3260
target-ctl addnptotpg tpgt=1 ip=$(ip -family inet addr show dev ib0 \
| sed -n 's:.*inet \([0-9.]*\).*:\1:p') port=3260
target-ctl addhbatotarget hba_id=0 hba_type=5 rd_host_id=0
target-ctl createvirtdev hba_id=0 rd_device_id=0 rd_pages=$((2**14/4096))
target-ctl addluntodev tpgt=1 iscsi_lun=0 hba_id=0 rd_device_id=0
target-ctl addnodetotpg tpgt=1 queue_depth=32 \
  initiatorname=iqn.1993-08.org.debian:01:e52cfb64aea
target-ctl enabletpg tpgt=1
[ initiator ]
$ iscsiadm -m node -T iqn.2003-01.org.linux-iscsi.INF012.x86_64:sn.60326d629fd3 \
    -p ${iscsi_target_ip} --login
After this command the following is logged in the initiator kernel log:
scsi27 : iSCSI Initiator over TCP/IP
However, no device name (/dev/sd...) appears. How can I now access
this device from the initiator?
This is the output I get when logging in with the same iSCSI initiator
to another iSCSI target (SCST):
scsi4 : iSCSI Initiator over TCP/IP
scsi 4:0:0:0: Direct-Access SCST_FIO vdisk0 096 PQ: 0 ANSI: 4
sd 4:0:0:0: [sde] 4194304 512-byte hardware sectors (2147 MB)
sd 4:0:0:0: [sde] Write Protect is off
sd 4:0:0:0: [sde] Mode Sense: 6b 00 10 08
sd 4:0:0:0: [sde] Write cache: enabled, read cache: enabled, supports DPO and FUA
sd 4:0:0:0: [sde] 4194304 512-byte hardware sectors (2147 MB)
sd 4:0:0:0: [sde] Write Protect is off
sd 4:0:0:0: [sde] Mode Sense: 6b 00 10 08
sd 4:0:0:0: [sde] Write cache: enabled, read cache: enabled, supports DPO and FUA
sde: unknown partition table
sd 4:0:0:0: [sde] Attached SCSI disk
sd 4:0:0:0: Attached scsi generic sg5 type 0
Bart Van Assche.
Sorry, I was really tired last night and did not notice this in your
config. By default, LIO-Target requires that each exported SCSI Target
port (ie: addluntodev) be explicitly mapped to each SCSI Initiator
Port.
This means that addnodetotpg, and then subsequent addnodetolun calls are
required for each initiator to access each TPG mapped LUN. Please add
the additional line here:
target-ctl addnodetolun tpgt=1 iscsi_lun=0 mapped_lun=0 lun_access=1 \
  initiatorname=iqn.1993-08.org.debian:01:e52cfb64aea
FYI, iscsi_lun= is the value from addluntodev, mapped_lun= is what the
initiator will actually see (they can be different) and lun_access= is 0
== RO and 1 == RW.
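To make the iscsi_lun= vs. mapped_lun= distinction concrete, here is a
hypothetical sketch (the LUN numbers are invented for illustration) in
which the target-side LUN and the initiator-visible LUN differ:

```shell
# Target exports the device as iscsi_lun=3 on the TPG ...
target-ctl addluntodev tpgt=1 iscsi_lun=3 hba_id=0 rd_device_id=0
# ... but this initiator sees it as LUN 0, read-only (lun_access=0):
target-ctl addnodetolun tpgt=1 iscsi_lun=3 mapped_lun=0 lun_access=0 \
  initiatorname=iqn.1993-08.org.debian:01:e52cfb64aea
```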
Note that there is also the 'Demo Mode' feature, which is used by
default with the LIO-VM images, and will allow (on a per Target Portal
Group basis) all Initiators to have access to all SCSI Target Ports.
This can be enabled by adding:
/sbin/target-ctl settpgattrib tpgt=1 generate_node_acls=1
By default, all TYPE_DISK LUNs in Demo Mode will be RO. This is
intended to prevent folks using Demo Mode from overwriting devices that
do not have a cluster filesystem on them (which is usually the case).
For folks who really know what they are doing, there is also the
following attribute to enable RW with Demo Mode:
/sbin/target-ctl settpgattrib tpgt=1 demo_mode_lun_access=1
--nab
Btw, I was thinking about this some more, and the key point with regard
to any iSCSI IQN is that it is 1) unique and 2) permanent.
From the admin's point of view, the best way to describe an iSCSI Portal
Group with a human-readable string is with TargetAlias. This can be set
with:
target-ctl settpgparam tpgt=1 TargetAlias=SuperTurboDiskArray
--nab
Thanks for the help -- I can now log in with open-iscsi to a LIO-SE
target. Performance for 1 MB requests is as expected, but performance
for 512-byte requests is abnormally low (IPoIB):
$ dd if=/dev/sde of=/dev/null iflag=direct bs=1M count=1000
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 3.4495 seconds, 304 MB/s
$ dd if=/dev/sde of=/dev/null iflag=direct bs=512 count=1000
1000+0 records in
1000+0 records out
512000 bytes (512 kB) copied, 39.612 seconds, 12.9 kB/s
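The arithmetic behind that last number is worth spelling out: with
iflag=direct each 512-byte read is a single synchronous round trip, so
the elapsed time divides evenly over the requests (numbers below are
taken from the dd output above):

```shell
# 1000 requests of 512 bytes took 39.612 s, so each synchronous
# direct read cost ~39.6 ms, which caps throughput at ~12.9 kB/s.
awk 'BEGIN {
  bytes = 512; count = 1000; elapsed = 39.612   # values from the dd run
  printf "per-request latency: %.1f ms\n", elapsed / count * 1000
  printf "throughput: %.1f kB/s\n", bytes * count / elapsed / 1000
}'
```

In other words, the bottleneck looks like per-request latency, not
bandwidth.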
Target setup:
initiator=iqn.1993-08.org.debian:01:b5698b924985 # INF012
/bin/bash -c 'source /etc/init.d/target stop'
/bin/bash -c 'source /etc/init.d/target start'
target-ctl addtpg tpgt=1
target-ctl settpgattrib tpgt=1 authentication=0
target-ctl addnptotpg tpgt=1 ip=$(ip -family inet addr show dev eth0 \
| sed -n 's:.*inet \([0-9.]*\).*:\1:p') port=3260
target-ctl addnptotpg tpgt=1 ip=$(ip -family inet addr show dev ib0 \
| sed -n 's:.*inet \([0-9.]*\).*:\1:p') port=3260
target-ctl addhbatotarget hba_id=0 hba_type=5 rd_host_id=0
target-ctl createvirtdev hba_id=0 rd_device_id=0 rd_pages=$((2**31/4096))
target-ctl addluntodev tpgt=1 iscsi_lun=0 hba_id=0 rd_device_id=0
target-ctl addnodetotpg tpgt=1 queue_depth=32 initiatorname=${initiator}
target-ctl addnodetolun tpgt=1 iscsi_lun=0 mapped_lun=0 lun_access=1 \
  initiatorname=${initiator}
target-ctl enabletpg tpgt=1
Initiator setup:
iscsi_target_ip=192.168.102.10
targetname=iqn.2003-01.org.linux-iscsi.INF010.x86_64:sn.856bce71e53c
rm -rf /etc/iscsi/nodes /etc/iscsi/send_targets
iscsiadm -m discovery -t sendtargets -p ${iscsi_target_ip}
iscsiadm --mode node --targetname ${targetname} \
  --portal ${iscsi_target_ip}:3260 --op update \
  -n node.conn[0].iscsi.HeaderDigest -v None
iscsiadm -m node -T ${targetname} -p ${iscsi_target_ip} --login
Bart Van Assche.
Indeed. I have no explanation for why LIO-Target would be causing this
to be so slow. You can check /proc/iscsi_target/mibs/scsi_lu to see the
total CDBs, MB transferred, etc. It would be interesting to see whether
you notice anything strange about how quickly these numbers (especially
the total CDB count) get incremented while running the 512-byte block
test.
Could you try running this across the non-IPoIB network portal so we
can get a baseline for the small blocksize..? Also, what kinds of
numbers have you previously seen using IPoIB with other targets..? Is
there any way this could be related to the IPoIB performance jitter
with iperf you posted about earlier..?
--nab
The results over the Ethernet network are consistent:
# dd if=/dev/sde of=/dev/null iflag=direct bs=1M count=1000
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 9.43356 seconds, 111 MB/s
# dd if=/dev/sde of=/dev/null iflag=direct bs=512 count=1000
1000+0 records in
1000+0 records out
512000 bytes (512 kB) copied, 39.6742 seconds, 12.9 kB/s
The data transfer rate for 512-byte blocks is really slow: even
carrying one floppy disk per minute manually from target to initiator
yields a higher transfer rate (1440 KB / 60 s = 24 KB/s).
All other targets I tested (iETD, STGT, SCST) work considerably faster
for 512 byte blocks (transfer rates between 4 and 12 MB/s via IPoIB).
What transfer rate do you obtain for direct I/O of 512 byte blocks
over Ethernet ?
Bart Van Assche.
I am seeing the same issue with 512 and 1024 byte block sizes between
different VMs. Once I jump up to 2048 I get ~5 MB/sec, and everything
above 2k looks fine. I will keep looking and let you know if I find
anything.
--nab
> Bart Van Assche.
>
Ok, so 512 and 1024 byte WRITEs with iflag=direct are fine. Also,
removing iflag=direct from 512 and 1024 byte READs produces normal
results.
I am still trying a couple of different configurations to see if I can
locate the reason why <= 1024 w/ iflag=direct is so dead slow.
--nab
> --nab
>
> > Bart Van Assche.
> >
While read tests from the LIO-SE target work fine, write tests result
in strange errors (block size 2KB).
From the initiator kernel log:
iscsi: cmd 0x2a is not queued (8)
sd 4:0:0:0: [sde] Result: hostbyte=DID_NO_CONNECT
driverbyte=DRIVER_OK,SUGGEST_OK
end_request: I/O error, dev sde, sector 2161664
From the target kernel log:
Received iSCSI login request from 192.168.102.12 on TCP Network Portal
192.168.102.10:3260
Located Storage Object:
iqn.2003-01.org.linux-iscsi.INF010.x86_64:sn.856bce71e53c
Located Portal Group Object: 1
Set np->np_login_tpg to ffff8101588c8200
iscsi_handle_login_thread_timeout:665: ***ERROR*** iSCSI Login timeout
on Network Portal 192.168.102.10:3260
Bart Van Assche.
I am still doing some profiling here on the issue with <= 1024 READs. I
have not found anything just yet..
> write tests result
> in strange errors (block size 2KB).
>
> From the initiator kernel log:
>
> iscsi: cmd 0x2a is not queued (8)
> sd 4:0:0:0: [sde] Result: hostbyte=DID_NO_CONNECT
> driverbyte=DRIVER_OK,SUGGEST_OK
> end_request: I/O error, dev sde, sector 2161664
>
It looks like Open/iSCSI failed the outstanding struct scsi_cmnd back up
to the SCSI ML.
> From the target kernel log:
>
> Received iSCSI login request from 192.168.102.12 on TCP Network Portal
> 192.168.102.10:3260
> Located Storage Object:
> iqn.2003-01.org.linux-iscsi.INF010.x86_64:sn.856bce71e53c
> Located Portal Group Object: 1
> Set np->np_login_tpg to ffff8101588c8200
> iscsi_handle_login_thread_timeout:665: ***ERROR*** iSCSI Login timeout
> on Network Portal 192.168.102.10:3260
>
Here, the initiator did not complete the login phase and move to full
feature phase in time, and hence the login timeout handler fired. (This
timeout can be set with target-ctl settpgattrib login_timeout, btw.)
Are you still able to --logout and --login with Open/iSCSI..?
Are you able to /etc/init.d/target restart properly..?
--nab
The above happened while transferring data, not during a normal login.
So it is very suspicious that open-iscsi had to log in again -- this is
either a bug in open-iscsi or in LIO-SE.
After the above happened, LIO-SE on the target system seems to be locked up:
* Discovery from the initiator results in timeouts, even after having
rebooted the initiator system.
* The following netstat output shows that LIO-SE no longer reads the
data that arrived on its sockets (Recv-Q > 0):
$ netstat -aen|grep -Ew '^Active|^Proto|3260'
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address Foreign Address
State User Inode
tcp 0 0 192.168.102.10:3260 0.0.0.0:*
LISTEN 0 11347
tcp 0 0 10.100.100.10:3260 0.0.0.0:*
LISTEN 0 11337
tcp 525 0 192.168.102.10:3260 192.168.102.12:59518
CLOSE_WAIT 0 0
tcp 525 0 192.168.102.10:3260 192.168.102.12:59520
CLOSE_WAIT 0 0
tcp 97 0 192.168.102.10:3260 192.168.102.12:33320
CLOSE_WAIT 0 11373
tcp 1 0 192.168.102.10:3260 192.168.102.12:59515
CLOSE_WAIT 0 11374
tcp 525 0 192.168.102.10:3260 192.168.102.12:59517
CLOSE_WAIT 0 0
tcp 525 0 192.168.102.10:3260 192.168.102.12:59521
CLOSE_WAIT 0 0
tcp 525 0 192.168.102.10:3260 192.168.102.12:59519
CLOSE_WAIT 0 0
tcp 525 0 192.168.102.10:3260 192.168.102.12:59516
CLOSE_WAIT 0 0
Bart Van Assche.
Ok, I am thinking that this must have something to do with IPoIB, or the
interaction between LIO and IPoIB. The next question is:
Can this be reproduced every time running over the IPoIB network
portal..? Can this be reproduced running over the 1 Gb/sec network
portal..? I am guessing that things will be stable in the latter case.
I do not believe this is an issue with LIO-Target and LIO-SE scaling to
the typical IPoIB numbers you are seeing (~300 MB/sec). Over the years
on 10 Gb/sec Ethernet, going ~600 MB/sec to real (SAS) storage has been
typical, and using RAMDISK_DR, ~1200 MB/sec with multiple initiators is
also not a problem.
--nab
> Bart Van Assche.
>
You are making an assumption here; this is not something you have verified.
> Can this be reproduced every time running over the IPoIB network
> portal..? Can this be reproduced running over the 1 Gb/sec network
> portal..? I am guessing that things will be stable on the latter case.
Perfectly reproducible via IPoIB. The IPoIB stack works perfectly with
iETD / STGT and SCST. So this issue is probably a LIO-SE issue. I will
repeat the test via the Gigabit Ethernet network.
Bart Van Assche.
Yes, I am making this assumption because I don't have IB hardware handy
at the moment. I am also making this assumption because there are
plenty of nodes that have been using LIO-SE in production on 1 Gb/sec
and 10 Gb/sec ports (for years in some cases) without reporting any
problems.
> > Can this be reproduced every time running over the IPoIB network
> > portal..? Can this be reproduced running over the 1 Gb/sec network
> > portal..? I am guessing that things will be stable on the latter case.
>
> Perfectly reproducible via IPoIB. The IPoIB stack works perfectly with
> iETD / STGT and SCST. So this issue is probably a LIO-SE issue. I will
> repeat the test via the Gigabit Ethernet network.
They are working for your simple tests. As I do not believe you have
conducted any long-running stress tests or simulations on any of the
above implementations, I think that 'works perfectly' is an
exaggeration.
If you are really interested in algorithmically stress testing iSCSI
Target implementations, you should look at Core-iSCSI-dv. This will
give you a much better idea of the maturity of different iSCSI targets
than simply running dd.
http://linux-iscsi.org/index.php/Core-iSCSI-dv
Being able to pass Core-iSCSI-dv, a tool that writes known patterns
across the face of the media and then reads them back and compares,
will demonstrate maturity much better than your tests.
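The idea behind that kind of validation can be illustrated in miniature
(a toy sketch against a scratch file; this is not how Core-iSCSI-dv
itself is implemented):

```shell
# Write a known repeating pattern, then read it back and compare.
img=$(mktemp)
yes 'DEADBEEF' | head -c 65536 > "$img"            # write phase: known pattern
yes 'DEADBEEF' | head -c 65536 | cmp -s - "$img" \
  && echo "pattern verified" \
  || echo "corruption detected"                    # prints: pattern verified
rm -f "$img"
```

Against a real LUN you would do the same over the block device, across
the whole face of the media and at many block sizes.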
--nab
Not to my knowledge, no. I think the chances are very slim, considering
the number of eyes that go over patches posted to LKML and elsewhere
these days. Someone who maintains a particular tree would have to
overlook the malicious code (and then include it in their tree), and
the upstream maintainers who pull that tree would also have to miss it
during peer review.
--nab
Can you please repeat the argument about NBD in the thread about the
integration of a kernelspace iSCSI target in the mainstream Linux
kernel?
Thanks,
Bart Van Assche.