Hi,
I've seen these messages on one of my IET servers. What could be the cause?
Mar 6 15:53:06 iscsi-prod-05 kernel: iscsi_trgt: Abort Task (01) issued on tid:1 lun:1 by sid:1131397468979712 (Function Complete)
Mar 6 15:53:06 iscsi-prod-05 kernel: iscsi_trgt: Abort Task (01) issued on tid:1 lun:0 by sid:1131397468979712 (Function Complete)
Mar 6 15:53:06 iscsi-prod-05 kernel: iscsi_trgt: Abort Task (01) issued on tid:1 lun:2 by sid:3946147236086272 (Function Complete)
Mar 6 15:53:06 iscsi-prod-05 kernel: iscsi_trgt: Abort Task (01) issued on tid:1 lun:3 by sid:3946147236086272 (Function Complete)
Mar 6 15:53:06 iscsi-prod-05 kernel: iscsi_trgt: Abort Task (01) issued on tid:1 lun:2 by sid:4230920747680256 (Function Complete)
Mar 6 15:53:06 iscsi-prod-05 kernel: iscsi_trgt: Abort Task (01) issued on tid:1 lun:3 by sid:3946147236086272 (Function Complete)
Mar 6 15:53:11 iscsi-prod-05 last message repeated 33 times
Mar 6 15:53:11 iscsi-prod-05 kernel: iscsi_trgt: Abort Task (01) issued on tid:1 lun:3 by sid:4230920747680256 (Function Complete)
Mar 6 15:53:12 iscsi-prod-05 last message repeated 35 times
Mar 6 15:53:12 iscsi-prod-05 kernel: iscsi_trgt: Abort Task (01) issued on tid:1 lun:3 by sid:3946147236086272 (Function Complete)
Mar 6 15:54:34 iscsi-prod-05 kernel: iscsi_trgt: Abort Task (01) issued on tid:1 lun:3 by sid:4512395724390912 (Function Complete)
Mar 6 15:54:37 iscsi-prod-05 kernel: iscsi_trgt: Abort Task (01) issued on tid:1 lun:3 by sid:4790572166218240 (Function Complete)
Mar 6 15:54:37 iscsi-prod-05 last message repeated 2 times
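For what it's worth, a quick shell sketch (not from the thread; the helper name `tally_aborts` is made up) to tally these aborts per initiator session ID, which shows at a glance whether one initiator is responsible. On a live system you would pipe `/var/log/messages` into it instead of the here-document:

```shell
# Count "Abort Task" messages per initiator session ID (sid);
# a heavy skew toward one sid points at a single misbehaving initiator.
tally_aborts() {
  grep 'iscsi_trgt: Abort Task' |
    sed -n 's/.*by sid:\([0-9]*\).*/\1/p' |
    sort | uniq -c | sort -rn
}

# Sample lines from the log excerpt above; prints one count per sid,
# highest first.
tally_aborts <<'EOF'
Mar  6 15:53:06 iscsi-prod-05 kernel: iscsi_trgt: Abort Task (01) issued on tid:1 lun:1 by sid:1131397468979712 (Function Complete)
Mar  6 15:53:06 iscsi-prod-05 kernel: iscsi_trgt: Abort Task (01) issued on tid:1 lun:2 by sid:3946147236086272 (Function Complete)
Mar  6 15:53:06 iscsi-prod-05 kernel: iscsi_trgt: Abort Task (01) issued on tid:1 lun:3 by sid:3946147236086272 (Function Complete)
EOF
```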
It’s a CentOS 5.7 x64 box with the latest stable IET.
Kind regards, Marko Kobal
CTO, Arctur d.o.o.
From: Marko Kobal [mailto:marko...@arctur.si]
Sent: Tuesday, March 06, 2012 1:29 PM
To: iscsitar...@lists.sourceforge.net
Subject: [Iscsitarget-devel] iscsi_trgt: Abort Task (01) issued on tid:1 lun:1
One more thing, make sure this target, tid:1, isn't servicing multiple
systems unless they are utilizing a cluster file system. Throwing all
your disks into a single target and having all your servers connect to
it is a sure way to corrupt your data.
Best practice is to put each disk into its own target, as LUN 0 of
that target. This allows for better control and better performance,
as each disk gets dedicated worker threads instead of sharing the
same worker threads.
Most commercial targets don't support multiple LUNs per target, just
to avoid these issues with customers.
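A minimal ietd.conf sketch of the layout described above (the IQNs and device paths are illustrative assumptions, not taken from this thread):

```
# /etc/iet/ietd.conf -- one disk per target, each exported as LUN 0
Target iqn.2012-03.example:storage.disk1
    Lun 0 Path=/dev/sdb,Type=blockio
Target iqn.2012-03.example:storage.disk2
    Lun 0 Path=/dev/sdc,Type=blockio
```

Each initiator then logs in to the targets it needs, rather than all initiators sharing one multi-LUN target.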
-Ross
> One more thing, make sure this target, tid:1, isn't servicing multiple
> systems unless they are utilizing a cluster file system. Throwing all
> your disks into a single target and having all your servers connect
> to it is a sure way to corrupt your data.
It's serving VMFS, so that shouldn't be a problem.
> Best practice is to put each disk into its own target. This would
> be LUN 0 in each target.
Huh, I didn't realize that ... I'll take that into account in my
future implementations.
Kind regards, Marko Kobal
CTO, Arctur d.o.o.
Hi,
> Bad communications.
The problematic component turned out to be the e1000e Intel 82572EI NIC. I had the 1.6.2-NAPI drivers and have now upgraded to 1.9.5-NAPI. It looks good for now, but I'll keep monitoring it closely in case there are underlying NIC hardware issues. (By the way, e1000e really has some major issues under CentOS 5 in combination with various middleware...)