
ECKD-to-SCSI Performance


Kevin Corkery

Jul 27, 2015, 12:37:47 PM

Gang …

Has anyone switched from ECKD (DS6800) to SCSI (FBA) supported devices?  IBM cites a 13% increase in CPU usage.  Can anyone confirm this?  How is the throughput?  If throughput holds constant, which could be due to better performance, then the extra CPU usage is moot.  TIA for your comments.


Kevin P Corkery

Independent Consultant

Voorhees, New Jersey


Kris Buelens

Jul 27, 2015, 1:14:12 PM
If you still have free CPU cycles, the increase in CPU usage will not harm the application response time, and if the current I/O subsystem is overloaded you could gain something.

However, if you don't have free CPU cycles, you will not be able to exploit the higher I/O bandwidth.

Do you have z/VM and/or Linux?

The problem -in z/VM, but I guess VSE is similar- is that there is an emulation layer: traditional FBA I/O requests are converted to SCSI, and that costs some CPU.  In z/VM, only CP's paging routines were adapted to avoid the emulation layer, so VM can page with low overhead on SCSI.  I don't know about VSE.  Another source of increased CPU usage can be multipathing: with ECKD, path selection is done by z's I/O subsystem; with SCSI, path selection has to be done by the operating system.
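On Linux on z, for example, that OS-side path selection is typically handled by device-mapper multipath.  A minimal /etc/multipath.conf sketch (real option names from the standard multipath-tools package, shown only to illustrate where the path-selection policy now lives -- it is the OS, not the channel subsystem, choosing the path):

```conf
defaults {
    path_grouping_policy  multibus          # treat all paths to a LUN as one group
    path_selector         "round-robin 0"   # the OS rotates I/O across the paths
    failback              immediate         # return to preferred paths when restored
}
```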

I don't know VSE details, but Linux on z is somewhat happier with SCSI devices, and can indeed get the benefits.  The VM website has some performance reports available.

If you go from the DS6000 to a newer device (DS8000), you may find gains, even when staying with ECKD.


Kris Buelens,
     --- freelance z/VM consultant, Belgium ---
-----------------------------------------------------------------------

_______________________________________________
VSE-L mailing list
VS...@lists.lehigh.edu
https://lists.lehigh.edu/mailman/listinfo/vse-l


Kevin Corkery

Jul 27, 2015, 1:38:45 PM

This is native z/VSE on a z114-A01, so there aren't a lot of CPU cycles.  It's not a huge workload, and measurements indicate that we use only 2 of the 3 MSUs available.  We can drive the CPU near 100% when we're doing a lot of concurrent processing, so I would say we're CPU-bound.

The problem is that a SCSI solution is the only price-competitive alternative to the DS6800.  We'll be running the V3700 at 4 Gbps vs. 2 Gbps for the DS6800.  We could go with 8 Gbps FC, but that's more money that they don't want to spend.  One thing we do have a lot of is real memory on the z114.  That may lend itself to a caching product to avoid I/O, but it could be a trade-off, since it would also require CPU cycles for look-aside processing.

The bottom line is that we're going to need to move forward on this; I'm just doing some triage so as to temper expectations and provide some alternatives.  Anyway, that's why my original question was about throughput.  When we went from the z9 to the z114 we saw some unusual measurements at IPL time and during our batch.  Once things settled in, normal processing times were pretty much consistent, although certain jobs ran better, others ran worse, and others ran about the same.  In all, we put the newer hardware in place and pretty much maintained our service levels as before.  I'd really prefer to come out of an I/O-system swap with something looking like a win rather than a lateral move, but that may not be possible.
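The look-aside trade-off mentioned above can be sketched in a few lines.  This is purely illustrative (no real z/VSE caching product or API -- the class and function names are made up): every read pays a hash lookup in CPU, and only misses pay the device I/O.

```python
# Minimal sketch of a look-aside (read-through) block cache.  Illustrates the
# trade: each read costs a lookup (CPU), but a cache hit avoids device I/O.

class LookAsideCache:
    def __init__(self, backing_read, capacity=1024):
        self.backing_read = backing_read   # function: block number -> data
        self.capacity = capacity
        self.cache = {}                    # block number -> cached data
        self.hits = 0
        self.misses = 0

    def read(self, block):
        if block in self.cache:            # look-aside check: CPU cost on every read
            self.hits += 1
            return self.cache[block]
        self.misses += 1
        data = self.backing_read(block)    # cache miss: pay the real device I/O
        if len(self.cache) >= self.capacity:
            self.cache.pop(next(iter(self.cache)))  # crude FIFO eviction
        self.cache[block] = data
        return data

# Usage: simulate a backing device and read the same block twice.
device = LookAsideCache(backing_read=lambda b: f"data-{b}")
device.read(42)                            # miss: goes to the "device"
device.read(42)                            # hit: served from memory
print(device.hits, device.misses)          # → 1 1
```

Whether this is a win depends on exactly the balance in question: the hit rate has to buy back more I/O wait than the lookups cost in an already CPU-bound system.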
