
CeBIT and mainframes


Elardus Engelbrecht

Mar 17, 2016, 9:01:54 AM
Hi

I played around the CeBIT website and came across this interesting thing:

http://www.cebit.de/exhibitor/lzlabs/E363469

http://www.bankingtech.com/454942/lzlabs-unveils-worlds-first-software-defined-mainframe/

I see this note:

LzLabs Software Defined Mainframe (TM) enables both Red Hat Linux and Cloud infrastructure such as Microsoft's Azure to process thousands of transactions per second, while maintaining enterprise requirements for reliability, scalability, serviceability and security. This software solution includes a faithful re-creation of the primary online, batch and database environments, which enables unrivaled compatibility and exceptional performance, to dramatically reduce IT costs.

Wonder what is big blue saying of this interesting development?

PS: I am NOT with CeBIT or LzLabs or anything with them.

Groete / Greetings
Elardus Engelbrecht

----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to list...@listserv.ua.edu with the message: INFO IBM-MAIN

Joel C. Ewing

Mar 17, 2016, 11:37:51 AM
On 03/17/2016 08:01 AM, Elardus Engelbrecht wrote:
> Hi
>
> I played around the CeBIT website and came across this interesting thing:
>
> http://www.cebit.de/exhibitor/lzlabs/E363469
>
> http://www.bankingtech.com/454942/lzlabs-unveils-worlds-first-software-defined-mainframe/
>
> I see this note:
>
> LzLabs Software Defined Mainframe (TM) enables both Red Hat Linux and Cloud infrastructure such as Microsoft's Azure to process thousands of transactions per second, while maintaining enterprise requirements for reliability, scalability, serviceability and security. This software solution includes a faithful re-creation of the primary online, batch and database environments, which enables unrivaled compatibility and exceptional performance, to dramatically reduce IT costs.
>
> Wonder what is big blue saying of this interesting development?
>
> PS: I am NOT with CeBIT or LzLabs or anything with them.
>
> Groete / Greetings
> Elardus Engelbrecht
>
> ...
I notice they also claim
"no need for recompilation of Cobol or PL/1 application programmes, no
source code changes, or changes to operational procedures".

So they have somehow managed to replicate the functional behavior of all
the SVC and PC interfaces and control blocks that application code might
be using in z/OS batch and CICS environments, to replicate the
functional behavior of I/O to data sets that batch jobs and CICS
transactions might be doing, to replicate all the CICS APIs and CICS
control blocks CICS applications might be using, to replicate all the LE
run time support needed to execute COBOL and PL/I programs in batch and
CICS, to replicate all the related DB2 functional APIs, and to emulate
the execution of z-architecture application program code in batch and
CICS environments, and to replicate operational interfaces. And since
security was "maintained", that implies they have also managed to
replicate the functionality of RACF for their batch, CICS, and DB2
environments, and integrated that security somehow into the supporting
physical operating environment to secure the "mainframe" data from
external tampering. In other words, to do what they seem to claim, they
have re-implemented a significant portion of z/OS and some major
subsystems of z/OS for another hardware platform. All correctly and
without infringing on any IBM patents or licensing restrictions? And
have achieved reasonable transaction rates without sacrificing
"reliability, scalability , serviceability, and security" on hardware
platforms that have historically been less robust than z-architecture?

Color me skeptical.
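To make the scale of that concrete: the core of such an environment has to intercept every supervisor call the application issues and route it to a host-side re-implementation. A toy Python sketch of the idea (entirely hypothetical -- the SVC numbers 1/WAIT and 10/R-form GETMAIN are real z/OS assignments, but none of this resembles actual LzLabs code):

```python
def svc_getmain(regs, memory):
    """Stand-in for SVC 10 (R-form GETMAIN/FREEMAIN): 'allocate' R0 bytes."""
    size = regs["R0"]
    addr = len(memory)
    memory.extend(b"\x00" * size)   # grow a flat byte array as fake storage
    regs["R1"] = addr               # address of the area is returned in R1
    return regs

def svc_wait(regs, memory):
    """Stand-in for SVC 1 (WAIT): nothing to wait on in a single-task toy."""
    return regs

# The emulator's SVC table: SVC number -> host-side implementation.
SVC_TABLE = {1: svc_wait, 10: svc_getmain}

def execute_svc(number, regs, memory):
    """Called when the interpreted application code issues 'SVC n'."""
    handler = SVC_TABLE.get(number)
    if handler is None:
        raise NotImplementedError(f"SVC {number} not replicated")
    return handler(regs, memory)

mem = bytearray()
r = execute_svc(10, {"R0": 4096, "R1": 0}, mem)
print(r["R1"], len(mem))   # 0 4096
```

The hard part is not the dispatch mechanism, it is filling that table -- correctly, for every SVC, PC routine, and control block a real workload touches.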

They don't say no re-linking of load modules, which makes me suspect
that to be legal you would have to re-link and somehow replace any
linked-in LE run time modules, since those modules would be IBM-licensed
code.

Even "stabilized" applications may require occasional minor changes --
e.g., to adapt to trivial changes in local sales tax rates. Without a
mainframe compiler, even a trivial change becomes a difficult load-module
patch.

--
Joel C. Ewing, Bentonville, AR jce...@acm.org

David L. Craig

Mar 17, 2016, 1:20:00 PM
Queue the IP lawyers, and action! This looks to me like a firm built to be
acquired for a tidy lump-sum payoff by the competition, with subsequent
euthanasia. I'll be slow to migrate off that proven decades-old
mission-critical platform, that's for sure.
--
<not cent from sell>
May the LORD God bless you exceedingly abundantly!

Dave_Craig______________________________________________
"So the universe is not quite as you thought it was.
You'd better rearrange your beliefs, then.
Because you certainly can't rearrange the universe."
__--from_Nightfall_by_Asimov/Silverberg_________________

Mike Schwab

Mar 17, 2016, 1:47:18 PM
Sounds a lot like http://www.z390.org/ .
It took about 5 years for one guy to develop.
It emulates hardware instructions and operating system calls. No IBM
software (other than macro definitions for the system calls).
--
Mike A Schwab, Springfield IL USA
Where do Forest Rangers go to get away from it all?

Elardus Engelbrecht

Mar 18, 2016, 4:04:20 AM
Joel C. Ewing wrote:

>I notice they also claim "no need for recompilation of Cobol or PL/1 application programmes, no source code changes, or changes to operational procedures".

I am really struggling to swallow that claim...

and Dave_Craig wrote:

>Queue the IP lawyers, and action!

This is why I said this is 'interesting development'. I am very sure big blue and their lawyers won't like that one bit unless there is an agreement.

> I'll be slow to migrate off that proven decades-old mission-critical platform, that's for sure.

I will also be slow to get off the mainframes.

Joel C. Ewing wrote:

>Color me skeptical.

With all the colors of the rainbow! ;-)

Groete / Greetings
Elardus Engelbrecht

R.S.

Mar 18, 2016, 5:59:47 AM
IMHO they don't run compiled code. They recompile source code.
Clue: Raincode is their technology partner. Raincode makes COBOL compiler.

--
Radoslaw Skorupka
Lodz, Poland







Mark Regan

Mar 18, 2016, 6:02:00 AM
There is a ComputerWeekly article on this product at
http://www.computerweekly.com/blogs/quocirca-insights/2016/03/the-software-defined-mainframe.html.
I noticed that the article's author needs to learn how to spell the acronym
EBCDIC: he spells it 'EBSDIC', twice.
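(EBCDIC, the Extended Binary Coded Decimal Interchange Code, is easy to sanity-check, since Python ships the CP037 US/Canada EBCDIC code page as a standard codec -- a quick illustrative sketch:)

```python
# Round-trip a string through the CP037 EBCDIC code page.
text = "EBCDIC"
ebcdic = text.encode("cp037")
print(ebcdic.hex().upper())          # C5C2C3C4C9C3
assert ebcdic.decode("cp037") == text

# 'A' is X'C1' in EBCDIC but X'41' in ASCII -- the classic tell
# when a data set has been moved between platforms unconverted.
print("A".encode("cp037").hex().upper(), "A".encode("ascii").hex().upper())
```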

Bill Woodger

Mar 18, 2016, 9:31:03 AM
A Google Translate rendering of part of the final article (in French) from the media section of the LzLabs website.

"Lzlabs technology leans on a container system which embeds the mainframe application and data. The application and its lines of code are included and the native format of the original data is kept - all without recompilation. "The only thing we are changing is the APIs. We take the old APIs, and replace them with ours, "said Thilo Rockmann."

They have a little video which shows the running of the NIST compiler-validation suite, compiled on a Mainframe and run in their container, with 260 JCL steps (they proudly point out that this is more than the 255-step-per-job limit the Mainframe supports...).

The NIST suite is doing nothing fancy; it is for validating COBOL compilers to the 1985 Standard. Running the compiled code would show that all COBOL statements work.

"we take the old APIs, and replace them" in this case seems to mean "taking the COBOL runtime (Language Environment) and replacing its functionality". In the video there is a brief simple mention of "ported" without explanation. This could mean, effectively, a "relink" as suggested earlier in this thread.

There is a mention of COBOL-IT in the French article for programs which do need to be recompiled.

Raincode has experience, and tools, albeit directed at .NET, including an alleged DFSORT-compatible product.



On Friday, 18 March 2016 09:59:47 UTC, R.S. wrote:
> IMHO they don't run compiled code. They recompile source code.
> Clue: Raincode is their technology partner. Raincode makes COBOL compiler.
>
> --
> Radoslaw Skorupka
> Lodz, Poland

Itschak Mugzach

Mar 18, 2016, 10:27:26 AM
no recompile involved. Just relink to replace IBM's LE modules.

Itschak

Clark Morris

Mar 18, 2016, 2:00:36 PM
On 18 Mar 2016 07:27:09 -0700, in bit.listserv.ibm-main Itschak wrote:

>no recompile involved. Just relink to replace IBM's LE modules.
So what processor(s) is this code running on? What exactly is being
done?

Clark Morris

John McKown

Mar 18, 2016, 2:11:27 PM
I just read the article. Interestin, but, really, EBSDIC? Twice?!?

On Fri, Mar 18, 2016 at 1:00 PM, Clark Morris <cfmp...@ns.sympatico.ca>
wrote:
--
A fail-safe circuit will destroy others. -- Klipstein

Maranatha! <><
John McKown

Tom Brennan

Mar 18, 2016, 2:35:11 PM
Maybe EBSDIC is just like colour vs. color, spanner vs. wrench.

John McKown wrote:
> I just read the article. Interestin, but, really, EBSDIC? Twice?!?
>

Mike Schwab

Mar 18, 2016, 2:47:05 PM
It has a small proof-of-concept box that it can make available to
those running mainframe apps where they can see how it works and try
out some of their own applications. This box, based on an Intel NUC
running an i7 CPU, is smaller than the size of a hardback book, but
can run workloads as if it was a reasonable-sized mainframe.

On Fri, Mar 18, 2016 at 1:00 PM, Clark Morris <cfmp...@ns.sympatico.ca> wrote:
> On 18 Mar 2016 07:27:09 -0700, in bit.listserv.ibm-main Itschak wrote:
>
>>no recompile involved. Just relink to replace IBM's LE modules.
> So what processor(s) is this code running on? What exactly is being
> done?
>
> Clark Morris
>>
>>Itschak
>>
>>
>>
>>On Fri, Mar 18, 2016 at 12:01 PM, Mark Regan <markt...@gmail.com> wrote:
>>
>>> There is a ComputerWeekly article on this product at
>>>
>>> http://www.computerweekly.com/blogs/quocirca-insights/2016/03/the-software-defined-mainframe.html



--
Mike A Schwab, Springfield IL USA
Where do Forest Rangers go to get away from it all?

Ed Gould

Mar 19, 2016, 12:14:19 AM
On Mar 18, 2016, at 1:00 PM, Clark Morris wrote:

> On 18 Mar 2016 07:27:09 -0700, in bit.listserv.ibm-main Itschak wrote:
>
>> no recompile involved. Just relink to replace IBM's LE modules.
> So what processor(s) is this code running on? What exactly is being
> done?
>
> Clark Morris

--------------SNIP----------------------------

So ... how do you know when an LE module changes?

Manual effort?

Ed

Dave Wade

Mar 19, 2016, 7:12:49 AM
Please note I have no connection with lzlabs, other than I know some people who work there from other things I have dabbled in....

1) On that note it's worth doing a LinkedIn search and seeing who says they work for LzLabs. I notice a couple of people I know from the Hercules project that I didn't know worked there, plus Laurence Wilkinson, who built an FPGA 360/30 clone -- so folks who have been down the battlefield with IBM and know a little bit about mainframes.

2) The approach isn't totally new or original. The original Don Higgins MS-DOS IBM 370 Assembler and Emulator took a similar approach, emulating the hardware for the problem-state code and implementing the SVCs as shims in x86 code that call the MS-DOS APIs. This is still downloadable freeware. Then there was the MicroFocus COBOL, which evolved from this. More recently Don wrote the Z390 Java emulator, which provides support for a number of APIs including parts of CICS and some access methods. This product continues to be available at www.z390.org, although Don no longer takes an active part in working on the project.

In fact its a bit like SVC's in VM/370. The code which handles them is very different to that in the OS world, but the code still runs....

3) I always thought the LE APIs were, like most IBM mainframe interfaces, backwards compatible. So if they changed in a way that would break code running on the LzLabs software-defined mainframe, they would break the same code on a real mainframe.

4) Of course the challenge is to have an emulation which is both accurate and complete -- and, of course, cheap enough to be cost-effective. If we are talking legacy code, then LE is possibly irrelevant and it's the basic access methods such as QSAM and VSAM that would be critical. Also, how do you provide RACF protection? What really is the scope of this project? This sort of question probably won't get answered, because the journalists don't know enough to ask it. I wonder if the current testing phase is also about defining the scope needed.
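For anyone who hasn't seen one, the "emulating the hardware for the problem-state code" part of point 2 is conceptually just a fetch/decode/execute loop. A minimal illustrative sketch in Python, covering only two real RR-format opcodes (LR is x'18', AR is x'1A'); everything else about a real emulator is elided:

```python
def run(code, regs):
    """Interpret a byte string of RR-format instructions against 16 registers."""
    pc = 0
    while pc < len(code):
        opcode = code[pc]
        r1, r2 = code[pc + 1] >> 4, code[pc + 1] & 0x0F
        if opcode == 0x18:      # LR r1,r2 : load register
            regs[r1] = regs[r2]
        elif opcode == 0x1A:    # AR r1,r2 : add register (32-bit wraparound)
            regs[r1] = (regs[r1] + regs[r2]) & 0xFFFFFFFF
        else:
            raise NotImplementedError(f"opcode {opcode:02X}")
        pc += 2                 # RR instructions are 2 bytes long
    return regs

registers = [0] * 16
registers[2], registers[3] = 5, 7
# LR 1,2 ; AR 1,3  ->  R1 = 5 + 7 = 12
run(bytes([0x18, 0x12, 0x1A, 0x13]), registers)
print(registers[1])   # 12
```

The loop itself is the easy 5%; condition codes, interrupts, storage keys, serialization and the full instruction set are the other 95%.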


Dave Wade

Anne & Lynn Wheeler

Mar 19, 2016, 10:29:42 AM
dave....@GMAIL.COM (Dave Wade) writes:
> In fact its a bit like SVC's in VM/370. The code which handles them is
> very different to that in the OS world, but the code still runs....

there was a joke about the time MVS came out with an 8mbyte kernel image in
every virtual address space ... that the 32kbyte os/360 system services
simulation in VM/CMS was a lot more efficient than the 8mbyte os/360
system services simulation in MVS.

--
virtualization experience starting Jan1968, online at home since Mar1970

Mick Graley

Mar 19, 2016, 10:30:06 AM
Nah, not letting him off that easily!
The word "coded" is the same in both languages, and BCD ¬= BSD.
Like us Brits tend to say "kicks" and the Americans tend to say "see,
eye, see, ess" but it's still actually CICS :-)
Cheers,
Mick.

Joel C. Ewing

Mar 19, 2016, 3:42:49 PM
On 03/18/2016 11:14 PM, Ed Gould wrote:
> On Mar 18, 2016, at 1:00 PM, Clark Morris wrote:
>
>> On 18 Mar 2016 07:27:09 -0700, in bit.listserv.ibm-main Itschak wrote:
>>
>>> no recompile involved. Just relink to replace IBM's LE modules.
>> So what processor(s) is this code running on? What exactly is being
>> done?
>>
>> Clark Morris
>
> --------------SNIP----------------------------
>
> So ... how do you know when an LE module changes?
>
> Manual effort?
>
> Ed
>
I'm pretty sure he means re-link to replace any IBM-licensed LE modules
within application load modules with non-IBM functional replacement
modules that are either part of or interface to whatever replaces the LE
run time in the new environment. If that is successful, the resulting
load module is no longer associated with IBM's LE and you couldn't care
less if IBM makes subsequent LE changes. In the unlikely event that
fixes/enhancements to the LzLabs run time environment might have
versioning issues that require re-linking to change their interface
modules at some later time, that would be an issue independent from any
changes in IBM's LE.
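A hypothetical sketch of the triage such a re-link implies: LE runtime CSECTs conventionally carry the CEE prefix (IGZ for the older VS COBOL II runtime), so statically linked IBM modules are at least identifiable by name. The module contents below are invented for illustration:

```python
# Prefixes conventionally used by IBM runtime CSECTs:
# CEE* = Language Environment, IGZ* = VS COBOL II runtime.
LE_PREFIXES = ("CEE", "IGZ")

def needs_replacement(csects):
    """Partition a load module's CSECT names into customer code vs.
    IBM runtime routines that would need functional replacement."""
    ibm = [c for c in csects if c.startswith(LE_PREFIXES)]
    customer = [c for c in csects if not c.startswith(LE_PREFIXES)]
    return customer, ibm

# Invented example load module contents:
module = ["PAYROLL1", "CEESTART", "CEEMAIN", "IGZCBSO", "TAXCALC"]
customer, ibm = needs_replacement(module)
print(customer)   # ['PAYROLL1', 'TAXCALC']
print(ibm)        # ['CEESTART', 'CEEMAIN', 'IGZCBSO']
```

Identifying the modules is the trivial part; providing working functional replacements for them is the whole product.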

--
Joel C. Ewing, Bentonville, AR jce...@acm.org

Ed Gould

Mar 19, 2016, 4:13:33 PM
Unless he relinks the module(s) to pick up any changed LE modules
each time maintenance is applied, he will be in for a big surprise.
LE is *KNOWN* for changing the rules without telling anyone. LE is
the *WORST* IBM product ever delivered, IMO.

Ed

Frank Clarke

Mar 19, 2016, 4:47:25 PM
On 19 Mar 2016 13:13:16 -0700, edgou...@COMCAST.NET (Ed Gould)
wrote:


>LE is *KNOWN* for changing the rules without telling anyone. LE is
>the *WORST* IBM product ever delivered, IMO.

LE was not 'delivered'. LE was 'foisted'.


Frank Clarke
m5s...@tampabay.rr.com
re20...@yahoo.comm
(Change Arabic numerals to Roman to email)

Ed Jaffe

Mar 19, 2016, 5:53:25 PM
On 3/18/2016 6:30 AM, Bill Woodger wrote:
> A google-translate of part of the final article in French from the media section of the LzLabs website.
>
> "Lzlabs technology leans on a container system which embeds the mainframe application and data. The application and its lines of code are included and the native format of the original data is kept - all without recompilation. "The only thing we are changing is the APIs. We take the old APIs, and replace them with ours, "said Thilo Rockmann."

This is what happens when a billionaire loses a court battle with IBM.
He strikes back!

--
Edward E Jaffe
Phoenix Software International, Inc
831 Parkview Drive North
El Segundo, CA 90245
http://www.phoenixsoftware.com/

pro...@berkeley.edu

Mar 19, 2016, 6:49:08 PM
On Saturday, March 19, 2016 at 7:30:06 AM UTC-7, Mick Graley wrote:
> Nah, not letting him off that easily!
> The word "coded" is the same in both languages, and BCD ¬= BSD.
> Like us Brits tend to say "kicks" and the Americans tend to say "see,
> eye, see, ess" but it's still actually CICS :-)
> Cheers,
> Mick.
>

But if you looked closely, the little clock showed 'Six minutes after Six'...

;-)
--
Phil Robyn
U.C. Berkeley (retired)

Clark Morris

Mar 19, 2016, 9:35:42 PM
On 19 Mar 2016 14:53:09 -0700, in bit.listserv.ibm-main you wrote:

>On 3/18/2016 6:30 AM, Bill Woodger wrote:
>> A google-translate of part of the final article in French from the media section of the LzLabs website.
>>
>> "Lzlabs technology leans on a container system which embeds the mainframe application and data. The application and its lines of code are included and the native format of the original data is kept - all without recompilation. "The only thing we are changing is the APIs. We take the old APIs, and replace them with ours, "said Thilo Rockmann."
>
>This is what happens when a billionaire loses a court battle with IBM.

Can we expect this product to be the subject of a court battle if it
is successful in doing what it claims? What are the medium to long
term implications for the z series? The i and p series?

Clark Morris
>He strikes back!

Tom Marchant

Mar 19, 2016, 10:50:35 PM
On Fri, 18 Mar 2016 13:46:55 -0500, Mike Schwab <mike.a...@GMAIL.COM> wrote:

>It has a small proof-of-concept box that it can make available to
>those running mainframe apps where they can see how it works and try
>out some of their own applications. This box, based on an Intel NUC
>running an i7 CPU, is smaller than the size of a hardback book, but
>can run workloads as if it was a reasonable-sized mainframe.

Right.

Assuming that they can interpret the z/Architecture instructions at a reasonable
rate, since they don't recompile. And assuming that they can provide emulation
for all of the things that real systems do, including serialization of accesses. And
assuming that they can lash together enough commodity x86 systems to perform
significant real work.

I'm skeptical of all that but assuming all that, where are they going to get the I/O
bandwidth needed?

--
Tom Marchant

Joel C. Ewing

Mar 20, 2016, 12:04:31 AM
Ed,
As I understand it, in a context where there would be NO IBM LE modules,
your comment simply doesn't make sense: the context being executing
legacy mainframe application object code on non-mainframe hardware, in a
non-IBM operating environment that has NO IBM run time libraries or
code, an environment that supposedly runs application load modules
designed for CICS and batch environments by replicating the
functionality of application interfaces that would be needed by such
load modules previously generated by COBOL and PL/I compilers for
those environments.

Surely a customer owns the rights to object modules generated by
compiling customer source code, but just as surely IBM owns the rights
to any modules generated solely from IBM source code. The LE run time
environment is part of IBM's licensed code on z/OS. Unless you believe
IBM is going to reverse its Hercules policy and start licensing IBM
software to run in non-IBM-sanctioned hardware environments, that means
the LzLabs implementation must not depend on any IBM LE modules or any
other IBM code for providing LE-like functionality in its execution
environment, and any IBM code modules that were linked into the
application load modules for run time support would likely need to be
functionally replaced to legally run the application under an LzLabs
environment. Dynamic linking to IBM LE run time library routines would
also be unavailable. We are talking here about an environment to
execute functionally-frozen mainframe application code -- there are no
claims of support for application development or use of COBOL and PL/I
compilers under LzLabs.

Once an application successfully makes the jump to the LzLabs
environment, that of necessity means it is no longer running any LE or
other IBM restricted-license code and that all LE-like functionality
provided by LzLabs code at the time of that jump must be compatible with
what the application load module needs. IT MAKES NO DIFFERENCE AFTER
THAT POINT IF IBM CHANGES ANY LE CODE OR ANY OF THE RULES FOR LE OR FOR
THE IBM COMPILERS. The old application modules under LzLabs are
unchanged so their requirements that are being satisfied by LzLabs code
are also unchanged. The stable application load module is at that
point running in the LzLabs environment with the IBM LE run time
environment code and IBM COBOL and PL/I compilers totally out of the
picture!

Whether LzLabs can successfully provide the claimed support, and whether
they can do it without some illegal reverse engineering and without
violation of IBM patents or other intellectual property rights are
separate issues beyond my skill set.

--
Joel C. Ewing, Bentonville, AR jce...@acm.org

John McKown

Mar 20, 2016, 1:01:41 AM
On Sat, Mar 19, 2016 at 8:35 PM, Clark Morris <cfmp...@ns.sympatico.ca>
wrote:

> On 19 Mar 2016 14:53:09 -0700, in bit.listserv.ibm-main you wrote:
>
> >On 3/18/2016 6:30 AM, Bill Woodger wrote:
> >> A google-translate of part of the final article in French from the
> media section of the LzLabs website.
> >>
> >> "Lzlabs technology leans on a container system which embeds the
> mainframe application and data. The application and its lines of code are
> included and the native format of the original data is kept - all without
> recompilation. "The only thing we are changing is the APIs. We take the old
> APIs, and replace them with ours, "said Thilo Rockmann."
> >
> >This is what happens when a billionaire loses a court battle with IBM.
>
> Can we expect this product to be the subject of a court battle if it
> is successful in doing what it claims?


Especially considering the ongoing legal battle between Oracle and Google
about the Java API specification itself being copyrighted, so that Google
cannot legally create a separate product (Android) which implements a
"clean room" implementation of a "work alike" which mirrors the Java API.
To write Android code, you basically write and compile Java into Java byte
code. The Java byte code is then translated from JVM byte code to ART
(earlier Dalvik) byte code. The ART virtual machine runs this, different,
byte code. But the "calling sequence", i.e. API, is the same as the
"calling sequence" (API) that the JVM uses. Which Oracle insists is illegal
copying. What LzLabs appears to have done is write two things: 1) an
instruction emulator for zArch "problem state" instructions and 2) an
alternate LE implementation which uses the identical API as z/OS LE. This
latter, if Oracle wins, will almost certainly kill the LzLabs product. Um,
assuming that IBM has copyrighted the API. If, indeed, such a thing can even
be copyrighted. That is what is being fought over.

The above reminds me of the "UNIX wars", where every UNIX vendor decided to
"extend" UNIX in such a way as to cause "vendor lock-in". There are many
reasons why I like FOSS and the FSF. I understand wanting to make money.



> What are the medium to long
> term implications for the z series? The i and p series?
>
> Clark Morris
> >He strikes back!
>



--
A fail-safe circuit will destroy others. -- Klipstein

Maranatha! <><
John McKown

Ed Gould

Mar 20, 2016, 1:08:13 AM
I may have missed something in the thread. However, when you get a
mixed LE module (PTF-type) level, all bets are off when it comes to LE.
The execution platform comes with its own issues and complicates
debugging beyond anything I would be willing to support, and I suspect
others on here would have the same issues.
At one time Amdahl had issues, and with the coming of OCO came their
demise -- and I suspect IBM saw (caused) the issue as well.
I suspect any OEM would have similar issues without source. IIRC some
modules in JES2 are semi-public (source available), although some
modules in JES2 are OCO. I don't know about JES3, but I suspect it's
close to being the same.
LE is an example of all-OCO, and seeing as they (LE) can't get their
act together, any other vendor IMO would be stupid if they think they
can emulate LE.
Ed

John McKown

Mar 20, 2016, 1:09:09 AM
On Sat, Mar 19, 2016 at 9:50 PM, Tom Marchant <
0000000a2a8c202...@listserv.ua.edu> wrote:

> <snip>
>
>
> I'm skeptical of all that but assuming all that, where are they going to
> get the I/O
> bandwidth needed?
>

I wonder that myself. But then, there are now SSD devices which run
directly on the PCIe bus at PCIe bus speeds. I could envision a
purpose-built system (likely way too expensive) which has an Intel
Xeon-class CPU for regular instructions, a GPU array using CUDA for
numeric-intensive work, and "channels" made up of ARM processors, each
controlling only a few SSD (SATA 3 or PCIe attached) drives, sharing
memory with the main system memory for I/O.



>
> --
> Tom Marchant
>

--
A fail-safe circuit will destroy others. -- Klipstein

Maranatha! <><
John McKown

David Crayford

Mar 20, 2016, 5:08:05 AM
On 20/03/2016 10:50 AM, Tom Marchant wrote:
> I'm skeptical of all that but assuming all that, where are they going to get the I/O
> bandwidth needed?

Emulex sells an HBA that handles over 1M IOPS on a single port. IIRC,
x86 Xeon-class servers have something called DDIO, which facilitates
writes directly to processor cache. It's not too dissimilar to
offloading I/O to SAPs. I've got old colleagues who work on distributed
now, and they are of the opinion that I/O bandwidth is not an issue on
x86 systems, but it's not exactly commodity hardware. They're all hooked
up using 16Gb/s fibre connected to a SAN using PCIe, the same as z Systems.

I would question the RAS capabilities rather than I/O.

Dave Wade

Mar 20, 2016, 6:28:18 AM
>Can we expect this product to be the subject of a court battle if it
>is successful in doing what it claims? What are the medium to long
>term implications for the z series? The i and p series?
>
>Clark Morris

Well, i and p both use proprietary hardware, so no effect. For the "z Series" I don't see a major impact, certainly not immediately, but I have been wrong before and will be wrong again. Emulating the central CPU, channel subsystem and APIs isn't too hard; folks have been doing it for a long time. Most of the folks involved understand the issues. Replicating the entire environment is much harder.

I believe that folks like my ex-employer (I am retired) are probably the likely takers, but I'm not sure how many of them are left. Up until around five years ago they had an MP3000 running traditional DOS/VSE CICS and really had no place to go. They started out down the LAN Manager/OS/2 path, which became NT servers because IBM pulled the plug on OS/2. This left them with a small Mainframe running VM/ESA & DOS/CICS and 200 or so NT servers. Given no upgrade path to a modern Mainframe at an affordable price, they ditched the Mainframe. I don't believe IBM wants such customers unless they move to a mixed traditional (VM/DOS/zOS) and Linux farm running on a large "Z" box. My employer had lots of commodity software in use that was tied to Windows, so it wasn't an option -- perhaps for the same reasons LzLabs won't take the world by storm.

They have now replaced the NT servers with 30 IBM x3650 2 x 6-core boxes running VMware ESXi and various flavours of Windows Server. Performance isn't really a problem. That's not to say there aren't performance problems: for example, there is one database app where they were the first site to buy it. The developers decided to switch from Oracle to SQL Server to save costs, and really didn't know how to tune the code. When I left it was improving as they tuned the indexing, but these issues used to occur on Mainframes too...

... it will be interesting to see how many takers there are...

Dave Wade

Anne & Lynn Wheeler

Mar 20, 2016, 1:33:00 PM
dcra...@GMAIL.COM (David Crayford) writes:
> Emulex sells an HBA that handles over 1M IOPS on a single port. IIRC,
> x86 Xeon class servers have something called DDIO which facilitates
> writes directly to processor cache.
> It's not too dissimilar to offloading I/O to SAPs. I've got old
> colleagues that work on distributed now and they are of the opinion
> that I/O bandwidth is not an issue on x86 systems,
> but it's not exactly commodity hardware. They're all hooked up using
> 16Gbs fiber connected to a SAN using PCIe, the same as z Systems.
>
> I would question the RAS capabilities rather than I/O.

Last published mainframe I/O I've seen was peak I/O benchmark for z196
which got 2M IOPS using 104 FICON (running over 104 fibre-channel). Also
that all 14 SAPs would run 100% busy getting 2.2M SSCHs/sec but
recommendation was keeping SAPs to 75% or 1.5M SSCHs/sec.

About the same time as the z196 peak I/O benchmark, there was a
fibre-channel announced for the e5-2600 blade claiming over a million IOPS
-- two such fibre-channels getting more throughput than 104 FICON (running
over 104 fibre-channel) ... aka FICON is an enormously heavy-weight
protocol that drastically cuts the native throughput of fibre-channel.
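Back-of-envelope arithmetic on the figures above (benchmark numbers as quoted in this thread, not independently verified):

```python
# Figures quoted in this thread (unverified):
z196_iops = 2_000_000          # z196 peak I/O benchmark
ficon_ports = 104              # FICON channels used in that benchmark
native_fc_iops = 1_000_000     # claimed per e5-2600 fibre-channel port

per_ficon = z196_iops / ficon_ports
print(round(per_ficon))              # 19231 IOPS per FICON channel
print(native_fc_iops / per_ficon)    # 52.0 -- one native FC port vs one FICON
```

On those numbers a single native fibre-channel port delivers roughly 52 times the per-port throughput of FICON, which is the protocol-overhead point being made here.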

disclaimer: in 1980 I was asked to do the support for a channel
extender for STL (now IBM Silicon Valley Lab); they were moving 300
people from the IMS group to an offsite bldg. with access back to the
STL datacenter. They had tried remote 3270 but found the human factors
intolerable. The channel extender support put channel-attached 3270
controllers out at the offsite bldg ... and resulted in response
indistinguishable from channel-attached 3270 controllers within the
STL bldg. The vendor then tried to get approval from IBM to release
the support, but there was a group in POK that was playing with some
serial stuff and they got it blocked because they were afraid it might
interfere with getting their stuff released.

In 1988, I'm asked to help standardize some serial stuff that LLNL is
playing with, which quickly becomes the fibre channel standard ... one
of the issues is that protocol latency effects increase as bandwidth
increases ... so they become apparent at relatively short distances.
One of the features of the 1980 work is that it localized the enormous
IBM channel protocol latency at the offsite bldg and then used a much
more efficient protocol over the longer distance. Fibre-channel used
the much more efficient protocol for everything.

In 1990, the POK group finally gets their stuff released as ESCON,
when it is already obsolete. Then some POK engineers become involved
with the fibre channel standard and define a protocol that enormously
cuts the native throughput ... which is eventually released as FICON.
Note that the more recent zHPF/TCW work for FICON looks a little more
like the work that I had done back in 1980.

Besides the peak I/O benchmark FICON throughput issue (compared to
native fibre-channel), there is also the overhead of CKD simulation.
There haven't been any real CKD disks built for decades; current CKD
disks are all simulated on industry-standard commodity disks.

Other trivia: when I moved to San Jose Research in the 70s, they let
me wander around. At the time, the disk engineering lab (bldg 14) and
disk product test lab (bldg 15) were running pre-scheduled, standalone
mainframe testing around the clock, 7x24. At one point they had tried
to use MVS for concurrent testing, but found that MVS had a 15min MTBF
in that environment. I offered to rewrite the I/O supervisor to make
it bullet proof and never fail ... enabling on-demand, anytime
concurrent testing, greatly improving productivity. I happened to
mention the MVS 15min MTBF in an internal-only report on the work ...
which brought down the wrath of the MVS group on my head (not that it
was untrue, but that it exposed the information to the rest of the
company). When they found that they couldn't get me fired, they made
sure to make my career as unpleasant as possible (blocking promotions
and awards whenever they could).

z900, 16 processors, 2.5BIPS (156MIPS/proc), Dec2000
z990, 32 processors, 9BIPS, (281MIPS/proc), 2003
z9, 54 processors, 18BIPS (333MIPS/proc), July2005
z10, 64 processors, 30BIPS (469MIPS/proc), Feb2008
z196, 80 processors, 50BIPS (625MIPS/proc), Jul2010
EC12, 101 processors, 75BIPS (743MIPS/proc), Aug2012

z13 published refs claim 30% more throughput than EC12 (or about
100BIPS) with 40% more processors ... or about 710MIPS/proc

z196 era e5-2600v1 blade rated at 400-500+BIPS depending on model,
e5-2600v4 blades are three-four times that, around 1.5TIPS (1500BIPS).

i.e. since the start of the century, commodity processors have
increased their processing power significantly more aggressively than
mainframes. They have also come to dominate wafer/chip manufacturing
technology ... and mainframe chips have essentially converged on the
same technology (in much the same way the mainframe has converged on
industry-standard fibre channel and disks). EC12 financials implied
that a single minimum-sized chip wafer run produced more EC12
processor chips than will ever be sold.

A typical cloud megadatacenter has several hundred thousand systems
with millions of processors ... operated by around 100 people or fewer
(rather than people/system, it is systems/person) ... and has more
aggregate processing capacity than all the mainframes in the world
today. Systems are designed for failover and redundancy ... and the
larger operations, with a dozen or more such cloud megadatacenters
around the world, are also designed for failover and redundancy
between datacenters.

A max. mainframe configuration runs around $30M compared to a couple
thousand for an e5-2600 blade ... say 1/10,000th the cost for 15 times
the processing power. System costs have dropped so drastically that
power & cooling costs have increasingly come to dominate for the cloud
megadatacenter.

--
virtualization experience starting Jan1968, online at home since Mar1970

Ed Jaffe

unread,
Mar 20, 2016, 3:53:38 PM3/20/16
to
On 3/19/2016 6:35 PM, Clark Morris wrote:
> On 19 Mar 2016 14:53:09 -0700, in bit.listserv.ibm-main you wrote:
>> This is what happens when a billionaire loses a court battle with IBM.
> Can we expect this product to be the subject of a court battle if it
> is successful in doing what it claims? What are the medium to long
> term implications for the z series? The i and p series?

Haha! Good question! IBM has proven it will sue even without grounds or
standing. The legality of zPrime appeared to be air-tight, based on how
the IBM Customer Agreement was worded, but that didn't stop IBM from
suing NEON anyway. (Of course, IBM later changed the ICA for new
machines and for z/OS V2 to plug those loopholes.) I imagine they've
taken additional precautions this time around ...

--
Edward E Jaffe
Phoenix Software International, Inc
831 Parkview Drive North
El Segundo, CA 90245
http://www.phoenixsoftware.com/

R.S.

unread,
Mar 21, 2016, 9:14:25 AM3/21/16
to
Well,
I observed 1.3M IOPS on an EC12 or z196 machine during a WAS
installation, with minimal CPU utilisation (I mean regular CPs; I
haven't checked the SAPs). IMNSHO a PC server with a collection of new
shining Emulex cards has waaaay worse I/O capabilities. We did some
tests of database operations on a PC. The effects are unequivocal.

BTW: A typical z/OS I/O workload is very different from a PC workload.
Much less IOPS, much more data, much less CPU%.

--
Radoslaw Skorupka
Lodz, Poland






On 2016-03-20 at 18:32, Anne & Lynn Wheeler wrote:

David Crayford

unread,
Mar 21, 2016, 9:40:08 AM3/21/16
to
On 21/03/2016 9:14 PM, R.S. wrote:
> Well,
> I observed 1,3M IOPS on EC12 or z196 machine during WAS installation.
> With minimal CPU utilisation (I mean regular CPU, I haven't checked SAP).
> IMNSHO a PC server with collection of new shining Emulex cards has
> waaaay worse I/O capabilities.
> We did some tests of database operations on PC. Effects are unequivocal.
>

I admire your honesty :) What class of PC server? Was it connected to
a SAN and did it offload I/O to a peripheral device?

> BTW: Typical z/OS I/O workload is very different from PC workload.
> Much less IOPS, much more data, much less CPU%.
>

My wife used to work for HDS and I had some interesting conversations
with some of the engineers she used to work with. That was a few years
ago, but in their opinion high-end *nix servers could match a
mainframe for I/O throughput. PC commodity hardware is different, but
racked up with enterprise kit I would be interested to know how they
would shape up in a drag race. How would a Dell blade with InfiniBand,
a SAN and enterprise-class HBAs compare?

John McKown

unread,
Mar 21, 2016, 9:54:30 AM3/21/16
to
On Mon, Mar 21, 2016 at 8:14 AM, R.S. <R.Sko...@bremultibank.com.pl>
wrote:

> Well,
> I observed 1,3M IOPS on EC12 or z196 machine during WAS installation. With
> minimal CPU utilisation (I mean regular CPU, I haven't checked SAP).
> IMNSHO a PC server with collection of new shining Emulex cards has waaaay
> worse I/O capabilities.
> We did some tests of database operations on PC. Effects are unequivocal.
>
> BTW: Typical z/OS I/O workload is very different from PC workload. Much
> less IOPS, much more data, much less CPU%.
>

Just relating a story my boss tells. A major U.S. manufacturer decided
to move all their z/OS work to a SAP distributed platform. A daily
job, which basically did all the shipping work, ran over a day on the
SAP platform (remember, a daily job). On the z/OS system, it runs in
45 minutes. So they have one very old IBM mainframe (don't remember
what) which they IPL daily to run this one job, then shut back down
until the next day. This is, indeed, "hearsay". But I trust my boss
when he says that it is so.



>
> --
> Radoslaw Skorupka
> Lodz, Poland
>
>
>
--
A fail-safe circuit will destroy others. -- Klipstein

Maranatha! <><
John McKown

David Crayford

unread,
Mar 21, 2016, 10:14:21 AM3/21/16
to
On 21/03/2016 9:54 PM, John McKown wrote:
> On Mon, Mar 21, 2016 at 8:14 AM, R.S. <R.Sko...@bremultibank.com.pl>
> wrote:
>
>> Well,
>> I observed 1,3M IOPS on EC12 or z196 machine during WAS installation. With
>> minimal CPU utilisation (I mean regular CPU, I haven't checked SAP).
>> IMNSHO a PC server with collection of new shining Emulex cards has waaaay
>> worse I/O capabilities.
>> We did some tests of database operations on PC. Effects are unequivocal.
>>
>> BTW: Typical z/OS I/O workload is very different from PC workload. Much
>> less IOPS, much more data, much less CPU%.
>>
> ​Just relating a story my boss tells. A major U.S. manufacturer decide to
> move all their z/OS work to a SAP distributed platform. A daily job, which
> was basically did all the shipping work, ran over a day on the SAP platform
> (remember, daily job). On the z/OS system, it runs 45 minutes. So they have
> one, very old, IBM mainframe (don't remember what) which they IPL daily to
> run this one job, then shut back down until the next day. This is, indeed,
> "hearsay". But I trust my boss when he says that it is so.​
>

My wife worked on a couple of mainframe-to-SAP migration projects and
I can't recall any performance war stories, but they were not big
shops. I do recall that the customers had to change their work
processes to fit around SAP and not the other way round, which is bad!

>
>> --
>> Radoslaw Skorupka
>> Lodz, Poland
>>
>>
>>

Steve Thompson

unread,
Mar 21, 2016, 10:43:17 AM3/21/16
to
A few years ago, IBM took a Power system and a z/Architecture
system and configured them as closely as they could.

As I recall, they both had the same amount of C-Store available
to the operating system, and they both had the same number of
channels (8 if I remember correctly), and they ran to
equivalently sized RAID boxes.

And, if I remember correctly, they were using programs written in
C, that were ported from the one to the other.

The object was to check out how efficiently I/O was prosecuted
(done).

The z/Architecture machine finished long before the Power system.

I thought I had a copy of that comparison, but I just can't find
it so I can give a link to it.

Regards,
Steve Thompson

Ed Jaffe

unread,
Mar 21, 2016, 11:42:47 AM3/21/16
to
On 3/21/2016 7:14 AM, David Crayford wrote:
>
> My wife worked on a couple of mainframe to SAP migration projects and
> I can't recall any performance war
> stories but they were not big shops. I do recall that the customers
> had to change their work processes to fit
> around SAP and not the other way round, which is bad!

Some of our largest customers run SAP on the mainframe.

--
Edward E Jaffe
Phoenix Software International, Inc
831 Parkview Drive North
El Segundo, CA 90245
http://www.phoenixsoftware.com/

David Crayford

unread,
Mar 21, 2016, 11:56:05 AM3/21/16
to
On 21/03/2016 11:42 PM, Ed Jaffe wrote:
> On 3/21/2016 7:14 AM, David Crayford wrote:
>>
>> My wife worked on a couple of mainframe to SAP migration projects and
>> I can't recall any performance war
>> stories but they were not big shops. I do recall that the customers
>> had to change their work processes to fit
>> around SAP and not the other way round, which is bad!
>
> Some of our largest customers run SAP on the mainframe.
>

I know of some very big mainframe SAP customers in the UK, like Royal
Mail Parcels, who have huge mainframe footprints configured to run SAP
with CPs to run DB2. I used to work with the ex-CIO back in the day
and she speaks very highly of it. I'm not knocking SAP; my wife has
made a good living out of it.

David Crayford

unread,
Mar 21, 2016, 12:16:45 PM3/21/16
to
Just for fun, because I know this is a very contrived test! I wrote a
C program to read/write blocks to a zFS file on a z114 z/OS system
connected via FICON to an HDS SAN, and to an Ubuntu server on a Dell
PowerEdge blade server writing to SAS disks on the rack. Of course,
there are latency differences and my program is probably quite lame.
The z/OS system was idle at the time, as was the Linux system. I would
imagine if they were both running at full capacity with high
sustained I/O throughput the results would be very different.

#include <stdio.h>
#include <stdlib.h>

#define BLOCK_SIZE 1024

int main()
{
size_t numBlocks = 100000;
const char * filename = "./io.temp";

FILE * fp = fopen( filename, "w+" );
if ( fp == NULL )
{
perror( "fopen" );
exit( 8 );
}

/* write 100,000 1K blocks of zeros ... */
char buffer[BLOCK_SIZE] = {0};
for ( size_t j = 0; j < numBlocks; j++ )
{
if ( fwrite( buffer, sizeof buffer, 1, fp ) == 0 )
{
perror( "fwrite" );
exit( 8 );
}
}

/* ... then read them all back */
rewind( fp );
while ( fread( buffer, 1, sizeof buffer, fp ) )
{
continue;
}
remove( filename );
return 0;
}


z/OS:

DOC:/u/doc/src: >time iospeed

real 0m 1.15s
user 0m 0.62s
sys 0m 0.20s

Dell:

davcra01@cervidae:~$ time ./iospeed

real 0m0.254s
user 0m0.048s
sys 0m0.199s

Tony Harminc

unread,
Mar 21, 2016, 1:04:47 PM3/21/16
to
On 19 March 2016 at 10:29, Mick Graley <mick....@gmail.com> wrote:

> Nah, not letting him off that easily!
> The word "coded" is the same in both languages, and BCD ¬= BSD.
> Like us Brits tend to say "kicks" and the Americans tend to say "see,
> eye, see, ess" but it's still actually CICS :-)
>

Often enough in the US "kicks" is "see, ah, see, ess"... What the linguists
might call "monophthongal I".

Tony H.

Tom Marchant

unread,
Mar 21, 2016, 1:17:19 PM3/21/16
to
On Tue, 22 Mar 2016 00:16:52 +0800, David Crayford wrote:

>z/OS:
>
>DOC:/u/doc/src: >time iospeed
>
>real 0m 1.15s
>user 0m 0.62s
>sys 0m 0.20s
>
>Dell:
>
>davcra01@cervidae:~$ time ./iospeed
>
>real 0m0.254s
>user 0m0.048s
>sys 0m0.199s

I have two questions.

1. What do the above figures mean?

2. What happens on z/OS when you put a heavy load on 320 channels all at once?

--
Tom Marchant

Mark Regan

unread,
Mar 21, 2016, 3:29:50 PM3/21/16
to

Anne & Lynn Wheeler

unread,
Mar 21, 2016, 9:14:52 PM3/21/16
to
0000000a2a8c202...@LISTSERV.UA.EDU (Tom Marchant) writes:

> On Tue, 22 Mar 2016 00:16:52 +0800, David Crayford wrote:
>
>>z/OS:
>>
>>DOC:/u/doc/src: >time iospeed
>>
>>real 0m 1.15s
>>user 0m 0.62s
>>sys 0m 0.20s
>>
>>Dell:
>>
>>davcra01@cervidae:~$ time ./iospeed
>>
>>real 0m0.254s
>>user 0m0.048s
>>sys 0m0.199s
>
> I have two questions.
>
> 1. What do the above figures mean?
>
> 2. What happens on z/OS when you put a heavy load on 320 channels all at once?

don't know the model of the Dell blade ... however some amount of the
z/OS system processing has been offloaded to the SAPs and doesn't show
up, while much of the equivalent system processing will show in the
Dell numbers.

z/os 1.15seconds versus dell .254seconds elapsed time: 1.15/.254=4.528 times
.62seconds versus dell .048seconds user cpu: .62/.048=12.917 times
.20seconds versus dell .199seconds system cpu: .20/.199=1.005 times

In the z196 time-frame, before IBM sold its commodity server business,
a max-configured z196 was around $30M and rated at 50BIPS,
aka $30M/50BIPS = $600,000/BIPS.

IBM's base price for a commodity server blade (some e5-2600v1 model)
was $1815, and e5-2600v1 blades were rated 400-500+BIPS (depending on
model), or around $1815/400 = $4.54/BIPS.

Major cloud vendors have been claiming for over a decade that they
assemble their own server blades for 1/3rd the price of brand
name blades (contributing to motivation for IBM to sell off its
commodity server business) ... or approaching $1/BIPS.

The last published peak I/O benchmark that I've seen was 2M IOPS for
z196 using 104 FICONs (running over 104 fibre-channel links). At the
same time there was a (single) fibre-channel adapter announced for the
e5-2600v1 blade claiming over a million IOPS ... two such
fibre-channel adapters have higher throughput than 104 FICONs running
over 104 fibre-channel links, aka the FICON protocol running over
fibre-channel enormously cuts the native throughput.
http://manana.garlic.com/~lynn/submisc.html#ficon

Also for z196 the published information was that all SAPs run 100%
busy at 2.2M SSCH/second, but the recommendation is to keep SAPs at
75% busy, or 1.5M SSCH/second. Also zHPF/TCW (a bit like what I did
for the channel-extender in 1980) claims to improve FICON throughput
by about 30% (still way below native fibre-channel throughput).
http://manana.garlic.com/~lynn/submisc.html#chanel-extender

z13 with 320 FICONs only increases the z196 peak I/O (104 FICONs) by
about a factor of three (6M IOPS?). The equivalent would then be six
native fibre-channels on an e5-2600v4 blade (which claims a rating 3-4
times that of the e5-2600v1 blade).

Before IBM unloaded its commodity server blade business, it was
advertising high-density 64-blade racks. The base price of 64 IBM
blades would then be almost $120K ... clouds assembling their own
would pay 1/3rd of that, or around $40K for a single rack (compared to
a fully configured z13 at around $30M?).

Intel tick-tock
https://en.wikipedia.org/wiki/Intel_Tick-Tock

e5-2600v1 was 32nm chip technology, same as z196&ec12

e5-2600v4 is 14nm chip technology "tick"/broadwell
http://www.itworld.com/article/2985214/hardware/intels-xeon-roadmap-for-2016-leaks.html

moving to "tock"/skylake

with PCIe solid state disks (some nearly 1M IOPS/drive)
http://www.fastestssd.com/featured/ssd-rankings-the-fastest-solid-state-drives/#pcie

and coming up next year:
http://www.kitguru.net/components/graphic-cards/anton-shilov/pci-express-4-0-with-16gts-data-rates-and-new-connector-to-be-finalized-by-2017/?PageSpeed=noscript

16GT/s base transfer rate will allow PCI Express 4.0 x1 interconnection
to transfer up to 2GB of data per second, whereas the PCIe 4.0 x16 slots
used for graphics cards and ultra-high-end solid-state drives will
provide up to 32GB/s of bandwidth. Higher transfer rates will also let
mobile devices to save power since it will take less time to transfer
data.

....

32GByte/sec for solid state disks.

David Crayford

unread,
Mar 22, 2016, 4:02:33 AM3/22/16
to
On 22/03/2016 1:17 AM, Tom Marchant wrote:
> On Tue, 22 Mar 2016 00:16:52 +0800, David Crayford wrote:
>
>> z/OS:
>>
>> DOC:/u/doc/src: >time iospeed
>>
>> real 0m 1.15s
>> user 0m 0.62s
>> sys 0m 0.20s
>>
>> Dell:
>>
>> davcra01@cervidae:~$ time ./iospeed
>>
>> real 0m0.254s
>> user 0m0.048s
>> sys 0m0.199s
> I have two questions.
>
> 1. What do the above figures mean?

real = elapsed (wall) time
user = userspace (application) time
sys = system (kernel) time

There is obviously quite a lot of suspend time when running the test on
our z/OS system. Otherwise the times are comparable.

> 2. What happens on z/OS when you put a heavy load on 320 channels all at once?
>

I don't know; we've only got 14 channels on the z114 at the lab :^)
The Linux server is a VM running under Hyper-V, which I must say is
very impressive tech. I've never been much of a fan of Microsoft, but
the failover and live migration capabilities are excellent. Those x86
blades absolutely thrash our zIIPs for running Java workloads.

David Crayford

unread,
Mar 23, 2016, 4:48:40 AM3/23/16
to
On 22/03/2016 4:02 PM, David Crayford wrote:
>
>
> On 22/03/2016 1:17 AM, Tom Marchant wrote:
>> On Tue, 22 Mar 2016 00:16:52 +0800, David Crayford wrote:
>>
>>> z/OS:
>>>
>>> DOC:/u/doc/src: >time iospeed
>>>
>>> real 0m 1.15s
>>> user 0m 0.62s
>>> sys 0m 0.20s
>>>
>>> Dell:
>>>
>>> davcra01@cervidae:~$ time ./iospeed
>>>
>>> real 0m0.254s
>>> user 0m0.048s
>>> sys 0m0.199s
>> I have two questions.
>>
>> 1. What do the above figures mean?
>
> real = elapsed (wall) time
> user = userspace (application) time
> sys = system (kernel) time
>


Interestingly, the numbers are even worse if I use QSAM instead of
zFS. I'm slightly surprised that zFS out-performs QSAM by such a large
margin, but not shocked.

Program Name IOSPEEDQ hh:mm:ss.th
Step Name IOSPEED Elapsed Time 11.05
Procedure Step TCB CPU Time 00.41
Return Code 00 SRB CPU Time 00.17
Total I/O 33377 Total CPU Time 00.58
Service Units 9175

The blade server is a Dell PowerEdge with E5-2640 Sandy Bridge-EP
processors, connected to a direct-access disk array using 12Gbit/s
SAS. We got 3 blades, the rack and 100TB of disk for $30,000!! Each
blade also has 10TB of internal disk and 256GB RAM. Now that's a
steal! And the core Hyper-V virtualization software is free! We
consolidated all of our existing Windows servers onto one blade and
still have 80% capacity free.
