
State of the Port - July 2017


clairg...@gmail.com

Jul 11, 2017, 7:35:14 AM

http://www.vmssoftware.com/pdfs/State_of_Port_20170707.pdf

Roy Omond

Jul 11, 2017, 8:02:03 AM
On 11/07/17 12:35, clair...@vmssoftware.com wrote:
> http://www.vmssoftware.com/pdfs/State_of_Port_20170707.pdf

Thanks Clair ...

"Standard benchmark tools such as dhrystone and primes have been used
along with a variety of C, FORTRAN, BASIC, and COBOL programs, for
example a FORTRAN version of Adventure."

Woohooo ... Adventure !!! :-)
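
As an aside on the "primes" benchmark mentioned there: the report doesn't
say which program was used, but as a rough illustration of that class of
small, CPU-bound test, a trivial sieve in C looks something like this
(the limit is an arbitrary choice):

/* Illustrative only: a minimal Sieve of Eratosthenes, the sort of small
 * CPU-bound "primes" kernel often used as a quick compiler/porting
 * sanity check. */
#include <stdio.h>
#include <string.h>

#define LIMIT 100000

int main(void)
{
    static char composite[LIMIT + 1];
    int count = 0;

    memset(composite, 0, sizeof composite);
    for (int i = 2; i <= LIMIT; i++) {
        if (!composite[i]) {
            count++;
            for (int j = 2 * i; j <= LIMIT; j += i)
                composite[j] = 1;
        }
    }
    printf("%d primes up to %d\n", count, LIMIT);
    return 0;
}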

Paul Sture

Jul 12, 2017, 3:57:09 PM
Wow. It's 1982 again[1]

Except this time around, no need to feel guilty about hogging too much
of an expensive 11/780 :-)

[1] except that back then I didn't have access to a C compiler
--
Everybody has a testing environment. Some people are lucky enough to
have a totally separate environment to run production in.


Hans Vlems

Jul 13, 2017, 3:02:37 PM
Around 1980 all you needed to run Adventure was a Fortran compiler. The source compiled and ran on a B7700 and, a little later, unaltered (ISTR) on a PDP-11/40 under RT-11. The B7700 didn't even have a C compiler then; it was added much later.
Hans

johnso...@gmail.com

Jul 17, 2017, 9:03:12 AM
On Tuesday, July 11, 2017 at 7:35:14 AM UTC-4, clair...@vmssoftware.com wrote:
> http://www.vmssoftware.com/pdfs/State_of_Port_20170707.pdf

The State of the Port made the front page of Hacker News earlier today.

If you don't have an account, make one, and upvote it. :-)

https://news.ycombinator.com/item?id=14785504

John Reagan

Jul 17, 2017, 10:24:18 AM
I didn't want to bother with making an account but somebody who does have one should reply to the question about NonStop porting to non-Itanium about half way down the page... NonStop has been on x86 for several years now and performs much faster than its Itanium counterpart (partly due to a faster fabric interconnect using Infiniband)

Camiel Vanderhoeven

Jul 17, 2017, 10:32:18 AM
On Monday, July 17, 2017 at 16:24:18 UTC+2, John Reagan wrote:

> I didn't want to bother with making an account but somebody who does have one should reply to the question about NonStop porting to non-Itanium about half way down the page... NonStop has been on x86 for several years now and performs much faster than its Itanium counterpart (partly due to a faster fabric interconnect using Infiniband)

Done. :-)

Rich Alderson

Jul 18, 2017, 3:26:14 PM
You do know that Adventure started life as a FORTRAN program on a PDP-10 at
BBN, right? And that the game most people are familiar with was an expansion
of the original, still in FORTRAN, done on a PDP-10 at the Stanford Artificial
Intelligence Laboratory, right?

By 1980 it had been in existence for several years.

--
Rich Alderson ne...@alderson.users.panix.com
Audendum est, et veritas investiganda; quam etiamsi non assequamur,
omnino tamen proprius, quam nunc sumus, ad eam perveniemus.
--Galen

MG

Jul 18, 2017, 3:59:53 PM
On 17-Jul-2017 at 16:24, John Reagan wrote:
> I didn't want to bother with making an account but somebody
> who does have one should reply to the question about NonStop
> porting to non-Itanium about half way down the page... NonStop
> has been on x86 for several years now and performs much faster
> than its Itanium counterpart (partly due to a faster fabric
> interconnect using Infiniband)

Is InfiniBand ideal though? Maybe the faster multicore/hyper-
threading/etc. processors these days mitigate the overhead
issues, and perhaps NonStop is the type of platform that can
get away with lower overall performance, since it's largely
about so-called 'transaction availability/reliability' and
probably I/O, too. But, still though... efficiency would be
something else then.

I remember reading some rather heated discussions in research and HPC
spheres complaining about it, and I wasn't exactly blown away the times
I tried InfiniBand myself... (Which, in fact, was already years ago,
and I've long since 'reverted' to 10GbE.)

- MG

Scott Dorsey

Jul 18, 2017, 5:27:21 PM
MG <marc...@SPAMxs4all.nl> wrote:
>On 17-Jul-2017 at 16:24, John Reagan wrote:
>> I didn't want to bother with making an account but somebody
>> who does have one should reply to the question about NonStop
>> porting to non-Itanium about half way down the page... NonStop
>> has been on x86 for several years now and performs much faster
>> than its Itanium counterpart (partly due to a faster fabric
>> interconnect using Infiniband)
>
>Is InfiniBand ideal though? Maybe the faster multicore/hyper-
>threading/etc. processors these days mitigate the overhead
>issues, and perhaps NonStop is the type of platform that can
>get away with lower overall performance, since it's largely
>about so-called 'transaction availability/reliability' and
>probably I/O, too. But, still though... efficiency would be
>something else then.

Infiniband is designed for low latency. If what you need is the lowest
possible latency, Infiniband is likely a big win over ethernet. If you
need fastest throughput for bulk transfers, ethernet is likely a big win
for you instead.
--scott

--
"C'est un Nagra. C'est suisse, et tres, tres precis."

johnso...@gmail.com

Jul 19, 2017, 6:45:26 AM
On Tuesday, July 18, 2017 at 5:27:21 PM UTC-4, Scott Dorsey wrote:

> Infiniband is designed for low latency. If what you need is the lowest
> possible latency, Infiniband is likely a big win over ethernet. If you
> need fastest throughput for bulk transfers, ethernet is likely a big win
> for you instead.

That was certainly true in the past. But I believe that gap has
narrowed substantially over the past decade, such that it's no longer
the slam dunk it once was.

EJ

Saifi Khan

Jul 19, 2017, 1:34:08 PM
On Tuesday, July 11, 2017 at 5:05:14 PM UTC+5:30, clair...@vmssoftware.com wrote:
> http://www.vmssoftware.com/pdfs/State_of_Port_20170707.pdf

Very impressive. Thanks Clair.

Curious to know more about the "Isolation" technologies that ship with the OpenVMS kernel and compare favourably with Zones (cf. Solaris).

Is there any plan to incorporate technologies like Intel RDT?

Looking forward to reading your thoughts.


warm regards
Saifi.

Stephen Hoffman

Jul 20, 2017, 5:37:47 PM
On 2017-07-18 21:27:18 +0000, Scott Dorsey said:

> Infiniband is designed for low latency. If what you need is the lowest
> possible latency, Infiniband is likely a big win over ethernet. If you
> need fastest throughput for bulk transfers, ethernet is likely a big
> win for you instead.

Ethernet is reaching well up into the same market Infiniband is aimed
at, and VSI is going to want to and need to go after better Ethernet
support to start with as it's far more broadly applicable. Once the
x86-64 port is out and VSI has 40 GbE and 100 GbE and other related
support available, then maybe adding Infiniband support gets
interesting. That is, if there's enough of an advantage over
then-current Ethernet and then-current Infiniband.

Some related reading, both for and against...

https://www.nextplatform.com/2015/04/01/infiniband-too-quick-for-ethernet-to-kill-it/

http://www.chelsio.com/wp-content/uploads/2013/11/40Gb-Ethernet-A-Competitive-Alternative-to-InfiniBand.pdf

https://www.nas.nasa.gov/assets/pdf/papers/40_Gig_Whitepaper_11-2013.pdf


For some of the discussions of why supporting faster Ethernet can
involve kernel performance and tuning issues, here's a
previously-posted discussion from the Linux kernel:

https://lwn.net/Articles/629155/



If VSI does decide to go after HPC with OpenVMS, then maybe we see
Infiniband support added. But Ethernet is ubiquitous.

And yes, Infiniband is interesting, and clustering over Ethernet RDMA
(iWARP) might well be patterned after the Memory Channel work, but
there's a bunch of stuff in the queue ahead of iWARP and Infiniband.




--
Pure Personal Opinion | HoffmanLabs LLC

Jan-Erik Soderholm

Jul 20, 2017, 5:40:56 PM
On 2017-07-20 at 23:37, Stephen Hoffman wrote:
> On 2017-07-18 21:27:18 +0000, Scott Dorsey said:
>
>> Infiniband is designed for low latency. If what you need is the lowest
>> possible latency, Infiniband is likely a big win over ethernet. If you
>> need fastest throughput for bulk transfers, ethernet is likely a big win
>> for you instead.
>
> Ethernet is reaching well up into the same market Infiniband is aimed at,
> and VSI is going to want to and need to go after better Ethernet support to
> start with as it's far more broadly applicable. Once the x86-64 port is
> out and VSI has 40 GbE and 100 GbE and other related support available,...

Doesn't that come "for free" if you're running under a VM?
You get whatever network support the VM supports, no?

Stephen Hoffman

Jul 20, 2017, 6:01:32 PM
We don't yet know what VSI will be providing for their virtual machine
support: which particular virtual machines will be supported, and with
which features. If the virtual machine and the I/O path involve
virtualized device support, then OpenVMS will need drivers for the
virtual device, and network I/O will incur some overhead going through
the host driver layer, with the host drivers dealing with the specifics
of the particular device. This is simpler, but there's more overhead,
particularly if there's a lot of I/O buffer copying involved. If the
I/O device is accessed directly from the guest operating system,
bypassing the host operating system or the host VM, then there'll be
device-specific drivers needed in OpenVMS. For VM-related details
here, see discussions of device virtualization and paravirtualization,
among others. In either approach, system performance around 100 GbE
or Infiniband involves a whole lot of interrupts, and those have to be
handled expeditiously for the hardware to be used effectively. Also
see the TCP Offload Engine (TOE) discussions and the details of what
Infiniband provides and how, as both seek to provide faster,
lower-latency networking.

> You get whatever network support the VM supports, no?

We get what VSI supports, or maybe what a third party provides with
their hardware.

Kerry Main

Jul 20, 2017, 7:45:34 PM
to comp.os.vms to email gateway
The big advantages of Infiniband and RoCEv2 are not only large
bandwidth, but much, much lower latency, which of course is perfect for
cluster communications.

Note that RoCEv2 is orders of magnitude better than RDMA V1, which was
what OpenVMS first looked at.

The V2 spec (2014 timeframe) allows one to maintain a great deal of
application / driver transparency which, in theory, means it might not
be that hard to adopt for OpenVMS.

Reference page 2:
<http://www.mellanox.com/related-docs/whitepapers/roce_in_the_data_center.pdf>

RoCEv2 spec release in the 2014 timeframe:
<https://www.youtube.com/watch?v=8kTAXhujn08>

Extract:
- Transparent to Applications and underlying network infrastructures (km
- question - how much effort to adapt for cluster comm's?)
- Infiniband Architecture followed OSI model closely
- RoCEv2 only modified third layer
- frames generated and consumed in the NIC (below API)
- enables standard network mechanisms for forwarding, management,
monitoring, metering, accounting, firewalling, snooping and multicast


Regards,

Kerry Main
Kerry dot main at starkgaming dot com




Galen

Jul 21, 2017, 6:45:24 PM
Were the theatening (sic) little dwarves in the original PDP-10 source? Or was this typo/misspelling introduced elsewhere? The version we had on Cal State's Cyber had it.

--Galen (NOT the same person as the Latin-speaking Dr. Galen in Rich's sigfile.)

David Froble

Jul 21, 2017, 10:49:42 PM
Galen wrote:
> Were the theatening (sic) little dwarves in the original PDP-10 source? Or was this typo/misspelling introduced elsewhere? The version we had on Cal State's Cyber had it.
>
> --Galen (NOT the same person as the Latin-speaking Dr. Galen in Rich's sigfile.)

Well, from at least as far back as 1985, on a VAX:

4 There is a threatening little dwarf in the room with you!

Galen

Jul 22, 2017, 12:02:37 AM
When I graduated from CSU Hayward in 1980, -11's were the only DEC CPUs I'd ever used: a PDP-11/45 RSTS/E system, and a bare-metal LSI-11. I had played with a VAX at DECUS SF a few years earlier. About 4 years later I was a neophyte DEC-trained VMS system manager and MACRO-32 systems programmer at Lockheed in Sunnyvale.

Those were some good times... :-)

Galen

Jul 22, 2017, 12:09:43 AM
But it was the (mis)spelling "theatening" (lacking the 'r' after the 'h') I was asking about--not the dwarves themselves. (In case I wasn't clear before.)

David Froble

Jul 22, 2017, 11:20:53 AM
Galen wrote:
> But it was the (mis)spelling "theatening" (lacking the 'r' after the 'h') I was asking about--not the dwarves themselves. (In case I wasn't clear before.)

You were clear. I was just showing the version I have here, where the spelling
was correct.

Not sure of the progression of the game, which systems it was first on, and then
how it propagated onto others. I experienced it on PDP-11 RSTS prior to the VAX.

Bill Gunshannon

Jul 22, 2017, 9:15:48 PM
I can play it on my Kindle. My daughter plays it on her phone.

bill

Kerry Main

Jul 23, 2017, 10:06:07 PM
to comp.os.vms to email gateway
> On July 19, 2017, johnson.eric--- via Info-vax wrote:
>
Still a huge win for latency when using Infiniband and RoCEv2 (RDMA over
Converged Ethernet).

RDMA bypasses the entire TCP/IP stack altogether.

Reference: See page 2.
<http://www.mellanox.com/related-docs/whitepapers/roce_in_the_data_center.pdf>

In terms of capability, check out what the Infiniband crews are up to
lately: June 30, 2017
<https://www.nextplatform.com/2017/06/30/infiniband-proprietary-networks-still-rule-real-hpc/>

In addition, from what I understand (read: pure speculation), the new
RoCEv2 spec that came out in the 2014 timeframe provides a lot of
compatibility with existing drivers, so it may not require all that much
work to adapt for cluster communications, i.e. high bandwidth, very low
latency.
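
For anyone who hasn't looked at the programming model: RDMA applications
talk to the adapter through a "verbs" API rather than through the socket
layer, which is where the stack bypass comes from. A minimal sketch
using generic libibverbs follows; it is nothing OpenVMS- or VSI-specific
and only enumerates RDMA-capable adapters (build with something like
cc list_rdma.c -libverbs):

/* Minimal libibverbs sketch: list RDMA-capable devices.  Illustrative
 * only; real RDMA use goes on to create protection domains, queue
 * pairs, and registered memory regions. */
#include <stdio.h>
#include <infiniband/verbs.h>

int main(void)
{
    int n = 0;
    struct ibv_device **devs = ibv_get_device_list(&n);

    if (!devs) {
        perror("ibv_get_device_list");
        return 1;
    }
    for (int i = 0; i < n; i++)
        printf("RDMA device %d: %s\n", i, ibv_get_device_name(devs[i]));

    ibv_free_device_list(devs);
    return 0;
}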

Stephen Hoffman

Jul 24, 2017, 3:31:59 PM
On 2017-07-20 23:44:23 +0000, Kerry Main said:

>>
>> On July 20, 2017, Stephen Hoffman via Info-vax wrote:
>> ...
>> If VSI does decide to go after HPC with OpenVMS, then maybe we see
>> Infiniband support added. But Ethernet is ubiquitous.
>>
>> And yes, Infiniband is interesting, and clustering over Ethernet RDMA
>> (iWARP) might well be patterned after the Memory Channel work, but
>> there's a bunch of stuff in the queue ahead of iWARP and Infiniband.
>>
>
> The big advantages of Infiniband and RoCEv2 are not only large
> bandwidth, but much, much lower latency, which of course is perfect for
> cluster communications.

First stage is getting the Ethernet support working, stable, and faster,
and on faster Ethernet hardware.
0 new messages