
VSI Announces Initial Focus on Hypervisor Support for OpenVMS on x86


Jan-Erik Söderholm

May 5, 2022, 5:37:42 PM
Maybe others have seen this, but I just got this announcement from VSI.
Regards, Jan-Erik.



After analyzing customer feedback, VMS Software Inc. has made the decision
to move hypervisor support on OpenVMS V9.2 to the top of our priority list.
We plan to extend support for the following: VMWare, KVM and Hyper-V. Kevin
Shaw, the CEO of VMS Software, Inc. and Stephen Nelson, VP Engineering will
present this update of the VSI x86 rollout strategy at the webinar on May
11th. [Registration at https://vmssoftware.com/about/webinars/]

Better hypervisor support will mean being able to run OpenVMS on any x86
hardware, which is a priority for many customers. However, this also means
that bare metal support will be delayed – although eventually the DL380 and
several other server models are planned to be supported. The first limited
production release of OpenVMS on x86, V9.2, is scheduled for July 2022.

Arne Vajhøj

May 5, 2022, 7:56:55 PM
On 5/5/2022 5:37 PM, Jan-Erik Söderholm wrote:
> Maybe others have seen this, but I just got this announcement from VSI.

> After analyzing customer feedback, VMS Software Inc. has made the
> decision to move hypervisor support on OpenVMS V9.2 to the top of our
> priority list. We plan to extend support for the following: VMWare, KVM
> and Hyper-V. Kevin Shaw, the CEO of VMS Software, Inc. and Stephen
> Nelson, VP Engineering will present this update of the VSI x86 rollout
> strategy at the webinar on May 11th. [Registration at
> https://vmssoftware.com/about/webinars/]
>
> Better hypervisor support will mean being able to run OpenVMS on any x86
> hardware, which is a priority for many customers. However, this also
> means that bare metal support will be delayed – although eventually the
> DL380 and several other server models are planned to be supported.

I guess that makes sense:
- many want specifically to run in VM
- even those that don't can run in VM with little extra work
- VM is easy HW support wise
- physical is complicated HW support wise

> The
> first limited production release of OpenVMS on x86, V9.2, is scheduled
> for July 2022.

So July is it.

:-)

Arne

Dave Froble

May 5, 2022, 7:57:32 PM
On 5/5/2022 5:37 PM, Jan-Erik Söderholm wrote:
Yeah, I was about to post the info, but you beat me to it.

:-)

Too bad about the "bare metal" support. Whatever will Phillip do?

:-)

--
David Froble Tel: 724-529-0450
Dave Froble Enterprises, Inc. E-Mail: da...@tsoft-inc.com
DFE Ultralights, Inc.
170 Grimplin Road
Vanderbilt, PA 15486

Richard Maher

May 5, 2022, 9:03:38 PM
Great Stuff!

Simon Clubley

May 6, 2022, 8:07:38 AM
On 2022-05-05, Jan-Erik Söderholm <jan-erik....@telia.com> wrote:
>
> Better hypervisor support will mean being able to run OpenVMS on any x86
> hardware, which is a priority for many customers. However, this also means
> that bare metal support will be delayed – although eventually the DL380 and
> several other server models are planned to be supported. The first limited
> production release of OpenVMS on x86, V9.2, is scheduled for July 2022.

General hypervisor support is a far better tradeoff for VSI to be pushing
for at this point, rather than bare metal support on just a few hardware
boxes, especially given the general trends in the industry.

BTW, if it's a production release, does that mean all the compilers
will be ready by then to go with it ?

Simon.

--
Simon Clubley, clubley@remove_me.eisner.decus.org-Earth.UFP
Walking destinations on a map are further away than they appear.

Simon Clubley

May 6, 2022, 8:09:05 AM
Yes, but July 1st, or July 32nd ?

Also :-)

Dave Froble

May 6, 2022, 10:07:58 AM
On 5/6/2022 8:07 AM, Simon Clubley wrote:
> On 2022-05-05, Jan-Erik Söderholm <jan-erik....@telia.com> wrote:
>>
>> Better hypervisor support will mean being able to run OpenVMS on any x86
>> hardware, which is a priority for many customers. However, this also means
>> that bare metal support will be delayed – although eventually the DL380 and
>> several other server models are planned to be supported. The first limited
>> production release of OpenVMS on x86, V9.2, is scheduled for July 2022.
>
> General hypervisor support is a far better tradeoff for VSI to be pushing
> for at this point, rather than bare metal support on just a few hardware
> boxes, especially given the general trends in the industry.
>
> BTW, if it's a production release, does that mean all the compilers
> will be ready by then to go with it ?
>
> Simon.
>

Note the wording, perhaps their "get out of jail free" card?

"first limited production release of OpenVMS on x86, V9.2, is scheduled for July
2022"

Key word, "limited".

My opinion, without native compilers, it would not be "production ready".

John Reagan

May 6, 2022, 11:15:21 AM
Other than Macro, compilers are layered products. If you want to use the
word "production" to mean "also has a minimum set of layered products",
then go for it.

For the record, we will not have all of the compilers ready as native compilers
by the release of V9.2. V9.2 includes a native Macro.

John Reagan

May 6, 2022, 11:21:20 AM
Which compilers? Oh, never mind, you mean BASIC. :)

BASIC will not be ready in the July timeframe (even if you include July 32nd).
As I've mentioned before, BASIC presents an additional challenge on how it
describes the MAP statement to GEM. Our current G2L converter struggles
with describing it to the LLVM 3.4.2 used in the cross-compilers. The native
compilers are based on a much newer LLVM which I think/hope has a better
interface to allow us to describe them from G2L. However, I also worry that I
have to go into the BASIC frontend and do some changes (which would be
unique since G2L has been able to mimic GEM so far even with other frontends
that don't correctly create GEM symbol tables/IR)
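
For readers who haven't met it: a BASIC MAP declares a named record
layout overlaid on shared static storage, and several MAPs can overlay
the same bytes. A rough C sketch of the shape of the problem (the
layouts here are made up, and the union analogy is only approximate):

    /* Two MAP statements over the same named area give two views of  */
    /* the same bytes, roughly like a C union over one static area.   */
    union buf_overlay {
        struct {               /* MAP (BUF) LONG ID, STRING NAME = 20 */
            int  id;
            char name[20];
        } view_a;
        struct {               /* MAP (BUF) STRING RAW = 24           */
            char raw[24];
        } view_b;
    };

    union buf_overlay buf;     /* one storage area, two layouts       */

It is that kind of aliased, overlaid storage that has to be described
faithfully in LLVM IR.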

Dave Froble

May 6, 2022, 12:10:58 PM
No problem John. I have faith in you. I'm also aware that you're faced with a
rather large task. Take your time, do it right.

Arne Vajhøj

May 6, 2022, 3:43:05 PM
On 5/6/2022 11:15 AM, John Reagan wrote:
> On Friday, May 6, 2022 at 8:07:38 AM UTC-4, Simon Clubley wrote:
>> BTW, if it's a production release, does that mean all the compilers
>> will be ready by then to go with it ?

> Other than Macro, compilers are layered products. If you want to use the
> word "production" to mean "also has a minimum set of layered products",
> then go for it.
>
> For the record, we will not have all of the compilers ready as native compilers
> by the release of V9.2. V9.2 includes a native Macro.

What are current ETA's for the various native compilers?

Arne

Dave Froble

May 6, 2022, 5:22:00 PM
Seems to me, if they knew that, some of them would already be available.
When you know what needs to be done, it just takes man-hours. Trying to
figure out what must be done sort of waits for "eureka, I've figured it
out", and that is tough to schedule.

John Reagan

May 6, 2022, 6:20:57 PM
Didn't I just do that a few weeks ago? I think I did. We're working on native
BLISS, native C, and native C++ right now. All three are in various stages of "done".
We've been tracking down a memory corrupter in the native C compiler for a week now.
We're doing it the hard way with printf's, etc. since SET WATCH isn't working in the
debugger yet. All of these are shaking out bugs in the cross-compilers that we
build them with, finding latent incorrect code, etc. My hand-wave is late summer
but I keep getting surprised (sometimes in the good way, sometimes in the bad way)

chris

May 6, 2022, 6:37:36 PM
I would expect a C compiler to be part of an OS these days, or
just a package download, not so much C++. Even early unix
machines had a minimum of pcc, which was enough to build an
early gcc and associated tools. An OS is useless without
that, unless you are to run prepackaged apps...

Chris

Arne Vajhøj

May 6, 2022, 7:35:09 PM
On 5/6/2022 6:20 PM, John Reagan wrote:
> On Friday, May 6, 2022 at 3:43:05 PM UTC-4, Arne Vajhøj wrote:
>> On 5/6/2022 11:15 AM, John Reagan wrote:
>>> On Friday, May 6, 2022 at 8:07:38 AM UTC-4, Simon Clubley wrote:
>>>> BTW, if it's a production release, does that mean all the compilers
>>>> will be ready by then to go with it ?
>>> Other than Macro, compilers are layered products. If you want to use the
>>> word "production" to mean "also has a minimum set of layered products",
>>> then go for it.
>>>
>>> For the record, we will not have all of the compilers ready as native compilers
>>> by the release of V9.2. V9.2 includes a native Macro.

>> What are current ETA's for the various native compilers?

> Didn't I just do that a few weeks ago? I think I did.

You did. Early April. But I was curious whether there was an
update.

I think it is fair to say that native compilers are important
for the perception of how "ready" VMS x86-64 is.

That is not necessarily a good or fair criterion, but it is the
criterion applied in the world we live in.

> We're working on native
> BLISS, native C, and native C++ right now. All three are in various stages of "done".
> We've been tracking down a memory corrupter in the native C compiler for a week now.
> We're doing it the hard way with printf's, etc. since SET WATCH isn't working in the
> debugger yet. All of these are shaking out bugs in the cross-compilers that we
> build them with, finding latent incorrect code, etc. My hand-wave is late summer
> but I keep getting surprised (sometimes in the good way, sometimes in the bad way)

OK.

Arne

Single Stage to Orbit

May 7, 2022, 6:01:27 AM
On Fri, 2022-05-06 at 08:15 -0700, John Reagan wrote:

> For the record, we will not have all of the compilers ready as native
> compilers by the release of V9.2.  V9.2 includes a native Macro.

Perfect, that can be used to bootstrap a compiler. It's just a matter
of writing something in Macro that can compile the compiler until it
can compile itself.
--
Tactical Nuclear Kittens

Mike K.

May 7, 2022, 10:03:30 PM
So, is native Pascal planned? It would be fun to get something like VMS Moria up and running, which should in theory be possible with Pascal and Macro. Alternatively, has anybody with access to the current cross-compilers tried to build it? It likely wouldn't take long and would be amusing to see pop up in the list of officially supported open source* packages.

Mike

* VMS Moria isn't technically open source as the license contains a prohibition on commercial use. But I suspect that simply compiling it and offering it as a free download wouldn't count as commercial.

John Reagan

May 8, 2022, 2:11:08 PM
Yes, all native compilers are planned. I don't expect Pascal to be a challenge once we get
the bugs from BLISS and C shaken out. I might have the Moria kit somewhere but it might
take a while before I find time to try the cross-compiler with it.

Arne Vajhøj

May 8, 2022, 3:15:30 PM
On 5/8/2022 2:11 PM, John Reagan wrote:
> Yes, all native compilers are planned. I don't expect Pascal to be a challenge once we get
> the bugs from BLISS and C shaken out. I might have the Moria kit somewhere but it might
> take a while before I find time to try the cross-compiler with it.

My guess is that VMS is one of the most diverse platforms
when it comes to native languages usage.

*nix and Win are almost entirely C and C++.

Mainframe has a strong presence of Cobol and PL/I
and today probably some C and C++ and maybe a little
bit of Fortran..

On VMS all the languages are really widely used: Pascal,
Basic, Cobol, Fortran, C. The only one I am not sure about
is C++.

Arne


John Dallman

May 8, 2022, 3:42:03 PM
In article <627816cf$0$705$1472...@news.sunsite.dk>, ar...@vajhoej.dk
(Arne Vajhøj) wrote:

> My guess is that VMS is one of the most diverse platforms
> when it comes to native languages usage.
>
> On VMS all the languages are really widely used: Pascal,
> Basic, Cobol, Fortran, C. The only one I am not sure about
> is C++.

A good C++ compiler will be necessary to get ISVs to start supporting VMS.
(It's also required for compiling the LLVM backend for native compilers
on x86.)

John

Arne Vajhøj

May 8, 2022, 7:34:34 PM
Oh yes.

C++ is needed for a lot of platform type stuff.

LLVM, OpenJDK etc..

I was not indicating that C++ was not needed. I was just trying to
say that I don't think there are that many VMS C++ custom
business applications out there.

Arne

Simon Clubley

May 9, 2022, 8:55:25 AM
On 2022-05-08, Arne Vajhøj <ar...@vajhoej.dk> wrote:
>
> My guess is that VMS is one of the most diverse platforms
> when it comes to native languages usage.
>
> *nix and Win are almost entirely C and C++.
>
> Mainframe has a strong presence of Cobol and PL/I
> and today probably some C and C++ and maybe a little
> bit of Fortran..
>
> On VMS all the languages are really widely used: Pascal,
> Basic, Cobol, Fortran, C. The only one I am not sure about
> is C++.
>

[Now it's my turn. :-)]

Hey, you forgot about Ada!

On a more serious note, so did VSI, probably because all the Ada users
have probably been forced away from VMS by now...

Single Stage to Orbit

May 9, 2022, 10:01:27 AM
On Mon, 2022-05-09 at 12:55 +0000, Simon Clubley wrote:
> On 2022-05-08, Arne Vajhøj <ar...@vajhoej.dk> wrote:
>
> [Now it's my turn. :-)]
>
> Hey, you forgot about Ada!
>
> On a more serious note, so did VSI, probably because all the Ada
> users
> have probably been forced away from VMS by now...

As soon as they make the compilers available, it should be
straightforward to port the GCC suite over to the x86_64 platform and
that'll include Ada if VSI doesn't port Ada over as well.

On another note, I think it'll be interesting to get Rust running on
this platform as well.
--
Tactical Nuclear Kittens

Ian Miller

May 9, 2022, 10:32:21 AM
A supported Ada compiler for OpenVMS is wanted.

Simon Clubley

May 9, 2022, 2:04:52 PM
On 2022-05-09, Single Stage to Orbit <alex....@munted.eu> wrote:
> On Mon, 2022-05-09 at 12:55 +0000, Simon Clubley wrote:
>> On 2022-05-08, Arne Vajhøj <ar...@vajhoej.dk> wrote:
>>
>> [Now it's my turn. :-)]
>>
>> Hey, you forgot about Ada!
>>
>> On a more serious note, so did VSI, probably because all the Ada
>> users
>> have probably been forced away from VMS by now...
>
> As soon as they make the compilers available, it should be
> straightforward to port the GCC suite over to the x86_64 platform and
> that'll include Ada if VSI doesn't port Ada over as well.
>

You have clearly never looked deeply into the gcc and binutils internals
or you have way more knowledge about gcc internals than I do... :-)

I looked at this for Alpha and I eventually gave up, because even with
some VMS support still in gcc/binutils at that time, there appeared to
be bits missing.

I was able to compile gcc/binutils for VMS Alpha as a cross-compiler on
a Linux host and was able to generate some C language Alpha binaries on
Linux which ran ok on VMS Alpha but that was as far as I got.

I was never able to get GNAT to build as a cross-compiler on Linux
for VMS Alpha because I kept hitting an internal compiler error.

IIRC, the VMS support was completely stripped from FSF binutils/gcc
in a later version of those toolkits.

The VMS support (such as it was in the FSF GCC version) would need to
be added back in and fixed, x86-64 VMS added as a gcc target (which
should be easier for x86-64 than it was for VMS Alpha) into gcc itself and
then support for a x86-64 VMS target added into the Ada runtime itself.

Have fun. :-)

John Dallman

May 9, 2022, 2:45:05 PM
In article <t5bl41$unq$2...@dont-email.me>,
clubley@remove_me.eisner.decus.org-Earth.UFP (Simon Clubley) wrote:

> IIRC, the VMS support was completely stripped from FSF binutils/gcc
> in a later version of those toolkits.
>
> The VMS support (such as it was in the FSF GCC version) would need
> to be added back in and fixed, x86-64 VMS added as a gcc target
> (which should be easier for x86-64 than it for VMS Alpha) into
> gcc itself and then support for a x86-64 VMS target added into
> the Ada runtime itself.

This will be simplified, a bit, by x86-64 using ELF, as opposed to the
VAX or Alpha object languages. But that means only parts of the old
support need putting back.

John

Arne Vajhøj

May 9, 2022, 7:10:42 PM
On 5/9/2022 8:55 AM, Simon Clubley wrote:
> On 2022-05-08, Arne Vajhøj <ar...@vajhoej.dk> wrote:
>> My guess is that VMS is one of the most diverse platforms
>> when it comes to native languages usage.
>>
>> *nix and Win are almost entirely C and C++.
>>
>> Mainframe has a strong presence of Cobol and PL/I
>> and today probably some C and C++ and maybe a little
>> bit of Fortran..
>>
>> On VMS all the languages are really widely used: Pascal,
>> Basic, Cobol, Fortran, C. The only one I am not sure about
>> is C++.
>
> [Now it's my turn. :-)]
>
> Hey, you forgot about Ada!
>
> On a more serious note, so did VSI, probably because all the Ada users
> have probably been forced away from VMS by now...

I must admit that I consider Ada on VMS as dead (along
with PL/I).

Arne


Bill Gunshannon

May 9, 2022, 8:17:06 PM
Not to mention APL. :-)

bill


Arne Vajhøj

May 9, 2022, 8:34:34 PM
That I did not even consider.

BTW, is APL a compiled or an interpreted language?

Arne

Bill Gunshannon

May 9, 2022, 9:19:40 PM
I have never seen an APL Compiler. Doesn't mean one never existed, but
what Ken Iverson created was an interpreted language. I still have a
few versions of APL running around here. It's a fun language to play
with and got more use than some people like to think. We had a student
get an internship with a financial office in Scranton. They had a
complete, in-house developed Information System for their business
written entirely in APL. My student really liked it and got very good
at it. I also think Marist in Poughkeepsie still teaches it as one
of their primary educational languages. Not a surprise when you look
at where they are located. :-)

bill

Single Stage to Orbit

May 10, 2022, 5:01:27 AM
On Mon, 2022-05-09 at 18:04 +0000, Simon Clubley wrote:

> The VMS support (such as it was in the FSF GCC version) would need to
> be added back in and fixed, x86-64 VMS added as a gcc target (which
> should be easier for x86-64 than it was for VMS Alpha) into gcc itself
> and
> then support for a x86-64 VMS target added into the Ada runtime
> itself.
>
> Have fun. :-)

As another poster explained, the process for that will be considerably
simpler as VMS on x86_64 supports ELF and all the heavy lifting for
generating x86_64 object code is already implemented.
--
Tactical Nuclear Kittens

VMSgenerations working group

May 10, 2022, 7:44:31 AM
Except for those who use it and those who maintain it :)

Yes, Adacore killed its support in 2015. Yes, VSI has not been able to
do anything so far (not enough use cases).

But as "the king is dead, long live the king": "VMS is dead, long live
VMS"... who knows?
>
> Arne
>
>



VMSgenerations working group

May 10, 2022, 7:50:58 AM
Agreed. A lot of things can be done by upgrading the current
gcc/ada/itanium cross-compiled chain (maintained and used here in
France). But it would be a nightmare doing the same for VMS/x86.

However, Adacore has a pure Open Source project using LLVM
(https://github.com/AdaCore/gnat-llvm), and porting that to x86 will be
much more feasible. Keep the secret: I'm just beginning this task.


Ian Miller

May 10, 2022, 11:26:22 AM
that sounds like a good idea :-)

plugh

May 10, 2022, 1:20:31 PM
On Monday, May 9, 2022 at 7:01:27 AM UTC-7, Single Stage to Orbit wrote:

>
> On another note, I think it'll be interesting to get Rust running on
> this platform as well.
> --
> Tactical Nuclear Kittens

If the hobbyist pgm is dead, I'm not sure how to move this forward. If we can get a few people together, maybe we can storm the castle.

Simon Clubley

May 10, 2022, 1:27:46 PM
Once again, as I and others have pointed out yet again just recently,
there is a VSI Alpha hobbyist program.

It's only the HPE program that is dead.

I'm running VSI Alpha VMS just fine on the VSI hobbyist program at
home (even if I have to run FreeAXP on an old Windows box to do so).

plugh

May 10, 2022, 1:36:19 PM
On Tuesday, May 10, 2022 at 10:27:46 AM UTC-7, Simon Clubley wrote:
> On 2022-05-10, plugh <jchi...@gmail.com> wrote:
> > If the hobbyist pgm is dead, I'm not sure how to move this forward. If we can get a few people together, maybe we can storm the castle.
> Once again, as I and others have pointed out yet again just recently,
> there is a VSI Alpha hobbyist program.
>
> It's only the HPE program that is dead.
>
> I'm running VSI Alpha VMS just fine on the VSI hobbyist program at
> home (even if I have to run FreeAXP on an old Windows box to do so).
> Simon.

I appreciate the enthusiasm, but it's emulators all the way down.

I won't do this kind of work in that environment. Quoth the Mickey, "Hey kids, let's port Rust to the PDP/8!"

jec

-- Microsoft free since 2003

Simon Clubley

May 10, 2022, 1:39:42 PM
On 2022-05-10, plugh <jchi...@gmail.com> wrote:
> On Tuesday, May 10, 2022 at 10:27:46 AM UTC-7, Simon Clubley wrote:
>> On 2022-05-10, plugh <jchi...@gmail.com> wrote:
>> > If the hobbyist pgm is dead, I'm not sure how to move this forward. If we can get a few people together, maybe we can storm the castle.
>> Once again, as I and others have pointed out yet again just recently,
>> there is a VSI Alpha hobbyist program.
>>
>> It's only the HPE program that is dead.
>>
>> I'm running VSI Alpha VMS just fine on the VSI hobbyist program at
>> home (even if I have to run FreeAXP on an old Windows box to do so).
>> Simon.
>
> I appreciate the enthusiasm, but it's emulators all the way down.
>
> I won't do this kind of work in that environment. Quoth the Mickey, "Hey kids, let's port Rust to the PDP/8!"
>

Well, you would be famous if you pulled off the last sentence... :-)

Whether that's "good" or "bad" famous remains to be seen... :-)

plugh

May 10, 2022, 2:06:50 PM
On Tuesday, May 10, 2022 at 10:39:42 AM UTC-7, Simon Clubley wrote:
> On 2022-05-10, plugh <jchi...@gmail.com> wrote:
> > On Tuesday, May 10, 2022 at 10:27:46 AM UTC-7, Simon Clubley wrote:
> >> On 2022-05-10, plugh <jchi...@gmail.com> wrote:
> >> > If the hobbyist pgm is dead, I'm not sure how to move this forward. If we can get a few people together, maybe we can storm the castle.
> >> Once again, as I and others have pointed out yet again just recently,
> >> there is a VSI Alpha hobbyist program.
> >>
> >> It's only the HPE program that is dead.
> >>
> >> I'm running VSI Alpha VMS just fine on the VSI hobbyist program at
> >> home (even if I have to run FreeAXP on an old Windows box to do so).
> >> Simon.
> >
> > I appreciate the enthusiasm, but it's emulators all the way down.
> >
> > I won't do this kind of work in that environment. Quoth the Mickey, "Hey kids, let's port Rust to the PDP/8!"
> >
> Well, you would be famous if you pulled off the last sentence... :-)
>
> Whether that's "good" or "bad" famous remains to be seen... :-)
> Simon.

Per aspera ad astra!

chris

May 10, 2022, 6:51:48 PM
Per Ardua ad Astra :-)...


plugh

May 10, 2022, 7:02:09 PM
Yeah, I did say "storm the castle". I'll pay attention to that disabled checkbox. There used to be monitoring services for such things.

plugh

May 10, 2022, 7:05:33 PM
On Monday, May 9, 2022 at 7:01:27 AM UTC-7, Single Stage to Orbit wrote:
As I understand the current memory model, the stack is 32 bits, the heap 64. What impact if any will that have?

Arne Vajhøj

May 10, 2022, 7:48:52 PM
On 5/10/2022 7:05 PM, plugh wrote:
> On Monday, May 9, 2022 at 7:01:27 AM UTC-7, Single Stage to Orbit wrote:
>> As soon as they make the compilers available, it should be
>> straightforward to port the GCC suite over to the x86_64 platform and
>> that'll include ADA if VSI doesn't port Ada over as well.
>>
>> On another note, I think it'll be interesting to get Rust running on
>> this platform as well.

> As I understand the current memory model, the stack is 32 bits, the heap 64. What impact if any will that have?

There is:
* a stack P1 that can be accessed via both 32 and 64 bit pointers
* a heap P0 that can be accessed via both 32 and 64 bit pointers
* a heap P2 that can be accessed only via 64 bit pointers

I don't think that P1 and P2 being "different" will cause
a problem, but I suspect they will get problems with the
32 and 64 bit pointers situation.

all 32 bit pointers : 1980ish
all 64 bit pointers : not able to use some VMS API's
both 32 and 64 bit pointers : compiler changes required
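
A minimal C sketch of that per-pointer granularity, assuming the
compiler is invoked with the /POINTER_SIZE qualifier so the
pointer-size pragmas and _malloc32/_malloc64 are enabled:

    #include <stdlib.h>

    #pragma pointer_size save
    #pragma pointer_size 64
    char *big;                    /* 64 bit pointer, may point into P2 */
    #pragma pointer_size 32
    char *small;                  /* 32 bit pointer, P0/P1 only        */
    #pragma pointer_size restore

    int main(void)
    {
        big   = _malloc64(1024);  /* allocation may land in P2 space   */
        small = _malloc32(1024);  /* allocation is 32 bit addressable  */
        free(big);                /* free() accepts both pointer sizes */
        free(small);
        return 0;
    }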

Arne

plugh

May 10, 2022, 8:39:22 PM
Don't quote me on this, but I'm not entirely sure compiler changes are required.
I was starting some research on this a few weeks ago, based on the design of the Rust Core Library
https://doc.rust-lang.org/core/
in which there's no heap allocation.

chris

May 10, 2022, 8:40:40 PM
Well, that gives you 4Gb stack space, which should be enough for most,
while the 64 bit heap, virtualised of course, should be enough for any
real world process. Depending on OS limits, of course...

Chris

Arne Vajhøj

May 10, 2022, 8:45:01 PM
P1 is only 1 GB. Or probably more relevant P1 + P0 is 2 GB as I assume
the stack could grow below 0x0000000040000000.

Arne

Arne Vajhøj

May 10, 2022, 8:47:17 PM
On 5/10/2022 8:39 PM, plugh wrote:
> On Tuesday, May 10, 2022 at 4:48:52 PM UTC-7, Arne Vajhøj wrote:
>> On 5/10/2022 7:05 PM, plugh wrote:
>>> On Monday, May 9, 2022 at 7:01:27 AM UTC-7, Single Stage to Orbit wrote:
>>>> As soon as they make the compilers available, it should be
>>>> straightforward to port the GCC suite over to the x86_64 platform and
>>>> that'll include ADA if VSI doesn't port Ada over as well.
>>>>
>>>> On another note, I think it'll be interesting to get Rust running on
>>>> this platform as well.
>>> As I understand the current memory model, the stack is 32 bits, the heap 64. What impact if any will that have?
>> There is:
>> * a stack P1 that can be accessed via both 32 and 64 bit pointers
>> * a heap P0 that can be accessed via both 32 and 64 bit pointers
>> * a heap P2 that can be accessed only via 64 bit pointers
>>
>> I don't think that P1 and P2 being "different" will cause
>> a problem, but I suspect they will get problems with the
>> 32 and 64 bit pointers situation.
>>
>> all 32 bit pointers : 1980ish
>> all 64 bit pointers : not able to use some VMS API's
>> both 32 and 64 bit pointers : compiler changes required
>
> Don't quote me on this, but I'm not entirely sure compiler changes are required.
> I was starting some research on this a few weeks ago, based on the design of the Rust Core Library
> https://doc.rust-lang.org/core/
> in which there's no heap allocation.

Rust uses the heap.

The library you link to apparently does not.

Arne

Arne Vajhøj

May 10, 2022, 8:50:09 PM
On 5/10/2022 7:44 AM, VMSgenerations working group wrote:
> Le 10/05/2022 à 01:10, Arne Vajhøj a écrit :
>> On 5/9/2022 8:55 AM, Simon Clubley wrote:
>>> On 2022-05-08, Arne Vajhøj <ar...@vajhoej.dk> wrote:
>>>> My guess is that VMS is one of the most diverse platforms
>>>> when it comes to native languages usage.
>>>>
>>>> *nix and Win are almost entirely C and C++.
>>>>
>>>> Mainframe has a strong presence of Cobol and PL/I
>>>> and today probably some C and C++ and maybe a little
>>>> bit of Fortran..
>>>>
>>>> On VMS all the languages are really widely used: Pascal,
>>>> Basic, Cobol, Fortran, C. The only one I am not sure about
>>>> is C++.
>>>
>>> [Now it's my turn. :-)]
>>>
>>> Hey, you forgot about Ada!
>>>
>>> On a more serious note, so did VSI, probably because all the Ada users
>>> have probably been forced away from VMS by now...
>>
>> I must admit that I consider Ada on VMS as dead (along
>> with PL/I).
> Except for who use it and who maintain it :)
>
> Yes Adacore killed its support in 2015. Yes VSI cannot do anything until
> today (no sufficient use cases).

Yes.

But with all due respect for the great work you do, it is not
the same as VSI or ACT.

Arne



plugh

May 10, 2022, 9:27:17 PM
On Tuesday, May 10, 2022 at 5:47:17 PM UTC-7, Arne Vajhøj wrote:
> On 5/10/2022 8:39 PM, plugh wrote:
> Rust use^W CAN heap.
>
> That library you link to apparently does not use heap.

Correct. It's for embedded systems.

And
.
.
.

experimentation.....

--
It's over there...

Gérard Calliet

May 11, 2022, 2:12:19 PM
HP and VSI didn't do anything on Ada Itanium. All was done by Adacore.
And I have just rebuilt the work done by Adacore from sources on FSF.

Quite the same thing: taking the work done by Adacore on gnat-llvm and
rebuilding it on VMS/x86. Only "quite" because there will be changes to
adapt to the VMS {llvm,c++,cmake} uses. On gnat-gcc the adaptation to VMS
was already done (but it is horribly cross-compiled ): ).

My question: if everything used on VMS has to be "the same as VSI", do
you think the ecosystem has a chance to survive? And, about Adacore: no,
no, a lot of things are done using and rebuilding gnat-ada (not just by
me), and they are "not the same as Adacore".

By the way, Ada did survive because of the mixed business model between
Open Source and proprietary options, and because Adacore helped to create
a community that "does things". Same thing with VMS: it is because we can
use a lot of things coming from Open Source that VMS can survive, and
these things "are not VSI".

But you are right, the project is a little bit too bold. What to do?

Arne Vajhøj

May 11, 2022, 2:32:59 PM
On 5/11/2022 2:12 PM, Gérard Calliet wrote:
> HP and VSI didn't do anything on Ada Itanium. All was done by Adacore.
> And I have just rebuilt the work done by Adacore from sources on FSF.
>
> Quite same thing taking the work done by Adacore on gnat-llvm and
> rebuilting it on VMS/x86. Only "quite" because there will be changes to
> adapt with VMS {llvm,c++,cmake} uses. On gnat-gcc the adaptation to VMS
> was already done (but it is horrible cross-compiled ): ).

I know it is a big project.

I dabbled a little bit into GCC on VMS around version
1.somethingaround40.

> My question: if everything used on VMS has to be "the same as VSI",

I don't think everything has to be same as VSI.

But for compilers specifically I think they should generate code
compatible with VSI compilers.

> do
> you think the ecosystem has a chance to survive?

I hope so.

VSI got a hobbyist program and an ISV program.

That should hopefully help both open source on VMS and
commercial third party products on VMS.

> By the way, Ada did survive because of the mix business plan between
> Open Source and proprietary options. Because Adacore helped to create a
> community who "does things". Samething with VMS: it is because we can
> use a lot of things issued from Open Source that VMS can survive, and
> this things "are not VSI".

VSI, like most other commercial software companies (IBM, Oracle, MS,
SAP etc.), is using open source.

There is also a small VMS open source community, but to be
frank - it is way too small - we need more people maintaining
old ports and creating new ports.

And

Arne Vajhøj

May 11, 2022, 8:10:47 PM
You can get APL almost anywhere.

https://github.com/dzaima/APL should be a pretty good APL
implementation and only requires Java 8+ to run.

Arne

Gérard Calliet

May 12, 2022, 7:11:13 AM
Le 11/05/2022 à 20:32, Arne Vajhøj a écrit :
>
> There is also a small VMS open source community, but to be
> frank - it is way too small - we need more people maintaining
> old ports and creating new ports
indeed

Simon Clubley

May 12, 2022, 8:47:17 AM
On 2022-05-11, Gérard Calliet <gerard....@pia-sofer.fr> wrote:
> HP and VSI didn't do anything on Ada Itanium. All was done by Adacore.
> And I have just rebuilt the work done by Adacore from sources on FSF.
>

And you were fortunate in that the Itanium VMS support in the FSF
sources appeared to be in a far better state than the Alpha VMS
support.

Unfortunately, there's no such thing as an Itanium full system emulator,
or that's what I would have been running in my efforts instead of Alpha.

> Quite the same thing: taking the work done by Adacore on gnat-llvm and
> rebuilding it on VMS/x86. Only "quite" because there will be changes to
> adapt to the VMS {llvm,c++,cmake} uses. On gnat-gcc the adaptation to VMS
> was already done (but it is horribly cross-compiled ): ).
>

The cross-compiled part of this isn't really a problem for me because
I am well used to that development model for embedded work.

I would have had no problem with using Linux as a development host for
VMS work and then only using VMS to run the final binaries after they
had been generated on a Linux box.

One of the advantages of that approach would have been the ability to
take full advantage of the richer development environment that exists
on Linux when compared to VMS.

Gérard Calliet

May 12, 2022, 11:11:09 AM
Le 12/05/2022 à 14:47, Simon Clubley a écrit :
> On 2022-05-11, Gérard Calliet <gerard....@pia-sofer.fr> wrote:
>> HP and VSI didn't do anything on Ada Itanium. All was done by Adacore.
>> And I have just rebuilt the work done by Adacore from sources on FSF.
>>
>
> And you were fortunate in that the state of Itanium VMS in the FSF
> sources appeared to be in a far better state than those for Alpha VMS
> appeared to be.
I think it's because Adacore had to maintain Ada on Itanium. Not on
Alpha, where there was DEC Ada, as you know.

Unfortunately, they stopped the support in 2015 and didn't publish for
VMS after that. So we don't have anything good on gcc versions 5 and
newer, which is when gcc itself became built with C++.

I'm sure Adacore did a lot of things for VMS and gcc version 5, but,
because support had stopped, they stopped publishing. I heard they
said it was a nightmare dealing with HP engineering at that time.

A much bolder project I have for the future, if business keeps me alive,
is upgrading my gnat-ada build for Itanium, building gcc-c++, and
upgrading with it, because I'm sure there will be Itanium systems around
for at least five or ten years, and they deserve a good Ada.
>
> Unfortunately, there's no such thing as an Itanium full system emulator
> or that's what I would have been running in my efforts instead of Alpha.
>
>> Quite the same thing: taking the work done by Adacore on gnat-llvm and
>> rebuilding it on VMS/x86. Only "quite" because there will be changes to
>> adapt to the VMS {llvm,c++,cmake} uses. On gnat-gcc the adaptation to VMS
>> was already done (but it is horribly cross-compiled ): ).
>>
>
> The cross-compiled part of this isn't really a problem for me because
> I am well used to that development model for embedded work.
I understand. But for VMS purists, it is a little awkward not to have a
complete native environment.
>
> I would have had no problem with using Linux as a development host for
> VMS work and then only using VMS to run the final binaries after they
> had been generated on a Linux box
>
> One of the advantages of that approach would have been the ability to
> take full advantage of the richer development environment that exists
> on Linux when compared to VMS.
Agreed. And perhaps you know we'll be able to do a lot more cross-work
with VMS/x86. VSI themselves work like that, building some things and
trying them in the first phases of development on x86, as I understood
from something John Reagan had explained.

That is my current workshop: beginning to use (to try) gnat-llvm on
VMS/x86. Absolutely exciting.
>
> Simon.

Ian Miller

May 12, 2022, 11:32:42 AM
On Thursday, May 5, 2022 at 10:37:42 PM UTC+1, Jan-Erik Söderholm wrote:
> Maybe others have seen this, but I just got this announcement from VSI.
> Regards, Jan-Erik.
>
>
>
> After analyzing customer feedback, VMS Software Inc. has made the decision
> to move hypervisor support on OpenVMS V9.2 to the top of our priority list.
> We plan to extend support for the following: VMWare, KVM and Hyper-V. Kevin
> Shaw, the CEO of VMS Software, Inc. and Stephen Nelson, VP Engineering will
> present this update of the VSI x86 rollout strategy at the webinar on May
> 11th. [Registration at https://vmssoftware.com/about/webinars/]
>
> Better hypervisor support will mean being able to run OpenVMS on any x86
> hardware, which is a priority for many customers. However, this also means
> that bare metal support will be delayed – although eventually the DL380 and
> several other server models are planned to be supported. The first limited
> production release of OpenVMS on x86, V9.2, is scheduled for July 2022.

VSI Webinar 11 May x86 port update
now on YouTube https://youtu.be/eH8gAjoOkXI

Andrew Brehm

May 13, 2022, 3:34:45 AM
On 05/05/2022 23:37, Jan-Erik Söderholm wrote:
> Maybe others have seen this, but I just got this announcement from VSI.
> Regards, Jan-Erik.
>
>
>
> After analyzing customer feedback, VMS Software Inc. has made the decision to move hypervisor support on OpenVMS V9.2 to the top of our priority list. We plan to extend support for the following: VMWare, KVM and Hyper-V. Kevin Shaw, the CEO of VMS Software, Inc. and Stephen Nelson, VP Engineering will present this update of the VSI x86 rollout strategy at the webinar on May 11th. [Registration at https://vmssoftware.com/about/webinars/]
>
> Better hypervisor support will mean being able to run OpenVMS on any x86 hardware, which is a priority for many customers. However, this also means that bare metal support will be delayed – although eventually the DL380 and several other server models are planned to be supported. The first limited production release of OpenVMS on x86, V9.2, is scheduled for July 2022.


I wonder if "support" means developing some paravirtualisation drivers, aka vmtools.

If OpenVMS VMs cannot be shut down properly by the hypervisor but require timed or manual work on the OS, it will be difficult to sell this to customers used to hypervisor-based operation.

chris

May 14, 2022, 6:28:48 PM
Those numbers were theoretical, but X86-64 will have hardware memory
management capability for more than that. I'm using ProLiant G8 series
for work, as an economical sweet spot from a cost POV, but they are
often advertised with 16 to 64 GB typically, so one has to ask, where
is the limitation? In the OS, or what?...

Chris

Stephen Hoffman

May 14, 2022, 7:32:20 PM
On 2022-05-14 22:28:43 +0000, chris said:

> On 05/11/22 01:44, Arne Vajhøj wrote:
>>
>> P1 is only 1 GB. Or probably more relevant P1 + P0 is 2 GB as I assume
>> the stack could grow below 0x0000000040000000.
>>
>
> Those numbers were theoretical, but X86-64 will have hardware memory
> management capability for more than that. I'm using ProLiant G8 series
> for work, as an economical sweet spot from a cost POV, but they are
> often advertised with 16 to 64 GB typically, so one has to ask, where is
> the limitation? In the OS, or what?...

Apologies on any bit-width errors latent in the following.

This posting differentiates virtual addressing from physical
addressing, and largely ignores discussion of physical addressing,
including the 48-bit physical addressing limit on many recent x86-64
boxes, and the 5-level page tables (providing 57 bits of virtual
addressing) available on certain current x86-64 processors.

OpenVMS uses what amounts to segmented virtual addressing, and the
OpenVMS virtual address space design ties directly back to VAX. This
at least through OpenVMS I64, and haven't checked OpenVMS x86-64 here.

VAX had P0 user space where ~all VAX user apps and user data resides,
00000000 to 3FFFFFFF, P1 from 40000000 to 7FFFFFFF for the control
region and CLI and baggage, and the remainder for S0 system space, and
later S1 for more system space with VAX Extended Virtual Addressing.
I'll here ignore VAX Extended Physical Addressing.

Each of these P0, P1, S0, and S1 spaces is 30 bits of virtual addressing.

Alpha uses 64-bit virtual and always has, and preserves the existing
VAX 32-bit design with P0 and P1 at the lowest addresses
(00000000.00000000 to 00000000.7FFFFFFF), with S0 and S1 at the highest
(FFFFFFFF.80000000 to FFFFFFFF.FFFFFFFF, and adds P2 and S2 into the
great huge gap between those two ranges. OpenVMS Alpha V7.0 opened up
user access for P2 and S2 addressing.

As part of allowing existing 32-bit applications to continue to operate
on OpenVMS, app developers working on new apps now get to deal with
what is located where, and which API calls to use for addressing.

Apps use 64-bit pointers, but OpenVMS and its preference for upward
compatibility means existing and new apps are an aggregation of older
32- and newer 64-bit calls where needed, and all eventually
sign-extended to 64-bit virtual addresses at run-time by the processor.
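
The sign extension is easy to see in miniature, and it is why the
S0/S1 range lands at the very top of the 64-bit space (a small,
self-contained C demonstration of the arithmetic only):

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        int32_t p1 = 0x7FFFFFFF;           /* top of P1, user space    */
        int32_t s0 = (int32_t)0x80000000;  /* base of S0, system space */

        printf("%016llx\n", (unsigned long long)(int64_t)p1);
        /* prints 000000007fffffff - low half of the 64-bit space      */
        printf("%016llx\n", (unsigned long long)(int64_t)s0);
        /* prints ffffffff80000000 - top half of the 64-bit space      */
        return 0;
    }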

As an implementation detail, all of the virtual address bits and all of
the physical address bits are not (yet) fully implemented, but we're
getting closer. Alpha provided 43 bits virtual. x86-64 now does 57 bits
virtual with 5-level paging. Etc.

This OpenVMS segmented virtual address space design works spectacularly
for those folks that are dependent on 32-bit apps and libraries and
their related APIs, but it tends to be a confusing morass for new work,
particularly for anybody familiar with 64-bit flat virtual addressing
and not having to deal with this stuff. (For those of a certain age and
experience, this is not as ugly as TKB by any stretch, but it's still
ugly.)

OpenVMS has been incrementally allowing hunks of stuff to migrate into
P2 and S2 space, though the documentation here tends to be scattershot,
and you have to find and use the necessary switches to get your code
and data out of the default P0 address range.

OpenVMS is the only operating system I'm aware of that chose this
hybrid addressing path; of both 32-bit and 64-bit APIs mixed within the
same executable images. Most other platforms decided to allow 32-bit
and 64-bit apps and processes to coexist in parallel. But not
co-resident within an app at run-time. This transition usually with a
decade or more to allow apps to be migrated to 64-bit APIs for those
platforms that have decided to deprecate and remove the older and
problematic 32-bit APIs. As an example, macOS 10.15 "Catalina" removed
32-bit app and API support, and completed the migration started back
around 10.5 "Leopard"; a ~decade ago.

I'm here ignoring the existence of so-called multi-architecture or
so-called fat binaries, which can provide 32- and 64-bit code and code
for multiple architectures within the file containing the binary. The
equivalent of the image activator picks the appropriate executable code
and maps it into virtual memory and transfers control to the
appropriate contents from those architectures available within the
binary. For this particular posting, I'm referring to the run-time
address space and run-time APIs in use. Not to multi-architecture
binaries.

Microsoft Windows further runs parallel 32- and 64-bit software
distributions, in addition to 32- and 64-bit apps.

For the long-archived 64-bit manual with some background on the topic:
http://www0.mi.infn.it/~calcolo/OpenVMS/ssb71/6467/6467ptoc.htm

There was a long discussion of this addressing topic here in
comp.os.vms newsgroup a while back, too. Which showed that a whole lot
of folks haven't yet written apps for 64-bit address space. Which is
fine. Where things get confusing is with which APIs (32-bit or 64-bit)
work where, whether API calls or descriptors or itemlists or otherwise.
And compilers and other factors, for that matter. BASIC has not yet
added support for 64-bit.


--
Pure Personal Opinion | HoffmanLabs LLC

Arne Vajhøj

May 15, 2022, 9:16:48 AM
On 5/14/2022 6:28 PM, chris wrote:
> On 05/11/22 01:44, Arne Vajhøj wrote:
>> On 5/10/2022 8:40 PM, chris wrote:
>>> On 05/11/22 00:05, plugh wrote:
>>>> As I understand the current memory model, the stack is 32 bits, the
>>>> heap 64. What impact if any will that have?
>>>
>>> Well, that gives you 4Gb stack space, which should be enough for most,
>>> while the 64 bit heap, virtualised of course, should be enough for
>>> any real world process. Depending on OS limits, of course...
>>
>> P1 is only 1 GB. Or probably more relevant P1 + P0 is 2 GB as I assume
>> the stack could grow below 0x0000000040000000.
>
> Those numbers were theoretical,

I believe they are very real. VAX, Alpha, Itanium, x86-64.

> but X86-64 will have hardware memory
> management capability for more than that.

Alpha, Itanium and x86-64 are 64 bit machines.

But that does not change P0 and P1 space in VMS.

> I'm using ProLiant G8 series
> for work, as an economical sweet spot from a cost POV, but they are
> often advertised with 16 to 64 GB typically, so one has to ask, where
> is the limitation? In the OS, or what?...

VMS design. Due to VAX compatibility decisions made 30 years ago.

Arne

Arne Vajhøj

May 15, 2022, 9:25:22 AM
On 5/14/2022 7:32 PM, Stephen Hoffman wrote:
> VAX had P0 user space where ~all VAX user apps and user data resides,
> 00000000 to 3FFFFFFF, P1 from 40000000 to 7FFFFFFF for the control
> region and CLI and baggage, and the remainder for S0 system space, and
> later S1 for more system space with VAX Extended Virtual Addressing.
> I'll here ignore VAX Extended Physical Addressing.
>
> Each of these P0, P1, S0, and S1 spaces is 30 bits of virtual addressing.
>
> Alpha uses 64-bit virtual and always has, and preserves the existing VAX
> 32-bit design with P0 and P1 at the lowest addresses (00000000.00000000
> to 00000000.7FFFFFFF), with S0 and S1 at the highest (FFFFFFFF.80000000
> to FFFFFFFF.FFFFFFFF), and adds P2 and S2 into the great huge gap between
> those two ranges. OpenVMS Alpha V7.0 opened up user access for P2 and S2
> addressing.
>
> As part of allowing existing 32-bit applications to continue to operate
> on OpenVMS, app developers working on new apps now get to deal with what
> is located where, and which API calls to use for addressing.
>
> Apps use 64-bit pointers, but OpenVMS and its preference for upward
> compatibility means existing and new apps are an aggregation of older
> 32- and newer 64-bit calls where needed, and all eventually
> sign-extended to 64-bit virtual addresses at run-time by the processor.

> This OpenVMS segmented virtual address space design works spectacularly
> for those folks that are dependent on 32-bit apps and libraries and
> their related APIs, but it tends to be a confusing morass for new work,
> particularly for anybody familiar with 64-bit flat virtual addressing
> and not having to deal with this stuff. (For those of a certain age and
> experience, this is not as ugly as TKB by any stretch, but it's still
> ugly.)
>
> OpenVMS has been incrementally allowing hunks of stuff to migrate into
> P2 and S2 space, though the documentation here tends to be scattershot,
> and you have to find and use the necessary switches to get your code and
> data out of the default P0 address range.
>
> OpenVMS is the only operating system I'm aware of that chose this hybrid
> addressing path; of both 32-bit and 64-bit APIs mixed within the same
> executable images. Most other platforms decided to allow 32-bit and
> 64-bit apps and processes to coexist in parallel. But not co-resident
> within an app at run-time. This transition usually with a decade or more
> to allow apps to be migrated to 64-bit APIs for those platforms that
> have decided to deprecate and remove the older and problematic 32-bit
> APIs.  As an example, macOS 10.15 "Catalina" removed 32-bit app and API
> support, and completed the migration started back around 10.5 "Leopard";
> a ~decade ago.

> Microsoft Windows further runs parallel 32- and 64-bit software
> distributions, in addition to 32- and 64-bit apps.

I think it is muddling matters when people talk about 32 vs 64
bit applications and even worse 32 vs 64 bit mode.

VMS does not have any of that. Unlike Windows, Linux, macOS etc..

VMS has 32 and 64 bit pointers. And the granularity is not
by program but by pointer. You can have a short program with 4 pointers
where 2 are 64 bit and 2 are 32 bit.

Some compilers only generate 32 bit pointers. And even if the compiler
supports both, then unless explicit messing around is done, pointers
are either all 32 bit or all 64 bit. But messing around is certainly
possible.

All done for compatibility reasons.

And potentially a PITA. Having data in P2 space that requires
a 64 bit pointer to access, and an API that only accepts
32 bit pointers, is not so fun.
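
A hedged C sketch of that copy-first dance. my_32bit_api is a made-up
stand-in for any routine whose interface takes only 32 bit pointers;
the pragmas and _malloc32/_malloc64 assume the usual DEC/VSI C
pointer-size support:

    #include <stdlib.h>

    #pragma pointer_size save
    #pragma pointer_size 64
    typedef char *char_ptr64;     /* can address P0, P1 and P2         */
    #pragma pointer_size 32
    typedef char *char_ptr32;     /* can address P0 and P1 only        */
    #pragma pointer_size restore

    void my_32bit_api(char_ptr32 buf);  /* hypothetical 32 bit API     */

    void call_with_p2_data(void)
    {
        char_ptr64 p2_data = _malloc64(4096);  /* may land in P2       */
        char_ptr32 copy    = _malloc32(4096);  /* 32 bit addressable   */
        int i;

        for (i = 0; i < 4096; i++)    /* copy down before the call     */
            copy[i] = p2_data[i];
        my_32bit_api(copy);
        free(copy);
        free(p2_data);
    }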

Arne

chris

May 15, 2022, 1:59:59 PM
The question still remains, what are the limits to memory size
that can be seen at boot time and fully used by VMS ?...

Chris





chris

May 15, 2022, 2:03:55 PM
Thanks for the extensive reply, though still not quite sure what the
limits of usable memory are. Last VMS here was 5.4 / VAX and ISTR the
overall limit was 4 GB, split into 2 x 2 GB sections...

Chris


Arne Vajhøj

May 15, 2022, 3:26:30 PM
> The question still remains, what are the limits to memory size
> that can be seen at boot time and fully used by VMS ?...

The above is all about virtual addresses aka what is
seen by a process.

Physical memory in a system is a completely different
issue.

VAX had a hard architectural limit that was reached.

I believe that on Alpha, Itanium and x86-64 it is a matter
of what the hardware implementation supports, because
the architectural limit is bloody huge.

I believe the x86-64 architectural limit currently is
48 bit aka 256 TB or 52 bit aka 4 EB. But you can't buy
a board and RAM modules that can supply that.

Arne

John Dallman

May 15, 2022, 3:56:37 PM
In article <628153e3$0$694$1472...@news.sunsite.dk>, ar...@vajhoej.dk
(Arne Vajhøj) wrote:

> I believe that on Alpha, Itanium and x86-64 it is a matter
> of what the hardware implementation supports, because
> the architectural limit is bloody huge.

That's right, if a little understated.

> I believe the x86-64 architectural limit currently is
> 48 bit aka 256 TB or 52 bit aka 4 EB. But you can't buy
> a board and RAM modules that can supply that.

Currently, 48-bit physical addressing is the limit. The current page
table format allows for 52-bit, which gives you 4 petabytes.

ARM64 likewise goes up to 48-bit physical addressing at present, as does
RISC-V. I can't rapidly figure out the limit on IBM Z; the terminology is
weird.

John

Stephen Hoffman

May 15, 2022, 4:16:13 PM
On 2022-05-15 19:26:22 +0000, Arne Vajhøj said:

> I believe the x86-64 architectural limit currently is
> 48 bit aka 256 TB or 52 bit aka 4 EB. But you can't buy
> a board and RAM modules that can supply that.

From the post:

> This posting differentiates virtual addressing from physical
> addressing, and largely ignores discussion of physical addressing,
> including the 48-bit physical addressing limit on many recent x86-64
> boxes, and the 5-level page tables (providing 57 bits of virtual
> addressing) available on certain current x86-64 processors.

48, and 57 bits.

48 is near filled.

HPE Superdome Flex presently supports 32 sockets and 48 terabytes.

That'd be a stretch for OpenVMS SMP support, and the rest of the system.

Any servers approaching the five-level 57-bit memory availability,
not so much.

I don't expect to meet a fully-populated Superdome Flex running OpenVMS
anytime soon.

Arne Vajhøj

May 15, 2022, 4:17:48 PM
On 5/15/2022 3:55 PM, John Dallman wrote:
> In article <628153e3$0$694$1472...@news.sunsite.dk>, ar...@vajhoej.dk
> (Arne Vajhøj) wrote:
>> I believe that on Alpha, Itanium and x86-64 it is a matter
>> about what the hardware implementation support, because
>> the architectural limit is bloody huge.
>
> That's right, if a little understated.
>
>> I believe the x86-64 architectural limit currently is
>> 48 bit aka 256 TB or 52 bit aka 4 EB. But you can't buy
>> a board and RAM modules that can supply that.
>
> Currently, 48-bit physical addressing is the limit. The current page
> table format allows for 52-bit, which gives you 4 petabytes.

Ooops.

4 PB not 4 EB.

Thanks.

Arne

Arne Vajhøj

May 15, 2022, 4:25:15 PM
On 5/15/2022 4:16 PM, Stephen Hoffman wrote:
> On 2022-05-15 19:26:22 +0000, Arne Vajhøj said:
>> I believe the x86-64 architectural limit currently is
>> 48 bit aka 256 TB or 52 bit aka 4 EB. But you can't buy
>> a board and RAM modules that can supply that.
>
> From the post:
>
>> This posting differentiates virtual addressing from physical
>> addressing, and largely ignores discussion of physical addressing,
>> including the 48-bit physical addressing limit on many recent x86-64
>> boxes, and the 5-level page tables (providing 57 bits of virtual
>> addressing) available on certain current x86-64 processors.
>
> 48, and 57 bits.
>
> 48 is near filled.
>
> HPE Superdome Flex presently supports 32 sockets and 48 terabytes.
>
> That'd be a stretch for OpenVMS SMP support, and the rest of the system.
>
> Any servers approaching the five-level 57-bit memory availability,
> not so much.

This thing:

https://en.wikipedia.org/wiki/Intel_5-level_paging

57 bit (virtual) = 128 PB

But other sources talk about 52 bit and 4 PB (not 4 EB). That seems
to be the physical address limit of the page table format (not to be
confused with 32 bit PAE).

Maybe we should just call it "a lot".

:-)

> I don't expect to meet a fully-populated Superdome Flex running OpenVMS
> anytime soon.

No demand.

Arne

Stephen Hoffman

May 15, 2022, 4:33:02 PM
On 2022-05-15 13:25:19 +0000, Arne Vajhøj said:

> I think it is muddling matters when people talk about 32 vs 64 bit
> applications and even worse 32 vs 64 bit mode.

> VMS does not have any of that. Unlike Windows, Linux, macOS etc..
>
> VMS has 32 and 64 bit pointers. And the granularity is not by program
> but by pointer. You can have a short program with 4 pointers where 2
> are 64 bit and 2 are 32 bit.
>
> Some compilers only generate 32 bit pointers. And even if the compiler
> supports both then unless explicit messing around is done pointers are
> either all 32 bit or all 64 bit. But messing around is certainly
> possible.
>
> All done for compatibility reasons.
>
> And potentially a PITA.

The word "potentially" is doing a whole lot of work, there.

Once y'all use a flat 64-bit address space and APIs, the existing and
hybrid memory management compatibility-focused design starts to smell
vaguely of TKB.

The lack of a clear migration path to flat 64-bit addressing and 64-bit
apps and tools, and 64-bit APIs was and remains among the most
troubling parts.

Apps going from 32-bit to 64-bit is not going to happen quickly, and
not without a migration path.

Conversely, supporting both 32- and 64-bit apps and APIs only adds
complexity on OpenVMS itself, and on end-users.

chris

May 15, 2022, 7:15:06 PM
On 05/15/22 21:32, Stephen Hoffman wrote:
> On 2022-05-15 13:25:19 +0000, Arne Vajhøj said:
>
>> I think it is muddling matters when people talk about 32 vs 64 bit
>> applications and even worse 32 vs 64 bit mode.
>
>> VMS does not have any of that. Unlike Windows, Linux, macOS etc..
>>
>> VMS has 32 and 64 bit pointers. And the granularity is not by program
>> but by pointer. You can have a short program with 4 pointers where 2
>> are 64 bit and 2 are 32 bit.
>>
>> Some compilers only generate 32 bit pointers. And even if the compiler
>> supports both then unless explicit messing around is done pointers are
>> either all 32 bit or all 64 bit. But messing around is certainly
>> possible.
>>
>> All done for compatibility reasons.
>>
>> And potentially a PITA.
>
> The word "potentially" is doing a whole lot of work, there.
>
> Once y'all use a flat 64-bit address space and APIs, the existing and
> hybrid memory management compatibility-focused design starts to smell
> vaguely of TKB.
>
> The lack of a clear migration path to flat 64-bit addressing and 64-bit
> apps and tools, and 64-bit APIs was and remains among the most troubling
> parts.

I think that sort of info is what I was driving at. So used to flat
address spaces these days, it's difficult to imagine anything else, but
it does look like VMS has plenty of space to play with, even if
segmented in some way...

Chris

Arne Vajhøj

unread,
May 15, 2022, 7:15:10 PM5/15/22
to
But a fun hypothetical question:

Let us assume:
* VSI got VMS running on Superdome Flex
* no VMS customers had any specific hardware requirements
* all VMS customers were happy with a PaaS (time sharing)
solution

(not the case, but let us for fun assume it was)

How many Superdome Flex servers (32 sockets, 896 cores, 48 TB) would
it take to run all VMS production systems in existence?

Arne


Arne Vajhøj

unread,
May 15, 2022, 7:23:45 PM5/15/22
to
I don't like the term segmented as opposed to flat to
describe the VMS memory either.

If you use a language that supports 64 bit pointers
and you enable it then your program is accessing
a 64 bit flat address space.

But there are a gazillion API's that are expecting
a 32 bit pointer and if you want to call such an API
then your data better be addressable with a 32 bit
pointer or the data will need to be copied first.
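
A minimal sketch of that copy-first pattern (illustrative only, and
assuming a CC/POINTER_SIZE=64 compilation; old_api() and call_old_api()
are hypothetical names, while _malloc32() is the CRTL allocator that
returns 32-bit-addressable memory):

#include <stdlib.h>
#include <string.h>

#pragma pointer_size save
#pragma pointer_size 32
typedef char *char_ptr32;
void old_api(char_ptr32 buf);       /* stand-in for a 32-bit-only API */
#pragma pointer_size restore

void call_old_api(char *data, size_t len)   /* data may live in P2 */
{
    char_ptr32 bounce = _malloc32(len); /* 32-bit addressable (P0) */
    if (bounce == 0)
        return;                         /* allocation failed */
    memcpy(bounce, data, len);          /* copy down below 2 GB */
    old_api(bounce);                    /* now safe to pass */
    free(bounce);
}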

Arne


Stephen Hoffman

unread,
May 15, 2022, 7:36:40 PM5/15/22
to
On 2022-05-15 23:15:00 +0000, chris said:

> On 05/15/22 21:32, Stephen Hoffman wrote:
>>
>> Once y'all use a flat 64-bit address space and APIs, the existing and
>> hybrid memory management compatibility-focused design starts to smell
>> vaguely of TKB.
>>
>> The lack of a clear migration path to flat 64-bit addressing and 64-bit
>> apps and tools, and 64-bit APIs was and remains among the most
>> troubling parts.
>
> I think that sort of info is what I was driving at. So used to flat
> address spaces these days, it's difficult to imagine anything else, but
> it does look like VMS has plenty of space to play with, even if
> segmented in some way...

Far too often, OpenVMS has 32-bit virtual addressing "to play with".

As I've mentioned in other threads: go try this stuff.

Write a complete 64-bit app.

Use a mix of some of the available languages.

Try using BASIC as part of your test app, for instance.

Find all the knobs and switches required to get (mostly) there.

This whole area is an accretion of old code, old apps, old APIs, and
old documentation; a compromise of compatibility.

The current hybrid 32-/64-bit memory management is a remarkable design.

Just not a very forward-looking one.

chris

unread,
May 16, 2022, 11:21:31 AM5/16/22
to
I can see why there is a need for 32 bit addressing for legacy reasons.
If app code is 32 bit, it implies that VMS has some internal 32
bit space reserved within the 64 bit total for that, probably
via run time libraries which either output 64 bit addresses, or have
some way of informing vms to put the code into a certain 64 bit space
and starting address...

Chris

Jake Hamby

unread,
May 16, 2022, 12:15:57 PM5/16/22
to
On Saturday, May 14, 2022 at 4:32:20 PM UTC-7, Stephen Hoffman wrote:
>
> OpenVMS uses what amounts to segmented virtual addressing, and the
> OpenVMS virtual address space design ties directly back to VAX. This
> at least through OpenVMS I64, and haven't checked OpenVMS x86-64 here.
>
> VAX had P0 user space where ~all VAX user apps and user data resides,
> 00000000 to 3FFFFFFF, P1 from 40000000 to 7FFFFFFF for the control
> region and CLI and baggage, and the remainder for S0 system space, and
> later S1 for more system space with VAX Extended Virtual Addressing.
> I'll here ignore VAX Extended Physical Addressing.
>
> Each of these P0, P1, S0, and S2 spaces are 30 bits of virtual addressing.
>
> Alpha uses 64-bit virtual and always has, and preserves the existing
> VAX 32-bit design with P0 and P1 at the lowest addresses
> (00000000.00000000 to 00000000.7FFFFFFF), with S0 and S1 at the highest
> (FFFFFFFF.80000000 to FFFFFFFF.FFFFFFFF, and adds P2 and S2 into the
> great huge gap between those two ranges. OpenVMS Alpha V7.0 opened up
> user access for P2 and S2 addressing.

While reading Hunter Goatley's explanation of the VMS memory map from an old article on how to write privileged code (https://hunter.goatley.com/writing-vms-privileged-code/part-i-the-fundamentals-part-1/), I realized that the size of P0 must be the explanation for why I could only malloc() up to 950MB of memory in the memtester port I did, until I switched to using LIB$GET_VM_64() and 64-bit pointers.

I was thinking of a 32-bit UNIX memory map split 50/50 between user and kernel space. Now that I have some experience with VMS long and short pointers, the only obnoxious quirk I've encountered so far is that main() wants to use short pointers for argv[]: you can use "/pointer_size=long=argv" to get around that, but then getopt() and friends only work with short argv pointers.
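
To make that concrete, here is a minimal sketch (mine, not from the
memtester port) of going through LIB$GET_VM_64 once an allocation no
longer fits in P0; it assumes CC/POINTER_SIZE=64 and the usual
low-bit-set success convention for VMS status values:

#include <stdio.h>
#include <stdlib.h>
#include <lib$routines.h>   /* declares lib$get_vm_64, lib$free_vm_64 */

int main(void)
{
    unsigned long long nbytes = 0x100000000ull; /* 4 GB won't fit in P0 */
    char *p = NULL;                   /* 64-bit under /POINTER_SIZE=64 */
    unsigned int status;

    status = lib$get_vm_64(&nbytes, (unsigned long long *) &p);
    if (!(status & 1))                /* low bit clear means failure */
    {
        fprintf(stderr, "lib$get_vm_64 failed, status %u\n", status);
        return EXIT_FAILURE;
    }
    printf("allocated at %016llp\n", p);   /* expect a P2 address */

    status = lib$free_vm_64(&nbytes, (unsigned long long *) &p);
    return (status & 1) ? EXIT_SUCCESS : EXIT_FAILURE;
}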

Regards,
Jake

Jake Hamby

unread,
May 16, 2022, 12:28:22 PM5/16/22
to
On Monday, May 16, 2022 at 8:21:31 AM UTC-7, chris wrote:
> I can see why there is a need for 32 bit addressing for legacy reasons.
> If app code is 32 bit, it implies that VMS has some internal 32
> bit space reserved within the 64 bit total for that, probably
> via run time libraries which either output 64 bit addresses, or have
> some way of informing vms to put the code into a certain 64 bit space
> and starting address...

I think the key to understanding how VMS handles 64-bit space internally is to read about 64-bit descriptors and the new routines for manipulating them (Programming Concepts Manual, Vol I: Part IV. Appendixes: Macros and Examples of 64-Bit Programming). There's also a 64-bit "item_list_64b" (ileb_64) that you can pass to system calls in place of the 32-bit "ile3".

Between 64-bit descriptors and 64-bit item lists, OpenVMS was retrofitted to support 64-bit addressing by the mid-1990s, but it has taken many years for applications and the CRTL to catch up. Having a 32-bit long and 32-bit size_t is a big part of the portability problem, but for VMS-native programs, the 64-bit support is there.

As a side note, it's fortunate that VMS has only ever run on little-endian architectures, because it's easier to expand pointers from 32 bits to 64 bits on little-endian in a backward-compatible way, since the address of the pointer stays the same regardless of the size (any extra high-order bytes are ignored). Without this, the data structures VMS uses would probably have been even trickier to retrofit.
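
A tiny sketch of that point (mine, with a hypothetical address value):
aliasing the same memory as a quadword and a longword shows why the
widening is benign on a little-endian machine, because the low-order
four bytes of a P0/P1 address sit at the same offset either way:

#include <stdio.h>

int main(void)
{
    union {
        unsigned long long q;   /* a pointer stored as 64 bits */
        unsigned int       l;   /* a 32-bit view of the same bytes */
    } u;

    u.q = 0x000000007AE27A48ull;        /* a 32-bit-addressable value */
    printf("64-bit view: %016llx\n", u.q);
    printf("32-bit view: %08x\n", u.l); /* same address, little-endian */
    return 0;
}

On a big-endian machine the 32-bit view would land on the high-order
half instead, breaking that compatibility.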

Jake

Jake Hamby

unread,
May 16, 2022, 12:41:13 PM5/16/22
to
On Monday, May 16, 2022 at 9:28:22 AM UTC-7, Jake Hamby wrote:
>
> As a side note, it's fortunate that VMS has only ever run on little-endian architectures, because it's easier to expand pointers from 32 bits to 64 bits on little-endian in a backward-compatible way, since the address of the pointer stays the same regardless of the size (any extra high-order bytes are ignored). Without this, the data structures VMS uses would probably have been even trickier to retrofit.

I forgot to mention a clever quirk of the OpenVMS calling standard that surprised me when I discovered it: all architectures pass the argument count and argument types in a register (that points to memory on VAX) on function calls, so the callee knows how many arguments were passed and how. All 32-bit params are sign-extended to 64 bits (even unsigned ints).

Therefore, any library routines can be transparently extended to take 64-bit addresses as parameters, and there are also extended versions of some POSIX/C functions that take extra parameters, which the library can detect by checking the arg count. So the OS provides a form of function overloading for all supported languages, without using name mangling.

Passing the argument info to callees in %rax is one of the few changes VSI had to make to the x86-64 ELF ABI to support VMS.
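
For illustration, a sketch of how a callee can see that count (mine;
it assumes DEC C's va_count() extension, available with the stdarg
macros on OpenVMS, which reads the caller-supplied argument count):

#include <stdio.h>
#include <stdarg.h>     /* DEC C provides va_count() here on OpenVMS */

/* Sum a variable number of ints with no sentinel and no explicit
   count: the callee asks the calling standard how many arguments
   actually arrived. */
static long sum(int first, ...)
{
    int nargs, i;
    long total = first;
    va_list ap;

    va_count(nargs);            /* argument count from the AI register */
    va_start(ap, first);
    for (i = 1; i < nargs; i++)
        total += va_arg(ap, int);
    va_end(ap);
    return total;
}

int main(void)
{
    printf("%ld\n", sum(1, 2, 3));        /* prints 6 */
    printf("%ld\n", sum(10, 20, 30, 40)); /* prints 100 */
    return 0;
}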

Regards,
Jake

Jake Hamby

unread,
May 16, 2022, 12:58:59 PM5/16/22
to
On Saturday, May 14, 2022 at 4:32:20 PM UTC-7, Stephen Hoffman wrote:
>
> OpenVMS is the only operating system I'm aware of that chose this
> hybrid addressing path; of both 32-bit and 64-bit APIs mixed within the
> same executable images. Most other platforms decided to allow 32-bit
> and 64-bit apps and processes to coexist in parallel. But not
> co-resident within an app at run-time. This transition usually with a
> decade or more to allow apps to be migrated to 64-bit APIs for those
> platforms that have decided to deprecate and remove the older and
> problematic 32-bit APIs. As an example, macOS 10.15 "Catalina" removed
> 32-bit app and API support, and completed the migration started back
> around 10.5 "Leopard"; a ~decade ago.

You're forgetting about IBM z/OS. It's the only OS I'm aware of that has chosen an even more painful hybrid addressing path than OpenVMS. :) They started with a 24-bit addressing mode, then in the 1990s expanded to 31 bits (not 32), with one bit reserved to indicate the addressing mode. When they launched 64-bit Z and z/OS in 2000, they added a 64-bit AMODE, for a total of three.

IBM mainframers talk about their address space in terms of a "line" (at 16MB) and a "bar" (from 2GB to 4GB). Even though z/OS has been 64-bit now for over 20 years, with each new release they're still announcing that some component's storage pool can now be allocated "above the bar" (in 64-bit space), or occasionally, for very old components, "above the line" (in 31-bit space).

For Linux on Z, this is all irrelevant, and s390x looks like any other 64-bit Linux. But for z/OS, which has to be binary compatible to run customer code dating as far back as the mid-1960s, they are very much aware of not just two, but three, different addressing modes that can be mixed within the same program. And it's big-endian, so the address of pointers will change depending on how wide they are. And z/OS has multiple calling standards for each addressing mode, with the older calling standards not even using a call stack but passing pointers to parameter blocks. Fun times.

Jake

Stephen Hoffman

unread,
May 16, 2022, 1:22:38 PM5/16/22
to
On 2022-05-16 15:21:25 +0000, chris said:

> I can see why there is a need for 32 bit addressing for legacy reasons.
> If app code is 32 bit, it implies that VMS has some internal 32 bit
> space reserved within the 64 bit total for that, probably via run time
> libraries which either output 64 bit addresses, or have some way of
> informing vms to put the code into a certain 64 bit space and starting
> address...

Existing and unmodified 32-bit user apps—apps expecting P0/P1/S0/S1
virtual address references—can't receive 64-bit descriptors and 64-bit
P2 and S2 virtual addresses from other user APIs and from other
third-party APIs and from system APIs.

With data flowing across the APIs in the other direction, the designs
of some existing user and third-party system APIs can't be
transparently modified to accept 64-bit virtual addresses arriving from
the calling apps.

Which in aggregate means that these areas of user and third-party and
OpenVMS executable code and data structures either have to sometimes
"pretend" that the P2 and S2 virtual address ranges don't exist, or
need parallel 32- and 64-bit APIs, etc.

The traditional imperative APIs used within most of
OpenVMS—descriptors, PQLs, itemlists, etc—are somewhat less flexible
around these changes than would be message-passing APIs for instance,
but message-passing API designs are not commonly encountered within
OpenVMS.

For the pedants reading and pointing to message-passing
(object-oriented, OO) code on OpenVMS: my reference here is to OpenVMS
system and RTL and device driver and related APIs. This outside of C++,
Java, Python, and other such code, all off doing C++ or Java or Python
or other message-passing tooling. BASIC mostly does well here too,
though that compiler remains 32-bit. BASIC is currently
non-message-passing and probably should be updated to provide
message-passing support, too. But I digress.

Stephen Hoffman

unread,
May 16, 2022, 1:54:05 PM5/16/22
to
On 2022-05-16 16:58:57 +0000, Jake Hamby said:

> On Saturday, May 14, 2022 at 4:32:20 PM UTC-7, Stephen Hoffman wrote:
>>
>> OpenVMS is the only operating system I'm aware of that chose this
>> hybrid addressing path; of both 32-bit and 64-bit APIs mixed within the
>> same executable images. Most other platforms decided to allow 32-bit
>> and 64-bit apps and processes to coexist in parallel. But not
>> co-resident within an app at run-time. This transition usually with a
>> decade or more to allow apps to be migrated to 64-bit APIs for those
>> platforms that have decided to deprecate and remove the older and
>> problematic 32-bit APIs. As an example, macOS 10.15 "Catalina" removed
>> 32-bit app and API support, and completed the migration started back
>> around 10.5 "Leopard"; a ~decade ago.
>
> You're forgetting about IBM z/OS.

I try. 😉

Though I do well recall a brace of
dapper-jumpsuit-over-dapper-three-piece-suit-wearing technicians with
their ever-dapper briefcases yelling at each other in the server room.

And escorting the customer expectations management representatives out
of the office, with a request that IBM let me know when the service
tech and the parts would arrive, and to not show up before then as we
didn't need somebody sitting in the lobby reading magazines and I
wasn't going to delegate a developer or an operator to watch over the
IBM rep due to visitor security policies.

Oh, and that a pocket comb worked as well as the official IBM mainframe
chassis-accessing tool. 🤫

That site was not the usual IBM customer.

In the previous millennium, DEC spent a whole lot of time and focus
chasing after IBM from below (e.g. VAX 9000), and missed the rest of
the market "sneaking up" on DEC.

As for upward compatibility, sure, IBM has long had a better story. IBM
was also front and center in most of computing into the 1990s, with z
and 36/38/i series and particularly then with x series and the 5150,
and that all until their x series business was sold off. Now? IBM? Not
so central. In various ways, much like OpenVMS. For the existing IBM
installed base, z series sure, and 36/38/i series, while the future of
i and p series hardware looks way too much like Alpha or Itanium for my
preferences. The last posting here from a confused MVS user was some
time ago, too.

TL;DR: Whether IBM did something similar to the current mixture of 32-
and 64-bit APIs and segmented addressing and such found on OpenVMS, I
don't know.

Simon Clubley

unread,
May 16, 2022, 2:24:02 PM5/16/22
to
On 2022-05-15, Stephen Hoffman <seao...@hoffmanlabs.invalid> wrote:
>
> Once y'all use a flat 64-bit address space and APIs, the existing and
> hybrid memory management compatibility-focused design starts to smell
> vaguely of TKB.
>

The VMS 64-bit design comes across as a technical debt situation where,
given the nature of VMS application code, the "easier" approach was taken
initially at the expense of long-term maintenance and additional coding
work to deal with that decision.

Simon.

--
Simon Clubley, clubley@remove_me.eisner.decus.org-Earth.UFP
Walking destinations on a map are further away than they appear.

Arne Vajhøj

unread,
May 16, 2022, 2:51:17 PM5/16/22
to
On 5/16/2022 11:21 AM, chris wrote:
> On 05/16/22 00:23, Arne Vajhøj wrote:
>> On 5/15/2022 7:15 PM, chris wrote:
>>> I think that sort of info is what I was driving at. So used to flat
>>> address spaces these days, it's difficult to imagine anything else, but
>>> it does look like VMS has plenty of space to play with, even if
>>> segmented in some way...
>>
>> I don't like the term segmented as opposed to flat to
>> describe the VMS memory either.
>
> I can see why there is a need for 32 bit addressing for legacy reasons.
> If app code is 32 bit, it implies that VMS has some internal 32
> bit space reserved within the 64 bit total for that, probably
> via run time libraries which either output 64 bit addresses, or have
> some way of informing vms to put the code into a certain 64 bit space
> and starting address...

Everything is in the same 64 bit address space.

It is just that some things can be addressed with 32 bit pointers
and other things cannot.

Demo:

$ type ptrfun.c
#include <stdio.h>
#include <stdlib.h>

#pragma pointer_size save
#pragma pointer_size 32
typedef char *char_ptr32;
#pragma pointer_size 64
typedef char *char_ptr64;
#pragma pointer_size restore

void api32(char_ptr32 a32)
{
printf("32 bit API see: %016p\n", a32);
}

void api64(char_ptr64 a64)
{
printf("64 bit API see: %016llp\n", a64);
}

int main()
{
char b[1];
printf("stack address : %016p (%d)\n", &b, sizeof(&b));
api32(b);
api64(b);
char_ptr32 p32 = _malloc32(1);
printf("32 bit pointer: %016p (%d)\n", p32, sizeof(p32));
api32(p32);
api64(p32);
char_ptr64 p64 = _malloc64(1);
printf("64 bit pointer: %016llp (%d)\n", p64, sizeof(p64));
api32(p64);
api64(p64);
for(int i = 0; i < 16; i++) _malloc64(0x10000000); // force next allocation up in memory
p64 = _malloc64(1);
printf("64 bit pointer: %016llp (%d)\n", p64, sizeof(p64));
api32(p64);
api64(p64);
return 0;
}
$ cc/pointer=64 ptrfun

api32(p64);
.........^
%CC-W-MAYLOSEDATA2, In this statement, "p64" has a larger data size than
"short pointer to char". Assignment can result in data loss.
at line number 33 in file DISK2:[ARNE]ptrfun.c;4

api32(p64);
.........^
%CC-W-MAYLOSEDATA2, In this statement, "p64" has a larger data size than
"short pointer to char". Assignment can result in data loss.
at line number 38 in file DISK2:[ARNE]ptrfun.c;4
$ link ptrfun
%LINK-W-WRNERS, compilation warnings
in module PTRFUN file DISK2:[ARNE]ptrfun.OBJ;3
$ run ptrfun
stack address : 000000007AE27A48 (4)
32 bit API see: 000000007AE27A48
64 bit API see: 000000007AE27A48
32 bit pointer: 000000000004B828 (4)
32 bit API see: 000000000004B828
64 bit API see: 000000000004B828
64 bit pointer: 0000000080000010 (8)
32 bit API see: 0000000080000010
64 bit API see: 0000000080000010
64 bit pointer: 00000001A0000030 (8)
32 bit API see: 00000000A0000030
64 bit API see: 00000001A0000030

The compiler does what it can - it gives a warning - it
can't fix the problem.

Arne

Stephen Hoffman

unread,
May 16, 2022, 3:44:38 PM5/16/22
to
On 2022-05-16 18:24:00 +0000, Simon Clubley said:

> On 2022-05-15, Stephen Hoffman <seao...@hoffmanlabs.invalid> wrote:
>>
>> Once y'all use a flat 64-bit address space and APIs, the existing and
>> hybrid memory management compatibility-focused design starts to smell
>> vaguely of TKB.
>>
>
> The VMS 64-bit design comes across as a technical debt situation where,
> given the nature of VMS application code, the "easier" approach was
> taken initially at the expense of long-term maintenance and additional
> coding work to deal with that decision.

Compatibility is always a trade-off, and the complexities inherent in
compatibility inevitably accrete.

(And you can't churn APIs and tools too quickly, as the lack of
compatibility means few or none will want to continue to develop for
and update for your platform.)

The existing hybrid segmented approach wasn't easy on OpenVMS
development, and hasn't been good for substantial app overhauls and new
app work, though the design has been vastly better for existing apps
and existing installations.

Parallel 32- and 64-bit environments would have been both more work in
some ways and easier in some ways, though would have required the
third-parties and apps and tools and APIs to migrate to 64-bit.

Leave the 32-bit apps and APIs and tools alone, save for maintenance.
Port the existing apps and APIs and tools to 64-bit as demand arises.

Following the parallel approach would likely also have deferred the
availability of 64-bit addressing features for those apps that then
required it, too.

The hybrid approach in some ways reminds me of porting code back to
VAX. That ends up being a slog, due to missing calls, missing 64-bit
support, and related. And the folks still on VAX can have less
inclination to move forward.

No good answer, here.

John Dallman

unread,
May 16, 2022, 4:34:03 PM5/16/22
to
In article <t5tq5l$5fi$1...@gioia.aioe.org>, chris-...@tridac.net (chris)
wrote:

> I can see why there is a need for 32 bit addressing for legacy
> reasons. If app code is 32 bit, it implies that VMS has some
> internal 32-bit space reserved within the 64 bit total for that,
> probably via run time libraries which either output 64 bit
> addresses, or have some way of informing vms to put the code
> into a certain 64 bit space and starting address...

This is not done in the same way that more conventional 64-bit operating
systems run 32-bit code.

The VMS APIs were all originally defined as 32-bit, naturally, but they
were defined in terms of absolute sizes, not in terms of the sizes of
language types. To get a 64-bit environment working quickly, only some of
the APIs were given 64-bit implementations, and they're all in the same
namespaces as the 32-bit APIs, so they have different names.

Since the DEC Alpha, the original 64-bit platform for VMS, simply did not
/have/ a 32-bit mode, "32-bit code" stores the low half of pointers in
memory, and sign-extends them when loading them into registers to use for
accessing memory.

Software written for 64-bit VMS expects to use different APIs to allocate
and manage memory outside the 2GB regions at the top and bottom of the
64-bit address space, and has different types for 64-bit pointers.
Porting software for more conventional 64-bit architectures to VMS can
thus present problems.

This may seem weird. VMS was the last OS written in the expectation that
a lot of programming would be in assembler, and without any planning at
design time for 64-bit addressing. It's quite sophisticated assembler,
but the OS is defined that way, rather than in terms of a high-level
language.

John

chris

unread,
May 17, 2022, 11:00:23 AM5/17/22
to
Still trying to disambiguate this in terms of what happens with 32 bit
code. You mention P0/P1/S0/S1 above, so are these segments and does
32 bit code reside in one of them ?.

As I said earlier, if you have a 32 bit app, but a 64 bit (or >32 bits),
address space, how does the system know where in memory to place that
code to run it ?. A reserved space that vms knows about, or what ?.

Vax split 4Gb into 2 segments, fwir, but looks like current vms is
doing something similar, so it can run 32 bit code...

Chris



Simon Clubley

unread,
May 17, 2022, 1:26:25 PM5/17/22
to
On 2022-05-17, chris <chris-...@tridac.net> wrote:
>
> Still trying to disambiguate this in terms of what happens with 32 bit
> code. You mention P0/P1/S0/S1 above, so are these segments and does
> 32 bit code reside in one of them ?.
>
> As I said earlier, if you have a 32 bit app, but a 64 bit (or >32 bits),
> address space, how does the system know where in memory to place that
> code to run it ?. A reserved space that vms knows about, or what ?.
>

There is no such thing as a pure 64-bit program on VMS. Unfortunately.

On 64-bit VMS, all existing programs use 32-bit pointers and APIs with
32-bit pointers. If you are prepared to modify your source code, you can
also use 64-bit pointers within the same program and call APIs that also
use 64-bit pointers.

Also, not all parts of VMS have 64-bit APIs. Parts of RMS are one example.

This is utterly unlike anything you will be used to in Linux (for example),
where the size of the pointer is abstracted away from you as a C pointer
type. In VMS, due to the original assembly language nature of VMS, pointer
sizes are directly visible in the source code.

All VMS programs, including 32-bit ones, live in the same 64-bit address
space, and it is a feature of the program and the APIs it calls whether
it can access the full 64-bit address space or not.

On VMS, there is no such thing as simply recompiling your program with
-m32 or -m64 and then calling it job (mostly) done.

Stephen Hoffman

unread,
May 17, 2022, 4:10:57 PM5/17/22
to
On 2022-05-17 15:00:14 +0000, chris said:

>
> Still trying to disambiguate this in terms of what happens with 32 bit
> code. You mention P0/P1/S0/S1 above, so are these segments and does 32
> bit code reside in one of them ?.

P0/P1/S0/S1 are the address ranges available via 32-bit sign-extended pointers.

> As I said earlier, if you have a 32 bit app, but a 64 bit (or >32
> bits), address space, how does the system know where in memory to place
> that code to run it ?. A reserved space that vms knows about, or what ?.

Hardware uses 64-bit virtual addresses throughout. APIs and ABIs use a
mixture of 32- and 64-bit pointers. 32-bit pointers are sign-extended
to 64-bit.

> Vax split 4Gb into 2 segments, fwir, but looks like current vms is
> doing something similar, so it can run 32 bit code...

VAX virtual addressing was split into P0, P1, S0, and later, with the
arrival of VAX Extended Virtual Addressing support, S1.

VAX Extended Physical Addressing arrived around the same time, and
increased the available physical address range.

OpenVMS on Alpha and on Itanium has P0, P1, S0, and S1 within the
32-bit virtual address range, and user APIs and ABIs using 32-bit
pointers and 32-bit pointer structures.

OpenVMS Alpha V7.0 and later added P2 and S2 comprising the remainder
of 64-bit space when you remove 32-bit space, and that range was then
split in half for user and system access.

OpenVMS V7.0 and later updated some 32-bit APIs and ABIs, added some
64-bit APIs and APIs, and modified some user-accessible data structures
to support 64-bit pointers not the least of which are itemlists and
string descriptors. 32-bit string descriptors are eight bytes in size.
The 64-bit descriptors... are not.

As implemented on OpenVMS V7.0 and later and on OpenVMS I64 on Itanium,
the lowest 2 GB (P0, P1) and the highest 2 GB (S0, S1) are the 32-bit
space, and the ginormous virtual address range between those two is P2
and S2.

It's the APIs and ABIs that make 64-bit more "interesting" on OpenVMS.
Most other platforms didn't try to mix pointer sizes within the same
app executables, and didn't need to have APIs and ABIs for each size,
and a bunch of switches and knobs to control it all.

For an intro, read the Guide to 64-bit addressing doc. The contents of
that doc were subsequently folded into other manuals in later versions,
and that doc was retired. But for the purposes of this discussion
however, that doc will introduce the hybrid addressing model used by
OpenVMS.

http://www0.mi.infn.it/~calcolo/OpenVMS/ssb71/6467/6467ptoc.htm

32-bit descriptor with 32-bit pointers (can reach all of P0, P1, S0,
and S1, but not P2 nor S2):

struct dsc$descriptor {
    unsigned short dsc$w_length;   /* specific to descriptor class;
                                      typically a 16-bit (unsigned) length */
    unsigned char  dsc$b_dtype;    /* data type code */
    unsigned char  dsc$b_class;    /* descriptor class code */
    char          *dsc$a_pointer;  /* address of first byte of data element */
};

64-bit descriptor with 64-bit pointers (can reach all of virtual
address space):

struct dsc64$descriptor {
    unsigned short   dsc64$w_mbo;       /* must-be-one field; overlays
                                           16-bit length field */
    unsigned char    dsc64$b_dtype;     /* data type code */
    unsigned char    dsc64$b_class;     /* descriptor class code */
    long             dsc64$l_mbmo;      /* must-be-minus-one field; overlays
                                           32-bit pointer */
    unsigned __int64 dsc64$q_length;    /* quadword length */
    char            *dsc64$pq_pointer;  /* address of first byte of data
                                           storage */
};

Some APIs and ABIs deal only with 32-bit descriptors, some with both.

Itemlists were similarly modified.

As were other user-referenced data structures, and APIs and ABIs.
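
As a follow-on sketch (using the layouts above, and assuming the _s
string-descriptor variants and constants from descrip.h), here is the
same buffer described both ways. The MBO and MBMO fields overlay the
32-bit length and pointer, which is how a callee can tell the two
layouts apart at run time:

#include <descrip.h>

static char buf[] = "hello";

/* 32-bit (8-byte) static string descriptor */
static struct dsc$descriptor_s d32 = {
    sizeof buf - 1,     /* dsc$w_length */
    DSC$K_DTYPE_T,      /* dsc$b_dtype: text */
    DSC$K_CLASS_S,      /* dsc$b_class: static */
    buf                 /* dsc$a_pointer */
};

/* 64-bit descriptor for the same data */
static struct dsc64$descriptor_s d64 = {
    1,                  /* dsc64$w_mbo: must be one */
    DSC$K_DTYPE_T,      /* dsc64$b_dtype */
    DSC$K_CLASS_S,      /* dsc64$b_class */
    -1,                 /* dsc64$l_mbmo: must be minus one */
    sizeof buf - 1,     /* dsc64$q_length */
    buf                 /* dsc64$pq_pointer */
};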

Arne Vajhøj

unread,
May 17, 2022, 4:28:39 PM5/17/22
to
> Still trying to disambiguate this in terms of what happens with 32 bit
> code. You mention P0/P1/S0/S1 above, so are these segments and does
> 32 bit code reside in one of them ?.
>
> As I said earlier, if you have a 32 bit app, but a 64 bit (or >32 bits),
> address space, how does the system know where in memory to place that
> code to run it ?. A reserved space that vms knows about, or what ?.

There is no such thing as a 32 bit application on Alpha,
Itanium or x86-64 - applications are all 64 bit (in the
meaning of 64 bit application used on other platforms).

But that 64 bit application may use:
- all 32 bit pointers
- all 64 bit pointers
- mix of 32 bit and 64 bit pointers

32 bit pointers can only access some of the
virtual address space (P0 and P1
on VMS), but can be used both with API's that
expect 32 bit pointers and API's that expect 64 bit
pointers.

64 bit pointers can access all virtual address
space (P0, P1 and P2 on VMS), but can only safely
be used with API's that expect 64 bit pointers.

I don't think where the code resides (P0 or P2) is
important for accessing data - it may matter for
function pointers, self modifying code and various
other cases though.

Arne





Stephen Hoffman

unread,
May 17, 2022, 4:42:16 PM5/17/22
to
On 2022-05-17 20:28:30 +0000, Arne Vajhøj said:

> There is no such thing as a 32 bit application on Alpha, Itanium or
> x86-64 - applications are all 64 bit (in the meaning of 64 bit
> application used on other platforms).

Other than that the default compilation and default link on OpenVMS
Alpha and OpenVMS I64 produces apps using 32-bit sign-extended
pointers, and necessitates using 32-bit ABIs and APIs, sure.

> ...
> I don't think where the code resides (P0 or P2) is important for accessing
> data - it may matter for function pointers, self modifying code and
> various other cases though.

Other than that those sign-extended 32-bit pointers not being able to
access P2 data, and apps limited to 30 bits of code and data and
somewhat less than 30 bits of stack space, sure.

Arne Vajhøj

unread,
May 17, 2022, 5:00:48 PM5/17/22
to
On 5/17/2022 4:42 PM, Stephen Hoffman wrote:
> On 2022-05-17 20:28:30 +0000, Arne Vajhøj said:
>> There is no such thing as a 32 bit application on Alpha, Itanium or
>> x86-64 - applications are all 64 bit (in the meaning of 64 bit
>> application used on other platforms).
>
> Other than that the default compilation and default link on OpenVMS
> Alpha and OpenVMS I64 produces apps using 32-bit sign-extended pointers,
> and necessitates using 32-bit ABIs and APIs, sure.

It is not what is meant by 32 bit application on other
platforms.

>> I don't think where the code resides (P0 or P2) is important for
>> accessing data - it may matter for function pointers, self modifying
>> code and various other cases though.
>
> Other than that those sign-extended 32-bit pointers not being able to
> access P2 data,

That does not relate to where the code is.

> and apps limited to 30 bits of code and data and
> somewhat less than 30 bits of stack space, sure.

I thought they had started moving code up to P2.

And data can be in P2 with some extra work.

Stack may actually be the biggest problem, as I have
never heard about any intention of introducing a
P3 for big stacks. And a lot of stack space can
be needed in massively multi-threaded apps:
10000 threads with 1 MB stacks is 10 GB of stack.
Sure, that is a bit extreme, but it is not
crazy extreme.

Arne


Arne Vajhøj

unread,
May 17, 2022, 5:11:59 PM5/17/22
to
On 5/17/2022 1:26 PM, Simon Clubley wrote:
> On 2022-05-17, chris <chris-...@tridac.net> wrote:
>> Still trying to disambiguate this in terms of what happens with 32 bit
>> code. You mention P0/P1/S0/S1 above, so are these segments and does
>> 32 bit code reside in one of them ?.
>>
>> As I said earlier, if you have a 32 bit app, but a 64 bit (or >32 bits),
>> address space, how does the system know where in memory to place that
>> code to run it ?. A reserved space that vms knows about, or what ?.
>
> There is no such thing as a pure 64-bit program on VMS. Unfortunately.

With an unusual definition of "pure 64 bit" meaning all pointers being
64 bit.

> On 64-bit VMS, all existing programs use 32-bit pointers and APIs with
> 32-bit pointers. If you are prepared to modify your source code, you can
> also use 64-bit pointers within the same program and call APIs that also
> use 64-bit pointers.

You can switch to 64 bit pointers using a compile switch for
some languages (C, Fortran).

Whether source code changes are necessary for API reasons must
depend on the type of code. Just CRTL calls should be handled
automatically without any source code changes. With VMS calls
(SYS$, LIB$ etc.) changes will likely be needed.

> This is utterly unlike anything you will be used to in Linux (for example),
> where the size of the pointer is abstracted away from you as a C pointer
> type. In VMS, due to the original assembly language nature of VMS, pointer
> sizes are directly visible in the source code.

Some compatibility decisions were made 3 decades ago that have a
long-lasting impact.

That the usage of Macro-32 for applications was a major reason
is something you have claimed many times. But there is no
proof that it was more important than the other potential reasons,
so it is speculation.

> All VMS programs, including 32-bit ones, live in the same 64-bit address
> space, and it is a feature of the program and the APIs it calls whether
> it can access the full 64-bit address space or not.
>
> On VMS, there is no such thing as simply recompiling your program with
> -m32 or -m64 and then calling it job (mostly) done.

Given that there are no 32 bit applications/CPU modes/processes,
there are obviously no -m32 and -m64.

There are /POINTER=64 and /POINTER=32 to change default pointer
size.

Which may or may not be sufficient to do a recompile and
start using P2 space for data.

Arne




Stephen Hoffman

unread,
May 17, 2022, 5:31:47 PM5/17/22
to
On 2022-05-17 21:00:39 +0000, Arne Vajhøj said:

> On 5/17/2022 4:42 PM, Stephen Hoffman wrote:
>> On 2022-05-17 20:28:30 +0000, Arne Vajhøj said:
>>> There is no such thing as a 32 bit application on Alpha, Itanium or
>>> x86-64 - applications are all 64 bit (in the meaning of 64 bit
>>> application used on other platforms).
>>
>> Other than that the default compilation and default link on OpenVMS
>> Alpha and OpenVMS I64 produces apps using 32-bit sign-extended
>> pointers, and necessitates using 32-bit ABIs and APIs, sure.
>
> It is not what is meant by 32 bit application on other platforms.

It's also not what is meant by a 64-bit application on other platforms.

>>> I don't think where the code resides (P0 or P2) is important for accessing
>>> data - it may matter for function pointers, self modifying code and
>>> various other cases though.
>>
>> Other than that those sign-extended 32-bit pointers not being able to
>> access P2 data,
>
> That does not relate to where the code is.

Correct. It refers to both code and data.

>> and apps limited to 30 bits of code and data and somewhat less than 30
>> bits of stack space, sure.
>
> I thought they had started moving code up to P2.

Code which is inaccessible to 32-bit addresses, so don't try to pass a
function pointer to that code.

Craig A. Berry

unread,
May 17, 2022, 6:05:28 PM5/17/22
to

On 5/17/22 4:11 PM, Arne Vajhøj wrote:

> You can switch to 64 bit pointers using a compile switch for
> some languages (C, Fortran).
>
> Whether source code changes are necessary for API reasons must
> depend on the type of code. Just CRTL calls should be handled
> automatically without any source code changes.

Unless your C program is using getopt, something from the exec family,
or anything else that deals with the argv and environ arrays, some (but
not all) of which can be mitigated by further specifying
/POINTER_SIZE=LONG=ARGV.

Or if you have various other CRTL routines that have no 64-bit variants.

Or you've got a LIB$INITIALIZE routine to set the DECC$ features at
start-up time; I may not be explaining this right but I think that
routine has to be referenced by a 32-bit pointer because that's where
the image activator will look for it.
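
For reference, a short DCL sketch of the pointer-size knobs mentioned
in this thread (qualifier spellings as documented for DEC C; a sketch,
so check HELP CC /POINTER_SIZE on your own system):

$ cc myprog.c                            ! default: 32-bit pointers
$ cc /pointer_size=64 myprog.c           ! long pointers, argv[] still short
$ cc /pointer_size=long=argv myprog.c    ! long pointers, argv[] widened too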

Dave Froble

unread,
May 17, 2022, 6:51:25 PM5/17/22
to
On 5/17/2022 4:42 PM, Stephen Hoffman wrote:
I'm just wondering how many VMS users actually need the 64 bit addressing?  At
least Basic, and perhaps other languages, were never modified to take advantage
of 64 bit addressing.

Sure, there can be some uses that have such a requirement. If it doesn't exist
(VAX) then VMS cannot be used in such cases.

But, for the most part, is this a serious issue, or something to despair over?

--
David Froble Tel: 724-529-0450
Dave Froble Enterprises, Inc. E-Mail: da...@tsoft-inc.com
DFE Ultralights, Inc.
170 Grimplin Road
Vanderbilt, PA 15486

chris

unread,
May 17, 2022, 7:05:07 PM5/17/22
to
Disappointing, but still doesn't answer the question. Namely, if the
code is 32 bit, how does vms know where to put that in 64 bit space ?.

The question is relevant, since code has to be loaded at an address,
usually defined at link time, unless fully position independent code
is used, which can be quite inefficient. Perhaps the run time libraries
deal with that, but not clear. Also, how do the P0, P1, S0, S1
segments relate to that ?...

Chris

Arne Vajhøj

unread,
May 17, 2022, 7:26:36 PM5/17/22
to
On 5/17/2022 6:51 PM, Dave Froble wrote:
> I'm just wondering how many VMS users actually need the 64 bit
> addressing?  At least Basic, and perhaps other languages, were never
> modified to take advantage of 64 bit addressing.
>
> Sure, there can be some uses that have such a requirement.  If it
> doesn't exist (VAX) then VMS cannot be used in such cases.
>
> But, for the most part, is this a serious issue, or something to
> despair over?

There is a reason why Alpha was made 64 bit and not 32 bit
those 3 decades ago.

There are definitely application types that need 64 bit
capability.

Databases are one case. And I believe that was the case
driving the Alpha decision all these years ago.

There are other memory hungry application types.

Some platform software: cache servers, Java EE application
servers, message queue servers etc..

Many custom scientific applications (Fortran, Python).

For custom business application (Cobol, Basic, Pascal,
Java or in some cases C) there must still be a lot of applications
that don't need it (as in "really don't need it" not just
"can live without it").

But applications tend to grow over time so more applications
will want to use the full 64 bit space as time goes by.

And if/when VSI start courting ISV's not currently on VMS to
support VMS, then the vast majority will expect good 64 bit
support.

So I would call it a medium to high severity issue that
will get a bit worse every year until fixed.

Not something VSI has to fix this year. But something VSI
should fix before 2030.

Arne




Dave Froble

unread,
May 18, 2022, 12:25:04 AM5/18/22
to
On 5/17/2022 7:26 PM, Arne Vajhøj wrote:
> On 5/17/2022 6:51 PM, Dave Froble wrote:
>> I'm just wondering how many VMS users actually need the 64 bit addressing? At
>> least Basic, and perhaps other languages, were never modified to take
>> advantage of 64 bit addressing.
>>
>> Sure, there can be some uses that have such a requirement. If it doesn't
>> exist (VAX) then VMS cannot be used in such cases.
>>
>> But, for the most part, is this a serious issue, or something to despair over?
>
> There is a reason why Alpha was made 64 bit and not 32 bit
> those 3 decades ago.
>
> There are definitely application types that need 64 bit
> capability.

I just wonder what percentage of current VMS users need 64 bit addresses.

> Databases are one case. And I believe that was the case
> driving the Alpha decision all these years ago.
>
> There are other memory hungry application types.
>
> Some platform software: cache servers, Java EE application
> servers, message queue servers etc..
>
> Many custom scientific applications (Fortran, Python).
>
> For custom business application (Cobol, Basic, Pascal,
> Java or in some cases C) there must still be a lot of applications
> that don't need it (as in "really don't need it" not just
> "can live without it").
>
> But application tend to grow over time so more applications
> will want to use the full 64 bit space as times goes by.
>
> And if/when VSI start courting ISV's not currently on VMS to
> support VMS, then the vast majority will expect good 64 bit
> support.

Agreed, it's a "check box" item, better have it.

> So I would call it a medium to high severity issue that
> by will get a bit worse every year until fixed.
>
> Not something VSI has to fix this year. But something VSI
> should fix before 2030.
>
> Arne


hb

unread,
May 18, 2022, 4:28:23 AM5/18/22
to
On 5/17/22 23:31, Stephen Hoffman wrote:
> On 2022-05-17 21:00:39 +0000, Arne Vajhøj said:
> ...
>> I thought they had started moving code up to P2.
>
> Code which is inaccessible to 32-bit addresses, so don't try to pass a
> function pointer to that code.

On x86 and IA64 function pointers are always 32 bit.

On x86, by default the linker puts code into P2; on request the linker
can move code into P0. On x86 the function pointer is a code address of
a single code instruction, which uses a 64 bit address to jump to the
"real" code. The linker creates this code instruction. On IA64, by
default the linker puts code into P0; on request, the linker can move
code into P2. On IA64 the function pointer is the address of a data
structure, a Function Descriptor (FD), which contains a 64 bit address
of the "real" code. The linker creates the FD.

hb

unread,
May 18, 2022, 5:20:22 AM5/18/22
to
On 5/18/22 00:05, Craig A. Berry wrote:
> ...
> Or you've got a LIB$INITIALIZE routine to set the DECC$ features at
> start-up time; I may not be explaining this right but I think that
> routine has to be referenced by a 32-bit pointer because that's where
> the image activator will look for it.

The image activator doesn't care about the pointer size. Only for
shareable images, the image activator calls the code in the
LIB$INITIALIZE module, which was (by request) linked into the shareable.
Its entry point was written by the linker and that's what the image
activator is looking for.

The code in LIB$INITIALIZE expects an array of 32-bit pointers in the
LIB$INITIALIZE PSECTs. They are function pointers, which are always 32
bit. This array is set up by the developer and filled with the pointers.
If the source module for setting this up was compiled with 64 bit
pointers, an array of pointers has 64 bit entries. With the function
pointers being 32 bits, the array from LIB$INITIALIZE's point of view
contains zero pointers. For the LIB$INITIALIZE code a zero pointer is
the end of the list. With 64-bit pointers, that stops you from having
more than one init routine called "from the image activator".

When the image activation is done, the main image is started with
SYS$IMGSTA, which calls the code in the LIB$INITIALIZE module, ...

That also explains, why you can debug init code of a main image, but you
can't debug the init code of a shareable image (other than you run it as
a main image).

Some people may think SYS$IMGSTA (or even LIB$INITIALIZE) is part of the
image activator. From my point of view, it is not.

This all applies to the traditional VMS image initialization with the
LIB$INITIALIZE PSECTs. On x86 the ELF .init_array sections are
supported. They contain 64-bit pointers.

FWIW, the linker on/for x86 was enhanced to include the LIB$INITIALIZE
module from STARLET.OLB, whenever it encounters a LIB$INITIALIZE PSECT
or .init_array section.
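
For reference, a hedged sketch of the traditional hookup described
above (pragma spellings from memory, so treat as a sketch; the key
point is forcing the PSECT entries back to 32 bits in a long-pointer
compilation):

#include <stdio.h>

static void my_init(void)
{
    /* runs before main(), e.g. to set DECC$ feature logicals */
    fputs("init ran\n", stdout);
}

/* Contribute a 32-bit function pointer to the LIB$INITIALIZE PSECT.
   Under /POINTER_SIZE=64 the pointer size must be forced back to 32,
   or the entry reads as a zero pointer, i.e. end of list, as noted. */
#pragma pointer_size save
#pragma pointer_size 32
#pragma extern_model save
#pragma extern_model strict_refdef "LIB$INITIALIZE" nowrt, long
extern void (* const init_vec[])(void) = { my_init };
#pragma extern_model restore
#pragma pointer_size restore

int main(void)
{
    fputs("main ran\n", stdout);
    return 0;
}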