
HPE Integrity emulator


David Turner

Aug 11, 2022, 6:48:21 PM
Does anyone here think that this is an option for people not willing or
able to move over to x86-64 yet?
An HP Integrity emulator, emulating something like an rx2800 i2, i4, or i6
(16 cores max).

I could imagine it would be useful if stuck with HP-UX or OpenVMS for
Integrity for some reason?!?

Why am I asking? Well, HPE Integrity servers are getting scarce. I have
probably purchased 80% of the ones on the market, and some companies are
buying up whatever is available.


Comments please.


David Turner


abrsvc

Aug 11, 2022, 6:55:10 PM
Since there is a performance penalty to pay when using an emulator on a system, there is likely to be no emulator that could approach Integrity performance levels with currently available hardware. I know of no emulator for Integrity systems at this time.

Dan

gah4

Aug 11, 2022, 7:02:07 PM
On Thursday, August 11, 2022 at 3:48:21 PM UTC-7, David Turner wrote:
> Does anyone here think that this is an option for people not willing or
> able to move over to x86-64 yet?

Without looking at it in much detail, it would seem to me not so good an idea.

IA-64 is specifically designed such that the instruction set optimizes the
ability of the hardware to execute instructions. All the out-of-order
hazards are solved at compile time, such that everything happens in
the right order. (Part of the reason for the complication of the design,
and especially of writing compilers for it.)
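
To make the encoding point concrete (my illustration, not part of the original post): IA-64 packs instructions into 128-bit bundles, each with a 5-bit template field telling the hardware which execution-unit types the three 41-bit instruction slots need, so much of the scheduling lives in the encoding itself. A minimal Python sketch of that split:

```python
# Sketch: splitting a 128-bit IA-64 bundle into its template and
# three 41-bit instruction slots. Layout per the Itanium architecture:
# bits 0-4 = template, bits 5-45 = slot 0, 46-86 = slot 1, 87-127 = slot 2.

def decode_bundle(bundle: int):
    """Return (template, [slot0, slot1, slot2]) for a 128-bit bundle."""
    template = bundle & 0x1F                  # 5-bit template field
    slots = [(bundle >> (5 + 41 * i)) & ((1 << 41) - 1) for i in range(3)]
    return template, slots

# A bundle with template 0x10 and three dummy slot values:
raw = 0x10 | (1 << 5) | (2 << 46) | (3 << 87)
template, slots = decode_bundle(raw)
print(template, slots)   # -> 16 [1, 2, 3]
```

The template is what encodes the compiler's scheduling decisions: it says which slots may issue in parallel and where the "stops" between instruction groups fall.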

One problem with any RISC design, and especially with IA-64, is
how it scales over time. Things that made sense with the technology
one year, might be completely wrong not so many years later. (*)

Now, the thing that has made emulation work well over the years
is that newer, faster processors are enough faster, and also more
energy efficient, to overcome the cost of emulation. It might be
that this is now true for IA-64. It does seem likely, though, that instructions
optimized for hardware are less optimized for emulation.

(*) One interesting idea from early RISC is the branch delay slot,
where one instruction is executed after the branch, while the
hardware figures out how to do the branch, and keep the pipeline
full. But as technology changed, that would have required more
and more instructions in the delay slot, inconvenient for existing
hardware, and also for compiler writers if it was done in new
hardware.
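
The delay-slot behaviour described above can be mimicked in a few lines (a toy simulator of the general idea, not any real ISA): the instruction immediately after a taken branch still executes before control transfers, because it is already in the pipeline.

```python
# Sketch: a branch delay slot. The instruction right after a taken
# branch executes anyway, then the branch takes effect.

def run(program):
    trace, pc = [], 0
    while pc < len(program):
        op, arg = program[pc]
        if op == "branch":
            trace.append(program[pc + 1][1])   # delay-slot instruction runs first
            pc = arg                           # then control transfers
        else:
            trace.append(arg)
            pc += 1
    return trace

# branch at index 1 targets index 4; "B" sits in the delay slot.
print(run([("op", "A"), ("branch", 4), ("op", "B"),
           ("op", "C"), ("op", "D")]))   # -> ['A', 'B', 'D']
```

With deeper pipelines the hardware would need several such slots filled, which is the scaling problem described above.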

Arne Vajhøj

Aug 11, 2022, 8:04:08 PM
Based on previous discussions here, no Itanium emulator
currently exists.

In theory one could be made. It should be possible to emulate any
CPU where detailed enough documentation is available.

Several posters have raised the performance issue. And even though
it is obviously easier to get similar performance out of a 1 core
@ 400-600 MHz Alpha than out of a 4/8 core @ 1.5-2.0 GHz Itanium on
a 16/24/32 core @ 3 GHz x86-64, I think it could be
done. I don't expect a non-JIT emulator to be fast enough,
but I believe a JIT emulator could be just fast enough to
be usable.

But I also suspect that developing such an emulator would be
a lot of work (read: bloody expensive). Itanium is a complex
CPU - I suspect a lot more complex than Alpha, and that means
more expensive to develop.

So the feasibility will depend on how many licenses could
be sold.

If you are really interested then you could reach out to Stromasys
and EmuVM and ask how many licenses they would need to sell
for them to be willing to do an Itanium emulator.

Honestly I doubt the numbers will work out. I expect the
vast majority of VMS I64 users to have migrated to VMS x86-64 within
5-10 years. 5-10 years may sound like a long time, but it is not
a long time if it is the timespan over which an expensive software
product will sell.

Anyway it will not cost you much to make a few phone calls
and ask people that really know instead of listening to someone
like me who is just thinking out loud.

Arne



gah4

Aug 11, 2022, 9:15:44 PM
On Thursday, August 11, 2022 at 5:04:08 PM UTC-7, Arne Vajhøj wrote:

(snip)
> But I also suspect that developing such an emulator would be
> a lot of work (read: bloody expensive). Itanium is a complex
> CPU - I suspect a lot more complex than Alpha, and that means
> more expensive to develop.

The idea was that it would be simpler than a processor figuring out
on its own how to overlap and reorder instructions. The compiler
is supposed to do that (once) instead of the processor (every time
instructions are executed).

But yes, it is a very complicated processor.

Now, it is possible that there are people who don't need such a fast
processor, but instead need a large memory. (I just noticed that
the DS10 goes up to only 2GB.)

In the Cray-1 days, I wondered why there was no machine to compile
Cray programs on, without using expensive actual Cray-1 time.

A slow IA-64 emulator might not be so hard to write, but getting
reasonable speed should be a real challenge. Especially doing anything
in parallel.


abrsvc

Aug 11, 2022, 10:01:14 PM
Realize that a system emulator is more involved than just emulating the instruction stream. The underlying hardware must be emulated as well. This may be as simple as translating an I/O stream into something that the host system can understand, or as complex as emulating the functions of a file system within a "data file". There is much involved here.
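
The "file system within a data file" idea can be sketched as follows (a minimal illustration with invented names, not any real emulator's code): the guest's disk is just an ordinary host file addressed in fixed-size blocks, and the guest OS lays its own file system out inside it.

```python
# Sketch: a guest disk emulated as a plain host file ("data file").
# The guest sees numbered 512-byte blocks; the host just seeks and reads.

BLOCK_SIZE = 512

class EmulatedDisk:
    def __init__(self, path, num_blocks):
        self.path = path
        # Pre-size the backing file so every guest block exists.
        with open(path, "wb") as f:
            f.truncate(num_blocks * BLOCK_SIZE)

    def read_block(self, lbn):
        with open(self.path, "rb") as f:
            f.seek(lbn * BLOCK_SIZE)
            return f.read(BLOCK_SIZE)

    def write_block(self, lbn, data):
        assert len(data) == BLOCK_SIZE
        with open(self.path, "r+b") as f:
            f.seek(lbn * BLOCK_SIZE)
            f.write(data)

disk = EmulatedDisk("guest_disk.img", num_blocks=16)
disk.write_block(3, b"\xaa" * BLOCK_SIZE)
print(disk.read_block(3)[:4])   # -> b'\xaa\xaa\xaa\xaa'
```

A real emulator must also mimic the controller's registers, interrupts, and timing, which is where the bulk of the work hides.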

Dan

jimc...@gmail.com

Aug 12, 2022, 3:44:05 AM
On Thursday, August 11, 2022 at 4:02:07 PM UTC-7, gah4 wrote:
> IA-64 is specifically designed such that the instruction set optimizes the
> ability of the hardware to execute instructions. All the out-of-order
> hazards are solved at compile time, such that everything happens in
> the right order. (Part of the reason for the complication of the design,
> and especially of writing compilers for it.)
>
> One problem with any RISC design, and especially with IA-64, is
> how it scales over time. Things that made sense with the technology
> one year, might be completely wrong not so many years later. (*)

IA-64 isn't a RISC design, and the problem wasn't that it "didn't scale over time"; EPIC was a flawed premise for general-purpose computing. Turns out that it is impossible to solve out-of-order hazards at compile time for most workloads involving random memory accesses -- which makes it impossible to extract significant performance benefits from VLIW architectures for the vast majority of software.

VLIW architectures are very useful for streaming workloads with no dynamic latency, and strictly ordered execution -- they're very successful in DSP and GPU applications to this day.

Bob Gezelter

Aug 12, 2022, 5:52:08 AM
On Thursday, August 11, 2022 at 6:48:21 PM UTC-4, David Turner wrote:
David,

I remember asking a similar question a ways back, with respect to the x86-64 port. The comment I received concerning a binary emulator on OVMS x86-64 was that there were features of the instruction set covered by Intel patents. If that response was correct, before doing a project like this, one would need to determine the accuracy of that statement.

Ignoring the patent issue, the instruction set is fully documented, albeit significant in size. Technically, it could be done, particularly with a scope limitation of the non-privileged instruction set. Unlike the question of the VAX, there is probably a smaller market, as recompiling the source code is a far better option.

There are those who are bound to other issues, e.g., regulated configurations, but that requires full system emulation, which has correctly been identified as a far wider scope than just user-mode execution.

- Bob Gezelter, http://www.rlgsc.com

gah4

Aug 12, 2022, 6:26:28 AM
On Thursday, August 11, 2022 at 7:01:14 PM UTC-7, abrsvc wrote:

(snip)

> Realize that a system emulator is more involved that just emulating the
> instruction stream. The underlying hardware must be emulated as well.
> This may be as simple as translating an I/O stream into something that
> the host system can understand or as complex as emulating the functions
> of a file system within a "data file". There is much involved here.

It is.

As a rough approximation, one which mostly goes back to microprogrammed
machines from the 1960s and 1970s, but which I believe also applies to
software-emulated CISC processors, emulation runs at about 1/10 the
speed. That is, about 10 instructions to emulate one, on average.

The idea behind RISC is simpler instructions, and the possibility that
more can be executed in the same time. One might hope that RISC
instructions are easier to emulate, but it isn't so obvious that the
RISC advantage still applies with emulation.

IA-64 is supposed to be able to execute 6 instructions per clock cycle.
My guess is that even the easier emulation might still cost 10 real
instructions per emulated instruction, so maybe 60 times slower.
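
The roughly ten-host-instructions-per-guest-instruction estimate follows from what a non-JIT emulator must do for every guest instruction: fetch, decode, dispatch, execute, advance. A toy sketch for a made-up three-instruction machine (not IA-64):

```python
# Sketch: the fetch/decode/dispatch/execute loop of a non-JIT emulator.
# Each guest instruction costs several host operations, which is where
# the ~10x interpretation overhead comes from.

def run(program, regs):
    pc = 0
    while pc < len(program):               # fetch
        op, a, b = program[pc]             # decode
        if op == "add":                    # dispatch ...
            regs[a] += regs[b]             # ... and execute
        elif op == "mov":
            regs[a] = b
        elif op == "jnz":
            if regs[a] != 0:
                pc = b
                continue
        pc += 1                            # advance PC
    return regs

# Count down r0 from 3, accumulating into r1:
regs = run([("mov", 0, 3), ("mov", 1, 0),
            ("add", 1, 0),                 # r1 += r0
            ("mov", 2, -1), ("add", 0, 2), # r0 -= 1
            ("jnz", 0, 2)], [0, 0, 0])
print(regs)   # -> [0, 6, -1]
```

Emulating IA-64's explicit parallelism on top of a loop like this is what pushes the multiplier well past 10x.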

And yes things like I/O all need to be emulated, but usually aren't
a big limit on execution speed. They might still take time to get
right, though.





gah4

Aug 12, 2022, 6:48:47 AM
On Friday, August 12, 2022 at 12:44:05 AM UTC-7, jimc...@gmail.com wrote:

(snip)

> IA-64 isn't a RISC design, and the problem wasn't that it "didn't scale over time";
> EPIC was a flawed premise for general-purpose computing. Turns out that it is
> impossible to solve out-of-order hazards at compile time for most workloads
> involving random memory accesses -- which makes it impossible to extract
> significant performance benefits from VLIW architectures for the vast majority of software.

Well, it isn't so easy at run-time, either. Much of my early programming was on an
IBM 360/91, which was a favorite machine for books on pipelined processors.
(And one of the few that did out-of-order retirement.)

The goal of the 360/91 was one instruction per clock cycle on normal programs,
not specifically written for it. (That is, generated by usual compilers.)
Among others, the 360/91 can prefetch on two branch paths, in addition to the
non-branch path. Keeping the pipelines full isn't so easy, and it often likely
didn't run as fast as one might have hoped.

As far as I know, no parallel processor, or pipelined processor, ever runs
as fast as its (over-optimistic) designers hoped.

But okay, memory access is always a problem. The 360/91 uses 16 way
interleaved memory, as memory access time is about 13 clock cycles.
But since you can't predict the access patterns, you don't know
how well interleaved memory works.

With cache, one hopes to have more uniform memory access times,
but yes it is not easy to predict. Yes it is not possible to solve hazards
at compile time, but it is also not possible at run time. One just does
as well as it can be done, and hopes it is good enough.

(One of the fun things about the 360/91 is imprecise interrupts.
When an interrupt occurs, the pipeline is flushed, and the address is
(usually) not the address of the source of the interrupt.)

Arne Vajhøj

Aug 12, 2022, 8:35:18 AM
On 8/12/2022 6:26 AM, gah4 wrote:
> As a rough approximation, one which mostly goes back to microprogrammed
> machines from the 1960s and 1970s, but which I believe also applies to
> software-emulated CISC processors, emulation runs at about 1/10 the
> speed. That is, about 10 instructions to emulate one, on average.
>
> The idea behind RISC is simpler instructions, and the possibility that
> more can be executed in the same time. One might hope that RISC
> instructions are easier to emulate, but it isn't so obvious that the
> RISC advantage still applies with emulation.
>
> IA-64 is supposed to be able to execute 6 instructions per clock cycle.
> My guess is that even the easier emulation might still cost 10 real
> instructions per emulated instruction, so maybe 60 times slower.

1/10th seems slightly optimistic for non-JIT emulation.

But the fastest Alpha emulators use JIT today and an IA-64
emulator would need to as well if it is to perform well.

And then we are talking closer to 1:1 instruction wise.

https://emuvm.com/support/faq/

<quote>
What is CPU server: basic, JIT1, JIT2, JIT3?

AlphaVM supports several CPU implementation back-ends. They all
implement the same Alpha CPU functionality, but in various ways.

Basic CPU is the simplest CPU implementation based on the
interpretation of Alpha instructions fetched from the memory. This CPU
server is the only CPU server available in AlphaVM-Basic.
JITx CPUs are based on the Just-In-Time compilation of Alpha code
to increase the performance.
JIT1 server compiles to byte code. Its performance is almost
double of the basic CPU.
JIT2 server compiles Alpha code to naive x86-64 code. Its
performance on most workloads is about a factor of 5 faster than the
basic CPU.
JIT3 server compiles Alpha code to naive x86-64 code. This CPU
server applies sophisticated optimization. Its performance is a factor
of 10 faster than the basic CPU.

AlphaVM-Pro is offered with JIT3 CPU. AlphaVM-Basic only supports the
basic CPU. Other CPU servers are used merely for debugging.
</quote>

Note that this is the vendor's own description - not an
independent benchmark.
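
The interpreter-versus-JIT distinction in the quoted FAQ can be illustrated with a toy (this has nothing to do with AlphaVM's actual internals): the same guest sequence is either dispatched one instruction at a time, or translated once into host code and then run directly.

```python
# Sketch: why JIT beats interpretation. The guest code is translated
# once into host code; repeated execution then skips decode/dispatch.

program = [("add", "x", 1)] * 100          # toy guest code: x += 1, 100 times

def interpret(program, x):
    for op, dst, imm in program:           # decode + dispatch on every run
        if op == "add":
            x += imm
    return x

def jit_compile(program):
    # Translate the whole guest sequence into one host function, once.
    body = "".join(f"    x += {imm}\n" for op, dst, imm in program)
    src = "def translated(x):\n" + body + "    return x\n"
    namespace = {}
    exec(src, namespace)
    return namespace["translated"]

translated = jit_compile(program)
print(interpret(program, 0), translated(0))   # -> 100 100
```

The translation cost is paid once and amortized over every subsequent execution of the same code, which is why the JIT back-ends in the FAQ are described as several times faster than the basic interpreter.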

Arne




Arne Vajhøj

Aug 12, 2022, 8:42:17 AM
I am not fully convinced that VLIW was a bad idea.

Yes - it turned out to be extremely difficult to
get N VLIW execution units to be N times as fast
as a traditional single execution unit.

But I think that is the wrong comparison.

The correct comparison is whether N VLIW
execution units are faster than N multi-core
execution units requiring multiple threads.

I suspect that may frequently be the case.

Arne

Simon Clubley

Aug 12, 2022, 9:10:40 AM
On 2022-08-11, Arne Vajhøj <ar...@vajhoej.dk> wrote:
> On 8/11/2022 6:48 PM, David Turner wrote:
>> Does anyone here think that this is an option for people not willing or
>> able to move over to x86-64 yet?
>> An HP Integrity emulator, emulating something like an rx2800 i2, i4, or i6
>> (16 cores max)
>>
>> I could imagine it would be useful if stuck with HP-UX or OpenVMS for
>> Integrity for some reason?!?
>>
>> Why am I asking? Well, HPE Integrity servers are getting scarce. I have
>> probably purchased 80% of the ones on the market and some companies are
>> buying up whatever is available
>>
>> Comments please.
>
> Based on previous discussions here, no Itanium emulator
> currently exists.
>
> In theory one could be made. It should be possible to emulate any
> CPU where detailed enough documentation is available.
>

If you think this problem is about emulating the CPU, then you don't
understand the problem.

A good chunk of the CPU emulation work has already been done in Ski,
but that's only a userland binaries emulator for Linux and would be
useless as-is for running even userland VMS binaries.

In a full system emulator, the CPU is only one small part of the
emulation. You also have to emulate all the rest of the hardware to
a good enough accuracy that VMS can't tell the difference.

_That_ is where the majority of the work lies.

A full system emulator would also need access to the firmware loaded
onto the real hardware and that is now only available under a support
contract.

A userland binaries emulator OTOH would need to be run on top of
another VMS system on a different architecture as it works by calling
the system services in the underlying VMS system when a call to a VMS
system service is made in the Itanium binary.

If you run it on Alpha, you need to emulate any system services added
to Itanium that don't exist on Alpha VMS. If you run it on x86-64 VMS,
you need to hope that all the system services available on Itanium exist
on x86-64 VMS, or you have the same problem.

In addition, VMS has a major problem that simply doesn't exist in Linux
and that is whereas the vast majority of interaction between a Linux
userland binary and Linux itself is via a nice well-defined syscall
interface, VMS binaries have a nasty habit of looking at data cells
which exist directly in the VMS process's address space.

Such data cell access would have to be recognised and emulated in such
a userland level emulator.
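
The dispatch Simon describes might be sketched like this (entirely hypothetical: no such emulator exists, and the service names and addresses below are invented stand-ins): guest system-service calls are forwarded to the host, while loads from known data-cell addresses are intercepted and synthesized rather than read from raw guest memory.

```python
# Sketch (hypothetical): dispatch inside a userland binary emulator.
# Service calls are forwarded to the host OS; loads from known guest
# "data cell" addresses are intercepted instead of read from memory.

HOST_SERVICES = {"SYS$GETTIM": lambda: 123456789}   # stand-in for host services

DATA_CELLS = {0x7FF00010: lambda: 42}               # emulated data cells

def emulate_load(address, guest_memory):
    if address in DATA_CELLS:
        # A data cell the emulator must synthesize, not read from memory.
        return DATA_CELLS[address]()
    return guest_memory.get(address, 0)

def emulate_service_call(name, host_services=HOST_SERVICES):
    if name not in host_services:
        # The Alpha-vs-Itanium service-mismatch problem described above.
        raise NotImplementedError(f"service {name} missing on host")
    return host_services[name]()

print(emulate_load(0x7FF00010, {}))        # -> 42
print(emulate_service_call("SYS$GETTIM"))  # -> 123456789
```

The hard part, per the post, is that the set of data cells to intercept is open-ended and undocumented, so any real table like `DATA_CELLS` would always be incomplete.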

In addition to this, you also have the problem of sharable images mapped
into user space during image activation. Such images would have to be
brought along from the Itanium system and run through the emulator
as well. I don't know what the licence implications of doing that would be.

In short, a userland binaries emulator would very likely be unsuitable
for anything other than simple VMS Itanium userland binaries so you are
looking at a full system emulator for running a real Itanium application
on another architecture.

Simon.

--
Simon Clubley, clubley@remove_me.eisner.decus.org-Earth.UFP
Walking destinations on a map are further away than they appear.

Arne Vajhøj

Aug 12, 2022, 9:21:25 AM
Possible.

But it is still a matter of documentation.

And unlike the IA-64 instruction set, which is pretty unique, I
would assume the hardware support is different in detail but the
same in style as in other emulators.

> A full system emulator would also need access to the firmware loaded
> onto the real hardware and that is now only available under a support
> contract.

The Alpha emulators get it from somewhere. HP(E) I presume. Anyone
doing an IA-64 emulator would need the same.

This is not a hobbyist weekend project. This would be a commercial
company deciding to invest millions of dollars.

Arne

jimc...@gmail.com

Aug 12, 2022, 11:03:57 AM
On Friday, August 12, 2022 at 5:42:17 AM UTC-7, Arne Vajhøj wrote:
> The correct comparison is whether N VLIW
> execution units are faster than N multi-core
> execution units requiring multiple threads.

For certain workloads VLIW excels -- execution patterns that don't require non-deterministic memory access, don't benefit from out-of-order execution, and require massive vectorized instructions. It's why VLIW continues to receive investment and innovation in applications like digital signal processing and graphics acceleration.

For general-purpose workloads, it is not. Itanium eventually needed multiple cores, SMT, out-of-order execution, and speculative execution in order to achieve reasonable performance -- all techniques that VLIW was intended to make unnecessary.

jimc...@gmail.com

Aug 12, 2022, 11:26:33 AM
On Friday, August 12, 2022 at 3:48:47 AM UTC-7, gah4 wrote:
> Well it isn't so easy at run-time, either. Much of my early programming was on an
> IBM 360/91, which was a favorite machine for books on pipelined processors.

It's not easy at run-time, but the 50+ years since the 360/91 was designed have shown that run-time techniques are more effective for most workloads.

> Yes it is not possible to solve hazards
> at compile time, but it is also not possible at run time. One just does
> as well as it can be done, and hopes it is good enough.

Successful hardware engineering usually doesn't come from "do the best you can with a technique and hope it's enough".

Hardware techniques to address execution hazards have always delivered more usable performance in general-purpose computing than EPIC offered -- and everything genuinely useful that came from EPIC designs (compiler innovations, large on-die caches, memory controllers, process shrinks) provided even more performance when applied to other instruction architectures.

Itanium only became usably performant by adding SMT, out-of-order execution, and speculative execution -- all of which had already pulled AMD64/x64 and other architectures ahead in pure performance, in speed-per-gate-count, as well as in thermal efficiency and power consumption.

For general-purpose computing, nearly everything of value that came from the billions of dollars poured into EPIC provided more benefit for other technologies.

Simon Clubley

Aug 12, 2022, 1:54:30 PM
On 2022-08-12, Arne Vajhøj <ar...@vajhoej.dk> wrote:
>
> And unlike the IA-64 instruction set, which is pretty unique, I
> would assume the hardware support is different in detail but the
> same in style as in other emulators.
>

Yes and no.

Emulating various standard disk drive interfaces (for example) is one
thing, but the Itanium architecture itself has its own unique hardware
infrastructure of which the CPU instruction set is just one part.

Once again, emulating the instruction set is only one task that needs
to be done in a long list of tasks before you have a viable full system
emulator.

This hardware also needs to be emulated to a level of accuracy that means
VMS can't tell the difference. That's a _lot_ of work. Just look at the
bug reports that show up here every so often for Alpha that turn out to
be an emulation problem in the Alpha emulator in use.

That's for an architecture which is very well-known and _far_ less complex
than Itanium is. It may also interest you to know that nobody has put an
Itanium emulator in QEMU even though it supports this list of architectures:

https://www.qemu.org/docs/master/system/index.html

Writing an Itanium emulator is probably not viable these days, either as
a commercial project or a hobbyist project, given the amount of effort
required to create one and the need to access restricted firmware (for
hobbyists) or the limited user base (for commercial projects).

The fact Itanium is also both complex and dead counts against it when
trying to get people interested in it for a hobbyist project.

jimc...@gmail.com

Aug 12, 2022, 5:42:32 PM
On Friday, August 12, 2022 at 10:54:30 AM UTC-7, Simon Clubley wrote:

> The fact Itanium is also both complex and dead counts against it when
> trying to get people interested in it for a hobbyist project.

At some point, I predict being complex and an infamous business failure will ensure that hobbyists build a platform emulator for Itanium :) It will be too late for the scenario David's customers need, however.

Stephen Hoffman

Aug 12, 2022, 6:15:52 PM
On 2022-08-11 22:48:16 +0000, David Turner said:

> Does anyone here think that this is an option for people not willing or
> able to move over to x86-64 yet?
>
> An HP Integrity emulator, emulating something like an rx2800 i2, i4, or
> i6 (16 cores max)

Nope. Not now, not particularly effectively, and not anytime soon.

Used Itanium server prices and availability will be a bellwether for
the success of VSI OpenVMS x86-64.

Though if somebody wants to try this:
http://iccd.et.tudelft.nl/Proceedings/2004/22310288.pdf


--
Pure Personal Opinion | HoffmanLabs LLC

Johnny Billquist

Aug 12, 2022, 6:38:31 PM
On 2022-08-12 15:10, Simon Clubley wrote:
> In addition, VMS has a major problem that simply doesn't exist in Linux
> and that is whereas the vast majority of interaction between a Linux
> userland binary and Linux itself is via a nice well-defined syscall
> interface, VMS binaries have a nasty habit of looking at data cells
> which exist directly in the VMS process's address space.
>
> Such data cell access would have to be recognised and emulated in such
> a userland level emulator.

I find that claim incredibly hard to believe. Can you give some examples
of this? Because even RSX, which is just a primitive predecessor of VMS,
does not have such behavior. Everything in the kernel is completely hidden
and out of scope for a process, and the only way to do or get to
anything is through system calls. And that is generally true of almost
any reasonable multiuser, timesharing, memory-protected operating system.

There is absolutely nothing Unix/Linux specific about this.

Heck - how would such programs even survive upgrading to a new version
of the OS, when things might move around and change internally???

Johnny

Stephen Hoffman

Aug 12, 2022, 8:21:14 PM
On 2022-08-12 22:38:28 +0000, Johnny Billquist said:

> On 2022-08-12 15:10, Simon Clubley wrote:
>> In addition, VMS has a major problem that simply doesn't exist in Linux
>> and that is whereas the vast majority of interaction between a Linux
>> userland binary and Linux itself is via a nice well-defined syscall
>> interface, VMS binaries have a nasty habit of looking at data cells
>> which exist directly in the VMS process's address space.
>>
>> Such data cell access would have to be recognised and emulated in such
>> a userland level emulator.
>
> I find that claim incredibly hard to believe. Can you give some
> examples of this?

VAX stuff that does this will reference SYS$BASE_IMAGE during the link,
and Alpha and Integrity apps will use LINK /SYSEXE to resolve these
symbols.

As one of various examples of symbols that some few apps will poke at:
CTL$A_COMMON — and there are others.

We met a few back in the era of Y2K too, where some apps were reading
directly from the kernel clock storage quadword.

> Because even RSX, which is just a primitive predecessor of VMS, does not
> have such behavior.

RSX and OpenVMS are different. (I'd have thought you'd already been
singed enough by this erroneous assumption, but here we are again.)

The four-rings UREW/URKW/etc design specifically permits developers to
allow these cross-mode access shenanigans, too. BTW: UREW wasn't
feasible on Itanium.

To make some of these cross-mode shenanigans somewhat more supportable,
OpenVMS also implements a P1 window into system space at CTL$GL_PHD,
allowing supervisor code to poke at kernel data. But I digress.


> Everything in the kernel is completely hidden and out of scope for a
> process, and the only way to do or get to anything is through system
> calls.

Nope.

gah4

Aug 12, 2022, 9:03:11 PM
On Friday, August 12, 2022 at 3:38:31 PM UTC-7, Johnny Billquist wrote:

(snip)

> I find that claim incredibly hard to believe. Can you give some examples
> of this? Because even RSX, which is just a primitive predecessor of VMS
> do not have such behavior. Everything in the kernel is completely hidden
> and out of scope for a process, and the only way to do or get to
> anything is through system calls. And that is generally true of almost
> any reasonable multiuser, timesharing, memory protected operating system.

Does timesharing mean interactive?

It might not be true for OS/360, though that is batch and was designed
before some things were known, and especially when main memory
was expensive ($1/byte, maybe more).

It mostly works at user level, as CMS does it. (That is, IBM's own
emulation of OS/360 system calls.)

One of the complications of OS/360 is that the most important
control block, the DCB, is in user space. Even more, it has some 24
bit addresses, even with 31 and 64 bit OS versions. Much fun.

John Dallman

Aug 13, 2022, 4:49:28 AM
In article <ZS2dnf0hyLpzG2j_...@supernews.com>,
dtu...@islandco.com (David Turner) wrote:

> Does anyone here think that this is an option for people not
> willing or able to move over to x86-64 yet?
> An HP Integrity emulator, emulating something like an rx2800 i2, i4,
> or i6 (16 cores max)

It would be useful, but it does not exist. Stromasys seem to be the
leading vendor of emulators - they support VAX, Alpha, PDP-11, SPARC and
PA-RISC - but they show no sign of launching an Itanium emulator. You
could always ask them about it? https://www.stromasys.com/

John

Johnny Billquist

Aug 13, 2022, 6:19:18 AM
On 2022-08-13 02:21, Stephen Hoffman wrote:
> On 2022-08-12 22:38:28 +0000, Johnny Billquist said:
>
>> On 2022-08-12 15:10, Simon Clubley wrote:
>>> In addition, VMS has a major problem that simply doesn't exist in
>>> Linux and that is whereas the vast majority of interaction between a
>>> Linux userland binary and Linux itself is via a nice well-defined
>>> syscall interface, VMS binaries have a nasty habit of looking at data
>>> cells which exist directly in the VMS process's address space.
>>>
>>> Such data cell access would have to be recognised and emulated in
>>> such a userland level emulator.
>>
>> I find that claim incredibly hard to believe. Can you give some
>> examples of this?
>
> VAX stuff that does this will reference SYS$BASE_IMAGE during the link,
> and Alpha and Integrity apps will use LINK /SYSEXE to resolve these
> symbols.
>
> As one of various examples of symbols that some few apps will poke at:
> CTL$A_COMMON — and there are others.
>
> We met a few back in the era of Y2K too, where some apps were reading
> directly from the kernel clock storage quadword.

Are such symbols then guaranteed to never move between different
versions of the OS, or how is this managed?

>> Because even RSX, which is just a primitive predecessor of VMS, does not
>> have such behavior.
>
> RSX and OpenVMS are different.  (I'd have thought you'd already been
> singed enough by this erroneous assumption, but here we are again.)

I know. :-)

> The four-rings UREW/URKW/etc design specifically permits developers to
> allow these cross-mode access shenanigans, too. BTW: UREW wasn't
> feasible on Itanium.

I know that the VAX hardware has these. I just find it weird that you
would have a design where you directly reach into the innards of the OS
without going through any system call layer.
In general it has been understood for quite some time that this is a
bad idea. Abstraction and isolation are more or less core design principles
for making things more robust and possible to change without breaking
things.

> To make some of these cross-mode shenanigans somewhat more supportable,
> OpenVMS also implements a P1 window into system space at CTL$GL_PHD,
> allowing supervisor code to poke at kernel data. But I digress.

That is digressing. Supervisor code is not normal user processes.

Well. I'm tempted to paraphrase the late Mark Crispin. RSX - a great
improvement on its successors.
(He used that about TOPS-20 and any Unix system.)

Johnny


Johnny Billquist

Aug 13, 2022, 6:28:05 AM
On 2022-08-13 03:03, gah4 wrote:
> On Friday, August 12, 2022 at 3:38:31 PM UTC-7, Johnny Billquist wrote:
>
> (snip)
>
>> I find that claim incredibly hard to believe. Can you give some examples
>> of this? Because even RSX, which is just a primitive predecessor of VMS
>> do not have such behavior. Everything in the kernel is completely hidden
>> and out of scope for a process, and the only way to do or get to
>> anything is through system calls. And that is generally true of almost
>> any reasonable multiuser, timesharing, memory protected operating system.
>
> Does timesharing mean interactive?

No. I just tried to limit myself to systems that fulfilled all those
attributes as systems where this isolation would be obvious. It was not
meant to be read that all timesharing systems are interactive, or that
all multiuser systems have memory protection, or any combination of
attributes means that all of those attributes apply or are necessary.

Unix systems, of which Linux is one, used to also not have that
isolation. In the old days, a lot of things were done by opening
/dev/kmem, and read through the kernel memory. Which then had to be done
in combination with reading the kernel symbol table in order to find out
where in kernel memory to read. This was always ugly, risky and tricky.
They obviously learned that this is no good, and got away from it. The
fact that VMS still have this is very surprising to me. I would have
thought it never had it to start with. Like I said, RSX do not. But in a
way that was easier/more obvious on a PDP-11, since it's not such a flat
address space as on the VAX. Kernel space on a PDP-11 is generally not
even possible to see from user space, and you'd have to mess things up,
and use extra resources there. On the VAX, the kernel space is always a
part of your address space, but I would expect it to normally all have
been fully protected from user space access. But now I'm being told it
actually isn't with VMS. I guess they might have been concerned about
performance, but this is a sad state and excuse.
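[The /dev/kmem pattern described above — look a symbol up in the kernel's symbol table, then read at that address through a device that exposes kernel memory — can be sketched schematically. The snippet below is a toy model: an in-memory "kernel image" and a symbol-table dict stand in for /dev/kmem and an nlist()-style lookup, and the variable names and values are invented. It also shows why the approach is fragile: any cached offset goes stale the moment a new kernel build reorders its variables.]

```python
import struct

# Toy stand-ins: on an old Unix these would be /dev/kmem and the
# kernel symbol table read with nlist() from the kernel image.
KERNEL_IMAGE = struct.pack("<III", 0xDEADBEEF, 42, 7)     # fake kernel memory
SYMBOL_TABLE = {"_magic": 0, "_nproc": 4, "_maxuprc": 8}  # name -> byte offset

def read_kernel_long(mem: bytes, symtab: dict, name: str) -> int:
    """Look the symbol up, then read the 32-bit value at that address."""
    addr = symtab[name]                             # step 1: symbol lookup
    return struct.unpack_from("<I", mem, addr)[0]   # step 2: read "kernel memory"

print(read_kernel_long(KERNEL_IMAGE, SYMBOL_TABLE, "_nproc"))  # -> 42

# The fragility: a new kernel build with the same symbols but a different
# layout silently invalidates any stale symbol table or cached offset.
NEW_KERNEL = struct.pack("<III", 0xDEADBEEF, 7, 42)
print(read_kernel_long(NEW_KERNEL, SYMBOL_TABLE, "_nproc"))  # -> 7 (wrong!)
```

[The second read returning the wrong value is exactly the failure mode that made programs linked against kernel symbols need relinking after updates.]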

Johnny

Stephen Hoffman

Aug 13, 2022, 3:46:58 PM
On 2022-08-13 10:19:14 +0000, Johnny Billquist said:

> Are such symbols then guaranteed to never move between different
> versions of the OS, or how is this managed?

Linking against the kernel can vary, whether from boot to boot, or from
patch to patch. There are some apps which resolve these references at
app startup, and others that require relinking after updates or
upgrades.

Whether anybody wanted users accessing data directly is one discussion.
That some of the kernel data was accessible from an outer mode (user,
super, etc.), which meant some developers would access it directly, is
another discussion.


> I know that the VAX hardware have these. I just find it weird that you
> would have a design where you directly reach into the innards of the OS
> without going through any system call layer.

VAX/VMS programmers can and did make substantial efforts to optimize
some VAX code.

Work to reduce or eliminate CALLS/CALLG calls, change-mode
operations, and longword offsets was popular, along with some other VAX
optimizations.

That code tuning is related to why some of us have been cleaning up
co-routine code in recent decades, why the OpenVMS Alpha C system
programming work that occurred leading up to OpenVMS Alpha V6.1 was
gnarly, and why compiler code generation can be such a joy.

There's sketchy Y2K-era timekeeping and time-drifting code around and
still in use, too. Apps that haven't been remediated to deal correctly
with daylight saving time changes, mostly.

That VAX code-optimization work has become needed less often in recent
times, particularly as the compilers now address much of it, though
there are still performance-sensitive code paths in some apps. Just not
as widespread as on VAX.

> In general it have been understood for quite some time that this is a
> bad idea. Abstraction and isolation is more or less some core designs
> for making things more robust and possible to change without breaking
> things.

Which is why I've been known to grumble about itemlists and descriptors
and related abstractions, too. Itemlists and descriptors were great for
the 1980s and 1990s, but are increasingly limiting what changes can be
made to OpenVMS APIs.

>> Hoff: To make some of these cross-mode shenanigans somewhat more
>> supportable, OpenVMS also implements a P1 window into system space at
>> CTL$GL_PHD, allowing supervisor code to poke at kernel data. But I
>> digress.
>
> That is digressing. Supervisor code is not normal user processes.

It's another of the design compromises intended to reduce or avoid
overhead. VAX/VMS had those. All operating systems have those.

TL;DR: Yes, there are outer-mode apps that read directly from
inner-mode memory.

Scott Dorsey

Aug 13, 2022, 4:54:16 PM
John Dallman <j...@cix.co.uk> wrote:
>It would be useful, but it does not exist. Stromasys seem to be the
>leading vendor of emulators - they support VAX, Alpha, PDP-11, SPARC and
>PA-RISC - but they show no sign of launching an Itanium emulator. You
>could always ask them about it? https://www.stromasys.com/

It is very, very hard to build an efficient emulator for the itanium, which
is part of why HP didn't actually realize how bad the architecture was until
they were close to having silicon on the die.

Although people in this newsgroup keep referring to itanium as a risc machine,
it's not at all a risc machine. It's a VLIW architecture where the instruction
actually sets the bits to route the data within the processor rather than just
saying what operations to perform. That is, it's basically microcode instead
of a normal operating instruction code.

This means that the actual number of possible operations that you can perform
is enormous, and a lot of the instructions themselves aren't completely
documented. You can do weird combinations of operations in one instruction,
routing an accumulator into several different parts of the alu and then picking
pieces of each of the alu outputs and putting them into another register.
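[For concreteness, the "very long instruction word" here is a 128-bit bundle: a 5-bit template field telling the dispersal hardware which execution-unit types the slots need, plus three 41-bit instruction slots. A sketch of pulling a bundle apart — the field layout is as documented in the Itanium architecture manuals, but the sample bundle value below is made up:]

```python
def decode_bundle(bundle: int):
    """Split a 128-bit IA-64 bundle into its template and three 41-bit slots.

    Layout, counting from bit 0: bits 0-4 template, bits 5-45 slot 0,
    bits 46-86 slot 1, bits 87-127 slot 2.
    """
    assert 0 <= bundle < (1 << 128)
    template = bundle & 0x1F
    slots = [(bundle >> (5 + 41 * i)) & ((1 << 41) - 1) for i in range(3)]
    return template, slots

# A made-up bundle: template 0x10 with raw slot values 1, 2 and 3.
b = 0x10 | (1 << 5) | (2 << 46) | (3 << 87)
template, slots = decode_bundle(b)
print(template, slots)  # -> 16 [1, 2, 3]
```

[The template field is what encodes which combinations of unit types (M, I, F, B, etc.) are legal in one bundle — the source of the pairing rules the compiler has to satisfy.]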

Getting the compiler to take efficient advantage of the VLIW architecture
is really, really hard, and not enough work was put into making the
Intel compiler good enough. It might have taken decades to make it good.

Anyway, because of this, either you look at the instructions that the compiler
generates and you emulate those and hope nobody runs any code that didn't
come from that compiler, or you simulate at gate level and get an emulator
that is accurate and reliable and slow as molasses.

It's a really interesting approach to building a computer, going in a very
different direction than either CISC or RISC architectures, but it relies
entirely on either very sophisticated compilers or very sophisticated assembler
programmers, and there remains a shortage of both.
--scott

--
"C'est un Nagra. C'est suisse, et tres, tres precis."

Robert A. Brooks

Aug 13, 2022, 5:11:08 PM
On 8/13/2022 4:54 PM, Scott Dorsey wrote:
> John Dallman <j...@cix.co.uk> wrote:
>> It would be useful, but it does not exist. Stromasys seem to be the
>> leading vendor of emulators - they support VAX, Alpha, PDP-11, SPARC and
>> PA-RISC - but they show no sign of launching an Itanium emulator. You
>> could always ask them about it? https://www.stromasys.com/
>
> It is very, very hard to build an efficient emulator for the itanium, which
> is part of why HP didn't actually realize how bad the architecture was until
> they were close to having silicon on the die.
>
> Although people in this newsgroup keep referring to itanium as a risc machine,
> it's not at all a risc machine. It's a VLIW architecture where the instruction
> actually sets the bits to route the data within the processor rather than just
> saying what operations to perform. That is, it's basically microcode instead
> of a normal operating instruction code.


https://en.wikipedia.org/wiki/Multiflow


--

--- Rob

Johnny Billquist

Aug 13, 2022, 5:22:57 PM
On 2022-08-13 21:46, Stephen Hoffman wrote:
> On 2022-08-13 10:19:14 +0000, Johnny Billquist said:
>
>> Are such symbols then guaranteed to never move between different
>> versions of the OS, or how is this managed?
>
> Linking against the kernel can vary, whether from boot to boot, or from
> patch to patch. There are some apps which resolve these references at
> app startup, and others that require relinking after updates or upgrades.

I see. Potential nastiness ahead there then.

> Whether anybody wanted users accessing data directly is one discussion.
> That some of the kernel data was accessible from an outer mode (user,
> super, etc) and which meant some developers would access it directly is
> another discussion.

Understood. But I guess the fact that they made it possible means
obviously some will do it.

>> I know that the VAX hardware have these. I just find it weird that you
>> would have a design where you directly reach into the innards of the
>> OS without going through any system call layer.
>
> VAX/VMS programmers can and did make substantial efforts to optimize
> some VAX code.
>
> Worked to reduce or eliminate CALLS/CALLG calls and change-mode
> operations and longword offsets was popular, along with some other VAX
> operations.

Understood. And I do remember a lot of this stuff from way back when.
It does seem that, in the quest to make things a bit more efficient,
they were willing to bend things just a bit more than I had expected.

> That code tuning is related to why some of us have been cleaning up
> co-routine code in recent decades, why the OpenVMS Alpha C system
> programming work that occurred leading up to OpenVMS Alpha V6.1 was
> gnarly, and why compiler code generation can be such a joy.

I know that there was quite some effort before VAX and Alpha were
somewhat unified. I never knew much of the details, but I see that I'm
getting some of that now.

> There's sketchy Y2K-era timekeeping and time-drifting code around and
> still in use, too. Apps that haven't been remediated to deal correctly
> with daylight saving time changes, mostly.

Meh. Tell me about it. Same mess in RSX.

> That VAX code-optimization work has become needed less often in recent
> times particularly as the compilers address much of that, though there
> are still performance-sensitive code paths in some apps. Just not as
> widespread as on VAX.

I would hope that they are working on getting rid of this stuff as they
port things.

>> In general it have been understood for quite some time that this is a
>> bad idea. Abstraction and isolation is more or less some core designs
>> for making things more robust and possible to change without breaking
>> things.
>
> Which is why I've been known to grumble about itemlists and descriptors
> and related abstractions, too. Itemlists and descriptors were great for
> the 1980s and 1990s, but are increasingly limiting what changes can be
> made to OpenVMS APIs.

Descriptors, if we talk about the kind used for strings, are not
unreasonable. But it seems a lot of the extensions to VMS over the years
have made things more complicated.
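[A string descriptor of the kind discussed here is a small fixed layout: a 16-bit length, a one-byte data type, a one-byte class, and a pointer. A sketch of packing the classic 32-bit dsc$descriptor_s in Python (the constant values DSC$K_DTYPE_T = 14 and DSC$K_CLASS_S = 1 are per the OpenVMS calling standard; the sample address is made up) also shows why the abstraction ages badly: the 16-bit length field caps any described string at 65535 bytes.]

```python
import struct

DSC_K_DTYPE_T = 14   # character string
DSC_K_CLASS_S = 1    # fixed-length ("static") descriptor

def make_descriptor(text: bytes, address: int) -> bytes:
    """Pack a classic 32-bit dsc$descriptor_s: length, dtype, class, pointer."""
    if len(text) > 0xFFFF:
        raise ValueError("descriptor length field is only 16 bits")
    return struct.pack("<HBBI", len(text), DSC_K_DTYPE_T, DSC_K_CLASS_S, address)

d = make_descriptor(b"HELLO", 0x7FF00000)   # made-up P1-space-ish address
print(len(d), struct.unpack("<HBBI", d))    # -> 8 (5, 14, 1, 2146435072)
```

[Every field width is baked into decades of compiled callers, which is exactly the "increasingly limiting" problem with evolving the APIs.]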

>>> Hoff: To make some of these cross-mode shenanigans somewhat more
>>> supportable, OpenVMS also implements a P1 window into system space at
>>> CTL$GL_PHD, allowing supervisor code to poke at kernel data. But I
>>> digress.
>>
>> That is digressing. Supervisor code is not normal user processes.
>
> It's another of design compromises intended to reduce or avoid overhead.
> VAX/VMS had those. All operating systems have those.

Code running in different modes to provide services and/or libraries not
exactly in user space is definitely common. And I do give such code more
leeway, since it commonly ships with the OS itself and, as such, is in
sync with the other internal bits, or else has other APIs used
internally, for which other rules apply anyway.

> TL;DR: Yes, there are outer-mode apps that read directly from inner-mode
> memory.

Check. That's the thing that surprised me. Especially since that's not
happening in RSX, unless you have a privileged program which is mapped
to the kernel. But such a program is already not very normal anyway, and
not something any normal user can write or run (well, of course they can
write it, but they can't actually run it.)

Johnny

Johnny Billquist

Aug 13, 2022, 5:35:11 PM
On 2022-08-13 22:54, Scott Dorsey wrote:
> Although people in this newsgroup keep referring to itanium as a risc machine,
> it's not at all a risc machine. It's a VLIW architecture where the instruction
> actually sets the bits to route the data within the processor rather than just
> saying what operations to perform. That is, it's basically microcode instead
> of a normal operating instruction code.

I wouldn't agree with that. Yes, it's not really RISC, and yes, it's
most definitely VLIW.
However, you have a clear set of defined opcodes, with arguments and
all that stuff, no different from any other processor. It's just that
because of the long word, you stuff multiple instructions into one word,
and then you get to all the rules about which instructions can actually
be combined in one word, since you do not have enough execution units to
perform the operations of all the instructions in one word in parallel.
This is where scheduling comes in; with VLIW, it was thought that the
compiler could work this out, reorder code, and come up with the optimal
ordering and combination of things to do to maximize the utilization of
the execution units.

As opposed to the Alpha, for example, which instead can dynamically
reorder instructions to keep all execution units busy.

The Alpha is thus more complex in the silicon, since the rescheduling
and resource allocation, along with making it behave correctly, is
pretty complex. On the other hand, the compiler doesn't really have to
be so clever.

And it turned out that working this out statically isn't just a bit too
complex in the generic case; it isn't even really possible when the
work to do is unknown at compile time.
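[Schematically, the compiler's job described above can be sketched as a toy list scheduler: pack operations into fixed-width bundles, where an op may only issue after everything it depends on has issued in an earlier bundle. This is a sketch of the idea only, not of any real compiler; the bundle width of 3 and the op names are invented.]

```python
def schedule(ops, deps, width=3):
    """Greedy static scheduling of ops into bundles of at most `width` slots.

    `deps` maps an op to the set of ops it must follow. An op may enter
    the current bundle only if all its dependencies sit in earlier bundles.
    """
    done, bundles, pending = set(), [], list(ops)
    while pending:
        bundle = [op for op in pending if deps.get(op, set()) <= done][:width]
        if not bundle:
            raise ValueError("dependency cycle")
        bundles.append(bundle)
        done |= set(bundle)
        pending = [op for op in pending if op not in done]
    return bundles

ops = ["load a", "load b", "add", "store", "inc i", "cmp i"]
deps = {"add": {"load a", "load b"}, "store": {"add"}, "cmp i": {"inc i"}}
print(schedule(ops, deps))
# -> [['load a', 'load b', 'inc i'], ['add', 'cmp i'], ['store']]
```

[The scheduler only works because every dependency (and, in a real compiler, every latency) is known at compile time; a load that misses the cache or an indirect branch leaves the static schedule with nothing to fill the slots with, which is where dynamic reordering wins.]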

The dynamic rescheduling deals with this much better. In addition, with
VLIW you run into the same problem some other RISC CPUs exposed, where
things like the delayed branch slot, while considered a great idea at
one point, became one of the worst Achilles' heels of SPARC later on,
since every implementation had to reproduce that same behavior, even
when it was no longer needed.

VLIW is also bad in that if you want to add more execution units, and
more instructions per word, you just can't. You are locking yourself
into the current design limits, based on current technology, making
future development very hard.

It's just a dead end, except for more specialized problems, where it
works well. What Alpha did was actually the right thing. But that whole
question is moot now. We have x86, which has had so many resources
poured into it that it's hard to displace. ARM seems to be the only
realistic alternative still around. ARM, on the other hand, can
potentially benefit from at least some of the same solutions that Alpha
had.

Johnny

gah4

Aug 13, 2022, 6:09:46 PM
On Saturday, August 13, 2022 at 2:35:11 PM UTC-7, Johnny Billquist wrote:

(snip)

> It's just a dead end, except for more specialized problems, where it
> works well. What Alpha did was actually the right thing. But that whole
> thing is moot now. We have x86, which have been poured so much resources
> on that it's hard to displace. ARM seems to be the only realistic
> alternative still around. ARM on the other hand, can potentially benefit
> from at least some of the same solutions that Alpha had.

I believe RISC-V is on its way to a realistic alternative, though
maybe not there yet.




Johnny Billquist

Aug 13, 2022, 6:33:56 PM
By the way, since people asked about IA64 emulators, and the general
belief that they don't exist and are too difficult to do.

They do exist, and have for a long time. It's not that complex from this
point of view, but of course, performance is probably nowhere near where
anyone would actually want to use it for production.

See: http://www.irisa.fr/caps/projects/ArchiCompil/iato/

Last updated in 2004. But that is how they developed all the tooling and
so on before they had actual hardware.

And to correct myself and others a little: IA64 isn't really just a VLIW
machine. It also incorporates EPIC, which is sort of an attempt to
figure out semi-dynamically which bundles of instructions can be
parallelized.
See: https://en.wikipedia.org/wiki/Explicitly_parallel_instruction_computing

It was still crap though.

Johnny

Dave Froble

Aug 13, 2022, 11:27:58 PM
I seem to recall that at some point HP engineers tried to tell management that
VLIW was a bad idea, and another path (perhaps Alpha which they then had) should
be taken. HP management would not hear of it. Don't remember when this was.

--
David Froble Tel: 724-529-0450
Dave Froble Enterprises, Inc. E-Mail: da...@tsoft-inc.com
DFE Ultralights, Inc.
170 Grimplin Road
Vanderbilt, PA 15486

John Dallman

Aug 14, 2022, 5:02:41 AM
In article <td98si$cvh$1...@news.misty.com>, b...@softjar.se (Johnny
Billquist) wrote:

> By the way, since people asked about IA64 emulators, and the
> general belief that they don't exist and are too difficult to do.
>
> They do exist, and have for a long time. It's not that complex from
> this point of view, but of course, performance is probably nowhere
> near where anyone would actually want to use it for production.
>
> See: http://www.irisa.fr/caps/projects/ArchiCompil/iato/
>
> Last updated in 2004. But that is how they developed all the
> tooling and so on before they had actual hardware.

Is that page still up? I can't access it.

In 1999, when trying to port software to Windows Itanium, I had a copy of
Intel's emulator for Windows. It was ... slow. Too slow to actually be
useful for software development, never mind production. Part of this was
because it ran on 32-bit x86. It could have run faster on Alpha, but
Intel said "they couldn't do that, could they?"

Intel thought of it as the fast simulator, because it didn't do
gate-level emulation. Heaven knows how slow that was. One of the early
indicators of problems with the project was their answer which I asked if
the emulator was generated from the formal model of the processor. They
didn't understand the question.

John

Johnny Billquist

Aug 14, 2022, 6:10:05 AM
On 2022-08-14 11:02, John Dallman wrote:
> In article <td98si$cvh$1...@news.misty.com>, b...@softjar.se (Johnny
> Billquist) wrote:
>
>> By the way, since people asked about IA64 emulators, and the
>> general belief that they don't exist and are too difficult to do.
>>
>> They do exist, and have for a long time. It's not that complex from
>> this point of view, but of course, performance is probably nowhere
>> near where anyone would actually want to use it for production.
>>
>> See: http://www.irisa.fr/caps/projects/ArchiCompil/iato/
>>
>> Last updated in 2004. But that is how they developed all the
>> tooling and so on before they had actual hardware.
>
> Is that page still up? I can't access it.

It works for me. No idea what the problem might be for you.

> In 1999, when trying to port software to Windows Itanium, I had a copy of
> Intel's emulator for Windows. It was ... slow. Too slow to actually be
> useful for software development, never mind production. Part of this was
> because it ran on 32-bit x86. It could have run faster on Alpha, but
> Intel said "they couldn't do that, could they?"

:-)
But I think performance wouldn't exactly have been great on an Alpha
either. Better, but not useful.

> Intel thought of it as the fast simulator, because it didn't do
> gate-level emulation. Heaven knows how slow that was. One of the early
> indicators of problems with the project was their answer which I asked if
> the emulator was generated from the formal model of the processor. They
> didn't understand the question.

I would sort of have expected that they'd know, and would have answered
"no". But not even understanding the question is a bad sign indeed. I
wonder if they even had a formal model.

Johnny


Jan-Erik Söderholm

Aug 14, 2022, 8:28:39 AM
Den 2022-08-14 kl. 12:10, skrev Johnny Billquist:
> On 2022-08-14 11:02, John Dallman wrote:
>> In article <td98si$cvh$1...@news.misty.com>, b...@softjar.se (Johnny
>> Billquist) wrote:
>>
>>> By the way, since people asked about IA64 emulators, and the
>>> general belief that they don't exist and are too difficult to do.
>>>
>>> They do exist, and have for a long time. It's not that complex from
>>> this point of view, but of course, performance is probably nowhere
>>> near where anyone would actually want to use it for production.
>>>
>>> See: http://www.irisa.fr/caps/projects/ArchiCompil/iato/
>>>
>>> Last updated in 2004. But that is how they developed all the
>>> tooling and so on before they had actual hardware.
>>
>> Is that page still up? I can't access it.
>
> It works for me. No idea what the problem might be for you.

Doesn't work for me. Gives "www.irisa.fr doesn't respond".

First hit when googling "irisa" is www.irisa.fr, but doesn't work either.

Johnny Billquist

Aug 14, 2022, 9:02:21 AM
Seems to have stopped working for me as well now.
I got the link from the Itanium wikipedia page.

Well, there is always the wayback machine (those people should really
get some kudos...)

https://web.archive.org/web/20220410003719/http://www.irisa.fr/caps/projects/ArchiCompil/iato/

Johnny

plugh

Aug 14, 2022, 9:35:02 AM
On Thursday, August 11, 2022 at 3:48:21 PM UTC-7, David Turner wrote:
> Does anyone here think that this is an option for people not willing or
> able to move over to x86-64 yet?
> An HP Integrity emulator, emulating something like an rx2800 i2 i4 or i6
> (16 cores max)
>
> I could imagine it would be useful if stuck with HP-UX or OpenVMS for
> Integrity for some reason?!?
>
> Why am I asking? Well, HPE Integrity servers are getting scarce. I have
> probably purchased 80% of the ones on the market and some companies are
> buying up whatever is available
>
>
> Comments please.
>
>
> David Turner

Based on a review of object code generated for this machine by a certain C compiler, I'd say you need only one instruction: NOP


John Dallman

Aug 14, 2022, 10:26:06 AM
In article <td9q3r$32ah0$1...@dont-email.me>, da...@tsoft-inc.com (Dave
Froble) wrote:

> I seem to recall that at some point HP engineers tried to tell
> management that VLIW was a bad idea, and another path (perhaps
> Alpha which they then had) should be taken. HP management would
> not hear of it. Don't remember when this was.

That's consistent with HP management's behaviour in 2002-04, when it was
becoming clear that (a) Intel's plan to replace x86 with Itanium had been
wrecked by AMD's x86-64 and (b) making Windows and HP-UX software run
fast on Itanium was quite hard. At this point, HP made a lot of noise
about how they were "Betting the company on Itanium" and quite a few
companies felt they needed to become less reliant on HP.

Later on, an HP person said "You're biased against Itanium!" and our
chief of operations responded "We think of ourselves as well-informed."

John

David Turner

Aug 14, 2022, 2:20:03 PM
I am still convinced that running HP-UX on an Itanium emulator, without
messing with code, applications etc., would be a better option than
trying to port to another Unix-like OS.
Perhaps not so for OpenVMS. But on the other hand, there are many
companies out there just using OpenVMS; their app vendors have either
gone out of business or stopped supporting OpenVMS altogether on ANY
platform. An emulator with decent performance would be cheaper than the
many hundreds of thousands of dollars it costs to port to a new OS. And
yes, from the people I have talked to, there is nothing cheap about any
work done in the OpenVMS market.
A $10K emulator that performs efficiently and fast, would still be
cheaper than going with any unnecessary hardware or OS upgrades. I think
AlphaVM-Pro VTALpha and Cahron-Alpha have all proven that fact.

DT

Arne Vajhøj

Aug 14, 2022, 4:49:06 PM
On 8/14/2022 2:19 PM, David Turner wrote:
> I am still convinced that running HP_UX on an Itanium emulator, not
> messing with code, applications etc, would be a better option than
> trying to port to another Unix-like OS.
> Perhaps not so for OpenVMS. But on the other hand, there are many
> companies out there just using OpenVMS; their app vendors have either
> gone out of business or stopped supporting OpenVMS all together on ANY
> platform. An emulator with decent performance would be better than the
> many 100,000s of dollars to port to a new OS. And yes, from the people I
> have talked to, there is nothing cheap about any work done in the
> OpenVMS market.
> A $10K emulator that performs efficiently and fast, would still be
> cheaper than going with any unnecessary hardware or OS upgrades. I think
> AlphaVM-Pro VTALpha and Cahron-Alpha have all proven that fact.

Obviously the situation for HP-UX is a lot different than
for VMS.

VMS has a company dedicated to it. VMS has been ported to x86-64.

HP-UX got neither of those. Unless HPE does something then HP-UX is
stuck on Itanium and current functionality.

But I also suspect that the typical HP-UX site is a lot easier
to migrate than the typical VMS site.

Macro-11, VMS Pascal and VMS Basic are a rewrite from scratch
on Linux. All LIB$ and SYS$ calls would need to be changed
on Linux no matter the language. Lots of VMS concepts are not 1:1
portable to Linux, including logical names and the queue system.
Rdb is not available on Linux. RMS indexed-sequential files
would (except for Cobol) require a third-party software solution
and changing the calls to use its API. There is no DCL on Linux, so all
scripts would have to be rewritten from scratch. VMS to Linux is not
easy - not impossible either, but expensive and risky.

I believe a lot of HP-UX systems are database servers running
Oracle DB, Sybase ASE etc. - and those are available on
Linux (in fact the vendors would like to see customers migrate
to Linux). Most application code would be C/C++ and Cobol
which are respectively available by default and available for
a price on Linux. Most programming concepts and system
calls would work on Linux. The shells used would be available
on Linux. HP-UX to Linux would not be trivial - definitely a
huge project - but both the risk and the cost seem significantly lower
than for VMS to Linux.
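[One of the VMS concepts listed above, logical names, illustrates the gap: unlike a Unix environment variable, a logical name can translate iteratively through further logical names, and lookups search several tables in order (process, group, system). A toy shim of that behavior follows; the table contents, the "/dev/sda" mapping, and the iteration cap are invented, and real translation via SYS$TRNLNM has far more attributes than this.]

```python
def translate(name, tables, max_depth=10):
    """Iteratively translate `name` through an ordered list of name->value
    tables, the way DCL resolves a logical name, stopping at the first miss."""
    for _ in range(max_depth):
        for table in tables:          # e.g. process, then group, then system
            if name in table:
                name = table[name]    # translated: feed the result back in
                break
        else:
            return name               # no table defines it: translation done
    raise RecursionError("logical name loop?")

process = {"SYS$DISK": "DKA0:"}                               # invented contents
system = {"DKA0:": "/dev/sda", "SYS$LOGIN": "DKA0:[SMITH]"}   # invented contents
print(translate("SYS$DISK", [process, system]))  # -> /dev/sda (two-step chain)
```

[An environment variable gives you one flat string lookup; reproducing this chained, multi-table search is the kind of small semantic mismatch that makes a VMS port more than a recompile.]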

Arne


Rich Alderson

Aug 14, 2022, 5:14:31 PM
gah4 <ga...@u.washington.edu> writes:

> It might not be true for OS/360, though that is batch and was designed
> before some things were known, and especially when main memory
> was expensive ($1/byte, maybe more).

> It mostly works at user level, as CMS does it. (That is, IBM's own
> emulation of OS/360 system calls.)

ITYM, actually IKYM DOS/360 here.

> One of the complications of OS/360 is that the most important
> control block, the DCB, is in user space. Even more, it has some 24
> bit addresses, even with 31 and 64 bit OS versions. Much fun.

--
Rich Alderson ne...@alderson.users.panix.com
Audendum est, et veritas investiganda; quam etiamsi non assequamur,
omnino tamen proprius, quam nunc sumus, ad eam perveniemus.
--Galen

Scott Dorsey

Aug 14, 2022, 6:51:36 PM
David Turner <dtu...@islandco.com> wrote:
>I am still convinced that running HP_UX on an Itanium emulator, not
>messing with code, applications etc, would be a better option than
>trying to port to another Unix-like OS.

HP-UX really is Unix. If the code is well-written, it should not be
difficult to port to any other SysV-like Unix. Realtime code excepted
perhaps.

>Perhaps not so for OpenVMS. But on the other hand, there are many
>companies out there just using OpenVMS; their app vendors have either
>gone out of business or stopped supporting OpenVMS all together on ANY
>platform. An emulator with decent performance would be better than the
>many 100,000s of dollars to port to a new OS. And yes, from the people I
>have talked to, there is nothing cheap about any work done in the
>OpenVMS market.

OpenVMS is not Unix-like, and porting OpenVMS code to Unix-like systems is
frequently problematic. Which is why x86 VMS is such a great idea. In
most cases this involves a complete rewrite rather than a port.

>A $10K emulator that performs efficiently and fast, would still be
>cheaper than going with any unnecessary hardware or OS upgrades. I think
>AlphaVM-Pro VTALpha and Cahron-Alpha have all proven that fact.

I don't think an IA64 emulator that performs efficiently and fast is even
feasible. Making it reliable is still more difficult. It's not like
emulating a normal architecture like Alpha.

abrsvc

Aug 14, 2022, 7:01:11 PM

> A $10K emulator that performs efficiently and fast, would still be
> cheaper than going with any unnecessary hardware or OS upgrades. I think
> AlphaVM-Pro VTALpha and Cahron-Alpha have all proven that fact.
>
> DT

Not to be picky, but the Stromasys product is called Charon/AXP.

Dan

(Currently working for Stromasys)

John Dallman

Aug 14, 2022, 7:14:34 PM
In article <tdbn3s$3v3$1...@gioia.aioe.org>, ar...@vajhoej.dk (Arne Vajhøj)
wrote:

> Unless HPE does something then HP-UX is stuck on Itanium and
> current functionality.

I've been watching for that for years. There were rumours during the
HP-Oracle lawsuit that HP had investigated porting HP-UX to x86-64, but
nothing came of them. HP has been running Linux for years on its high-end
"Superdome" x86-64 systems. They haven't said anything to indicate that
HP-UX will have a life after the end of Itanium support in 2025 AFAIK.

> But I also suspect that the typical HP-UX site is a lot easier
> to migrate than the typical VMS site.

You're right. HP-UX has a few quirks of its own, but it isn't
fundamentally hard to port from it to Linux.

John

Simon Clubley

Aug 15, 2022, 1:28:20 PM
On 2022-08-12, Johnny Billquist <b...@softjar.se> wrote:
> On 2022-08-12 15:10, Simon Clubley wrote:
>> In addition, VMS has a major problem that simply doesn't exist in Linux
>> and that is whereas the vast majority of interaction between a Linux
>> userland binary and Linux itself is via a nice well-defined syscall
>> interface, VMS binaries have a nasty habit of looking at data cells
>> which exist directly in the VMS process's address space.
>>
>> Such data cell access would have to be recognised and emulated in such
>> a userland level emulator.
>
> I find that claim incredibly hard to believe. Can you give some examples
> of this? Because even RSX, which is just a primitive predecessor of VMS
> do not have such behavior. Everything in the kernel is completely hidden
> and out of scope for a process, and the only way to do or get to
> anything is through system calls. And that is generally true of almost
> any reasonable multiuser, timesharing, memory protected operating system.
>

As you now know Johnny, you were (once again) very very wrong to try
and compare the two. :-)

BTW, the fact you immediately switched to talking about kernel mode,
makes me wonder if you are even aware of P1 space in a VMS process.

> There is absolutely nothing Unix/Linux specific about this.
>

Oh yes there is.

Unix/Linux sets up a new process to run an image and then deletes
it immediately afterwards and has no such thing as P1 space.

A normal VMS session only ever has one process that is reused over
and over again to run programs (unless you choose to start a subprocess
for some reason.)

Simon.

--
Simon Clubley, clubley@remove_me.eisner.decus.org-Earth.UFP
Walking destinations on a map are further away than they appear.

Simon Clubley

Aug 15, 2022, 1:37:28 PM
I keep looking at RISC-V. I will become _much_ more interested when
you can get a RISC-V board at Raspberry Pi or BeagleBone Black prices
and with the capabilities of those boards.

Once it reaches that level, that's when it is _really_ going to take
off (IMHO), but it's not there yet.

As with the ARM stuff, you need that price/functionality point to get
enough people to start playing with them to build a critical mass of
interested people.

Simon Clubley

Aug 15, 2022, 1:47:23 PM
On 2022-08-13, Johnny Billquist <b...@softjar.se> wrote:
> By the way, since people asked about IA64 emulators, and the general
> belief that they don't exist and are too difficult to do.
>
> They do exist, and have for a long time. It's not that complex from this
> point of view, but of course, performance is probably nowhere near where
> anyone would actually want to use it for production.
>
> See: http://www.irisa.fr/caps/projects/ArchiCompil/iato/
>
> Last updated in 2004. But that is how they developed all the tooling and
> so on before they had actual hardware.
>

I've just had a quick look at this. This emulator is no good for VMS.

From the documentation:

|The IATO environment operates directly with ELF binary executables. As of
|release 1.0, fully static binary executables are only supported. In the
|presence of dynamically linked executables, the IATO clients reports an
|error and terminates. The best method to check for a file type is to use
|the file command

It's an user-level binary emulator only and it doesn't even support dynamic
binaries.

Also:

|3.4 Kernel emulation library
|The kernel (KRN) library is a set of classes that handles Linux system
|calls. Systems calls are vectored traps sent by the program. They are
|caught by the emulator or the simulator and routed to the system call
|handler. The Syscall class encapsulates all Linux system calls. Note that a
|system call argument mapping procedure is also included into this library.

And it only supports Linux syscalls.

Dave Froble

Aug 15, 2022, 2:06:23 PM
Can't everybody just let the itanic boat anchor sink quietly into the mud, never
to be seen again?

I'm also a bit surprised by David's question. I was under the impression that
there were many discarded itanics, available rather cheap. What has changed?

The one I have cost exactly $0, and I rarely run it.

Simon Clubley

Aug 15, 2022, 2:06:44 PM
On 2022-08-13, Johnny Billquist <b...@softjar.se> wrote:
> On 2022-08-13 02:21, Stephen Hoffman wrote:
>> To make some of these cross-mode shenanigans somewhat more supportable,
>> OpenVMS also implements a P1 window into system space at CTL$GL_PHD,
>> allowing supervisor code to poke at kernel data. But I digress.
>
> That is digressing. Supervisor code is not normal user processes.
>

On VMS, there is no such thing as a normal user process.

There is one process that at various times during its lifecycle
executes a mixture of code running in all four modes (KESU).

As such, the supervisor mode code and data structures are part of
the same address space as the user-mode programs. It's just that
most of it is not directly accessible to user-mode programs due
to page protections.

My opinions about whether I think this is a good idea these days
have already been discussed at length. :-)

BTW, are you aware that on VMS, a normal user program can execute
a function within that same program in kernel mode provided it has
sufficient privileges ?

I don't mean jump into the kernel address space, but to actually
execute a function within the program with kernel-mode access.

Simon Clubley

Aug 15, 2022, 2:16:15 PM
On 2022-08-15, Dave Froble <da...@tsoft-inc.com> wrote:
>
> Can't everybody just let the itanic boat anchor sink quietly into the mud, never
> to be seen again?
>

It's getting there, but there's still the legacy installed base.
A legacy installed base which has permanent licences BTW, so things
like that are going to factor into various decisions.

BTW, over a couple of days (I think it was a weekend :-)) I had a look
at what would be involved in writing a full-system Itanium emulator.

At the end of those couple of days, I had come to the conclusion that
I would be more likely to succeed with doing something less insane
such as writing a modern web browser by myself. :-)

IOW, writing an Itanium full-system emulator would be a major undertaking.

Simon Clubley

Aug 15, 2022, 2:19:30 PM
On 2022-08-14, Arne Vajhøj <ar...@vajhoej.dk> wrote:
>
> Macro-11, VMS Pascal and VMS Basic are a rewrite from scratch
> on Linux. All LIB$ and SYS$ calls would need to be changed
> on Linux no matter the language. Lots of VMS concepts are not 1:1
> portable to Linux including logical names and queue system.
> Rdb is not available on Linux. RMS index-sequential files
> would (except for Cobol) require a third party software solution
> and change of calls to use API of that. No DCL on Linux so all
> script would be rewrite from scratch. VMS to Linux is not easy - not
> impossible either but expensive and risky.
>

There's the third-party porting toolkits that can help with this.

Simon.

PS: BTW Arne, Macro-11 ??? :-)

Hans Bachner

Aug 15, 2022, 3:31:24 PM
abrsvc schrieb am 15.08.2022 um 01:01:
>
>> A $10K emulator that performs efficiently and fast, would still be
>> cheaper than going with any unnecessary hardware or OS upgrades. I think
>> AlphaVM-Pro VTALpha and Cahron-Alpha have all proven that fact.
>>
>> DT
>
> Not to be picky, but the Stromasys product is called Charon/AXP.

in fact, it is called CHARON-AXP :-)

> Dan
>
> (Currently working for Stromasys)

Hans.

(Stromasys partner)

Arne Vajhøj

Aug 15, 2022, 4:44:19 PM
On 8/15/2022 2:19 PM, Simon Clubley wrote:
> On 2022-08-14, Arne Vajhøj <ar...@vajhoej.dk> wrote:
>>
>> Macro-11, VMS Pascal and VMS Basic are a rewrite from scratch
>> on Linux. All LIB$ and SYS$ calls would need to be changed
>> on Linux no matter the language. Lots of VMS concepts are not 1:1
>> portable to Linux including logical names and queue system.
>> Rdb is not available on Linux. RMS index-sequential files
>> would (except for Cobol) require a third party software solution
>> and change of calls to use API of that. No DCL on Linux so all
>> script would be rewrite from scratch. VMS to Linux is not easy - not
>> impossible either but expensive and risky.
>
> There's the third-party porting toolkits that can help with this.

Yes. Sector7 etc.. But without diminishing their products I would
not expect a silver bullet.

> PS: BTW Arne, Macro-11 ??? :-)

Ooops.

Macro-32

Arne

Johnny Billquist

Aug 15, 2022, 7:34:25 PM
On 2022-08-15 20:06, Simon Clubley wrote:
> On 2022-08-13, Johnny Billquist <b...@softjar.se> wrote:
>> On 2022-08-13 02:21, Stephen Hoffman wrote:
>>> To make some of these cross-mode shenanigans somewhat more supportable,
>>> OpenVMS also implements a P1 window into system space at CTL$GL_PHD,
>>> allowing supervisor code to poke at kernel data. But I digress.
>>
>> That is digressing. Supervisor code is not normal user processes.
>>
>
> On VMS, there is no such thing as a normal user process.

Sorry. But here is where you go into nonsense. I probably should have
avoided the word "process", since the point was normal programs. Any and
every program will at least make calls into kernel mode, if nothing else,
at one point or another in its execution. But those are calls to code
that is not a part of the program. And it's done within the context of a
process. This is normal.

> There is one process that at various times during its lifecycle
> executes a mixture of code running in all four modes (KESU).

The fact that code in those different modes is invoked is irrelevant.
It's not code I wrote or compiled. Makes no difference if it's all in
kernel mode, or a mix of different modes.
It could all just as well be compressed into one mode. Makes no
difference. I know that you are constantly missing that point, and think
you've found security holes where there actually aren't any. Why don't
you actually understand this, and go hunt for other actual bugs and issues?

My program was not written to run in any other mode than user mode, and
that's what a normal program does. Sorry if I used the word "process" in
a way that confused you.

Johnny

Johnny Billquist

Aug 15, 2022, 7:40:32 PM
On 2022-08-15 19:28, Simon Clubley wrote:
> On 2022-08-12, Johnny Billquist <b...@softjar.se> wrote:
>> On 2022-08-12 15:10, Simon Clubley wrote:
>>> In addition, VMS has a major problem that simply doesn't exist in Linux
>>> and that is whereas the vast majority of interaction between a Linux
>>> userland binary and Linux itself is via a nice well-defined syscall
>>> interface, VMS binaries have a nasty habit of looking at data cells
>>> which exist directly in the VMS process's address space.
>>>
>>> Such data cell access would have to be recognised and emulated in such
>>> a userland level emulator.
>>
>> I find that claim incredibly hard to believe. Can you give some examples
>> of this? Because even RSX, which is just a primitive predecessor of VMS
>> do not have such behavior. Everything in the kernel is completely hidden
>> and out of scope for a process, and the only way to do or get to
>> anything is through system calls. And that is generally true of almost
>> any reasonable multiuser, timesharing, memory protected operating system.
>>
>
> As you now know Johnny, you were (once again) very very wrong to try
> and compare the two. :-)

There is so much more that is common between RSX and VMS, than there are
things different between them. Not sure if you know that, but anyway.
Way more than between VMS and any Unix, for instance.

> BTW, the fact you immediately switched to talking about kernel mode,
> makes me wonder if you are even aware of P1 space in a VMS process.

Yes, I'm very aware of P1 space.

>> There is absolutely nothing Unix/Linux specific about this.
>>
>
> Oh yes there is.

No there isn't. Most operating systems have a clean separation between
user code and the kernel. Including all PDP-11 OSes. It turned out that
VMS does not, which rather makes VMS the exception here, not Unix.

> Unix/Linux sets up a new process to run an image and then deletes
> it immediately afterwards and has no such thing as P1 space.

Yes.
Well, technically, P1 space is an artifact of the hardware, and as such,
P1 space exists also for Unix systems running on VAX, and possibly also
Alpha.

VMS just keeps P1 space around a bit more disconnected from the program
you might be executing.

> A normal VMS session only ever has one process that is reused over
> and over again to run programs (unless you choose to start a subprocess
> for some reason.)

Well, not really true. Every time you start a program, it gets a new
process ID, with new resources allocated in the kernel for it. Just that
P1 space is retained between them, unless I remember wrong.

Johnny

Johnny Billquist

Aug 15, 2022, 7:43:03 PM
On 2022-08-15 19:47, Simon Clubley wrote:
> On 2022-08-13, Johnny Billquist <b...@softjar.se> wrote:
>> By the way, since people asked about IA64 emulators, and the general
>> belief that they don't exist and are too difficult to do.
>>
>> They do exist, and have for a long time. It's not that complex from this
>> point of view, but of course, performance is probably nowhere near where
>> anyone would actually want to use it for production.
>>
>> See: http://www.irisa.fr/caps/projects/ArchiCompil/iato/
>>
>> Last updated in 2004. But that is how they developed all the tooling and
>> so on before they had actual hardware.
>>
>
> I've just had a quick look at this. This emulator is no good for VMS.

Never claimed it was. My point was that emulators for IA64 do exist, and
are not impossible or unobtanium, as some people suggested.

Obviously that project was not interested in VMS. Does not mean it
couldn't be done. IA64 isn't that hard to emulate, as such. But again -
performance is another question.

Johnny

Arne Vajhøj

Aug 15, 2022, 7:51:19 PM
Same process with same process id.

I would say that P0 space is not retained. But there is no
difference in substance between P0 not retained (implicit
P1 retained) and P1 retained (implicit P0 not retained).

Arne

Arne Vajhøj

Aug 15, 2022, 7:53:40 PM
On 8/15/2022 7:40 PM, Johnny Billquist wrote:
> On 2022-08-15 19:28, Simon Clubley wrote:
>> Unix/Linux sets up a new process to run an image and then deletes
>> it immediately afterwards and has no such thing as P1 space.
>
> Yes.
> Well, technically, P1 space is an artifact of the hardware, and as such,
> P1 space exists also for Unix systems running on VAX, and possibly also
> Alpha.
>
> VMS just keeps P1 space around a bit more disconnected from the program
> you might be executing.

The big difference is that DCL is living in P1 space (stack space)
while a Unix shell is living in heap space (P0 space on a VAX).

Arne

Sunset Ash

Aug 15, 2022, 9:33:13 PM
On Thursday, August 11, 2022 at 3:48:21 PM UTC-7, David Turner wrote:
> Does anyone here think that this is an option for people not willing or
> able to move over to x86-64 yet?
> An HP Integrity emulator, emulating something like an rx2800 i2 i4 or i6
> (16 cores max)
>
> I could imagine it would be useful if stuck with HP-UX or OpenVMS for
> Integrity for some reason?!?
>
> Why am I asking? Well, HPE Integrity servers are getting scarce. I have
> probably purchased 80% of the ones on the market and some companies are
> buying up whatever is available
>
>
> Comments please.
>
>
> David Turner

HPE has an Integrity emulator for running HP-UX - it's called Portable HP-UX and runs on Linux; you can request access if you have an active contract. I suspect running VMS was not of particular interest to them during development, though.

Johnny Billquist

Aug 16, 2022, 5:25:00 AM
Well. P0 isn't just heap. P0 is basically all memory that you want to
look at as either static or growing upward. So heap is one part, but
plain executable code is also in P0. P1 is static stuff as well, and
data growing downward, like a stack for example.

So yes, DCL sits in P1, while a Unix shell sits in P0 *and* P1, just as
any other binary. The Unix shell hangs around because you normally fork
and then execute something else in its place, while DCL hangs around by
sitting in P1, which is not as process-local in VMS as it is in Unix.

Johnny

Simon Clubley

Aug 16, 2022, 1:54:11 PM
On 2022-08-15, Johnny Billquist <b...@softjar.se> wrote:
> On 2022-08-15 19:28, Simon Clubley wrote:
>> Unix/Linux sets up a new process to run an image and then deletes
>> it immediately afterwards and has no such thing as P1 space.
>
> Yes.
> Well, technically, P1 space is an artifact of the hardware, and as such,
> P1 space exists also for Unix systems running on VAX, and possibly also
> Alpha.
>
> VMS just keeps P1 space around a bit more disconnected from the program
> you might be executing.
>

It's what VMS does with that address space that makes it so different
from other operating systems.

>> A normal VMS session only ever has one process that is reused over
>> and over again to run programs (unless you choose to start a subprocess
>> for some reason.)
>
> Well, not really true. Every time you start a program, it gets a new
> process ID, with new resources allocated in the kernel for it. Just that
> P1 space is retained between them, unless I remember wrong.
>

That is completely and utterly wrong. However, if you really
believe that (instead of just doing a David by trolling with
false statements :-)) it also explains your confusion, because VMS works
so differently from what you are clearly used to.

The PID does _not_ belong to the program. It belongs to the process itself.
At many times during the lifecycle of a typical VMS process, there will not
even _be_ a user-mode program loaded into the process P0 address space.

In Linux, there is no such thing as an executing process without a
user-mode program, regardless of whether that user-mode program is
a shell, a user's application program, or something else. Also, every
time the shell runs a new program, the program is run in a new and
different process.

OTOH, in VMS, having a process you can interact with, but without
having any user-mode P0 program loaded, is a perfectly normal thing.

When you ask DCL to run a program, _it_ maps the requested program
into the P0 address space, sets it up, and then calls it to start
execution of the user program.

When the user program exits, the user-mode pages used by that program,
but _only_ those user-mode pages, are removed from the process address
space, and control returns to DCL to await your next command.

There is no "new process ID, with new resources allocated in the kernel
for it". It's the same physical process that gets used over and over
again during the user's session to run different user-mode programs.

Running a user program on VMS from DCL is much more like DCL doing
a dlopen() on the user program into P0 space and then doing a call
to it, instead of the Linux/Unix approach of creating a whole new
fresh process for each program the shell wants to run.
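Simon's dlopen() analogy can be felt on a Unix box itself. The sketch below
(Python, purely for illustration - nothing here is VMS code) contrasts the
two models: loading a library into the current process and calling it keeps
the same PID, DCL-style, while the shell-style approach runs the program in
a brand-new process with a different PID.

```python
import ctypes
import ctypes.util
import os
import subprocess
import sys

# "DCL style": map code into the current process and call it directly.
# We load the C math library and call cos(); the PID never changes.
libm = ctypes.CDLL(ctypes.util.find_library("m") or None)
libm.cos.restype = ctypes.c_double
libm.cos.argtypes = [ctypes.c_double]
print("in-process call, pid", os.getpid(), "cos(0.0) =", libm.cos(0.0))

# "Unix shell style": create a whole fresh process for the program.
child = subprocess.run(
    [sys.executable, "-c", "import os; print('child pid', os.getpid())"],
    capture_output=True, text=True)
print(child.stdout.strip())  # a different PID from the parent's
```

The first call returns 1.0 from within this very process; the second prints
a PID that is guaranteed to differ from the parent's.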

_Now_ do you understand why I am describing the VMS approach in the
way I am ?

For the record, I prefer the Unix approach, but I am trying to make
you understand how the VMS approach actually works, not how you think
it works.

Simon Clubley

Aug 16, 2022, 2:01:24 PM
Actually, the _major_ difference is that on VMS, they are in the same
process. In Unix land, they are in different processes.

Also, the other major difference is that parts of P1 space are
directly accessible by a user-mode VMS program, so to get back to the
topic, such access would have to be detected and emulated in any
user-mode binaries level emulator (as opposed to a full-system emulator).

Simon Clubley

Aug 16, 2022, 2:05:24 PM
On 2022-08-15, Johnny Billquist <b...@softjar.se> wrote:
> On 2022-08-15 19:47, Simon Clubley wrote:
>>
>> I've just had a quick look at this. This emulator is no good for VMS.
>
> Never claimed it was. My point was that emulators for IA64 do exist, and
> are not impossible or unobtanium, as some people suggested.
>

I know you didn't, but it was still worth me looking at it, to see if
it could be something useful. Unfortunately, that does not appear to
be the case, as it doesn't offer anything over what Ski already does,
and Ski would be only a small part of any required full-system emulator.

Simon Clubley

Aug 16, 2022, 2:08:24 PM
On 2022-08-15, Sunset Ash <hpeint...@gmail.com> wrote:
>
> HPE has an Integrity emulator for running HP-UX - it's called Portable HP-UX and runs on Linux; you can request access if you have an active contract. I suspect running VMS was not of particular interest to them during development, though.

This:

https://downloads.linux.hpe.com/SDR/project/c-ux-beta/

appears to be the download page for anyone interested in it.

John Dallman

Aug 16, 2022, 3:26:30 PM
In article <tdgm1h$45ik$2...@dont-email.me>,
clubley@remove_me.eisner.decus.org-Earth.UFP (Simon Clubley) wrote:

> Actually, the _major_ difference is that on VMS, they are in the
> same process. In Unix land, they are in different processes.

It is a quirk of UNIX-style OSes that process creation is extremely cheap,
and is thus used for all kinds of things. Most other OSes, including VMS
and its mutant child Windows NT, take rather longer to create processes.
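That per-command cycle is easy to exercise directly. A minimal sketch of
what a Unix shell does for every single command - fork a child, replace
its image with the program, wait for it - using the raw POSIX calls from
Python (for illustration; a real shell does this in C):

```python
import os
import sys
import time

# One shell command's lifecycle: fork, exec, wait.
start = time.perf_counter()
pid = os.fork()
if pid == 0:                        # child: become the new program
    os.execv(sys.executable, [sys.executable, "-c", "pass"])
    os._exit(127)                   # only reached if execv() failed
_, status = os.waitpid(pid, 0)      # parent: reap the child
elapsed = time.perf_counter() - start

print(f"child exit code {os.waitstatus_to_exitcode(status)}, "
      f"one fork/exec/wait cycle took {elapsed * 1000:.1f} ms")
```

On a typical Linux box the whole cycle is a few milliseconds, which is why
Unix can afford a fresh process per command where VMS and Windows NT cannot.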

John

Bill Gunshannon

Aug 16, 2022, 3:33:16 PM
Quirk? :-)

bill

Craig A. Berry

Aug 16, 2022, 5:17:27 PM

On 8/16/22 1:08 PM, Simon Clubley wrote:
> On 2022-08-15, Sunset Ash <hpeint...@gmail.com> wrote:
>>

>> HPE has an Integrity emulator for running HP-UX - it's called
>> Portable HP-UX and runs on Linux; you can request access if you have an
>> active contract. I suspect running VMS was not of particular interest to
>> them during development, though.
>
> This:
>
> https://downloads.linux.hpe.com/SDR/project/c-ux-beta/
>
> appears to be the download page for anyone interested in it.

It claims to be a full-system emulator that uses JIT for some
instructions. The release posted there is a beta release from May 2019
that only supports HP-UX. Whether it actually works or is still under
active development? Your guess is as good as mine.

Arne Vajhøj

Aug 16, 2022, 7:16:03 PM
On 8/16/2022 2:01 PM, Simon Clubley wrote:
> On 2022-08-15, Arne Vajhøj <ar...@vajhoej.dk> wrote:
>> On 8/15/2022 7:40 PM, Johnny Billquist wrote:
>>> On 2022-08-15 19:28, Simon Clubley wrote:
>>>> Unix/Linux sets up a new process to run an image and then deletes
>>>> it immediately afterwards and has no such thing as P1 space.
>>>
>>> Yes.
>>> Well, technically, P1 space is an artifact of the hardware, and as such,
>>> P1 space exists also for Unix systems running on VAX, and possibly also
>>> Alpha.
>>>
>>> VMS just keeps P1 space around a bit more disconnected from the program
>>> you might be executing.
>>
>> The big difference is that DCL is living in P1 space (stack space)
>> while a Unix shell is living in heap space (P0 space on a VAX).
>
> Actually, the _major_ difference is that on VMS, they are in the same
> process. In Unix land, they are in different processes.

(they being shell and programs)

That is the same thing. It is possible because DCL is in P1.

Arne

Johnny Billquist

Aug 17, 2022, 7:49:19 AM
On 2022-08-16 19:54, Simon Clubley wrote:
> On 2022-08-15, Johnny Billquist <b...@softjar.se> wrote:
>> On 2022-08-15 19:28, Simon Clubley wrote:
>>> Unix/Linux sets up a new process to run an image and then deletes
>>> it immediately afterwards and has no such thing as P1 space.
>>
>> Yes.
>> Well, technically, P1 space is an artifact of the hardware, and as such,
>> P1 space exists also for Unix systems running on VAX, and possibly also
>> Alpha.
>>
>> VMS just keeps P1 space around a bit more disconnected from the program
>> you might be executing.
>>
>
> It's what VMS does with that address space that makes it so different
> from other operating systems.

It's certainly been a long time since I looked inside VMS. Which I get
called out on every time I make some mistake/assumption/remember things
wrong. Embarrassing each time...

>>> A normal VMS session only ever has one process that is reused over
>>> and over again to run programs (unless you choose to start a subprocess
>>> for some reason.)
>>
>> Well, not really true. Every time you start a program, it gets a new
>> process ID, with new resources allocated in the kernel for it. Just that
>> P1 space is retained between them, unless I remember wrong.
>>
>
> That is completely and totally utterly wrong. However, if you really
> believe that (instead of you just doing a David by trolling by making
> false statements :-)) it also explains your confusion because VMS works
> so differently to what you are clearly used to.

No. I did believe that. I had some recollection that the PIDs were
allocated each time a program was started. Partly (again) coming from
RSX. Structures like PCB, TCB, task headers and so on are setup when a
program is started, and thus every time a program starts, you have a new
context in this sense.
But this is also a place where RSX and VMS differs the most, since in
RSX, the "shell" is in a sense even weirder than VMS, or any other OS I
know of.

But the end result is that every time a program is started, it has its
own process id. That DCL under VMS actually will be starting everything
as a part of its own process is really weird, and it also makes me
wonder how things like spawning another program from a program under VMS
work, since that would need to create a new DCL instance. On the
other hand, I now recollect that VMS doesn't have spawn as a system call
like RSX does.

But that certainly explains why creating a new process under VMS is even
heavier.

So yeah, I certainly seem to have been totally lost on this detail.

> The PID does _not_ belong to the program. It belongs to the process itself.

That was something I thought I remembered being different.

> At many times during the lifecycle of a typical VMS process, there will not
> even _be_ a user-mode program loaded into the process P0 address space.

That on the other hand isn't any strange to me, and does not necessarily
follow, or lead to the topics of the PID itself.

> In Linux, there is no such thing as an executing process without a
> user-mode program, regardless of whether that user-mode program is
> a shell, a user's application program, or something else. Also, every
> time the shell runs a new program, the program is run in a new and
> different process.

Yes.

> OTOH, in VMS, having a process you can interact with, but without
> having any user-mode P0 program loaded, is a perfectly normal thing.

Yes.

> When you ask DCL to run a program, _it_ maps the requested program
> into the P0 address space, sets it up, and then calls it to start
> execution of the user program.

But you say not only that - it also uses the context of DCL. So
that from an accounting point of view, it's still the same process. What
about process quotas like runtime limits? Do DCL reset these, and DCL
itself is excluded from such? And accounting. When a program runs and is
finished, you get accounting information on how much cpu time was used,
memory, and all kind of stuff. Is DCL then doing that accounting
processing, and not the kernel? A process calling something like exit
will not terminate the process, but just jump back to DCL?

> When the user program exits, the user-mode pages used by that program,
> but _only_ those user-mode pages, are removed from the process address
> space, and control returns to DCL to await your next command.

Does DCL do that, or the kernel?

> There is no "new process ID, with new resources allocated in the kernel
> for it". It's the same physical process that gets used over and over
> again during the user's session to run different user-mode programs.

That was something I had forgotten/misunderstood/never realized.

> Running a user program on VMS from DCL is much more like DCL doing
> a dlopen() on the user program into P0 space and then doing a call
> to it, instead of the Linux/Unix approach of creating a whole new
> fresh process for each program the shell wants to run.

I can understand that bit. But I then wonder about the whole winding
down of the running of the program, as commented above.

> _Now_ do you understand why I am describing the VMS approach in the
> way I am ?

In part, yes. I still do not consider DCL to be part of userspace, user
programs or anything like that. It's an OS component, and has rights
and privileges which mean it can do anything, really. Your ranting about
security issues around that topic is still nonsense to me. But VMS is
certainly doing things a bit oddly in some ways that I think are unwise here.

> For the record, I prefer the Unix approach, but I am trying to make
> you understand how the VMS approach actually works, not how you think
> it works.

And as I observed, this is hardly Unix specific. The fact that VMS does
things oddly is just a bit more surprising to me, since I know how RSX
works, upon which so much of VMS is based, but this is one place where
RSX works more like Unix. So how VMS diverged there is an interesting
topic in my head.
(Not that RSX actually is like Unix, RSX is actually sort of different in
another way, but in the perspective of how VMS works, RSX isn't close here.)

Johnny

Rich Alderson

Aug 17, 2022, 2:10:49 PM
Johnny Billquist <b...@softjar.se> writes:

> No. I did believe that. I had some recollection that the PIDs were
> allocated each time a program was started. Partly (again) coming from
> RSX. Structures like PCB, TCB, task headers and so on are setup when a
> program is started, and thus every time a program starts, you have a new
> context in this sense.
> But this is also a place where RSX and VMS differs the most, since in
> RSX, the "shell" is in a sense even weirder than VMS, or any other OS I
> know of.

Interestingly, RSX does things the way TOPS-20 (< TENEX) does them, while VMS
does them very much like the way Tops-10 does them! I would never have guessed
that.

--
Rich Alderson ne...@alderson.users.panix.com
Audendum est, et veritas investiganda; quam etiamsi non assequamur,
omnino tamen proprius, quam nunc sumus, ad eam perveniemus.
--Galen

Simon Clubley

Aug 17, 2022, 3:02:10 PM
On 2022-08-17, Johnny Billquist <b...@softjar.se> wrote:
> On 2022-08-16 19:54, Simon Clubley wrote:
>>
>> That is completely and totally utterly wrong. However, if you really
>> believe that (instead of you just doing a David by trolling by making
>> false statements :-)) it also explains your confusion because VMS works
>> so differently to what you are clearly used to.
>
> No. I did believe that. I had some recollection that the PIDs were
> allocated each time a program was started. Partly (again) coming from
> RSX. Structures like PCB, TCB, task headers and so on are setup when a
> program is started, and thus every time a program starts, you have a new
> context in this sense.
> But this is also a place where RSX and VMS differs the most, since in
> RSX, the "shell" is in a sense even weirder than VMS, or any other OS I
> know of.
>
> But the end result is that every time a program is started, it has it's
> own process id. That DCL under VMS actually will be starting everything
> as a part of its own process is really weird, and it makes me also
> wonder how things like spawning another program from a program under VMS
> works, since it would need to create a new DCL instance then. On the
> other hand, I now recollect that VMS don't have spawn as a system call
> like RSX do.
>

VMS has LIB$SPAWN(), which is a library wrapper around the lower-level
system services. It also has a "$ spawn" DCL command.

This allows you to either 1) run something in a subprocess while you
carry on in the main process or 2) wait for the subprocess to complete
(depending on the options you use).

This most certainly is _NOT_ the way you normally run a program on VMS
however. :-)

For example, all user programs listed in the DCL command table run in
the same process as the DCL instance that loads and executes them.

>> When you ask DCL to run a program, _it_ maps the requested program
>> into the P0 address space, sets it up, and then calls it to start
>> execution of the user program.
>
> But you say that not only that - it also uses the context of DCL. So
> that from an accounting point of view, it's still the same process. What
> about process quotas like runtime limits? Do DCL reset these, and DCL
> itself is excluded from such? And accounting. When a program runs and is
> finished, you get accounting information on how much cpu time was used,
> memory, and all kind of stuff. Is DCL then doing that accounting
> processing, and not the kernel? A process calling something like exit
> will not terminate the process, but just jump back to DCL?
>

The quotas are against the process, not the program. When you try to
run a program that doesn't fit into those quotas, the account or system
quotas need adjusting to give the _process_ (not the program) more quota.

Accounting is the same, unless there are some exceptions I don't know about.
Try hitting Ctrl-T repeatedly while at the DCL prompt and watch the I/O
count increase.

A user-mode exit() in a program run from DCL never terminates the process.
The user-mode program exits and control is returned to DCL.

>> When the user program exits, the user-mode pages used by that program,
>> but _only_ those user-mode pages, are removed from the process address
>> space, and control returns to DCL to await your next command.
>
> Does DCL do that, or the kernel?
>

Both. There are system services, but they are called under the control
of DCL. What I can't remember is if they need to be called manually
from DCL code as part of the cleanup or if they are run automatically
as part of some exit handler previously established by DCL. (It's been
a while since I've been in that part of the I&DS manual :-)).

(IIRC, sys$rundwn() is called with a user-mode flag to cause the user-mode
part of the process to be run down. Everyone feel free to correct me if
I am wrong about that. :-))

>> _Now_ do you understand why I am describing the VMS approach in the
>> way I am ?
>
> In part, yes. I still do not consider DCL to be part of userspace, user
> programs or anything like that. It's an OS component, and have rights
> and privileges which means it can do anything really. Your ranting about
> security issues around that topic is still nonsense to me. But VMS is
> certainly doing things a bit odd in some ways that I think are unwise here.
>

It's only nonsense until you realise that, unlike on Linux, DCL has access
to the privileges of the programs it runs.

Johnny Billquist

Aug 18, 2022, 3:43:04 PM
Under RSX, SPWN$ is the system call. And it creates a new process, which
is also associated with a terminal, and a UIC, which are given as
arguments to SPWN$. The new process has its own virtual memory, in
which the task image is loaded, all shared libraries are set up with
regard to memory mapping, and all that kind of stuff. SPWN$ is sort of
like a combo of fork() and exec() under Unix.

Which obviously is rather different from what VMS does, then.
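The fork()-and-exec()-in-one-call shape that SPWN$ resembles also exists on
modern Unix as posix_spawn(). A sketch using Python's binding of it (minus
the terminal and UIC arguments that SPWN$ additionally takes):

```python
import os
import sys

# posix_spawn() creates a new process and loads a fresh program image
# into it in a single call - roughly the shape of RSX's SPWN$.
pid = os.posix_spawn(
    sys.executable,
    [sys.executable, "-c", "print('hello from the spawned process')"],
    os.environ)
_, status = os.waitpid(pid, 0)
print("spawned pid", pid, "exit code", os.waitstatus_to_exitcode(status))
```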

> This most certainly is _NOT_ the way you normally run a program on VMS
> however. :-)
>
> For example, all user programs listed in the DCL command table run in
> the same process as the DCL instance that loads and executes them.

That is no surprise and not so different from lots of systems. Heck,
even in Unix shells, a bunch of commands are actually built into the
shell itself, and when you give such a command, it's all done within the
shell process itself. Some commands are even *required* to be run within
the shell itself, and would not work as separate programs.

>>> When you ask DCL to run a program, _it_ maps the requested program
>>> into the P0 address space, sets it up, and then calls it to start
>>> execution of the user program.
>>
>> But you say that not only that - it also uses the context of DCL. So
>> that from an accounting point of view, it's still the same process. What
>> about process quotas like runtime limits? Do DCL reset these, and DCL
>> itself is excluded from such? And accounting. When a program runs and is
>> finished, you get accounting information on how much cpu time was used,
>> memory, and all kind of stuff. Is DCL then doing that accounting
>> processing, and not the kernel? A process calling something like exit
>> will not terminate the process, but just jump back to DCL?
>>
>
> The quotas are against the process, not the program. When you try to
> run a program that doesn't fit into those quotas, the account or system
> quotas need adjusting to give the _process_ (not the program) more quota.

Um. Sure, I can see that for things like memory limits. But if we talk
about CPU runtime limits, it's usually meant for that specific program
you run. Or are you saying that VMS can't have a runtime limit?
(runtime, like in, you're not allowed to use more than 2 CPU seconds,
and when you hit that, you'll be killed.)
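For comparison, the Unix mechanism that matches this description (a
kernel-enforced cap on CPU seconds for a process, after which it is
signalled and then killed) is RLIMIT_CPU. A minimal sketch, Unix-only
and not a VMS feature:

```python
# Sketch (Unix, not VMS): RLIMIT_CPU is a per-process cap on CPU
# seconds. The kernel sends SIGXCPU when the soft limit is reached
# and kills the process at the hard limit -- the "use more than
# 2 CPU seconds and you'll be killed" behaviour described above.
import resource

# Lower the soft limit to 2 CPU seconds, keeping the hard limit as-is.
soft0, hard0 = resource.getrlimit(resource.RLIMIT_CPU)
resource.setrlimit(resource.RLIMIT_CPU, (2, hard0))

soft, _ = resource.getrlimit(resource.RLIMIT_CPU)
print("soft CPU limit now:", soft, "seconds")

# Restore the original soft limit (raising soft back up to the
# unchanged hard limit is always permitted).
resource.setrlimit(resource.RLIMIT_CPU, (soft0, hard0))
```

Note that this, too, is a *process* limit, not a per-program one, which
is exactly the distinction being argued here.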

> Accounting is the same, unless there are some exceptions I don't know about.
> Try hitting Ctrl-T repeatedly while at the DCL prompt and watch the I/O
> count increase.

Well. No surprise about that. The whole login session does have such
counting, since that's what accounting wants to have, in order to
(potentially) charge users for the resources used.
But accounting usually can also report how much CPU time, I/O, memory
and so on individual programs used. I was pretty sure VMS could report
that as well, which would be something logged as soon as a program
finishes. But since this is all done within the DCL context, it means
the process is not finished. So how does this happen, or can VMS not
have accounting that gives this kind of information?
(Yes, it's been a bloody long time since I was admining VMS systems...)

> A user-mode exit() in a program run from DCL never terminates the process.
> The user-mode program exits and control is returned to DCL.

So things jump back to DCL at that point. So exit() would not terminate
the process at all.

>>> _Now_ do you understand why I am describing the VMS approach in the
>>> way I am ?
>>
>> In part, yes. I still do not consider DCL to be part of userspace, user
>> programs or anything like that. It's an OS component, and have rights
>> and privileges which means it can do anything really. Your ranting about
>> security issues around that topic is still nonsense to me. But VMS is
>> certainly doing things a bit odd in some ways that I think are unwise here.
>>
>
> It's only nonsense until you realise that, unlike on Linux, DCL has access
> to the privileges of the programs it runs.

DCL runs as a part of the kernel. It has the potential to have any
privilege it wants, if it was malicious. User privileges are pretty
irrelevant and uninteresting. And yes, bugs in DCL can be rather serious
because of the rights and abilities it has.
This is where you seem to miss the point. DCL is already at a point
where, if it wanted, it could do anything. Which is why users cannot
write their own replacements for DCL and run them, without having
serious privileges.
And partly also why there are almost no alternatives to DCL. It's a bit
of a mess, and pretty tricky to write another CLI for VMS.
MCR did exist at one point, and might still, but I'm not sure I ever saw
anything else.

Johnny

Bill Gunshannon

Aug 18, 2022, 4:48:06 PM
On 8/18/22 15:43, Johnny Billquist wrote:
>
> And partly also why there are almost no alternatives to DCL. It's a bit
> of a mess, and pretty tricky to write another CLI for VMS.
> MCR did exist at one point, and might still, but I'm not sure I ever saw
> anything else.

Actually, there was. When they came out with the first POSIX subsystem
(I really don't know what else to call it) it came with a version of the
Bourne Shell that could be installed on a per user basis as the login
CLI instead of DCL. I know I did it but only for testing. I don't
remember how it was done. Something set up with SYSUAF, I think.
None of my users ever asked for it and even being primarily a Unix user
I preferred DCL on VMS.

bill


abrsvc

Aug 18, 2022, 5:00:25 PM
Also, Cerner had their clinical application that was a replacement for DCL. At the time, it was the only CLI replacement application known.

Dan

Jan-Erik Söderholm

Aug 18, 2022, 6:10:07 PM
Den 2022-08-18 kl. 21:43, skrev Johnny Billquist:

> Um. Sure, I can see that for things like memory limits. But if we talk
> about CPU runtime limits, it's usually meant for that specific program you
> run. Or are you saying that VMS can't have a runtime limit? (runtime, like
> in, you're not allowed to use more than 2 CPU seconds, and when you hit
> that, you'll be killed.)
>

Process quotas are *process* quotas. Doesn't matter if you run 1 or 10 EXEs
in that process.

Don't mixup process quotas with the accounting features.


> I was pretty sure VMS could report that as
> well, which would be something logged as soon as a program finishes.

Yes, you can enable that. But that is an *accounting* feature,
not some quota for the process. The resources used by the EXE
are still accumulated against the *process* quotas.

> So things jumps back to DCL at that point. So exit() would not terminate
> the process at all.

It depends.
If the EXE runs in a DCL context, the process will return to DCL.
If the EXE runs without a DCL context, exit from the EXE terminates the
process.

It depends on how the process was created.

If you just do a RUN /DETACH on the target EXE itself, there is no DCL
environment. Exit of the EXE terminates the process.

If you RUN /DETACH the image named LOGINOUT.EXE and give it a COM
file as the /INPUT parameter, you will have a DCL environment and
you can do whatever you like in the COM file. Exit from the/an EXE
just returns to DCL and the COM file.



Arne Vajhøj

Aug 18, 2022, 7:21:37 PM
SYSUAF> MOD username /CLI=xxxxxx

It could also be done for session when logging in by user:

Login: username/CLI=xxxxxx

Arne



Bill Gunshannon

Aug 19, 2022, 8:09:52 AM
Thank you. That jogged my memory.

bill


Simon Clubley

Aug 19, 2022, 8:24:34 AM
On 2022-08-18, Johnny Billquist <b...@softjar.se> wrote:
> On 2022-08-17 21:02, Simon Clubley wrote:
>>
>> VMS has LIB$SPAWN(), which is a library wrapper around the lower-level
>> system services. It also has a "$ spawn" DCL command.
>>
>> This allows you to either 1) run something in a subprocess while you
>> carry on in the main process or 2) wait for the subprocess to complete
>> (depending on the options you use).
>
> Under RSX, SPWN$ is the system call. And it creates a new process, which
> is also associated with a terminal, and a UIC, which is given as
> arguments to SPWN$. The new process have it's own virtual memory, in
> which the task image is loaded, all shared libraries are setup with
> regards to memory mapping, and all that kind of stuff. SPWN$ is sortof
> like a combo of fork() and exec() under Unix.
>
> Which obviously is rather different than what VMS does then.
>

No. Up to this point in the process lifecycle, a spawn on VMS ends up doing
the same as you describe above for RSX, in that you do end up with another
process with its own PID.

It's just that after this, a subprocess behaves in the same way as the
main process, in that the DCL instance running in the subprocess starts
any user programs in that same subprocess, just as DCL running in the main
process starts any user programs in the same main process.

>>
>> The quotas are against the process, not the program. When you try to
>> run a program that doesn't fit into those quotas, the account or system
>> quotas need adjusting to give the _process_ (not the program) more quota.
>
> Um. Sure, I can see that for things like memory limits. But if we talk
> about CPU runtime limits, it's usually meant for that specific program
> you run. Or are you saying that VMS can't have a runtime limit?
> (runtime, like in, you're not allowed to use more than 2 CPU seconds,
> and when you hit that, you'll be killed.)
>

In VMS, CPU runtime limits are documented as being against the process,
although I've never used them. For example:

SUBMIT

/CPUTIME

/CPUTIME=time

Defines a CPU time limit for the batch job. You can specify time
as delta time, 0, INFINITE, or NONE. If the queue on which the
job executes has a defined CPUMAXIMUM value, the smaller of
the SUBMIT command and queue values is used. If the queue on
which the job executes does not have a specified maximum CPU time
limit, the smaller of the SUBMIT command and user authorization
file (UAF) values is used. If the queue on which the job executes
does not have a specified maximum CPU time limit and the UAF has
a specified CPU time limit of NONE, either the value 0 or the
keyword INFINITE allows unlimited CPU time. If you specify the
keyword NONE, the specified queue or UAF value is used. CPU time
values must be greater than or equal to the number specified by
the system parameter PQL_MCPULM.


>> Accounting is the same, unless there are some exceptions I don't know about.
>> Try hitting Ctrl-T repeatedly while at the DCL prompt and watch the I/O
>> count increase.
>
> Well. No surprise about that. The whole login session does have such
> counting, since that's what accounting wants to have, in order to
> (potentially) charge users with used resources.
> But accounting usually can also report how much CPU time, I/O, memory
> and so on individual programs used. I was pretty sure VMS could report
> that as well, which would be something logged as soon as a program
> finishes. But since this is all done within the DCL context, it means
> the process is not finished. So how does this happen, or can VMS not
> have accounting that gives this kind of information?
> (Yes, it's been a bloody long time since I was admining VMS systems...)
>

Jan-Erik pointed out one thing I had forgotten about and that was the
optional image-level accounting in addition to the overall process-level
accounting. You still get the normal process-level accounting on top of
the image-level accounting if you use that option however.

>>
>> It's only nonsense until you realise that, unlike on Linux, DCL has access
>> to the privileges of the programs it runs.
>
> DCL runs as a part of the kernel. It has the potential to have any
> privilege it wants, if it was malicious. User privileges are pretty
> irrelevant and uninteresting. And yes, bugs in DCL can be rather serious
> because of the rights and abilities it has.
> This is where you seem to miss the point. DCL is already at a point
> where, if it wanted, it could do anything. Which is why users cannot
> write their own replacements for DCL and run them, without having
> serious privileges.

Actually, no I am not. The point I am making is that a DCL which behaves
in this way increases the available attack surface, compared to more
secure options such as how Unix shells work.

Scott Dorsey

Aug 19, 2022, 9:02:37 AM
This was kind of like Software Tools for Pr1mos or Cygwin for Windows. It
was just enough like Unix to seem familiar, but not enough like Unix to
actually be familiar. It was just enough different to be frustrating...
--scott

--
"C'est un Nagra. C'est suisse, et tres, tres precis."

Jan-Erik Söderholm

Aug 19, 2022, 9:32:04 AM
Den 2022-08-19 kl. 14:24, skrev Simon Clubley:

> Jan-Erik pointed out one thing I had forgotten about and that was the
> optional image-level accounting in addition to the overall process-level
> accounting.

Well, both PROCESS and IMAGE are possible to enable or disable.

So you *can* have image accounting *without* process accounting... :-)
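As a point of comparison for the Unix-minded: per-image accounting
corresponds roughly to the per-child rusage a parent collects with
wait4(), independently of any session-wide accounting. A sketch (Unix,
not VMS):

```python
# Sketch: a Unix analogue of per-image accounting -- a parent can
# collect CPU usage for each individual child program it runs via
# os.wait4(), independent of any whole-session accounting.
import os

pid = os.fork()
if pid == 0:
    # Child: burn a little CPU, then exit (like one "image" running).
    sum(i * i for i in range(200_000))
    os._exit(0)

# Parent: wait4() returns the child's rusage alongside its exit status.
_, status, rusage = os.wait4(pid, 0)
print("child user CPU seconds:", rusage.ru_utime)
```

This is how Unix shells can report per-command `time` figures while the
kernel still accumulates totals against the session.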



Bill Gunshannon

Aug 19, 2022, 10:37:36 AM
Are you talking about the CLI or the POSIX Subsystem? The POSIX
Subsystem was very much like the Software Tools Virtual Operating
System (not to be confused with the Kernighan & Plauger Software
Tools which was a handful of utilities but no API). But the ability
to run the Bourne Shell (or any other alternate CLI) is something
much different. It could not be done on Pr1mos and I don't believe
it can be done on Windows. An alternate shell can only be run as
a sub-process to the normal OS CLI. And that can be done on most
any OS, really. STVOS ran on a lot of different systems (including
all the DEC OSes) but I was never aware of a way to make the shell
an alternate CLI like you could do with VMS and the POSIX Subsystem.

And I have long said that the whole POSIX concept was nothing more
than STVOS revived and warmed over. Imagine what POSIX could have
been if the development of the STVOS had continued from its origin
until the present instead of lying fallow for decades only to be
tried again starting from scratch.

On another side note, I wonder if being able to run a Unix like
shell as a CLI would help with using the install scripts under
GNV?

bill



Scott Dorsey

Aug 19, 2022, 6:36:43 PM
Bill Gunshannon <bill.gu...@gmail.com> wrote:
>>
>> This was kind of like Software Tools for Pr1mos or Cygwin for Windows. It
>> was just enough like Unix to seem familiar, but not enough like Unix to
>> actually be familiar. It was just enough different to be frustrating...
>
>Are you talking about the CLI or the POSIX Subsystem? The POSIX
>Subsystem was very much like the Softwware Tools Virtual Operating
>System (not to be confused with the Kernighan & Plauger Software
>Tools which was a handful of utilities but no API). But the ability
>to run the Bourne Shell (or any other alternate CLI) is something
>much different. It could not be done on Pr1mos and I don't believe
>it can be done on Windows. An alternate shell can only be run as
>a sub-process to the normal OS CLI. And that can be done on most
>any OS, really. STVOS ran on a lot of different systems (including
>all the DEC OSes) but I was never aware of a way to make the shell
>an alternate CLI like you could do with VMS and the POSIX Subsystem.

SWT on Primos gave you a shell that was kind of like the Bourne Shell
until you tried to do something useful with it and then it turned out
it wasn't exactly like it. It had pipes and redirection but they didn't
quite work the way they did under Unix with easy forks.

>And I have long said that the whole POSIX concept was nothing more
>than STVOS revived and warmed over. Imagine what POSIX could have
>been if the development of the STVOS had continued from its origin
>until the present instead of lying fallow for decades only to be
>tried again starting from scratch.

Posix shells and compatibility libraries exist on various operating systems
and exist only to allow them to bid for specific government contracts. In
many cases they pass the compatibility test suites without actually working
in any useful way.

Johnny Billquist

Aug 21, 2022, 11:08:04 AM
On 2022-08-19 00:10, Jan-Erik Söderholm wrote:
> Den 2022-08-18 kl. 21:43, skrev Johnny Billquist:
>
>> Um. Sure, I can see that for things like memory limits. But if we talk
>> about CPU runtime limits, it's usually meant for that specific program
>> you run. Or are you saying that VMS can't have a runtime limit?
>> (runtime, like in, you're not allowed to use more than 2 CPU seconds,
>> and when you hit that, you'll be killed.)
>>
>
> Process quotas are *process* quotas. Doesn't matter if you run 1 or 10
> EXEs in that process.
>
> Don't mixup process quotas with the accounting features.

It's more being lazy. I was hoping people would understand the concepts
here without having to write every detail in some very specific form.

>> I was pretty sure VMS could report that as well, which would be
>> something logged as soon as a program finishes.
>
> Yes, you can enable that. But that is an *accounting* feature,
> not some quota for the process. The resources used by the EXE
> are still accumulated against the *process* quotas.

Well. CPU usage limit would be something you would expect to be applied
to the program you run, and not to your session as a whole. But I'm
starting to get the feeling that VMS can't do this then.

And if a program finishes but that just means you get back to DCL, then
I'm still wondering how the accounting is done, since the process is
still there and the kernel doesn't have much of a clue about what happened.

>> So things jumps back to DCL at that point. So exit() would not
>> terminate the process at all.
>
> It depends.
> If the EXE runs in an DCL context, the process will return to DCL.
> If the EXE runs without an DCL context, exit from the EXE terminates the
> process.
>
> It depends on how the process was created.
>
> If you just do a RUN /DETACH on the target EXE itself, there is no DCL
> environment. Exit of the EXE terminates the process.
>
> If you RUN /DETACH the image named LOGINOUT.EXE and give it a COM
> file as the /input parameter, you will have an DCL environment and
> you can do whatever you like in the COM file. Exit from the/an EXE
> just return to DCL and the COM file.

But how is this done from a technical point of view? There is a huge
difference between the kernel getting a call/signal/whatever saying that
the process should die, and the kernel removing all associated resources,
versus a return simply being made to DCL, from where the program was called.

Or does a terminating program always trap into the kernel, with the kernel
then noticing that there is a CLI associated with the process, and moving
execution back to the CLI with some additional information that the
program terminated?

Johnny

Johnny Billquist

Aug 21, 2022, 11:18:30 AM
On 2022-08-19 14:24, Simon Clubley wrote:
> On 2022-08-18, Johnny Billquist <b...@softjar.se> wrote:
>> On 2022-08-17 21:02, Simon Clubley wrote:
>>>
>>> VMS has LIB$SPAWN(), which is a library wrapper around the lower-level
>>> system services. It also has a "$ spawn" DCL command.
>>>
>>> This allows you to either 1) run something in a subprocess while you
>>> carry on in the main process or 2) wait for the subprocess to complete
>>> (depending on the options you use).
>>
>> Under RSX, SPWN$ is the system call. And it creates a new process, which
>> is also associated with a terminal, and a UIC, which is given as
>> arguments to SPWN$. The new process have it's own virtual memory, in
>> which the task image is loaded, all shared libraries are setup with
>> regards to memory mapping, and all that kind of stuff. SPWN$ is sortof
>> like a combo of fork() and exec() under Unix.
>>
>> Which obviously is rather different than what VMS does then.
>>
>
> No. To this point in the process lifecycle, a spawn on VMS ends up doing
> the same as you describe above with RSX in that you do end up with another
> process with its own PID.
>
> It's just that after this, a subprocess behaves in the same way as in the
> main process, in that the DCL instance running in the subprocess starts
> any user programs in the same subprocess just as DCL running in the main
> process starts any user programs in the same main process.

Meaning there is always DCL? That seems to contradict what Jan-Erik said.

>>> The quotas are against the process, not the program. When you try to
>>> run a program that doesn't fit into those quotas, the account or system
>>> quotas need adjusting to give the _process_ (not the program) more quota.
>>
>> Um. Sure, I can see that for things like memory limits. But if we talk
>> about CPU runtime limits, it's usually meant for that specific program
>> you run. Or are you saying that VMS can't have a runtime limit?
>> (runtime, like in, you're not allowed to use more than 2 CPU seconds,
>> and when you hit that, you'll be killed.)
>>
>
> In VMS, CPU runtime limits are documented as being against the process,
> although I've never used them. For example:

CPU limits for a batch process are actually for the whole thing, and not
for individual programs.

Not sure if VMS has CPU limits for individual programs. After all these
messages, it almost sounds like it doesn't.

In RSX, it's a switch to RUN. Like this:

.help run ins tim

RUN [ddnn:][$]filename /TIME=nM
                       /TIME=nS

Sets the time limit for a task that uses the CPU. When the time limit
expires, the task is aborted and a message is displayed. If the task
being run is privileged, this keyword is ignored.

Specify the time limit in minutes (M) or in seconds (S); M is the default.
(Valid only on systems with Resource Accounting.)


Obviously, if RSX had worked like VMS here, you would have a serious
headache if DCL was running in the same process context, as that process
context is killed at that point.

>>> It's only nonsense until you realise that, unlike on Linux, DCL has access
>>> to the privileges of the programs it runs.
>>
>> DCL runs as a part of the kernel. It has the potential to have any
>> privilege it wants, if it was malicious. User privileges are pretty
>> irrelevant and uninteresting. And yes, bugs in DCL can be rather serious
>> because of the rights and abilities it has.
>> This is where you seem to miss the point. DCL is already at a point
>> where, if it wanted, it could do anything. Which is why users cannot
>> write their own replacements for DCL and run them, without having
>> serious privileges.
>
> Actually, no I am not. The point I am making is that a DCL which behaves
> in this way increases the available attack surface, compared to more
> secure options such as how Unix shells work.

That there are more risks with code that has such rights is hardly new,
is it?
You could argue that this design makes it more sensitive to bugs causing
security problems, and I'm sure everyone would agree.

No different than any other part of the kernel. A bug anywhere in the
kernel has the same potential problem.

From a security point of view then, minimizing the size of the kernel
and other subsystems that run with such elevated rights makes the risk
easier to assess, analyze and fix. Nothing new in that either.

So there isn't really anything new under the sun here. If you find a bug
in DCL, good. Report it, and let's hope it gets fixed. Is there a
security issue that DCL gets the rights of the executing program? Nope.

Johnny

Jan-Erik Söderholm

Aug 21, 2022, 11:27:29 AM
Sure, an RSX "task" is like a VMS "process".

You can of course start a VMS EXE in a new "detached process" and
run it without a DCL environment. Then there is nothing but that
EXE running in that process. And when the EXE exits, the process
is deleted.

But you can also, if you want or need, start the same EXE in a DCL
environment by calling LOGINOUT.EXE and having a COM file as the
sys$input to that EXE where you run your main EXE. You might need
to have a "script" environment in your detached process where you
run different EXEs.

>
> Obviously, if RSX had worked like VMS here, you would have a serious
> headache if DCL was running in the same process context, as that process
> context is killed at that point.

I'd say that in most cases you just run the EXE in the detached
process without a DCL environment. So it is a bit like running
an RSX EXE in a new "task".

Dave Froble

Aug 21, 2022, 2:52:15 PM
On 8/21/2022 11:18 AM, Johnny Billquist wrote:
> On 2022-08-19 14:24, Simon Clubley wrote:
>> On 2022-08-18, Johnny Billquist <b...@softjar.se> wrote:
>>> On 2022-08-17 21:02, Simon Clubley wrote:
>>>>
>>>> VMS has LIB$SPAWN(), which is a library wrapper around the lower-level
>>>> system services. It also has a "$ spawn" DCL command.
>>>>
>>>> This allows you to either 1) run something in a subprocess while you
>>>> carry on in the main process or 2) wait for the subprocess to complete
>>>> (depending on the options you use).
>>>
>>> Under RSX, SPWN$ is the system call. And it creates a new process, which
>>> is also associated with a terminal, and a UIC, which is given as
>>> arguments to SPWN$. The new process have it's own virtual memory, in
>>> which the task image is loaded, all shared libraries are setup with
>>> regards to memory mapping, and all that kind of stuff. SPWN$ is sortof
>>> like a combo of fork() and exec() under Unix.
>>>
>>> Which obviously is rather different than what VMS does then.
>>>
>>
>> No. To this point in the process lifecycle, a spawn on VMS ends up doing
>> the same as you describe above with RSX in that you do end up with another
>> process with its own PID.
>>
>> It's just that after this, a subprocess behaves in the same way as in the
>> main process, in that the DCL instance running in the subprocess starts
>> any user programs in the same subprocess just as DCL running in the main
>> process starts any user programs in the same main process.
>
> Meaning there is always DCL? That seems to contradict what Jan-Erik said.

Actually, I'm not sure of that.

An interactive process has a CLI, whatever is specified in the SYSUAF record
for that user account. Usually DCL, but it does not have to be DCL.

A batch job has a batch command file that specifies activity.

A detached process can read from a command file, however, I do not think it has
to have such. While I've used detached processes, I usually have a command file
for activity. Not sure it is required.

Now, normally on VMS, there is some kind of SYS$COMMAND, SYS$INPUT, SYS$OUTPUT,
and SYS$ERROR. Or some other method of seeing completion, whether successful or
not.

The I&DS book(s) would be helpful ...

>>>> The quotas are against the process, not the program. When you try to
>>>> run a program that doesn't fit into those quotas, the account or system
>>>> quotas need adjusting to give the _process_ (not the program) more quota.
>>>
>>> Um. Sure, I can see that for things like memory limits. But if we talk
>>> about CPU runtime limits, it's usually meant for that specific program
>>> you run. Or are you saying that VMS can't have a runtime limit?
>>> (runtime, like in, you're not allowed to use more than 2 CPU seconds,
>>> and when you hit that, you'll be killed.)
>>>
>>
>> In VMS, CPU runtime limits are documented as being against the process,
>> although I've never used them. For example:
>
> CPU limits for a batch process is actually for the whole thing, and not for
> individual programs.
>
> Not sure if VMS have CPU limits for individual programs. After all these
> messages, it almost sounds like it don't.

It's been a while, but I'm pretty sure that CPU and time limits are on a
process. I've never used them.

> In RSX, it's a switch to RUN. Like this:
>
> .help run ins tim
>
> RUN [ddnn:][$]filename /TIME=nM
> /TIME=nS
>
> Sets the time limit for a task that uses the CPU. When the time limit expires,
> the task is aborted and a message is displayed. If the task being run is
> privileged, this keyword is ignored.
>
> Specify the time limit in minutes (M) or in seconds (S); M is the default.
> (Valid only on systems with Resource Accounting.)

If I wished such, I'd most likely use a timer AST.
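(For the curious: the rough Unix cousin of a timer AST for this purpose
is an interval timer whose expiry delivers a signal to a handler. A
sketch in Python, assuming a Unix host; on VMS the analogous building
block would be a timer system service with an AST routine, which is not
shown here:)

```python
# Sketch: the Unix cousin of "a timer AST" enforcing a CPU limit --
# an interval timer that counts this process's CPU time
# (ITIMER_VIRTUAL) and delivers SIGVTALRM to a handler on expiry,
# which can then abort the work.
import signal

expired = False

def on_cpu_timer(signum, frame):
    # The "AST routine": runs asynchronously when the timer fires.
    global expired
    expired = True

signal.signal(signal.SIGVTALRM, on_cpu_timer)
# Fire after roughly 0.05 s of CPU time consumed by this process.
signal.setitimer(signal.ITIMER_VIRTUAL, 0.05)

# Busy work until the "AST" fires and flags us to stop.
n = 0
while not expired:
    n += 1

print("handler ran after", n, "iterations")
```

The handler only sets a flag and lets the main loop notice it, which is
the usual safe pattern for both signal handlers and ASTs.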

> Obviously, if RSX had worked like VMS here, you would have a serious headache if
> DCL was running in the same process context, as that process context is killed
> at that point.
>
>>>> It's only nonsense until you realise that, unlike on Linux, DCL has access
>>>> to the privileges of the programs it runs.
>>>
>>> DCL runs as a part of the kernel. It has the potential to have any
>>> privilege it wants, if it was malicious. User privileges are pretty
>>> irrelevant and uninteresting. And yes, bugs in DCL can be rather serious
>>> because of the rights and abilities it has.
>>> This is where you seem to miss the point. DCL is already at a point
>>> where, if it wanted, it could do anything. Which is why users cannot
>>> write their own replacements for DCL and run them, without having
>>> serious privileges.
>>
>> Actually, no I am not. The point I am making is that a DCL which behaves
>> in this way increases the available attack surface, compared to more
>> secure options such as how Unix shells work.
>
> That there are more risks with code that have such rights is hardly new, is it?
> You could argue that this design makes it more sensitive to bugs causing
> security problems, and I'm sure everyone would agree.

A friend got tired of hearing about bugs, so he implemented a "bug" in the
terminal I/O routines. If active, the "bug" would come out and crawl around
the screen. Some people are easily bored.

> No different than any other part of the kernel. A bug anywhere in the kernel
> have the same potential problem.
>
> From a security point of view then, minimizing the size of the kernel and other
> subsystems that runs with such elevated rights makes the risk easier to assess,
> analyze and fix. Nothing new in that either.

Not having bugs is an even better idea ...

> So there isn't really anything new under the sun here. If you find a bug in DCL,
> good. Report it, and let's hope it gets fixed. Is there a security issue that
> DCL gets the rights of the executing program? Nope.

I don't have a problem with that.


--
David Froble Tel: 724-529-0450
Dave Froble Enterprises, Inc. E-Mail: da...@tsoft-inc.com
DFE Ultralights, Inc.
170 Grimplin Road
Vanderbilt, PA 15486

Jan-Erik Söderholm

Aug 21, 2022, 4:29:52 PM
There is a DCL environment if you start your detached process
by using the LOGINOUT.EXE system image to start/create it.

But you do not have to; if you do not need a DCL environment,
you just let your detached process run your own EXE directly.

Without a DCL environment:

$ run /detached [other switches as needed] MYEXE.EXE

With a DCL environment:

$ run /detached /input=myexe.com sys$system:loginout.exe

LOGINOUT does a full "login" of the detached process, creates
a DCL environment in it, and starts reading the /input file just
as when DCL processes any COM file.

The MYEXE.COM file can have any setup needed for the main EXE
and then do a normal RUN of it. Such as process unique logical
names or whatever.

We have background (detached) processes started in both ways
depending on the requirements of the process.

>
> An interactive process haws a CLI, whatever is specified in the SYSUAF
> record for that user account.  Usually DCL, but it does not have to be DCL.
>
> A batch job has a batch command file that specifies activity.
>

I expect any batch job to have a DCL environment.

> A detached process can read from a command file, however, I do not think it
> has to have such.  While I've used detached processes, I usually have a
> command file for activity.  Not sure it is required.

No, you can start an EXE directly, if that is fine.

>
> Now, normally on VMS, there is some kind of SYS$COMMAND, SYS$INPUT,
> SYS$OUTPUT, and SYS$ERROR.  Or some other method of seeing completion,
> whether successful or not.

But those are the process "permanent" logical names. As far as I know,
any process has these defined by the system at process creation.

>
> The I&DS book(s) would be helpful ...
>
>>>>> The quotas are against the process, not the program. When you try to
>>>>> run a program that doesn't fit into those quotas, the account or system
>>>>> quotas need adjusting to give the _process_ (not the program) more quota.
>>>>
>>>> Um. Sure, I can see that for things like memory limits. But if we talk
>>>> about CPU runtime limits, it's usually meant for that specific program
>>>> you run. Or are you saying that VMS can't have a runtime limit?
>>>> (runtime, like in, you're not allowed to use more than 2 CPU seconds,
>>>> and when you hit that, you'll be killed.)
>>>>
>>>
>>> In VMS, CPU runtime limits are documented as being against the process,
>>> although I've never used them. For example:
>>
>> CPU limits for a batch process is actually for the whole thing, and not for
>> individual programs.
>>
>> Not sure if VMS have CPU limits for individual programs. After all these
>> messages, it almost sounds like it don't.
>
> It's been a while, but I'm pretty sure that CPU and time limits are on a
> process.  I've never used them.

Sometimes you'd wished you had, when you get that run-away process... :-)

>
>> In RSX, it's a switch to RUN. Like this:
>>
>> .help run ins tim
>>
>>  RUN [ddnn:][$]filename /TIME=nM
>>                         /TIME=nS
>>
>>  Sets the time limit for a task that uses the CPU. When the time limit
>> expires,
>>  the task is aborted and a message is displayed. If the task being run is
>>  privileged, this keyword is ignored.

But that creates a new RSX process (called "task" in RSX).

It is the same doing this on VMS:

$ run /detached /time_limit=00:10:00 [other switches as needed] MYEXE.EXE

A 10 min CPU limit in that case. It can also be used for the other
case with a DCL environment, of course. The limit is still valid for
the whole process, no matter if it is a single EXE or a DCL environment.

$ help run process /time

RUN

Process

/TIME_LIMIT

/TIME_LIMIT=limit

Specifies the maximum amount of CPU time (in delta time) a
created process can use. CPU time is allocated to the created
process in units of 10 milliseconds. When it has exhausted its
CPU time limit quota, the created process is deleted.




Simon Clubley

Aug 22, 2022, 1:54:05 PM
to
On 2022-08-21, Johnny Billquist <b...@softjar.se> wrote:
>
> And if a program finishes, but it just means you get back to DCL, then
> I'm still wondering how the accounting is done, since the process is
> still there, the kernel don't have as much clue about what happened.
>

The image-level accounting records are probably written during the
user-mode rundown system service call, but that's just a guess as this
is a part of VMS I have not really looked at.

>
> But how is this done from a technical point of view? There is a huge
> difference between the kernel getting a call/signal/whatever that the
> process should die, and the kernel removes all associated resources, and
> a return being done to DCL, from where the program was called.
>

As already mentioned, DCL is responsible for kicking off the cleanup of
the resources allocated to the user-mode program when that _program_ exits.
The kernel does the normal process-level cleanup when the _process_ exits.

> Or is a program terminating always going into the kernel, and the kernel
> then notices that there is a CLI associated here, and it then moves the
> execution back to the CLI with some additional information that the
> program terminated?
>

The CLI is between the user-mode program exiting and the process exiting.

If you manage to crash DCL itself so DCL exits, the process itself exits
as a result.