
Hoff's Boot Camp report available


Paul Sture

Oct 9, 2015, 11:38:02 AM
<http://labs.hoffmanlabs.com/node/1917>

Thanks for that Hoff.

--
Anyone who thinks editors are not needed for writing has never written.

JF Mezei

Oct 9, 2015, 3:08:09 PM
On 2015-10-09 11:37, Paul Sture wrote:
> <http://labs.hoffmanlabs.com/node/1917>
>


A bit disappointed they won't allow any "easy" goodies before the port
to x86 is done.

I recall many of the goodies Guy Peleg had done for 8.3.

The "80% still on VAX/Alpha" is fairly scary.

A native VMS on x86 should be able to run emulators for VAX and Alpha.

This way, that 80% could move their existing systems to x86 VMS by
emulating their VAX/Alpha environment on it, and progressively move
software to the x86 VMS instance.


Getting that 80% to buy an x86 box with a VMS license will be easy. But
getting their environment moved to x86, especially when you consider that
much of that software likely isn't developed anymore, won't be so easy.


Imagine a SIMH VAX instance where your old software is accessing all its
files not from a container file, but from the x86 VMS file system. You
can then use native TPU to edit your software configs, source, etc. and
make them run on the emulated VAX instance. And as you recompile stuff,
you can recompile native. And if they do a good enough job, you could
get portions of an application to run on the x86 VMS instance, and portions
on the SIMH VAX emulation, with mailbox, lock, etc. bridges between the
two instances.


David Froble

Oct 9, 2015, 4:25:27 PM
I won't disagree with some of your suggestions. If SimH or similar was able to
access disks directly, there might be some benefit. Until someone asks for both
VMS and VMS on SimH to observe each other's locks, and such. Maybe the
container disks aren't such a bad idea after all ....

But the important thing to remember is, if someone has an artery pumping out
blood, and a minor scrape on the finger, isn't it more important to address the
major problem first?

Without VMS on x86, VMS is basically dead. So, I cannot see anything more
important than the port.

Dirk Munk

Oct 9, 2015, 4:53:19 PM
Paul Sture wrote:
> <http://labs.hoffmanlabs.com/node/1917>
>
> Thanks for that Hoff.
>

Yes, nice synopsis.

So the new IP stack will be out quite soon. Any news about the CLI
style? Standard VMS DCL style, or Unix style, or both?
And how about IPv6?

No graphics card support. That is what I expected. So if you want a VMS
laptop, run a VM, and put VMS on top of it, combined with Windows, or
Linux, or MacOS. One of the latter three will give you a graphical
window on VMS.

Of course they could support VESA BIOS calls. That would give slow and
perhaps limited graphics support, but it would work with any graphics card.


JF Mezei

Oct 9, 2015, 5:36:44 PM
On 2015-10-09 16:25, David Froble wrote:

> I won't disagree with some of your suggestions.

This is not recommended. Very dangerous to your health to agree with me :-)


> Without VMS on x86, VMS is basically dead. So, I cannot see anything more
> important than the port.

I don't think there is any debate on the port to x86 being the top priority.
Based on the lack of media attention to Itanic, there isn't any speculation
on when it will sink anymore, because it is assumed to have sunk already. VMS
only has so much time it can survive underwater on a single breath.


Having said that, is there much of a point in doing all the effort to
swim back to the surface to get some air, only to find out nobody will
come to rescue you and you'll freeze to death?

So, while the engineers struggle to get VMS onto x86, it is important
for the other folks to work out a strategy for getting the VMS
installed base to buy in and move to VMS on x86.


A good VAX/Alpha emulation "app" on VMS would make it much easier to get
those 80% to migrate to new hardware ASAP and start paying VSI real money.

(and it may be an interesting way for VSI to end up supporting VAX VMS and
Alpha VMS via the emulators on x86 VMS)

The 20% who have moved to Itanic are much more likely to be able to port
to x86 because a higher percentage of software on VMS-IA64 is still
"alive" and thus more likely to be ported.

But on Alpha and especially VAX, there is a lot of "abandonware" out
there which has locked the customer onto the old hardware.

So those customers could move to Windows/Linux with a total rewrite or
purchase to replace their abandonware apps. If you want them to move to
x86 VMS native, they face the same mountain to climb, having to rewrite
or repurchase the software they need. And it is likely there are more
software choices available on Windows/Linux than on x86 VMS.

On the other hand, if you let them host their unportable software on
x86 VMS via emulators, it makes it far more palatable for them to move
to x86 VMS than to move to Linux or Windows.




Craig A. Berry

Oct 9, 2015, 6:19:42 PM
On 10/9/15 4:37 PM, JF Mezei wrote:

> A good VAX/Alpha emulation "app" on VMS would make it much easier to get
> those 80% to migrate to new hardware ASAP and start paying VSI real money.

You perhaps missed Clair Grant's recent message saying, "The instruction
set decoder will be done within the next few days; 30,000 lines of code
for the implementation and test suite."

See <fa8f4064-ae12-45fa...@googlegroups.com>

terry+go...@tmk.com

Oct 9, 2015, 8:00:57 PM
On Friday, October 9, 2015 at 4:25:27 PM UTC-4, David Froble wrote:
> I won't disagree with some of your suggestions. If SimH or similar was able to
> access disks directly, there might be some benefit. Until someone asks for both
> VMS and VMS on SimH to observe each other's locks, and such. Maybe the
> container disks aren't such a bad idea after all ....

AlphaVM-Pro (mentioned in another comp.os.vms thread) supports physical disks as well as container files. I tested it with a SATA drive (which appears as a DKAnnn device on an ISPxxxx controller to VMS) as well as with SAS drives (which have less translation overhead). For difficult cases AlphaVM-Pro provides a number of translation configuration options.

I would expect a cluster of AlphaVM instances on a single physical box to be able to coordinate access to shared physical drives. Similarly, it should be possible to create a NI cluster with the disk(s) MSCP served from an AlphaVM instance, regardless of the other cluster member(s) being emulators or actual Alpha (or VAX, subject to SPD interoperability limits) systems.

I don't know if multiple systems with physical connections to the same drive would work. This would need to be a legacy bus like parallel SCSI in any event, and a NI cluster over 10GbE with modern drives would likely be faster. [The Ethernet just looks like a VERY fast DE500 to the emulated VMS system.]

The above refers to what is technically possible - whether or not this is a fully-supported configuration from either EmuVM or HP is a different question.

JF Mezei

Oct 9, 2015, 9:08:09 PM
On 2015-10-09 18:19, Craig A. Berry wrote:

> You perhaps missed Clair Grant's recent message saying, "The instruction
> set decoder will be done within the next few days; 30,000 lines of code
> for the implementation and test suite."

I read that more in the context of being able to prepare x86 code on an
Itanium, as opposed to being the Alpha or VAX emulators.

Obviously, if they are able to run an Alpha .EXE natively and provide a
shareable image library that converts Alpha-style calls into x86 native
calls (and then calls the "live" x86 shareable image), this would be
best, since the Alpha apps would be able to run inside the VMS instance
as opposed to an emulated VMS instance/node.

Would make for some interesting shareable image handling.


No matter how it is done, I think it is important for VSI to make it
MUCH easier for VAX and Alpha customers stuck on old apps to move to x86
VMS compared to having to rewrite/repurchase totally new software on
Linux/Windows.



With regards to the graphics drivers:

Can we assume that the very basic VGA driver will be supported during
boot, with roughly VT52 emulation once you log in (as I recall)? I know
I have asked this question before, but what sort of character-cell
console is provided by modern x86 servers? Will VMS's lack of video
support restrict server models to only those that have a serial console
(either a telnet/ssh console, an IPMI SOL console, or something akin to this)?

Does the OS care whether the serial console is a telnet/physical serial
port, SSH, IPMI SOL, or whatever? Does EFI totally isolate the OS
instance from the type of serial console being used, even once boot has
completed?

Johnny Billquist

Oct 10, 2015, 6:09:22 AM
On 2015-10-10 03:07, JF Mezei wrote:
> On 2015-10-09 18:19, Craig A. Berry wrote:
>
>> You perhaps missed Clair Grant's recent message saying, "The instruction
>> set decoder will be done within the next few days; 30,000 lines of code
>> for the implementation and test suite."
>
> I read that more in the context of being able to prepare x86 code on an
> Itanium, as opposed to being the Alpha or VAX emulators.
>
> Obviously, if they are able to run an Alpha .EXE natively and provide a
> shareable image library that converts Alpha-style calls into x86 native
> calls (and then calls the "live" x86 shareable image), this would be
> best, since the Alpha apps would be able to run inside the VMS instance
> as opposed to an emulated VMS instance/node.
>
> Would make for some interesting shareable image handling.
>
>
> No matter how it is done, I think it is important for VSI to make it
> MUCH easier for VAX and Alpha customers stuck on old apps to move to x86
> VMS compared to having to rewrite/repurchase totally new software on
> Linux/Windows.

I don't understand what you are getting at.
Current users of VMS on VAX or Alpha can already run it on an emulator.
There is nothing VSI needs to or can do here. It already works. Any port of
VMS to x86-64 will not change anything here. Those stuck on old
architectures will be stuck with those old architectures, no matter what
VSI does. And running simh, or some other simulator, is just an
application on the host machine. Nothing for VSI to do, period.

Johnny

--
Johnny Billquist || "I'm on a bus
|| on a psychedelic trip
email: b...@softjar.se || Reading murder books
pdp is alive! || tryin' to stay hip" - B. Idol

Craig A. Berry

Oct 10, 2015, 10:04:43 AM
On 10/10/15 5:09 AM, Johnny Billquist wrote:
> On 2015-10-10 03:07, JF Mezei wrote:
>> On 2015-10-09 18:19, Craig A. Berry wrote:
>>
>>> You perhaps missed Clair Grant's recent message saying, "The instruction
>>> set decoder will be done within the next few days; 30,000 lines of code
>>> for the implementation and test suite."
>>
>> I read that more in the context of being able to prepare x86 code on an
>> Itanium, as opposed to being the Alpha or VAX emulators.

I think you're talking about a cross compiler. It sure sounds to me like
he's talking about the dynamic binary translator, mentioned under the
x86_64 port in the right-hand column of the second page of the current
roadmap, i.e., something like Apple's Rosetta:

<https://en.wikipedia.org/wiki/Rosetta_(software)>
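For flavor, here is a toy C sketch of the dynamic-translation idea. This is
emphatically not VSI's translator; the guest opcodes and handlers are
invented for the example. Guest instructions are decoded once into host
handlers, cached, and re-executed from the cache, where a real
VEST/AEST-style translator would emit native code instead of calling
handlers:

/* Toy illustration of dynamic binary translation: guest instructions
 * are decoded once into host function pointers ("translated" slots),
 * cached, and re-executed from the cache thereafter.  The guest ISA,
 * opcodes, and handlers are invented for this example.
 */
#include <stdio.h>

typedef struct { long acc; int pc; } GuestCpu;
typedef void (*TransFn)(GuestCpu *, int operand);

static void op_add (GuestCpu *c, int n) { c->acc += n; c->pc++; }
static void op_mul (GuestCpu *c, int n) { c->acc *= n; c->pc++; }
static void op_halt(GuestCpu *c, int n) { (void)n; c->pc = -1; }

/* Two-byte guest instructions: opcode, operand. */
static const unsigned char guest_code[] = {
    0x01, 5,        /* ADD 5  */
    0x02, 3,        /* MUL 3  */
    0xFF, 0         /* HALT   */
};

int main(void)
{
    /* Translation cache: one host handler per guest instruction slot. */
    TransFn cache[sizeof guest_code / 2] = { 0 };
    GuestCpu cpu = { 0, 0 };

    while (cpu.pc >= 0) {
        if (!cache[cpu.pc]) {           /* translate on first visit */
            switch (guest_code[cpu.pc * 2]) {
            case 0x01: cache[cpu.pc] = op_add;  break;
            case 0x02: cache[cpu.pc] = op_mul;  break;
            default:   cache[cpu.pc] = op_halt; break;
            }
        }
        cache[cpu.pc](&cpu, guest_code[cpu.pc * 2 + 1]);
    }
    printf("acc = %ld\n", cpu.acc);     /* (0 + 5) * 3 = 15 */
    return 0;
}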

> I don't understand what you are getting at.
> Current users of VMS on VAX or Alpha can already run it on an emulator.
> There is nothing VSI needs to or can do here. It already works. Any port of
> VMS to x86-64 will not change anything here. Those stuck on old
> architectures will be stuck with those old architectures, no matter what
> VSI does. And running simh, or some other simulator, is just an
> application on the host machine. Nothing for VSI to do, period.

And yet they have already done it, apparently, and have been saying they
would do it since the day the existence of the company was publicly
announced. Why do you think whole-system emulation is the only thing that
can be done (or would be desirable to do)?

Stephen Hoffman

Oct 10, 2015, 10:28:08 AM
On 2015-10-09 20:25:24 +0000, David Froble said:

> I won't disagree with some of your suggestions. If SimH or similar was
> able to access disks directly, there might be some benefit. Until
> someone asks for both VMS and VMS on SimH to observe each other's
> locks, and such. Maybe the container disks aren't such a bad idea
> after all ....

It'd probably be easier and faster to cluster the boxes. That's
existing and documented and with more than a few customers having used
it. Mixed-architecture clustering is also a capability that VSI has
stated they're planning to support. So... clustering does (most of)
what's requested, and doesn't add to the VSI workload.

Now clustering does not support the requested cross-compiler and
cross-linker features, and — if past ports are any guide — I'd expect
VSI would not be looking to maintain those tools. Not once the native
compilers and the native linker are available, that is.

Previous OpenVMS ports did use cross tools, but only until the native
tools became available. Cross tools have some subtle considerations,
where you have to tell the C compiler to go look at the target CRTL and
not the host CRTL, and where the #ifdef stuff is all based on the
target mode and not the host, and where the DCL still has to deal with
the host and the target architectures. Makes things... complex. Much
code got stripped out, once the compilers and the linker were native.
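As a sketch of that target-versus-host point, consider the architecture
conditionals a cross build leans on. The __ALPHA and __ia64 macros are ones
the DEC/HP compilers have historically predefined on those targets, and
__x86_64__ is the usual Clang/GCC spelling; whether VSI's tools keep these
exact names is an assumption here, not a confirmed fact:

/* Schematic target-architecture conditionals for a cross build.
 * __ALPHA and __ia64 are macros the DEC/HP C compilers have
 * historically predefined on those targets; __x86_64__ is the usual
 * Clang/GCC macro.  Whether VSI's x86-64 compiler keeps these exact
 * names is an assumption.  The conditionals must reflect the TARGET
 * the compiler generates code for, never the build host a cross
 * compiler happens to run on.
 */
#include <stdio.h>

static const char *target_arch(void)
{
#if defined(__x86_64__)
    return "x86-64";
#elif defined(__ia64)
    return "IA-64";
#elif defined(__ALPHA)
    return "Alpha";
#else
    return "unknown";
#endif
}

int main(void)
{
    printf("compiled for: %s\n", target_arch());
    return 0;
}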

VSI is starting with a complete and very modular compiler chain, and
with an operating system environment that's already using ELF and
DWARF. Whether they have decided to port over the existing and custom
Itanium linker, or to use and extend the LLVM lld linker, we shall
(eventually) learn. I'd hope that they'd use lld.

VSI has previously discussed image translation plans, but that wasn't a
feature of this year's core Boot Camp presentations. They're focused
on getting the base OS ported.

Image translation makes for problematic maintenance, in my experience.
When it works, it works. When it doesn't or when there's a bug in the
original image, now you have one or two or maybe three translations to
wade through. It's obviously a near-last-resort approach for a port,
short of either rewriting the code, or reversing the binaries.

Folks that are still on VAX and Alpha are unlikely to drop what they're
doing and port to x86-64, and this for any of various reasons. The
servers and software might be part of some bureaucratic evaluation or
contract configurations, they might have very specific hardware I/O
requirements, no staff or staff working on higher-priority projects,
dependencies on binary code with no migration available and for any of
various reasons, either plans for or a migration off of OpenVMS, the
ever-popular "it works; why mess with it?" and/or "no money" and/or "no
management buy-in", etc.


> Without VMS on x86, VMS is basically dead. So, I cannot see anything
> more important than the port.

Other than sustainable revenue before the cash runs out, of course.




Others...

re: On 2015-10-09 20:53:17 +0000, Dirk Munk...

Yes, IPv6. The VSI TCP/IP stack would be a fairly useless replacement
IP stack if that was lacking, given the end of availability of IPv4
addresses. The product that VSI is replacing — HP TCP/IP Services —
does have IPv6 support, though with some substantial limitations.
We'll know more as the VSI product becomes available.

re: VESA BIOS. This still means drivers. "For the foreseeable future,
VMS is a server" and "no graphics drivers" seemed pretty clear.
They're also based on UEFI, which means
<http://www.intel.com/content/www/us/en/intelligent-systems/intel-embedded-graphics-drivers/faq-bios-firmware.html>
is in play for console-level access, and not legacy BIOS. There are
also integrated graphics controllers with open documentation
(integrated Intel HD and Iris graphics, other add-ons), so dealing with
legacy graphics interfaces with legacy resolutions is unlikely to be a
viable path forward. Maybe good enough for a boot-time splash screen,
but beyond that...



re: On 2015-10-09 22:19:39 +0000, Craig A. Berry:

The decompiler that has been mentioned is used in various parts of the
operating system, and it's undoubtedly something the VSI team wants to
make crashes easier to decode. There'll be more than a few crashdumps
and process dumps to look at, during a port.

On the topic of crashes... Once the port is online, I'd hope that VSI
will eventually get these system and process crashes into a database
for analysis; that they'll also start work on encrypting and uploading
these crashes (with permission) either to a customer dump server or to
servers at VSI HQ. User-visible stackdumps and server-local crashdumps
and "unfiltered" dumps just don't scale as well, and they're more work
to deal with. q.v. Canasta / CCAT, etc.



re: On 2015-10-10 00:00:53 +0000, terry+go...@tmk.com:

Unshared disk I/O isn't that different from unshared container file
I/O. Anybody that's capable of writing an emulator is easily capable
of disk and file I/O.

However, I'd be very cautious around shared-write physical drives and
shared-write container file I/O in an emulated environment. This can
easily be the emulator developers' version of the mess that can arise
with controller-level clustering or controller-based backup. It's not
as easy as it looks, as you can't always trust what the storage and the
host operating system will do with the ordering-level details of your
emulator I/O, and — particularly for backups and any asynchronous
processing that the environment might provide — can easily run afoul of
state information that the lower parts of the I/O stack don't have
access to. Inconsistencies can ensue, whether it's due to some
low-level sequencing differences or due to a crash with data that's
resident in the underlying host I/O caches and that VMS expected was
written to disk.
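Here is a minimal POSIX-flavored sketch of that cache hazard (the container
file name is invented for the example). An emulator that acknowledges a
guest write when write() returns has only placed the data in the host's
page cache, so a host crash can lose data that VMS believed was on disk;
forcing durability first, with fsync() here or O_DSYNC at open, is the
slower but honest alternative:

/* POSIX sketch of the host-cache hazard; "container.dsk" is an
 * invented example path.  write() alone leaves the guest's block in
 * the host page cache; fsync() forces it to stable storage before the
 * emulator reports I/O completion to the guest.
 */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    char block[512] = "guest disk block";
    int fd = open("container.dsk", O_WRONLY | O_CREAT, 0644);
    if (fd < 0) { perror("open"); return 1; }

    /* After this returns, the data may exist only in host RAM... */
    if (write(fd, block, sizeof block) != (ssize_t)sizeof block)
        perror("write");

    /* ...so flush before telling the guest its I/O completed. */
    if (fsync(fd) != 0)
        perror("fsync");

    close(fd);
    puts("write acknowledged to guest only after fsync");
    return 0;
}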

You'll want to have or to obtain a vendor support statement, if you go
shared-write clustered on either the containers or the devices.


--
Pure Personal Opinion | HoffmanLabs LLC

Camiel Vanderhoeven

Oct 10, 2015, 11:26:27 AM
On Saturday, October 10, 2015 at 4:04:43 PM UTC+2, Craig A. Berry wrote:
We're talking about very different things here. The instruction set decoder that's nearly done is a decoder (disassembler) for the x86 instruction set, for use by SDA, DEBUG, etc.
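For a taste of what such a decoder involves, here is a toy C sketch that
recognizes a few one-byte x86 opcodes. It is not the VSI code, just an
illustration; a real decoder must also handle prefixes, ModRM/SIB bytes,
and the multi-byte opcode maps:

/* A toy taste of an x86 instruction-set decoder: the real thing
 * handles prefixes, ModRM/SIB, and multi-byte opcodes; this sketch
 * only recognizes a few one-byte opcodes to show the dispatch shape.
 */
#include <stdio.h>

static const char *reg64[8] =
    { "rax", "rcx", "rdx", "rbx", "rsp", "rbp", "rsi", "rdi" };

/* Decode one instruction at code[i]; return the byte count consumed. */
static int decode_one(const unsigned char *code, int i)
{
    unsigned char op = code[i];

    if (op == 0x90) { printf("%02x    nop\n", op);  return 1; }
    if (op == 0xC3) { printf("%02x    ret\n", op);  return 1; }
    if (op == 0xCC) { printf("%02x    int3\n", op); return 1; }
    if (op >= 0x50 && op <= 0x57) {              /* push r64 */
        printf("%02x    push %s\n", op, reg64[op - 0x50]);
        return 1;
    }
    printf("%02x    .byte (unhandled)\n", op);
    return 1;
}

int main(void)
{
    /* push rbp; nop; push rdi; int3; ret */
    const unsigned char code[] = { 0x55, 0x90, 0x57, 0xCC, 0xC3 };
    int i = 0, n = (int)sizeof code;

    while (i < n)
        i += decode_one(code, i);
    return 0;
}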

To run VAX and Alpha executables, we're working on a dynamic translator, which is more like what VEST and AEST do than what SIMH or another whole-system emulator does.

Camiel.

Craig A. Berry

Oct 10, 2015, 12:28:30 PM
On 10/10/15 10:26 AM, Camiel Vanderhoeven wrote:
> We're talking about very different things here. The instruction set
> decoder that's nearly done is a decoder (disassembler) for the x86
> instruction set, for use by SDA, DEBUG, etc.
>
> To run VAX and Alpha executables, we're working on a dynamic
> translator, which is more like what VEST and AEST does than what SIMH
> or another whole-system emulator does.

Thanks for the clarification.

Stephen Hoffman

Oct 10, 2015, 12:45:53 PM
On 2015-10-10 15:26:24 +0000, Camiel Vanderhoeven said:

> We're talking about very different things here. The instruction set
> decoder that's nearly done is a decoder (disassembler) for the x86
> instruction set, for use by SDA, DEBUG, etc.

Somewhat surprised y'all seem to have written your own disassembler
here, what with tools like Hopper, Snowman, Capstone Engine and lldb
around.

<http://www.capstone-engine.org>
<https://github.com/smorr/Mach-O-Scope>
<http://hopperapp.com>
<http://derevenets.com>
<https://www.hex-rays.com/products/ida/index.shtml>
<http://lldb.llvm.org>

Various of these have BSD-MIT licenses, too.

I'd hope that any new disassembler at least considered these
alternatives to creating custom code, and was based on one or more of
these and not wholly local code, but that's fodder for another
discussion.

JF Mezei

Oct 10, 2015, 1:12:05 PM
On 2015-10-10 10:04, Craig A. Berry wrote:

> And yet they have already done it, apparently, and have been saying they
> would do it since the day the existence of the company was publicly
> announced. Why you think whole-system emulation is the only thing that
> can be done (or would be desirable to do)?


My understanding was that a priority was to do the IA64 instruction
emulator. And I don't think I heard about a VAX instruction emulator.

It's a tough question. That 80% of customers who are still on Alpha/VAX
are likely "dormant" customers, a tough bunch to wake up and get to
invest in VMS again.

The 20% on IA64 represent fewer customers, but the fact that they recently
invested in hardware probably points to them being easier to acquire as
x86 customers. But this represents a smaller piece of the pie.

Camiel Vanderhoeven

Oct 10, 2015, 1:17:19 PM
On Saturday, October 10, 2015 at 6:45:53 PM UTC+2, Stephen Hoffman wrote:
> On 2015-10-10 15:26:24 +0000, Camiel Vanderhoeven said:
>
> > We're talking about very different things here. The instruction set
> > decoder that's nearly done is a decoder (disassembler) for the x86
> > instruction set, for use by SDA, DEBUG, etc.
>
> Somewhat surprised y'all seem to have written your own disassembler
> here, what with tools like Hopper, Snowman, Capstone Engine and lldb
> around.

It was a nice way to get more familiar with the instruction set, and it will help the author of the disassembler to have a better feel for what SDA and the debugger will need for x86.

JF Mezei

Oct 10, 2015, 1:40:33 PM
On 2015-10-10 10:28, Stephen Hoffman wrote:

> Folks that are still on VAX and Alpha are unlikely to drop what they're
> doing and port to x86-64, and this for any of various reasons.

But if you can point them to a solution that runs on modern, cheaper
hardware with little trouble, because VAX/Alpha code can run unchanged
even if they don't have the source code, then perhaps this can grab
their attention. VMS never really provided this facility in a public
fashion. (Was VEST ever part of the SPD on Alpha VMS?)

One way to get their attention is with the expression "spare parts".

Can they still easily get disks for their VAX? Replacement power
supplies? Same for Alpha.

And this is where VSI will have to sharpen its pencil. In order to make
it attractive for VAX/Alpha customers to migrate, it will have to be not
only technically easy but affordable too. If customers have to
repurchase all the licenses, that is way too expensive.

And consider discontinued products. Say I have Message Router on VAX.
Licenses are no longer issued for it. So my VAX license should be
loadable on the x86 box, and the license manager should agree to load it
even if it is for the wrong architecture, has insufficient units, or
whatnot. And there are legal implications. If the abandoned product is
still owned by HP, VSI would have to get permission to transfer licenses
for these abandoned products without HP involvement.

VSI is small enough that it can probably do such deals privately so a
public precedent is not set. But private deals don't attract attention
of dormant customers who may be awakened to news that they can transfer
to an x86 box for free (if they buy support or whatever).

Yeah, that means not getting much cash, but re-acquiring these customers
who have been dormant for over a decade and getting them to start paying
for support, with chances of upgrades etc. in the long term, may be a very
good investment for VSI. Whether they can afford to do this long term is a
different question.

> VMS is a server" and "no graphics drivers" seemed pretty clear.

I am not opposed to this early on, as long as X11 is still in there so
that you can run VMS X apps and target the display at your Mac. And that
doesn't require reverse engineering a graphics card like you guys had
to do (RIP FredK).


> However, I'd be very cautious around shared-write physical drives and
> shared-write container file I/O in an emulated environment. This can
> easily be the emulator developers' version of the mess that can arise
> with controller-level clustering or controller-based backup.

Tweaking the emulator may provide support for shared access to disks, so
that the host x86 VMS instance would treat DKA200: as a shared-access
disk which the VAX VMS machine also accesses (via SIMH, but the x86
instance doesn't have to know this). This way, both instances are aware
they are accessing the same drive, so they do the locking and device
naming correctly.


JF Mezei

Oct 10, 2015, 1:43:29 PM
On 2015-10-10 11:26, Camiel Vanderhoeven wrote:

> To run VAX and Alpha executables, we're working on a dynamic translator, which is more like what VEST and AEST do than what SIMH or another whole-system emulator does.

OK. So both VAX and Alpha "Rosetta" will be provided?


Is this being developed at the same time as the x86 VMS port, or is it just at
a preliminary stage, with the real work beginning once the port of VMS is
done?


Camiel Vanderhoeven

Oct 10, 2015, 1:54:59 PM
On Saturday, October 10, 2015 at 7:43:29 PM UTC+2, JF Mezei wrote:
> On 2015-10-10 11:26, Camiel Vanderhoeven wrote:
>
> > To run VAX and Alpha executables, we're working on a dynamic translator, which is more like what VEST and AEST do than what SIMH or another whole-system emulator does.
>
> OK. So both VAX and Alpha "Rosetta" will be provided?

Certainly Itanium and Alpha; VAX is a maybe.

> Is this being developed at the same time as the x86 VMS port, or is it just at
> a preliminary stage, with the real work beginning once the port of VMS is
> done?

I wouldn't say "once it's done", but it's not part of the first things we're working on now.

Camiel.

JF Mezei

Oct 10, 2015, 2:11:07 PM
On 2015-10-10 13:54, Camiel Vanderhoeven wrote:

> Certainly Itanium and Alpha; VAX is a maybe.


I take it our beloved Sue is busy contacting the dormant customers who
are still on VAX/Alpha, to see how easy it would be to get them back as
active customers and migrate them to x86 VMS?



David Froble

Oct 10, 2015, 10:00:22 PM
It's called "marketing" !!!

You got to get your "new" product in front of people, and let them figure "oh
boy, something new, I got to check it out", just as they did when linux and
weendoze and such came out, even though they most likely had something better in
use.

It's time for the pendulum to swing back in the other direction.

Remember, all those "gee wiz" things started small, and not as good as they are
now. Time to tout VMS on x86 as the next great thing to arrive on the scene.

But, ya gotta get the ideas and the product and the hype in front of people. Ya
know, "MARKETING" !

Who knows, might actually work.

No workstation or graphics? No problem, tout the system as a new and better
server, special purpose to be better than what's out there now. It's all in how
you present things. "We need very good special purpose servers."

David Froble

Oct 10, 2015, 10:13:07 PM
JF Mezei wrote:
> On 2015-10-10 10:28, Stephen Hoffman wrote:
>
>> Folks that are still on VAX and Alpha are unlikely to drop what they're
>> doing and port to x86-64, and this for any of various reasons.
>
> But if you can point them to a solution that runs on modern, cheaper
> hardware with little trouble, because VAX/Alpha code can run unchanged
> even if they don't have the source code, then perhaps this can grab
> their attention. VMS never really provided this facility in a public
> fashion. (Was VEST ever part of the SPD on Alpha VMS?)
>
> One way to get their attention is with the expression "spare parts".
>
> Can they still easily get disks for their VAX? Replacement power
> supplies? Same for Alpha.

Easily? Can't do so, even if it's HARD!

> And this is where VSI will have to sharpen its pencil. In order to make
> it attractive for VAX/Alpha customers to migrate, it will have to be not
> only technically easy but affordable too. If customers have to
> repurchase all the licenses, that is way too expensive.

I've said it before, recurring service revenue is the way to go. Been reading
about license transfers at 50%. I doubt that will be acceptable to many. Now,
how about no license fee, but pre-paid service for 1, 2, or 3 years? And then
additional service fees after that. One way to raise money now. And a way to
get people willing to consider recurring service fees.

Neil Rieck

Oct 11, 2015, 7:54:30 AM
On Saturday, October 10, 2015 at 10:13:07 PM UTC-4, David Froble wrote:
[...snip...]
>
> I've said it before, recurring service revenue is the way to go. Been
> reading about license transfers at 50%. I doubt that will be acceptable
> to many. Now, how about no license fee, but pre-paid service for 1, 2, or 3
> years? And then additional service fees after that. One way to raise money
> now. And a way to get people willing to consider recurring service fees.
>
That makes a lot of sense and is similar to how MariaDB works (acquire it for free but pay for a support contract). This is also how Oracle markets MySQL although many people are unaware of the fact that this is now done with two code bases (you need a support contract in order to acquire the better code base).

People forget that high license costs in the mini-computer world (30 years ago) were associated with free future support, which sometimes (depending upon what you paid) included free "new version" rights in perpetuity. Likewise, in those days you could also buy life-time memberships to many gymnasiums, but proprietors quickly realized that selling a large number of these contracts was fatal because there was no monthly cash flow to keep the doors open. They soon realized that charging $20 per month was the superior business model.

Now I am in no way suggesting that HPE would ever allow new software to be given away without a license. But as you suggested, it would be wise for VSI to offer support contracts without paid license transfers.

I do think HPE and VSI should take OpenVMS licensing slowly (5 years?) in the direction of the open-source models (e.g. RHEL).

Neil Rieck
Waterloo, Ontario, Canada.
http://www3.sympatico.ca/n.rieck/OpenVMS.html


Phillip Helbig (undress to reply)

Oct 11, 2015, 10:32:25 AM
In article <e5911$5618293d$5ed4324a$41...@news.ziggo.nl>, Dirk Munk
<mu...@home.nl> writes:

> Any news about the CLI
> style? Standard VMS DCL style, or Unix style, or both?

If it is neither standard DCL nor both, the port is dead in its tracks.

> No graphics card support. That is what I expected. So if you want a VMS
> laptop,

What about a workstation?

> run a VM, and put VMS on top of it, combined with Windows, or
> Linux, or MacOS. One of the latter three will give you a graphical
> window on VMS.

How hard would it be to have low-end basic graphics support so that one
could run CDE and DECwindows?

BillPedersen

Oct 11, 2015, 12:13:07 PM
On Sunday, October 11, 2015 at 10:32:25 AM UTC-4, Phillip Helbig (undress to reply) wrote:
> In article <e5911$5618293d$5ed4324a$41...@news.ziggo.nl>, Dirk Munk
> <mu...@home.nl> writes:
>
> > Any news about the CLI
> > style? Standard VMS DCL style, or Unix style, or both?
>
> If it is neither standard DCL nor both, the port is dead in its tracks.
>
> > No graphics card support. That is what I expected. So if you want a VMS
> > laptop,
>
> What about a workstation?
>

Not specifically. They believe the focus needs to be on the server components at this time.

> > run a VM, and put VMS on top of it, combined with Windows, or
> > Linux, or MacOS. One of the latter three will give you a graphical
> > window on VMS.
>
>> How hard would it be to have low-end basic graphics support so that one
>> could run CDE and DECwindows?

What was said was that they will not support any "add-on" graphics cards. They do not intend to remove the current single-headed graphics support - an on-board chip with 2D graphics.

Anything other than that would need to be supported by a Partner/OEM.

Bill.

Phillip Helbig (undress to reply)

Oct 11, 2015, 2:44:57 PM
In article <de2717d1-f0e7-4e8e...@googlegroups.com>,
BillPedersen <pede...@ccsscorp.com> writes:

> Not specifically. They believe the focus needs to be on the server
> components at this time.

Sure, but is a low-end graphics card that much extra work?

> What was said was that they will not support any "add-on" graphics
> cards. They do not intend to remove the current single-headed graphics
> support - an on-board chip with 2D graphics.

In that case, it's enough for me to post to comp.os.vms using NEWSRDR in
a DECterm. :-)

JF Mezei

Oct 11, 2015, 3:34:22 PM
On 2015-10-11 10:32, Phillip Helbig (undress to reply) wrote:

> How hard would it be to have low-end basic graphics support so that one
> could run CDE and DECwindows?

Hard.

Fred Kleinsorge (sp?) spent some time here a number of years ago
explaining the hoops and tribulations of reverse engineering graphics
cards because the manufacturers didn't provide specs. It seems to
require a lot of brute-force work, and it is very time consuming to get
something working right.

Note that with Linux having some support for some cards, the drivers
that are open-sourced for some cards would provide VSI with
"inspiration" to do the VMS drivers for those cards.

JF Mezei

Oct 11, 2015, 3:39:36 PM
One of the problems with providing free license transfers is that those are HP
licenses, so VSI would have to negotiate this with HP/HPE/whatever.

One possible way to convince HP:

Say a customer has been off the support contract for that software for more
than 5 years. HP should have no expectation of ever getting any
revenue from them. Agreeing to re-issue the license might result in
that dormant customer buying an HP server, which would be a net win for HP.

(Of course, if the server is bought from HP-PC and the license belongs
to HPE, then the convincing becomes harder.)

Convincing Oracle and other 3rd parties to do these transfers would also
require some fine dining and playing golf with the CEO. Pretty sure Sue
is up to par on this ;-)

Scott Dorsey

Oct 11, 2015, 4:26:17 PM
JF Mezei <jfmezei...@vaxination.ca> wrote:
>On 2015-10-11 10:32, Phillip Helbig (undress to reply) wrote:
>
>> How hard would it be to have low-end basic graphics support so that one
>> could run CDE and DECwindows?
>
>Hard.
>
>Fred Kleinsorge (sp?) spent some time here a number of years ago
>explaining the hoops and tribulations of reverse engineering graphics
>cards because the manufacturers didn't provide specs. It seems to
>require a lot of brute-force work, and it is very time consuming to get
>something working right.

This is sort of true.

All of the cards out there pretty much have a standard VGA mode, which
is used by the BIOS ROMs and at boot time. It is not difficult to
write a driver that uses this mode exclusively and will work on every
current video card made.

However, such a driver is very, very slow. It's fine for running simple
DECwindows, I think, but doing any degree of sophisticated graphics is
out of the question. It just gives you a big memory-mapped display and
everything is done in software.
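Here is a sketch, in C, of what that big memory-mapped display amounts to.
The framebuffer geometry is made up for the example; on real hardware the
base address, pitch, and depth come from the video BIOS. Every glyph, line,
and window border is drawn by the CPU doing stores like this, which is why
such a driver is slow:

/* A dumb linear-framebuffer "driver" in miniature.  The geometry is
 * made up for the example; on real hardware the base, pitch, and
 * depth come from the video BIOS.  Every pixel is a CPU store.
 */
#include <stdint.h>
#include <stdlib.h>

#define FB_WIDTH  1024
#define FB_HEIGHT  768
#define FB_PITCH  (FB_WIDTH * 4)        /* bytes per scanline, 32 bpp */

static void put_pixel(uint8_t *fb, int x, int y, uint32_t rgb)
{
    /* The address arithmetic is the whole driver: y*pitch + x*bpp. */
    *(uint32_t *)(fb + (size_t)y * FB_PITCH + (size_t)x * 4) = rgb;
}

int main(void)
{
    /* Stand-in for the memory-mapped framebuffer. */
    uint8_t *fb = calloc(FB_HEIGHT, FB_PITCH);
    int x, y;

    if (!fb)
        return 1;

    /* Software rendering: a 100x100 white square, pixel by pixel. */
    for (y = 0; y < 100; y++)
        for (x = 0; x < 100; x++)
            put_pixel(fb, x, y, 0x00FFFFFFu);

    free(fb);
    return 0;
}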

As far as sophisticated graphics goes, most of the manufacturers do not
provide specs about their cards' other modes, except for Nvidia. This
is where the reverse-engineering comes in.

>Note that with Linux having some support for some cards, the drivers
>that are open-sourced for some cards would provide VSI with
>"inspiration" to do the VMS drivers for those cards.

The standard Linux driver just uses the VGA mode, but there are some
other higher-performance drivers for other cards. And some vendors
make proprietary Linux drivers available; these cannot be shipped with
most Linux distributions but require a separate installation.
--scott

--
"C'est un Nagra. C'est suisse, et tres, tres precis."

David Froble

Oct 11, 2015, 4:54:05 PM
Phillip Helbig (undress to reply) wrote:
From Steve's site:

Per Clair Grant: "For the foreseeable future, VMS is a server."

They (VSI) appear to have no plans for any GUI stuff.

Time to learn weendoze ???

Johnny Billquist

Oct 11, 2015, 4:57:13 PM
What exactly have they said they have done, or will do, since day one?
You seem to have read something I have totally missed.

Stephen Hoffman

Oct 11, 2015, 5:16:42 PM
On 2015-10-11 20:26:15 +0000, Scott Dorsey said:

> As far as sophisticated graphics goes, most of the manufacturers do
> not provide specs about their cards' other modes, except for Nvidia.
> This is where the reverse-engineering comes in.

The open-spec AMD ATI controllers and the integrated Intel HD and Iris
graphics would provide more than sufficient performance for OpenVMS
graphics, were somebody to create a DECwindows-compatible device driver.

These graphics options and these documents <https://01.org/> simply did
not exist in the era that Fred K. was working in.

As a rough guess, it'd take around six months or so and quite possibly
longer to create a graphics driver, and would require access to
DECwindows and possibly also OpenVMS source code. AFAIK, the driver
development bits for DECwindows were never split out, and the necessary
header files and related pieces weren't packaged for third-party
development. You'd undoubtedly want or need to study several of the
existing drivers anyway. Then there's the whole discussion of
applications and tools and the rest of the chain that'd be expected
here; MMOV or otherwise.

Remote X displays — DECwindows — can be used for those that want a much
more heavyweight interface for DECterm, too. Boot up a Linux or BSD
guest in the VM, and send your output over there. Or use DECwindows
on the existing and supported OpenVMS I64 servers via the management
processor, or via some other existing and supported graphics controller.


But all of this is moot, as VSI has stated there will be no new drivers
and no support for workstations, desktops, or laptops.


Per VSI, OpenVMS is for servers, and for the foreseeable future. Not
workstations. Not desktops. Not laptops.

Per VSI, the priority is what is on the VSI roadmap, and centrally the
x86-64 port.

That means not spending six months or more on something that's not
central to the roadmap and to the port.


Think there is a market for this DECwindows graphics driver? Figure
out how you're going to recoup your costs and package and license and
sell and support this driver, and then petition VSI for access to the
necessary bits.


===

That OpenVMS is for servers started in earnest back in the mid-1990s,
with DEC's Windows Affinity project. More than a little work went into
Affinity, too. Here's an example of some of that work,
<http://www.compaq.com/info/CU9503/CU9503MW.DOC> and this involved
moving the front-end and the development over to Windows. From that
.DOC file...

"OpenVMS Enterprise Toolkit for Visual Studio

Highlights
* Supports all of the OpenVMS compiler languages, including C, C++ and
DIGITAL Fortran
* Online access to full range of OpenVMS programming and product documentation
* Integrated debugging support using the OpenVMS Client Debugger and
Developer Studio
* Source code browsing for OpenVMS source files
* Access to source code control (CMS, etc.) on OpenVMS system
integrated into Developer Studio
Description
OpenVMS Enterprise Toolkit for Visual Studio (Enterprise Toolkit) is
software that lets the developer, working in a PC environment, create
applications for either an OpenVMS or Windows NT system. The developer
can use C, C++, Fortran, COBOL, BASIC, Pascal, and ADA to write,
compile, and debug applications in the familiar PC environment and run
them in an OpenVMS computing environment.
The Enterprise Toolkit represents an important step in Compaq's
commitment to mixed OpenVMS/Windows NT development and deployment. The
Enterprise Toolkit provides an attractive software development
environment in which development can take place on a PC for deployment
on OpenVMS. Now one common software development environment can address
core software development needs for PC-based and OpenVMS applications,
and client/server solutions, while creating OpenVMS applications and
maintaining existing OpenVMS applications..."

Stephen Hoffman

Oct 11, 2015, 5:32:52 PM
On 2015-10-11 20:54:00 +0000, David Froble said:

> They (VSI) appear to have no plans for any GUI stuff.

Which is sensible. The folks at VSI are burning through more than a
little cash with the OpenVMS I64 updates and the x86-64 port, after all.

> Time to learn weendoze ???

Or OS X, BSD or Linux.

While learning, there's also iOS and Android and mobile in general, as
there are more than a few of those clients around.

FWIW and for those with access to the OpenVMS Boot Camp 2015
presentation files — when those become available — Guy Peleg presented
a session on connecting iOS to Oracle via REST. A good presentation,
and the basics are quite easy with the available Oracle tools.

Craig A. Berry

Oct 11, 2015, 6:26:18 PM
On 10/11/15 3:57 PM, Johnny Billquist wrote:
> On 2015-10-10 16:04, Craig A. Berry wrote:
>> On 10/10/15 5:09 AM, Johnny Billquist wrote:
>>> I don't understand what you are getting at.
>>> Current users of VMS on VAX or Alpha can already run it on an emulator.
>>> There is nothing VSI needs to or can do here. It already works. Any port of
>>> VMS to x86-64 will not change anything here. Those stuck on old
>>> architectures will be stuck with those old architectures, no matter what
>>> VSI does. And running simh, or some other simulator, is just an
>>> application on the host machine. Nothing for VSI to do, period.
>>
>> And yet they have already done it, apparently, and have been saying they
>> would do it since the day the existence of the company was publicly
>> announced. Why do you think whole-system emulation is the only thing that
>> can be done (or would be desirable to do)?
>
> What exactly have they said they have done, or will do, since day one?
> You seem to have read something I have totally missed.

What they have said they will do is create a dynamic binary translator.
It's actually listed as a "Binary Static/Dynamic Translator" on the
current roadmap and I believe has been on every roadmap they've ever
published in some form. I think it's also been discussed here more than
once.

As Camiel corrected me up-thread, this is not the thing that has been
recently completed. That was the disassembler for use by the debugger
and SDA; I heard "instruction set decoder" and mixed it up with the
binary translator.

Dirk Munk

Oct 12, 2015, 6:49:58 AM
Phillip Helbig (undress to reply) wrote:
> In article <e5911$5618293d$5ed4324a$41...@news.ziggo.nl>, Dirk Munk
> <mu...@home.nl> writes:
>
>> Any news about the CLI
>> style? Standard VMS DCL style, or Unix style, or both?
>
> If it is neither standard DCL nor both, the port is dead in its tracks.
>
>> No graphics card support. That is what I expected. So if you want a VMS
>> laptop,
>
> What about a workstation?

Of course, same principle

>
>> run a VM, and put VMS on top of it, combined with Windows, or
>> Linux, or MacOS. One of the latter three will give you a graphical
>> window on VMS.
>
> How hard would it be to have low-end basic graphics support so that one
> could run CDE and DECwindows?
>

Yes, you could use the VESA BIOS extensions:

https://en.wikipedia.org/wiki/VESA_BIOS_Extensions

Works with any decent graphics card.
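For illustration, the handful of VBE mode-info fields a dumb framebuffer
driver actually consumes could be modeled as below. The field names are
paraphrased from the VBE spec and the values are invented, so treat this as
a sketch rather than as the spec:

/* Illustrative subset of the VBE mode-info block; field names are
 * paraphrased from the VBE spec and the values below are invented.
 * A VESA-generic driver queries this block from the video BIOS
 * (INT 10h, AX=4F01h, in boot-time real mode), maps the linear
 * framebuffer, and draws with plain stores.
 */
#include <stdint.h>
#include <stdio.h>

struct vbe_mode_info {                  /* illustrative subset */
    uint16_t bytes_per_scanline;
    uint16_t x_resolution;
    uint16_t y_resolution;
    uint8_t  bits_per_pixel;
    uint32_t phys_base_ptr;             /* linear framebuffer address */
};

/* Byte offset of pixel (x, y) within the linear framebuffer. */
static uint32_t pixel_offset(const struct vbe_mode_info *mi, int x, int y)
{
    return (uint32_t)y * mi->bytes_per_scanline
         + (uint32_t)x * (mi->bits_per_pixel / 8);
}

int main(void)
{
    struct vbe_mode_info mi = { 4096, 1024, 768, 32, 0xE0000000u };

    printf("pixel (10, 2) lives at framebuffer offset %u\n",
           pixel_offset(&mi, 10, 2));
    return 0;
}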

Dirk Munk

Oct 12, 2015, 7:11:22 AM
BillPedersen wrote:
> On Sunday, October 11, 2015 at 10:32:25 AM UTC-4, Phillip Helbig (undress to reply) wrote:
>> In article <e5911$5618293d$5ed4324a$41...@news.ziggo.nl>, Dirk Munk
>> <mu...@home.nl> writes:
>>
>>> Any news about the CLI
>>> style? Standard VMS DCL style, or Unix style, or both?
>>
>> If it is neither standard DCL nor both, the port is dead in its tracks.
>>
>>> No graphics card support. That is what I expected. So if you want a VMS
>>> laptop,
>>
>> What about a workstation?
>>
>
> Not specifically. They believe the focus needs to be on the server components at this time.
>
>>> run a VM, and put VMS on top of it, combined with Windows, or
>>> Linux, or MacOS. One of the latter three will give you a graphical
>>> window on VMS.
>>
>> How hard would it be to have low-end basic graphics support so that one
>> could run CDE and DECwindows?
>
> What was said was that they will not support any "add-on" graphics cards.

Interesting choice of words. Now of course Intel and AMD have CPUs with
embedded GPUs; these are not "add-on" graphics cards. Of course these are
not server CPUs, they are desktop and notebook CPUs.

Will you support those embedded GPUs, and will you support this class of
desktop/notebook CPUs?

clairg...@gmail.com

Oct 12, 2015, 8:26:31 AM
On Monday, October 12, 2015 at 7:11:22 AM UTC-4, Dirk Munk wrote:
> Interesting choice of words. Now of course Intel and AMD have CPUs with
> embedded GPUs; these are not "add-on" graphics cards. Of course these are
> not server CPUs, they are desktop and notebook CPUs.
>
> Will you support those embedded GPUs, and will you support this class of
> desktop/notebook CPUs?
>

Not in the foreseeable future. We have to pick our spots very carefully and there is a small handful of things we must deliver on or there will not be a future. Maybe someday we will get to the point of considering things like GPUs but for the time being we need to concentrate our efforts on the make-or-break items.

David Froble

Oct 12, 2015, 9:19:21 AM
Yes, just that.

And, if / when you do start looking at GPUs, there would be much more benefit in
using the embedded GPUs on the CPU chips to enhance processing speed than in
using them for graphics. It's my understanding they're good for floating point
numbers.

I'm sure you know your customer base better than I do. I'm thinking there are
few to no serious (paying) users that are doing any graphics on VMS.

Kerry Main

Oct 12, 2015, 9:20:04 AM
As Gretzky (hockey player) once said - skate to where the puck is going to
be - not where it is right now.

To their credit, the reason RHEL was so successful in breaking into many
large companies was that they understood the difference between OPEX
and CAPEX in the customers' budget planning.

CAPEX (up front expensive licenses) requires senior Customer VP approvals
and gets lots of visibility with the CFO and related players.

OPEX (service fees) is a recurring annual cost that gets buried in the
annual operations budget as just another line item. Hence, the cost
seldom gets questioned outside of the ops manager meetings.

Another consideration against high-cost up-front licenses - while the
focus in the last 10 years has been on reducing HW-associated costs, the
next 10 years will be all about reducing SW-related costs.

Hence, imho, companies like Oracle and SAP are in for some tough times
as customers, faced with exponentially increasing pressures to reduce
costs, will find alternate SW solutions that may not offer all the bells and
whistles of the Oracle and SAP offerings, but in their minds the alternate
solutions will be "good enough" (akin to how the Windows/Linux OS
offerings stole market share from other, much more expensive OSs).

These companies may not convert their current applications from SAP or
Oracle, but all their new app development will be with solutions that are
based on much cheaper technologies - a longer-term death spiral for
current high-cost SW providers.

Regards,

Kerry Main
kerry dot main at starkgaming dot com






Kerry Main

Oct 12, 2015, 9:45:05 AM
There are some third-party, graphics-intensive workstation apps developed for VMS.

However, getting graphics card support developed for niche HW like Alpha and/or
Integrity has always been a tough sell.

Once OpenVMS is on the x86-64 platform, there is always the opportunity
for a third party and/or partner to develop a graphics driver for one or
more of the available high-end HW graphics cards - perhaps with an added
security slant?

Same server/laptop HW, same graphics card, just different driver. That's a
much easier solution to put in place for a third party.

Microsoft pretty much has the same model. Develop the platform, make
it easy for vendors to offer added value products and let their partners
compete.

Regards,

Kerry Main
Chief Information Officer (CIO)
Stark Gaming Inc.
613-797-4937 (cell)
613-599-6261 (fax)
Kerry...@starkgaming.com
http://www.starkgaming.com





Dirk Munk

Oct 12, 2015, 9:51:24 AM
David Froble wrote:
> clairg...@gmail.com wrote:
>> On Monday, October 12, 2015 at 7:11:22 AM UTC-4, Dirk Munk wrote:
>>> Interesting choice of words. Now of course Intel and AMD have CPUs
>>> with embedded GPUs; these are not "add-on" graphics cards. Of course
>>> these are not server CPUs, they are desktop and notebook CPUs.
>>>
>>> Will you support those embedded GPUs, and will you support this class
>>> of desktop/notebook CPUs?
>>>
>>
>> Not in the foreseeable future. We have to pick our spots very
>> carefully and there is a small handful of things we must deliver on or
>> there will not be a future. Maybe someday we will get to the point of
>> considering things like GPUs but for the time being we need to
>> concentrate our efforts on the make-or-break items.
>
> Yes, just that.
>
> And, if / when you do start looking at GPUs, there would be much more
> benefit in using the embedded GPUs on the CPU chips to enhance
> processing speed than in using them for graphics. It's my understanding
> they're good for floating point numbers.

Very true, the floating point processing speed of these GPUs is
enormous. If I'm not mistaken, the instructions for floating point
processing on these GPUs are already part of standard x86 C compilers,
but I'm not 100% sure.

>
> I'm sure you know your customer base better than I do. I'm thinking
> there are few to no serious (paying) users that are doing any graphics
> on VMS.

No, but a VMS workstation for developing VMS applications is not such a
strange idea, and then graphics may come in handy. Without graphics you
need to use X-Windows on another operating system to get VMS graphics.

Stephen Hoffman

Oct 12, 2015, 10:06:32 AM
On 2015-10-12 13:19:18 +0000, David Froble said:

> And, if / when you do start looking at GPUs, there would be much more
> benefit in using the embedded GPUs on the CPU chips to enhance
> processing speed than in using them for graphics. It's my understanding
> they're good for floating point numbers.

GPUs and such are often useful for SIMD; for cases where a single or a
few instructions are applied across a whole lot of data. Quickly.
Usually floating point, but more than a few of these can also support
integer.

Some of the available compute engines are analogous to what the VAX
Vector Instruction subset once provided, too.
<http://manx.classiccmp.org/collections/antonio/dec/MDS-2000-01/cd1/VAX/60VAAOM1.PDF>

<http://www.itec.suny.edu/scsys/vms/ovmsdoc073/v73/4515/4515pro_036.html>


Other compute engines are just large numbers of mostly-x86 cores in a
very small size such as Xeon Phi
<https://en.wikipedia.org/wiki/Xeon_Phi>
<http://www.intel.com/content/www/us/en/processors/xeon/xeon-phi-detail.html>,
or the SIMD SSE instructions that are now present in x86. SSE4 added
popcnt, which was also added to Alpha in a revision. This in support
of certain compute-intensive processing, for instance.
<http://www.strchr.com/crc32_popcnt>
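For a feel of why popcnt earned its spot, counting set bits across a big
bitmap is the canonical use. A small C sketch follows; __builtin_popcountll
is the GCC/Clang builtin, which compiles down to the POPCNT instruction
when the target allows it (Alpha's later-revision equivalent was CTPOP):

/* Population count over a bit vector - the classic popcnt workload
 * (database bitmaps, chess engines, coding theory).
 * __builtin_popcountll is the GCC/Clang builtin; with e.g. -msse4.2
 * it compiles to the single POPCNT instruction, otherwise to a
 * software fallback.
 */
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

static uint64_t bits_set(const uint64_t *v, size_t n)
{
    uint64_t total = 0;
    size_t i;

    for (i = 0; i < n; i++)
        total += (uint64_t)__builtin_popcountll(v[i]);
    return total;
}

int main(void)
{
    uint64_t bitmap[4] = { 0xFFULL, 0x1ULL, 0x0ULL, ~0ULL };

    printf("%llu bits set\n",                  /* 8 + 1 + 0 + 64 = 73 */
           (unsigned long long)bits_set(bitmap, 4));
    return 0;
}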

As for compute engines and GPUs, OpenCL
<https://en.wikipedia.org/wiki/OpenCL> and OpenGL
<https://en.wikipedia.org/wiki/OpenGL> are two (of many) standards in
this area.

In a different approach toward scaling computes, VSI has indicated an
interest in improving OpenVMS multiprocessing, and particularly making
better use of single boxes with lots of cores and of multiple boxes
(or multiple instances), though that's futures stuff and all (well)
after the x86-64 port. Hadoop was specifically mentioned here.
(These sorts of capabilities could make the OpenVMS system software
builds a whole lot less bespoke, but I digress. Even something as
(relatively) simple as distcc <https://github.com/distcc> can be a win
over what's available now for OpenVMS. But I digress. Again.)

As for the other direction from GPUs — lots and lots of cores and
lots of servers — there's the sort of cloud services support —
mass-configuration and mass-deployment and mass-reconfiguration support
— that'd make things vastly easier for the sorts of sites that might
want to be adding a whole lot of OpenVMS boxes. Quickly, and with
little or no human intervention or manual overhead past plugging in the
racks or the PODs. Rolling out OpenVMS in even a Xeon Phi
configuration would be a pain in the rump, given present tools and user
interfaces. This is where cloud services and various Apache tools or
similar can be very useful — though tying this to another contemporary
thread, some of these tools require Java 8 support.

Yes, much headroom is available here for improvements. ...Once VSI
has the port working, and once they have a revenue stream.

Stephen Hoffman

Oct 12, 2015, 10:28:18 AM
On 2015-10-12 13:51:23 +0000, Dirk Munk said:

> Very true, the floating point processing speed of these GPUs is
> enormous. If I'm not mistaken, the instructions for floating point
> processing on these GPUs are already part of standard x86 C compilers,
> but I'm not 100% sure.

Executing code on a GPU usually involves something like OpenCL to
compile and schedule the code for the particular GPU (often compiling
that GPU code on the fly, while the application program is executing),
and which hunk of a system should be used depends greatly on the local
hardware configuration. Effectively, you're cross-compiling for the
GPU. GPU instruction sets can and do vary, and are very different
from the CPU instruction sets.

Unless you're referring to the x86 SSE instructions and that ilk
operating within the x86 processor here, and support for those is
present in GCC and Clang/LLVM. But that's not what most folks refer to
with "GPU".

> No, but a VMS workstation for developing VMS applications is not such a
> strange idea, and then graphics may come in handy. Without graphics you
> need to use X-Windows on another operating system to get VMS graphics.

I've not found need for a DECwindows session for any OpenVMS
application development, beyond developing code that directly uses X
calls. Not in some years, that is. Xquartz works on OS X, for the few
times when I need an Xterm or an X server locally. Xming and other
options exist for Windows. For those folks that do want GUI
development for OpenVMS, there are third-party options available.
(Alas, those options no longer include the OpenVMS Enterprise Toolkit
for Visual Studio.) Using DECterm remotely does work but... well, why
not use ssh? ssh is much lighter-weight than an X session. Yes,
there are some cases where having a separate debugging window is handy,
but that works as easily with remote X or with multiple serial
sessions; there are ways to do that remotely.

Stephen Hoffman

Oct 12, 2015, 12:24:13 PM
On 2015-10-12 13:15:25 +0000, Kerry Main said:

> These companies may not convert their current applications from SAP or
> Oracle, but all their new app development will be with solutions that are
> based on much cheaper technologies - a longer-term death spiral for
> current high-cost SW providers.

Here's some fodder for the folks at Stark Gaming to ponder, on the
topic of newer and variously cheaper software:

http://highscalability.com/blog/2015/10/12/making-the-case-for-building-scalable-stateful-services-in-t.html

Some general discussions of the Halo 4 back-end processing implementation, too.

Kerry Main

Oct 12, 2015, 4:50:05 PM
Steve - thx!

Very nice video which I found has interesting potential ..

:-)

Regards,

Kerry Main
Kerry dot main at starkgaming dot com




David Froble

unread,
Oct 12, 2015, 6:12:07 PM10/12/15
to
I do very well developing on a VMS system without any graphics, well, other than
that on the weendoze system for the terminal emulator.

JF Mezei

unread,
Oct 13, 2015, 1:54:59 AM10/13/15
to
On 2015-10-11 17:16, Stephen Hoffman wrote:

> That means not spending six months or more on something that's not
> central to the roadmap and to the port.


If the FAA were to knock on VSI's doors and ask for a robust solution to
drive air traffic control in the USA and make use of VMS failover and
clustering, I suspect priorities would change real fast.



Yes, VMS is in intensive care after the almost successful attempt on its
life by Digital, Compaq and HP. So yes, priority is getting the core
back up and running on a server that has a pulse.

However, I wouldn't so quickly dismiss graphics and I would prefer to
see VSI saying they are going to listen to customers and potential
customers for what the priorities should be. This is more "politically
correct" and doesn't close doors that could possibly pan out as being
very profitable.

Jan-Erik Soderholm

unread,
Oct 13, 2015, 3:24:28 AM10/13/15
to
On 2015-10-13 at 07:54, JF Mezei wrote:
> On 2015-10-11 17:16, Stephen Hoffman wrote:
>
>> That means not spending six months or more on something that's not
>> central to the roadmap and to the port.
>
>
> If the FAA were to knock on VSI's doors and ask for a robust solution to
> drive air traffic control in the USA and make use of VMS failover and
> clustering, I suspect priorities would change real fast.
>
>
>
> Yes, VMS is in intensive care after the almost successful attempt on its
> life by Digital, Compaq and HP. So yes, priority is getting the core
> back up and running on a server that has a pulse.
>
> However, I wouldn't so quickly dismiss graphics and I would prefer to
> see VSI saying they are going to listen to customers and potential
> customers for what the priorities should be.

Do you have any sign that they haven't done just that since 1-Aug-2014?

Camiel Vanderhoeven

unread,
Oct 13, 2015, 3:56:43 AM10/13/15
to
On Tuesday, October 13, 2015 at 07:54:59 UTC+2, JF Mezei wrote:
>
> If the FAA were to knock on VSI's doors and ask for a robust solution to
> drive air traffic control in the USA and make use of VMS failover and
> clustering, I suspect priorities would change real fast.
>
> Yes, VMS is in intensive care after the almost successful attempt on its
> life by Digital, Compaq and HP. So yes, priority is getting the core
> back up and running on a server that has a pulse.
>
> However, I wouldn't so quickly dismiss graphics and I would prefer to
> see VSI saying they are going to listen to customers and potential
> customers for what the priorities should be. This is more "politically
> correct" and doesn't close doors that could possibly pan out as being
> very profitable.

Please don't take our statements of what we are concentrating on as a sign of unwillingness to adapt to customer needs; rather, see them as an invitation to our customers to go over our plans and make sure we aren't missing something crucial to their business. We can't deliver just any "nice-to-have" feature with the limited resources we have, but if there is a solid business case for changing some part of our plans, rest assured that it will be seriously considered.

Camiel.

Stephen Hoffman

unread,
Oct 13, 2015, 10:21:49 AM10/13/15
to

On 13 October 2015 07:54:59 UTC+2, JF Mezei keyed in:
>
> If the FAA were to knock on VSI's doors and ask for a robust solution to
> drive air traffic control in the USA and make use of VMS failover and
> clustering, I suspect priorities would change real fast.

No, they wouldn't. This because there are graphics drivers available
now, for the present products. Buy-n-Large hasn't been buying all
that many of these configurations, though. If Buy-n-Large wanted
graphics on x86-64, they're still going to have to wait for the port to
be completed and stable, and for the ports and support for the VSI and
third-party software that is inevitably required.

> Yes, VMS is in intensive care after the almost successful attempt on
> its life by Digital, Compaq and HP. So yes, priority is getting the
> core back up and running on a server that has a pulse.

That means getting the business going, getting a revenue stream going,
and picking which projects are a priority given available staffing and
budgets.

That's what is on the VSI roadmap.

> However, I wouldn't so quickly dismiss graphics and I would prefer to
> see VSI saying they are going to listen to customers and potential
> customers for what the priorities should be. This is more "politically
> correct" and doesn't close doors that could possibly pan out as being
> very profitable.

Profitable? Okay.... Here's where this whole "graphics driver"
discussion is headed, for a new application — you write "FAA", and I'll
write Buy-n-Large.

Why Buy-n-Large? Because the US FAA isn't porting their systems in
less than a decade or two, given what they're dealing with.

There's a whole lot more to getting VMS graphics working for a typical
new (NEW!) Buy-n-Large application than supporting one graphics driver,
starting with the lack of sound output — Buy-n-Large is probably going to
want alert tones minimally, and quite probably digital audio
capabilities — and then there's the lack of video capabilities — you'd
think that the next-generation Buy-n-Large application might want to
embed some of these newfangled fancy moving images of some activities
into their end-user displays, at least for specific tasks — and then
there's that Buy-n-Large is likely going to want updated X Windows
libraries and tools, and quite possibly support for Wayland — as X as
presently implemented is a morass of an RPC with more than a few bags
on the side — and likely better fonts, massively better X configuration
and management — having helped several folks get graphics controllers
working, the docs are sometimes outright wrong, and what was expected
for folks configuring the devices is just absurd by present-day
standards — I gotta do _what_ to change the resolution? — and the
Buy-n-Large application development team is going to want current
compilers and maybe even an IDE, they'll want integrated communications
and security-related updates and integrated certificate support and
such, and there are more than a few other details I've undoubtedly
overlooked here.

This is all expected display infrastructure, and before discussing the
sorts of "desktop" or "workstation" file formats and applications that
folks may or will want.

That's also all before we even get into more general details such as
the "fun" of installing and managing OpenVMS en-mass, that an
increasingly common question is "how do I change the IP address?",
hardening the OpenVMS and cluster system security substantially, and a
whole pile of other topics and details that are already on VSI's
roadmap.

VSI has to deliver on their roadmap, and get the x86-64 port done —
because otherwise, Buy-n-Large can go use OpenVMS I64 and the existing
graphics drivers and maybe somebody gets MMOV dusted off and working,
and the rest...

But then Buy-n-Large likely wants this all on x86-64, right? Which
means the port has to happen. First. That port is going to take a
couple of years, best case. Sure, Buy-n-Large can use some of that
time for planning and prototypes, if they want to Bet Big on VSI
succeeding.

Now if Buy-n-Large did decide to use OpenVMS x86-64 as a server, then
they can use various of these capabilities on thin clients or any
number of other client configurations. That also works. That works
now, and there's no reason to believe that won't still work once the
x86-64 port is available.

FWIW, if I were really wanting reliability and failover and such, then
I'd probably not want my display hardware tied right to one of my
servers, too. That doesn't fail-over all that well. In the interest
of reducing software complexity and attack targets and open ports and
general overhead, I'd generally not want X windows running on core
production servers, if I could avoid it.

So... sure... Leave the doors open... Write your name on a check
with a whole lot of zeros on it — for just a graphics driver — and VSI
will probably take notice. But don't assume — don't delude yourself —
into thinking that a graphics driver is even central here, should
Buy-n-Large decide to create some new application or to port some
existing application to OpenVMS x86-64.

For the foreseeable future — for the duration of the port, and quite
possibly some years after that — OpenVMS is a server operating system.

Scott Dorsey

unread,
Oct 13, 2015, 12:02:37 PM10/13/15
to
JF Mezei <jfmezei...@vaxination.ca> wrote:
>
>However, I wouldn't so quickly dismiss graphics and I would prefer to
>see VSI saying they are going to listen to customers and potential
>customers for what the priorities should be. This is more "politically
>correct" and doesn't close doors that could possibly pan out as being
>very profitable.

Graphics aren't hard.
Really nice graphics are hard.
High performance graphics are really, really hard.

JF Mezei

unread,
Oct 13, 2015, 3:37:30 PM10/13/15
to
On 2015-10-13 10:21, Stephen Hoffman wrote:

> now, for the present products. Buy-n-Large hasn't been buying all
> that many of these configurations, though. If Buy-n-Large wanted
> graphics on x86-64, they're still going to have to wait for the port to
> be completed and stable, and for the ports and support for the VSI and
> third-party software that is inevitably required.

But Buy More likely doesn't need "mission critical" cash registers or
warehouse inventory management, and certainly wouldn't pay for "extra
quality" when run-of-the-mill Windows will do. (However, the covert CIA
substation beneath it would :-)

An outfit like the FAA will pay for extra quality and robustness. And
yes, it may take years before the FAA implements, but what if they were
to select the platform now?


An embryo's cells are in a unique situation because they can become any
type of cell. Later in the development, cells become "typecast" and skin
cells remain skin cells.

VMS is in a unique situation now where it is that early embryo. One
shouldn't be making announcements that appear to shut the door to
possibilities.


Yes, obviously, the priority is to port what is left of VMS to 8086. And
yes, one can say that graphical capabilities will not be part of V1.0 of
Born-Again-VMS but this will be re-evaluated based on demand.

This sends the same message to not expect graphics, without shutting the
door for it.


Semantics and marketing become important when you try to convince people
that VMS is coming back from the dead.

Hans Vlems

unread,
Oct 14, 2015, 11:28:59 AM10/14/15
to
Camiel, can you elaborate on what kind of graphics will be supported on the new hardware?
What I mean is this: probably all x64 systems have a VGA, HDMI or whatever connector on board, right?
Will VMS just support VT100 emulation (or VT52 like early Alphas)?
Hans

PS
Thanks for taking the time to talk to us!

David Froble

unread,
Oct 14, 2015, 5:39:20 PM10/14/15
to
Hans Vlems wrote:
> Camiel, can you elaborate on what kind of graphics will be supported on the new hardware?
> What I mean is this: probably all x64 systems have a vga, hdmi or whatever connector on board, right?

Actually, wrong.

An x86 system can have an add on graphics card ..

An x86 system can have graphics on the motherboard ..

A desktop A-series AMD CPU chip has graphics right on the chip ..

But, you could also have an x86 system without any of the above ..

When Clair says "server", be prepared for systems with zero graphics
capabilities. No, I don't know what will be supported, but, I can imagine
server systems with no graphics.

IanD

unread,
Oct 14, 2015, 8:16:36 PM10/14/15
to
I was going to say Xeon processors tend not to have GPUs embedded but it looks like Intel is doing this too now...

http://www.pcworld.com/article/2152360/intel-integrates-highend-pc-graphics-processors-into-xeon-server-chips.html

http://www.anandtech.com/show/9339/xeon-e31200-v4-launch-only-with-gpu-integrated

More and more server data crunching is becoming graphic in nature - Netflix, for example

I think the notion of a server not having a GPU is old hat and neglects the types of data that servers are increasingly being called upon to churn over

We need to stop thinking that GPU support = GUI / front end and start thinking that GPU support = Graphical data support. Whether you throw this to a display for rendering or not is another matter

VMS needs to target not just the current market but keep an eye on and track definite future trends also - the world is increasingly graphical in nature and the need for spatial type data crunching is only going to get bigger

I don't care if VMS sports a GUI in the immediate future but I sure as hell care if it ignores an emerging market trend of needing to deal with graphical data.

Intel didn't just slap GPUs into its server-based chips because it liked the technical challenge, it did so because it saw a trend for needing to deal with graphical data emerging and wanted to tap that market

Not having VMS able to spin off and/or crunch data through GPUs is a fairly big limitation going forward IMO

But yeah, in comparison to x86-64 port it might be small potatoes but it surely shouldn't be ignored for too long

Stephen Hoffman

unread,
Oct 14, 2015, 9:48:11 PM10/14/15
to
On 2015-10-15 00:16:32 +0000, IanD said:

> I was going to say Xeon processors tend not to have GPUs embedded but
> it looks like Intel is doing this too now...

This would be the Intel HD and Intel Iris graphics and — to avoid
having the next conversation that tends to arise in this sequence —
yes, these graphics controllers are openly documented by Intel.

> We need to stop thinking that GPU support = GUI / front end and start
> thinking that GPU support = Graphical data support. Whether you throw
> this to a display for rendering or not is another matter

If you're looking for a GPU or GPGPU, then you're probably not looking
at Intel HD or Iris or other integrated graphics, you're looking for a
higher-end co-processor, or maybe toward a multicore such as Xeon Phi.
None of which have had OpenVMS support; not since the days when the
VAX Vector hardware was being marketed. (Well, not sure if there's
anything much using the Itanium SIMD or the Alpha MVI support with
OpenVMS. Not that that's overly similar to GPU or GPGPU computing.)

Sure, you could use the integrated graphics controller for some
computing, but if you're going to go to the effort of custom GPU coding
with OpenVMS, you probably have a bigger goal than a Xeon-integrated HD
or Intel Iris product offers. There are boxes with 8 and 12 Xeon Phi
configurations available, if you want to go that route. But with
OpenVMS?

> VMS needs to target not just the current market but keep an eye on and
> track definite future trends also - the world is increasingly
> graphical in nature and the need for spatial type data crunching is
> only going to get bigger

VSI has to turn a profit. That's only going to happen with their
current market; with the installed base.

If VSI is still around and looking for updates and improvements and
enhancements a year or two after the x86-64 port is ready and things
are going swimmingly for the OpenVMS folks from Bolton, then they'll
almost certainly be looking at what to add to bring new projects at
existing customers, and maybe new customers — but then we're probably
also discussing 2020 here, too. Right now, that's probably involving
Hadoop and/or maybe Apache Spark — but what else is or will be
available in 2020?

> I don't care if VMS sports a GUI in the immediate future but I sure as
> hell care if it ignores an emerging market trend of needing to deal with
> graphical data.

OpenVMS hasn't been a factor in the graphics and technical markets for
a very long time — technical computing and high-performance computing
largely migrated to Unix in the 1980s based on price and performance
and software availability, and Unix boxes have only gotten better and
more capable and more entrenched since.

For graphics, DECwindows V1.6 was X11R6.6 — I don't immediately see an
SPD for V1.7 — and X11R6.6 is from 2001. That's about a dozen releases
back, and that's without considering alternatives to traditional X11
and its RPC implementation, such as Wayland. As for tools, DEC VUIT
was cancelled a very long time ago, and it's been a decade or more
since I've looked at the third-party alternative ICS BX Builder
Xcessory <http://motif.ics.com/products/bx-pro/product-tour>

For one recent instance of how other systems integrate GPU or GPGPU, OS
X ships with OpenCL, OpenGL and Metal, as well as an IDE and the tools
to program, profile, maintain and debug applications using those.

For more general distributed computing, Apache Hadoop, Spark and other
useful tools — not so much for GPU or GPGPU computing, but for
coordinating and distributing work across lots and lots of server boxes
— are also commonly available for other platforms.


> Intel didn't just slap GPUs into its server-based chips because it
> liked the technical challenge, it did so because it saw a trend for
> needing to deal with graphical data emerging and wanted to tap that
> market

Those particular chipsets target workstations, and the SoCs in
particular may eventually bring graphics to the low-end servers.


> Not having VMS able to spin off and/or crunch data through GPUs
> is a fairly big limitation going forward IMO
>
> But yeah, in comparison to x86-64 port it might be small potatoes but
> it surely shouldn't be ignored for too long

Again, VSI has to turn a profit.

If you have a very large deal — dozens or hundreds of licenses, or more
— that are riding on Xeon workstation support or add-on GPU or GPGPU
support, then do give VSI a call.

For GPU or GPGPU-based computing, remember too that you're going to
need a whole lot more than the GPU/GPGPU driver, too. Gotta have some
way to schedule and compile code for the GPU/GPGPU, after all. Tools
to debug and profile the GPU/GPGPU code, too. Or have a look at what
is possible now with Linux or BSD.

But then, VSI has the upcoming Itanium releases to work on — which
already have graphics controller support, BTW — and also wants to get
the x86-64 port out the door quickly, and while still remaining viable.

BTW: if you're building your own x86-64 boxes: http://www.logicalincrements.com
If you want a small Xeon box:
http://www.supermicro.com/products/system/midtower/5028/SYS-5028D-TN4T.cfm

Camiel Vanderhoeven

unread,
Oct 15, 2015, 9:27:28 AM10/15/15
to
On Wednesday, October 14, 2015 at 17:28:59 UTC+2, Hans Vlems wrote:
Our plans are to support embedded server graphics hardware (the kind provided by iLO); there are currently no plans to support add-in graphics option cards for multihead applications or for higher performance 2D/3D graphics applications. So graphics support on x86 will be very similar to what's currently available on Itanium.

Camiel.

David Froble

unread,
Oct 15, 2015, 11:12:04 PM10/15/15
to
IanD wrote:

> Intel didn't just slap GPUs into its server-based chips because it liked
> the technical challenge, it did so because it saw a trend for needing to deal
> with graphical data emerging and wanted to tap that market

Or, perhaps because AMD was already doing so with their A series desktop CPUs,
which aren't too shabby for server tasks either ....

I sometimes wonder just what we'd have if AMD wasn't pushing Intel. Very likely
no 64 bit x86. Stuck with the itanic.

IanD

unread,
Oct 16, 2015, 10:56:45 PM10/16/15
to
On Thursday, October 15, 2015 at 12:48:11 PM UTC+11, Stephen Hoffman wrote:
> On 2015-10-15 00:16:32 +0000, IanD said:
>
> > I was going to say Xeon processors tend not to have GPUs embedded but
> > it looks like Intel is doing this too now...
>
> This would be the Intel HD and Intel Iris graphics and -- to avoid
> having the next conversation that tends to arise in this sequence --
> yes, these graphics controllers are openly documented by Intel.
>
> > We need to stop thinking that GPU support = GUI / front end and start
> > thinking that GPU support = Graphical data support. Whether you throw
> > this to a display for rendering or not is another matter
>
> If you're looking for a GPU or GPGPU, then you're probably not looking
> at Intel HD or Iris or other integrated graphics, you're looking for a
> higher-end co-processor, or maybe toward a multicore such as Xeon Phi.
> None of which have had OpenVMS support; not since the days when the
> VAX Vector hardware was being marketed. (Well, not sure if there's
> anything much using the Itanium SIMD SSE or the Alpha MVI support with
> OpenVMS. Not that that's overly similar to GPU or GPGPU computing.)
>

Yeah, I game and these GPUs can't cut it in that arena either, but perhaps Intel is finally making good on the VDI push it started a long time ago?

Putting the GPU on a server-class chip makes perfect sense if you're going to support the virtual desktop or perhaps offload video rendering

CPUs for number crunching, high-end GPUs for parallel intensive work / GPU farms at the HPC end of town, and lower integrated GPUs on the die for keeping those virtual desktops happy?

It's no secret Intel would like to push Nvidia out of the HPC arena altogether; with a GPU on a server-class CPU, it has all bases covered longer term - just mobile to go :-)

> Sure, you could use the integrated graphics controller for some
> computing, but if you're going to go to the effort of custom GPU coding
> with OpenVMS, you probably have a bigger goal than a Xeon-integrated HD
> or Intel Iris product offers. There are boxes with 8 and 12 Xeon Phi
> configurations available, if you want to go that route. But with
> OpenVMS?
>

ha ha ha, no, not OpenVMS, as much as it pains me to say that

> > VMS needs to target not just the current market but keep an eye on and
> > track definite future trends also - the world is increasingly
> > graphical in nature and the need for spatial type data crunching is
> > only going to get bigger
>
> VSI has to turn a profit. That's only going to happen with their
> current market; with the installed base.
>

The need for VSI to be viable has been done to death by both of us, and it's a side issue to the points mentioned

Let's just say that all future statements from me, unless stated otherwise, carry the unspoken qualification that 'VSI needs to be viable'; then we can focus on the merits of the idea presented

The point of posting is to throw out ideas, because I don't know the minds behind VSI; I don't know how in touch with leading-edge technologies they are, what academic circles they operate in, or who they rub shoulders with. They are fairly well a black hole to me

The problem with forums like this is that it's a bit like trying to determine position and velocity at the same time, considered fairly well impossible in physics, so we end up with someone posting a static idea and having people treat it as a direction, and vice versa

I've generally tried to follow the physicist David Bohm's excellent work, 'On Dialogue', as a goal in regards to dialogue as opposed to inference. I probably don't do the wisdom of what it contains any justice in my posting style, which can at times see me injecting aspects of 'position' and 'velocity' at the same time, muddying the waters of the idea I was posting

If VSI had an open forum / website of their own that called for public input of ideas for discussion, then I guess we could gather there for much more closely targeted discussion. Failing that, I guess we are left to discuss things in this forum, with overlapping posts pulling and pushing discussions in differing directions

The worst words to hear IMO are "I wish we had put in the ability to support that feature when we touched module xyz"

OK, so maybe that's not the worst, and the worst words really would be, "VSI are shutting shop due to lack of funding"

Let's hope that doesn't ever become the topic of discussion

> If VSI is still around and looking for updates and improvements and
> enhancements a year or two after the x86-64 port is ready and things
> are going swimmingly for the OpenVMS folks from Bolton, then they'll
> almost certainly be looking at what to add to bring new projects at
> existing customers, and maybe new customers -- but then we're probably
> also discussing 2020 here, too. Right now, that's probably involving
> Hadoop and/or maybe Apache Spark -- but what else is or will be
> available in 2020?
>

Certain trends are already hitting CPU scaling now and they will still be there in 2020, not long after which we should supposedly see the first true exascale machines in operation. Oops, I just checked Wiki; they are saying 2018 now.

Optalysys claims that it will be able to deliver a 17.1-exaflop optical computer by 2020! Things are moving quickly

The need for parallelizing almost everything in computing, from I/O to CPU computation / coding, is only going to become increasingly important

> > I don't care if VMS sports a GUI in the immediate future but I sure as
> > hell care if it ignores an emerging market trend of needing to deal with
> > graphical data.
>
> OpenVMS hasn't been a factor in the graphics and technical markets for
> a very long time -- technical computing and high-performance computing
> largely migrated to Unix in the 1980s based on price and performance
> and software availability, and Unix boxes have only gotten better and
> more capable and more entrenched since.
>

I suspect cost drove people towards cheap unix solutions while VMS tried to charge a premium for looking after the business side of the house

I had a recent discussion with a PhD researcher doing experiments into probing the inner structures of substances through neutron bombardment - really interesting stuff that made my head hurt with some of the things they were looking at

It wasn't the graphics abilities of the compute stack that they were interested in; they wanted cheap data crunching and lots of it. They worked on VMS many years ago, and used VMS for some of their very early research when VMS could hold its head high in the scientific community, but became a unix/linux advocate simply because of cost

As they said to me, I can spin up a number of instances quickly and cheaply and I really don't care if I have a system that is business verifiable and stays up all day, I just want to crunch the numbers for little to no cost

How is VMS going to help drive costs down in business by just moving to x86-64 if it doesn't add a whole lot of other value in the process? Just what market segment is it going to be targeted at? The existing customer base for sure but beyond that? Just what is it going to be able to do that is beyond what's already available now in other systems?

> For graphics, DECwindows V1.6 was X11R6.6 -- I don't immediately see an
> SPD for V1.7 -- and X11R6.6 is from 2001. That's about a dozen releases
> back, and that's without considering alternatives to traditional X11
> and its RPC implementation, such as Wayland. As for tools, DEC VUIT
> was cancelled a very long time ago, and it's been a decade or more
> since I've looked at the third-party alternative ICS BX Builder
> Xcessory <http://motif.ics.com/products/bx-pro/product-tour>
>
> For one recent instance of how other systems integrate GPU or GPGPU, OS
> X ships with OpenCL, OpenGL and Metal, as well as an IDE and the tools
> to program, profile, maintain and debug applications using those.
>

So much work to be done :-( And I really didn't find DECwindows productive for what I used it for, which was a long time ago, back when I was an operator, when such titles existed

> For more general distributed computing, Apache Hadoop, Spark and other
> useful tools -- not so much for GPU or GPGPU computing, but for
> coordinating and distributing work across lots and lots of server boxes
> -- are also commonly available for other platforms.
>
>

Hadoop gets around flaky hardware through duplication; not the most efficient way to go about computation, but with the volumes of data it typically deals with and the number of servers involved, I guess having some additional servers for redundancy isn't even going to get noticed at the big-budget end of town

Hadoop suffers from serialization when it comes to crunching the data, effectively batch chaining en masse. Quick to store, but it requires specific coding to then crunch the data and collate it back

Yeah, Spark is more geared towards being able to do more analytics than Hadoop but who wouldn't be able to scale up better given support for extremely large memory models?

So yeah, Hadoop came along and solved the distributed computing model in a limited way and indirectly has eaten VMS's cluster lunch when it comes to large reliable data stores in the process :-( If Grid had taken off on VMS it might have been a different story but considering how VMS scales poorly for large clusters, probably not

Issues with Hadoop are that it's akin to what one does in SQL: take an object, break it apart, store it, then have to put parts of it back together when you want to work on it - a shame OODBs never fully matured, but then again, computing is littered with its acceptance of less-than-optimal pure IT solutions

When the 'internet of things' (what a corny name that is IMO) ramps up, the ability to pre-process data as you store it is going to become more important than ever. I don't see pure CPU-based systems coping with that sort of load and analytical need - GPUs might go some of the way, as they have the ability to work on multiple data sets at the same time. It's perhaps going to be a hybrid solution, which leaves VMS where, exactly, if it doesn't look at a GPU some time in its future?

It's certainly exciting what the future holds, and it holds the potential to obsolete a lot of technology as the push for the ever-elusive, all-knowing real-time data analysis inches closer

Will VMS be there? Sadly, I have my doubts, unless it can pull an IT rabbit out of its hat

> > Intel didn't just slap GPUs into its server-based chips because it
> > liked the technical challenge, it did so because it saw a trend for
> > needing to deal with graphical data emerging and wanted to tap that
> > market
>
> Those particular chipsets target workstations, and the SoCs in
> particular maybe eventually adding graphics onto the low-end servers.
>
>
> > Not having VMS able to spin off and/or crunch data through GPUs
> > is a fairly big limitation going forward IMO
> >
> > But yeah, in comparison to x86-64 port it might be small potatoes but
> > it surely shouldn't be ignored for too long
>
> Again, VSI has to turn a profit.
>

no comment...

> If you have a very large deal -- dozens or hundreds of licenses, or more
> -- that are riding on Xeon workstation support or add-on GPU or GPGPU
> support, then do give VSI a call.
>

If I'm ever in that position, I think I'd probably be able to afford to have someone do that for me :-)

> For GPU or GPGPU-based computing, remember too that you're going to
> need a whole lot more than the GPU/GPGPU driver, too. Gotta have some
> way to schedule and compile code for the GPU/GPGPU, after all. Tools
> to debug and profile the GPU/GPGPU code, too. Or have a look at what
> is possible now with Linux or BSD.
>
> But then, VSI has the upcoming Itanium releases to work on -- which
> already have graphics controller support, BTW -- and also wants to get
> the x86-64 port out the door quickly, and while still remaining viable.
>
> BTW: if you're building your own x86-64 boxes: http://www.logicalincrements.com
> If you want a small Xeon box:
> http://www.supermicro.com/products/system/midtower/5028/SYS-5028D-TN4T.cfm
>
>
>
> --
> Pure Personal Opinion | HoffmanLabs LLC

That box looks awesome

It would probably send my piggy bank into a black hole, never to be seen again, though

A couple of HP Microservers is about all I can afford along with a gaming machine and a few laptops

If VSI are not going to do 'this or that', then just what market segment are VSI going to pitch VMS at?

Begging for the VAX / Alpha majority to come over to x86-64 VMS probably isn't going to attract much attention if you're not also targeting where computing is headed - some type of future-proofing for all that money comes into play a little bit too. What real-world issues is VMS going to solve for business going forward? What costs is it going to drive down for IT? It's a general-purpose OS; what definition is being used for 'general' here when IT solutions are becoming targeted, purpose-built solutions?

VMS via a docker type delivery pushed onto any OS in some type of virtual container perhaps?

I wonder what quantum computing is going to open up also, that could be a field to change everything or a slow hard grind to get something that's viable like fusion energy turned out to be - but yeah, who knows what computing is going to be like even in 2020 and that's not that far away

IanD

unread,
Oct 16, 2015, 11:02:34 PM10/16/15
to
Yeah, that's a very good point

I remember x86 was supposed to be dead until AMD added its 64-bit extensions to the CPU; then suddenly it became viable to stay on the same software base and move forward on the same CPU - a smart move by AMD, but it locked us into x86

I remember when greedy Intel used to sell those 8087 co-processors too. I used to do CAD/CAM work for circuit design and the co-processors made a huge difference to AutoCAD work, but they cost a fortune!

I see AMD's profits are down again; they are getting slammed and could even fold in the next few years, which would leave Intel to price-gouge us again :-(

Maybe IBM's POWER CPU will save us? It seems at the moment, though, that Intel cannot do anything wrong

johnwa...@yahoo.co.uk

unread,
Oct 17, 2015, 1:34:27 AM10/17/15
to
Intel can do and have done and are doing lots of things wrong (IA64,
for example, and indeed almost any Intel non-x86 product), but their
bread and butter still comes from x86.

x86 (or its modern successor, AMD64) is not the growth market it used
to be, but it isn't going away any time soon.

Neil Rieck

unread,
Oct 17, 2015, 9:42:31 AM10/17/15
to
Hoff's article points back to VSI's roadmap:
http://vmssoftware.com/pdfs/VSI_Roadmap_20150924.pdf

For me, the most important changes are these:

1) HP's "TCPIP Services" will be replaced with VSI TCP/IP 1.0 (I would have started numbering at 6.0); anyone who has used "TCPIP Services" knows this is a good thing. On the flip side, perhaps VSI should consider signing an agreement with PSC to co-vend/co-develop MultiNet (beats reinventing the wheel)

2) CSWS will finally be updated to Apache 2.4.12 (will VSI change the name to VSWS?); this is long overdue so kudos to Bret Cameron and VSI; I hope they publish their source code so people like me can more easily build compatible security modules (see the module skeleton after this list);

3) gSOAP will be developed by VSI. Kudos again to Bret Cameron and VSI because gSOAP is mission-critical for me.
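
On the security-modules point in item 2, the skeleton involved is the
stock httpd 2.4 hook pattern, nothing VSI-specific; a minimal,
hypothetical module in C (the "hello" handler name is invented for
illustration):

    #include <string.h>
    #include "httpd.h"
    #include "http_config.h"
    #include "http_protocol.h"
    #include "ap_config.h"

    /* Responds only where "SetHandler hello" is configured. */
    static int hello_handler(request_rec *r)
    {
        if (!r->handler || strcmp(r->handler, "hello"))
            return DECLINED;          /* not ours; let httpd keep looking */
        r->content_type = "text/plain";
        if (!r->header_only)
            ap_rputs("hello from a loadable module\n", r);
        return OK;
    }

    static void register_hooks(apr_pool_t *pool)
    {
        ap_hook_handler(hello_handler, NULL, NULL, APR_HOOK_MIDDLE);
    }

    module AP_MODULE_DECLARE_DATA hello_module = {
        STANDARD20_MODULE_STUFF,
        NULL, NULL, NULL, NULL, NULL, /* no per-dir/per-server config */
        register_hooks
    };

On other platforms that gets built and loaded with apxs; whatever VSI
ships would presumably need its own build procedure on OpenVMS.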

Now if we could only get VSI to hire Mark Berryman so "MariaDB on OpenVMS" would be more protected from an asteroid strike. :-)

Mark also publishes a PHP offering better than anything I've seen previously. If VSI hired him they wouldn't need to do their own PHP. (two birds with one stone)

Just my 2-cents worth:

Neil Rieck
Kitchener / Waterloo / Cambridge,
Ontario, Canada.
http://www3.sympatico.ca/n.rieck/OpenVMS.html



David Froble

unread,
Oct 17, 2015, 10:35:36 AM10/17/15
to
IanD wrote:
> On Friday, October 16, 2015 at 2:12:04 PM UTC+11, David Froble wrote:
>> IanD wrote:
>>
>>> Intel didn't just slap GPUs into its server-based chips because it
>>> liked the technical challenge, it did so because it saw a trend for
>>> needing to deal with graphical data emerging and wanted to tap that
>>> market
>> Or, perhaps because AMD was already doing so with their A series desktop
>> CPUs, which aren't too shabby for server tasks either ....
>>
>> I sometimes wonder just what we'd have if AMD wasn't pushing Intel. Very
>> likely no 64 bit x86. Stuck with the itanic.
>
> Yeah, that's a very good point
>
> I remember x86 was supposed to be dead until AMD added it's 64 bit extensions
> to the cpu then suddenly it became viable to stay on the same software base
> and move forward on the same cpu - smart move by AMD but it locked us into
> x86

I do hope you wouldn't rather be locked into the itanic? Lifeboats might be a
requirement. Lots of money too, Intel would have made that another requirement.

> I remember when greedy Intel used to sell those 8087 co-processors too. I
> used to do CAD/CAM work for circuit design and the co-processors made a huge
> difference to Autcad work but they cost a fortune!
>
> I see AMD's profits are down again, they are getting slammed and could even
> fold in the next few years which will leave Intel to price gouge us again :-(

Not sure what AMD can do. On price vs performance, they are good. Not sure why
some casual users think they need an i7 for email and browsers.

Then, back in the day, MicroVAX IIs were used for word processing. I'm pretty
sure most will agree that a PC desktop is better suited for such. However, even
that was much more than what many users needed. Now those users have smart
phones and tablets, and Intel is also hurting a bit.

> Maybe IBM's PowerCPU will save us? It seems at the moment though, Intel
> cannot do anything wrong

Wouldn't go so far as to say that ....

Stephen Hoffman

unread,
Oct 17, 2015, 11:28:33 AM10/17/15
to
On 2015-10-17 02:56:41 +0000, IanD said:

> If VSI are not going to do 'this or that', then just what market
> segment are VSI going to pitch VMS at?

Servers for the installed base, for the foreseeable future.

With pricing for licenses and support, I'd expect VSI will stay with
HPE pricing for Itanium. HPE would certainly want that, too.

Whether the VSI OpenVMS pricing and availability might change with
x86-64, we shall see.

Whether Clair and Camiel can come to an agreement around plans for VSI
x86-64 graphics support? :-)

> Begging for the VAX / Alpha majority to come over to x86-64 VMS
> probably isn't going to attract much attention if you're not also
> targeting where computing is headed - some type of future-proofing for
> all that money comes into play a little bit too. What real-world issues
> is VMS going to solve for business going forward? What costs is it
> going to drive down for IT?

The VSI budget is insufficient — and not by a little bit — to really
compete in the general-purpose OS space.

Which means they'll continue to enlist third-party providers and
open-source software, where they can.

Downside is that those third-party tools and the open-source software
won't usually look like or integrate with OpenVMS, either at all, or
without additional effort and maintenance. Without integration
effort, or a generic and OpenVMS-like front-end akin to OS X Server
Server.app. In short, if I want to run Apache, it'll probably be
easier and cheaper to do that on some other platform. It's _vastly_
easier to run Apache and SMTP and other services on OS X Server, due to
that front-end.

There's also that more than a few of these issues and limitations arise
from old infrastructure. No integrated database support means that VSI
staff and customers keep trying to use something like RMS way past its
use-by date. No integrated preferences means every OpenVMS package
implements that differently. Ever tried live backups of an RMS file,
or upgrading an RMS file in a cluster? It's not fun. BACKUP
/IGNORE=INTERLOCK just isn't reliable. That's before we get to
discussions of certificates or encryption or security, too.

VSI did migrate LDAP password authentication into the default
configuration, but migrating SYSUAF and all the local preferences and
defaults stuff over into LDAP is a far larger project.

If OpenVMS ever gets more installations, the problems pointed out in
Gavin McCance's "cows versus cats" CERN Data Centre Evolution will
arise, too.

<http://www.slideshare.net/gmccance/cern-data-centre-evolution>

BTW: That's a case that one of the HPE presenters quite effectively
makes. One look at how manual the configuration process of an OpenVMS
box (still!) is...

> It's a general-purpose OS; what definition is being used for 'general'
> here when IT solutions are becoming targeted, purpose-built
> solutions?

OpenVMS hasn't been marketed as a general-purpose OS in quite a few
years. Probably not since the ~1990s or so. OpenVMS was marketed
for specific markets that HP / Compaq / DEC thought needed its
features, and with features specifically for those markets. Some that
have been mentioned in presentations include finance, telecoms,
healthcare, government, manufacturing, and lotteries.

> VMS via a docker type delivery pushed onto any OS in some type of
> virtual container perhaps?

Docker and most other container schemes are built on pushing
applications out to Linux hosts, and they target keeping applications
from getting tangled within a single virtual machine. (OpenVMS "has
issues" here, too. There've been several recent local tussles around
OpenVMS requiring SYSLCK for coordinating application sharing within
the application — which ended up using Unix-style lock files, a
galactically stupid design. But it's functional. But I digress.)
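
For the record, the lock-file idiom in question is roughly the sketch
below (Unix C, with a made-up file name). The failure modes, stale
files after a crash, no owner-death notification, no queuing, are
exactly the things a DLM handles for you:

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        /* O_EXCL makes creation atomic: exactly one process wins. */
        int fd = open("myapp.lock", O_CREAT | O_EXCL | O_WRONLY, 0644);
        if (fd < 0) {
            perror("lock held, or a stale file left by a crash");
            return 1;
        }
        /* ... critical section ... */
        close(fd);
        unlink("myapp.lock");  /* crash before this and the lock leaks */
        return 0;
    }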

Containers try to target Kerry Main's frequent "VM sprawl" complaint.

As for infrastructure, you're also going to have to divvy up the DLM
somehow, beyond what resource domains currently provide (e.g. lib$get_rsd,
or "here's a UUID or a key-pair so please let me play in the associated
domain without privileges"), logical names (don't want that decc$
morass bleeding over) and some sort of file system container, and IPv4
or IPv6 addresses to applications, UICs and groups probably necessarily
become UUIDs, and various other resources. Not a small project...

Semi-related to containers is sandboxing or jails. Mechanisms which
keep applications in containers or in general from accidentally
stomping on each other, or more overtly going after each other.
OpenVMS has no clue about sandboxing and little clue about application
signing, which means that most of us are one malevolent unzip tool away
from a whole lot of pain, too.

Docker gets to part of the application configuration process on
OpenVMS, but there's still the system configuration.

> I wonder what quantum computing is going to open up also, that could be
> a field to change everything or a slow hard grind to get something
> that's viable like fusion energy turned out to be - but yeah, who knows
> what computing is going to be like even in 2020 and that's not that far
> away

Ayup. Similar to where things are headed, iPhone was available in
2007, and Android was just getting going. That's not very long ago.
Microsoft had been trying for some years prior, too. AFAICT, OpenVMS
hasn't adopted anything that's related to or targeting mobile
platforms, either for application use or for remote management of
OpenVMS.

VSI is not going to compete with and is not going to replace Linux or
BSD, and certainly not Windows. Not without a decade of very
substantial investments. Not without a goal of getting (far) past
what's available now in whatever markets VSI might target. What VSI
has set out upon is no small project.

VSI has a huge amount of work ahead of them — a never-ending amount —
and for as long as the revenues roll in.

####
Postscript / FWIW / history / shifting investments:
http://www.cnet.com/news/itanium-gives-openvms-new-lease-on-life/
https://groups.google.com/d/msg/comp.os.vms/hRJCGeSLwac/s3mWEC1CpAEJ

Hans Vlems

unread,
Oct 17, 2015, 12:55:04 PM10/17/15
to
Camiel, that's exactly what I wanted to know!
Thanks, and have fun at VSI
Hans

Craig A. Berry

unread,
Oct 17, 2015, 3:43:17 PM10/17/15
to
On 10/17/15 8:42 AM, Neil Rieck wrote:
> CSWS will finally be updated to Apache 2.4.12 (will VSI change the
> name to VSWS?)

I really hope they will just drop the silly rebranding and call it
Apache for OpenVMS. And keep the version numbers tracking the open
source versions. If they need a 2.4.12-1, 2.4.12-2 for their own changes
to the upstream 2.4.12, that's fine. But CSWS 2.2 based on Apache 2.0.63
is just wrong.

JF Mezei

unread,
Oct 17, 2015, 5:06:20 PM10/17/15
to
On 2015-10-17 15:43, Craig A. Berry wrote:

> I really hope they will just drop the silly rebranding and call it
> Apache for OpenVMS. And keep the version numbers tracking the open
> source versions.

+1,000,000,000,000,000,000,000,000,000,000,000,000.00

Jan-Erik Soderholm

unread,
Oct 17, 2015, 5:08:32 PM10/17/15
to
On 2015-10-17 at 17:28, Stephen Hoffman wrote:

> ...In short, if I want to run Apache, it'll probably be easier and
> cheaper to do that on some other platform.

Yes, if the *only* thing you want is to "run Apache", then of course
you should run it elsewhere.

But not if you want to "run Apache to serve my OpenVMS data". Why
else would you want to run a web server on your OpenVMS system?


Stephen Hoffman

unread,
Oct 17, 2015, 5:40:38 PM10/17/15
to
I suspect you've missed the point.

Here's the full citation:

> Downside is that those third-party tools and the open-source software
> won't usually look like or integrate with OpenVMS, either at all, or
> without additional effort and maintenance. Without integration
> effort, or a generic and OpenVMS-like front-end akin to OS X Server
> Server.app. In short, if I want to run Apache, it'll probably be
> easier and cheaper to do that on some other platform. It's _vastly_
> easier to run Apache and SMTP and other services on OS X Server, due to
> that front-end.

The harder OpenVMS is to deal with, the less likely there'll be a
viable long-term upgrade path for that AlphaServer DS20 running that
application of yours.

Robert A. Brooks

unread,
Oct 17, 2015, 8:39:19 PM10/17/15
to
Yeah, we agree that the numbering is confusing. We'll use the open source
version numbers for our releases.

I have no idea if we'll rename CSWS to something like VSWS, Apache for OpenVMS,
or something else.

--
-- Rob

terry+go...@tmk.com

unread,
Oct 17, 2015, 9:21:29 PM10/17/15
to
On Saturday, October 17, 2015 at 11:28:33 AM UTC-4, Stephen Hoffman wrote:
> Downside is that those third-party tools and the open-source software
> won't usually look like or integrate with OpenVMS, either at all, or
> without additional effort and maintenance. Without integration
> effort, or a generic and OpenVMS-like front-end akin to OS X Server
> Server.app. In short, if I want to run Apache, it'll probably be
> easier and cheaper to do that on some other platform. It's _vastly_
> easier to run Apache and SMTP and other services on OS X Server, due to
> that front-end.

The low-hanging fruit is probably getting people running existing apps on Itanium (or Alpha, or maybe VAX) to move to x86-64. Good performance of translated / emulated / whatevered images is important. It seems like that is the plan from what I've read.

Next up is "going native" on x86-64. That depends on how dusty the user's source code is and how compatible the compilers are (which gets back to the maximum compatibility vs. modern features language issue). Some code may end up running in translation mode forever, due to performance not being an issue and not being worth the time to update and recompile for native use.

I'm discounting (for the moment) the possibility of new design wins for VMS.

The users in the above groups are either doing everything on VMS or are already a mixed-OS shop. So not having everything people might want right out of the starting gate isn't necessarily a big problem.

And I would actually argue that things like Apache NOT acting like VMS apps may in fact be a Good Thing. The world is a much more multi-platform place than it was when those things were initially ported to VMS. Acting the same way they do on other platforms means that users don't have to learn another way of configuring and managing a tool they're already using elsewhere. Plus, consistent versioning between the upstream software and the VMS version will make it easier for users to have their various platforms on the same version of the tool - no more "VMS thingamawhatzy V1.2-04 patch 3 is the same thing as whatchamacallit 7.0.1". And it may be easier to get changes accepted upstream (lessening the work VSI or VMS-specific volunteers need to do to deal with future updates). I'm basing this on the patchset being smaller because it doesn't add VMSisms to command line parsing, diagnostic message output, etc.

As an example, while I've always been very happy with MultiNet, a bunch of things in it are a thin veneer of VMSisms overlaid on top of Unix code. This has made things like keeping BIND up-to-date more difficult due to the magnitude of the patchset.

David Froble

unread,
Oct 17, 2015, 11:04:59 PM10/17/15
to
Stephen Hoffman wrote:

> Docker and most other container schemes are built on pushing
> applications out to Linux hosts, and they target keeping applications
> from getting tangled within a single virtual machine. (OpenVMS "has
> issues" here, too. There've been several recent local tussles around
> OpenVMS requiring SYSLCK for coordinating application sharing within the
> application — which ended up using Unix-style lock files, a
> galactically stupid design. But it's functional. But I digress.)

Well, we've at least got a fix for that, right?

Craig A. Berry

unread,
Oct 17, 2015, 11:18:08 PM10/17/15
to
For Hoff's digressions (and his frequent apologies for doing so)? No, I
don't think that can be fixed :-).

Phillip Helbig (undress to reply)

unread,
Oct 18, 2015, 5:59:09 AM10/18/15
to
In article <mvudcf$tih$1...@news.albasani.net>, Jan-Erik Soderholm
<jan-erik....@telia.com> writes:

> > ...In short, if I want to run Apache, it'll probably be easier and
> > cheaper to do that on some other platform.
>
> Yes, if the *only* thing you want is to "run Apache", then of course
> you should run it elsewhere.
>
> But not if you want to "run Apache to serve my OpenVMS data".

But one could run OSU or WASD as well.

> Why
> else would you want to run a web server on your OpenVMS system?

Maybe for the same reason many people run a web server but with the
additional advantages which VMS offers, even if it is not serving any
VMS data.

Jan-Erik Soderholm

unread,
Oct 18, 2015, 6:27:31 AM10/18/15
to
On 2015-10-18 at 11:58, Phillip Helbig (undress to reply) wrote:
> In article <mvudcf$tih$1...@news.albasani.net>, Jan-Erik Soderholm
> <jan-erik....@telia.com> writes:
>
>>> ...In short, if I want to run Apache, it'll probably be easier and
>>> cheaper to do that on some other platform.
>>
>> Yes, if the *only* thing you want is to "run Apache", then of course
>> you should run it elsewhere.
>>
>> But not if you want to "run Apache to serve my OpenVMS data".
>
> But one could run OSU or WASD as well.

Not an option if you just *have* to run Apache, but yes, WASD
is a better web server *on VMS* than Apache (and OSU)... :-)

>
>> Why
>> else would you want to run a web server on your OpenVMS system?
>
> Maybe for the same reason many people run a web server but with the
> additional advantages which VMS offers, even if it is not serving any
> VMS data.
>

VMS does not offer that many additional advantages to a web server,
if any, if you don't actually have a real need for a web server on VMS.

Most non-VMS web needs are probably better served from other platforms.

IMHO... :-)



Stephen Hoffman

unread,
Oct 18, 2015, 8:17:38 AM10/18/15
to
On 2015-10-18 00:38:54 +0000, Robert A. Brooks said:

> Yeah, we agree that the numbering is confusing. We'll use the open
> source version numbers for our releases.
>
> I have no idea if we'll rename CSWS to something like VSWS, Apache for
> OpenVMS, or something else.

OpenVMS is arcane enough.

Every opportunity to simplify terminology, to simplify interfaces, to
simplify documentation — or to remove the need for it — is a win.

Remove install-time or configure-time options, etc.

Supposed experts can go use the cryptic interfaces, or load profiles.

But that cryptic usage should not be required for a basic functional system.

OpenVMS should be able to install, boot, auto-generate a host name,
DHCP an IPv4 or (preferably) IPv6 address, prompt for your VSIID
and password for automatic license acquisition, and then the VMS server
is ready for work. A newly-rethought and extended set of standard
products should be installed. The less chunder you put in the way of
getting work done with VMS, the better. Ruthlessly remove options.
We don't need those. (We don't live with small disks nor with bespoke
and dedicated server management any more. Rethink it all.)

You're not DEC or Compaq or HP, you're VSI.

Stop thinking like DEC, and how DEC designed interfaces for the last
ten years, much less the last ~40.

Start thinking like VSI, and how VSI designs and updates interfaces for
the next ten years of customers.

David Froble

unread,
Oct 18, 2015, 10:13:52 AM10/18/15
to
terry+go...@tmk.com wrote:

> Next up is "going native" on x86-64. That depends on how dusty the user's
> source code is and how compatible the compilers are (which gets back to the
> maximum compatibility vs. modern features language issue). Some code may end
> up running in translation mode forever, due to performance not being an issue
> and not being worth the time to update and recompile for native use.

I've seen this topic many times. I've got to wonder how prevalent this
condition is. A few instances? A large number of instances?

Our code is always up to date. We distribute changes and re-compile on customer
systems on a daily basis. The whole procedure is automated. Quite simple to
re-build the entire set of applications.

It surprises me that anyone would let such capabilities fade away ....

David Froble

unread,
Oct 18, 2015, 10:18:42 AM10/18/15
to
Well, for some things, perhaps not.

:-)

But I did have him write a couple of UWSS routines for me. These allow the use
of SYSLCK without privileges.

I had that capability on VAX, but some things were changed, and I never was able
to comprehend how to do so on Alpha, and beyond ....
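
For anyone who hits the same wall, a minimal C sketch of the
distinction involved, against the system services (the resource name is
made up, error handling is trimmed; a sketch, not a drop-in). A lock in
the default group-wide name space needs no privilege; it's the
LCK$M_SYSLCK flag, requesting the system-wide name space, that demands
SYSLCK:

    #include <stdio.h>
    #include <descrip.h>
    #include <lckdef.h>
    #include <ssdef.h>
    #include <starlet.h>

    /* Lock status block: status, reserved, lock ID, value block. */
    struct lksb { unsigned short status, reserved;
                  unsigned int lkid;
                  unsigned char valblk[16]; };

    int main(void)
    {
        struct lksb lksb;
        $DESCRIPTOR(resnam, "MYAPP_COORDINATION");  /* made-up name */

        /* Group-wide by default, so no privilege needed.  Adding
           LCK$M_SYSLCK to the flags argument asks for the system-wide
           name space -- and that request requires SYSLCK. */
        unsigned int st = sys$enqw(0, LCK$K_EXMODE, (void *)&lksb, 0,
                                   &resnam, 0, 0, 0, 0, 0, 0, 0);
        if (!(st & 1) || !(lksb.status & 1)) {
            printf("lock failed: %u / %u\n", st, lksb.status);
            return st;
        }
        /* ... whatever needs single-writer coordination ... */
        sys$deq(lksb.lkid, 0, 0, 0);
        return SS$_NORMAL;
    }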

Scott Dorsey

unread,
Oct 18, 2015, 10:37:54 AM10/18/15
to
David Froble <da...@tsoft-inc.com> wrote:
>terry+go...@tmk.com wrote:
>
>> Next up is "going native" on x86-64. That depends on how dusty the user's
>> source code is and how compatible the compilers are (which gets back to the
>> maximum compatibility vs. modern features language issue). Some code may end
>> up running in translation mode forever, due to performance not being an issue
>> and not being worth the time to update and recompile for native use.
>
>I've seen this topic many times. I've got to wonder how prevalent this
>conditions is. A few instances? A large number of instances?

A tremendous number of instances, mostly when people have commercial
code which they purchased without source code, and which is no longer
supported by the original vendor (if the original vendor still exists).

Of course, the best thing to do is to get away from such code, but having
tools to ease the transition is a fine thing.

I worked once at a bank that had an IBM 360, and once a week they had to
boot an IBM 1620 emulator in order to run one job that had been written for
a 1620 long ago. It turned out that the 1620 job was actually an emulator
for some kind of plugboard-programmed machine.

>Our code is always up to date. We distribute changes and re-compile on customer
>systems on a daily basis. The whole procedure is automated. Quite simple to
>re-build the entire set of applications.
>
>It surprises me that anyone would let such capabilities fade away ....

Most of the people who have kept their codebase up to date have already
ported away from VMS....
--scott

--
"C'est un Nagra. C'est suisse, et tres, tres precis."

Norm Raphael

unread,
Oct 18, 2015, 3:45:05 PM10/18/15
to info...@info-vax.com

> On 10/18/15, Scott Dorsey<klu...@panix.com> wrote:
> <snip>
>
> I worked once at a bank that had an IBM 360, and once a week they had to
> boot an IBM 1620 emulator in order to run one job that had been written for
> a 1620 long ago. It turned out that the 1620 job was actually an emulator
> for some kind of plugboard-programmed machine.


Probably an IBM 407 Accounting Machine. The 1620 generated punched cards
to run through the 407 for reporting, since its only other output device was an
IBM Selectric typewriter console.




Norman F. Raphael
"Everything worthwhile eventually
degenerates into real work." -Murphy

JF Mezei

unread,
Oct 18, 2015, 4:17:44 PM10/18/15
to
As I make a good devil's advocate, I will look at the other side of the coin:

There are disadvantages to not having Apache on VMS (which is why it
is important for "Apache" itself to be available, not some proprietary
rebranded version).

However, having Apache on VMS doesn't give VMS a performance edge,
especially if Apache relies heavily on subprocess creation and the like,
where Unix wins on performance.

If VMS were to have a high-performance web server (nginx, or even a
spruced-up WASD or OSU that supported standard plug-ins), then running a
web server on VMS instead of Unix might actually offer some performance
advantage.

Just a thought. (devil's advocate mode off)

JF Mezei

unread,
Oct 18, 2015, 4:22:27 PM10/18/15
to
On 2015-10-18 06:27, Jan-Erik Soderholm wrote:

> VMS does not offer that many additional advantages to a web server,
> if any, if you don't actualy have a real need for a web server on VMS.

One big advantage I had found with OSU was its ability to specify a
DECnet task as a "filename" to execute when a script is called.

What this means is that you can have scripts run under a totally different
SYSUAF account, and thus a different security context, than the actual web
server.

I know DECnet has fallen out of favour as a networking protocol, but
in-node or in-cluster use of DECnet for task-to-task has many advantages
from a security point of view compared to IP or sockets.

(Hoff mentioned sandboxing in a recent post, and that is one way to
ensure the web server runs without access to actual data files because
those are accessed by scripts/applications that run on a totally
different UIC.)


David Froble

unread,
Oct 18, 2015, 4:24:13 PM10/18/15
to
There is a good chance that you'd be wrong about that.

Our stuff began on RSTS, then moved to VAX / DEC / Compaq / HP Basic. I seem to
recall there were quite a few applications with a similar background. Then there
was the comment by Clair Grant that Basic would always be supported on VMS.
Perhaps he knows something about the current customers?

As for porting, VAX / DEC / Compaq / HP Basic runs on .... well, just VMS ...

Nor was I amused by some recent slanderous comments about Basic .... :-)

Jan-Erik Soderholm

unread,
Oct 18, 2015, 4:30:43 PM10/18/15
to
Den 2015-10-18 kl. 22:22, skrev JF Mezei:
> On 2015-10-18 06:27, Jan-Erik Soderholm wrote:
>
>> VMS does not offer that many additional advantages to a web server,
>> if any, if you don't actualy have a real need for a web server on VMS.
>
> One big advantage I had found with OSU was its ability to specify a
> DECnet task as a "filename" to execute when a script is called.
>

Nothing special. In WASD you specify which user name should run
scripts using each path, and the server creates detached server
processes if needed. The server processes can also be "persistent"
so they are reused between sessions. Doesn't need DECnet.

You can also, of course, run scripts after a forced login, and in
that case they run under whatever UAF username logged in.
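
For flavor, the WASD mapping rule for that sort of per-path scripting
account looks roughly like the following; the syntax is from memory and the
path and username are invented, so check the WASD documentation before
trusting it:

    # HTTPD$MAP sketch (hypothetical path and account):
    # scripts under /cgi-bin/orders/ run as WEBAPP, not the server account
    set /cgi-bin/orders/* script=as=WEBAPP

The effect is the same separation JF is after: the server account never
needs access to the application's data files.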

John Reagan

unread,
Oct 18, 2015, 5:06:26 PM10/18/15
to
On Sunday, October 18, 2015 at 4:24:13 PM UTC-4, David Froble wrote:
> Scott Dorsey wrote:
> > David Froble <> wrote:
I spoke with several customers with LARGE BASIC applications at Bootcamp. BASIC should come over to x86-64 without issues. Alpha to Itanium was just recompile-and-go, and I expect the same for x86-64.

JF Mezei

unread,
Oct 18, 2015, 5:08:42 PM10/18/15
to
On 2015-10-18 10:13, David Froble wrote:

> Our code is always up to date. We distribute changes and re-compile on customer
> systems on a daily basis. The whole procedure is automated. Quite simple to
> re-build the entire set of applications.
>
> It surprises me that anyone would let such capabilities fade away ....

Consider how many companies have stopped developing for VMS, leaving a
number of VMS customers with apps that run but cannot be updated or
recompiled. Some call this abandonware. And a LOT of Digital software
has become abandonware since the VAX heyday (and of course lots of
third-party software too).



Stephen Hoffman

unread,
Oct 19, 2015, 9:11:04 AM10/19/15
to
On 2015-10-18 01:21:26 +0000, terry+go...@tmk.com said:

> The users in the above groups are either doing everything on VMS or are
> already a mixed-OS shop. So not having everything people might want
> right out of the starting gate isn't necessarily a big problem.
>
> And I would actually argue that things like Apache NOT acting like VMS
> apps may in fact be a Good Thing. The world is a much more
> multi-platform place than it was when those things were initially
> ported to VMS. Acting the same way they do on other platforms means
> that users don't have to learn another way of configuring and managing
> a tool they're already using elsewhere. Plus, consistent versioning
> between the upstream software and the VMS version will make it easier
> for users to have their various platforms on the same version of the
> tool - no more "VMS thingamawhatzy V1.2-04 patch 3 is the same thing as
> whatchamacallit 7.0.1". And it may be easier to get changes accepted
> upstream (lessening the work VSI or VMS-specific volunteers need to do
> to deal with future updates). I'm basing this on the patchset being
> smaller because it doesn't add VMSisms to command line parsing,
> diagnostic message output, etc.

Two details: good suggestions, but are you thinking of how stuff has been
managed in recent times and how you've done this in the past, rather than
how we're going to have to manage these servers going forward? We're going
to need much less futzing with the tools, so — for smaller installations
and for folks that don't need the details of Apache or Postfix — this
usually means a front-end akin to OS X Server's Server.app (a simple GUI
plus simple command-line server management), and — for automated
deployments — configurations and profiles that can be loaded from a
central configuration server. We're not long for a world where we manage
Apache or Postfix as we have done — it either needs to be vastly simpler
for most folks, or vastly more automated. Sure, there'll still be a
handful of folks that need raw Apache or Postfix access — for
customizations beyond the standard tools, or for creating specific
profiles — but that's not going to be a big market.

Ayup; the rampant random-version idiocy needs to end.

Stephen Hoffman

unread,
Oct 19, 2015, 9:14:42 AM10/19/15
to
Workaround. Not fix.

Stephen Hoffman

unread,
Oct 19, 2015, 9:51:38 AM10/19/15
to
On 2015-10-18 14:13:48 +0000, David Froble said:

> terry+go...@tmk.com wrote:
>
>> Next up is "going native" on x86-64. That depends on how dusty the
>> user's source code is and how compatible the compilers are (which gets
>> back to the maximum compatibility vs. modern features language issue).
>> Some code may end up running in translation mode forever, due to
>> performance not being an issue and not being worth the time to update
>> and recompile for native use.
>
> I've seen this topic many times. I've got to wonder how prevalent this
> conditions is. A few instances? A large number of instances?

There's a whole range of dusty decks — "dusty decks" itself a quaint
concept, and probably one lost on more than a few younger programmers —
and of lost-source-code messes, with cases at various sites here
ranging from a few replaceable giblets to the wholesale loss of
irreplaceable source code, and the parallel world of lost
prerequisite products and packages.

Each architectural port has had software packages go end-of-life — VAX
to Alpha, Alpha to Itanium — which either maroons some folks using
those packages, or increases the end-user porting costs and
difficulties. There'll likely be third-party and maybe some HP Itanium
applications which won't be available on x86-64, and those will block a
few folks.

> Our code is always up to date. We distribute changes and re-compile on
> customer systems on a daily basis. The whole procedure is automated.
> Quite simple to re-build the entire set of applications.

Your environment is one that is actively maintained and updated. More
than a few others are not.

> It surprises me that anyone would let such capabilities fade away ....

There are more than a few sites that have not preserved the particular
combinations of compilers and tools and source code in their source
libraries (meaning reproducible builds are either not feasible or are
far more work), and those that did not automate their builds and build
analysis. Some still have all their bits, and an investment spent
hacking out /STANDARD=VAXC or updating the oldest of the FORTRAN code
and other morasses can bring those environments back to usefulness.
Others... involve migrations or rewriting some or all of the code.

The question for a vendor then becomes how much time and effort spent
keeping lost end-user source code working will be recouped, and where
it becomes more financially appropriate for the vendor to move on, or
to offer more specialized and more expensive custom services for those
end-user sites — and more than a few of those sites with lost code will
eventually switch to emulation, and will simply run the code into the
ground. Until the end-users have to do something about that code.

More subtly, how much of that vendor time and effort in keeping the old
code working will impede moving OpenVMS forward — toward better
interfaces, more secure interfaces, better capabilities — for those
folks that are staying current. DEC valued application
upward-compatibility and stability very highly and in various cases
more highly than moving OpenVMS forward — and y'all are quite fond of
pointing out how those old OpenVMS apps still work — which IMO is also
part of why OpenVMS is in the mess that it's in now. There are changes
which are inherently incompatible, and there are changes — like the
overhead of doing cross-version cluster upgrades using the utterly
reliable and in some key ways also utterly antique RMS tools — that are
either a very large hassle, that lead to designs that are really ugly
and complex, or that lead to limitations on what you can change and
what you can secure and what you can retire. Whether you can get good
live backups, too.

We're very far from the computing world that VAX-11/VMS was created in,
which means there are now... stresses and fractures in user
expectations and user interfaces and deployment scale and networking
and compatibility — and in details such as the necessity for certain
updates to become available much more quickly, for that matter.

Good practices — such as rebuilding your source code — are still
useful. But as for updates even in your daily-builds environment, I'd
look at using a DVCS for what you're doing, for instance. Which means
it'd be handy to have easily available or integrated git or Mercurial,
for instance. (For those of you that are about to reply with "but we
dooooooo!" thank you and now please go use a system that actually does
have integrated tools.)

This is no small project for VSI nor for the remaining third-party
vendors, either.

David Froble

unread,
Oct 19, 2015, 10:23:05 AM10/19/15
to
Hmmm .... "LARGE BASIC", that's a new one for me. Is it similar to the other
names for VAX Basic?

:-)

Indeed, VAX to Alpha, then Alpha to Itanic, were all just re-compile and go. As
a rather "mature", or should I go so far as "legacy", product, I'd expect
nothing less. Well, some things on VAX did get lost.

On VAX, Basic would become the keyboard monitor; there is a term for that, but I
forget it. As such, upon typing the <RETURN> key, the text would be compiled
and executed. That got lost on the move to Alpha, along with, I believe, a few
other things. Just asking, is such a capability a large or a small effort?

Steve has talked about "modern" development environments, where code is checked
upon entry. I'd guess it's a similar thing. I never used it, but it's my
impression that LSE did something similar.

Another question. Are you going to look at that thing about scanning the stack
when returning from a subprogram call? Don't see the need for that "performance
eater".

David Froble

unread,
Oct 19, 2015, 10:29:44 AM10/19/15
to
If an application is going to be abandoned, and it was sold without sources,
then the vendor should provide sources to all customers. Anything less is, in
my opinion, depriving the customer of what he has paid for.

I'd go so far as to make something such as this mandatory by law. At a minimum
a vendor should provide some type of access to sources if the product is
abandoned, or if it needs to be moved to a new architecture.

I've always provided customers with sources. I consider it part of what they
paid for. Even the database code, though I caution them, "don't touch this
unless you know what you're doing".

David Froble

unread,
Oct 19, 2015, 10:55:50 AM10/19/15
to
Stephen Hoffman wrote:
> On 2015-10-18 14:13:48 +0000, David Froble said:
>
>> terry+go...@tmk.com wrote:
>>
>>> Next up is "going native" on x86-64. That depends on how dusty the
>>> user's source code is and how compatible the compilers are (which
>>> gets back to the maximum compatibility vs. modern features language
>>> issue). Some code may end up running in translation mode forever, due
>>> to performance not being an issue and not being worth the time to
>>> update and recompile for native use.
>>
>> I've seen this topic many times. I've got to wonder how prevalent
>> this conditions is. A few instances? A large number of instances?
>
> There's a whole range of dusty decks — "dusty decks" itself a quaint
> concept, and probably one lost on more than a few younger programmers —
> and of lost-source-code messes. and with cases at various sites here
> ranging from a few replaceable giblets to the wholesale loss of
> not-replaceable source code, and the parallel world of lost prerequisite
> products and packages.

Maybe it comes down to "being responsible"? My opinion is that an IT
professional's job includes being responsible, and considering and attending to
more than whatever nail the boss wants hammered today. That's one of the
problems with jumping jobs every 6 months for a bit more pay.

> Each architectural port has had software packages go end-of-life — VAX
> to Alpha, Alpha to Itanium — which either maroons some folks using those
> packages, or increases the end-user porting costs and difficulties.
> There'll likely be third-party and maybe some HP Itanium applications
> which won't be available on x86-64, and those will block a few folks.

Another opinion I have is that many such applications don't go "end-of-life",
but rather are "killed". There is no need for this. I'm beginning to think
that sales of an application should include sources. At least an escrow for
cases where the vendor no longer exists, or no longer cares about the
application.

>> Our code is always up to date. We distribute changes and re-compile
>> on customer systems on a daily basis. The whole procedure is
>> automated. Quite simple to re-build the entire set of applications.
>
> Your environment is one that is actively maintained and updated. More
> than a few others are not.

Yeah. Maybe this is something that should be attended to by all users.

>> It surprises me that anyone would let such capabilities fade away ....
>
> There are more than a few sites that have not preserved the particular
> combinations of compilers and tools and source code in their source
> libraries (meaning reproducible builds are either not feasible or are
> far more work), and those that did not automate their builds and build
> analysis. Some still have all their bits and an investment spent
> hacking out /STANDARD=VAXC or updating the oldest of the FORTRAN code
> and other morasses can bring the environments back to usefulness.
> Others... involve migrations or rewriting some or all of the code.

Or, if they started with VAX Basic, such work would not be needed ...

:-)

> The question for a vendor then becomes how much time and effort spent
> keeping lost end-user source code working will be recouped, and where it
> becomes more financially appropriate for the vendor to move on, or to
> offer more specialized and more expensive custom services for those
> end-user sites — and more than a few of those sites with lost code will
> eventually switch to emulation, and will simply run the code into the
> ground. Until the end-users have to do something about that code.
>
> More subtly, how much of that vendor time and effort in keeping the old
> code working will impede moving OpenVMS forward — toward better
> interfaces, more secure interfaces, better capabilities — for those
> folks that are staying current. DEC valued application
> upward-compatibility and stability very highly and in various cases more
> highly than moving OpenVMS forward — and y'all are quite fond of
> pointing out how those old OpenVMS apps still work — which IMO is also
> part of why OpenVMS is in the mess that it's in now.

I'll just observe that without that upward compatibility and stability, VMS
would not exist today. You would not have the existing customers that are
"stuck" on VMS, because most of their applications would no longer work.

And I'll admit, moving forward is also important. I believe there is some
middle ground. The two concepts are not mutually exclusive.

> There are changes
> which are inherently incompatible, and there are changes — like the
> overhead of doing cross-version cluster upgrades using the utterly
> reliable and in some key ways also utterly antique RMS tools — that are
> either a very large hassle, that lead to designs that are really ugly
> and complex, or that lead to limitations on what you can change and what
> you can secure and what you can retire. Whether you can get good live
> backups, too.
>
> We're very far from the computing world that VAX-11/VMS was created in,
> which means there are now... stresses and fractures in user expectations
> and user interfaces and deployment scale and networking and
> compatibility — and in details such as the necessity for certain updates
> to become available much more quickly, for that matter.

Agreed.

> Good practices — such as rebuilding your source code — are still
> useful. But as for updates even in your daily-builds environment, I'd
> look at using a DVCS for what you're doing, for instance. Which means
> it'd be handy to have easily available or integrated git or Mercurial,
> for instance. (For those of you that are about to reply with "but we
> dooooooo!" thank you and now please go use a system that actually does
> have integrated tools.)

Where did such products come from? I'd suggest from the ideas used for such
tasks before there were such products. So, if some product is an extension of
something I'd developed due to need, why does that suddenly make my procedures
chopped liver? I totally disagree with that concept. I see no reason to learn
someone else's bastardized implementation of what already works just fine for
me.

These new procedures might be fine for new entrants into software, since they
don't have a clue about best practices. But not everyone needs the products;
some of us developed the original concepts, and thus already know the best
practices.

David Froble

unread,
Oct 19, 2015, 10:57:53 AM10/19/15
to
Would like to see definitions of "workaround" and "fix".

What kind of "fix" might be developed to allow use of shared locks?

John Reagan

unread,
Oct 19, 2015, 11:18:59 AM10/19/15
to
On Monday, October 19, 2015 at 10:23:05 AM UTC-4, David Froble wrote:
> John Reagan wrote:
>
> > I spoke with several customers with LARGE BASIC applications at Bootcamp.
> > BASIC should come over to x86-64 without issues. Alpha to Itanium was just
> > recompile-and-go, I expect the same for x86-64.
>
> Hmmm .... "LARGE BASIC", that's a new one for me. Is it similar to the other
> names for VAX Basic?
>
> :-)

Sure, why not. For this customer, they have 9+ million lines of BASIC code (if I remember correctly).

>
> Indeed, VAX to Alpha, then Alpha to itanic, were all just re-compile and go. As
> a rather "mature", or should I go so far as "legacy", product, I'd expect
> nothing less. Well, some things on VAX did get lost.
>
> On VAX, Basic would become the keyboard monitor, there is a term for that, but I
> forget it. As such, upon typing the <RETURN> key, the text would be compiled
> and executed. That got lost on the move to Alpha, with I believe, a few other
> things. Just asking, is such a capability a large or a small effort?

Keeping the 'interactive mode' would have been a very large effort in the GEM model. GEM isn't a JIT.


>
> Another question. Are you going to look at that thing about scanning the stack
> when returning from a subprogram call? Don't see the need for that "performance
> eater".

The prologue will scan up the stack by default to pick up some language settings. You can turn that off today with OPTIONS INACTIVE=SETUP. Those features (whose names just roll off the tongue) have been around for quite a while. Look in the User Manual.

The RTL's signalling of I/O errors such as record-not-found can also slow things down. In the latest BASIC RTL, there is a DBASIC$IO_NO_SIGNAL logical to prevent all that stack walking.
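
For the archives, the two knobs end up looking roughly like this; OPTIONS
INACTIVE=SETUP is a statement in the BASIC source itself, and the value the
logical expects is an assumption on my part, so check the release notes:

    $ ! Sketch only: suppress the RTL stack walk on I/O errors.
    $ ! (Logical name per John's post; the value TRUE is assumed.)
    $ DEFINE DBASIC$IO_NO_SIGNAL TRUE
    $ BASIC MYPROG    ! source contains OPTIONS INACTIVE=SETUP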

Stephen Hoffman

unread,
Oct 19, 2015, 11:43:18 AM10/19/15
to
On 2015-10-19 14:55:44 +0000, David Froble said:

> Stephen Hoffman wrote:
>> On 2015-10-18 14:13:48 +0000, David Froble said:
>>
>>> I've seen this topic many times. I've got to wonder how prevalent this
>>> conditions is. A few instances? A large number of instances?
>>
>> There's a whole range of dusty decks — "dusty decks" itself a quaint
>> concept, and probably one lost on more than a few younger programmers —
>> and of lost-source-code messes. and with cases at various sites here
>> ranging from a few replaceable giblets to the wholesale loss of
>> not-replaceable source code, and the parallel world of lost
>> prerequisite products and packages.
>
> Maybe it comes down to "being responsible"? My opinion is that an IT
> professional's job includes being responsible and considering and
> attending to more than what nail the boss wants hammered today. Some
> of the issues of jumping jobs every 6 months for a bit more pay.

If it's all your code and/or you've decided this up front, then the
source code release is somewhat easier. But in many environments, any
sort of open-sourcing adds more than a little legal-review cost at the
end of a project, and some of the decisions here can arrive quickly,
as part of a budgetary axe and/or strategic realignment and/or a
corporate acquisition.

>>> It surprises me that anyone would let such capabilities fade away ....
>>
>> There are more than a few sites that have not preserved the particular
>> combinations of compilers and tools and source code in their source
>> libraries (meaning reproducible builds are either not feasible or are
>> far more work), and those that did not automate their builds and build
>> analysis. Some still have all their bits and an investment spent
>> hacking out /STANDARD=VAXC or updating the oldest of the FORTRAN code
>> and other morasses can bring the environments back to usefulness.
>> Others... involve migrations or rewriting some or all of the code.
>
> Or, if they started with VAX Basic, such work would not be needed ...
>
> :-)


VAX BASIC to DEC BASIC was pretty easy, yes. But how quickly they
forget about the transition from BP2 to VAX BASIC.

>> The question for a vendor then becomes how much time and effort spent
>> keeping lost end-user source code working will be recouped, and where
>> it becomes more financially appropriate for the vendor to move on, or
>> to offer more specialized and more expensive custom services for those
>> end-user sites — and more than a few of those sites with lost code will
>> eventually switch to emulation, and will simply run the code into the
>> ground. Until the end-users have to do something about that code.
>>
>> More subtly, how much of that vendor time and effort in keeping the old
>> code working will impede moving OpenVMS forward — toward better
>> interfaces, more secure interfaces, better capabilities — for those
>> folks that are staying current. DEC valued application
>> upward-compatibility and stability very highly and in various cases
>> more highly than moving OpenVMS forward — and y'all are quite fond of
>> pointing out how those old OpenVMS apps still work — which IMO is also
>> part of why OpenVMS is in the mess that it's in now.
>
> I'll just observe that without the upward compatibility and stability
> that VMS would not exist today. YOu would not have the existing
> customers that are "stuck" on VMS because most of their applications
> would no longer work.
>
> And I'll admit, moving forward is also important. I believe there is
> some middle ground. The two concepts are not mutually exclusive.

I'm not suggesting "break everything". But I am suggesting that some
interfaces be selectively broken, and once you decide to break those,
then that you break other related APIs that might also need work, such
that the vendor does not dribble out incompatible updates, and that the
folks doing the migration gain more from the effort required. If
it's the Purdy polynomial that's the target for some hypothetical
upgrade, you might want to look at providing a vastly simpler API for
related tasks, as well as easier support for LDAP integration and user
authentication, and quite possibly also profiles and the storage of
application preferences.

But given a migration schedule and some benefits? If I can acquire
immediately-useful new features by adopting the upgrade, then the
disruption of hacking out hopefully-isolated uses of the
soon-to-be-deprecated and soon-to-be-broken code is (somewhat)
lessened. In this case, if I'm going to churn SYSUAF, make it so that
I don't need to wildcard $GET through SYSUAF, for instance, or make it
such that a wholesale replacement of SYSUAF with local and
network-served LDAP will make future updates to the authorization
database much easier. Go all-in on LDAP local and served as a
hypothetical overhaul-it-all-option that makes the Purdy hash
performance much, much, much, worse than it is now, and get $getuai and
$setuai working with that for most folks.
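
To make the $getuai churn concrete, a single per-user read looks like the
minimal sketch below (the username is hypothetical); note there is no
supported way to wildcard through SYSUAF with this interface, which is
exactly the pain point above.

    /* Minimal sketch: read one field from the authorization database
       via $GETUAI, using a standard item list. */
    #include <descrip.h>
    #include <starlet.h>
    #include <stdio.h>
    #include <uaidef.h>

    int main(void)
    {
        $DESCRIPTOR(user, "SOMEUSER");        /* hypothetical username */
        unsigned int uaiflags = 0;
        unsigned short retlen = 0;
        struct { unsigned short buflen, itmcod;
                 void *bufadr, *retlenadr; } itmlst[] = {
            { sizeof uaiflags, UAI$_FLAGS, &uaiflags, &retlen },
            { 0, 0, 0, 0 }                    /* list terminator */
        };
        int status = sys$getuai(0, 0, &user, itmlst, 0, 0, 0);
        if (status & 1)
            printf("UAI$_FLAGS = %08X\n", uaiflags);
        return status;
    }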

(But in the interests of full disclosure, I haven't entirely sorted out
some of the unexpectedness in the ACME interface that replaced the
previous and entirely undocumented authentication interfaces.)


>> Good practices — such as rebuilding your source code — are still
>> useful. But as for updates even in your daily-builds environment, I'd
>> look at using a DVCS for what you're doing, for instance. Which means
>> it'd be handy to have easily available or integrated git or Mercurial,
>> for instance. (For those of you that are about to reply with "but we
>> dooooooo!" thank you and now please go use a system that actually does
>> have integrated tools.)
>
> Where did such products come from? I'd suggest from the ideas used for
> such tasks before there was such products. So, if some product is an
> extension of something I'd developed due to need, why does that all of
> a sudden make my procedures chopped liver? I totally disagree with
> that concept. I see no reason to learn someone else's bastardized
> implementation of what works just fine for me.
>
> These new procedures might be fine for new entrants into software,
> since they don't have a clue on best practices. But not everyone needs
> the products, since they may have developed the original concepts, and
> thus have knowledge of best practices.

DVCS can provide a distributed cache of software; so that you can
maintain your own central pool, and can acquire an update from each
remote site when and as needed. This then allows incremental source
copies, rather than having to push an entire library of source code.
Your existing procedure is certainly not "chopped liver", but you do
get to decide where you want to invest your time and effort — is that
distributed source code pool management, or might adding features that
your customers want or need be better? Now there's also the
discussion of the overhead of replacing the existing tools with a DVCS
or some other package. Then there's the fact that some folks want to own
the whole software stack, while others want to invest where they gain the
most advantage. There's no right answer here, and there are problems with
all of the potential answers.
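
As a concrete (and deliberately non-VMS) illustration of the
incremental-copy point, the everyday git workflow against a central pool is
just the following; the host and path are invented:

    # one-time full copy from the hypothetical central pool
    git clone ssh://central.example.com/pool/app.git
    cd app
    # thereafter each site moves only the new increments, never the
    # whole source library
    git pull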

Stephen Hoffman

unread,
Oct 19, 2015, 12:23:16 PM10/19/15
to
On 2015-10-19 14:57:50 +0000, David Froble said:

> Stephen Hoffman wrote:
>> On 2015-10-18 03:04:55 +0000, David Froble said:
>>
>>> Stephen Hoffman wrote:
>>>
>>>> Docker and most other container schemes are built on pushing
>>>> applications out to Linux hosts, and they target keeping applications
>>>> from getting tangled within a single virtual machine. (OpenVMS "has
>>>> issues" here, too. There've been several recent local tussles around
>>>> OpenVMS requiring SYSLCK for coordinating application sharing within
>>>> the application — and ended up using Unix-style lock files, which is a
>>>> galactically stupid design. But it's functional. But I digress.)
>>>
>>> Well, we've at least got a fix for that, right?
>>
>> Workaround. Not fix.
>
> Would like to see definitions of "workaround" and "fix".

The UWSS needs to be installed. If I could ensure that happened, then
I wouldn't be using lock files.

> What kind of "fix" might be developed to allow use of shared locks?

Shareable images that automatically get their own namespace for lock
resources; the removal of the SYSLCK requirement for
application-specific coordination. That shareable can use its own
identification — maybe the digital signature for the shareable and its
developer and an associated UUID within its HPC or ESW file, or maybe
its own FID within the existing and free-for-all OpenVMS environment,
for instance — as the basis for coordinating across all callers of that
shareable image.

The HPC or ESW file is what I'd consider an example of a missed
opportunity; good and valid and workable as far as it goes, and
certainly tested and shipped, but where capabilities such as developer
signing, the storage of application preferences, authorized access, or
maybe startup procedures could either have been left latent or have
been implemented. No APIs for poking around inside one — which would
be necessary for accessing a payload, for instance.

If you're going to force folks to migrate — as gets cited here — the
vendor is left to think the changes through and to leave some headroom
for future work. Or to get the port out the door.
Which means VSI will be making compromises and they'll be making
decisions that will have medium- or long-term costs. That's part of
an OS, unfortunately — picking your targets carefully too, given
OpenVMS is competing with vastly larger projects for new users.

Johnny Billquist

unread,
Oct 19, 2015, 3:43:24 PM10/19/15
to
On 2015-10-19 16:23, David Froble wrote:
> On VAX, Basic would become the keyboard monitor, there is a term for
> that, but I forget it. As such, upon typing the <RETURN> key, the text
> would be compiled and executed. That got lost on the move to Alpha,
> with I believe, a few other things. Just asking, is such a capability a
> large or a small effort?

It was/is called immediate mode execution in BASIC, unless my memory
fails me. Typing a statement without a line number.

Johnny

David Froble

unread,
Oct 19, 2015, 7:10:22 PM10/19/15
to
Yes, on Basic Plus. On VAX Basic it was called incremental compilation.

On Alpha it was called ... gone ....

I'm guessing the LSE did syntax checking. Not compilation.

Stephen Hoffman

unread,
Oct 19, 2015, 8:24:54 PM10/19/15
to
On 2015-10-19 23:10:18 +0000, David Froble said:

> Yes, on Basic Plus. On VAX Basic it was called incremental compilation.

Also commonly known as an interpreter.

> On Alpha it was called ... gone ....

I don't know why the interpreter was dropped. Wouldn't surprise me
if it was simply more work than could be completed in the time
available.

> I'm guessing the LSE did syntax checking. Not compilation.

LSEDIT writes out the file and spawns the target compiler when
commanded, and then reads and processes the analysis file.

Somewhat similarly to what LSEDIT provides — though the implementation
details do differ substantially, as do the capabilities — Xcode invokes
the compiler as you enter statements, and processes the results for
both syntax errors and command completion.

Heading back toward BASIC and the ability to enter statements directly,
Swift <https://developer.apple.com/swift/> can be invoked at an
interactive prompt, or used directly within a so-called playground, if
you're inclined. Also <http://www.swiftstub.com>.

glen herrmannsfeldt

unread,
Oct 19, 2015, 10:51:20 PM10/19/15
to
Stephen Hoffman <seao...@hoffmanlabs.invalid> wrote:
> On 2015-10-19 23:10:18 +0000, David Froble said:

>> Yes, on Basic Plus. On VAX Basic it was called incremental compilation.

> Also commonly known as an interpreter.

Most interpreters of programming languages do at least some conversion
from the input source to the executed object.

Some common ones are tokenizing keywords and converting numeric
constants to internal form.

The latter is visible in some BASIC systems, where you type

10 I=100000000000

and with LIST it comes out

10 I=1E11

The usual name given to such is incremental compilation, even when most
of the work is done like an interpreter.
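
To make that concrete, here's a toy sketch in C of the same idea: once the
constant has been converted to internal form, a LIST-style output can only
reconstruct text from the stored value, hence the 1E11. (Exact formatting
varies by implementation; this illustrates the idea, not any particular
BASIC.)

    /* Toy sketch: "tokenize" a numeric constant into internal form,
       then "LIST" it back from the stored value, not the source text. */
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        const char *source = "100000000000";   /* as typed */
        double value = strtod(source, NULL);   /* stored internal form */
        printf("10 I=%G\n", value);            /* prints: 10 I=1E+11 */
        return 0;
    }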

As far as I know, command processors like CMD and Unix shells normally
don't convert the input, but execute commands from the source characters
each time.

-- glen


