
PowerX Roadmap - Extended beyond 2020


Kerry Main

Apr 9, 2016, 12:45:04 PM
to comp.os.vms to email gateway

For those so inclined to think beyond just X86-64, these links may be of
some interest:

April 7, 2016:
http://www.nextplatform.com/2016/04/07/ibm-unfolds-power-chip-roadmap-past-2020/

Nov 16, 2015:
http://www.nextplatform.com/2015/11/16/openpower-accelerated-computing-will-be-the-new-normal/

Regards,

Kerry Main
Kerry dot main at starkgaming dot com






IanD

Apr 10, 2016, 5:38:35 AM
IBM have some damn smart cookies in their midst

They were instrumental in the Cell chip development too

I always remember when they used that scanning tunneling microscope to position individual atoms to spell IBM

https://www-03.ibm.com/ibm/history/exhibits/vintage/vintage_4506VV1003.html

I know VSI will have their hands full with x86 but sometimes you wonder if there is going to be something that pops up out of left field that will cause a complete change in marketplace direction.

I know one ultimately has to deal in knowns unless you have money to play with though

Kerry Main

Apr 19, 2016, 10:40:05 AM
to comp.os.vms to email gateway
Interesting article on chip futures positioning -

Updated Power9 vs Intel X86-64 article: (April 18/16)
http://www.nextplatform.com/2016/04/18/power9-will-bring-competition-datacenter-compute/

Bill Gunshannon

Apr 19, 2016, 7:25:56 PM
On 4/19/16 10:37 AM, Kerry Main wrote:
>
> Interesting article on chip futures positioning -
>
> Updated Power9 vs Intel X86-64 article: (April 18/16)
> http://www.nextplatform.com/2016/04/18/power9-will-bring-competition-datacenter-compute/
>
> Regards,
>
> Kerry Main
> Kerry dot main at starkgaming dot com

Speaking of chips and Intel, did people here see the announcement
from Intel about staff cuts? Guess all those Itanium engineers will
be going back on the job market.

bill


Paul Sture

Apr 20, 2016, 12:57:20 AM
<http://www.theregister.co.uk/2016/04/19/intel_q1_fy2016_job_cuts/>

---- start quote ----
Intel will axe 12,000 employees globally – more than one in ten of its
workforce – as it moves further away from being a PC chip company.

...

The processor giant said about 11 per cent of its 107,000 staffers will
be shed through "site consolidations worldwide, a combination of
voluntary and involuntary departures, and a re-evaluation of programs."
---- end quote ----

--
There are two hard things in computer science: cache invalidation,
naming, and off-by-one errors.

johnwa...@yahoo.co.uk

Apr 20, 2016, 6:39:20 AM
On Wednesday, 20 April 2016 05:57:20 UTC+1, Paul Sture wrote:
> On 2016-04-19, Bill Gunshannon <bill.gu...@gmail.com> wrote:
> > Speaking of chips and Intel, did people here see the announcement
> > from Intel about staff cuts? Guess all those Itanium engineers will
> > be going back on the job market.
>
> <http://www.theregister.co.uk/2016/04/19/intel_q1_fy2016_job_cuts/>
>
> [snip..]

Intel. The x86/PC company, in an increasingly non-x86/PC world. Not
dead yet, but certainly going to have to change.

Fortunately for VMS, it's been through enough processors that the
next migration after x86 will hopefully be a mere matter of
cranking the handle. I jest slightly, but which other non-Linux
OS has the same proven portability?

Many months ago, there was a comment here suggesting that 'if'
VMS ever migrated to x86-64, it would be good for that migration
to also consider the next one as well. At the time there was no
real suggestion that nuVMS on x86 would ever happen. It's not
there yet, but it's a lot closer than it was back then.

Obviously VMS, like everything else on the IT market, is obsolete
in some people's view, but by the time the volume OSes finish
their continuing external field tests (or just migrate their
services to the cloud), VMS might have caught up enough
to be a contender in the business-class OS market, the market
where "developers developers developers" no longer has the
value as a slogan that it had in the Ballmer era.

We'll see.

Kerry Main

Apr 20, 2016, 8:45:05 AM
to comp.os.vms to email gateway
> -----Original Message-----
> From: Info-vax [mailto:info-vax...@info-vax.com] On Behalf Of
> johnwallace4--- via Info-vax
>
Today's accepted model of distributed computing is running many
separate App servers on different OS instances, all communicating over
slow networks (relative to direct I/Os, large memory and flash drives)
to another networked DB instance that is typically sharded in a
shared-nothing architecture (which means more up-front planning and
less flexibility to change).

Perhaps it's time to take a closer look at a model that uses bigger and
more centralized servers with Apps and DBs in the same cluster (OS?)
that can also easily scale out once the scale-up server reaches about
70% utilization.

Remember that the current distributed client-server model was created
to address shortcomings in server technologies that were getting
overloaded - hence the separation of servers into separate App, Web,
DB etc. roles, all communicating over slow networks.

Servers have come a long way since that time.

Stephen Hoffman

Apr 20, 2016, 9:58:08 AM
On 2016-04-20 12:40:45 +0000, Kerry Main said:

> Todays accepted model of distributed computing is running many
> separate App servers on different OS instances all communicating over
> slow (relative to direct IO's, large memory and flash drives) networks
> to another....

Accessing remote memory via network is very fast and ~triple the speed
of local HDD storage, though this assumes you're not dealing with a
glacially-slow network stack.

From more than a few years ago, but still worth a read:
http://www.cs.cornell.edu/projects/ladis2009/talks/dean-keynote-ladis2009.pdf


http://static.googleusercontent.com/media/research.google.com/en//people/jeff/stanford-295-talk.pdf




--
Pure Personal Opinion | HoffmanLabs LLC

Johnny Billquist

Apr 20, 2016, 10:31:44 AM
I was going to write something along the same lines in response, but am
happy to just emphasize what Hoff wrote. It's actually "direct I/O" that
is slow, not the network... And the gap is only increasing all the time.
With a fast enough CPU and bus, you are now looking at 40GbE, and soon
100GbE, networks with extremely low latencies if you just stay within
one city. No disk has a chance of delivering similar numbers, which is
why you have large disk farms just to aggregate enough I/O bandwidth to
keep the network somewhat busy.
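
To make the aggregation point concrete, here is a back-of-the-envelope
sketch in Python; the throughput figures are illustrative ballpark
numbers, not measurements.

# Rough, illustrative throughput figures (bytes/second); real numbers
# vary widely with device, protocol overhead, and workload.
GBE_40 = 40e9 / 8     # 40GbE link: ~5 GB/s of raw payload capacity
HDD_SEQ = 150e6       # one HDD, sequential read: ~150 MB/s
SSD_SEQ = 500e6       # one SATA SSD, sequential read: ~500 MB/s

for name, rate in (("HDD", HDD_SEQ), ("SSD", SSD_SEQ)):
    drives = GBE_40 / rate
    print(f"~{drives:.0f} {name}s to saturate one 40GbE link")
# => roughly 33 HDDs, or 10 SSDs: hence the disk farms.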

Johnny

Kerry Main

unread,
Apr 20, 2016, 12:45:05 PM4/20/16
to comp.os.vms to email gateway
> -----Original Message-----
> From: Info-vax [mailto:info-vax...@info-vax.com] On Behalf Of
Re: the google doc - the "Numbers Everyone Should Know" list which the
google doc (Jeff Dean talk from 2010) references is now outdated and
contains some old concepts.

http://colin-scott.github.io/blog/2012/12/24/latency-trends/
" The problem, of course, is that hardware performance increases
exponentially! After some digging, I actually found that the numbers Jeff
quotes are over a decade old [1]."

A more recent article is likely more appropriate:
http://softwareyoga.com/latency-numbers-everyone-should-know/

The referenced google doc does not mention network write time, i.e. the
additional time for the host server to receive 1MB of data over the
network and then in turn write it to its local disk - which would be
more of an apples-to-apples comparison of a local/SAN "write to disk"
vs a remote NFS-style one.

In addition, the read IO numbers referenced relate to direct IO on
traditional HDD drives. However, flash drives with huge storage caches
are now replacing traditional HDDs. Add to this large 64/128/256-core
servers with physical memory in the TB+ range.

As an example - with V8.4-2, OpenVMS supports blades with 64 cores and
1.5TB physical memory, and soon 16Gb/s storage IO adapters accessing
flash arrays that have 190GiB of cache across 4 controller nodes
(reading some 3PAR specs).

Not sure if the specs align with what is coming in the near term in terms
of VSI support, but one gets the idea - this type of config is not what Google
was looking at when they created this document back in 2010.

Having stated this, Google is a very special case, and in their
environment I have no doubt their applications, as currently designed,
require the heavily distributed model. Course, the number of application
environments with Google-type IT environments and seemingly unlimited
budgets can likely be counted on one hand.

While network speed capabilities are increasing, one needs to remember
that moving to 10GbE and higher speeds in a DC is limited by the number
of available 10GbE ports on switches and routers. Enterprise-level 10GbE
switches and routers are very expensive, and replacing existing switches
and routers while updating server NICs is a very painful planning and
upgrade process in an existing distributed model with many App/Web/DB
servers.

Bottom line is that 1GbE is going to be here a lot longer than most
people expect.

Having stated this, I suspect that while all-flash will become the norm
for any IO that is even remotely performance sensitive, traditional HDD
storage will be around for some time for low-IO/archiving use.

For an interesting read on where storage is heading, here is a 3PAR tech
WP (albeit with a bit of marketing): (April 2016)
https://h20195.www2.hp.com/V2/GetPDF.aspx%2F4AA4-7264ENW.pdf

johnwa...@yahoo.co.uk

Apr 20, 2016, 2:24:17 PM
Remember, this is Google. Writes are of very little significance in
their high volume workloads (it's hinted at by e.g. a throwaway
reference to the 'mark as read' on email, or something like that).

Transactions (which in a real business system succeed or fail
atomically and which in a real business can be quite important) are
of even less interest. Except maybe when Google are billing and being
paid (no stock levels or ticket sales or real-world stuff like that
for Google to worry about). Who wants to bet that the systems Google
use for business activities like billing and payments look quite
different from their search engines and content delivery and so on?
Why would they be the same? Different application characteristics may
need different underlying technologies (or approaches) to deliver the
different requirements.

A company like Amazon, on the other hand, obviously do have to worry
about stock levels and payments and transactions and other such
'legacy' concepts which require numbers to actually add up.

Amazon and/or their suppliers even have real world real time
transactional stuff like automated warehouses to think about
(though in many places, people and low-tech stuff are used instead
of total automation). How do they address these needs?

For a non-technical warehouse overview from 2012 see e.g.
http://www.telegraph.co.uk/topics/christmas/9664036/Behind-the-scenes-at-Amazons-Christmas-warehouse.html
But no real details of the underlying IT - anybody seen any?

How many companies run systems like that successfully
'in the cloud'?

One size does not fit all, though cross-fertilisation of ideas and
concepts can be a useful thing.

Simon Clubley

Apr 20, 2016, 3:04:34 PM
On 2016-04-20, johnwa...@yahoo.co.uk <johnwa...@yahoo.co.uk> wrote:
>
> Fortunately for VMS, it's been through enough processors that the
> next migration after x86 will hopefully be a mere matter of
> cranking the handle. I jest slightly, but which other non-Linux
> OS has the same proven portability.
>

Apart from Linux, NetBSD and OpenBSD are the other two traditional
operating systems I know of which are highly portable.

In the RTOS market, there's RTEMS and QNX which I know exist on
multiple architectures. (I've just checked the eCos supported
architecture list and see that it has support for a wide range of
architectures as well.)

> Many months ago, there was a comment here suggesting that 'if'
> VMS ever migrated to x86-64, it would be good for that migration
> to also consider the next one as well. At the time there was no
> real suggestion that nuVMS on x86 would ever happen. It's not
> there yet, but it's a lot closer than it was back then.
>

There's the two-level hardware (User and Kernel only) issue to tackle
in that case. I know there's a similar issue on x86-64, but it
turns out VSI are using x86-64 specific features to work around that
issue. (I don't remember the fine details, but do remember thinking
it was a creative approach; however, it is one that relies on x86-64
functionality.)

Simon.

--
Simon Clubley, clubley@remove_me.eisner.decus.org-Earth.UFP
Microsoft: Bringing you 1980s technology to a 21st century world

johnwa...@yahoo.co.uk

Apr 20, 2016, 3:56:13 PM
Fair comment re *BSD. I specifically considered saying "which
other *volume* OS" for that reason, especially given that VMS
was once the volume OS of choice in the industry. Decided not
to. Who needs hindsight...

Wrt RTOS: Wind River's (sorry, now Intel's) VxWorks would at one time
have been a classic example of a portable RTOS. But I wasn't really
thinking RTOS.

clairg...@gmail.com

Apr 20, 2016, 4:03:59 PM
Not quite, but you almost remembered it. Yes, we will only be using two of x86's HW access modes and enforcing the other two in the OS. We already do a little of this on Itanium so it is not complete invention this time. However, we are not using anything specific to x86. In fact, we are prototyping this on Itanium to get it debugged before we reach the point of needing it on x86. Note that x86 has four modes but they do not give us the separation that VMS needs.

Clair
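
A toy sketch of the idea Clair describes, in Python; this is purely
conceptual, not VSI's actual design, and all names in it are
hypothetical. The hardware distinguishes only privileged vs
unprivileged; the OS tracks the four OpenVMS access modes itself and
polices inward transitions.

from enum import IntEnum

class Mode(IntEnum):      # lower value = more privileged
    KERNEL = 0
    EXEC = 1
    SUPER = 2
    USER = 3

class Cpu:
    def __init__(self):
        self.mode = Mode.USER      # OS-maintained "software" mode

    @property
    def hw_privileged(self):
        # Only kernel runs in the privileged hardware mode; exec,
        # supervisor and user all run unprivileged and are told apart
        # only by what the OS exposes to each of them.
        return self.mode == Mode.KERNEL

    def change_mode(self, target):
        if target < self.mode:
            # Inward (more privileged): must trap through a controlled
            # dispatcher, akin to a CHMK/CHME change-mode exception.
            self._dispatch_inward(target)
        else:
            self.mode = target     # outward transitions are always allowed

    def _dispatch_inward(self, target):
        # A real OS would validate the request and switch the visible
        # address-space slices here; this toy just records the mode.
        self.mode = Mode(target)

cpu = Cpu()
cpu.change_mode(Mode.EXEC)         # e.g. code running in exec mode
assert not cpu.hw_privileged       # still unprivileged in hardware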

Craig A. Berry

Apr 20, 2016, 4:51:48 PM
On Wednesday, April 20, 2016 at 2:04:34 PM UTC-5, Simon Clubley wrote:
> On 2016-04-20, johnwa...@yahoo.co.uk <johnwa...@yahoo.co.uk> wrote:
> >
> > Fortunately for VMS, it's been through enough processors that the
> > next migration after x86 will hopefully be a mere matter of
> > cranking the handle. I jest slightly, but which other non-Linux
> > OS has the same proven portability.
> >
>
> Apart from Linux, NetBSD and OpenBSD are the other two traditional
> operating systems I know of which are highly portable.

NT has run on MIPS, Alpha, PowerPC, and Itanium as well as x86. Isn't NT traditional?

On porting VMS to Power, IBM has contributed to LLVM:

<http://llvm.org/devmtg/2013-04/weigand-slides.pdf>

and future Power architectures will fully support little-endian:

<http://llvm.org/devmtg/2014-04/PDFs/Talks/Euro-LLVM-2014-Weigand.pdf>

but it would still be a huge project that's a long way off at best.



Bob Koehler

Apr 20, 2016, 5:00:35 PM
In article <9707b47f-e1ad-4689...@googlegroups.com>, "Craig A. Berry" <craig....@gmail.com> writes:
>
> NT has run on MIPS, Alpha, PowerPC, and Itanium as well as x86. Isn't NT traditional?

Thanks. I needed a laugh.

Simon Clubley

Apr 20, 2016, 6:20:04 PM
On 2016-04-20, clairg...@gmail.com <clairg...@gmail.com> wrote:
> On Wednesday, April 20, 2016 at 3:04:34 PM UTC-4, Simon Clubley wrote:
>>
>> There's the two-level hardware (User and Kernel only) issue to tackle
>> in that case. I know there's a similar issue in the x86-64 but it
>> turns out VSI are using x86-64 specific features to work around that
>> issue. (I don't remember the fine details, but do remember thinking
>> it was a creative approach; however it is one that relies on x86-64
>> functionality.)
>>
>
> Not quite but you almost remembered it. Yes, we will only be using
> two of x86's HW access modes and enforcing the other two in the OS. We
> already do a little of this on Itanium so it is not complete invention
> this time. However, we are not using anything specific to x86. In
> fact, we are prototyping this on Itanium to get it debugged before we
> reach the point of needing it on x86. Note that x86 has four modes but
> they do not give us the separation that VMS needs.
>

Yes, this is what I remembered from when we had these discussions on
comp.os.vms. However, I thought you were using some x86-64 specific
feature (maybe to do with TLB invalidation) to vastly reduce the pain
of doing the page table switching when emulating a switch from (say)
VMS user mode to VMS executive mode and then back to VMS user mode.

Did I remember incorrectly or is this something which has not turned
out to be a problem in practice ? (The latter would be good because
it means that maybe simpler CPUs such as ARM might be a viable target
for VMS).

> Clair
>

Thanks,

Kerry Main

Apr 20, 2016, 6:45:06 PM
to comp.os.vms to email gateway
> -----Original Message-----
> From: Info-vax [mailto:info-vax...@info-vax.com] On Behalf Of
> Craig A. Berry via Info-vax
>
> On Wednesday, April 20, 2016 at 2:04:34 PM UTC-5, Simon Clubley wrote:
> > On 2016-04-20, johnwa...@yahoo.co.uk
> <johnwa...@yahoo.co.uk> wrote:
> > >
> > > Fortunately for VMS, it's been through enough processors that the
> > > next migration after x86 will hopefully be a mere matter of
> > > cranking the handle. I jest slightly, but which other non-Linux
> > > OS has the same proven portability.
> > >
> >
> > Apart from Linux, NetBSD and OpenBSD are the other two traditional
> > operating systems I know of which are highly portable.
>
> NT has run on MIPS, Alpha, PowerPC, and Itanium as well as x86. Isn't NT
> traditional?
>

The DEC West team did most of the grunt work to make Windows NT on
Alpha work as well as it did.

While Microsoft did make Windows NT available on other platforms, its
support for these other platforms was minimal at best.

As an example, even though MS Office on Alpha Windows NT was
considered very fast compared to X86 versions of MS Office, the Alpha
MS Office official release contained loads of debug code that Microsoft
refused to fix.

Others have stated that Microsoft Windows contains a huge amount of
X86 macro code, which made cross-platform porting very difficult.

> On porting VMS to Power, IBM has contributed to LLVM:
>
> <http://llvm.org/devmtg/2013-04/weigand-slides.pdf>
>
> and future Power architectures will fully support little-endian:
>
> <http://llvm.org/devmtg/2014-04/PDFs/Talks/Euro-LLVM-2014-
> Weigand.pdf>
>
> but it would still be a huge project that's a long way off at best.
>

Agree it's a ways off (schedule is always a function of $'s and priority),
but one might argue that at some future point a port to PowerX would also
have side benefits, e.g. IBM software products would likely become much
easier to get done on OpenVMS as well.

Getting more aligned with IBM as well as HP would be a good goal for
the long term.

Also, remember that Rocket SW has a background with IBM.

;-)

Camiel Vanderhoeven

Apr 21, 2016, 12:56:30 AM
On Thursday, 21 April 2016 at 00:45:06 UTC+2, Kerry Main wrote:

> While Microsoft did make Windows NT available on other platforms, its
> support for these other platforms was minimal at best.
>
> As an example, even though MS Office on Alpha Windows NT was
> considered very fast compared to X86 versions of MS Office, the Alpha
> MS Office official release contained loads of debug code that Microsoft
> refused to fix.

Don't forget that Microsoft only ported NT to Alpha as a result of a lawsuit brought by Digital over OS IP. Their support for the MIPS version of NT was much better.

> Others have stated that Microsoft Windows contains a huge amount of
> X86 macro code, so it made cross platform porting very difficult.

I seriously doubt that. I can believe that to be true of Windows non-NT, not of Windows NT. Windows NT didn't start out on x86; development started on Intel's i860 RISC cpu, then was shifted to MIPS. It was only ported to x86 when it was mostly done, precisely to avoid being locked into a single architecture too much. I'm sure there is plenty of x86 assembly code in the x86 HAL, but not so much in the rest of the OS.

Camiel.

Dirk Munk

Apr 21, 2016, 4:21:10 AM
Didn't Microsoft use lots of Alphas for the development of 64-bit
Windows? At the time there was no 64-bit Intel system available, or so
I have been told.

johnwa...@yahoo.co.uk

Apr 21, 2016, 5:42:19 AM
My recollection is that it's largely Office (not NT itself) that has
substantial x86-specific baggage which has proved hard to get rid of.
ICBW etc.

Kerry Main

Apr 21, 2016, 8:35:05 AM
to comp.os.vms to email gateway
> -----Original Message-----
> From: Info-vax [mailto:info-vax...@info-vax.com] On Behalf Of
> Camiel Vanderhoeven via Info-vax
>
> On Thursday, 21 April 2016 at 00:45:06 UTC+2, Kerry Main wrote:
>
> > While Microsoft did make Windows NT available on other platforms, its
> > support for these other platforms was minimal at best.
> >
> > As an example, even though MS Office on Alpha Windows NT was
> > considered very fast compared to X86 versions of MS Office, the Alpha
> > MS Office official release contained loads of debug code that Microsoft
> > refused to fix.
>
> Don't forget that Microsoft only ported NT to Alpha as a result of a
> lawsuit brought on by Digital over OS IP. Their support for the MIPS
> version of NT was much better.
>

Yep - story behind the story.

> > Others have stated that Microsoft Windows contains a huge amount of
> > X86 macro code, so it made cross platform porting very difficult.
>
> I seriously doubt that. I can believe that to be true of Windows non-NT,
> not of Windows NT. Windows NT didn't start out on x86; development
> started on Intel's i860 RISC cpu, then was shifted to MIPS. It was only
> ported to x86 when it was mostly done, precisely to avoid being locked
> into a single architecture too much. I'm sure there is plenty of x86
> assembly code in the x86 HAL, but not so much in the rest of the OS.
>

As part of the Digital NT Wizards program, we used to get annual trips
to Redmond / Seattle (land of sunshine) and meet with Microsoft and
DEC West folks to get the latest on what Microsoft and Digital were up
to and what was coming. Some used to call it a rock show accompanied
by some tech stuff (we even had the Eagles play at one event).

I seem to remember a presentation by someone from DEC West stating that
the amount of X86 macro code was a challenge for them, but my memory
might have dropped a bit or two since then.

:-)

John Reagan

Apr 21, 2016, 8:57:17 AM
On Thursday, April 21, 2016 at 8:35:05 AM UTC-4, Kerry Main wrote:

> As part of the Digital NT Wizards program, we used to get annual trips
> to Redmond / Seattle (land of sunshine) and meet with Microsoft and
> DEC West folks to get the latest on what Microsoft and Digital were up
> to and what was coming. Some used to call it a rock show accompanied
> by some tech stuff (we even had Eagles play one event).

It was just Don Henley. I was in the 4th row. :)




Kerry Main

Apr 21, 2016, 9:15:05 AM
to comp.os.vms to email gateway
> -----Original Message-----
> From: Info-vax [mailto:info-vax...@info-vax.com] On Behalf Of
> John Reagan via Info-vax
>
Wow - great memory - you are right! Small world!

Btw, I was in an even better location .. a few of us were asked to do additional
"security", which essentially meant asking any eager beavers to move back from
the stage, but it allowed us to stand right in front of the stage.

Course, my ears were much better able to take the loud music back then.

Stephen Hoffman

Apr 21, 2016, 12:15:20 PM
On 2016-04-20 22:43:07 +0000, Kerry Main said:

>> On Wednesday, April 20, 2016 at 2:04:34 PM UTC-5, Simon Clubley wrote:
>>> On 2016-04-20, johnwa...@yahoo.co.uk
>> <johnwa...@yahoo.co.uk> wrote:
>>>>
>>>> Fortunately for VMS, it's been through enough processors that the
>>>> next migration after x86 will hopefully be a mere matter of
>>>> cranking the handle. I jest slightly, but which other non-Linux
>>>> OS has the same proven portability.
>>>>
>>>
>>> Apart from Linux, NetBSD and OpenBSD are the other two traditional
>>> operating systems I know of which are highly portable.
>>
>> NT has run on MIPS, Alpha, PowerPC, and Itanium as well as x86. Isn't
>> NT traditional?

Add ARM.

> The DEC West team did most of the grunt work to make Windows NT on
> Alpha work as well as it did.

Because DEC thought they could get revenues competing with Windows on
x86, and could get the Alpha production volumes up and the prices
correspondingly down.

More than a few DEC folks ended up working at Microsoft, too.

> While Microsoft did make Windows NT available on other platforms, its
> support for these other platforms was minimal at best.

And why do you think that's the case? Maybe because Windows has an
installed base compatibility problem? Porting to different hardware
is disruptive, though a few folks have managed relatively transparent
migrations, such as Apple with Rosetta. Even so, the Apple folks
still killed the old software and killed Rosetta, and forced folks to
upgrade to newer versions and newer tools and newer APIs. They broke
compatibility. Microsoft uses .NET for masking operating system
differences, but they've never gone in for FX!32 — not that
third-parties were all that interested in supporting their apps
translated by FX!32, either. But I digress. Maybe also because
there's no other hardware platform out there with as many boxes and
at the prices of x86 hardware. ARM is the closest to that right
now for Windows UI and uses, though Windows RT cratered rather badly in
the market.

Maintaining complex software across multiple hardware platforms is expensive.

Few organizations want to tangle with supporting the results of any
sort of binary translation.

> As an example, even though MS Office on Alpha Windows NT was
> considered very fast compared to X86 versions of MS Office, the Alpha
> MS Office official release contained loads of debug code that Microsoft
> refused to fix.

Microsoft knew where their money was coming from, and where they should
be spending their time. But then running Microsoft Office on an Alpha
— even without the debug code – was a stupidly-expensive thing for
folks to do. Alpha boxes were much more expensive than x86 boxes, so
— if you're looking to run a desktop and its tools — why would you buy
low-volume and expensive 64-bit hardware to run a 32-bit OS and 32-bit
software, to do what a 32-bit box could do faster and cheaper?

> Others have stated that Microsoft Windows contains a huge amount of
> X86 macro code, so it made cross platform porting very difficult.

I wonder if OpenVMS has any macro assembler code in it?

>> On porting VMS to Power, IBM has contributed to LLVM:
>>
>> <http://llvm.org/devmtg/2013-04/weigand-slides.pdf>
>>
>> and future Power architectures will fully support little-endian:
>>
>> <http://llvm.org/devmtg/2014-04/PDFs/Talks/Euro-LLVM-2014-Weigand.pdf>
>>
>> but it would still be a huge project that's a long way off at best.
>
>> Agree it's a ways off (schedule always a function of $'s and priority),
>> but one might argue that at some future point, a port to PowerX would
>> also have side benefits like IBM software prods would likely become
>> much easier to get done on OpenVMS as well.

POWER is fast, but also scarce, perilously few boxes and few models of
boxes, expensive and with production volumes far too low to ever get
the prices competitive. The few POWER boxes I've dealt with also ran
hot, with all the problems that involves.

OpenVMS already has a problem with sales volumes.

In short, POWER is Alpha all over again.

> Getting more aligned with IBM as well as HP would be a good goal for
> the long term.

IBM revenues have been down for sixteen consecutive quarters. Not
that HPE revenues and particularly BCS revenues have been all that
great, either.

> Also, remember that Rocket SW has a background with IBM.

Picking up the pieces that IBM has deprecated, and making some money
off the folks that want or need or can't easily port off the tool or
platform. These tools and these platforms are not going to be picking
up many new customers. Nice business, though revenue and profit
growth there is heavily or entirely acquisition-based.

In five years or such, when (if?) OpenVMS is around and when (if?) VSI
gets bored from raking in all the profits (losses?) from the massive
expansion (continued decline?) of the OpenVMS customer base and when
(if?) VSI OpenVMS customers are looking for a high-end platform beyond
then-current Xeon servers, maybe then POWER is on the short list of
porting targets — if POWER is still around, of course — and with
whatever other hardware platforms are then-available and
then-appropriate. That's all if VSI wants to either greatly slow their
development efforts or greatly staff up their porting efforts, as ports
(to anything) are no small project and — until the port is complete and
successful — any port costs a whole lot of money and risks ISVs and
customers not following the port, and the port adds to on-going support
costs until the old platforms can be deprecated.

TL;DR: AFAICT, OpenVMS on POWER is a fantasy, and not the good
profit-producing kind of fantasy.

Kerry Main

Apr 21, 2016, 1:45:05 PM
to comp.os.vms to email gateway
Yep, nothing wrong with trying, but hindsight is 20-20.

> More than a few DEC folks ended up working at Microsoft, too.
>
> > While Microsoft did make Windows NT available on other platforms, its
> > support for these other platforms was minimal at best.
>
> And why do you think that's the case? Maybe because Windows has an
> installed base compatibility problem? Porting to different hardware
> is disruptive, and — while a few folks have managed relatively
> transparent migrations, such as Apple with Rosetta. The Apple folks
> still killed the old software and killed Rosetta, and forced folks to
> upgrade to newer versions and newer tools and newer APIs. They broke
> compatibility. Microsoft uses .NET for masking operating system
> differences, but they've never gone in for FX!32 — not that
> third-parties were all that interested in supporting their apps
> translated by FX!32, either. But I digress. Maybe also because
> there's no other hardware platform out there with as many boxes and
> with the prices of x86 hardware, too. ARM is the closest to that right
> now for Windows UI and uses, though Windows RT cratered rather badly in
> the market.
>
> Maintaining complex software across multiple hardware platforms is
> expensive.
>

While certainly that was part of it, a bigger issue was Microsoft not liking
being forced to support Alpha when they really did not want to.

> Few organizations want to tangle with supporting the results of any
> sort of binary translation.
>

As long as it is transparent, most orgs do not care. If not transparent,
that's when they get cranky.

> > As an example, even though MS Office on Alpha Windows NT was
> > considered very fast compared to X86 versions of MS Office, the Alpha
> > MS Office official release contained loads of debug code that
> > Microsoft refused to fix.
>
> Microsoft knew where their money was coming from, and where they should
> be spending their time. But then running Microsoft Office on an Alpha
> — even without the debug code – was a stupidly-expensive thing for
> folks to do. Alpha boxes were much more expensive than x86 boxes, so
> — if you're looking to run a desktop and its tools — why would you buy
> low-volume and expensive 64-bit hardware to run 32-bit OS and 32-bit
> software; to do what a 32-bit box could do faster and cheaper?

The market focus was high-end engineering and technical types who did
not want one desktop for office products and a separate workstation box
for engineering work.

>
> > Others have stated that Microsoft Windows contains a huge amount of
> > X86 macro code, so it made cross platform porting very difficult.
>
> I wonder if OpenVMS has any macro assembler code in it?
>

Not the point - Microsoft tech and sales resources were just not
interested in multi-platform development, and dealing with macro code
was just not sexy for the whiz kids.

Case in point - how hard would it have been to release a maint fix to
remove the debug code in the Alpha NT version of Office?

After many emails, DEC gave up trying to convince Microsoft.

> >> On porting VMS to Power, IBM has contributed to LLVM:
> >>
> >> <http://llvm.org/devmtg/2013-04/weigand-slides.pdf>
> >>
> >> and future Power architectures will fully support little-endian:
> >>
> >> <http://llvm.org/devmtg/2014-04/PDFs/Talks/Euro-LLVM-2014-
> Weigand.pdf>
> >>
> >> but it would still be a huge project that's a long way off at best.
> >
> >> Agree it's a ways off (schedule always a function of $'s and priority),
> >> but one might argue that at some future point, a port to PowerX would
> >> also have side benefits like IBM software prods would likely become
> >> much easier to get done on OpenVMS as well.
>
> POWER is fast, but also scarce, perilously few boxes and few models of
> boxes, expensive and with production volumes far too low to ever get
> the prices competitive. The few POWER boxes I've dealt with also ran
> hot, with all the problems that involves.
>
> OpenVMS already has a problem with sales volumes.
>
> In short, POWER is Alpha all over again.
>

Except with much better platform marketing - a foreign concept with
DEC/Compaq and yes, even HP.

> > Getting more aligned with IBM as well as HP would be a good goal for
> > the long term.
>
> IBM revenues have been down for sixteen consecutive quarters. Not
> that HPE revenues and particularly BCS revenues have been all that
> great, either.
>

Agree IBM is having issues, but -

Intel just announced 12,000 layoffs, so I guess they are toast as well?

AMD is under fire as well - I guess they are toast as well?

Who's going to be left?

> > Also, remember that Rocket SW has a background with IBM.
>
> Picking up the pieces that IBM has deprecated, and making some money
> off the folks that want or need or can't easily port off the tool or
> platform. These tools and these platforms are not going to be picking
> up many new customers. Nice business, though revenue and profit
> growth there is heavily or entirely acquisition-based.
>
> In five years or such, when (if?) OpenVMS is around and when (if?) VSI
> gets bored from raking in all the profits (losses?) from the massive
> expansion (continued decline?) of the OpenVMS customer base and when
> (if?) VSI OpenVMS customers are looking for a high-end platform beyond
> then-current Xeon servers, maybe then POWER is on the short list of
> porting targets — if POWER is still around, of course — and with
> whatever other hardware platforms are then-available and
> then-appropriate. That's all if VSI wants to either greatly slow their
> development efforts or greatly staff up their porting efforts, as ports
> (to anything) are no small project and — until the port is complete and
> successful — any port costs a whole lot of money and risks ISVs and
> customers not following the port, and the port adds to on-going support
> costs until the old platforms can be deprecated.
>
> TL;DR: AFAICT, OpenVMS on POWER is a fantasy, and not the good
> profit-producing kind of fantasy.
>

No one here is suggesting this is something that is going to happen in the
short or medium term or even at all.

Gotta love all that optimism though .. using the same logic, those dumb
college kids who started a college web site called Facebook should have
realized they had no hope of creating anything worthwhile.

:-)

johnwa...@yahoo.co.uk

Apr 21, 2016, 1:54:32 PM
> Microsoft knew where their money was coming from, and where they should
> be spending their time. But then running Microsoft Office on an Alpha
> -- even without the debug code - was a stupidly-expensive thing for
> folks to do. Alpha boxes were much more expensive than x86 boxes, so
> -- if you're looking to run a desktop and its tools -- why would you buy
> low-volume and expensive 64-bit hardware to run 32-bit OS and 32-bit
> software; to do what a 32-bit box could do faster and cheaper?
>
> [snip..]
>
> TL;DR: AFAICT, OpenVMS on POWER is a fantasy, and not the good
> profit-producing kind of fantasy.

If porting or rewriting applications to different hardware made
no sense, the 32bit market would have continued to use VMS. Porting
or rewriting does make sense sometimes, depending on the alternative.
POWER isn't an interesting alternative, but maybe NT/Alpha could
have been.

Contrary to received wisdom, by the time of the PWS/Miata
workstations (late 1990s), for example, the UK and European prices
of NT/Alpha workstations were quite competitive with x86
workstations that had significantly slower floating point performance.

That shouldn't be a huge surprise really, given that DEC's x86
workstations were based on Intel-specified NLX-format systems,
and the comparable Alpha systems used basically the same NLX-format
systems with a different processor daughtercard and a different
badge on the outside. And the Alpha tax inside, which meant that
you had to know where it was competitive or you'd lose the sale.

NT/Alpha PCs weren't price competitive with Gateway 2000 and other
bargain PC suppliers. But those weren't the competition. I can't
comment on US pricing as I wasn't there.

Which markets and which application providers care(d) about floating
point? Not just the obvious scientific markets (where NT/Alpha was
often missing the software anyway).

One example of where NT/Alpha could compete was the high-performance,
high-quality PostScript-to-raster image market, which was dominated
at the time by a small number of software vendors and an even
smaller number of device vendors. These companies' products were in
every printshop and many graphic design shops in every town. This
historically was not NT home territory, but the failure of the
incumbent (Apple) to keep up meant that alternatives were in with a
chance. The small number of relatively simple relevant apps also
meant that porting really wasn't that big an issue for the ISVs and
device vendors either.

Courtesy of a focused awareness-raising campaign [1] in this
sector, NT/Alpha went from nowhere to being a credible contender
in the space of a couple of years. Alpha hardware prices weren't
far off the x86 price for a decent x86 workstation or a Mac (not
some bargain basement x86 kit), and the NT/Alpha performance for
the relevant applications was streets ahead of NT/x86 (or Mac) on
anything you could buy.

The boxes in question were used in specific workflows which
didn't need them to be used as generic x86 boxes, so the NT/Alpha
issues around generic NT/x86 support weren't particularly
important.

This activity was basically invisible unless you were there.

Then DEC/MS/Oracle corporate shenanigans (which Kerry just
mentioned part of) meant that NT/Alpha equally rapidly went back
to nowhere again.

When HQ only wants to understand one size, then one size has to
fit all, whether the market is interested in alternative options
or not.

[1] Some people might call it marketing, but it can't have been,
because DEC didn't do marketing.

Stephen Hoffman

Apr 21, 2016, 2:53:49 PM
On 2016-04-21 17:42:37 +0000, Kerry Main said:

> Gotta love all that optimism though .. using the same logic, those dumb
> college kids who started a college web site called Facebook should have
> realized they had no hope of creating anything worthwhile.

Some folks still don't think Facebook is worthwhile.

David Froble

Apr 21, 2016, 4:07:46 PM
He who doesn't remember past mistakes is bound to repeat them ..

>> More than a few DEC folks ended up working at Microsoft, too.
>>
>>> While Microsoft did make Windows NT available on other platforms, its
>>> support for these other platforms was minimal at best.
>> And why do you think that's the case? Maybe because Windows has an
>> installed base compatibility problem? Porting to different hardware
>> is disruptive, and — while a few folks have managed relatively
>> transparent migrations, such as Apple with Rosetta. The Apple folks
>> still killed the old software and killed Rosetta, and forced folks to
>> upgrade to newer versions and newer tools and newer APIs. They broke
>> compatibility. Microsoft uses .NET for masking operating system
>> differences, but they've never gone in for FX!32 — not that
>> third-parties were all that interested in supporting their apps
>> translated by FX!32, either. But I digress. Maybe also because
>> there's no other hardware platform out there with as many boxes and
>> with the prices of x86 hardware, too. ARM is the closest to that right
>> now for Windows UI and uses, though Windows RT cratered rather badly in
>> the market.
>>
>> Maintaining complex software across multiple hardware platforms is
>> expensive.
>>
>
> While certainly that was part of it, a bigger issue was Microsoft not liking
> being forced to support Alpha when they really did not want to.

Face it, Microsoft viewed DEC as competition. Microsoft has had a rather quick
and final way with competitors. I remember one story about a company that tried
to deal with Microsoft. When things went bad, the company's people asked what
went wrong. An off-the-cuff response from some Microsoft person: "your problem
is that you trusted us". Urban legend maybe, but I believe it.

>> Few organizations want to tangle with supporting the results of any
>> sort of binary translation.
>>
>
> As long as it is transparent, most org's do not care. If not transparent,
> then that’s when they get cranky.
>
>>> As an example, even though MS Office on Alpha Windows NT was
>>> considered very fast compared to X86 versions of MS Office, the Alpha
>>> MS Office official release contained loads of debug code that
>>> Microsoft refused to fix.
>> Microsoft knew where their money was coming from, and where they should
>> be spending their time. But then running Microsoft Office on an Alpha
>> — even without the debug code – was a stupidly-expensive thing for
>> folks to do. Alpha boxes were much more expensive than x86 boxes, so
>> — if you're looking to run a desktop and its tools — why would you buy
>> low-volume and expensive 64-bit hardware to run 32-bit OS and 32-bit
>> software; to do what a 32-bit box could do faster and cheaper?
>
> The market focus was high end engineering and technical types who did
> not want a desktop for office products & a separate WS box for engineering
> work.

And how big is that market? Compared to the general market? Rather poor
marketing decision.

DEC's first big mistake was letting Microsoft get started, and their second big
mistake was the one-way street of trying to co-operate with them.

>>> Others have stated that Microsoft Windows contains a huge amount of
>>> X86 macro code, so it made cross platform porting very difficult.
>> I wonder if OpenVMS has any macro assembler code in it?

Yes, and a rather inspirational solution was the Macro-32 compiler. With
that, Macro-32 code could run anywhere the compiler had been targeted.

> Not the point - Microsoft tech and sales resources were just not interested
> in multi-platform development and dealing with macro code was just not
> sexy for the whiz kids.
>
> Case in point - how hard would it have been to release a maint fix to
> remove the debug code in the Alpha NT version of Office?

See above. Perhaps Microsoft didn't want the product to be a success.
Oh nooos, the sky is falling, we're all going back to the caves ...

Gonna require lots of digging ...

Look, smart phones and tablets are the majority of the whole market, because
that's what people always wanted and needed, and now they can have them. But
that isn't 100% of the market, and won't ever be. The rest of us just need to
learn to live in our little niche.
What? Just a better user interface for a bulletin board?

David Froble

Apr 21, 2016, 4:09:46 PM
Stephen Hoffman wrote:
> On 2016-04-21 17:42:37 +0000, Kerry Main said:
>
>> Gotta love all that optimism though .. using the same logic, those dumb
>> college kids who started a college web site called Facebook should have
>> realized they had no hope of creating anything worthwhile.
>
> Some folks still don't think Facebook is worthwhile.

Depends on the expectations.

I know lots of people who keep in touch much better when using Facebook. Not
me. And not others. But, for the masses, perhaps it is a good tool.

Kerry Main

Apr 21, 2016, 4:15:04 PM
to comp.os.vms to email gateway
> -----Original Message-----
> From: Info-vax [mailto:info-vax...@info-vax.com] On Behalf Of
> Stephen Hoffman via Info-vax
>
> On 2016-04-21 17:42:37 +0000, Kerry Main said:
>
> > Gotta love all that optimism though .. using the same logic, those dumb
> > college kids who started a college web site called Facebook should have
> > realized they had no hope of creating anything worthwhile.
>
> Some folks still don't think Facebook is worthwhile.
>

Well, if OpenVMS could get the same number of users as FB, I am sure VSI
would be ok with some people thinking OpenVMS was not worthwhile.

http://www.statista.com/statistics/264810/number-of-monthly-active-facebook-users-worldwide/

"This statistic shows a timeline with the worldwide number of monthly
active Facebook users from 2008 to 2015. As of the fourth quarter of 2015,
Facebook had 1.59 billion monthly active users. In the third quarter of 2012,
the number of active Facebook users had surpassed 1 billion. Active users
are those which have logged in to Facebook during the last 30 days."

Not bad for a few college kids who built something from nothing.

Course, managing this number of users in SYSUAF might be a tad bit of a
challenge ...

Stephen Hoffman

Apr 21, 2016, 6:02:30 PM
On 2016-04-21 20:10:00 +0000, Kerry Main said:

>> On 2016-04-21 17:42:37 +0000, Kerry Main said:
>>
>>> Gotta love all that optimism though .. using the same logic, those dumb
>>> college kids who started a college web site called Facebook should have
>>> realized they had no hope of creating anything worthwhile.
>>
>> Some folks still don't think Facebook is worthwhile.
>
> Well, if OpenVMS could get the same number of users as FB, I am sure
> VSI would be ok with some people thinking OpenVMS was not worthwhile.

So.... OpenVMS for free, and VSI will track your activities and serve
up ads? Oh, wait, that's basically the Microsoft Windows software
model now. Sorry. My bad.

> Course, managing this number of users in SYSUAF might be a tad bit of
> a challenge ...

Managing OpenVMS with something as limited and as limiting as SYSUAF is
a bit of a challenge.

But then this is the market that OpenVMS now has to be profitable in,
too. Ad-supported and/or free software has become quite common, and —
while on the subject of SYSUAF — much deeper integration with Windows
Server or other LDAP providers is now an expectation, and — on the
subject of Facebook and where Microsoft is headed – deeper integration
with remote and hosted services such as FB authentication or Azure
services. VSI has taken the first step of getting rid of the old
non-ACME bits, and has another large project or two to go to get to
somewhat better integration.

Steven Schweda

Apr 21, 2016, 6:52:46 PM
> "[...] you trusted us". Urban legend maybe, but I believe it.

Sounds more like "Animal House" than reality. Come on, Flounder.

Kerry Main

Apr 21, 2016, 8:05:05 PM
to comp.os.vms to email gateway
> -----Original Message-----
> From: Info-vax [mailto:info-vax...@info-vax.com] On Behalf Of
> David Froble via Info-vax
>

[snip..]

>
> What? Just a better user interface for a bulletin board?

Actually, just a better interface for VMS Notes and VMS Phone .. from
the 70's and 80's ..

Ooops - was that my outside voice?

David Froble

Apr 21, 2016, 11:20:39 PM
Why would one choose to do that? It's part of the application. I'd have a
database with what I needed for users, not what VMS needs to set up an
interactive user. The way I understand it, Facebook has no interactive users,
as we understand the term.

Kerry Main

Apr 22, 2016, 8:30:05 AM
to comp.os.vms to email gateway
> -----Original Message-----
> From: Info-vax [mailto:info-vax...@info-vax.com] On Behalf Of
> David Froble via Info-vax
Apologies - did not think I needed a smiley face when I suggested using SYSUAF.

Course, one would need a distributed directory DB for something like
this. Directory DBs like LDAP are somewhat different from regular DBs,
as they are designed for very high-speed local reads with replicated
updates across many locations to facilitate single sign-on (SSO). (A
minimal bind/search sketch follows the lists below.)

As an example - an interesting product I found recently (still looking into)
http://www.idmworks.com/iam-integration-software/openvms-connector/

http://www.idmworks.com/wp-content/uploads/2015/09/HP-OpenVMS-IDMWORKS-IdentityForge-Connector-Datasheet.pdf

- LDAP Authentication
- Standard LDAPv3 Interface
- Password Management
- Bi-Directional User Profile Synchronization
- UAF RIGHTS Data Management

Supported Integrations

- Oracle Identity Manager (OIM)
- IBM Tivoli
- Microsoft FIM 2010
- Dell Identity & Access Services - Quest One
- Dot NET Factory EmpowerID
- CA Identity Manager
- SAP Netweaver IdM
- Avatier Identity & Access Risk
- NetIQ Identity & Governance
- Identropy SCUID Platform
- VOICETRUST Biometrics
- Sailpoint Access & Compliance
- Aveksa Access & Governance
- OpenIAM
- IdentityLogix SpyLogix Module
- ForgeRock OpenIDM
- Courion Identity & Access
- Cyber-Ark Password Manager
- Hitachi ID Password Manager
- CloudAccess IDM & SEIM
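
As a rough illustration of the bind-based authentication these
connectors expose, here is a minimal sketch using the Python ldap3
library; the hostname, DNs, and attribute names are hypothetical
placeholders, not anything taken from the IdentityForge datasheet.

# Minimal LDAP authenticate-then-read sketch (Python, ldap3 library).
# Hostname, DNs, and attributes below are hypothetical examples.
from ldap3 import Server, Connection, ALL

def authenticate(username, password):
    server = Server('ldaps://directory.example.com', get_info=ALL)
    user_dn = f'uid={username},ou=People,dc=example,dc=com'
    # A successful simple bind is the authentication check itself.
    conn = Connection(server, user=user_dn, password=password)
    if not conn.bind():
        return None                    # bad credentials
    # Read back the profile the directory replicates for SSO.
    conn.search('dc=example,dc=com', f'(uid={username})',
                attributes=['cn', 'memberOf'])
    entry = conn.entries[0] if conn.entries else None
    conn.unbind()
    return entry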

Stephen Hoffman

Apr 22, 2016, 9:14:18 AM
On 2016-04-22 03:20:39 +0000, David Froble said:

> Kerry Main wrote:
>>
>> Course, managing this number of users in SYSUAF might be a tad bit of a
>> challenge ...
>
> Why would one choose to do that? It's part of the application. I'd
> have a database with what I needed for users, not what VMS needs to set
> up an interactive user.

If I were starting a new app or overhauling an existing app and LDAP
were present and better integrated with OpenVMS, I'd likely be using
that. If not and/or if necessarily rolling my own, then likely a
relational database.

Biggest issue with rolling my own is dealing with brute-forcing and the
rest of the dreck — the baseline authentication is easy and scrypt or
other intentionally-glacial cryptographic hashes are available, but
it's the evasion and password management and the rest that adds details
to the design and implementation effort.
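
For that easy baseline piece, a minimal sketch using scrypt from
Python's standard library (hashlib.scrypt, Python 3.6+); the cost
parameters are illustrative and would need tuning, and none of the
brute-force evasion or password-management machinery is shown.

# Baseline password hashing with an intentionally-glacial KDF (scrypt).
# Cost parameters (n, r, p) are illustrative; tune to your latency budget.
import hashlib, hmac, os

def hash_password(password):
    salt = os.urandom(16)                      # per-user random salt
    digest = hashlib.scrypt(password.encode(), salt=salt,
                            n=2**14, r=8, p=1, dklen=32)
    return salt, digest                        # store both

def verify_password(password, salt, digest):
    candidate = hashlib.scrypt(password.encode(), salt=salt,
                               n=2**14, r=8, p=1, dklen=32)
    # Constant-time compare, to avoid leaking a match via timing.
    return hmac.compare_digest(candidate, digest)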

> The way I understand it, Facebook has no interactive users, as we
> understand the term.

Most servers don't, either. Shell logins are usually for development,
debugging or devops, and then only when there's no better alternative.

IanD

Sep 9, 2016, 7:37:11 AM
On Sunday, April 10, 2016 at 2:45:04 AM UTC+10, Kerry Main wrote:
>
> For those so inclined to think beyond just X86-64, these links may be of
> some interest:
>
> April 7, 2016:
> http://www.nextplatform.com/2016/04/07/ibm-unfolds-power-chip-roadmap-past-2020/
>
> Nov 16, 2015:
> http://www.nextplatform.com/2015/11/16/openpower-accelerated-computing-will-be-the-new-normal/
>
> Regards,
>
> Kerry Main
> Kerry dot main at starkgaming dot com

https://www.hpcwire.com/2016/08/30/ibm-unveils-power9-details/

Looks like IBM is going to keep pushing the Power platform onto bigger and better...

Quote:

August 30, 2016
IBM Advances Against x86 with Power9
Tiffany Trader

After offering OpenPower Summit attendees a limited preview in April, IBM is unveiling further details of its next-gen CPU, Power9, which the tech mainstay is counting on to regain market share ceded to rival Intel. Built on GlobalFoundries 14nm finFET process technology, Power9 will be the centerpiece in Power-based servers starting in the second half of 2017. The highlight of the release is a brand new core and chip architecture that IBM has optimized for technical/HPC workloads, hyperscale, analytics and machine learning applications.

Although system availability hasn’t been announced yet, IBM has already landed a major win for its forthcoming Power9 platform. Back in November 2014, IBM, Mellanox and Nvidia were tapped to provide the DOE with two ~200-petaflops machines: Summit and Sierra. The $325 million contract specifies that the machines will employ Power9 CPUs and Volta GPUs when they come online next year.

IBM also has buy-in from Google, no small proof point in an era when hyperscalers exert substantial influence on the market. At the 2016 OpenPower Summit, Google said that the majority of its infrastructure had been ported to Power and that for most Googlers, enabling Power is a matter of a config change. Google is also working with Rackspace on a Power9 server, called Zaius, a design that will then be submitted to the Open Compute Project.

etc

PowerX-VMS anyone? :-)

Kerry Main

unread,
Sep 9, 2016, 10:35:05 AM9/9/16
to comp.os.vms to email gateway
> -----Original Message-----
> From: Info-vax [mailto:info-vax...@rbnsn.com] On
> Behalf Of IanD via Info-vax
> Sent: 09-Sep-16 7:37 AM
> To: info...@rbnsn.com
> Cc: IanD <iloveo...@gmail.com>
> Subject: Re: [Info-vax] PowerX Roadmap - Extended
> beyond 2020
>
Hey, I would love to see VSI do OpenVMS on Power9 at some future point. As a reminder, VSI has always stated that the X86-64 port was part of the journey, not an end point.

Having stated this, one should never make a port decision based purely on technical reasons.

One needs to "skate where the puck is going to be - not where it is right now" (great Gretzky quote).

Imho, the industry is moving back to what I would call, in simplified terms, "a return to client-server" ... but with a slightly different model and emphasis, i.e. for lack of a better term, "SC-SS" (secure-client to secure-server) ... back-to-the-future IT.

By this I mean where the "client" is one of many, many different "secure thin clients" - cell phones, laptops, notebooks, game consoles, and as much as I hate to use industry hype - the "IOT" (fridges, pop machines, scanners .. whatever).

This is a great future market for VSI to market OpenVMS and ARM/X86-64. This is a volume compute focus.

Wrt servers, I mean where the "secure server" represents the traditional back-end big-server environment: lights out, very high HA, multi-site DC models, big core servers with high compute, huge TB memory, ultra-low latency, fast local IO, and a reduction of the many different HIGH-latency LAN network tiers so common today. Network tier consolidation.

This is a great future market for VSI to market OpenVMS and Power9/X86-64. This is a mission critical compute focus.

Why Power9? It's not just because the Power architecture has some really strong differentiators over X86-64 (especially its dedicated very high throughput workload accelerator architecture), but also because an improved Power9 partnership with IBM would make it easier for IBM to justify releasing its many software products on VSI OpenVMS and Power9.

From the link provided by Ian:
https://www.hpcwire.com/2016/08/30/ibm-unveils-power9-details/

Quote "As we’re moving into the post-Moore’s law era, you can’t just turn the crank and make the general-purpose processor faster,” said Starke. “It’s our believe that you’re going to see more and more specialized silicon. That can be in the form of on-chip acceleration, but as you can see from our approach, we tend to believe it’s more flexible and deployable with off-chip acceleration. Obviously it requires extreme bandwidth, low-latency, and tight integration with your main processor complex, but that’s where we see the future of computing going and you see us putting very strong investments in these directions.”

Or stated more simply - skate where the puck is going to be.

I also think, if implemented, an OpenVMS / Power9 platform would have a much better chance of future success as a mainframe alternative than ANY OS on a X86-64 platform - especially if the IBM LP products were available on the platform to assist with selling Power9 servers. One has to understand the mainframe culture - most really (really) hate all UNIX (including AIX) and Linux and view them as "distributed systems" (mainframe talk for "not real production systems"). On the other hand, mainframe types have a much higher level of respect for OpenVMS.

Another good link on Power9 (more technical): August 24, 2016
http://www.nextplatform.com/2016/08/24/big-blue-aims-sky-power9/

Imho, this would be a great "blue oceans" future strategy.

IanD

unread,
Sep 10, 2016, 3:58:56 PM9/10/16
to
On Saturday, September 10, 2016 at 12:35:05 AM UTC+10, Kerry Main wrote:

<snip>

> Hey, I would love to see VSI do OpenVMS on Power9 at some future point. As a reminder, VSI has always stated that the X86-64 port was part of the journey, not an end point.
>
> Having stated this, one should never make a port decision based purely on pure technical reasons.
>

+1

Digital created some great technologies, but when the market changed direction they failed to give customers what they actually wanted.

> One needs to "skate where the puck is going to be - not where it is right now" (great Gretzky quote).
>
> Imho, the industry is moving back to what I would call in simplified terms "a return to client-server". ... but with a slightly different model and emphasis i.e. for lack of a better term "SC-SS" (secure-client to secure-server) ... Back to the future IT.
>
> By this I mean where the "client" is one of many, many different "secure thin clients" - cell phones, laptops, notebooks, game consoles, and as much as I hate to use industry hype - the "IOT" (fridges, pop machines, scanners .. whatever).
>

I may not have been around technology as long as some folk here, but computing has been in a continual pendulum swing back and forth between centralization and decentralization for as long as I have been involved (and that started in high school, when I worked for 3 months to buy my little Casio fx-720P, on which I learned to program in BASIC, at a time when computers were something you only touched at uni - and even then they were for the nerds). Each swing fills in more options and offerings. We will eventually get to a point where the whole spectrum is covered, and finally you'll be able to mix n match what you need / want, instead of having to squeeze your technology wants into pre-defined sizes.

I was speaking to an engineer (non-IT; military / heavy vehicle design) and he was telling me how much BS he thought the whole IoT was, because he didn't want to know how his toaster was 'thinking'. I laughed and told him that on an individual level perhaps he was right, but that the real benefit will be around the collation of data coming from, say, all the toasters in one suburb, and having that data used by supermarkets for the prediction of bread sales and better just-in-time ordering / supplying. I must have triggered something, because months later I saw him and he'd been investigating the whole IoT for possible applications in some of the delivery logistics businesses he consults for.

I think IoT will become rather large as a field, not so much because individuals want to know how many liters of water their toilet used that day, but because the companies out there want to find some way to monetise all that data - to push more crap products down people's throats and/or spin them a tale of how much they can improve their lives by selling them this/that package *sigh*

Not sure where OpenVMS is going to fit in the IoT picture; it's not lean enough, nor is its file system quick enough, to act as a data collector. Maybe as an aggregator?

> This is a great future market for VSI to market OpenVMS and ARM/X86-64. This is a volume compute focus.
>

I certainly hope so

If it also runs on hardware that we mere mortals can buy, and the license fees don't require me to sell both kidneys (clustering for example), then I'll be in line to buy.

The hobbyist program is great and I use it now, but I'd be happy to buy a home license, much like what MS offers for their office product where you can install on up to X computers.

> Wrt to servers, I mean where the "secure server" represents the traditional back end big server environment with lights out, very high HA, multi-site DC models, big core servers with high compute, huge TB memory, ultra-low latency, fast local IO and a reduction of the many different HIGH latency LAN network tiers so common today. Network tier consolidation.
>

Security is going to become probably one of the main focus points going forward, imo. Big businesses are going to wake up and realize the internet at large is a nasty place and getting worse, and no government with their draconian laws and dictates can legislate to fix it. The only option going forward will be for big business to take stuff back under their close guard and to have their own experts actively engaged in protecting their own systems around the clock, on a continual basis. I know the drivers for the failure of globalisation are different, but this too will drive companies to focus their attention in-house.

I was just skimming an article yesterday about the need for next-level encryption techniques and the need to start think-tanking solutions. The rationale behind this discussion was that quantum computing will begin to come online in the next decade, and with it the threat to existing security techniques. Highly interesting stuff.

OpenVMS-Quantum, lol

> This is a great future market for VSI to market OpenVMS and Power9/X86-64. This is a mission critical compute focus.
>
> Why Power9? It's not just because the Power architecture has some really strong differentiators over X86-64 (especially with dedicated very high throughput workload accelerators arch), but also because an improved Power9 partnership with IBM would also get them to more easily justify releasing their many software products on VSI OpenVMS and Power9.
>
> From the link provided by Ian:
> https://www.hpcwire.com/2016/08/30/ibm-unveils-power9-details/
>
> Quote "As we’re moving into the post-Moore’s law era, you can’t just turn the crank and make the general-purpose processor faster,” said Starke. “It’s our believe that you’re going to see more and more specialized silicon. That can be in the form of on-chip acceleration, but as you can see from our approach, we tend to believe it’s more flexible and deployable with off-chip acceleration. Obviously it requires extreme bandwidth, low-latency, and tight integration with your main processor complex, but that’s where we see the future of computing going and you see us putting very strong investments in these directions.”
>
> Or stated more simply - skate where the puck is going to be.
>

Intel have been saying for quite some time now that parallel coding techniques are going to be more important than ever going forward. Just cranking up the clock speed is fast becoming a diminishing return

IBM make some great servers, VSI and IBM might do well together and might slow the wretched Oracle onslaught that's been going on in the marketplace the last decade (I am not a fan of Oracle as you might be able to tell)

OpenVMS really needs a strong DB. I doubt Oracle will sell Rdb, and the other open-source DBs seem to have stalled on OpenVMS, so what to do in this area?
DB engine rankings show NoSQL still gaining market share, as is MS SQL Server.

> I also think, if implemented, an OpenVMS / Power9 platform would have a much better chance of future success as a mainframe alternative than ANY OS on a X86-64 platform - especially if the IBM LP products were available on the platform to assist with selling Power9 servers. One has to understand the mainframe culture - most really (really) hate all UNIX (including AIX) and Linux and view them as "distributed systems" (mainframe talk for "not real production systems"). On the other hand, mainframe types have a much higher level of respect for OpenVMS.
>
> Another good link on Power9 (more technical): August 24, 2016
> http://www.nextplatform.com/2016/08/24/big-blue-aims-sky-power9/
>
> Imho, this would be a great "blue oceans" future strategy.
>
> Regards,
>
> Kerry Main
> Kerry dot main at starkgaming dot com

Thanks for those links, I'm heading off to look at them now... :-)

David Froble

unread,
Sep 10, 2016, 5:55:13 PM9/10/16
to
IanD wrote:

> Not sure where OpenVMS is going to fit in the IoT picture, it's not lean
> enough or it's file system not quick enough to act as a data collector. Maybe
> as an aggregator?

You've stated things like this in the past. You got any citations, facts, or
such to back up your statements?

While I don't have any specifics, I remember reading years ago how the size of
VMS compared to weendoze, and the comparison was rather favorable for VMS. Much
smaller footprint.

With the memory available today, I'm not sure how much a difference in footprint
matters, in comparison to capabilities.

You've also got to differentiate between the OS and the utilities that come with
it. In an embedded situation, many of the utilities would perhaps not be included.

When you mention "file system", are you really referring to the file system, or
to RMS? I can do some rather fast I/O on VMS.

Kerry Main

unread,
Sep 10, 2016, 6:00:04 PM9/10/16
to comp.os.vms to email gateway
> -----Original Message-----
> From: Info-vax [mailto:info-vax...@rbnsn.com] On
> Behalf Of IanD via Info-vax
> Sent: 10-Sep-16 3:59 PM
> To: info...@rbnsn.com
> Cc: IanD <iloveo...@gmail.com>
> Subject: Re: [Info-vax] PowerX Roadmap - Extended
> beyond 2020
>
> On Saturday, September 10, 2016 at 12:35:05 AM UTC+10,
> Kerry Main wrote:
>
> <snip>
>
> > Hey, I would love to see VSI do OpenVMS on Power9 at
> some future point. As a reminder, VSI has always stated
> that the X86-64 port was part of the journey, not an end
> point.
> >
> > Having stated this, one should never make a port
> decision based purely on pure technical reasons.
> >
>
> +1
>

[snip..]

>
> Not sure where OpenVMS is going to fit in the IoT picture;
> it's not lean enough, nor its file system quick enough, to
> act as a data collector. Maybe as an aggregator?

These are "thin clients" with one user type workloads.

Imho, given the power of chip HW today, performance is not that much of an issue. OpenVMS 8.4-2 boots and runs fine on my Alpha system with 128MB memory. License costs and - the biggie - license simplicity, dependability / stability / security (you don't want monthly patches) and industry / ISV acceptance are the bigger challenges.

OpenVMS/ARM might be a nice volume play option in the longer term if all of the above could be addressed.

>
> > This is a great future market for VSI to market OpenVMS
> and ARM/X86-64. This is a volume compute focus.
> >
>
> I certainly hope so
>
> If it also runs on hardware that we mere mortals can buy
> and the license fees don't require me to sell both kidneys
> (clustering for example) then I'll be in line to buy
>

X86-64 HW is pretty cheap today.

Actually, besides all of the various emulators, Alpha HW is pretty cheap as well as long as you are ok with doing your own support.

Course, it depends on your location as well.

Simply google "ebay DS10" or "ebay ds10L"

[snip]

> > Another good link on Power9 (more technical): August
> 24, 2016
> > http://www.nextplatform.com/2016/08/24/big-blue-
> aims-sky-power9/
> >
> > Imho, this would be a great "blue oceans" future
> strategy.
> >
> > Regards,
> >
> > Kerry Main
> > Kerry dot main at starkgaming dot com
>
> Thanks for those links, I'm heading off to look at them
> now... :-)
>

Btw, for those that like to keep current with compute HW and supporting infrastructure technology, the web site "The Next Platform" is pretty good: it covers ARM, X86-64 and PowerX, as well as high-speed interconnects and some of the less well known HPC technologies from various vendors.

http://www.nextplatform.com

IanD

unread,
Sep 15, 2016, 1:49:08 PM9/15/16
to
When one looks at things like the Gartner report on IoT for 2017 - 2018, the power requirements of devices are going to need to be extremely low.

As to the slowness of VMS file systems, I was referring to RMS, since that is the layer most people work with. I doubt anyone developing for IoT is going to bother with anything lower-level than the native file system on the device / OS they are implementing on, especially since there are already protocols and libraries out there that people are leveraging for software development (which will still have to be ported to OpenVMS if OpenVMS is going to participate).

I would be extremely surprised if anyone wrote code to go block mode I/O on OpenVMS for data capture in the IoT space either

High transaction rate environments resort to things like sharding and distributed DBs like NoSQL Cassandra etc., as well as other techniques. So far OpenVMS doesn't have anything like these technologies, to my limited knowledge. At the device level the option is striping, but then you get hit with lack of redundancy, which isn't going to fly in most environments - and even striping isn't going to save you for lots of small data writes, which is what IoT will be primarily focused on.

In time, OpenVMS might participate in some up-stream data aggregation but I seriously don't see it acting in the data collection part of the spectrum

The sorts of things being looked at for IoT are ballooning, and the spectrum of what people are wanting to capture data on is growing all the time.

It's going way beyond wanting to capture data out of your toaster, there is not much of a commercial drive behind wanting to know how your toaster performed last night ;-)

Things like Smart Concrete however and items used in public infrastructure are certainly prime targets. Knowing if/when public infrastructure like a bridge might collapse or be subject to extreme forces etc are of high interest.

Imagine a dam with literally tens of thousands of collection points embedded in the concrete, all sampling and sending their data back. You are talking about a lot of small, quick data packets.

There is a dam not that far from where I live. It's small, but it's 66 m x 390 m. If you place a sensor in the concrete at, say, 1 m intervals, you're talking about 25K sensors. If you sample at even a paltry 2 times per second - which for embedded devices is barely out of a sleep cycle - that's 50K samples per second of data. Can RMS take in data at those rates without issue? 50K writers at once?
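
Back-of-envelope (my numbers; the per-sample payload size is a pure assumption), the aggregate rate is trivial as bandwidth but punishing as I/O operations:

    # Back-of-envelope ingest estimate for the hypothetical dam above.
    sensors      = 25_000       # ~1 m grid over 66 m x 390 m
    rate_hz      = 2            # samples per sensor per second
    sample_bytes = 64           # assumed: id + timestamp + value

    samples_per_sec = sensors * rate_hz                      # 50,000/s
    mbit_per_sec    = samples_per_sec * sample_bytes * 8 / 1e6

    print(samples_per_sec, 'writes/s,', mbit_per_sec, 'Mbit/s')
    # -> 50000 writes/s, 25.6 Mbit/s: easy bandwidth, brutal IOPS
    #    if every sample becomes its own small synchronous write.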

http://h20565.www2.hpe.com/hpsc/doc/public/display?docId=emr_na-c04618690

This was an interesting find - OpenVMS with SSD support. Some of the upper range shown here is below even the modest example I made up above for the dam, and HP were testing 4K writes, not what IoT will be targeting, which will probably be under-1K writes. I really think (without proof) that RMS will bottleneck quickly, especially in trying to keep its index current.

IoT will drive the whole data / storage industry up another notch

We will see the early adopters take the lion's share of the IoT space, and I happen to think that will be Linux yet again :-( I really don't think OpenVMS is in any shape at present to even begin to participate; it's having enough fun and games getting itself onto x86.

The rebuilding of OpenVMS is going to need to address why people abandoned the platform in the first place; it's not just a lack of x86 support. People are coding for other architectures currently, and are doing so, I think, primarily because of good porting tools and excellent development frameworks - and open source is now not just a nice-to-have but essential.

On a philosophical front, man seems hell-bent on sampling everything possible in the hope of controlling his environment and ultimately planning his existence. I happen to think it's folly to pursue such things to the nth degree, but until this approach is abandoned, expect IoT to keep getting wilder in its hype and promises. I mean, if central banks cannot give up on their notion of a controlled economy (yeah, how well has that been for the planet!), then what hope is there that IoT will be de-hyped in the near future? i.e. none!

Kerry Main

unread,
Sep 15, 2016, 4:55:04 PM9/15/16
to comp.os.vms to email gateway
> -----Original Message-----
> From: Info-vax [mailto:info-vax...@rbnsn.com] On Behalf
> Of IanD via Info-vax
> Sent: 15-Sep-16 1:49 PM
> To: info...@rbnsn.com
> Cc: IanD <iloveo...@gmail.com>
> Subject: Re: [Info-vax] PowerX Roadmap - Extended beyond 2020
>
RMS is not as slow as one thinks. It can be very fast if you
understand the application and use direct I/Os.

With all relational DBs, there is usually an internal function
called a query optimizer. It receives a query, then determines
whether the query should be executed in index or sequential mode.
There is overhead associated with this, and the optimizer's
output is not always correct (logic errors in the query, or
optimizer bugs), so a query that should use an index might
incorrectly go sequential. This is the classic case where a
query that normally takes 5 seconds suddenly takes over a
minute. The symptom is much higher than normal DB I/Os.
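
The effect is easy to demonstrate with any SQL engine. A made-up
sketch using Python's bundled sqlite3 (the table and queries are
invented for illustration; the exact plan text varies by engine
version):

    # Sketch: how a plan flips between index use and a sequential scan.
    import sqlite3

    db = sqlite3.connect(':memory:')
    db.execute('CREATE TABLE readings (sensor_id INTEGER, val REAL)')
    db.execute('CREATE INDEX ix_sensor ON readings (sensor_id)')

    # Predicate on the indexed column: optimizer picks the index.
    for row in db.execute('EXPLAIN QUERY PLAN '
                          'SELECT * FROM readings WHERE sensor_id = 42'):
        print(row)          # ... USING INDEX ix_sensor ...

    # Predicate on an unindexed column: optimizer must go sequential.
    for row in db.execute('EXPLAIN QUERY PLAN '
                          'SELECT * FROM readings WHERE val > 1.5'):
        print(row)          # ... SCAN readings ...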

Now, the current RMS design has issues with maintenance, online
backups and likely a few other things, so there is a trade-off.

However, why look at addressing future requirements with today's
technology?

We know there is a new file system coming on OpenVMS and I would
expect quite a few of the current RMS issues to be addressed with
the new design - including better performance.

> I would be extremely surprised if anyone wrote code to go block
> mode I/O on OpenVMS for data capture in the IoT space either
>
> High transaction rate environments resort to items like
sharding
> and distributed DB's like NoSQL Cassandra etc as well as other
> techniques. So far OpenVMS doesn't have anything like these
> technologies to my limited knowledge. At the device level the
> options are stripping but then you get hit with lack of
redundancy
> which isn't going to fly in most environments and even
stripping
> isn't going to save you for lots of small data writes which is
what
> IoT will be primarily focused on
>

There are huge load-balancing trade-offs with distributed DB
sharding. In a nutshell, you assign different parts of the DB to
specific nodes. Each node can directly update only the part of
the DB it is assigned. If one part of the DB becomes a hot
spot that exceeds the capacity of that single node, then your
only options are to replace that server with a bigger system,
re-design and re-partition the DB, or return an error to the
application.
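
In code terms the assignment is just a deterministic key-to-node
mapping. A toy sketch (node names are made up), which also shows
why a hot key is stuck on one node:

    # Toy hash-based sharding: each key is owned by exactly one node.
    import hashlib

    NODES = ['node-a', 'node-b', 'node-c']   # hypothetical shard servers

    def shard_for(key):
        h = int(hashlib.md5(key.encode()).hexdigest(), 16)
        return NODES[h % len(NODES)]

    # Every update for this key lands on the same node, which is why a
    # hot key saturates one node while the others idle, and why adding
    # a node means re-partitioning the whole keyspace.
    print(shard_for('customer:1001'))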

You have to design a sharded DB exceptionally well so you really
need to understand your workloads. That is the core of the
NonStop world. In their financial world, they understand their
transactions very well, but the big 800lb gorilla in every
NonStop environment is what happens if a workload exceeds the
capacity of one node?

A good WP that compares shared everything/disk DB's (OpenVMS,
Linux/GFS, z/OS) vs. shared nothing (Linux, Windows, UNIX,
NonStop) can be found here:
http://www.scaledb.com/wp-content/uploads/2015/11/Shared-Nohing-vs-Shared-Disk-WP_SDvSN.pdf
"Comparing shared-nothing and shared-disk in benchmarks is
analogous to comparing a dragster and a Porsche. The dragster,
like the hand-tuned shared-nothing database, will beat the
Porsche in a straight quarter mile race. However, the Porsche,
like a shared-disk database, will easily beat the dragster on
regular roads. If your selected benchmark is a quarter mile
straightaway that tests all out speed, like Sysbench, a
shared-nothing database will win. However, shared-disk will
perform better in real world environments."
See notes above.

Remember - new file system and other new core things coming.

We should stop trying to address the 5+ year requirements of
tomorrow using today's limitations, when we know OpenVMS has a
new engine (file system), new wheels (TCPIP stack) and a new
body (X86-64) coming in the next 18-24 months.

Also, as pointed out in the WP, one needs to consider the
benefits of being able to load balance IO requests across all
available back-end systems vs. DB sharding across many small
systems over high-latency LAN networks (network write latency
vs. local memory / flash disk) - and to do so before the system
is deployed, rather than having to deal with hot spots,
unplanned workloads or DR later.

> IoT will drive the whole data / storage industry up another
notch
>
> We will see the early adopters take the lions share of the IOT
> space and I happen to think that will be linux yet again :-(, I
really
> don't think OpenVMS is in any shape at present to even begin to
> participate, it's having enough fun and games getting itself
onto
> x86
>
> The rebuilding of OpenVMS is going to need to address why
> people abandoned the platform in the first place, it's not just
a
> lack of x86 support. People are coding for other architectures
> currently and are doing so I think primarily because of good
> porting tools and excellent development frameworks and Open
> source is now just not a nice to have but an essential
>

Open source is indeed another good tool to have on one's tool
belt. More tools usually make for a better carpenter.

However, there are trade-off's. Each solution architect
(carpenter) has to review these to determine what tools are right
for their environment.

In most cases, there will likely be a mix of custom code and open
source.

> On a philosophical front, man seems hell bent on sampling
> everything possible in the hope of controlling his environment
> and ultimately planning his existence. I happen to think it's
folly to
> pursuit such things to the nth degree but until this approach
as
> abandoned then expect IoT to keep getting more wild in it's
hype
> and promises. I mean if central banks cannot give up on their
> notion of a controlled economy (yeah, how well has that been
for
> the planet!), then what hope is there that IoT will be de-hyped
in
> the near future? i.e. none!
>

IoT is like Public Cloud, SDN, IT Utility, Adaptive Enterprise,
SOA, Real-Time Enterprise and a host of so many other industry
hype terms.

There is some truth - and a good deal of re-invention of existing
technologies - behind each of these, but the definition of each is
left up to the individual, so in the end, you can define these
terms as anything you want.

johnwa...@yahoo.co.uk

unread,
Sep 15, 2016, 6:07:35 PM9/15/16
to
Please treat the InterwebOfTat hype with the respect most
of it deserves.

Acoustic monitoring of major structures (bridges, dams, etc) is
at least 40 years old. Here's a 1976 reference:
https://trid.trb.org/view.aspx?id=45894

If this stuff is to be reliable enough for safety related
work, you keep it simple. Maybe a few transducers and
some wires, not hundreds of transducers a few feet apart,
each connected to a Pi Zero and a mesh network and a GUI
and a database and TwitFace and such.

Every IoT outfit and his dog is looking for stuff they think
will increase their IPO value or their share prices. Some of
it might even catch on, but the majority of it will go the
same way as the PDA (for those who remember them) and many
other similarly Gartner-endorsed trends from years gone by.

Have a lot of fun.

David Froble

unread,
Sep 16, 2016, 12:13:15 AM9/16/16
to
IanD wrote:
> On Sunday, September 11, 2016 at 7:55:13 AM UTC+10, David Froble wrote:
>> IanD wrote:
>>
>>> Not sure where OpenVMS is going to fit in the IoT picture, it's not lean
>>> enough or its file system not quick enough to act as a data collector. Maybe
>>> as an aggregator?
>> You've stated things like this in the past. You got any citations, facts, or
>> such to back up your statements?
>>
>> While I don't have any specifics, I remember reading years ago how the size of
>> VMS compared to weendoze, and the comparison was rather favorable for VMS. Much
>> smaller footprint.
>>
>> With the memory available today, I'm not sure how much a difference in footprint
>> matters, in comparison to capabilities.
>>
>> You also got to differentiate between the OS and the utilities that come with
>> it. In an embedded situation, much of the utilities would perhaps not be included.
>>
>> When you mention "file system", are you really referring to the file system, or
>> to RMS? I can do some rather fast I/O on VMS.
>
> When one looks at things like the Gartner report on IoT for 2017 - 2018, the
> power requirements of devices is going to need to be extremely low

Again, I ask, what has this to do with VMS? That's a HW issue. You might say
that VMS doesn't run on light weight HW, today, but that doesn't mean it
couldn't in the future. However, just talking about VMS as an OS, perhaps it
just might work well in some new environments.

> As to the slowness of VMS file systems, I was referring to RMS since that is
> the layer most people work with and I doubt anyone that is developing for IoT
> is going to bother with anything lower level than the native file system on
> the device / OS they are implementing on, especially since there are already
> protocols and libraries out there that people are leveraging for software
> development (which will still have to be ported to OpenVMS if OpenVMS is
> going to participate)

Don't know why you insist on RMS. I haven't used it since before 1984. If I
was working on something new, with perhaps special file I/O requirements, then
I'd consider how I might best do the job.

> I would be extremely surprised if anyone wrote code to go block mode I/O on
> OpenVMS for data capture in the IoT space either

Go ahead and work on that "extremely surprised" look. You might need it. Nor
is block I/O the only, or even preferred, method. Regardless, most or all
storage has the concept of "blocks" of data. Other things are built on top of
that. Even RMS. So, what else might you be considering?

> High transaction rate environments resort to items like sharding and
> distributed DB's like NoSQL Cassandra etc as well as other techniques.

Perhaps that's not necessary, or even close to optimum. Just because it's done
doesn't have much to do with it being good, or bad.

> So far
> OpenVMS doesn't have anything like these technologies to my limited
> knowledge. At the device level the options are striping but then you get hit
> with lack of redundancy which isn't going to fly in most environments and
> even striping isn't going to save you for lots of small data writes which is
> what IoT will be primarily focused on
>
> In time, OpenVMS might participate in some up-stream data aggregation but I
> seriously don't see it acting in the data collection part of the spectrum

Nor do I, but not for the reasons you suggest. There is no business case for
it, at this time. Nor do I see much chance of the world coming to VMS and
offering to pay for the required work to be done. VMS will flourish if it can
do things people need that other products can't or don't.

> The sorts of things being looked at for IoT is ballooning and the spectrum of
> what people are wanting to capture data on is growing all the time
>
> It's going way beyond wanting to capture data out of your toaster, there is
> not much of a commercial drive behind wanting to know how your toaster
> performed last night ;-)

But there might be a need for the toaster to operate some selected time after
the alarm goes off. But I doubt it, unless there is also a device to take fresh
bread from a wrapper prior to toasting it. Bread left out overnight won't be
very good in the morning.

> Things like Smart Concrete however and items used in public infrastructure
> are certainly prime targets. Knowing if/when public infrastructure like a
> bridge might collapse or be subject to extreme forces etc are of high
> interest.
>
> Imagine a dam with literally 10's of 1000's of collection points embedded in
> the concrete all sampling and sending their data back. You are talking about
> a lot of small quick data packets.
>
> There is a dam not that far from where I live. It's small but it's 66 m x
> 390 m. If you place a sensor in the concrete at say 1 m intervals, you're
> talking about 25K sensors. If you sample at even a paltry 2 times per second,
> which for embedded devices is barely out of a sleep cycle, that's 50K samples per
> second of data. Can RMS take in data at those rates without issue? 50K
> writers at once?
>
> http://h20565.www2.hpe.com/hpsc/doc/public/display?docId=emr_na-c04618690
>
> This was an interesting find, this is OpenVMS with SSD support. Some of the
> upper range shown here is below even the modest example I made up above for
> the dam and HP were testing 4K writes, not what IoT will be targeting, which
> will probably be under 1K writes. I really think (without proof) that RMS
> will bottleneck quickly, especially in trying to keep its index current

If you've been paying attention, you'll recall that many storage devices will be
doing 4K writes as a minimum. Nor is that required for each piece of data.
Data might first be marshaled in memory and written to storage in far more
optimal ways.
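
As a rough sketch of that marshaling idea (sizes and names are
illustrative assumptions, nothing more) - accumulate small records in
memory and flush in device-friendly multiples:

    # Batch small records in memory; flush in 4 KiB multiples rather
    # than issuing one tiny write per record. Sizes are illustrative.
    BLOCK = 4096

    class BatchWriter:
        def __init__(self, path):
            self.f = open(path, 'ab')
            self.buf = bytearray()

        def append(self, record):
            self.buf += record
            if len(self.buf) >= BLOCK:          # flush whole blocks only
                cut = len(self.buf) - (len(self.buf) % BLOCK)
                self.f.write(self.buf[:cut])
                del self.buf[:cut]

        def close(self):
            if self.buf:                        # final partial block
                self.f.write(self.buf)
            self.f.close()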

> IoT will drive the whole data / storage industry up another notch
>
> We will see the early adopters take the lion's share of the IoT space and I
> happen to think that will be linux yet again :-(, I really don't think
> OpenVMS is in any shape at present to even begin to participate, it's having
> enough fun and games getting itself onto x86
>
> The rebuilding of OpenVMS is going to need to address why people abandoned
> the platform in the first place, it's not just a lack of x86 support. People
> are coding for other architectures currently and are doing so I think
> primarily because of good porting tools and excellent development frameworks
> and Open source is now just not a nice to have but an essential

Wait! I know this one. Pick me! Pick me!

Perhaps it started back in the 1990s when DEC was telling people to move from VMS?

Perhaps it was influenced by Compaq's dropping Alpha, for that itanic thing?

Perhaps it was influenced by HP not really wanting VMS, by Stallard saying that
in time HP figured VMS users would move to HP-UX, and the 20 years or so of
ignoring any meaningful VMS work, with the firing of maybe the best software
team the world has ever seen and shipping the jobs, what was left of them, to India?

What is amazing is that VMS is still in use ....

Bob Koehler

unread,
Sep 16, 2016, 9:22:34 AM9/16/16
to
In article <nrfrgj$res$1...@dont-email.me>, David Froble <da...@tsoft-inc.com> writes:
>
> Don't know why you insist on RMS. I haven't used it since before 1984. If I
> was working on something new, with perhaps special file I/O requirements, then
> I'd consider how I might best do the job.

Done any file I/O on VMS lately? If not via RMS, then how?

If you so much as accessed a file by name and path, you used RMS.

Jan-Erik Soderholm

unread,
Sep 16, 2016, 9:46:28 AM9/16/16
to
For what we are talking about here (VMS as a back-end for
IoT-generated data/transactions) RMS might not be an issue. Oracle
Rdb doesn't use RMS for its general DB I/O. Only for specific
operations such as database backup, table load/unload, database
export/import and similar. Not for anything done during regular
operations.

Sometime back in the early 90's someone on INFO-VAX wrote:

> I have it from DEC that Rdb does NOT use RMS. It calls QIO's
> directly to avoid the (minimal) overhead of RMS, since it
> doesn't need anything which RMS provides and QIOs do not.


I do not know how other DBs such as MariaDB run their DB I/O...

So, for what we are talking about here, RMS might be a non-issue.


Jan-Erik.

David Froble

unread,
Sep 16, 2016, 11:05:31 AM9/16/16
to
Yes, you are correct. Didn't see any reason to re-invent the wheel. The RMS
filename parser is used.

But, once the file name is resolved, it's QIO and IO$PERFORM and such for the
rest of the operations. And, from my perspective, it's not too shabby.

It's not here, yet, but down the road there might be capabilities such as direct
I/O to and from NV memory.

David Froble

unread,
Sep 16, 2016, 11:12:15 AM9/16/16
to
Well, storage I/O isn't rocket science ....

Ignoring things strictly on the storage devices, you might have:

1) device drivers to talk to the storage devices

2) low level tools to talk to the device drivers, such as QIO, IO$PERFORM

3) medium level tools using QIO, IO$PERFORM such as RMS, DB products, and such

4) high level tools such as databases, RMS, and such that most programmers use

Very brief description, and VMS specific, but that's most of the structure

Simon Clubley

unread,
Sep 16, 2016, 3:12:52 PM9/16/16
to
On 2016-09-16, David Froble <da...@tsoft-inc.com> wrote:
> IanD wrote:
>>
>> When one looks at things like the Gartner report on IoT for 2017 - 2018, the
>> power requirements of devices is going to need to be extremely low
>
> Again, I ask, what has this to do with VMS. That's a HW issue. You might say
> that VMS doesn't run on light weight HW, today, but that doesn't mean it
> couldn't in the future. However, just talking about VMS as an OS, perhaps it
> just might work well in some new environments.
>

Once again... :-)

How many VMS systems do you think are capable of running on a single
PP3 battery or the equivalent current of a PP3 ?

Once you solve that problem with either an ARM or MIPS port how much
effort do you think it would then take to restructure VMS along the
lines of an embedded OS so that you can write a BSP to allow VMS to
be brought up on your custom ARM or MIPS board ?

This would also include the tools to generate a custom VMS bootable
image (which would include your application) which could be booted
directly (for example) from flash. (Note that this isn't a USB flash
drive but flash sitting directly on the memory bus.)

What do you think are the chances of VSI doing this work when there
are a large range of established embedded OS options, both real time
and otherwise, already in existence ?

IOW, it simply isn't going to happen.

Simon.

--
Simon Clubley, clubley@remove_me.eisner.decus.org-Earth.UFP
Microsoft: Bringing you 1980s technology to a 21st century world

Simon Clubley

unread,
Sep 16, 2016, 4:01:18 PM9/16/16
to
On 2016-09-16, Simon Clubley <clubley@remove_me.eisner.decus.org-Earth.UFP> wrote:
>
> How many VMS systems do you think are capable of running on a single
> PP3 battery or the equivalent current of a PP3 ?
>

That should be equivalent power output of a PP3, not current.

David Froble

unread,
Sep 16, 2016, 10:57:32 PM9/16/16
to
I never said it would happen. In fact, I do believe that I wrote that VSI would
never spend money on something from which they could never see any revenue.

But, it's not going to happen because of business reasons, not technical reasons.

Chris

unread,
Sep 17, 2016, 2:50:42 PM9/17/16
to
On 09/17/16 02:57, David Froble wrote:

> But, it's not going to happen because of business reasons, not technical
> reasons.

No, for good technical reasons as well. At minimum, any OS that could
compete in the embedded space needs a hardware abstraction layer or
similar for portability reasons, whereas VMS has always been closely
tied to and uses specific features of the underlying processor
architecture. There are dozens of different embedded cpu architectures,
each with their own quirks and requirements, modes of operation and more.

VMS is a very good general purpose OS designed for data center
applications. It is not optimised for speed, small footprint, power
efficiency, or memory requirements - the exact opposite of the needs
of most embedded work. Not to mention that there are already
dozens of real time OSs specifically designed for embedded work,
many with a long and successful track record.

Like chalk and cheese, in fact :-)...

Regards,

Chris


Simon Clubley

unread,
Sep 17, 2016, 6:41:13 PM9/17/16
to
On 2016-09-17, Chris <xxx.sys...@gfsys.co.uk> wrote:
> On 09/17/16 02:57, David Froble wrote:
>
>> But, it's not going to happen because of business reasons, not technical
>> reasons.
>
> No, for good technical reasons as well. At minimum, any os that could
> compete in the embedded space needs a hardware abstraction layer or
> similar for portability reasons, whereas VMS has always been closely
> tied to and uses specific features of the underlying processor
> architecture. There are dozens of different embedded cpu architectures,
> each with their own quirks and requirements, modes of operation and more.
>

If the above argument (and my own arguments) are not enough to
convince David, then he should consider one other thing:

VSI was set up in the middle of 2014. Their early release version
of x86-64 VMS is currently scheduled for 2018. Even if that's
January 2018, then that's still 3.5 years and that is a hideously
long time for a port to a new architecture by today's standards.

This shows how utterly non-portable the VMS code base is by today's
standards and how, amongst other things, there is way too much deep
architecture knowledge embedded within all layers of VMS instead of
being abstracted out into a cleanly separate layer. If this were
Linux (for example) or another OS written with portability in mind,
you would have had something in the hands of the customers way
before now.

I want to make it very clear that this is merely an observation and
is absolutely NOT a criticism of the original design of VAX/VMS.
At the time, the deeply embedded knowledge of the VAX architecture
in VAX/VMS was rightly seen as an advantage. However,
times change (and so do architectures), and those same
good-at-the-time original design decisions are now having a major negative
impact on the time it's taking to port VMS to x86-64.

Don't get me wrong, VSI appear to be doing a good job with the code
base that they are forced to work with but at the same time it also
shows how completely and utterly unsuitable VMS is for today's
embedded world where both portability across different architectures
and the ability to run on a custom designed board with a customer
supplied BSP is absolutely critical.

VMS simply isn't cut out for the BSP-based embedded OS model. Any
version of VMS which did match that model would effectively be a
rewrite of VMS based around modern OS abstraction concepts, with
any assembly language interaction with VMS pushed down into the
architecture-specific part of VMS only, where it would not be a
factor in how a rewritten VMS's user-level APIs were designed.

(This does not mean object orientated BTW. This merely means, among
various other things, portable data types being used in customer
code which is written to interact with those VMS user-level APIs.)

> VMS is a very good general purpose OS designed for data center
> applications. Not optimised for speed, small footprint, power
> efficiency, nor memory requirements, the exact opposite of those
> for for most embedded work. Not to mention that there are already
> dozens of real time os's specifically designed for embedded work,
> many with a long and successful track record.
>

I also agree with all that as well.

Dirk Munk

unread,
Sep 19, 2016, 3:26:12 AM9/19/16
to
clairg...@gmail.com wrote:
> On Wednesday, April 20, 2016 at 3:04:34 PM UTC-4, Simon Clubley wrote:
>> On 2016-04-20, johnwa...@yahoo.co.uk <johnwa...@yahoo.co.uk> wrote:
>>>
>>> Fortunately for VMS, it's been through enough processors that the
>>> next migration after x86 will hopefully be a mere matter of
>>> cranking the handle. I jest slightly, but which other non-Linux
>>> OS has the same proven portability.
>>>
>>
>> Apart from Linux, NetBSD and OpenBSD are the other two traditional
>> operating systems I know of which are highly portable.
>>
>> In the RTOS market, there's RTEMS and QNX which I know exist on
>> multiple architectures. (I've just checked the eCos supported
>> architecture list and see that has support for a wide range of
>> architectures as well.)
>>
>>> Many months ago, there was a comment here suggesting that 'if'
>>> VMS ever migrated to x86-64, it would be good for that migration
>>> to also consider the next one as well. At the time there was no
>>> real suggestion that nuVMS on x86 would ever happen. It's not
>>> there yet, but it's a lot closer than it was back then.
>>>
>>
>> There's the two-level hardware (User and Kernel only) issue to tackle
>> in that case. I know there's a similar issue in the x86-64 but it
>> turns out VSI are using x86-64 specific features to work around that
>> issue. (I don't remember the fine details, but do remember thinking
>> it was a creative approach; however it is one that relies on x86-64
>> functionality.)
>>
>> Simon.
>>
>
> Not quite but you almost remembered it. Yes, we will only be using two of x86's HW access modes and enforcing the other two in the OS. We already do a little of this on Itanium so it is not complete invention this time. However, we are not using anything specific to x86. In fact, we are prototyping this on Itanium to get it debugged before we reach the point of needing it on x86. Note that x86 has four modes but they do not give us the separation that VMS needs.
>
> Clair
>

To your knowledge, is there any OS that actually uses those four modes
in x86?

If not, would it be possible to ask Intel to change the design of x86 in
such a way that the separation VMS requires would be achieved? I can
imagine that another OS could benefit from that too.

Perhaps it's just a fantasy, but a nice one I hope :-)

John Reagan

unread,
Sep 19, 2016, 7:55:37 AM9/19/16
to
On Monday, September 19, 2016 at 3:26:12 AM UTC-4, Dirk Munk wrote:
> wrote:
> > On Wednesday, April 20, 2016 at 3:04:34 PM UTC-4, Simon Clubley wrote:
Not that I'm aware of. The CPU still has the four modes, but in 64-bit mode the page table entries only have K/U (the older modes have the extra modes in the PTEs). Somebody from Apple explained the rationale behind the change to me last year at the LLVM conference. Apparently there was some way to circumvent those mode checks such that S and E (in VMS terms) could get access to K memory. Instead of fixing the underlying issue, they just removed the extra modes from the PTEs.

Johnny Billquist

unread,
Sep 19, 2016, 8:25:04 AM9/19/16
to
On 2016-09-16 15:22, Bob Koehler wrote:
> In article <nrfrgj$res$1...@dont-email.me>, David Froble <da...@tsoft-inc.com> writes:
>>
>> Don't know why you insist on RMS. I haven't used it since before 1984. If I
>> was working on something new, with perhaps special file I/O requirements, then
>> I'd consider how I might best do the job.
>
> Done any file I/O on VMS lately? If not via RMS, then how?

$QIO? :-)

> If you so much as accessed a file by name and path, you used RMS.

I seriously doubt that RMS *has* to be involved. It's just convenient.
Down at the bottom end I would expect that you do a QIO to open a file,
and use QIO to read/write. In order to find a file, you start by opening
FID (4,4,0), then you read through the directory searching for the next
directory entry you need. Rinse and repeat until you are at the actual file.

A bit tedious, but definitely doable without involving RMS, unless VMS
removed that functionality somewhere along the way.

Johnny

Dirk Munk

unread,
Sep 19, 2016, 9:13:21 AM9/19/16
to
So effectively the Executive and Supervisor modes are no longer usable?

I just read that with Xen the hypervisor runs in Ring 0
(Kernel), and a guest OS runs in Ring 1.


Puzzling :-)


Bob Koehler

unread,
Sep 19, 2016, 9:49:59 AM9/19/16
to
In article <mqMDz.708073$aY4.6...@fx15.ams1>, Dirk Munk <mu...@home.nl> writes:
>
> To your knowledge, is there any OS that actually uses those four modes
> in x86?
>
> If not, would it be possible to ask Intel to change the design of x86 is
> such a way that the separation VMS requires would be achieved? I can
> imagine that another OS could benefit from that too.
>
> Perhaps it's just a fantasy, but a nice one I hope :-)

And then in the next port, does VSI ask another vendor to add 4 modes
to their chip, also?

If VSI can get the 4 mode needs out of VMS, and can do it now, it
makes the future much simpler.

I like the 4 modes. I think they are a good design. But I don't
lord over all the architecture designers.

Dirk Munk

unread,
Sep 19, 2016, 10:34:29 AM9/19/16
to
Intel did design the 4-mode setup, they just made an error in the
design. Instead of correcting the design, they crippled it even more.

I don't know if AMD has the same problem.

Stephen Hoffman

unread,
Sep 19, 2016, 11:05:54 AM9/19/16
to
On 2016-09-19 11:55:18 +0000, John Reagan said:

> The CPU still has the four modes, but in 64-bit mode the page table
> entries only have K/U (the older modes have the extra modes in the
> PTEs). Somebody from Apple explained the rationale behind the change
> to me last year at the LLVM conference. Apparently there was some way
> to circumvent those mode checks such that S and E (in VMS terms) could
> get access to K memory. Instead of fixing the underlying issue, they
> just removed the extra modes from the PTEs.

It's also possible to get from supervisor mode to full kernel access on
OpenVMS, if you're both nefariously inclined and already somehow
executing in supervisor. That's in software though, and not in the
memory management hardware.

One of the newer approaches to application isolation is Intel SGX —
that is also intended to protect against a compromised operating system
— though the security of SGX might have "some issues" in at least its
early implementations, based on some reports.

GreyCloud

unread,
Sep 19, 2016, 3:51:10 PM9/19/16
to
An interesting point you just brought up. AMD right now isn't exactly
getting rich selling their current processors. Assuming one can, one
could approach AMD with this particular question and see what is there.
If AMD did it correctly, this may give AMD some hope of increasing sales
in this particular arena.

Johnny Billquist

unread,
Sep 20, 2016, 6:48:02 AM9/20/16
to
Nope. Totally pointless. People (companies) in general will not write
any code or OS that would work only on AMD processors. If
AMD haven't already done the same as Intel, they will. Anything else
just doesn't make sense.

And you really do not need 4 modes. I have said that for years around
here. Seems like VSI understood, but a lot of people still seem to want
to hang on to this like a religion.

Johnny

GreyCloud

unread,
Sep 20, 2016, 1:29:20 PM9/20/16
to
Not totally true on the modes. Why did Data General's machines have 8 modes?
I know they went out of business, but why 8?

Bob Butler

unread,
Sep 20, 2016, 3:27:33 PM9/20/16
to
On 2016-09-20, GreyCloud <mi...@cumulus.com> wrote:
> On 09/20/16 04:48, Johnny Billquist wrote:
>> On 2016-09-19 21:51, GreyCloud wrote:
>>> On 09/19/16 08:34, Dirk Munk wrote:
>>
>>>> Intel did design the 4-mode setup, they just made an error in the
>>>> design. Instead of correcting the design, they crippled it even more.

That's classic Intel alright!

>>>>
>>>> I don't know if AMD has the same problem.
>>>
>>> An interesting point you just brought up. AMD right now isn't exactly
>>> getting rich these days selling their current processors. Assuming one
>>> can, approach AMD with this particular question and see what is there.
>>> If AMD did it correctly, this may give AMD some hope of increasing sales
>>> in this particular arena.
>>
>> Nope. Totally pointless. People (comapnies) in general will not write
>> any code or OS that would only work specifically on AMD processors. If
>> AMD haven't already done the same as Intel, they will. Anything else
>> just don't make sense.

Intel couldn't get 64 bit done for their abomination. AMD did that. So it's
not really 100% correct to say nobody would code to AMD. Basically all the
64 bit code on Intel x86_64 is coded to AMD. I understand what you meant but
I still think the argument is valid. If AMD supported enough features for
another OS to run on it and not on the competition whilst not breaking their
"Intel" support at the same time it could be worthwhile in terms of
marketing and probably actual money. But designing and fabbing new chips is
pretty costly.

>>
>> And you really do not need 4 modes. I have said that for years around
>> here. Seems like VSI understood, but a lot of people still seem to want
>> to hang on to this like a religion.
>>
>
> Not totally true on the modes. Why did Data Generals machines have 8
> modes?

Must have been Intel-envy. If 3 or 4 are good, 8 has to be at least twice as
good, right? Some people view complexity as a necessary evil. Healthy people
in the engineering business view complexity as evil period. You can tell from
Intel's abominations there were and are some sick puppies "designing" their
chips. I hadn't heard that about Data General before but I'm sad I did. 8
modes to run a crappy monitor program, a few serial lines and BASIC seems like
overkill but maybe that's just me. I think they probably could have used one
mode and not even needed all of that.

> I know they went out of business, but why 8?

So they wouldn't die of complexity first? I don't know. They were clunkers.
I don't think they could compete with DEC at all in the mini arena and
that's all there was for both companies. Probably DEC not needing all those
8 modes gave DEC enough design latitude to stomp DG into computer history.

A lot of what we have today in the Inteliverse is old, torn, moldy baggage
that stinks and stinks and never goes away. I don't think any company has
enough money and integrity and sense to straighten that out.

Bob
>

Scott Dorsey

unread,
Sep 20, 2016, 4:18:24 PM9/20/16
to
In article <nrrrl7$84n$1...@dont-email.me>, GreyCloud <mi...@cumulus.com> wrote:
>
>Not totally true on the modes. Why did Data Generals machines have 8 modes?
>I know they went out of business, but why 8?

Because if you want five modes, you might as well just make eight, because
it takes the same number of bits in your opcode (three bits encode up to
eight values either way).
--scott


--
"C'est un Nagra. C'est suisse, et tres, tres precis."

David Froble

unread,
Sep 20, 2016, 4:57:06 PM9/20/16
to
Bob Butler wrote:
> On 2016-09-20, GreyCloud <mi...@cumulus.com> wrote:

>> I know they went out of business, but why 8?

Software. The DEC OSs were better. Yeah, that's subjective, but that's how it
worked out.

GreyCloud

unread,
Sep 20, 2016, 6:59:32 PM9/20/16
to
On 09/20/16 13:27, Bob Butler wrote:
>
> Intel couldn't get 64 bit done for their ablomination. AMD did that. So it's
> not really 100% correct to say nobody would code to AMD. Basically all the
> 64 bit code on Intel x86_64 is coded to AMD. I understand what you meant but
> I still think the argument is valid. If AMD supported enough features for
> another OS to run on it and not on the competition whilst not breaking their
> "Intel" support at the same time it could be worthwhile in terms of
> marketing and probably actual money. But designing and fabbing new chips is
> pretty costly.

And a lot of testing time looking for bugs as well.

>
>>>
>>> And you really do not need 4 modes. I have said that for years around
>>> here. Seems like VSI understood, but a lot of people still seem to want
>>> to hang on to this like a religion.
>>>
>>
>> Not totally true on the modes. Why did Data Generals machines have 8
>> modes?
>
> Must have been Intel-envy. If 3 or 4 are good 8 has to be at least twice as
> good right? Some people view complexity as a necessary evil. Healthy people
> in the engineering business view complexity as evil period. You can tell from
> Intel's abominations there were and are some sick puppies "designing" their
> chips. I hadn't heard that about Data General before but I'm sad I did. 8
> modes to run a crappy monitor program, a few serial lines and BASIC seems like
> overkill but maybe that's just me. I think they probably could have used one
> mode and not even needed all of that.

The DGs were for corporations that wanted internal security between users.
If I recall, there were some users who just loved to play pranks on
each other in that setting. And these were made in the 70s. I can only
guess that later they just didn't want to change anything. Dropping a
few modes would have meant too many OS software changes, so I think
they left it as it was.
I remember the cheaper 16-bit Novas, but still didn't buy into them.
The DECs at that time were still better, not only in the documentation;
DEC also published a lot of educational material that helped sell machines.


>
>> I know they went out of business, but why 8?
>
> So they wouldn't die of complexity first? I don't know. They were clunkers.
> I don't think they could compete with DEC at all in the mini arena and
> that's all there was for both companies. Probably DEC not needing all those
> 8 modes gave DEC enough design latitude to stomp DG into computer history.
>

I think DG went down the tubes first.

When NAVSEA let us go shopping for a new machine back then, DG set up a
nice huge buffet table, but only three of us showed up. I just didn't
like how their Fortran worked compared to what DEC showed us.

> A lot of what we have today in the Inteliverse is old, torn, moldy baggage
> that stinks and stinks and never goes away. I don't think any company has
> enough money and integrity and sense to straighten that out.
>

I believe that around the early 1990s they figured that 32-bit was
enough and didn't even bother to change the architecture. I always
viewed them as quite slow, and they lacked the general-purpose registers
that could've sped things up a lot.

GreyCloud

unread,
Sep 20, 2016, 7:00:15 PM9/20/16
to
On 09/20/16 14:18, Scott Dorsey wrote:
> In article<nrrrl7$84n$1...@dont-email.me>, GreyCloud<mi...@cumulus.com> wrote:
>>
>> Not totally true on the modes. Why did Data Generals machines have 8 modes?
>> I know they went out of business, but why 8?
>
> Because if you want five modes, you might as well just make eight because
> it takes the same number of bits in your opcode.
> --scott
>
>
Now that sounds like a more plausible answer.

Dirk Munk

unread,
Sep 20, 2016, 7:30:06 PM9/20/16
to
Scott Dorsey wrote:
> In article <nrrrl7$84n$1...@dont-email.me>, GreyCloud <mi...@cumulus.com> wrote:
>>
>> Not totally true on the modes. Why did Data Generals machines have 8 modes?
>> I know they went out of business, but why 8?
>
> Because if you want five modes, you might as well just make eight because
> it takes the same number of bits in your opcode.
> --scott
>
>

Modern CPUs seem to have 5 modes; there is now a -1 mode for hypervisors.

Bob Butler

unread,
Sep 21, 2016, 2:59:40 AM9/21/16
to
On 2016-09-20, GreyCloud <mi...@cumulus.com> wrote:
> On 09/20/16 13:27, Bob Butler wrote:

> The DGs were for corporates that wanted internal security from each user.

Oh, I don't know about that. As far as I knew, and I could certainly be
wrong, the vast majority of DG systems went into academic environments. I had
access to a Nova and an Eclipse and I really wasn't impressed. At the time
they didn't offer much if anything beyond what a TRS-80 could do and the
TRS-80 had a CRT and all we had on the DG boxes was ASR-33s. Well, the tape
reader on the teletypes was handier and cooler than the cassette interface
on the TRS-80 but that's about it. I don't remember any special features for
security but that doesn't mean there weren't any. Thankfully I've forgotten
pretty much all my DG experiences but that's also for a reason.

> When NAVSEA let us go shopping for a new machine back then, DG set up a
> nice huge buffet table, but only three of us showed up. I just didn't
> like how their fortran worked compared to what DEC showed us.

I didn't use early DEC FORTRAN much but they certainly took over the mini
space lock, stock, and barrel. Yet I don't think DG was ever in the running.
Maybe they were on price, I don't know. DEC had a nice solution and lots of
options, and they scaled vertically fairly well as I recall it.

>> A lot of what we have today in the Inteliverse is old, torn, moldy baggage
>> that stinks and stinks and never goes away. I don't think any company has
>> enough money and integrity and sense to straighten that out.
>>
>
> I believe that around the early 1990s they figured that 32-bit was
> enough and didn't even bother to change the architecture. I always
> viewed them as quite slow and lacked the general purpose registers that
> could've sped up things a lot.

In some ways 32 bits really is enough today, and it's only the hardware
manufacturers' abandonment of 32 bit, along with sloppy OSes and software,
that makes 64 bit necessary at all. You're right about the lack of registers
in Intel but there are a lot more problems than that. The 32 bit to 64 bit
transition in Intelistan still isn't complete and was never thought all the
way through.

I think 2 modes ought to be enough for anybody. Complexity is a big factor
in bugs and vulnerabilities today and anything we can do to limit that with
sane designs and simplicity will go a long way. We ought to get rid of a lot
of middleware and libraries and get back to having fewer layers and more
responsibility lying in the product. And we ought to be working on
programming by contract and having clear and well-defined APIs so we don't
ever have to go back to the mire they're wallowing in these days in
Intelfornia.

Bob

Johnny Billquist

unread,
Sep 21, 2016, 7:10:41 AM9/21/16
to
Why? Because they thought there was some advantage to that, I would
assume. They also had two stacks, and seven registers related to those
two stacks (no, I am not talking about different stacks in different modes).

Also, DG used the PC as the source to decide which mode you were
executing in.

I can't explain their reasons for their designs. But I remember back in
the 80s when I was looking at it that I just thought it utter madness. I
never liked their hardware.

Johnny

Johnny Billquist

unread,
Sep 21, 2016, 7:16:57 AM9/21/16
to
On 2016-09-20 21:27, Bob Butler wrote:
> On 2016-09-20, GreyCloud <mi...@cumulus.com> wrote:
>> On 09/20/16 04:48, Johnny Billquist wrote:
>>> On 2016-09-19 21:51, GreyCloud wrote:
>>>> An interesting point you just brought up. AMD right now isn't exactly
>>>> getting rich these days selling their current processors. Assuming one
>>>> can, approach AMD with this particular question and see what is there.
>>>> If AMD did it correctly, this may give AMD some hope of increasing sales
>>>> in this particular arena.
>>>
>>> Nope. Totally pointless. People (companies) in general will not write
>>> any code or OS that would only work specifically on AMD processors. If
>>> AMD haven't already done the same as Intel, they will. Anything else
>>> just doesn't make sense.
>
> Intel couldn't get 64 bit done for their abomination. AMD did that. So it's
> not really 100% correct to say nobody would code to AMD. Basically all the
> 64 bit code on Intel x86_64 is coded to AMD. I understand what you meant but
> I still think the argument is valid. If AMD supported enough features for
> another OS to run on it and not on the competition whilst not breaking their
> "Intel" support at the same time it could be worthwhile in terms of
> marketing and probably actual money. But designing and fabbing new chips is
> pretty costly.

The 64-bit AMD extensions are not really comparable to what we're talking
about here, and they only really became a success because Intel also decided
to adopt them. Intel, at the time, was pushing for a different 64-bit
solution. So a 64-bit solution was on the horizon no matter what, and
people had to choose between Itanium and the AMD x86-64. Forced to
make a choice, they picked x86-64 as the way to stay with what they had. It
would be more fair to compare the suggestions here to the Itanium, i.e.
let's make something incompatible...

So I would say it is in no way comparable to the suggestion that AMD
should introduce an MMU incompatible with what Intel is pushing.

>>> And you really do not need 4 modes. I have said that for years around
>>> here. Seems like VSI understood, but a lot of people still seem to want
>>> to hang on to this like a religion.
>>>
>>
>> Not totally true on the modes. Why did Data Generals machines have 8
>> modes?
>
> Must have been Intel-envy. If 3 or 4 are good 8 has to be at least twice as
> good right? Some people view complexity as a necessary evil. Healthy people
> in the engineering business view complexity as evil period. You can tell from
> Intel's abominations there were and are some sick puppies "designing" their
> chips. I hadn't heard that about Data General before but I'm sad I did. 8
> modes to run a crappy monitor program, a few serial lines and BASIC seems like
> overkill but maybe that's just me. I think they probably could have used one
> mode and not even needed all of that.

:-)

>> I know they went out of business, but why 8?
>
> So they wouldn't die of complexity first? I don't know. They were clunkers.
> I don't think they could compete with DEC at all in the mini arena and
> that's all there was for both companies. Probably DEC not needing all those
> 8 modes gave DEC enough design latitude to stomp DG into computer history.

The 32-bit Eclipse was complex. The complexity did not improve it.

> A lot of what we have today in the Inteliverse is old, torn, moldy baggage
> that stinks and stinks and never goes away. I don't think any company has
> enough money and integrity and sense to straighten that out.

Right. Which is why we also will not see anything incompatible with
what's on the market now. Sad but true.

Johnny

Scott Dorsey

unread,
Sep 21, 2016, 9:58:06 AM9/21/16
to
In article <nrsf1n$ft1$2...@dont-email.me>, GreyCloud <mi...@cumulus.com> wrote:
>On 09/20/16 14:18, Scott Dorsey wrote:
>> In article<nrrrl7$84n$1...@dont-email.me>, GreyCloud<mi...@cumulus.com> wrote:
>>>
>>> Not totally true on the modes. Why did Data Generals machines have 8 modes?
>>> I know they went out of business, but why 8?
>>
>> Because if you want five modes, you might as well just make eight because
>> it takes the same number of bits in your opcode.
>
>Now that sounds like more of a plausible answer.

That is, in fact, the answer. However, it brings up the question of why you
would want five rather than four. (If your kernel can trust your drivers then
you really only need two, but who can trust drivers?)

David Froble

unread,
Sep 21, 2016, 1:03:33 PM9/21/16
to
Johnny Billquist wrote:

> The 64bit AMD extensions are not really comparable to what we're talking
> about here. And it only really became a success as Intel also decided to
> adopt it.

That's not exactly how it happened. AMD's 64 bit was successful, and if Intel
hadn't adopted it, they would have lost the x86 market. Even after AMD's stuff
hit the market, Intel was resisting. Finally someone at Intel decided that if
they didn't conform, they were toast.

Chris

unread,
Sep 21, 2016, 1:31:58 PM9/21/16
to
That's the way I read it as well. AMD were far faster on their feet,
while Intel were more interested in protecting their existing market
and more importantly, their "uncopyable" Itanium market, which they
thought would take over the world...

Regards,

Chris

Kerry Main

unread,
Sep 21, 2016, 1:45:04 PM9/21/16
to comp.os.vms to email gateway
That's not exactly how it happened ..

:-)

Intel wanted the next industry PC standard to be IA64, because
then AMD and any other chip vendor would have had to license it.

However, Intel has always had parallel engineering projects
underway as a means to switch if one project falls behind.

That's what happened when realistic IA64 releases showed up about
4-5 years later than originally expected. Fortunately for Intel,
they had X86-64 projects running in parallel with IA64 and could
move those chips to the forefront.

One can only wonder what would the tech world look like today if
Intel was not infected with the NIH virus and had instead dropped
IA64 when it picked up all the rights to Alpha.

Ah well ..

Regards,

Kerry Main
Kerry dot main at starkgaming dot com







Johnny Billquist

unread,
Sep 21, 2016, 1:54:36 PM9/21/16
to
True that Intel did resist. They tried to push Itanium. But they quickly
realized they could not ignore that people wanted to stay on x86, so they
had to make sure they were protected on that front.

I still cannot see this as being anywhere similar to having AMD
introduce something incompatible with Intel in the x86 at this point.

Johnny

David Froble

unread,
Sep 21, 2016, 5:04:27 PM9/21/16
to
Kerry, you've got your vision of reality, but I distinctly remember that even after
AMD's CPUs hit the market, Intel was resisting following, and AMD was gaining
market share. Only when the specter of losing the x86 market smacked Intel over
the head did they conform.

And yes, Intel really wanted the itanic to take over, and it just might have done
so, except for AMD's x86-64. It was never about how quickly the itanic showed
up. It was about x86 or itanic, and the users quickly decided to keep x86. I
also remember Microsoft using AMD to implement weendoze 64, and praising AMD.

Bill Gunshannon

unread,
Sep 21, 2016, 8:30:08 PM9/21/16
to
On 9/21/16 1:41 PM, Kerry Main wrote:
>
>
> One can only wonder what would the tech world look like today if
> Intel was not infected with the NIH virus and had instead dropped
> IA64 when it picked up all the rights to Alpha.
>
> Ah well ..
>
> Regards,
>
> Kerry Main
> Kerry dot main at starkgaming dot com
>

One can only wonder what the tech world would look like today if,
instead of bailing Intel out, IBM had gone ahead with their original
plan to use the Motorola architecture for the PC.

Ah well...

bill


GreyCloud

unread,
Sep 21, 2016, 11:11:16 PM9/21/16
to
I didn't know that and wasn't aware of their difficult transition.

>
> I think 2 modes ought to be enough for anybody. Complexity is a big factor
> in bugs and vulnerabilities today and anything we can do to limit that with
> sane designs and simplicity will go a long way. We ought to get rid of a lot
> of middleware and libraries and get back to having fewer layers and more
> responsibility lying in the product. And we ought to be working on
> programming by contract and having clear and well-defined APIs so we don't
> ever have to go back to the mire they're wallowing in these days in
> Intelfornia.
>

Actually, VMware and VirtualBox have to use another mode, but which one
I don't know, to keep the host and the guest operating systems apart
from each other. Right now, I'm trying to get the Qt 5.7 development
environment on the latest openSUSE to work. I still have a lot of
reading to do, but it has in its favor that one can download their
environment (for C++) for OS X, Windows, and Linux, and possibly others.
Free for hobbyists, and a price for commercial uses. Pretty much
cross-platform. I'm staying away from win10, as it seems from all
reports I've read to be Orwellian in nature. In the EULA, it says that
if you agree to the terms then any of their corporate partners can
pretty much paw through your files remotely and just take what they
want. Not for me.
And I've still got my old G4 iMac nicely wrapped up. I consider the old
G4, with its orthogonal registers, better than anything Intel has
released so far. It may be slower than current Intels, but at
that time it was superior.


GreyCloud

unread,
Sep 21, 2016, 11:17:03 PM9/21/16
to
On 09/21/16 07:58, Scott Dorsey wrote:
You can't. I remember Linus Torvalds giving Nvidia the finger
on YouTube because Nvidia wouldn't write Linux video card drivers
for their own products. The almost-drivers that the OSS community had
written failed in some areas and made a lot of people mad.

GreyCloud

unread,
Sep 21, 2016, 11:18:39 PM9/21/16
to
I tend to agree with your assessment as well. Their terminals looked
nice, but after kicking the tires on a DG demo, I didn't like it as well
as the VT series from DEC.

Chris Scheers

unread,
Sep 22, 2016, 6:43:37 PM9/22/16
to
I assume you are referring to the DG MV (32 bit) machines.

The MVs only had two modes: privileged and unprivileged.

If the ATU was switched on, there were 8 rings. In many ways, these are
similar to the Intel SGX design.

Code running in ring 0 was privileged. Code in other rings was
unprivileged.

AOS/VS only used 3 of the rings (0, 1, 7). Other products could be
loaded into other rings to provide services to user programs in a
protected way. (For example, a database.)

Since the ring indicator was part of the virtual address, I assume that
8 rings was chosen as a balance between providing enough rings and not
unnecessarily restricting the memory size of each ring.
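
To make that concrete, here's a minimal C sketch of that trade-off.
The field position is invented for illustration (it is not the actual
MV encoding): with a 3-bit ring number at the top of a 32-bit virtual
address, each ring is limited to a 2^29-byte (512 MB) slice.

#include <stdint.h>
#include <stdio.h>

/* Hypothetical layout: ring number in bits 31-29, offset in bits 28-0. */
#define RING_SHIFT  29
#define RING_MASK   0x7u
#define OFFSET_MASK ((1u << RING_SHIFT) - 1)

static unsigned ring_of(uint32_t vaddr)
{
    return (vaddr >> RING_SHIFT) & RING_MASK;
}

int main(void)
{
    uint32_t addr = (7u << RING_SHIFT) | 0x1000; /* offset 0x1000, ring 7 */
    printf("ring %u, offset 0x%08x, per-ring space %u bytes\n",
           ring_of(addr), (unsigned)(addr & OFFSET_MASK), 1u << RING_SHIFT);
    return 0;
}

More rings would mean more address bits spent on the ring field and a
smaller space per ring, which is presumably the balance Chris means.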

--
-----------------------------------------------------------------------
Chris Scheers, Applied Synergy, Inc.

Voice: 817-237-3360 Internet: ch...@applied-synergy.com
Fax: 817-237-3074

Bob Koehler

unread,
Sep 23, 2016, 10:04:08 AM9/23/16
to
In article <nrrrl7$84n$1...@dont-email.me>, GreyCloud <mi...@cumulus.com> writes:
>
> Not totally true on the modes. Why did Data Generals machines have 8 modes?
> I know they went out of business, but why 8?
>

PHM over-reaction? Supposedly DG saw the 4 modes of a VAX, felt they
were an unnecessary complication, and lost their shirt building a
computer without them. So if 4 is good, 8 must be great?

Camiel Vanderhoeven

unread,
Oct 23, 2016, 9:58:03 AM10/23/16
to
Op maandag 19 september 2016 13:55:37 UTC+2 schreef John Reagan:
> On Monday, September 19, 2016 at 3:26:12 AM UTC-4, Dirk Munk wrote:
> > wrote:
> > > On Wednesday, April 20, 2016 at 3:04:34 PM UTC-4, Simon Clubley wrote:
> > >> On 2016-04-20, johnwallace4 wrote:
> > >>>
> > >>> Fortunately for VMS, it's been through enough processors that the
> > >>> next migration after x86 will hopefully be a mere matter of
> > >>> cranking the handle. I jest slightly, but which other non-Linux
> > >>> OS has the same proven portability.
> > >>>
> > >>
> > >> Apart from Linux, NetBSD and OpenBSD are the other two traditional
> > >> operating systems I know of which are highly portable.
> > >>
> > >> In the RTOS market, there's RTEMS and QNX which I know exist on
> > >> multiple architectures. (I've just checked the eCos supported
> > >> architecture list and see that has support for a wide range of
> > >> architectures as well.)
> > >>
> > >>> Many months ago, there was a comment here suggesting that 'if'
> > >>> VMS ever migrated to x86-64, it would be good for that migration
> > >>> to also consider the next one as well. At the time there was no
> > >>> real suggestion that nuVMS on x86 would ever happen. It's not
> > >>> there yet, but it's a lot closer than it was back then.
> > >>>
> > >>
> > >> There's the two-level hardware (User and Kernel only) issue to tackle
> > >> in that case. I know there's a similar issue in the x86-64 but it
> > >> turns out VSI are using x86-64 specific features to work around that
> > >> issue. (I don't remember the fine details, but do remember thinking
> > >> it was a creative approach; however it is one that relies on x86-64
> > >> functionality.)
> > >>
> > >> Simon.
> > >>
> > >
> > > Not quite but you almost remembered it. Yes, we will only be using two of x86's HW access modes and enforcing the other two in the OS. We already do a little of this on Itanium so it is not complete invention this time. However, we are not using anything specific to x86. In fact, we are prototyping this on Itanium to get it debugged before we reach the point of needing it on x86. Note that x86 has four modes but they do not give us the separation that VMS needs.
> > >
> > > Clair
> > >
> >
> > To your knowledge, is there any OS that actually uses those four modes
> > in x86?
> >
> > If not, would it be possible to ask Intel to change the design of x86 is
> > such a way that the separation VMS requires would be achieved? I can
> > imagine that another OS could benefit from that too.
> >
> > Perhaps it's just a fantasy, but a nice one I hope :-)
>
> Not that I'm aware of. The CPU still has the four modes, but in 64-bit mode the page table entries only have K/U (the older modes have the extra modes in the PTEs). Somebody from Apple explained the rationale behind the change to me last year at the LLVM conference. Apparently there was some way to circumvent those mode checks such that S and E (in VMS terms) could get access to K memory. Instead of fixing the underlying issue, they just removed the extra modes from the PTEs.

I'm afraid your source has their facts wrong...

Protected mode started on the 286 (16-bit), using segmentation; this is where you have four rings, and a segment is tied to one of the four; this was continued on the 386 (32-bit), but on the 64-bit (AMD64 and later Intel64) architecture, segmentation effectively disappeared (there's a small remnant of it left, but it's not intended to be used for memory protection any more).

On the 286, it turned out that practically no-one used four rings; instead most OSes only used two rings, so when they added paging as a protection mechanism in the 386, they simplified things a bit by treating rings 0, 1, and 2 as identical from a paging perspective. Therefore, for paging, there have never been separate bits for the four rings in the PTE; just one bit: S/U (supervisor = rings 0, 1, 2 / user = ring 3).
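
A minimal C sketch of that single check, assuming the standard x86 PTE
bit layout (P = bit 0, R/W = bit 1, U/S = bit 2) and ignoring R/W, NX,
SMEP/SMAP and the rest:

#include <stdbool.h>
#include <stdint.h>

/* The only paging-level mode distinction: one U/S bit per PTE.
 * Rings 0, 1, and 2 all count as "supervisor" here. */
#define PTE_PRESENT (1u << 0)
#define PTE_WRITE   (1u << 1)
#define PTE_USER    (1u << 2)

static bool page_accessible(uint64_t pte, unsigned cpl)
{
    if (!(pte & PTE_PRESENT))
        return false;                  /* not mapped at all */
    if (cpl == 3 && !(pte & PTE_USER))
        return false;                  /* ring 3 denied supervisor pages */
    return true;                       /* rings 0-2 are indistinguishable */
}

This is exactly why VMS's executive and supervisor modes cannot be
expressed through the PTEs alone.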

In the end, we've decided that we can provide the four modes VMS requires by doing the mode changes in software; the code that performs these mode changes runs in hardware ring 0 (where VMS's kernel mode runs too), and all other VMS modes will run in ring 3. The end result is as good as running on a CPU that does this in hardware, and the code that does this will be portable to a wide variety of platforms.

Camiel.

clairg...@gmail.com

unread,
Oct 23, 2016, 9:18:45 PM10/23/16
to
On Saturday, September 17, 2016 at 6:41:13 PM UTC-4, Simon Clubley wrote:
> On 2016-09-17, Chris <xxx.sys...@gfsys.co.uk> wrote:
> > On 09/17/16 02:57, David Froble wrote:
> >
> VSI was setup in the middle of 2014. Their early release version
> of x86-64 VMS is currently scheduled for 2018. Even if that's
> January 2018, then that's still 3.5 years and that is a hideously
> long time for a port to a new architecture by today's standards.
>

Yes, that is a long time. When VSI was announced we had 6 employees, no code, and no customers. We have spent these two years putting together an engineering team and figuring out how to become a viable business, both of which are still works in progress but we are finally starting to gain a little momentum. If we had started with the appropriate number of people and no other work to do, we would be running on x86 today, maybe not ready for final release but certainly well into it.

> This shows how utterly non-portable the VMS code base is by today's
> standards and how, amongst other things, there is way too much deep
> architecture knowledge embedded within all layers of VMS instead of
> being abstracted out into a cleanly seperate layer. If this were
> Linux (for example) or another OS written with portability in mind,
> you would have had something in the hands of the customers way
> before now.
>

This was all true years ago but after porting to Alpha and especially to Itanium, architecture-specific knowledge is in very few places and extremely little of the OS is aware of architecture and that is centralized. At this point it is not so much the design of VMS that makes it time consuming to port, it is the fact that it was written in VAX assembler and BLISS and uses the GEM code generator. (Yes, we have explored converting to C three times and rejected that approach each time). Moving to ELF for Itanium has eliminated a lot of work this time and in the future and moving to LLVM this time will eliminate even more work in the future.

Porting VMS is a huge job but we are making it easier each time. Will it ever be like porting a portable OS written in C? No, absolutely not, but it is a lot more "turn the crank" than you might think.

No excuses; it takes a long time. It is difficult but it is so much better than it was in 1989 when we had little idea what we were getting into.

Phillip Helbig (undress to reply)

unread,
Oct 24, 2016, 1:20:51 AM10/24/16
to
In article <5bfd6b8e-d51b-4297...@googlegroups.com>,
clairg...@gmail.com writes:

> This was all true years ago but after porting to Alpha and especially to
> Itanium, architecture-specific knowledge is in very few places and
> extremely little of the OS is aware of architecture and that is
> centralized. At this point it is not so much the design of VMS that
> makes it time consuming to port, it is the fact that it was written in
> VAX assembler and BLISS and uses the GEM code generator. (Yes, we have
> explored converting to C three times and rejected that approach each
> time).

I remember a while back when some bugs showed up in VMS MAIL and it
turned out that they were introduced when MAIL was converted from BLISS
to C. That doesn't put the blame on C, of course, but raises the
question whether a port to another language is worth the risks.
(Personally, I think that many languages are more readable than C, and
as a result I would certainly write better code in languages other than
C. On the other hand, of course, a good Fortran programmer can write
Fortran in any language.)

John Reagan

unread,
Oct 24, 2016, 8:57:21 AM10/24/16
to
On Sunday, October 23, 2016 at 9:18:45 PM UTC-4, clairg...@gmail.com wrote:

>
> Porting VMS is a huge job but we are making it easier each time. Will it ever be like porting a portable OS written in C? No, absolutely not, but it is a lot more "turn the crank" than you might think.
>

For those who are interested.

For BLISS, it is the powerful macro language built into the frontend. It is context-sensitive, so even trying to write a pre-processor in some more expressive language (Perl, Python, etc.) makes it a difficult task. For example, when expanding an iterative macro, the compiler has to decide on a separator (i.e., does it insert a free "," between iterations or something else). That depends on where in the BLISS language the macro is being expanded.

For Macro, it is mostly the fact that you can jump between routines. You can come into routine A, jump to B, jump to C, indirectly jump to D or E, and finally return from F. That is difficult to express in C (or just about anything else). We had a translator company try years ago with such a piece of Macro, and the resulting C looked like:

int B(int R0, int R1, int R2, you get the idea...);

and kept passing those parameters from routine to routine.
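
A hypothetical sketch of what that register-threading style ends up
looking like (routine names and the three-register subset are invented
for illustration; the real output carried many more):

/* Every VAX register a routine might touch becomes a C parameter that
 * is threaded through the whole jump chain. */
static int C(int R0, int R1, int R2)
{
    return R0 + R1 + R2;          /* stands in for the real work */
}

static int B(int R0, int R1, int R2)
{
    R1 += R0;                     /* some register shuffling ... */
    return C(R0, R1, R2);         /* "JMP C" becomes a tail call */
}

int A(int R0, int R1, int R2)
{
    return B(R0, R1, R2);         /* entry point of the jump chain */
}
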

The Macro-32 cross-jumping is a real pain on both Alpha and Itanium. On Alpha, all of those routines need their R27 linkage pointer to be the same. On Itanium, all of those routines need their out0-out7 registers to be the same so they all need their 'alloc's to be the same. The Macro compiler has to build an elaborate flow graph and walk it several times forward and backward to look for available registers in basic blocks, lifetimes of VAX condition codes, and which PUSHLs are putting values into parameters vs actually building some data structure on the stack. And on Itanium, some of those PUSHLs might involve NaTs and need even more special-handling.

Simon Clubley

unread,
Oct 24, 2016, 3:21:12 PM10/24/16
to
On 2016-10-24, clairg...@gmail.com <clairg...@gmail.com> wrote:
> On Saturday, September 17, 2016 at 6:41:13 PM UTC-4, Simon Clubley wrote:
>> On 2016-09-17, Chris <xxx.sys...@gfsys.co.uk> wrote:
>> > On 09/17/16 02:57, David Froble wrote:
>> >
>> VSI was setup in the middle of 2014. Their early release version
>> of x86-64 VMS is currently scheduled for 2018. Even if that's
>> January 2018, then that's still 3.5 years and that is a hideously
>> long time for a port to a new architecture by today's standards.
>>
>
> Yes, that is a long time. When VSI was announced we had 6 employees,
> no code, and no customers. We have spent these two years putting
> together an engineering team and figuring out how to become a viable
> business, both of which are still works in progress but we are finally
> starting to gain a little momentum. If we had started with the
> appropriate number of people and no other work to do, we would be
> running on x86 today, maybe not ready for final release but certainly
> well into it.
>

Just to make it clear that thread was in the context of embedded
operating systems where portability is held to be _very_ important
indeed and is even more important than normal operating systems
written in C.

The context for my comments was to show, as a simple statement of
fact, how unsuitable VMS is for the embedded world. Reading the
rest of my response puts my comments into context.

>> This shows how utterly non-portable the VMS code base is by today's
>> standards and how, amongst other things, there is way too much deep
>> architecture knowledge embedded within all layers of VMS instead of
>> being abstracted out into a cleanly seperate layer. If this were
>> Linux (for example) or another OS written with portability in mind,
>> you would have had something in the hands of the customers way
>> before now.
>>
>
> This was all true years ago but after porting to Alpha and
> especially to Itanium, architecture-specific knowledge is in very few
> places and extremely little of the OS is aware of architecture and
> that is centralized. At this point it is not so much the design of VMS
> that makes it time consuming to port, it is the fact that it was
> written in VAX assembler and BLISS and uses the GEM code
> generator. (Yes, we have explored converting to C three times and
> rejected that approach each time). Moving to ELF for Itanium has
> eliminated a lot of work this time and in the future and moving to
> LLVM this time will eliminate even more work in the future.
>

I see thanks and I stand corrected then.

So it's much less the architecture-specific knowledge this time
around and much more the implementation languages which is making
it a several-year project.

> Porting VMS is a huge job but we are making it easier each
> time. Will it ever be like porting a portable OS written in C? No,
> absolutely not, but it is a lot more "turn the crank" than you might
> think.
>

That's quite understandable. As I said in my original response, what
I perceive this as is the difference between an operating system
designed in the 1970s, which needed to work in that era, and
operating systems designed today, and that's simply how it is.

> No excuses; it takes a long time. It is difficult but it is so much
> better than it was in 1989 when we had little idea what we were
> getting into.
>

:-)

Simon.

--
Simon Clubley, clubley@remove_me.eisner.decus.org-Earth.UFP
Microsoft: Bringing you 1980s technology to a 21st century world

Simon Clubley

unread,
Oct 24, 2016, 4:13:39 PM10/24/16
to
On 2016-10-24, John Reagan <xyzz...@gmail.com> wrote:
>
> For those who are interested.
>
> For BLISS, is is the powerful macro language built into the
> frontend. It is context-sensitive so even trying to write a
> pre-processor in some more expressive language (Perl, Python, etc.)
> makes is a difficult task. For example, when expanding an iterative
> macro, the compiler has to decide on a separator (ie, does it insert a
> free "," between times or something else). That depends on where in
> the BLISS language the macro is being expanded.
>

I don't have any BLISS experience so I am in no position to judge
the issues here.

> For Macro, it is mostly the fact that you can jump between routines.
> You can come into routine A, jump to B, jump to C, indirectly jump to
> D or E, and finally return from F. That is difficult to express in C
> (or just about anything else). We had a translator company try years
> ago with such a piece of Macro, and the resulting C looked like:
>
> int B(int R0, int R1, int R2, you get the idea...);
>
> and kept passing those parameters from routine to routine.
>

That C code makes me go ouch. :-)

Apart from the obvious inefficiency involved, the immediate thing
which jumps out is to wonder whether passing as int causes some
unsigned data to wrongly be treated as signed data.

I would hope that this company gave _very_ careful attention to the
_exact_ instructions used in the MACRO-32 source when generating the
C code to make sure that unsigned data really did get treated as
unsigned data.

Otherwise that could have been a wonderful source of security
vulnerabilities.

> The Macro-32 cross-jumping is a real pain on both Alpha and Itanium.
> On Alpha, all of those routines need their R27 linkage pointer to be
> the same. On Itanium, all of those routines need their out0-out7
> registers to be the same so they all need their 'alloc's to be the
> same. The Macro compiler has to build an elaborate flow graph and
> walk it several times forward and backward to look for available
> registers in basic blocks, lifetimes of VAX condition codes, and which
> PUSHLs are putting values into parameters vs actually building some
> data structure on the stack. And on Itanium, some of those PUSHLs
> might involve NaTs and need even more special-handling.

And it's reading about stuff like this which makes you realise why
it takes time to port VMS to a new architecture.

Simon Clubley

unread,
Oct 24, 2016, 4:38:00 PM10/24/16
to
On 2016-10-24, Phillip Helbig (undress to reply) <hel...@asclothestro.multivax.de> wrote:
>
> I remember a while back when some bugs showed up in VMS MAIL and it
> turned out that they were introduced when MAIL was converted from BLISS
> to C. That doesn't put the blame on C, of course, but raises the
> question whether a port to another language is worth the risks.
> (Personally, I think that many languages are more readable than C, and
> as a result I would certainly write better code in languages other than
> C. On the other hand, of course, a good Fortran programmer can write
> Fortran in any language.)
>

I suppose it depends on what you are trying to achieve.

The decision to rewrite VMS mail in isolation seems a little
strange, but standardised higher-level system programming languages
do bring a number of advantages to the table, especially when
you are replacing assembly language code (as opposed to pseudo
higher-level languages such as BLISS above).

C is not exactly a great language by today's standards (it would have
been nice if a Pillar/Wirth-style language had become established
instead) but it's better than the assembly language it replaced.

Bill Gunshannon

unread,
Oct 24, 2016, 4:39:37 PM10/24/16
to
On 10/24/16 4:13 PM, Simon Clubley wrote:
> On 2016-10-24, John Reagan <xyzz...@gmail.com> wrote:
>>
>> For those who are interested.
>>
>> For BLISS, is is the powerful macro language built into the
>> frontend. It is context-sensitive so even trying to write a
>> pre-processor in some more expressive language (Perl, Python, etc.)
>> makes is a difficult task. For example, when expanding an iterative
>> macro, the compiler has to decide on a separator (ie, does it insert a
>> free "," between times or something else). That depends on where in
>> the BLISS language the macro is being expanded.
>>
>
> I don't have any BLISS experience so I am in no position to judge
> the issues here.
>
>> For Macro, it is mostly the fact that you can jump between routines.
>> You can come into routine A, jump to B, jump to C, indirectly jump to
>> D or E, and finally return from F. That is difficult to express in C
>> (or just about anything else). We had a translator company try years
>> ago with such a piece of Macro, and the resulting C looked like:
>>
>> int B(int R0, int R1, int R2, you get the idea...);
>>
>> and kept passing those parameters from routine to routine.
>>
>
> That C code makes me go ouch. :-)

The description of the Macro really makes me wonder where people get the
idea that C is the language that lets programmers write bad code.

>
> Apart from the obvious inefficiency involved, the immediate thing
> which jumps out hard is to wonder if passing as int causes some
> unsigned data to wrongly be used as signed data.

That can be done in any language. The results vary. My last COBOL
gig (just a couple years ago) had me going over a bunch of programs
after I found a bunch of calculations where the intermediate results
were all stored in unsigned data items. The place was really pleased
with itself that none of the jobs (over a period of many years) ever
went over estimate. Oops.

>
> I would hope that this company gave _very_ careful attention to the
> _exact_ instructions used in the MACRO-32 source when generating the
> C code to make sure that unsigned data really did get treated as
> unsigned data.
>
> Otherwise that could have been a wonderful source of security
> vulnerabilities.

I don't see the security vulnerability (but then, I can't see the actual
code) but I can see where the results would frequently (if not always)
be just plain wrong.

>
>> The Macro-32 cross-jumping is a real pain on both Alpha and Itanium.
>> On Alpha, all of those routines need their R27 linkage pointer to be
>> the same. On Itanium, all of those routines need their out0-out7
>> registers to be the same so they all need their 'alloc's to be the
>> same. The Macro compiler has to build an elaborate flow graph and
>> walk it several times forward and backward to look for available
>> registers in basic blocks, lifetimes of VAX condition codes, and which
>> PUSHLs are putting values into parameters vs actually building some
>> data structure on the stack. And on Itanium, some of those PUSHLs
>> might involve NaTs and need even more special-handling.
>
> And it's reading about stuff like this which makes you realise why
> it takes time to port VMS to a new architecture.

I would certainly hope the example above is not taken from somewhere
in VMS.

bill


clairg...@gmail.com

unread,
Oct 24, 2016, 4:52:12 PM10/24/16
to
On Monday, October 24, 2016 at 3:21:12 PM UTC-4, Simon Clubley wrote:
> On 2016-10-24, clairg...@gmail.com <clairg...@gmail.com> wrote:
>
> Just to make it clear that thread was in the context of embedded
> operating systems where portability is held to be _very_ important
> indeed and is even more important than normal operating systems
> written in C.
>
> The context for my comments was to show, as a simple statement of
> fact, how unsuitable VMS is for the embedded world. Reading the
> rest of my response puts my comments into context.
>
I do not disagree with your "embedded" comments at all, just wanted to point out that over the years we have wrestled VMS into a much different position than it was in back in the beginning.

David Froble

unread,
Oct 24, 2016, 7:31:53 PM10/24/16
to
clairg...@gmail.com wrote:

> This was all true years ago but after porting to Alpha and especially to
> Itanium, architecture-specific knowledge is in very few places and extremely
> little of the OS is aware of architecture and that is centralized. At this
> point it is not so much the design of VMS that makes it time consuming to
> port, it is the fact that it was written in VAX assembler and BLISS and uses
> the GEM code generator. (Yes, we have explored converting to C three times
> and rejected that approach each time).

Since you really needed, at a minimum, the Macro-32 compiler, continuing to use
the Macro-32 code is the correct decision. Re-writing it would be needless
work, and would introduce bugs and such.

Some VMS customers have Macro-32 code, and without the compiler would no longer
be able to run on VMS without re-writing their applications, with the same
problems mentioned above.

To possibly a lesser extent, the same reasons apply to Bliss.

Simon Clubley

unread,
Oct 25, 2016, 9:22:57 AM10/25/16
to
On 2016-10-24, Bill Gunshannon <bill.gu...@gmail.com> wrote:
> On 10/24/16 4:13 PM, Simon Clubley wrote:
>>
>> I would hope that this company gave _very_ careful attention to the
>> _exact_ instructions used in the MACRO-32 source when generating the
>> C code to make sure that unsigned data really did get treated as
>> unsigned data.
>>
>> Otherwise that could have been a wonderful source of security
>> vulnerabilities.
>
> I don't see the security vulnerability (but then, I can't see the actual
> code) but I can see where the results would frequently (if not always)
> be just plain wrong.
>

It belongs to the class of vulnerabilities known as signed integer
overflow vulnerabilities.

Since this is unsigned data in a signed integer, most of the time
things would be absolutely fine. It's when the numbers get big enough
to go negative in a signed integer that things get very interesting
and potentially very dangerous.
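
A hypothetical C sketch of that failure mode (all names invented for
illustration): an unsigned length that a translation stored in a signed
int goes negative, slips past a bounds check, and is then converted
back into an enormous size_t inside the copy.

#include <stdio.h>
#include <string.h>

#define MAX_LEN 64

static char buf[MAX_LEN];

static int copy_checked(const char *src, int len)  /* should be size_t */
{
    if (len > MAX_LEN)               /* a negative len passes this test */
        return -1;
    memcpy(buf, src, (size_t)len);   /* (size_t) of a negative is huge  */
    return 0;
}

int main(void)
{
    unsigned int big = 0x80000010u;           /* fine as unsigned ...   */
    printf("as signed int: %d\n", (int)big);  /* ... negative as signed */

    copy_checked("ok", 2);                    /* the benign case works  */
    /* copy_checked(p, (int)big) would pass the check and smash memory. */
    return 0;
}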

Some background reading:

https://cwe.mitre.org/data/definitions/190.html