
The hardware support problem for x86


John Dallman

May 12, 2022, 3:08:55 PM
Listening to the latest webinar, I heard that a common question
from customers is about support for specific hardware. At that point, I
realised that the variety of x86 server hardware currently available
probably exceeds the total variety of all the hardware that VMS has ever
run on.

Yes. All the variations of VAX, Alpha and Itanium that have ever been
supported likely provided less variety - in terms of devices to support -
than the variety of x86 server hardware that's _currently_ available,
ignoring all the x86 kit that is no longer on sale, but still viable.

How the hell is this managed for Windows and Linux? For those OSes, the
manufacturers do a lot of the work of writing and testing device drivers.
Microsoft have to do a lot themselves for Windows, and also provide SDKs
and templates. For Linux, the big distributors, such as Red Hat, do quite
a bit. There are also consultancies that write drivers for manufacturers,
and/or provide training and SDKs.

For VMS, at present, there's just VSI. They aren't a big company and they
have a lot of work to do without doing lots of drivers. An obvious
solution would be to switch to a hypervisor-only support model, but it
appears that many customers want to work on bare metal.

I can't find any information on the VSI website about writing device
drivers. There might be an opening for a company that produces device
drivers for customers.

John

Jan-Erik Söderholm

May 12, 2022, 3:22:48 PM
On 2022-05-12 at 21:07, John Dallman wrote:
> Listening to the latest webinar, I heard that a common question
> from customers is about support for specific hardware. At that point, I
> realised that the variety of x86 server hardware currently available
> probably exceeds the total variety of all the hardware that VMS has ever
> run on.
>
> Yes. All the variations of VAX, Alpha and Itanium that have ever been
> supported likely provided less variety - in terms of devices to support -
> than the variety of x86 server hardware that's _currently_ available,
> ignoring all the x86 kit that is no longer on sale, but still viable.
>
> How the hell is this managed for Windows and Linux? For those OSes, the
> manufacturers do a lot of the work of writing and testing device drivers.
> Microsoft have to do a lot themselves for Windows, and also provide SDKs
> and templates. For Linux, the big distributors, such as Red Hat, do quite
> a bit. There are also consultancies that write drivers for manufacturers,
> and/or provide training and SDKs.
>
> For VMS, at present, there's just VSI. They aren't a big company and they
> have a lot of work to do without doing lots of drivers. An obvious
> solution would be to switch to a hypervisor-only support model, but it
> appears that many customers want to work on bare metal.

I thought that VSI said that the increased focus on VM environments is
due to customer demand. As I understand it, the major part of VSI's income
comes from customers where running in a VM is the preferred solution.

>
> I can't find any information on the VSI website about writing device
> drivers. There might be an opening for a company that produces device
> drivers for customers.

I really can't see the need today. Just run in a VM and you have all
the driver support you'll ever need. Special hardware that you might
need to communicate with is probably network/TCPIP-connected anyway.

So, I see no business case for custom written device drivers today.

Jan-Erik.

>
> John

Stephen Hoffman

May 12, 2022, 4:30:44 PM
Most hardware vendor folks that want to sell hardware will perform the
hardware qualification themselves for Microsoft Windows, and get onto
the Microsoft HCL.

Where this gets more interesting for third parties is with hardware or
firmware bugs that are not exposed by Windows. Some vendors will fix
them, some won't.

For VSI, depending on hypervisor software devices avoids the
immediate need to write (more) drivers, and virtio support or
equivalent will help here. The use of UEFI also means that boot drivers
can potentially be avoided, though all of that comes at the cost of
(some) performance.

How large the performance cost might be is not yet clear.

For cases where performance is a factor, paravirtualization (and
virtio) reduces some of the effort.
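
To make the virtio point concrete: a paravirtualized driver targets one
small, stable, hypervisor-neutral interface instead of real silicon. The
core of that interface is the virtqueue; the split-ring descriptor below
is a sketch from memory of the VirtIO 1.x specification (verify against
the spec before relying on it), and a single virtio-blk or virtio-net
driver written against it covers every hypervisor implementing the spec.

    #include <stdint.h>

    /* Split-virtqueue buffer descriptor, per the VirtIO 1.x spec (from
       memory; check the spec). The guest driver fills these in, the
       hypervisor's device model consumes them; all fields little-endian. */
    struct virtq_desc {
        uint64_t addr;   /* guest-physical address of the buffer */
        uint32_t len;    /* buffer length in bytes */
        uint16_t flags;  /* see below */
        uint16_t next;   /* index of the next descriptor when chained */
    };

    #define VIRTQ_DESC_F_NEXT     1  /* chains to the 'next' descriptor */
    #define VIRTQ_DESC_F_WRITE    2  /* device writes, driver reads */
    #define VIRTQ_DESC_F_INDIRECT 4  /* buffer holds a descriptor table */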

There are some OpenVMS device drivers—graphics drivers,
particularly—that require at least source listings and quite possibly
source code access.

The little-known OpenVMS SHIP kits reduced some of the difficulties for
bootstrapping unsupported third-party storage, and the combination of
UEFI and hypervisor support will probably reduce that difficulty for
this upcoming era.

For hardware support configuration information, VSI has not publicized
an equivalent to SPOCK (backronym'd to Single Point Of Connectivity
Knowledge), or Enterprise Configurator, or ilk.

Compaq used Evernote for a while, which worked for what they were doing
with it: mainly storing QuickSpecs-like and
software-product-description-like documents, and not the hardware
detail from SPOCK.

Later in that same era, this configuration data
detail all became a yet bigger mess when the OpenVMS information was
dropped from HP/HPE SPOCK, too.

Past a trivial and relatively static hardware support configuration
matrix document, the only way this information can be reasonably
maintained and updated and disseminated is with a database and not a
PDF somewhere. Configuration details here include hardware and firmware
revision, and the numbers of devices tested and supported, among other
details.
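
As a sketch of the shape such a database record might take (the field
list is drawn from the details just named; the names and types are
invented for illustration):

    /* Illustrative only: one row of a hardware support matrix. */
    struct hw_support_entry {
        char     vendor[64];        /* e.g. "HPE" */
        char     model[64];         /* e.g. "ProLiant DL380" */
        char     hw_revision[32];   /* hardware revision qualified */
        char     fw_revision[32];   /* firmware revision qualified */
        unsigned devices_tested;    /* how many units were tested */
        unsigned devices_supported; /* how many are supported */
        char     status[16];        /* "supported", "deprecated", ... */
    };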

This qualification list gets even more interesting in some cases, as
hardware vendors can and have updated hardware without updating
revisions. Met that case with one very-well-known vendor's storage
hardware, where one device from the vendor worked and another didn't,
and a teardown found these two same-vendor, same-model, and
same-revision devices were internally very different devices. And the
newer device... failed.

What will happen for folks buying hardware configurations? I suspect
most folks will either use what VSI tests and supports, or what
third-party folks might test and support and package and sell, or—for
somewhat unusual requirements—occasionally third-party drivers. It'll
be fairly unusual for an end-user to create a device driver, absent
plans for a fairly substantial installed base for that end-user. If
you're rolling out a sizable deployment, or selling a supported
hardware configuration, the costs of custom drivers can be absorbed by
the buyer. Or by the seller, for a large enough purchase. And with
UEFI, probably also lacking the boot problems that traditionally arose
with third-party boot storage, absent EFI and SHIP support.

This whole area has all been discussed once or twice before. For some
of those previous discussions, SPOCK will be a pretty good search
target.

ps: If you want to see one example of a semi-related listing from this
era: https://www.synology.com/en-us/compatibility



--
Pure Personal Opinion | HoffmanLabs LLC

Jake Hamby

May 12, 2022, 4:33:40 PM
On Thursday, May 12, 2022 at 12:22:48 PM UTC-7, Jan-Erik Söderholm wrote:
> On 2022-05-12 at 21:07, John Dallman wrote:
> >
> > For VMS, at present, there's just VSI. They aren't a big company and they
> > have a lot of work to do without doing lots of drivers. An obvious
> > solution would be to switch to a hypervisor-only support model, but it
> > appears that many customers want to work on bare metal.
> I thought that VSI said that the increased focus on VM environments is
> due to customer demand. As I understand it, the major part of VSI's income
> comes from customers where running in a VM is the preferred solution.

That's right. One of the drawbacks of Itanium (and Alpha) is that there's only one provider of hypervisors, and no cloud VM providers. The only limitation I can see now with using VMS on supported hypervisors is that VMS doesn't yet support AMD CPUs. Once support for AMD is added, there's hyperthreading, which VMS could probably learn to support better, but not many other differences that are even detectable by guests.

> > I can't find any information on the VSI website about writing device
> > drivers. There might be an opening for a company that produces device
> > drivers for customers.
> I really can't see the need today. Just run in a VM and you have all
> the driver support you'll ever need. Special hardware that you might
> need to communicate with is probably network/TCPIP-connected anyway.
>
> So, I see no business case for custom written device drivers today.

Never say never. :) There's the proof-of-concept of using VMS for embedded devices on Intel Atom. It's conceivable that VMS could be a practical realtime OS for industrial applications again. I was recently sent a link to the VAX/VMS Realtime Users Guide from 1980: other than 64-bitness, the advice in there seems reasonable enough to me today, which suggests that VMS could still have potential there on commodity hardware.

http://bitsavers.org/pdf/dec/vax/vms/2.0/AA-H784A-TE_VAX_VMS_2.0_Real-Time_Users_Guide_198003.pdf

Besides realtime process control, no, I don't see much need for custom VMS device drivers, or at least not that customers would pay money for. Ideally, for servers, I would love to see VSI get funding to port to AArch64, or even ppc64le (I know the second one's wishful thinking on my part), and those would also not need any driver support to run in supported cloud VMs.

Jake

Gérard Calliet

May 12, 2022, 4:40:13 PM
Do you live near a nuclear plant? Do you think you would be pleased to
know that part of the OS used to control it lives in the cloud?

Do you take the underground? What about being somewhere in it when there
is just a subtle problem somewhere in VMware?

I don't want to relaunch all the chat around virtual or not virtual. But
it is just this: yes, there are use cases for bare metal. A minority?
Yes. Not in the big trend? Yes. Is VSI right in its choices? I'm not VSI.

I heard VSI say on the last webinar something like "if you really
need bare metal, call us and we'll see what can be done on specific
hardware". Not at all a priority now, but something can be done. It is
impossible to be Microsoft or the large Linux community and support any
hardware, but, yes, developing specific drivers could be done.

You know there is a literature about that for VAX and Alpha (perhaps
Itanium, I don't know). It is because it was possible for ISVs to
develop drivers for their products. I don't know how many did that.
But it is a normal situation, and it is possible we'll have something
like that for x86. VSI is thinking about a community with its
for-a-far-future project Atom. Perhaps there will be another
community for the not-so-far project of developing some specific
drivers for specific bare metal x86 VMS.

Gérard Calliet
>
> Jan-Erik.
>
>>
>> John
>



Jan-Erik Söderholm

May 12, 2022, 5:03:34 PM
But then, just use one of the supported "bare metal" solutions. It has
never been said that bare metal would not be supported *at all*, has it?

The roadmap still has "full support for HPE DL380" for 9.2-x.

Richard Maher

May 12, 2022, 10:24:35 PM
Or just do an Oracle and don't support (insure against) nuclear
reactors. And to our American friends that's not pronounced Knewklar

Gérard Calliet

May 13, 2022, 1:08:50 AM
Sorry, I thought you were saying there aren't any use cases for bare metal.
>
> The roadmap still has "full support for HPE DL380" for 9.2-x.
One day.
>
>
>>
>> I don't want to relaunch all the chat around virtual or not virtual.
>> But it is just this: yes, there are use cases for bare metal. A minority?
>> Yes. Not in the big trend? Yes. Is VSI right in its choices? I'm not VSI.
>>
>> I heard VSI say on the last webinar something like "if you really
>> need bare metal, call us and we'll see what can be done on specific
>> hardware". Not at all a priority now, but something can be done. It is
>> impossible to be Microsoft or the large Linux community and support any
>> hardware, but, yes, developing specific drivers could be done.
>>
>> You know there is a literature about that for VAX and Alpha (perhaps
>> Itanium, I don't know). It is because it was possible for ISVs to
>> develop drivers for their products. I don't know how many did that.
>> But it is a normal situation, and it is possible we'll have something
>> like that for x86. VSI is thinking about a community with its
>> for-a-far-future project Atom. Perhaps there will be another
>> community for the not-so-far project of developing some specific
>> drivers for specific bare metal x86 VMS.
>>
>> Gérard Calliet
>>>
>>> Jan-Erik.
>>>
>>>>
>>>> John
>>>
>>
>>
>


Gérard Calliet

May 13, 2022, 4:20:58 AM
American humour? When something dangerous exists, just ignore it.
I thought it was just in Australia where they believe in ostrich
politics. (French humour)

Jean-Baptiste Boric

May 13, 2022, 7:26:13 AM
On Thursday, May 12, 2022 at 21:08:55 UTC+2, John Dallman wrote:
> How the hell is this managed for Windows and Linux? For those OSes, the
> manufacturers do a lot of the work of writing and testing device drivers.
> Microsoft have to do a lot themselves for Windows, and also provide SDKs
> and templates. For Linux, the big distributors, such as Red Hat, do quite
> a bit. There are also consultancies that write drivers for manufacturers,
> and/or provide training and SDKs.
Linux mostly runs with kernel-mode device drivers in-tree, maintained alongside the kernel. Some hardware vendors who can't be bothered to upstream their stuff ship vendor kernels instead, which are (usually poorly maintained) forks of the kernel. There are also ways to write user-mode drivers for specific subsystems, like FUSE for file-systems and the Linux USB API, but no generic framework.

However, the reason why I'm posting (not an OpenVMS guy in the slightest) is that solutions exist in the open-source world to run drivers on foreign systems. Of note, NetBSD's rump kernel is a framework that essentially allows one to port unmodified NetBSD device drivers, file systems and network stack to run anywhere with a reasonable amount of glue code. Device drivers specifically have been ported to the Linux kernel, bare metal, GNU/Hurd... Other solutions include DDEkit for 2.6-era Linux device drivers, the HaikuOS network compatibility layer for FreeBSD network drivers and so on.
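
For a sense of how self-contained a rump kernel is, here is a minimal
client sketch, assuming a host with the NetBSD rump libraries installed
(linked with roughly -lrumpnet_netinet -lrumpnet_net -lrumpnet -lrump):
it boots an entire NetBSD TCP/IP stack inside an ordinary process and
opens a socket in it. Hosting the same thing on OpenVMS would amount to
implementing the rumpuser hypercall layer on top of VMS primitives.

    /* Sketch: boot a NetBSD rump kernel in-process and create a socket
       in its private TCP/IP stack (not the host's). */
    #include <sys/socket.h>
    #include <stdio.h>

    #include <rump/rump.h>
    #include <rump/rump_syscalls.h>

    int main(void)
    {
        if (rump_init() != 0) {           /* bootstrap the rump kernel */
            fprintf(stderr, "rump_init failed\n");
            return 1;
        }
        int s = rump_sys_socket(PF_INET, SOCK_STREAM, 0);
        if (s < 0) {
            fprintf(stderr, "rump_sys_socket failed\n");
            return 1;
        }
        printf("socket %d lives in the in-process NetBSD stack\n", s);
        rump_sys_close(s);
        return 0;
    }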

> I can't find any information on the VSI website about writing device
> drivers. There might be an opening for a company that produces device
> drivers for customers.

My point is that VSI doesn't necessarily need to write device drivers themselves, and they can't realistically handle every PCIe card and USB device out there. Even just supporting all modern, server-grade equipment from the major server vendors is going to be an overwhelming task. Maybe they should look into porting device drivers from other operating systems instead. There are plenty of BSD-licensed drivers too, if licensing issues prevent them from touching GPL-licensed stuff. Personally I'd go with NetBSD's rump kernel because it's written in C, it has a well-established track record, and I don't see a technical reason why it couldn't handle OpenVMS when it can handle running in user space for micro-kernel operating systems.

That being said, porting existing, battle-tested device drivers doesn't make them magically qualified to run on OpenVMS, which is another potential bottleneck if VSI ports a framework that gives them a lot of drivers at once. That's probably fine for hobbyists, but untested or unqualified device drivers and paying customers with support contracts are not a good mix.

Jean-Baptiste.

Simon Clubley

May 13, 2022, 8:26:45 AM
On 2022-05-12, John Dallman <j...@cix.co.uk> wrote:
>
> I can't find any information on the VSI website about writing device
> drivers. There might be an opening for a company that produces device
> drivers for customers.
>

The last public VMS device driver manual I am aware of is the one for
Alpha VMS. It is a book in its own right (just like the old Digital
Press books) and it is not a part of the VMS documentation set.

However, writing a device driver for VMS is an absolutely horrible
experience compared to how device driver development works on Linux.

My one and only VMS device driver was written about 15 or so years ago, when
I plugged a WinTV card into an AlphaStation so I could pull the teletext
data stream out of the video signal and display teletext pages under VMS.
(This was back when analogue signals were still being transmitted in the UK).

Since then, I have written multiple device drivers for bare metal, an RTOS
and for Linux, and I can tell you that if you are used to writing device
drivers on Linux, you are unlikely to want to do the same on VMS.

I certainly have no desire to ever write another VMS device driver.

On Linux, device drivers are nicely modular plug-in kernel modules
which can be removed and inserted as many times as you want (provided
you don't crash the system first :-)).
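
For anyone who hasn't seen one, the entire scaffolding of a loadable
Linux module is about a dozen lines (a minimal sketch; build it against
the kernel headers and cycle it with insmod/rmmod as often as you like):

    #include <linux/init.h>
    #include <linux/module.h>

    MODULE_LICENSE("GPL");
    MODULE_DESCRIPTION("Minimal load/unload skeleton");

    /* Runs at insmod time; returning nonzero fails the load. */
    static int __init skel_init(void)
    {
        pr_info("skel: loaded\n");
        return 0;
    }

    /* Runs at rmmod time; the module then vanishes from the kernel. */
    static void __exit skel_exit(void)
    {
        pr_info("skel: unloaded\n");
    }

    module_init(skel_init);
    module_exit(skel_exit);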

On VMS, once you have loaded a device driver, you cannot unload it to
load a fixed version. You have to do a full system reboot and log in
again before you can load a modified version of your driver.

There is no such thing as rmmod on VMS. :-( :-( :-(

On VMS, you are limited in the kinds of device drivers you can write
based on the public knowledge available and due to basic VMS design
limitations. For example, unlike on Linux, there's no way for you to
write a filesystem driver and then load it into the kernel.

The example VMS device drivers available to you are utterly useless once
you get past the really basic stuff. You get LRDRIVER and that's about it. :-(

Simon.

--
Simon Clubley, clubley@remove_me.eisner.decus.org-Earth.UFP
Walking destinations on a map are further away than they appear.

Robert A. Brooks

May 13, 2022, 10:00:13 AM
On 5/13/2022 8:26 AM, Simon Clubley wrote:
> On 2022-05-12, John Dallman <j...@cix.co.uk> wrote:
>>
>> I can't find any information on the VSI website about writing device
>> drivers. There might be an opening for a company that produces device
>> drivers for customers.
>>
>
> The last public VMS device driver manual I am aware of is the one for
> Alpha VMS. It is a book in its own right (just like the old Digital
> Press books) and it is not a part of the VMS documentation set.

If you were a DEC customer paying for VMS MDDS (media and documentation
distribution service), then you got Lenny Szubowicz's driver book along with
the full documentation set.

--
-- Rob

chris

May 13, 2022, 11:11:24 AM
With an open source OS, Linux, FreeBSD etc., the system probes
the hardware at boot time, against a background of probably
thousands of different hardware interface vendors and
manufacturers. All that knowledge is effectively shared between
the various OSes, developed and refined over decades. I can't
see how VMS could match that in any way, but perhaps one solution
might be to write a driver translation layer module to connect
open source drivers to VMS. That could be a volunteer effort, or
a volunteer effort organised and supported by VSI.

It's easy to see why the approach favours VM usage, but
many will want to run bare metal, and a VM in itself introduces
far more complexity, and possibility of bugs...

Chris

Simon Clubley

May 13, 2022, 1:37:42 PM
Yes, I remember that (and I have my own personal copy that I bought
from a bookshop).

I was talking about the online documentation because that's where John
was looking above.

These days, what do support contract people get in their Itanium
documentation for writing device drivers, and what will people get
for writing device drivers in their x86-64 VMS documentation?

How applicable is the information in the Alpha device driver manual
for writing Itanium and x86-64 device drivers these days?

I know there isn't going to be a new I&DS manual for x86-64. Is there
going to be a new device driver manual for x86-64?

Thanks,

Robert A. Brooks

May 13, 2022, 4:12:49 PM
On 5/13/2022 1:37 PM, Simon Clubley wrote:

> These days, what do support contract people get in their Itanium
> documentation for writing device drivers, and what will people get
> for writing device drivers in their x86-64 VMS documentation?
>
> How applicable is the information in the Alpha device driver manual
> for writing Itanium and x86-64 device drivers these days?
>
> I know there isn't going to be a new I&DS manual for x86-64. Is there
> going to be a new device driver manual for x86-64?

Lenny's book is still quite relevant today. The specific thing that
it documents is the non-MACRO-32 interfaces to device
driver and other executive routines. Those interfaces have not changed.

What *has* changed for X86_64 is that the SVAPTE is no more;
IRP$L_SVAPTE is gone.

In our experience, much use of IRP$L_SVAPTE was not actually as a pointer
to a SVAPTE, but as a generic pointer. For those uses, the change is a
mechanical replacement with IRP$Q_BUFIO_PKT.

For code that actually deals with SVAPTEs, the change is pretty code-specific.
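
As a toy illustration of that mechanical replacement, with a mock
structure standing in for the real IRP (the field names IRP$L_SVAPTE and
IRP$Q_BUFIO_PKT are the ones discussed above; the mock spells them with
underscores so it compiles anywhere, and everything else is invented):

    /* Illustrative mock only; the real IRP layout lives in the VMS headers. */
    struct mock_irp {
        void *irp_l_svapte;    /* stands in for IRP$L_SVAPTE (Alpha/Itanium) */
        void *irp_q_bufio_pkt; /* stands in for IRP$Q_BUFIO_PKT (x86-64) */
    };

    /* Before: many drivers stashed a generic pointer in the SVAPTE field. */
    static void *get_packet_before(struct mock_irp *irp)
    {
        return irp->irp_l_svapte;
    }

    /* After: same logic, new field -- the "mechanical" part of the port. */
    static void *get_packet_after(struct mock_irp *irp)
    {
        return irp->irp_q_bufio_pkt;
    }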

Memory management is another area where ported device drivers might need to be aware of changes,
due to X86-specific changes in memory management.

I cannot speak to the specifics of either the SVAPTE or memory management changes, since that's
not an area I was deeply involved in.


--

-- Rob

Stephen Hoffman

May 13, 2022, 4:50:24 PM
Not what I would assume. Most likely a reference to a Yale-educated US
politician known for certain mispronunciations and for a premature
accomplishment.

Your own and earnestly erudite postings can sometimes be just as
confusing to the average reader. To use a more recent metaphor and one
which may well be just as confusing to you, Darmok.

[I'm sure that at least some of my postings here read like gibberish, too.]

Returning to the usual discussions here in the comp.os.vms newsgroup,
DEC long excluded nuclear and life-critical environments from the
standard terms and conditions; what Oracle has done is not unusual.

More generally and to your earlier reference, existing nuclear and
SCADA control systems tend not to be upgraded, while new control
systems will be chosen among competing contemporary options.

Sites (still) using PDP-11 or VAX (real or emulated) hardware in 2022
are less an indication of a modern and robust market than of the
financial or bureaucratic or regulatory limitations around
qualification and re-qualification options available at these sites.

For those choosing now, there are various good options and approaches
available here for nuclear, SCADA, factory floor, and other such,
ranging from Windows to seL4 to VxWorks, and with many other good
choices available.

Whether OpenVMS x86-64 becomes anew another such option remains to be
determined. Though the current SaaS license may well discourage some
long-duration usage.

Stephen Hoffman

May 13, 2022, 5:01:47 PM
On 2022-05-13 20:12:46 +0000, Robert A. Brooks said:

> On 5/13/2022 1:37 PM, Simon Clubley wrote:
>
>> I know there isn't going to be a new I&DS manual for x86-64. Is there
>> going to be a new device driver manual for x86-64?
> Lenny's book is still quite relevant today. The specific thing that
> it documents is the non-MACRO-32 interfaces to device driver and other
> executive routines. Those interfaces have not changed.

It's quite a good and well-researched book. But it's getting a little
old. The errata for Margie and Lenny's driver book are fairly extensive
too, based on the number of sticky-notes my copy has accumulated over
the years.

VSI hasn't indicated plans for new device driver documentation (or a
book update), though some sort of update or errata is undoubtedly
planned as V9 becomes established and as more folks start looking at
and creating device drivers.

The parallel page table implementation may well trip up a few device
drivers, for instance. And that will probably mean more sticky-notes
added to my copy of the driver book, and to other copies.

Jake Hamby

May 13, 2022, 9:45:07 PM
I'd hope that the APIs haven't changed too much between Alpha and newer systems. They're all little-endian and 64-bit, after all.

One of my Twitter friends who likes collecting 1990s PCs has been writing Voodoo2 PCI drivers to port Glide to every OS that she can get her hands on. She bought an AlphaServer DS10 last year, taught herself VMS, bought a used copy of the driver book you mentioned, and that was enough information for her to write a Voodoo2 driver and port Glide:

https://github.com/Luigi30/vms-glide
https://twitter.com/LuigiThirty/status/1395061216385581057

Incredibly impressive work, especially for one person who'd never used VMS before and was doing this port as an unpaid hobby project. It would've been more challenging still if this card required DMA or use of interrupts, but fortunately, it's "just" MMIO. She definitely had to buy the driver book in order to have the necessary info.
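
For readers who haven't met one: a register-only ("just" MMIO) device is
driven by mapping its PCI BAR and issuing volatile loads and stores; no
interrupt service routine, no DMA mapping. A user-space Linux sketch of
the idea follows (the BAR address and register offset are hypothetical;
a real VMS driver maps the BAR through the kernel interfaces described
in the driver book):

    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    #define BAR_PHYS   0xf0000000UL  /* hypothetical BAR base address */
    #define REG_STATUS 0x04          /* hypothetical register offset */

    int main(void)
    {
        int fd = open("/dev/mem", O_RDWR | O_SYNC);
        if (fd < 0) { perror("open /dev/mem"); return 1; }

        volatile uint32_t *regs = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                                       MAP_SHARED, fd, BAR_PHYS);
        if (regs == MAP_FAILED) { perror("mmap"); return 1; }

        /* Volatile read: the compiler may not cache or elide it. */
        printf("status = 0x%08x\n", regs[REG_STATUS / 4]);

        munmap((void *)regs, 4096);
        close(fd);
        return 0;
    }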

BTW, I know that a lot of POS systems use USB to connect all the peripherals, and with a generic USB driver interface, which VMS has, you can talk to any of them without having to write a kernel driver. I haven't tested this, but the only weakness that I see from the docs is that it doesn't support isochronous pipes, so you wouldn't be able to talk to a USB audio device (not that there's a documented PCM audio driver API, after the retirement of MMOV for Alpha). USB generic support should be enough to talk to almost any other type of device as long as USB 2.0 is sufficient and you're on an Itanium (or x86).

Regards,
Jake

Richard Maher

May 13, 2022, 10:16:56 PM
On 14/05/2022 4:50 am, Stephen Hoffman wrote:
> On 2022-05-13 08:20:55 +0000, Gérard Calliet said:
>
>> On 13/05/2022 at 04:24, Richard Maher wrote:
>>>
>>> Or just do an Oracle and don't support (insure against) nuclear
>>> reactors. And to our American friends that's not pronounced Knewklar
>> American humour? When something dangerous exists, just ignore it.
>> I thought it was just in Australia where they believe in ostrich
>> politics. (French humour)

Don't worry, when we need to stock up on white flags we know where to go.

>
> Not what I would assume. Most likely a reference to a Yale-educated US
> politician known for certain mispronunciations and for a premature
> accomplishment.
>

That's him.

> Your own and earnestly erudite postings can sometimes be just as
> confusing to the average reader. To use a more recent metaphor and one
> which may well be just as confusing to you, Darmok.
>
> [I'm sure that at least some of my postings here read like gibberish, too.]

What do you mean, "too"? :-)

Gérard Calliet

May 15, 2022, 6:42:24 AM
On 13/05/2022 at 22:50, Stephen Hoffman wrote:
> On 2022-05-13 08:20:55 +0000, Gérard Calliet said:
>
>> On 13/05/2022 at 04:24, Richard Maher wrote:
>>>
>>> Or just do an Oracle and don't support (insure against) nuclear
>>> reactors. And to our American friends that's not pronounced Knewklar
>> American humour? When something dangerous exists, just ignore it.
>> I thought it was just in Australia where they believe in ostrich
>> politics. (French humour)
>
> Not what I would assume. Most likely a reference to a Yale-educated US
> politician known for certain mispronunciations and for a premature
> accomplishment.
>
> Your own and earnestly erudite postings can sometimes be just as
> confusing to the average reader. To use a more recent metaphor and one
> which may well be just as confusing to you, Darmok.
>
> [I'm sure that at least some of my postings here read like gibberish, too.]
We have a long way to go before we can play with all of our idioms.
Perhaps be more classic, using only the authors quoted in the Internals
epigraph chapters? Or just Shakespeare? We really need to get a common
language for the next boot camps :)
>
> Returning to the usual discussions here in the comp.os.vms newsgroup,
> DEC long excluded nuclear and life-critical environments from the
> standard terms and conditions; what Oracle has done is not unusual.
>
> More generally and to your earlier reference, existing nuclear and SCADA
> control systems tend not to be upgraded, while new control systems will
> be chosen among competing contemporary options.
You are right. I mentioned nuclear plants or human transportation just
to argue from examples where virtual seems inadequate. There are some.
>
> Sites (still) using PDP-11 or VAX (real or emulated) hardware in 2022 are
> less an indication of a modern and robust market than of the financial
> or bureaucratic or regulatory limitations around qualification and
> re-qualification options available at these sites.
1) you are right: not an indication of a robust market
2) bare metal can still be needed somewhere
3) relaunching a "legacy" ecosystem is perhaps another issue than just
chasing classic markets; classic markets are explored with classic
marketing and classic funding (billions); can VMS compete in classic
markets?
>
> For those choosing now, there are various good options and approaches
> available here for nuclear, SCADA, factory floor, and other such,
> ranging from Windows to seL4 to VxWorks, and with many other good
> choices available.
Right
>
> Whether OpenVMS x86-64 becomes anew another such option remains to be
> determined.  Though the current SaaS license may well discourage some
> long-duration usage.
Right

Scott Dorsey

May 15, 2022, 7:29:14 AM
>> Returning to the usual discussions here in the comp.os.vms newsgroup,
>> DEC long excluded nuclear and life-critical environments from the
>> standard terms and conditions; what Oracle has done is not unusual.

That's terrible! For years the PDP-8e was the standard controls machine
for nuclear power applications; the Nuclear Engineering department at Georgia
Tech continued teaching PDP-8 assembler well into the early 2000s.

Mind you, where I work we replaced our PDP-8 that was doing wind tunnel
control with a PLC...
--scott
--
"C'est un Nagra. C'est suisse, et tres, tres precis."

VAXman-

May 15, 2022, 8:46:44 AM
In article <t5qo66$mlq$1...@panix2.panix.com>, klu...@panix.com (Scott Dorsey) writes:
>>> Returning to the usual discussions here in the comp.os.vms newsgroup,
>>> DEC long excluded nuclear and life-critical environments from the
>>> standard terms and conditions; what Oracle has done is not unusual.
>
>That's terrible! For years the PDP-8e was the standard controls machine
>for nuclear power applications; the Nuclear Engineering department at Georgia
>Tech continued teaching PDP-8 assembler well into the early 2000s.
>
>Mind you where I work we replaced our PDP-8 that was doing wind tunnel
>control with a PLC...

That really blows! <rolleyes>

--
VAXman- A Bored Certified VMS Kernel Mode Hacker VAXman(at)TMESIS(dot)ORG

I speak to machines with the voice of humanity.

Stephen Hoffman

May 15, 2022, 1:07:44 PM
On 2022-05-15 11:29:10 +0000, Scott Dorsey said:

>>>
>>> Hoff: Returning to the usual discussions here in the comp.os.vms
>>> newsgroup, DEC long excluded nuclear and life-critical environments
>>> from the standard terms and conditions; what Oracle has done is not
>>> unusual.
>
> That's terrible! For years the PDP-8e was the standard controls
> machine for nuclear power applications; the Nuclear Engineering
> department at Georgia Tech continued teaching PDP-8 assembler well into
> the early 2000s.

That's prolly related to much of the US nuclear power industry and
regulatory oversight being so utterly obdurate.

No new designs approved and online in how many years? There are a few
positive signs, with one SMR design only recently having gotten
approval for a lab install.

Last I checked, US Nuclear Regulatory Commission had never approved a
new reactor design and brought it into production in the entire history
of the agency since its formation in 1975.

Somebody needs to light a fire under that whole agency and under that
whole power generation industry, or our climate is far too soon going
to light a fire under all of us.

Yeah, I well understand safety. Competing power generation has... yet
bigger issues... there.

TL;DR: PDP-8 tech worked fine for our grandparents and great
grandparents, so it'll work fine for us.

chris

May 15, 2022, 1:54:08 PM
Of course it would. A much simpler architecture, small-scale-integration
logic devices, less memory and complexity in the software as well. All
of it contributes to reliability.

From what a dealer told me some years ago, the PDP-11/05 series was still
in use in UK nuclear stations decades after initial installation, and
the two-board CPU set was fetching a real premium due to scarcity...

Chris

Simon Clubley

May 16, 2022, 1:39:38 PM
On 2022-05-13, Robert A. Brooks <FIRST...@vmssoftware.com> wrote:
Thank you for the update, Rob.

For the benefit of those thinking about writing a VMS device driver,
including a more complex example would be very helpful for x86-64.

The examples I had in mind were the USB drivers and associated code.

They are a VMS version of driver and utility code that is available
in every other OS out there, and people with Linux driver experience
(for example) could compare the VMS and Linux drivers to see what
is different and how to carry out those same driver tasks on VMS.

Simon Clubley

May 16, 2022, 1:44:56 PM
On 2022-05-15, Gérard Calliet <gerard....@pia-sofer.fr> wrote:
> On 13/05/2022 at 22:50, Stephen Hoffman wrote:
>>
>> Your own and earnestly erudite postings can sometimes be just as
>> confusing to the average reader. To use a more recent metaphor and one
>> which may well be just as confusing to you, Darmok.
>>
>> [I'm sure that at least some of my postings here read like gibberish, too.]
> We have a long way to go before we can play with all of our idioms.
> Perhaps be more classic, using only the authors quoted in the Internals
> epigraph chapters? Or just Shakespeare? We really need to get a common
> language for the next boot camps :)

Hmm, the story of Darmok _is_ a classic. :-)

Bill Gunshannon

May 19, 2022, 9:01:06 AM
On 5/12/22 17:03, Jan-Erik Söderholm wrote:
>
>
> The roadmap still has "full support for HPE DL380" for 9.2-x.
>


Ah yes, let's continue to rely on the company we have despised for
so long because of their unreliability. That bodes well.

bill

Bill Gunshannon

May 19, 2022, 10:25:59 AM
On 5/15/22 13:07, Stephen Hoffman wrote:
> On 2022-05-15 11:29:10 +0000, Scott Dorsey said:
>
>>>>
>>>> Hoff: Returning to the usual discussions here in the comp.os.vms
>>>> newsgroup, DEC long excluded nuclear and life-critical environments
>>>> from the standard terms and conditions; what Oracle has done is not
>>>> unusual.
>>
>> That's terrible!  For years the PDP-8e was the standard controls
>> machine for nuclear power applications; the Nuclear Engineering
>> department at Georgia Tech continued teaching PDP-8 assembler well
>> into the early 2000s.
>
> That's prolly related to much of the US nuclear power industry and
> regulatory oversight being so utterly obdurate.

Wasn't it the Canadian Atomic Energy people looking for PDP-11
experience not too long ago?

Now that it is gone completely, I wonder what Three Mile Island was
running. And the local nuclear plant is (I think) in the process of
being phased out. I wonder what they use.

>
> No new designs approved and online in how many years? There are a few
> positive signs, with one SMR design only recently having gotten approval
> for a lab install.
>
> Last I checked, US Nuclear Regulatory Commission had never approved a
> new reactor design and brought it into production in the entire history
> of the agency since its formation in 1975.
>
> Somebody needs to light a fire under that whole agency and under that
> whole power generation industry, or our climate is far too soon going to
> light a fire under all of us.

It seems to me that rather than lighting fires they prefer to blow
out the pilot lights and let the nuclear industry just dry up and
go away, like the fossil fuels industry, relying entirely on the
totally inadequate wind and solar industries.

>
> Yeah, I well understand safety. Competing power generation has... yet
> bigger issues... there.
>
> TL;DR: PDP-8 tech worked fine for our grandparents and great
> grandparents, so it'll work fine for us.
>

A much simpler technology, much less likely to have hidden faults that
can make things go boom. I still like the PDP-11. If nothing else, it
was a lot easier to teach (architecturally) than something like x86.

bill


Simon Clubley

May 19, 2022, 1:49:09 PM
They had to pick _a_ server for their initial hardware target.

Which one would you have suggested instead ?

Bill Gunshannon

May 19, 2022, 3:22:15 PM
On 5/19/22 13:49, Simon Clubley wrote:
> On 2022-05-19, Bill Gunshannon <bill.gu...@gmail.com> wrote:
>> On 5/12/22 17:03, Jan-Erik Söderholm wrote:
>>>
>>>
>>> The roadmap still has "full support for HPE DL380" for 9.2-x.
>>>
>>
>>
>> Ah yes, lets continue to rely on the company we have despised for
>> so long because of their unreliability. That bodes well.
>>
>> bill
>
> They had to pick _a_ server for their initial hardware target.
>
> Which one would you have suggested instead ?

No idea, as I am not in the market and it is a rapidly moving
target. I was just pointing out the fact that for years now
people here have considered HPE the enemy, but they kinda still want
to put money in its pocket. Kinda like having sanctions
against Russia while continuing to buy their gas, which is their
largest-selling product.

bill


Michael S

May 19, 2022, 3:54:35 PM
Right now "HPE DL380" means two quite different machines: Gen10 and Gen10 Plus.
And a yet-different Gen11 is planned for the 3rd or at worst the 4th quarter.
Which of the three is supported by VSI?

Dave Froble

May 19, 2022, 4:58:19 PM
Well, it really sucks when all your options are poor to bad options.

Sometimes one must sort the options, and pick the least poor one. From my
perspective, which is way back in the cheap seats, HPE produces some reasonable
x86 server systems. So what's VSI to do? Hold a grudge by slicing off their
nose to spite their face? No, they will chart the best (least poor) path open
to them, and proceed.

Note, I may be one of the leaders of the "tar and feather" HP squad. So it's
tough, but one must be realistic.

As for Russian oil and gas, it may take some time, but they have been sawing at
their noses, and when the EU does get free of them, there will be no getting
them back. At least one hopes ...

--
David Froble Tel: 724-529-0450
Dave Froble Enterprises, Inc. E-Mail: da...@tsoft-inc.com
DFE Ultralights, Inc.
170 Grimplin Road
Vanderbilt, PA 15486

Stephen Hoffman

May 19, 2022, 5:09:57 PM
On 2022-05-19 19:54:33 +0000, Michael S said:

> Right now "HPE DL380" means two quite different machines: Gen 10 and
> Gen10 Plus.
> And yet different Gen 11 is planned for 3rd or at worst 4th quarter.
> Which one of the three is supported by VSI?

The following applies prior to the availability of the planned VSI hardware
buyer's guide.

From previous postings by VSI folks here in the comp.os.vms newsgroup,
the initial native-boot hardware target is ProLiant DL380 Gen 9 and
later.

I'd not rush out and buy a new ProLiant Gen 33⅓ (Low Profile) nor any
other server as it first becomes available, given testing and support
delays are to be expected.

Should native boot be required prior to the availability of the buyer's
guide, contact VSI for an official support statement.

VSI have previously stated they're using these requests to determine
customer configuration demand.

Arne Vajhøj

May 19, 2022, 7:48:50 PM
On 5/19/2022 4:58 PM, Dave Froble wrote:
> On 5/19/2022 3:22 PM, Bill Gunshannon wrote:
>> On 5/19/22 13:49, Simon Clubley wrote:
>>> On 2022-05-19, Bill Gunshannon <bill.gu...@gmail.com> wrote:
>>>> On 5/12/22 17:03, Jan-Erik Söderholm wrote:
>>>>> The roadmap still has "full support for HPE DL380" for 9.2-x.
>>>>
>>>> Ah yes, lets continue to rely on the company we have despised for
>>>> so long because of their unreliability.  That bodes well.
>>>
>>> They had to pick _a_ server for their initial hardware target.
>>>
>>> Which one would you have suggested instead ?
>>
>> No idea as I am not in the market and it is a rapidly moving
>> target.  I was just pointing out the fact that for years now
>> people here have considered HPE the enemy but they kinda want
>> to still put money in their pocket.
>
> Well, it really sucks when all your options are poor to bad options.
>
> Sometimes one must sort the options, and pick the least poor one.  From
> my perspective, which is way back in the cheap seats, HPe produces some
> reasonable x86 server systems.

Yes.

It had to be a brand name, so HPE or Dell.

VSI has a business relationship with HPE, so HPE.

It had to be a widely used model at an appropriate
mid-size. The DL380 seems like a fine choice.

Arne

Bill Gunshannon

May 19, 2022, 9:09:25 PM
And yet there was a time here, when it was thought that VMS was
going to die, that these same people all said they would never
buy anything from HP again. Go figure.

(Don't get me wrong. I am typing this right now on an HP All-In-One
PC running Windows 10. But I didn't put a dime into HP or MS coffers.
I got it for nothing and it's worth everything I paid for it!)

bill

David Wade

May 20, 2022, 2:33:40 AM
These days I would say HP is the best of a bad bunch. The realistic
alternatives? Well, Dell, who change specs at the drop of a hat, and
Lenovo, who kind of carried on from IBM, so totally Chinese?

Dave

Michael S

May 20, 2022, 11:43:41 AM
A fully-loaded DL380 Gen10 (even without the Plus) is probably bigger than anything VMS
was ever running on, both in terms of memory size and in terms of number of cores/threads.
In a bare-metal scenario I would expect surprising scalability hazards.
Also, the DL380 is more expensive than it is really worth.

If I had to pick one HP server model for bare-metal support, and if it had to be Intel-based,
I'd likely go for smaller gear, i.e. by now the DL20 Gen10.

Arne Vajhøj

May 20, 2022, 11:49:33 AM
> A fully-loaded DL380 Gen10 (even without the Plus) is probably bigger than anything VMS
> was ever running on, both in terms of memory size and in terms of number of cores/threads.
> In a bare-metal scenario I would expect surprising scalability hazards.
> Also, the DL380 is more expensive than it is really worth.
>
> If I had to pick one HP server model for bare-metal support, and if it had to be Intel-based,
> I'd likely go for smaller gear, i.e. by now the DL20 Gen10.

I thought the DL380 started around $5K, which is not an unreasonable
starting point for a VMS system.

But yes - they may be more powerful than the last Itaniums, which are
really a decade-old design.

A DL20 is the "DS10L lookalike" starting around $2K, right?

Arne

Michael S

May 20, 2022, 12:23:25 PM
They are maybe not much more powerful than the biggest Itanium Superdome
of the latest generation, maybe even somewhat less powerful in some aspects,
but, according to my understanding, VMS was never capable of running on a
fully-loaded Superdome on bare metal. Only in a VM.
And I'd guess that there were reasons.

>
> A DL20 is the "DS10L lookalike" starting around $2K, right?

I don't know what exactly the DS10L was.
I would guess that it was much, much smaller than the DL20 in terms
of capacity and a little higher up in terms of market positioning.
But the top DL20 is something VMS certainly can handle.
And the HW is a lot simpler than the DL380. Also, support for the DL20
would mean that in practice VMS will run on plenty of Intel-based
desktop computers that are very similar to the DL20 in terms of HW.
Which would be nice for developers even if VSI does not support
these machines officially.

>
> Arne

Jan-Erik Söderholm

May 20, 2022, 12:28:08 PM
On the desktop I expect that a VM works better. Then you still
have your usual "office" environment there also.

And desktop systems are probably not the best for a "server".




>>
>> Arne

chris

May 20, 2022, 12:50:54 PM
The DL two-digit models are generally the low-end ProLiants. The DL320,
DL360 and DL380 are more the industry-standard types, with
a very wide range of options. That's rack mount, but for deskside,
the MLxxx series apply and have more card slots and other options.
There are others, but those are the mainstream types. Still using
the G8 series here, as a sweet spot price-wise, but IIRC they were the last
series that included a traditional BIOS, whereas from G9 onwards,
the only BIOS available is UEFI.

Have used ProLiants here for decades, mainly DL360 (1U) and DL380
(2U), and they are well built with solid reliability. One of the primary
reasons why HP wanted to buy Compaq. Consistent construction and
manageability throughout the range...

Chris




Arne Vajhøj

May 20, 2022, 1:04:10 PM
On 5/20/2022 12:23 PM, Michael S wrote:
> On Friday, May 20, 2022 at 6:49:33 PM UTC+3, Arne Vajhøj wrote:
>> A DL20 is the "DS10L lookalike" starting around 2 K$ right?
>
> I don't know what exactly was DS10L.

Low height rack mounted Alpha.

https://people.freebsd.org/~wilko/Alpha-gallery/DS10L/dcp_0589.jpg

> Would guess that it was much much smaller than DL20 in terms
> of capacity and a little higher up in terms of market positioning.
>
> But top DL20 is something VMS certainly can handle.
> And the HW is a lot simpler than DL380. Also, support for DL20
> would mean that in practice VMS wil run on plenty of Intel-based
> desktop computers that are very similar to DL20 in terms of HW.
> Which would be nice for developers even if VSI does not support
> these machines officially.

I would expect practically all developers to use a VM.

Arne

Arne Vajhøj

May 20, 2022, 1:05:27 PM
On 5/20/2022 12:50 PM, chris wrote:
> The DL two-digit models are generally the low-end ProLiants. The DL320,
> DL360 and DL380 are more the industry-standard types, with
> a very wide range of options. That's rack mount, but for deskside,
> the MLxxx series apply and have more card slots and other options.
> There are others, but those are the mainstream types. Still using
> the G8 series here, as a sweet spot price-wise, but IIRC they were the last
> series that included a traditional BIOS, whereas from G9 onwards,
> the only BIOS available is UEFI.
>
> Have used ProLiants here for decades, mainly DL360 (1U) and DL380
> (2U), and they are well built with solid reliability. One of the primary
> reasons why HP wanted to buy Compaq. Consistent construction and
> manageability throughout the range...

So you like the choice of the DL380 as the first supported physical system?

Arne

chris

May 20, 2022, 1:40:44 PM
Have no experience of the G9 and up series, but they will be
just an incremental development of the range. There is a lot
of built-in capability in the DL380. One or two multicore processors.
Memory is all ECC with, IIRC, interleaving and other modes, and loads
of memory slots. Smart Array SAS/SATA controllers, with an 8 x 2.5"
disk backplane. Dual power supplies, up to 6 I/O slots and a BIOS
with pages of options. Easy to set up for remote iLO access:
just set up a network address and point a browser at it. Remote and
locally aided provisioning within the BIOS. Oh yes, 4 network ports
standard, as well as a dedicated iLO network port + the usual serial
and USB ports. Check out the QuickSpecs on the HP site to get an
idea of what's available.

Yes, do like ProLiant. Quite often have to set up offbeat or test
machines, and nothing gets in the way of doing that. Don't work
for them, but as an engineer, do appreciate properly sorted,
well-engineered systems, one of the things that we used to
respect about DEC...

Chris

Jake Hamby

May 20, 2022, 3:35:13 PM
On Thursday, May 19, 2022 at 11:33:40 PM UTC-7, David Wade wrote:
> These days I would say HP is the best of a bad bunch. The realistic
> alternatives? Well DELL who change specs at the drop of a hat, and
> Lenovo who kind of carried on from IBM so totally Chinese?

Lenovo's pretty good at making PCs. I fully understand and support IBM offloading their x86 business, because it wasn't going to generate sufficient profit margins for IBM's sales-heavy business model. They've been focusing completely on mainframes and POWER-based systems and it seems to be working out well for them. Other divisions IBM has offloaded include POS systems (to Toshiba), and hard drives (to Hitachi).

I own a Lenovo Ideacentre that I bought from Best Buy and upgraded, and more recently I bought a custom Lenovo ThinkStation P340 Tower with exactly the specs I wanted, including the fastest CPU available and two RS-232 serial ports (the extra ports cost $5 to add, and the entire PC was discounted by 42%). Made in Mexico (presumably to avoid tariffs). No complaints at all, except that I had to manually upgrade it to Ubuntu 22.04 LTS because they hadn't flipped the switch on the OEM-supported pre-installed Ubuntu 20.04 to make the update available.

The first Lenovo came with Windows 10, and the keyboard and other components are lower quality than the new Lenovo. They are built to a price, after all. My new Lenovo came with Ubuntu 20.04 LTS and I saved $300+ compared to buying it with Windows 10 Pro for Workstations, which the custom build web page said would be required if I wanted to use Windows with the Xeon CPU that I ordered. No extra charge to run Ubuntu on server-grade Intel and AMD CPUs.

Regards,
Jake

Stephen Hoffman

May 20, 2022, 4:41:55 PM
On 2022-05-19 23:48:40 +0000, Arne Vajhøj said:

> It had to be a brand name so HPE or Dell.

I've suggested SuperMicro as an alternative to HPE and Dell, and with a
very large selection of servers.

Arne Vajhøj

May 20, 2022, 7:25:07 PM
On 5/20/2022 4:41 PM, Stephen Hoffman wrote:
> On 2022-05-19 23:48:40 +0000, Arne Vajhøj said:
>> It had to be a brand name so HPE or Dell.
>
> I've suggested SuperMicro as an alternative to HPE and Dell, and with a
> very large selection of servers.

They sell a lot of servers.

Their servers may be just as good as what HPE and Dell sell.

But I don't think they have the brand name that at least some
VMS shops may require.

Arne

chris

May 21, 2022, 7:26:03 AM
There was also the report in the Register and elsewhere about
the possibility of management processors that reported back
to China. Just the possibility of spyware in hardware would
make me think twice about using such machines.

With engineer hat on, always found Dell to be a bit lightweight,
but IBM also make 1U and 2U server platforms, and most IBM kit
is very well engineered. Always interested in build quality
and what's under the hood here, as it's assumed that most major
vendors' kit will run all Windows Server and Linux distros
without serious issues...

Chris

Bill Gunshannon

May 21, 2022, 9:07:23 AM
On 5/21/22 07:25, chris wrote:
> On 05/21/22 00:25, Arne Vajhøj wrote:
>> On 5/20/2022 4:41 PM, Stephen Hoffman wrote:
>>> On 2022-05-19 23:48:40 +0000, Arne Vajhøj said:
>>>> It had to be a brand name so HPE or Dell.
>>>
>>> I've suggested SuperMicro as an alternative to HPE and Dell, and with
>>> a very large selection of servers.
>>
>> They sell a lot of servers.
>>
>> Their servers may be just as good as what HPE and Dell sells.
>>
>> But I don't think they have the brand name that at least some
>> VMS shops may require.
>>
>> Arne
>>
>
> There was also the report in the Register and elsewhere about
> the possibility of management processors that reported back
> to China. Just the possibility of spyware in hardware would
> make me think twice about using such machines.

One would hope by now that people have learned to keep production
systems isolated from the internet. That keeps them from reporting
back. :-) If nothing else, monitoring by your competent network
people :-) would identify this and it could be blocked at the firewall.

>
> With engineer hat on, always found Dell to be a bit lightweight,
> but IBM also make a 1 and 2U server platforms and most IBM kit
> is very well engineered. Always interested in build quality
> and what's under the hood here, as it's assumed that most major
> vendor's kit will run all windows server and Linux distros
> without serious issues...

If I were still doing this and had the money I would go with IBM
any day. Unless it turns out they, too, are made in China. At
that point all bets are off. We had a problem with Lenovo laptops
back in 2007 where there was a claim that they reported their
locations back to somewhere in China. Being DOD we were not happy
with that possibility and, true or not, it probably cost Lenovo
a lot of business.

bill


John Dallman

May 21, 2022, 9:35:48 AM
In article <jes6g6...@mid.individual.net>, bill.gu...@gmail.com
(Bill Gunshannon) wrote:

> If I were still doing this and had the money I would go with IBM
> any day. Unless it turns out they, too, are made in China. At
> that point all bets are off.

The last time I bought low-end POWER servers from IBM, they were made in
China. So I think you can expect anything x86-based to come from there
too.

John

chris

May 21, 2022, 9:53:52 AM
FWICS, a lot of kit is manufactured in China these days, but at least
with a vendor like IBM or even HP, I would trust them over some others,
as they will have processes in place to ensure security at the hardware
level, at all levels of manufacturing: schematics, PCB layouts and
production samples.

The original report on the hardware spyware was in the Wall Street
Journal and involved added hardware hidden on the m/b under other
parts. Given that a microprocessor with firmware can be just a
5 mm or smaller square these days, such things can be difficult to
find. Quite a fuss at the time, but I don't remember what the final
outcome was. Of course, it might have been the work of the NSA or other
security agencies here, but it just shows again how essential effective
hardware firewalling is, incoming and outgoing...

Chris

Scott Dorsey

May 21, 2022, 10:55:44 AM
chris <chris-...@tridac.net> wrote:
[regarding supermicro]

>There was also the report in the Register and elsewhere about
>the possibility of management processors that reported back
>to China. Just the possibility of spyware in hardware would
>make me think twice about using such machines.

That report was a bit over the top when it came out and has since been
pretty well discredited.

However, even so, many government agencies are still barred from
buying Supermicro hardware. This is enough of a reason to avoid
it as a base if you're wanting to sell into that market.

Another significant problem from my standpoint is that Supermicro
hardware really isn't very stable. Every time I look, the board I
used last has been discontinued and replaced with something new and
slightly different. They are even worse than HPE about this, and HPE
is really annoying.

>With engineer hat on, always found Dell to be a bit lightweight,
>but IBM also make a 1 and 2U server platforms and most IBM kit
>is very well engineered. Always interested in build quality
>and what's under the hood here, as it's assumed that most major
>vendor's kit will run all windows server and Linux distros
>without serious issues...

I have never used the IBM x86 stuff... which models do you recommend?
--scott
--
"C'est un Nagra. C'est suisse, et tres, tres precis."

Bill Gunshannon

May 21, 2022, 11:08:14 AM
On 5/21/22 09:53, chris wrote:
> On 05/21/22 14:35, John Dallman wrote:
>> In article<jes6g6...@mid.individual.net>, bill.gu...@gmail.com
>> (Bill Gunshannon) wrote:
>>
>>> If I were still doing this and had the money I would go with IBM
>>> any day.  Unless it turns out they, too, are made in China.  At
>>> that point all bets are off.
>>
>> The last time I bought low-end POWER servers from IBM, they were made in
>> China. So I think you can expect anything x86-based to come from there
>> too.
>>
>> John
>
> Fwics, a lot of kit is manufactured in China these days, but at least
> with a vendor like IBM or even HP, I would trust them over some others,
> as they will have processes in place to ensure security at the
> hardware level, across all stages of manufacturing, from schematics
> and pcb layouts to production samples.

The only problem would be the possibility that they have been coerced
into doing it by the Chinese government. Not paranoid, just aware of
the current situation.

>
> The original report on the hardware spyware was in Bloomberg
> Businessweek and involved added hardware hidden on the m/b under other
> parts. Given that a microprocessor with firmware can be just 5mm
> square or less these days, such things can be difficult to find. Quite
> a fuss at the time, but don't remember what the final outcome was. Of
> course, it might have been the work of the NSA or other security
> agencies here, but it just shows again how essential effective
> hardware firewalling is, incoming and outgoing...
>

Same as the report on Lenovo 15 years ago. It definitely wasn't the
NSA, as they are the ones who reported it. While firewalling is
definitely a requirement, we chose to just not allow the systems on
the network at all.

bill


chris

unread,
May 21, 2022, 11:33:44 AM5/21/22
to
Still looking into that, but after something equivalent to the dl360
and dl380 Proliant. Basic spec, 2 x I5 processors min and
options as per the DL... series. Never buy new here, but the X3550 M4
or M5 look far enough down the price curve to make them interesting
for evaluation purposes. Looking at other vendors, it's quite difficult
to match the functionality & common sense of Proliant. Oddball things
in the bios, like the ability to disable onboard video to prioritise a
more capable added card, were not possible on some Fujitsu kit tested
a while back. Fine for headless use only, but some machines here are
dual-purposed as desktop and server. The devil is in the detail, as
usual, and the only way to really find out is to have the hardware in
front of you. Some machines have very limited bios functionality,
which limits choices and potential usage.

Will probably buy an IBM X series machine later in the year for
evaluation, but unlikely to be a state-of-the-art current model.

Chris


chris

unread,
May 21, 2022, 1:57:15 PM5/21/22
to
That's the ideal, but it can cause issues if you are running a
web server or other external services. Get round that here by
using old Sparc hardware, on the basis that many intrusion exploits
depend on getting an executable binary loaded onto the machine. If
the binary won't run on the architecture, then the exploit fails.
Overkill perhaps, but small shop here and not a problem to experiment
with a variety of solutions. Plan to replace it with an Arm single
board computer running FreeBSD at some stage, but no time as yet...
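
As a rough illustration, here is the check the kernel's loader
effectively makes before it will exec anything -- a quick Python
sketch, assuming ELF binaries and only a handful of machine codes
from the ELF spec, so treat it as a toy:

  import struct, sys

  # Subset of ELF e_machine codes (values from the ELF specification).
  E_MACHINE = {0x02: "SPARC", 0x03: "x86", 0x28: "ARM", 0x2b: "SPARCv9",
               0x32: "IA-64", 0x3e: "x86-64", 0xb7: "AArch64"}

  def elf_arch(path):
      with open(path, "rb") as f:
          header = f.read(20)   # e_ident (16 bytes) + e_type + e_machine
      if header[:4] != b"\x7fELF":
          return None           # not an ELF binary at all
      endian = "<" if header[5] == 1 else ">"   # EI_DATA: 1 = LE, 2 = BE
      (machine,) = struct.unpack_from(endian + "H", header, 18)
      return E_MACHINE.get(machine, hex(machine))

  print(elf_arch(sys.argv[1]))

An x86-64 payload pushed onto a SPARCv9 box fails that e_machine
test before a single instruction of it runs.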

Chris


Jake Hamby

unread,
May 21, 2022, 5:36:21 PM5/21/22
to
On Saturday, May 21, 2022 at 8:08:14 AM UTC-7, Bill Gunshannon wrote:
> >
> > Fwics, a lot of kit is manufactured in China these days, but at least
> > with a vendor like IBM or even HP, I would trust them over some others,
> > as they will have processes in place to ensure security at the
> > hardware level, across all stages of manufacturing, from schematics
> > and pcb layouts to production samples.
> The only problem would be the possibility that they have been coerced
> into doing it by the Chinese government. Not paranoid, just aware of
> the current situation.
> >
> > The original report on the hardware spyware was in Bloomberg
> > Businessweek and involved added hardware hidden on the m/b under other
> > parts. Given that a microprocessor with firmware can be just 5mm
> > square or less these days, such things can be difficult to find. Quite
> > a fuss at the time, but don't remember what the final outcome was. Of
> > course, it might have been the work of the NSA or other security
> > agencies here, but it just shows again how essential effective
> > hardware firewalling is, incoming and outgoing...
> >
> Same as the report on Lenovo 15 years ago. It definitely wasn't the
> NSA, as they are the ones who reported it. While firewalling is
> definitely a requirement, we chose to just not allow the systems on
> the network at all.

The Supermicro story seemed like someone's planted FUD to me, given how emphatic the denials from Apple, Amazon, and of course Supermicro were. Anything's plausible. I'm surprised that non-US companies are so willing to buy American hardware, especially Intel, given that it's just as easy to claim that the NSA may have coerced them into installing American spyware. :)

I think I'd trust IBM to stand up for their own corporate honor as far as the security of their firmware goes, and it's probably one of the reasons why they did sell their x86 division to a Chinese company: there are major Chinese banks and institutions running on IBM mainframes, after all. They can put Lenovo servers in their mainframe racks (in place of the Lenovo ThinkPads that too easily got separated from the mainframes, making them worthless for resale), and presumably everyone is auditing each other's code enough to "trust but verify" that everything's sensible.

Intel has now put so much code in their "negative rings" of the Management Engine that it's an open question whether it actually provides any security to the customer or should be disabled completely: https://en.wikipedia.org/wiki/Intel_Management_Engine
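
On a Linux box you can at least check whether the ME's host interface
is exposed to the OS -- a trivial sketch, assuming the standard mei_me
driver, and bearing in mind that a hidden interface is not the same
thing as a disabled ME:

  # Does the kernel see a Management Engine Interface device?
  import glob
  nodes = glob.glob("/sys/class/mei/mei*")
  print("MEI device nodes:", nodes if nodes else "none visible")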

One good thing about running in virtualization is that VSI can punt on the issue of auditing the security of the UEFI BIOS and all the rest of the x86 firmware, which is one of the less pleasant aspects of the x86 platform. Did you know that Linux and Windows NT have to ignore the first 1MB of RAM because there's no way to find out who may be using it? https://www.phoronix.com/scan.php?page=news_item&px=Windows-Reserves-First-1MB-RAM
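
You can see the effect for yourself on Linux -- a sketch, and note
that /proc/iomem shows zeroed addresses unless you run it as root:

  # List the physical memory regions that start below 1MB.
  with open("/proc/iomem") as f:
      for line in f:
          start = int(line.split(":")[0].strip().split("-")[0], 16)
          if start < 0x100000:
              print(line.rstrip())

On a typical machine the sub-1MB space shows up as a patchwork of
reserved and legacy ranges rather than plain usable RAM.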

Jake Hamby

unread,
May 21, 2022, 5:44:43 PM5/21/22
to
On Saturday, May 21, 2022 at 10:57:15 AM UTC-7, chris wrote:
> That's the ideal, baut can cause issues if if you are running a
> web server or other external services. Get round that here by
> using old Sparc hardware, on the basis that many intrusion exploits
> depend on getting an executable binary loaded onto the machine. If
> the binary won't run on the architecture, then the exploit fails.
> Overkill perhaps, but small shop here and not a problem to experiment
> with a variety of solutions. Plan to replace it with an Arm single
> board computer, FreeBSD at some stage, but no time as yet...

There's a good argument for Itanium for that purpose (perhaps the only good argument left for Itanium these days). The DCL vulnerability reported in early 2018 was patched for VAX, Alpha, and Itanium, but the Itanium patch was only there to protect any Alpha nodes in the same cluster: Itanium itself wasn't vulnerable, even though this was specifically a VMS attack and the researchers did try it there. The register stack engine and unusual instruction encoding seem to thwart the common exploits.

I'd argue that ARM is much too popular to be considered "obscure" as far as providing any protection from common exploits. I'd suggest looking into recent POWER CPUs, although unfortunately you need a POWER8 or newer to run the ppc64le distros that are currently supported by the Linux vendors. Personally, I like the extra layer of confusion that running 64-bit *big-endian* Linux on a PowerMac Quad G5 provides, although admittedly it's not the best form factor for a datacenter.
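
The endianness alone breaks a lot of lazy shellcode. A two-line
Python illustration of why a hard-coded little-endian address
misfires on a big-endian machine:

  import struct
  addr = 0xdeadbeef
  print(struct.pack("<I", addr).hex())   # 'efbeadde' -- x86 byte order
  print(struct.pack(">I", addr).hex())   # 'deadbeef' -- big-endian POWER

Any exploit that packs its pointers the x86 way is writing garbage
addresses on the G5.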

chris

unread,
May 22, 2022, 8:52:57 AM5/22/22
to
Probably right about Itanium and Arm, for opposite reasons of
course. Limited experience with the POWER series, though was quite
impressed with a POWER server some years ago. Also liked AIX 6 as a
fully sorted OS, and the hardware construction was superb. Good
system management tools as well.

If you go Apple, there are quite a few Xserve G5 machines still around
at low cost on a good day. Long in the tooth, but dual G5 processors,
and 1U rackmount format. They are supported by quite a few open
source OSes...

Chris

chris

unread,
May 22, 2022, 9:47:32 AM5/22/22
to
Covered quite a bit of ground there, but in summary, every site needs
to have techs familiar with the use of network sniffing tools like
Wireshark, tcpdump etc, and who know how to interpret the results.
The greater the system complexity, the more difficult it becomes to
prove fully deterministic operation, the foundation on which any
security measures are built...
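
For anyone starting out, the core of what those tools do is small
enough to sketch. A toy Python version -- Linux-only, needs root,
and no substitute for tcpdump's filters or Wireshark's dissectors:

  import socket, struct

  ETH_P_ALL = 0x0003   # capture every protocol, as tcpdump does by default
  s = socket.socket(socket.AF_PACKET, socket.SOCK_RAW,
                    socket.ntohs(ETH_P_ALL))

  def mac(b):
      return ":".join(f"{x:02x}" for x in b)

  while True:
      frame, _ = s.recvfrom(65535)
      dst, src, etype = struct.unpack("!6s6sH", frame[:14])
      print(f"{mac(src)} -> {mac(dst)}  type 0x{etype:04x}")

Interpreting what comes off the wire is the real skill, of course,
which is where the tools above earn their keep.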

Chris