
VMware


clairg...@gmail.com

unread,
Dec 8, 2019, 10:31:31 AM12/8/19
to


First, a little background. The VSI business people have had a couple very positive meetings with people at VMware. Even without anything definitive, we believe it is now time to get started on the technical side.

I have Fusion (VMware for MacOS) on my MacBook. After a slow start with some configuration issues and establishing the serial port, I can now network boot a base-level memory disk from our development cluster, issue commands to the BOOTMGR, and start the VMS boot process. This is absolutely no different than with VBox. As with every new platform we try, we are running into some form of configuration issue before we completely move beyond the Boot Manager and SYSBOOT. But, for just three days' work, this is a great start. I'll have more once we get up into VMS itself.
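(For anyone wanting to capture the BOOTMGR/SYSBOOT console output during these early boot attempts: a minimal sketch, assuming the guest's virtual serial port has been pointed at a plain file on the host. The file name is hypothetical, not VSI's actual setup.)

    # Hedged sketch: tail the file the hypervisor writes the guest's serial
    # console into, so boot output can be logged and searched on the host.
    # SERIAL_LOG is an assumed path for illustration only.
    import time

    SERIAL_LOG = "vms-x86.serial.out"

    def follow(path):
        """Yield console output lines as the guest writes them."""
        with open(path, "r", errors="replace") as f:
            while True:
                line = f.readline()
                if line:
                    yield line.rstrip("\n")
                else:
                    time.sleep(0.2)   # nothing new yet; poll again

    if __name__ == "__main__":
        for line in follow(SERIAL_LOG):
            print(line)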

VMware is not a V9.0 EAK deliverable. But, the sooner we get up and running, the better.

John H. Reinhardt

unread,
Dec 8, 2019, 12:47:47 PM12/8/19
to
Are you saying you got OpenVMS x86 to run (in a fashion) inside VMWare Fusion on a Mac? That would be very interesting... I switched from Fusion to VirtualBox on my Mac a few years ago but if this worked it might be a good reason to switch back.

--
John H. Reinhardt

clairg...@gmail.com

unread,
Dec 8, 2019, 12:52:20 PM12/8/19
to
Yes, I am booting VMS as a Fusion guest. We don't get very far yet but we will eventually get VMS up and running.

Grant Taylor

unread,
Dec 8, 2019, 1:34:53 PM12/8/19
to
On 12/8/19 10:52 AM, clair...@vmssoftware.com wrote:
> Yes, I am booting VMS as a Fusion guest. We don't get very far yet
> but we will eventually get VMS up and running.

How much difference is there between VMware Fusion and Oracle
VirtualBox? My experience is that the two systems are quite similar,
save for some minor supported hardware differences. I.e. which video
chipset or SCSI chipset or network card each emulates.

Is VMware Fusion different enough from Oracle's VirtualBox to be
significant for OpenVMS? Or is it just that your current work efforts are
using VMware Fusion?



--
Grant. . . .
unix || die

Arne Vajhøj

unread,
Dec 8, 2019, 3:07:00 PM12/8/19
to
I would assume the end goal would be to support VMWare ESXi and
that VMWare Fusion is just a step towards that.

VMWare ESXi support would be important for many customers.

Arne

clairg...@gmail.com

unread,
Dec 8, 2019, 3:24:27 PM12/8/19
to
As I have said hundreds of times, I never expect to boot on a new platform without some amount of configuration-related debugging. You can just about bet the mortgage on not getting beyond SYSBOOT.

Once you get a little further up, then it will be the timekeeping mechanism and how we wrangle it into our interval timer. Sometimes you get lucky with that, though.

After getting booted and loading all the software images, things should be fairly normal since you are beyond most of the platform-specific stuff.



clairg...@gmail.com

unread,
Dec 8, 2019, 3:28:31 PM12/8/19
to
Yes, the goal is production environments. Fusion is a convenient debugging environment. We will likely try the PC version as well.

Hans Bachner

unread,
Dec 8, 2019, 5:21:22 PM12/8/19
to
Clair,

thanks for the excellent news that VMware apparently not only climbed up
on your priority stack, but you succeeded with some initial steps to get
VMS booting in VMware Fusion.

While Fusion certainly is an interesting option for developers (both
OpenVMS and applications), and VMware Workstation will be interesting for
other developers, for production use ESXi is the only option that will
get consideration from your customers (in the VMware world).

Keep the good news coming...

Best regards,
Hans.

clairg...@gmail.com

unread,
Dec 8, 2019, 7:50:37 PM12/8/19
to
I guess I need to set something straight. VMware has always been at the top of our list but the VMware folks wouldn't give us the time of day. Now they are talking to us. That is why we started working with Fusion. We need them to officially support VMS as a guest OS. That takes a business agreement between the two companies which now seems a possibility. There was no sign of that until a couple weeks ago.

ESX is obviously the goal. Anything else just gives us easier ways to get a sense of how much work it will take to get there.



Grant Taylor

unread,
Dec 8, 2019, 7:56:45 PM12/8/19
to
On 12/8/19 1:24 PM, clair...@vmssoftware.com wrote:
> As I have said hundreds of times,

I feel like you answered a different question than the one that I was
intending to ask.

I was intending to ask about how much of a difference in the hypervisors
was a problem for you.

clairg...@gmail.com

unread,
Dec 8, 2019, 9:06:40 PM12/8/19
to
Don't know yet. Once we get booted then we can compare Fusion to VBox and kvm.

Kerry Main

unread,
Dec 8, 2019, 9:40:10 PM12/8/19
to comp.os.vms to email gateway
While getting official support for VMware would be great, the other solution
that I would like to see is the KVM flavour as it seems like it would be a
natural fit for HCI technologies from companies like Nutanix.

Nutanix HCI solutions are very hot these days for Customers who are looking
for an option to reduce their high VMware costs.

<https://www.nutanix.com/products/acropolis>

The Nutanix hypervisor is free and is based on KVM. Customers seem to like
the solution as well. You get very high VM counts on relatively small
servers.

Nutanix-HPE are also very close so ProLiant support is well integrated:
<https://www.nutanix.com/hpe>


Regards,

Kerry Main
Kerry dot main at starkgaming dot com




Dave Froble

unread,
Dec 8, 2019, 10:39:39 PM12/8/19
to
I'm not at all familiar with any of the VMware products. Personally, I
don't see the appeal. But I've been out to lunch for so long that I may
never find my way back. So what do I know.

What I see as important is that, for whatever reason, VSI and VMS are now
"visible" to such people. Don't know what happened to get them to talk,
but visibility is rather important for VSI and VMS. So, generally a
"good thing".

On the road to being acceptable ....


--
David Froble Tel: 724-529-0450
Dave Froble Enterprises, Inc. E-Mail: da...@tsoft-inc.com
DFE Ultralights, Inc.
170 Grimplin Road
Vanderbilt, PA 15486

Grant Taylor

unread,
Dec 9, 2019, 12:57:05 AM12/9/19
to
On 12/8/19 7:06 PM, clair...@vmssoftware.com wrote:
> Don't know yet. Once we get booted then we can compare Fusion to VBox
> and kvm.

Fair enough. That makes perfect sense.

Jean-François Piéronne

unread,
Dec 9, 2019, 3:01:21 AM12/9/19
to
We have used Proxmox (https://www.proxmox.com/), also based on KVM, for many
years, including one site running more than 100 VMs.

Regards,

Jean-François

clairg...@gmail.com

unread,
Dec 9, 2019, 5:41:53 AM12/9/19
to
VBox and kvm are our committed platforms for the V9.0 EAK. We have done debugging on both all along and what works on one works on the other.

Over the past five years I have lost track of the number of times I have heard the following. "We run our entire IT shop on VMware, except for VMS which sits over in the corner on a very expensive piece of HW that everyone wants to get rid of. Get VMS on VMware and you will have a long life with us." That's why VMware is so important to us.

Kerry Main

unread,
Dec 9, 2019, 1:05:05 PM12/9/19
to comp.os.vms to email gateway
> -----Original Message-----
> From: Info-vax <info-vax...@rbnsn.com> On Behalf Of clair.grant---
> via Info-vax
> Sent: December 9, 2019 5:42 AM
> To: info...@rbnsn.com
> Cc: clair...@vmssoftware.com <clairg...@gmail.com>
> Subject: Re: [Info-vax] VMware
>
Absolutely - as I mentioned, VMware support would be huge.

To put things in perspective, I would say that VMware has approx. 90+% of
the hypervisor market, so their formal support is very important.

Having stated this, VMware is a bit like Oracle and Microsoft i.e. once they
achieve a certain level of market dominance, they keep cranking up their
prices as they believe Customers will just keep paying.

In fact, many of the larger Customers are now starting to rebel and look for
viable alternatives.

Dave Froble

unread,
Dec 9, 2019, 4:10:18 PM12/9/19
to
As I may have mentioned, I have no experience with such products. That
written, the following question may or may not have any relevance.

At what point does the cost of VMware make that piece of HW in the
corner not so expensive, in comparison? Regardless, it would still have
its own power supplies, and that can be an issue.

clairg...@gmail.com

unread,
Dec 9, 2019, 5:33:44 PM12/9/19
to
The big thing for the people I talk with is that VMS is different. The HW cost is certainly a big factor but even more important is that these people already run dozens, if not hundreds, of VMware guests. They want VMS to be in there, too, possibly in many guests. The message is always the same, don't be different if you want to stay around.

Arne Vajhøj

unread,
Dec 9, 2019, 7:38:56 PM12/9/19
to
On 12/8/2019 10:40 PM, Dave Froble wrote:
> On 12/8/2019 7:50 PM, clair...@vmssoftware.com wrote:
>> I guess I need to set something straight. VMware has always been at
>> the top of our list but the VMware folks wouldn't give us the time of
>> day. Now they are talking to us. That is why we started working with
>> Fusion. We need them to officially support VMS as a guest OS. That
>> takes a business agreement between the two companies which now seems a
>> possibility. There was no sign of that until a couple weeks ago.
>>
>> ESX is obviously the goal. Anything else just gives us easier ways to
>> get a sense of how much work it will take to get there.
>
> I'm not at all familiar with any of the VMware products.  Personally, I
> don't see the appeal.  But I've been out to lunch for so long that I may
> never find my way back.  So what do I know.

In on-premise data centers:
* the super-MS-centric crowd runs Hyper-V
* the "it has to be free" crowd runs KVM or Xen
* practically everybody else runs VMWare ESXi

In cloud it is different:
* Amazon and Google run KVM
* MS runs Hyper-V

Bottom line, I think, is that to get a significant number of new
customers on-premise VSI needs VMWare, and for cloud they need KVM.

Arne




Arne Vajhøj

unread,
Dec 9, 2019, 7:42:18 PM12/9/19
to
On 12/9/2019 5:33 PM, clair...@vmssoftware.com wrote:
> The big thing for the people I talk with is that VMS is different.
> The HW cost is certainly a big factor but even more important is that
> these people already run dozens, if not hundreds, of VMware guests.

Or thousands.

Or maybe even tens of thousands if they are big.

> They want VMS to be in there, too, possibly in many guests. The
> message is always the same, don't be different if you want to stay
> around.

Being different means extra cost.

Arne


Arne Vajhøj

unread,
Dec 9, 2019, 7:48:57 PM12/9/19
to
Even a relatively low-end and low-cost physical server can usually run
a lot of VM's today. Maybe 20 or 40 or 80 depending on size
and workload. Virtualization saves money even with high
VMWare license costs.

But it is not just about HW cost. There is also the flexibility.
In a virtual environment you can easily allocate more CPU and more
memory to a VM where workload is increasing. You can easily install
and start new VM's (minutes instead of waiting weeks for hardware
to arrive).

Arne






Grant Taylor

unread,
Dec 9, 2019, 8:34:04 PM12/9/19
to
On 12/9/19 5:48 PM, Arne Vajhøj wrote:
> Even a relative low end and low cost physical server can usually
> run a lot of VM's today. Maybe 20 or 40 or 80 depending on size and
> workload. Virtualization saves money even with high VMWare licensee
> costs.

VMs also offer a redundancy / hardware abstraction layer that hardware
can't match.

It's trivial to (re)start a VM on another host, even when the host it
was running on throws sparks and flames or otherwise turns into a pile
of slag.

Yes, you can have redundant physical systems for OpenVMS. But that has
scalability issues.

It's trivial to have multiple VMware hosts provide physical redundancy
for many VMs. So you end up with a much smaller number of machines
needed for the redundancy. Plus, it's relatively easy to get a new
VMware host and add it to a cluster without needing to change anything
about the guest OS.

Dave Froble

unread,
Dec 9, 2019, 10:22:51 PM12/9/19
to
I've mostly been a firm believer in the two rules of dealing with customers:

1) The customer is always right
2) When the customer is wrong, refer to rule #1

So giving customers what they want is smart business.

But, I have to ask, how do they run all those instances. It seems to me
that it would be an operations nightmare. I'm guessing they use SANs so
backup would not be such an issue. But other operations?

Guess I'm still lost in the 1980s ....

Grant Taylor

unread,
Dec 9, 2019, 11:37:23 PM12/9/19
to
On 12/9/19 8:24 PM, Dave Froble wrote:
> But, I have to ask, how do they run all those instances.  It seems to me
> that it would be an operations nightmare.  I'm guessing they use SANs so
> backup would not be such an issue.  But other operations?

It really depends what those instances are. I don't think it's
difficult to get 100 VMs on a single host, assuming it's got enough
resources and that the VMs are small enough. 10 such hosts in a cluster
is not a problem. That's 1,000 instances. More such hosts and VMs is
not difficult.

SAN LUNs are not a backup any more than shadow disks are a backup. Nor
is RAID a backup. Data corruption is a thing. You need actual backups
if you care about the data.

If the instances are Virtual Desktop Infrastructure that's accessed by
end users, then chances are good that the VDI instances are almost
cookie-cutter installs and can be replaced at a moment's notice. End
users are advised to save data in a location that's not part of the
VDI's disk that can get replaced.

Yes, there's operational headaches. But there's quite a bit of
automation to streamline that. If a VDI instance fails automation,
replace it with a new copy that passes automation.

IanD

unread,
Dec 10, 2019, 2:45:09 AM12/10/19
to
Dave, no shame in coming from the 80's; the only shame is refusing to even entertain a different world. You hardly seem like someone who refuses to look at the newer world, you just want proof that it's better, which is totally understandable.

I assume you downloaded and had a tinker with VirtualBox and saw how easy it was to spin up an OS, configure it, and play with it versus real hardware.

VMware is the same but on steroids. It has management functionality built in that allows the management of 1000's of instances. You can do funky things like create resource pools and manage multiple instances as though they were a single entity, etc.

You can do things like dynamically migrate a running instance to another VMware instance, or even another DC, all on the fly without having to reconfigure underlying network elements. You can see the immediate benefit here of continual operation, more continuous than a VMS node in a cluster, because a job running on a VMS node is tied to that node, which is ultimately tied to the physical machine. In VMware it's so virtualized that the machine itself can be migrated to another VMware instance while still running; every process running on that machine keeps running while it's migrated off to another VMware instance.

Bit of a lousy, rushed explanation, but hopefully you get the idea.

Docker, another technology, is making gains on VMware using container technology, but that is a topic for another day; I think OpenVMS may have to embrace containers at some point in time.

I really thought VMware was out of scope as far as OpenVMS was concerned, but it seems that I misunderstood. I think this is some of the biggest news yet, because without official VMware support (in time), OpenVMS in my opinion would have really struggled to get traction other than in environments where it was already accepted.

The fact that VMware is bothering to talk to VSI is pretty damn good news.

Dave Froble

unread,
Dec 10, 2019, 3:41:17 AM12/10/19
to
On 12/9/2019 11:37 PM, Grant Taylor wrote:
> On 12/9/19 8:24 PM, Dave Froble wrote:
>> But, I have to ask, how do they run all those instances. It seems to
>> me that it would be an operations nightmare. I'm guessing they use
>> SANs so backup would not be such an issue. But other operations?
>
> It really depends what those instances are. I don't think it's
> difficult to get 100 VMs on a single host, assuming it's got enough
> resources and that the VMs are small enough. 10 such hosts in a cluster
> is not a problem. That's 1,000 instances. More such hosts and VMs is
> not difficult.
>
> SAN LUNs are not a backup any more than shadow disks are a backup. Nor
> is RAID a backup. Data corruption is a thing. You need actual backups
> if you care about the data.

Yes, I understand all that. But backing up a SAN is probably much
easier than backing up 1000 stand alone systems.

> If the instances are Virtual Desktop Infrastructure that's accessed by
> end users, then chances are good that the VDI instances are almost
> cookie cutter installs and can be replaced at a moments notice. End
> users are advised to save data in a location that's not part of the
> VDI's disk that can get replaced.
>
> Yes, there's operational headaches. But there's quite a bit of
> automation to streamline that. If a VDI instance fails automation,
> replace it with a new copy that passes automation.

But what are the applications that can exist in such an environment?
That is where I get lost.

Dave Froble

unread,
Dec 10, 2019, 3:47:05 AM12/10/19
to
On 12/10/2019 2:45 AM, IanD wrote:
> Dave, no shame coming from the 80's, the only shame is to refuse to want to even entertain a different world, you hardly seem like someone who refuses to look at the newer world, you just want proof that it's better, totally understandable
>
> I assume you downloaded and had a tinker with virtualbox and saw how easy it was to spin up an OS and configure it and play with it versus real hardware

Yep. Played with VirtualBox. Was going to have an instance for various
old versions of WEENDOZE. Not all were so easy. Had problems with
networking. I will quite likely use VirtualBox for running x86 VMS.

> VMware is the same but on steroids. It has management functionally built in that allows the management of 1000's of instances. You can do funky thinks like create resource pools and manage multiple instance as though they were a single entity etc

I just don't see applications that I'm familiar with running in 1000
instances of the OS.

> You can do things like dynamically migrate a running instance to another VMware instance or even another DC all on the fly without having to reconfigure underlining network elements. You can see the immediate benefit here of continual operation, more continuous than a VMS node in a cluster because a job running on a VMS node is tied to that node which is ultimately tired to the physical machine, in VMware it's so visualised that the machine itself can be migrated to another VMware instance while still running, every process running on that machine keeps running while it's motioned off to another VMware instance
>
> But of a lousy rushed explanation but hopefully you get the idea
>
> Docker, another technology is making gains on VMware using a container technology but this is a topic for another day, I think OpenVMS may have to embrace contains at some point in time
>
> I really thought VMware was out of scope as far as OpenVMS was concerned but it seems that I misunderstood, I think this is some of the biggest news yet because without VMware official support (in time), OpenVMS in my opinion would have really struggled to get traction other than in environments where it was already accepted in
>
> The fact that VMware are bothering to talk to VSI is pretty dam good news
>

Yes, that's bottom line. I was rather happy to see that development.

Ian Miller

unread,
Dec 10, 2019, 4:26:29 AM12/10/19
to
Hi David, to run lots of VM systems requires massive automation, otherwise they would have to employ lots of sysadmins and that costs too much. With the automation you can create a VM with Linux+Oracle etc. at the press of a button in a very short time. All the VMs are monitored automatically, some fixes for problems are applied automatically, and so on. Running 10,000 systems in a data center with a few people is possible.

It is a strange new industrial big scale world. I mostly still handcraft VMS clusters from the finest ingredients so am a niche craftsman sitting in a corner while the young folk play with the shiny new industrial automated sysadmin toys.

Bob Gezelter

unread,
Dec 10, 2019, 7:05:33 AM12/10/19
to
Ian,

Perhaps it is a more subtle point. It is not so much a new world, as a world with a broader spectrum of possible choices and wider set of alternatives.

In the Windows/Linux world, many applications presume that they are on a dedicated instance. Attempting to run multiple applications in a single instance results in collisions, e.g., port numbers, serializations. Resolving such problems can be complex and time-consuming and requires the cooperation and assistance of outsiders. Using dedicated instances somewhat nullifies the problem by giving each their own "playpen" to operate within, which removes the issue of how well applications co-exist.
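(A toy illustration of the port-collision point above, not from the original post: two services that both assume they own the same port cannot coexist in one OS instance, which is one reason each gets its own "playpen". The port number is just an example.)

    # Hedged sketch of the collision described above: the second bind fails
    # with "address already in use", so each application is commonly given
    # its own (virtual) instance instead of sharing one.
    import socket

    def listen(port):
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.bind(("127.0.0.1", port))
        s.listen()
        return s

    first = listen(8080)           # first application claims the port
    try:
        second = listen(8080)      # second application assumes the same default
    except OSError as exc:
        print("second service cannot start:", exc)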

It is also possible to have similar problems with databases and other multi-client packages. Not everyone operates well in a shared namespace.

Testing is yet a third situation. Many times, I have needed to do an experiment with potential consequences. Better to do it on a disposable virtual instance (an approach I described in a blog article several years ago) than impact a longer lived instance. One starts with a disposable instance, and graduates to a less disposable environment after one has proven feasibility.

Similar arguments hold for training and proficiency. Crashing an inexpensive disposable instance is far less expensive than real hardware.

In production contexts, there are parallel arguments. However useful VM migration is, it is not a functional replacement for OpenVMS clusters. VM migration allows controlled workload migration; in the event of an uncontrolled system failure, e.g., complete power failure without warning or outright system destruction, there will not be sufficient time to execute a migration.

A wide range of possibilities. One size does not fit all and not all options are appropriate for any particular context.

- Bob Gezelter, http://www.rlgsc.com

Craig A. Berry

unread,
Dec 10, 2019, 10:14:12 AM12/10/19
to

On 12/9/19 7:34 PM, Grant Taylor wrote:

> VMs also offer a redundancy / hardware abstraction layer that hardware
> can't match.
>
> It's trivial to (re)start a VM on another host, even when the host it
> was running on throws sparks and flames or otherwise turns into a pile
> of slag.

Doesn't being able to do that depend on OS capabilities that quiesce
everything while the running instance is being moved? I wouldn't expect
a lot of the usual VM capabilities for OpenVMS instances on first
roll-out, but it would still be very nice to have it running on VMWare,
even if managing it is still mostly done the old-fashioned way for now.

Bob Gezelter

unread,
Dec 10, 2019, 10:55:24 AM12/10/19
to
Craig,

Your assertion with regards to the instance being effectively quiesced during migration is correct in concept, but it is incomplete.

A large part, if not the overwhelming majority, of memory is not writeable or is not actively being written at any given instant. Pending writes and I/O remain an issue. Block I/O on mass storage can be held at initiation, so those I/Os can be quiesced. Network I/O will appear as lost packets, recoverable by normal error recovery mechanisms. In effect, the situation is not particularly different from a traditional power failure/recovery interrupt.

Grant Taylor

unread,
Dec 10, 2019, 1:26:35 PM12/10/19
to
On 12/10/19 1:42 AM, Dave Froble wrote:
> But backing up a SAN is probably much easier than backing up 1000
> stand alone systems.

You don't "back up the SAN" per se.

Think of the SAN as really fancy and long SCSI cables that move the
disks out of the machine to somewhere else in the data center (or
possibly even world).

Backups have to be coordinated and in concert with the systems. They
may be taken from the SAN side, or over the SAN (Fibre Channel / iSCSI /
et al.) fabric. But the host is involved with the backups.

You can't realistically copy / snapshot the SAN without having any input
on the state of each (remote) disk, or LUN in SAN parlance, from each
and every system. — Well, you can, but it's likely to be worse than
crash consistency.
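(To make the "coordinated with the systems" point concrete: a minimal sketch of the quiesce / snapshot / resume sequence a backup tool performs per host. The function bodies are placeholders, not any particular SAN or hypervisor API.)

    # Hedged sketch of host-coordinated snapshots: the host flushes and pauses
    # writes, the array-side snapshot is taken, then writes resume. The calls
    # below only print; a real tool would invoke fsfreeze/VSS and the SAN API.
    from contextlib import contextmanager

    def quiesce(host):
        print(f"{host}: flushing buffers and pausing writes")

    def resume(host):
        print(f"{host}: resuming writes")

    def snapshot_lun(lun):
        print(f"snapshotting {lun} while its host is quiesced")

    @contextmanager
    def quiesced(host):
        quiesce(host)
        try:
            yield
        finally:
            resume(host)

    for host, lun in [("vm01", "lun-17"), ("vm02", "lun-18")]:
        with quiesced(host):       # host involvement makes the copy consistent
            snapshot_lun(lun)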

> But what are the applications that can exist in such an environment?
> That is where I get lost.

Virtual Desktop Infrastructure (VDI) is a good example. Instead of
having Windows (et al.) running on all the desktops and wasting
dedicated resources, you can run Windows in VMs with aggregate
resources. So, that accounts for a LOT of VMs if there are (were) a lot
of desktops that are now GUI dumb terminals.

As someone else pointed out, many applications are written with the
assumption that they are the only thing running on the system. So it's
quite common in the Windows (and sometimes Linux) world to have a system
per service. Or more likely multiple systems per service application
stack. So rather than having physical system sprawl, more of these
things are happening in VMs.

Grant Taylor

unread,
Dec 10, 2019, 1:32:41 PM12/10/19
to
On 12/10/19 5:05 AM, Bob Gezelter wrote:
> However useful VM migration, it is not a functional replacement for
> OpenVMS clusters. VM migration allows controlled workload migration,
> in the event of an uncontrolled system failure, e.g. complete
> power-failure without warning or system destruction. migration will
> not have sufficient time to execute a migration.

I don't completely agree with this.

VMware has a High Availability mode where the same VM / system image is
running concurrently on multiple disparate physical hosts. One of which
will be connected to the outside world. The other is disconnected and
receiving real time updates from the first. As in VMware is replicating
memory, processor, and disk state near real time. This means that when
one physical host falls out of the rack, the other physical host takes
over and continues running the VM with the exception that it is now
master and the VM is connected to the world. As such, even established
network connections continue on the alternate physical host.

VMware has, and apparently other hypervisors have, the ability to move
running VMs from one host to another host in a manner that is almost
imperceptible to clients. Packets don't drop. They may have a slight
increase in latency /during/ the transition. But it really looks like a
momentary congestion in a router buffer somewhere.

Grant Taylor

unread,
Dec 10, 2019, 1:38:35 PM12/10/19
to
On 12/10/19 8:14 AM, Craig A. Berry wrote:
> Doesn't being able to do that depend on OS capabilities that quiesce
> everything while the running instance is being moved?

Nope.

The hypervisor does it transparently.

The guest VMs have no idea that they are being moved, any more than
a physical box knows that it's being scooted across the floor while it's
running with long network & power cords.

> I wouldn't expect a lot of the usual VM capabilities for OpenVMS
> instances on first roll-out, but it would still be very nice to have
> it running on VMWare, even if managing it is still mostly done the
> old-fashioned way for now.

I'm of the opinion that there is very little perceptible difference
(that matters) between physical hardware and virtualized machines.

I say "that matters" because there are usually some ways to tell if
you're on a VM or a physical machine. This is usually related to the
textual descriptions of devices. Does the OS /really/ care that the
textual description is a Compaq Smart Array / DGD (whatever the old DEC /
Compaq SAN was) vs a VMware virtual disk?

So, if it works on physical hardware, I'd expect it has a very good
chance (> 80%) that it will work the same way on virtual hardware.

The biggest difference that I see (other than textual descriptions) are
the emulated devices. Different hypervisors emulate different SCSI
controllers or video cards. But if you have drivers for the respective
(physical / virtual) hardware, it shouldn't matter.

Grant Taylor

unread,
Dec 10, 2019, 1:44:01 PM12/10/19
to
On 12/10/19 8:55 AM, Bob Gezelter wrote:
> Your assertion with regards to the instance being effectively quiesced
> during migration is correct in concept, but it is incomplete.

Not necessarily.

> Large, if not the overwhelming majority of memory is not writeable or
> not actively being written at any given instant. Present write and
> I/O remain an issue. Block I/O initiation on mass storage and one
> can quiesce those I/Os.. Network I/O will appear as lost packets,

I've done a number of migrations without losing a single packet or
dropping any connections.

Usually the worst is that it appears as if latency increased from low
single digit ms to circa 10 ms for a packet or two before returning to
the low single digit ms.

> recoverable by normal error recovery mechanisms.

I very rarely see any hint that the guest OS or clients connected to it
have any idea that anything happened.

> In effect, the situation is not particularly different than a
> traditional power failure/recovery interrupt.

I think the situation is considerably different than a traditional power
failure / recovery event.

As described in a previous response, VMware's HA feature means that the
OS image continues running across multiple physical hosts. Nothing has
any idea that the original hardware is no longer being used.

Likewise with live migrations between hosts. I can move guest VMs from
host to host to host on any whim to my heart's content, and neither the
guest VM, nor any of its connected clients, will have any idea that I'm
playing games with it.

Andy Burns

unread,
Dec 10, 2019, 1:46:38 PM12/10/19
to
Grant Taylor wrote:

> VMware's HA feature means that the
> OS image continues running across multiple physical hosts.  Nothing has
> any idea that the original hardware is no longer being used.

That's the FT feature, not the HA feature (which cold boots a new
instance if the old one fails).

Craig A. Berry

unread,
Dec 10, 2019, 6:22:21 PM12/10/19
to

On 12/10/19 12:38 PM, Grant Taylor wrote:
> On 12/10/19 8:14 AM, Craig A. Berry wrote:
>> Doesn't being able to do that depend on OS capabilities that quiesce
>> everything while the running instance is being moved?
>
> Nope.
>
> The hypervisor does it transparently.
>
> The guest VMs that have no idea that they are being moved any more than
> a physical box knows that it's being scooted across the floor while it's
> running with long network & power cords.

So there is a perfect up-to-date mirror in hard real time of all of the
states of all of the devices and all these different states are
coordinated with each other? For example, if I'm in the middle of
changing my password when one of these transitions happens, the state of
this process on the new instance looks exactly like it was on the old
one regardless of whether the I/O is in a network buffer, heap memory,
file system buffer, on disk, or split among some combination of two or
more of the above?

If that's true it sounds impressive. But if the hypervisors are that
good it makes me wonder why Microsoft needed to spend megabucks on
making SQL Server and Windows work better under hypervisors (and I
believe similar efforts have gone into Linux).

Arne Vajhøj

unread,
Dec 10, 2019, 6:24:42 PM12/10/19
to
On 12/9/2019 10:24 PM, Dave Froble wrote:
> On 12/9/2019 7:42 PM, Arne Vajhøj wrote:
>> On 12/9/2019 5:33 PM, clair...@vmssoftware.com wrote:
>>> The big thing for the people I talk with is that VMS is different.
>>> The HW cost is certainly a big factor but even more important is that
>>> these people already run dozens, if not hundreds, of VMware guests.
>>
>> Or thousands.
>>
>> Or maybe even tens of thousands if they are big.
>>
>>> They want VMS to be in there, too, possibly in many guests. The
>>> message is always the same, don't be different if you want to stay
>>> around.
>>
>> Being different means extra cost.

> I've mostly been a firm believer in the two rules of dealing with
> customers:
>
> 1) The customer is always right
> 2) When the customer is wrong, refer to rule #1
>
> So giving customers what they want is smart business.

Yep.

> But, I have to ask, how do they run all those instances.  It seems to me
> that it would be an operations nightmare.

If the applications are not creating problems, then it is not bad.

Advanced monitoring and management tools ease the burden significantly.

And that is one thing one gets for what one is paying VMWare.

The VM to ops person ratio is pretty high today. For on-premise it is
probably in the high hundreds.

Arne

Dave Froble

unread,
Dec 10, 2019, 7:17:08 PM12/10/19
to
On 12/10/2019 1:26 PM, Grant Taylor wrote:
> On 12/10/19 1:42 AM, Dave Froble wrote:
>> But backing up a SAN is probably much easier than backing up 1000
>> stand alone systems.
>
> You don't "back up the SAN" per say.
>
> Think of the SAN as really fancy and long SCSI cables that move the
> disks out of the machine to somewhere else in the data center (or
> possibly even world).
>
> Backups have to be coordinated and in concert with the systems. They
> may be taken from the SAN side, or over the SAN (Fibre Channel / iSCSI /
> et al.) fabric. But the host is involved with the backups.
>
> You can't realistically copy / snapshot the SAN without having any input
> on the state of each (remote) disk, or LUN in SAN parlance, from each
> and every system. — Well, you can, but it's likely to be worse than
> crash consistency.
>
>> But what are the applications that can exist in such an environment?
>> That is where I get lost.
>
> Virtual Desktop Infrastructure (VDI) is a good example. Instead of
> having Windows (et al.) running on all the desktops and wasting
> dedicated resources, you can run Windows in VMs with aggregate
> resources. So, that accounts for a LOT of VMs if there are (were) a lot
> of desktops that are now GUI dumb terminals.

Ok, let's look at this.

What is a GUI dumb terminal? Most of the ones I see are rather cheap
desktop PCs using network storage. The only advantage I might see
running such in VMs is the ability to spin up a new VM with the OS and
apps ready to go. Still gonna need the "GUI dumb terminal".

Now, if the desktop user is doing something very CPU intensive, perhaps
the VM would give less performance?

Some apps can be very CPU and video intensive.

Of course, none of this matters. If the potential customers tell Clair
they want to run VMS in VMs, the smart money is to give the customer
what he wants, and collect lots of support money.

> As someone else pointed out, many applications are written with the
> assumption that they are the only thing running on the system.

All I'll say about that is that isn't how I learned to design apps.

> So it's
> quite common in the Windows (and sometimes Linux) world to have a system
> per service. Or more likely multiple systems per service application
> stack. So rather than having physical system sprawl, more of these
> things are happening in VMs.


--

Bob Gezelter

unread,
Dec 10, 2019, 7:27:20 PM12/10/19
to
Grant,

With all due respect, I want to see the fine-grain details on that implementation. Particularly the part about "packets do not drop". Ensuring granularity of file update is also quite a challenge.

There is a large difference between "rarely are packets lost" and "packets are never lost". Pre-loading other virtual instances and keeping their memory state updated is one thing; ensuring mass storage state is something else.

I will not even get into questions like the state of attached non-storage peripherals, e.g. RNGs.

My general advice is to deeply verify the precise nature of the implementation and its limitations before relying on it.

A while back, I was at a user group event where there was a presentation on VM migration. The speaker made a statement that failover migration would handle all cases. Being from New York City, I inquired about a scenario we had experienced a few years earlier.

A Boeing 767 doing between 150 and 200 knots comes through your machine room window. How long does it take to traverse the 24 inches between the front of the cabinet and the back of the cabinet? Even that scenario does not include the fact that the infrastructure connecting one VM host to another has likely been severed before the VM host frame is hit.

Alexander Schreiber

unread,
Dec 10, 2019, 7:40:08 PM12/10/19
to
Dave Froble <da...@tsoft-inc.com> wrote:
> On 12/9/2019 11:37 PM, Grant Taylor wrote:
>> On 12/9/19 8:24 PM, Dave Froble wrote:
>>> But, I have to ask, how do they run all those instances. It seems to
>>> me that it would be an operations nightmare. I'm guessing they use
>>> SANs so backup would not be such an issue. But other operations?
>>
>> It really depends what those instances are. I don't think it's
>> difficult to get 100 VMs on a single host, assuming it's got enough
>> resources and that the VMs are small enough. 10 such hosts in a cluster
>> is not a problem. That's 1,000 instances. More such hosts and VMs is
>> not difficult.
>>
>> SAN LUNs are not a backup any more than shadow disks are a backup. Nor
>> is RAID a backup. Data corruption is a thing. You need actual backups
>> if you care about the data.
>
> Yes, I understand all that. But backing up a SAN is probably much
> easier than backing up 1000 stand alone systems.

Two magic words: standardization & automation.

>> If the instances are Virtual Desktop Infrastructure that's accessed by
>> end users, then chances are good that the VDI instances are almost
>> cookie cutter installs and can be replaced at a moments notice. End
>> users are advised to save data in a location that's not part of the
>> VDI's disk that can get replaced.
>>
>> Yes, there's operational headaches. But there's quite a bit of
>> automation to streamline that. If a VDI instance fails automation,
>> replace it with a new copy that passes automation.
>
> But what are the applications that can exist in such an environment?
> That is where I get lost.

Virtual desktops are a thing. Your "desktop" no longer is a box under your
desk, but a VM in a datacenter somewhere. You connect to it via your laptop.
It runs a standard image. If it is badly broken, it gets automatically
reimaged on request with the standard image.

Saves power, hardware and more importantly: support costs.

And that is only one example.

Kind regards,
Alex.
--
"Opportunity is missed by most people because it is dressed in overalls and
looks like work." -- Thomas A. Edison

Alexander Schreiber

unread,
Dec 10, 2019, 7:40:08 PM12/10/19
to
Grant Taylor <gta...@tnetconsulting.net> wrote:
> On 12/10/19 1:42 AM, Dave Froble wrote:
>
>> But what are the applications that can exist in such an environment?
>> That is where I get lost.
>
> As someone else pointed out, many applications are written with the
> assumption that they are the only thing running on the system. So it's
> quite common in the Windows (and sometimes Linux) world to have a system
> per service. Or more likely multiple systems per service application
> stack. So rather than having physical system sprawl, more of these
> things are happening in VMs.

Yes, at one employer we used virtualization to compress an entire full
rack of machines into 4-6 physical machines while increasing redundancy.

Saved a lot of hardware, power and ops time (for fixing broken machines).

Alexander Schreiber

unread,
Dec 10, 2019, 7:40:08 PM12/10/19
to
Dave Froble <da...@tsoft-inc.com> wrote:
> On 12/9/2019 7:42 PM, Arne Vajhøj wrote:
>> On 12/9/2019 5:33 PM, clair...@vmssoftware.com wrote:
>>> The big thing for the people I talk with is that VMS is different.
>>> The HW cost is certainly a big factor but even more important is that
>>> these people already run dozens, if not hundreds, of VMware guests.
>>
>> Or thousands.
>>
>> Or maybe even tens of thousands if they are big.
>>
>>> They want VMS to be in there, too, possibly in many guests. The
>>> message is always the same, don't be different if you want to stay
>>> around.
>>
>> Being different means extra cost.
>>
>> Arne
>>
>>
>
> I've mostly been a firm believer in the two rules of dealing with customers:
>
> 1) The customer is always right
> 2) When the customer is wrong, refer to rule #1
>
> So giving customers what they want is smart business.
>
> But, I have to ask, how do they run all those instances. It seems to me
> that it would be an operations nightmare.

With the "every server is a lovingly hand-maintained snowflake" approach
this would be a total nightmare (and expensive in terms of people cost). Nobody
sane does this. Instead you are running a handful of system types with many
copies (e.g. web server, mail server, db server, app server type x, app
server type y, ..) and automate the living daylights out of turnup, turndown
and maintenance.

Alexander Schreiber

unread,
Dec 10, 2019, 7:40:09 PM12/10/19
to
Kerry Main <kemain...@gmail.com> wrote:
>> -----Original Message-----
>> From: Info-vax <info-vax...@rbnsn.com> On Behalf Of clair.grant---
>> via Info-vax
>> Sent: December 9, 2019 5:42 AM
>> To: info...@rbnsn.com
>> Cc: clair...@vmssoftware.com <clairg...@gmail.com>
>> Subject: Re: [Info-vax] VMware
>>
>> VBox and kvm are our committed platforms for the V9.0 EAK. We have done
>> debugging on both all along and what works on one works on the other.
>>
>> Over the past five years I have lost track of the number of times I have
> heard
>> the following. "We run our entire IT shop on VMware, except for VMS which
>> sits over in the corner on a very expensive piece of HW that everyone
> wants
>> to get rid of. Get VMS on VMware and you will have a long life with us."
>> That's why VMware is so important to us.
>>
>
> Absolutely - as I mentioned, VMware support would be huge.
>
> To put thinks in perspective, I would say that VMware has approx. 90+% of
> the hypervisor market, so their formal support is very important.

Of the "Hypervisor as a product that people are explicitly paying for"
market? Probably. Of the "Hypervisor as part of the basic infrastructure
that people aren't paying attention to" market (aka Cloud)? Not even a
detectable presence, I guess - AWS, Azure and GCP are _huge_ and not built
on VMware.

Arne Vajhøj

unread,
Dec 10, 2019, 7:43:30 PM12/10/19
to
On 12/10/2019 7:17 PM, Dave Froble wrote:
> On 12/10/2019 1:26 PM, Grant Taylor wrote:
>> On 12/10/19 1:42 AM, Dave Froble wrote:
>>> But what are the applications that can exist in such an environment?
>>> That is where I get lost.
>>
>> Virtual Desktop Infrastructure (VDI) is a good example.  Instead of
>> having Windows (et al.) running on all the desktops and wasting
>> dedicated resources, you can run Windows in VMs with aggregate
>> resources.  So, that accounts for a LOT of VMs if there are (were) a lot
>> of desktops that are now GUI dumb terminals.
>
> Ok, let's look at this.
>
> What is a GUI dumb terminal?  Most of the ones I see are rather cheap
> desktop PCs using network storage.  The only advantage I might see
> running such in VMs is the ability to spin up a new VM with the OS and
> apps ready to go.  Still gonna need the "GUI dumb terminal".
>
> Now, if the desktop user is doing something very CPU intensive, perhaps
> the VM would give less performance?
>
> Some apps can be very CPU and video intensive.

It is 23 years since I worked with desktop PC's, but my understanding
is that the main benefits of VDI are:
* easier management (lower support cost)
* users get the same Windows no matter what physical PC
they sit at

Arne

Arne Vajhøj

unread,
Dec 10, 2019, 7:55:13 PM12/10/19
to
On 12/10/2019 3:48 AM, Dave Froble wrote:
> On 12/10/2019 2:45 AM, IanD wrote:
>> Dave, no shame coming from the 80's, the only shame is to refuse to
>> want to even entertain a different world, you hardly seem like someone
>> who refuses to look at the newer world, you just want proof that it's
>> better, totally understandable
>>
>> I assume you downloaded and had a tinker with virtualbox and saw how
>> easy it was to spin up an OS and configure it and play with it versus
>> real hardware
>
> Yep.  Played with VirtualBox.  Was going to have an instance for various
> old versions of WEENDOZE.  Not all were so easy.  Had problems with
> networking.  I will quite likely use VirtualBox for running x86 VMS.

There are two somewhat different scenarios for virtualization:
1) guest OS in real OS host - this is used for developers to
run a few different OS on their development PC
2) guest OS in hypervisor host - this is used in production
with often a large number of VM's per physical box

>> VMware is the same but on steroids. It has management functionally
>> built in that allows the management of 1000's of instances. You can do
>> funky thinks like create resource pools and manage multiple instance
>> as though they were a single entity etc
>
> I just don't see applications that I'm familiar with running in 1000
> instances of the OS.

They do not need to be instances of the same application or OS.

You can have one ESXi server running VM's:
* Linux with Apache web server and PHP for company web site
* Linux with MySQL used by web site
* VMS with the business application
* Windows with Exchange for company email
* Windows for file shares and printers
* Windows for domain controller
etc.

If you need redundancy in a single data center then you will have
2 ESXi servers, both with all VM's (unless you use VMotion as discussed).

If you need redundancy in redundant data centers then you will
have 2 ESXi servers in each data center.

And that is still for a pretty small company.

For a larger company you may run many business applications.

Some applications may come with many different types of servers
like tradition web + app + DB.

And some server types may need multiple instances to handle
load - like 4 or 8 web servers.

It adds up quickly.

Arne

Arne Vajhøj

unread,
Dec 10, 2019, 8:03:41 PM12/10/19
to
On 12/10/2019 7:05 AM, Bob Gezelter wrote:
> On Tuesday, December 10, 2019 at 4:26:29 AM UTC-5, Ian Miller wrote:
>> On Tuesday, December 10, 2019 at 3:22:51 AM UTC, Dave Froble
>> wrote:
>>> But, I have to ask, how do they run all those instances. It
>>> seems to me that it would be an operations nightmare. I'm
>>> guessing they use SANs so backup would not be such an issue. But
>>> other operations?

>> Hi David, to run lots of VM systems requires massive automation
>> otherwise they have to employ lots of sysadmins and that costs to
>> much. With the automation you can create a VM with linux+oracle etc
>> at the press of a button in a very short time. All the VMs are
>> monitored automatically, some fixes for problems are applied
>> automatically and so on. Running 10,000 systems in a data center
>> with a few people is possible.
>>
>> It is a strange new industrial big scale world. I mostly still
>> handcraft VMS clusters from the finest ingredients so am a niche
>> craftsman sitting in a corner while the young folk play with the
>> shiny new industrial automated sysadmin toys.

> Perhaps it is a more subtle point. It is not so much a new world, as
> a world with a broader spectrum of possible choices and wider set of
> alternatives.
>
> In the Windows/Linux world, many applications presume that they are
> on a dedicated instance. Attempting to run multiple applications in a
> single instance results in collisions, e.g., port numbers,
> serializations. Resolving such problems can be complex and
> time-consuming and requires the cooperation and assistance of
> outsiders. Using dedicated instances somewhat nullifies the problem
> by giving each their own "playpen" to operate within, which removes
> the issue of how well applications co-exist.
>
> It is also possible to have similar problems with databases and other
> multi-client packages. Not everyone operates well in a shared
> namespace.

Most of these problems are really OS agnostic.

But the handling of the problems is different:

expensive physical box and expensive OS => you work out the configuration

virtual environment and cheap OS => you just spin up a new VM, and 5
minutes later everything is working (less work than finding the manual
to figure out how to change the default config)

Maybe slightly exaggerated. But the point is that the one-app-per-VM
philosophy is not because technically it has to be that way - it
is just because it is cheaper that way.

> Testing is yet a third situation. Many times, I have needed to do an
> experiment with potential consequences. Better to do it on a
> disposable virtual instance (an approach I described in a blog
> article several years ago) than impact a longer lived instance. One
> starts with a disposable instance, and graduates to a less disposable
> environment after one has proven feasibility.
>
> Similar arguments hold for training and proficiency. Crashing an
> inexpensive disposable instance is far less expensive than real
> hardware.

Yep.

Arne

Arne Vajhøj

unread,
Dec 10, 2019, 8:09:36 PM12/10/19
to
On 12/10/2019 2:45 AM, IanD wrote:
> Docker, another technology is making gains on VMware using a
> container technology but this is a topic for another day, I think
> OpenVMS may have to embrace contains at some point in time
Yep.

Container technology is reducing the use of VM's.

Kubernetes is probably the greatest threat to VMWare's business
(even though VMware of course try to get into that market as well).

Containers in VMS would be cool. But I also think it would require
a lot of work - a lot more work than what it takes to make
VMS run in VM's.

Even though VMS actually has some building blocks that
could be used to support containers.

Arne

Dave Froble

unread,
Dec 10, 2019, 8:59:33 PM12/10/19
to
I detest laptops, tablets, and smart phones. But that's me as a
developer. Not going to happen on devices with non-friendly input devices.

I do use a tablet as a book reader, and sometimes as a moving map
navigation device when flying. I admit that such devices do well for
users of technology.

> It runs a standard image. If it is badly broken, it gets automatically
> reimaged on request with the standard image.

A powerful argument.

> Saves power, hardware and more importantly: support costs.
>
> And that is only one example.

But you still got the laptop, desktop, or other user interface.

Dave Froble

unread,
Dec 10, 2019, 9:07:49 PM12/10/19
to
A huge difference. Not an insurmountable problem, though, when communications
are structured such that a transaction is not considered complete until
verified as such by the receiver. Also, having complete re-start of
transactions built into the apps makes this problem much smaller.

Everyone does that, right?

:-)

> I will not even get into questions like the state of attached
> non-storage peripherals, e.g. RNGs.
>
> My general advice is to deeply verify the precise nature of the
> implementation and its limitations before relying on it.

Agreed.

> A while back, I was at an user group event where there was a
> presentation on VM migration. The speaker made a statement that
> failover migration would handle all cases. Being from New York City,
> I inquired about a scenario we had experienced a few years earlier.

Reminds me of the young lady who declared that when I send out an
inquiry over the internet, I would ALWAYS get a reply. I casually
mentioned backhoe operators, communication failures, and her tripping
over the power cable, again.

:-)

> A Boeing 767 doing between 150 and 200 knots comes through your
> machine room window. How long does it take to traverse the 24 inches
> between front of the cabinet and the back of the cabinet. Even that
> scenario does not include the fact that the infrastructure connecting
> one VM host to another has likely been severed before the VM host
> frame is hit.

Don't you just hate it when the real world intrudes ....

Arne Vajhøj

unread,
Dec 10, 2019, 9:17:29 PM12/10/19
to
But the frontend becomes the equivalent of an X terminal.

Arne

Kerry Main

unread,
Dec 10, 2019, 9:20:05 PM12/10/19
to comp.os.vms to email gateway
Most of what was stated in this thread is true i.e. virtualization has many
benefits over most physical server hardware - especially in a Dev/Test world
where spinning up the same baseline of an instance is important.

Keep in mind that Microsoft and Red Hat love this model because they get
paid monthly support and OS licenses (MS) on a per instance basis. They
could not care less if the OS instance was P or V.

The same will be true for OpenVMS.

If a company is running OpenVMS in a VM instance on VMware/KVM/Xen/???, and
they decide to quickly spin up several OpenVMS Dev instances, then VSI will
require support and maybe license costs (depending on V9+ support model) on
a per instance basis. That is great for VSI as well.

Having stated this, there is a lot of practical reality that, unless
you are in a larger shop, most people may not realize presents real challenges.

Yes, you can put a large number of VM's on a single server, but you also
need humungous amounts of physical memory on that server which adds huge
costs to each physical server. I have seen quotes for enterprise ProLiant's
with 1-2 TB's of memory in the $250,000 range - per server! And then the
VMware and any COTS application license costs are in addition to this.

Also, keep in mind that if using COTS products that are licensed per core,
then you definitely want to keep that COTS product on its own small VMware
cluster - just ask those who deploy Oracle on VMware.

Yes, VMware (and its equivalent competition) solved the issue of the wild
west of the 80's and 90's where every dept had their own local servers i.e.
server sprawl.

Now, however, large companies have the problem of VM sprawl. Where they
might have managed 40-80 physical servers before, they now manage hundreds
of separate VM's. A common question from IT C levels is "how come our IT
costs keep going up even though we consolidated all the physical servers we
had before?"

In addition, being able to spin up VM's quickly is great if the application
is architected such that the workload can be spread across many servers.
While this may be true for new Apps written in the last 10-15 years, the
reality is that the vast majority of legacy and COTS apps in enterprise DC's
are not architected in such a way.

The other issue with VM sprawl is that of license costs for such things as
licensing backup agents, AV agents (do you really want to ignore AV
issues?), log file monitoring (for those who care about per instance
security monitoring), scheduler agents, service desk integration (smart
ticketing) and other host based management and monitoring (M&M) agents.

Sure, if the company wants its senior developers to spend a great deal of
their time developing freeware solutions to the M&M challenges above, then
these per instance M&M costs can be mitigated somewhat.

If the company would prefer to have its senior developers adding
functionality which adds value to its core applications, then that company
is more likely to be of the frame of mind to simply use commercial agents
(they still require integration, but typically less so than freeware
options) where they have a single throat to choke.

In terms of building new Apps which are heavily distributed to take
advantage of many, many VM's, these are not without its challenges as well
in terms of data caching, consistency, coherency, complexity etc.

Certainly there are some ways to address these challenges (just ask google
and FB), but as others have stated here - it is not a slam dunk as to what
model is best suited to address any given set of requirements.

A summary of building highly distributed app challenges can be seen in this
article from 2015:
"Making the Case for Building Scalable Stateful Services in the Modern Era"
<http://highscalability.com/blog/2015/10/12/making-the-case-for-building-scalable-stateful-services-in-t.html>


Regards,

Kerry Main
Kerry dot main at starkgaming dot com






Grant Taylor

unread,
Dec 10, 2019, 9:35:31 PM12/10/19
to
On 12/10/19 4:22 PM, Craig A. Berry wrote:
> So there is a perfect up-to-date mirror in hard real time of all of
> the states of all of the devices and all these different states are
> coordinated with each other?

For supported (read: virtualized / emulated) devices, yes.

What is state, other than the contents of memory and CPU registers
(which are themselves another memory)?

Said state is replicated from the primary host to the secondary host in
very close to real time. When a memory write happens on one system, the
same write (or delta) is pushed to the secondary system.

> For example, if I'm in the middle of changing my password when one
> of these transitions happens, the state of this process on the new
> instance looks exactly like it was on the old one regardless of whether
> the I/O is in a network buffer, heap memory, file system buffer,
> on disk, or split among some combination of two or more of the above?

My working understanding of migrations (vmotions in VMware parlance) is
that the source host starts replicating memory to the destination host.
It keeps track of what has and has not been replicated. (The guest is
still running.) If the guest writes to a page that has been replicated,
the page is considered dirty and needs to be re-replicated. Eventually the
source host will be down to only pages that are changing extremely
quickly. So the source host pauses the VM and replicates the remaining
memory pages. CPU state is also replicated. Once that is done the
destination host resumes the guest VM. The guest has no real idea that
anything has happened. The password change that you were in the middle
of is partially executed on one host, paused, transferred, and resumed
on the next host.
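
Roughly, in toy Python (invented names and a random stand-in for guest
writes, not VMware's actual code), the pre-copy loop described above
looks like this:

import random

def precopy_migrate(src_mem, dst_mem, dirty_rate=0.05, stop_at=8, max_rounds=30):
    # Round 1: treat every guest page as dirty and copy while the guest runs.
    dirty = set(src_mem)
    rounds = 0
    while len(dirty) > stop_at and rounds < max_rounds:
        for page in dirty:
            dst_mem[page] = src_mem[page]
        # Pages the still-running guest wrote during this round are dirty
        # again; here a random sample stands in for real guest writes.
        dirty = {p for p in src_mem if random.random() < dirty_rate}
        rounds += 1
    # "Stun" the guest: pause it, copy the small remaining dirty set plus
    # CPU state, then resume on the destination.
    for page in dirty:
        dst_mem[page] = src_mem[page]
    return rounds, len(dirty)

src = {n: bytes(16) for n in range(4096)}   # pretend guest with 4096 pages
dst = {}
print(precopy_migrate(src, dst))            # (rounds taken, pages copied while paused)

The point is that the expensive bulk copy happens while the guest keeps
running; the pause only has to cover whatever is still dirty at the end.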

The overall transfer takes time, but the guest VM is running during the
vast majority of it. The guest VM is paused for a very brief period of
time, where very brief is likely on the order of a few hundredths of a
second, if that.

This type of migration has been standard operating procedure to the
point that it's an old hat trick. (Purportedly even the free KVM can do
this.)

It is so SoP / routine that it is common to run software to monitor load
on VMware hosts and automatically move VMs around between hosts,
particularly if you have one demanding more CPU for some reason.
Automation will move other guests off of that host to give the hog more
resources. (Within reason.) Or optionally, move the hog to a less heavily
loaded system.

It's so common that VMware doesn't care about vmotions.

> If that's true it sounds impressive. But if the hypervisors are that
> good it makes me wonder why Microsoft needed to spend megabucks on
> making SQL Server and Windows work better under hypervisors (and I
> believe similar efforts have gone into Linux).

In a word, "Optimization". They spent time, effort, and money to alter
how Windows / Linux work to make them even nicer under virtualization.
You can hot add CPUs and memory to guests if the OS supports it.
Similarly, you can hot remove CPUs and memory from guests.

So, much like you can move VMs around to balance load, it's also
possible to move CPU and memory between guests to best "sweat the
assets" as is commonly said.

I believe there was also effort to make Windows (and likely Linux) more
clearly expose if a memory page was dirty or not. The idea being that
multiple VMs running the same version of the OS, kernel, libraries,
etc., can have their memory pages deduplicated. Thus further saving
memory and making it easier for a hypervisor to know the current state
of things without needing to understand the nuance of each OS's kernel.
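
A toy sketch of that page-sharing idea (hypothetical names, nothing to do
with ESXi internals): hash each guest page and keep only one physical copy
per unique content.

import hashlib

def share_pages(guests):
    """guests: dict of guest name -> list of page contents (bytes)."""
    store = {}      # content digest -> the single physical copy kept
    mapping = {}    # (guest, page index) -> digest of the shared copy
    for name, pages in guests.items():
        for i, page in enumerate(pages):
            digest = hashlib.sha256(page).hexdigest()
            store.setdefault(digest, page)   # identical pages stored once
            mapping[(name, i)] = digest
    return store, mapping

guests = {
    "vm1": [b"kernel" * 512, b"libc" * 512, b"app-a" * 512],
    "vm2": [b"kernel" * 512, b"libc" * 512, b"app-b" * 512],
}
store, _ = share_pages(guests)
print(len(store), "physical pages backing 6 guest pages")   # prints 4

A real hypervisor also has to break the sharing (copy-on-write) the moment
either guest writes to a shared page, which is part of why exposing
clean/dirty state from the guest helps.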

These are (some of) the optimizations that have gone into Windows and
Linux to make them more friendly to virtualization. The changes weren't
done because they /needed/ to be done to make the OS compatible. They
were done /opportunistically/ to allow better use of the hardware.

Arne Vajhøj

unread,
Dec 10, 2019, 9:44:30 PM12/10/19
to
On 12/10/2019 9:15 PM, Kerry Main wrote:
> Yes, you can put a large number of VM's on a single server, but you also
> need humungous amounts of physical memory on that server which adds huge
> costs to each physical server. I have seen quotes for enterprise ProLiant's
> with 1-2 TB's of memory in the $250,000 range - per server!

If they don't need HP branded memory they can find third party memory
for ProLiant for about 10 dollars per GB. That is 10K-20K for 1-2 TB.

(BTW I think the DL580 prefers 1.5 or 3 TB.)

> Also, keep in mind that if using COTS products that are licensed per core,
> then you definitely want to keep that COTS product on its own small VMware
> cluster - just ask those who deploy Oracle on VMware.

Most COTS vendors charge per VCPU allocated to the VM.

Only a few charge per core in the physical server.

> Now, however, large companies have the problem of VM sprawl. Where they
> might have managed 40-80 physical servers before, they now manage hundreds
> of separate VM's. A common question from IT C levels is "how come our IT
> costs keep going up even though we consolidated all the physical servers we
> had before?"

True.

But their cost would have totally exploded if they had had the same
growth of instances with physical servers.

Arne

Grant Taylor

unread,
Dec 10, 2019, 9:45:26 PM12/10/19
to
On 12/10/19 5:17 PM, Dave Froble wrote:
> What is a GUI dumb terminal?

It is a dumb terminal, as in there is (effectively) no local processing
/ networking / I/O. Everything is done on a remote system. The only
thing that is done locally is to provide a display, receive input from
keyboard and mouse, connect some local peripherals (think USB flash
drive), and optionally play sound locally.

A GUI dumb terminal is a graphic counterpart of a VT100. (Feel free to
pick a different VT model if you prefer.)

> Most of the ones I see are rather cheap desktop PCs

It is common to re-purpose old PCs as (GUI) dumb terminals. It doesn't
take much to run an RDP client / Citrix client / X11 "client" (a "server"
in X11 parlance), etc.

It is also quite possible to use a $40 Raspberry Pi, or comparable, as a
(GUI) dumb terminal.

> using network storage.

What does "network storage" mean in this context? Is it NAS (SMB / NFS
/ NCP / etc.) or SAN (FC / iSCSI / etc.)?

I'm not accustomed to (GUI) dumb terminals utilizing any of these options.

> The only advantage I might see running such in VMs is the ability to
> spin up a new VM with the OS and apps ready to go. Still gonna need
> the "GUI dumb terminal".

Yes, you still need the (GUI) dumb terminal.

But this can be re-purposed old computers, or it can be really
inexpensive single board computers.

> Now, if the desktop user is doing something very CPU intensive, perhaps
> the VM would give less performance?

On the contrary, I expect that the VM would give better performance.

Most desktops, particularly the ones being re-purposed, are CPU and / or
memory and / or disk bound. Conversely, the VMware hosts have a MASSIVE
amount of memory and extremely fast network and storage.

As such, it's quite likely that the task will run faster and better in a
VDI type environment than on a fat client.

> Some apps can be very CPU and video intensive.

Yes.

I expect that one of the most video intensive is Google's new Stadia
product which runs games in the cloud and sends the video down to the
client.

Yes, VDI has gotten good enough that high demand graphic games can be
run in the cloud.

> Of course, none of this matters.  If the potential customers tell Clair
> they want to run VMS in VMs,  The smart money is to give the customer
> what he wants, and, collect lots of support money.

Sure.

> All I'll say about that is that isn't how I learned to design apps.

I completely agree. Unfortunately, that's not been what I've
experienced over the last 20 years.

Sadly, the idea of having applications play well with other apps has
been the exception instead of the norm for the last 15+ years.

Grant Taylor

unread,
Dec 10, 2019, 10:00:27 PM12/10/19
to
On 12/10/19 5:27 PM, Bob Gezelter wrote:
> With all due respect, I want to see the fine-grain details on
> that implementation.

Please see my reply from ~ 7:35. (Adjust hour accordingly for your time
zone.)

I think that's about as granular as I can get without going and looking
things up.

I have no problem with you wanting to see the fine-grain details. I
asked very similar questions 10+ years ago. Hence why I have the
understanding that I do, and why I now only retain it at a high level of detail.

> Particularly the part about "packets do not drop".

I've routinely moved VMs between hosts without dropping packets. I do
see latency at the epoch of the transition increase momentarily (usually
just one packet). But the packet does make it through and is not dropped.

Frequently latency is something like this:

1–3 ms
1–3 ms
1–3 ms
9–12 ms
1–3 ms
1–3 ms
1–3 ms

No packet drop.

TCP sessions continue without retransmissions.

> Ensuring granularity of file update is also quite a challenge.

Why? (Please see my other message about what happens.)

> There is a large difference between "rarely are packets lost" and
> "packets are never lost". Pre-loading other virtual instances and
> keeping their memory state updated is one thing, ensuring mass
> storage state is something else.

All hosts in the cluster have access to the same storage. So anything
written on one host is readable by other hosts. Part of the migration
ensures that cached data is synced to disk and / or copied as part of
the memory for the system.

So there's no "mass storage state" to keep in sync because it is the
same back end storage.

> I will not even get into questions like the state of attached
> non-storage peripherals, e.g. RNGs.

Those would be the types of things that would prevent migration between
hosts.

Though, I think that VMware has an option to allow USB peripherals to be
used across the network.

If not VMware, there are other OS level solutions to allow some
peripherals to be used across the network.

I've personally used remote (TCP based) serial ports for fax servers.
The modem is physically connected to a network attached DigiBoard (or
the like) and the VM is free to move from host to host to host because
its TCP connection to the serial port is still intact.

Given that faxing is time sensitive serial audio / data (depending on
the modem) there may be an issue with the momentary increased latency.
I don't know if that would ride through a migration or if it would rely
on error detection and correction in the modem / fax level.

> My general advice is to deeply verify the precise nature of the
> implementation and its limitations before relying on it.

I think that's a wonderful idea.

> A while back, I was at an user group event where there was a
> presentation on VM migration. The speaker made a statement that
> failover migration would handle all cases. Being from New York City,
> I inquired about a scenario we had experienced a few years earlier.

~chuckle~

Absolutes are usually a problem in one way or another. ;-)

> A Boeing 767 doing between 150 and 200 knots comes through your machine
> room window. How long does it take to traverse the 24 inches between
> front of the cabinet and the back of the cabinet. Even that scenario
> does not include the fact that the infrastructure connecting one
> VM host to another has likely been severed before the VM host frame
> is hit.

I think that's a valid question. I think it's an EXTREMELY ATYPICAL
failure scenario. But it is decidedly within the "all cases" absolute
the speaker set themselves up for.

I think that would be very difficult to protect against.

I would question, what about a data center in an adjacent building that
you can extend the LAN / SAN / etc. into. Though it could also
experience a similar problem (fate sharing).

When you start talking about failures that can take out multiple
buildings in close proximity to each other, you REALLY need an EXTREMELY
robust solution.

I do think that VMware has some solutions that can work over extended
distances.

Grant Taylor

unread,
Dec 10, 2019, 10:09:52 PM12/10/19
to
On 12/10/19 7:00 PM, Dave Froble wrote:
> I detest laptops, tablets, and smart phones.  But that's me as a
> developer.  Not going to happen on devices with non-friendly input devices.

How much of your preferred workstation is anything other than something
that the keyboard, mouse / trackball / etc., monitor, speakers, and
possibly other USB accessories plug into?

What would you say if there was a possibility that you could have the
same peripherals connected to a GUI dumb terminal, and you had the same
(or better) performance as you have now?

Now what if that pending memory upgrade was as simple as a VDI config
change and a reboot, if the reboot was even necessary.

> I do use a tablet as a book reader, and sometimes as a moving map
> navigation device when flying.  I admit that such devices do well for
> users of technology.

That sounds to me like what I describe as devices used to consume things.

The workstation above is decidedly a production device.

> But you still got the laptop, desktop, or other user interface.

Yes and no.

You need a device to connect things to. But nothing states what form
factor it must take or what peripherals must be used.

Dave Froble

unread,
Dec 11, 2019, 1:09:49 AM12/11/19
to
This all sounds really great. As I may have mentioned, I've not been
involved with VMs in the past. In some ways, though not identical, it sounds
a bit like Galaxy.

Doesn't sound like something that would support real-time, but, that
doesn't seem to be the market.

> This type of migration has been standard operating procedure to the
> point that it's an old hat trick. (Purportedly even the free KVM can do
> this.)
>
> It is so SoP / routine that it is common to run software to monitor load
> on VMware hosts and automatically move VMs around between hosts,
> particularly if you have one demanding more CPU for some reason.
> Automation will move other guests off of that host to give the hog more
> resources. (Within reasons.) Or optionally, move the hog to a lesser
> loaded system.
>
> It's so common that VMware doesn't care about vmotions.
>
>> If that's true it sounds impressive.

Yes, very impressive.

Except when I tried to test VMware, it told me my network device was not
supported. That wasn't so impressive.

>> But if the hypervisors are that
>> good it makes me wonder why Microsoft needed to spend megabucks on
>> making SQL Server and Windows work better under hypervisors (and I
>> believe similar efforts have gone into Linux).
>
> In a word, "Optimization". They spent time, effort, and money to alter
> how Windows / Linux work to make them even nicer under virtualization.
> You can hot add CPUs and memory to guests if the OS supports it.
> Similarly, you can hot remove CPUs and memory from guests.
>
> So, much like you can move VMs around to balance load, it's also
> possible to move CPU and memory between guests to best "sweat the
> assets" as is commonly said.
>
> I believe there was also effort to make Windows (and likely Linux) more
> clearly expose if a memory page was dirty or not. The idea being that
> multiple VMs running the same version of the OS, kernel, libraries,
> etc., can have their memory pages deduplicated. Thus further saving
> memory and making it easier for a hypervisor to know the current state
> of things without needing to understand the nuance of each OSs kernel.
>
> These are (some of) the optimizations that have gone into Windows and
> Linux to make them more friendly to virtualization. The changes weren't
> done because they /needed/ to be done to make the OS compatible. They
> were done /opportunistically/ to allow better use of the hardware.
>
>
>


--

Dave Froble

unread,
Dec 11, 2019, 1:20:26 AM12/11/19
to
On 12/10/2019 9:15 PM, Kerry Main wrote:
You bet, a rather good thing.

> If a company is running OpenVMS in a VM instance on VMware/KVM/Xen/???, and
> they decide to quickly spin up several OpenVMS Dev instances, then VSI will
> require support and maybe license costs (depending on V9+ support model) on
> a per instance basis. That is great for VSI as well.

I may have mentioned before that I'm hoping VSI gets their revenue from
support, not licenses. Needing to purchase a license most likely will
have a negative effect on "spinning up" more VMS instances. I'm
guessing the current VM users are already willing to cough up support fees.
Let's not confuse them. Keep it simple, and what they are used to.

> Having stated this, there is a lot of practical reality though that unless
> you are in a larger shop, most people may not realize are real challenges.
>
> Yes, you can put a large number of VM's on a single server, but you also
> need humungous amounts of physical memory on that server which adds huge
> costs to each physical server. I have seen quotes for enterprise ProLiant's
> with 1-2 TB's of memory in the $250,000 range - per server! And then the
> VMware and any COTS application licenses costs are in addition to this.
>
> Also, keep in mind that if using COTS products that are licensed per core,
> then you definitely want to keep that COTS product on its own small VMware
> cluster - just ask those who deploy Oracle on VMware.
>
> Yes, VMware (and its equivalent competition) solved the issue of the wild
> west of the 80's and 90's where every dept had their own local servers i.e.
> server sprawl.
>
> Now, however, large companies have the problem of VM sprawl. Where they
> might have managed 40-80 physical servers before, they now manage hundreds
> of separate VM's. A common question from IT C levels is "how come our IT
> costs keep going up even though we consolidated all the physical servers we
> had before?"

Because you're doing more ???

> In addition, being able to spin up VM's quickly is great if the application
> is architected such that the workload can be spread across many servers.
> While this may be true for new Apps written in the last 10-15 years, the
> reality is that the vast majority of legacy and COTS apps in enterprise DC's
> are not architected in such a way.

Works for me.

> The other issue with VM sprawl is that of license costs for such things as
> licensing backup agents, AV agents (do you really want to ignore AV
> issues?), log file monitoring (for those who care about per instance
> security monitoring), scheduler agents, service desk integration (smart
> ticketing) and other host based management and monitoring (M&M) agents.
>
> Sure, if the company wants its senior developers to spend a great deal of
> their time developing freeware solutions to the M&M challenges above, then
> these per instance M&M costs can be mitigated somewhat.
>
> If the company would prefer to have its senior developers adding
> functionality which adds value to its core applications, then that company
> is more likely to be of the frame of mind to simply use commercial agents
> (they still require integration, but typically less so than freeware
> options) where they have a single throat to choke.
>
> In terms of building new Apps which are heavily distributed to take
> advantage of many, many VM's, these are not without its challenges as well
> in terms of data caching, consistency, coherency, complexity etc.

Good software architects will design apps to avoid the pitfalls.

> Certainly there are some ways to address these challenges (just ask google
> and FB), but as others have stated here - it is not a slam dunk as to what
> model is best suited to address any given set of requirements.

That's for damn sure.

David Wade

unread,
Dec 11, 2019, 5:04:28 AM12/11/19
to
It's been a while, but as far as I know Microsoft does not do "per instance"
licences for server OS's on your own hardware. Enterprise per-core
licences allow unlimited OS instances.

Standard Server OS licences are tied to the physical hardware so you need
one per physical server, meaning that this isn't typically a useful model.

I am not sure about RedHat licensing but that seemed really expensive
when we only had one server.

Oracle licencing was also a pain in the behind....

> The same will be true for OpenVMS.
>
> If a company is running OpenVMS in a VM instance on VMware/KVM/Xen/???, and
> they decide to quickly spin up several OpenVMS Dev instances, then VSI will
> require support and maybe license costs (depending on V9+ support model) on
> a per instance basis. That is great for VSI as well.

As I said Microsoft on an internal farm do not require per instance
licences. VSI are free to define their own model. Getting a balance
between models will be hard as it's a speciality.

>
> Having stated this, there is a lot of practical reality though that unless
> you are in a larger shop, most people may not realize are real challenges.
>
> Yes, you can put a large number of VM's on a single server, but you also
> need humungous amounts of physical memory on that server which adds huge
> costs to each physical server. I have seen quotes for enterprise ProLiant's
> with 1-2 TB's of memory in the $250,000 range - per server! And then the
> VMware and any COTS application licenses costs are in addition to this.
>

Again getting the right server size is more challenging in a Virtual
environment. I prefer smaller servers because that allows more
granularity in the environment. So having a $100K server means when you
want to grow you can do it in $100k chunks; with a $200k server then it's
a $200k step. Assuming you want resilience and you think one spare
server is enough, with the $100k server that's $100k sat in reserve...

... of course that's not the only consideration. Network and SAN
connectivity may also impact on costs so for any organization the
sweetspot may be hard to find...


> Also, keep in mind that if using COTS products that are licensed per core,
> then you definitely want to keep that COTS product on its own small VMware
> cluster - just ask those who deploy Oracle on VMware.
>
> Yes, VMware (and its equivalent competition) solved the issue of the wild
> west of the 80's and 90's where every dept had their own local servers i.e.
> server sprawl.
>

Yes SQL server has the same problems as Oracle. We used to have a
separate smaller farm for SQL server...

> Now, however, large companies have the problem of VM sprawl. Where they
> might have managed 40-80 physical servers before, they now manage hundreds
> of separate VM's. A common question from IT C levels is "how come our IT
> costs keep going up even though we consolidated all the physical servers we
> had before?"
>

Yes...

> In addition, being able to spin up VM's quickly is great if the application
> is architected such that the workload can be spread across many servers.
> While this may be true for new Apps written in the last 10-15 years, the
> reality is that the vast majority of legacy and COTS apps in enterprise DC's
> are not architected in such a way.
>

which drives you towards larger servers

> The other issue with VM sprawl is that of license costs for such things as
> licensing backup agents, AV agents (do you really want to ignore AV
> issues?), log file monitoring (for those who care about per instance
> security monitoring), scheduler agents, service desk integration (smart
> ticketing) and other host based management and monitoring (M&M) agents.
>
> Sure, if the company wants its senior developers to spend a great deal of
> their time developing freeware solutions to the M&M challenges above, then
> these per instance M&M costs can be mitigated somewhat.
>

Freeware solutions are great but the long term costs are challenging

> If the company would prefer to have its senior developers adding
> functionality which adds value to its core applications, then that company
> is more likely to be of the frame of mind to simply use commercial agents
> (they still require integration, but typically less so than freeware
> options) where they have a single throat to choke.
>
> In terms of building new Apps which are heavily distributed to take
> advantage of many, many VM's, these are not without its challenges as well
> in terms of data caching, consistency, coherency, complexity etc.
>
> Certainly there are some ways to address these challenges (just ask google
> and FB), but as others have stated here - it is not a slam dunk as to what
> model is best suited to address any given set of requirements.
>
> A summary of building highly distributed app challenges can be seen in this
> article from 2015:
> "Making the Case for Building Scalable Stateful Services in the Modern Era"
> <http://highscalability.com/blog/2015/10/12/making-the-case-for-building-scalable-stateful-services-in-t.html>
>
>

I would also say that it's much easier to do proper Disaster Recovery
with VMs. I note in another thread someone cited the 767 through the
window. Well, having worked in an office on the Manchester (UK) approaches,
where an A380 flies in twice a day and where we were about 600 yards
from the site of a previous (1960's) air crash, we took DR seriously.

There is an off-site duplicated SAN. It's not lock-stepped but it's a
whole load better than the previous solution with daily tapes....


> Regards,
>
> Kerry Main
> Kerry dot main at starkgaming dot com
>
>
Dave Wade

Bob Gezelter

unread,
Dec 11, 2019, 8:32:12 AM12/11/19
to
Grant,

Your post proves my point.

I do not disagree that within the context of "controlled" VM migration between hosts, it is possible to accomplish the migration without loss of packets or I/O inconsistency.

It is the uncontrolled case to which I referred.

Of course, in the controlled case, the connection to the switch can be blocked/queued AND acknowledged to prevent packet(s) from being caught during the transition. Alternatively, the MAC address can be changed and the packets queued at the new host. A similar argument applies to I/O. In a controlled case, active I/O can be completed before the transfer.

Otherwise, one needs facilities not present in x86 (e.g., lock-step execution as was implemented on some fault tolerant architectures in the past). As an example, modern hardware RNGs make precise execution profiles on modern systems unlikely.

Andy Burns

unread,
Dec 11, 2019, 8:41:42 AM12/11/19
to
Grant Taylor wrote:

> I've routinely moved VMs between hosts without dropping packets.

If I leave a ping running to a guest during vmotion, I would say it's
50:50 whether I see a single missed reply.

Simon Clubley

unread,
Dec 11, 2019, 8:48:01 AM12/11/19
to
On 2019-12-10, Grant Taylor <gta...@tnetconsulting.net> wrote:
>
> When you start talking about failures that can take out multiple
> buildings in close proximity to each other, you REALLY need an EXTREMELY
> robust solution.
>

That's what you get with VMS clusters (if you are prepared to pay the
licencing costs.)

Updates across cluster nodes 10s of kilometres apart happen in real time
(not in _near_ real time, but actual real time) and with every VMS
cluster node being in an active configuration, not in some active/passive
configuration.

If something takes out one site without warning then your VMS cluster
works out by itself that the site has gone, automatically drops the
now missing cluster nodes and reconfigures itself to carry on normal
operations without any completed I/Os being lost, even if they were
completed the immediate instant before disaster struck.

It doesn't matter if it's a database I/O or some text file that was
changed with an editor. If you are working with VMS cluster storage,
it's safe even if you were working on the node that just got destroyed.

Unfortunately, the last time I saw prices for this, it cost stupid
money for the clustering licences.

Simon.

--
Simon Clubley, clubley@remove_me.eisner.decus.org-Earth.UFP
Walking destinations on a map are further away than they appear.

David Wade

unread,
Dec 11, 2019, 9:02:21 AM12/11/19
to
Bob,

Other than the hypervisor, nothing under VMWARE runs as a real CPU
process. It all runs using the virtual assists. The OS drivers are not
talking to real hardware, they are talking via emulated hardware.

Typically the OS sees a SCSI or ATA adaptor but the real hardware will
be fibre SAN.

So whilst X86 does not have lock step VMWARE does have lock step...

https://searchvmware.techtarget.com/definition/VMware-vLockstep

Dave

Grant Taylor

unread,
Dec 11, 2019, 11:49:53 AM12/11/19
to
On 12/10/19 11:10 PM, Dave Froble wrote:
> Except when I tried to test VMware, it told me my network device was not
> supported.  That wasn't so impressive.

Yes. Different hypervisors support different emulated hardware.

Hence my original line of questions in this thread. ;-)

Grant Taylor

unread,
Dec 11, 2019, 11:55:44 AM12/11/19
to
On 12/10/19 11:21 PM, Dave Froble wrote:
> I may have mentioned before that I'm hoping VSI gets their revenue from
> support, not licenses.  Needing to purchase a license most likely will
> have a negative effect on "spinning up" more VMS instances.  I'm
> guessing the current VM users are already to cough up support fees.
> Let's not confuse them.  Keep it simple, and what they are used to.

I know for the minor things that I do, 30 / 60 / 90 day evaluation
licenses will work for the things that I want to test.

I'd love to spin up an OpenVMS VM to see if OpenVMS is susceptible to
what I believe is the root cause behind CVE-2019-14899. Specifically,
the weak vs. strong end-system (host) model in TCP/IP stacks. A 30 day
evaluation would work perfectly for this. Heck, even a 14 day
evaluation would work quite well.

I also hope that the hobbyist license ends up better than what I
initially saw. (I did ignore the rest of the massive threads and will
re-address hobbyist licenses later.)

Bob Gezelter

unread,
Dec 11, 2019, 12:01:03 PM12/11/19
to
David,

The cited reference for vLockstep specifically notes that INSTRUCTION-level lockstep requires hardware support from the CPU itself. The cited resource does not address the question of randomization.

If your hardware RNG is operating properly, RNGs on different CPUs will invariably generate different results. If code then executes different code paths depending directly or indirectly on the output of the RNG, the execution paths will diverge.

Believe me, this can be quite a challenge. In my past, I spent a fair amount of time ensuring that two execution runs were identical in execution profile. It is surprising where randomness can creep in and produce interesting downstream effects.
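
A trivial illustration of that divergence (toy code, no relation to any
actual FT implementation): two replicas starting from identical state will
part ways as soon as a branch depends on a hardware RNG.

import secrets

def replica_step(value):
    # The branch depends directly on non-deterministic RNG output.
    if secrets.randbits(1):
        return value + 1
    return value * 2

a = b = 1
for _ in range(16):
    a = replica_step(a)
    b = replica_step(b)
print(a, b, a == b)   # almost certainly two different values

To keep replicas in step, the RNG output (like any other non-deterministic
input) has to be captured on the primary and replayed on the secondary
rather than generated independently on each.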

David Wade

unread,
Dec 11, 2019, 2:48:50 PM12/11/19
to
Bob,
As I said almost all modern IA64 CPUs have microcode support for
virtualization and lockstep.

https://kb.vmware.com/s/article/1008027

https://www.intel.co.uk/content/www/uk/en/virtualization/virtualization-technology/intel-virtualization-technology.html

I can't see VMWARE touting something that doesn't work. I never tried it
as when I was working with VMware it only supported single CPUs and
machines where I needed HA had more than one virtual CPU. I see VMware 6
supports up to four virtual CPUs.

As I said you are not running on real CPUs so you can intercept
instructions that can cause different results. There is a performance
impact as detailed here:-

https://www.vmware.com/files/pdf/techpaper/VMware-vSphere6-FT-arch-perf.pdf

(you will need to re-join the URL)

Dave

Bob Gezelter

unread,
Dec 11, 2019, 3:43:27 PM12/11/19
to
Dave,

Reference: https://en.wikipedia.org/wiki/RDRAND

I think NIST and NSA will be interested if RDRAND is synchronizable between different CPUs.

There is a big difference between maintaining an ongoing backup copy of memory and lockstep execution. The whitepaper appears on a quick reading to refer to the former.

VMs, as opposed to emulators, have always executed most code natively, while taking an exception for instructions which must be specially processed. Some CPUs have microcode or hardware assist for virtualization, others take a conventional processor fault and use exception routines to emulate the subject instruction. This has been true since VM/360.
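
A toy sketch of that trap-and-emulate split (invented instruction names,
not any real VMM's code): most guest instructions run natively, and the few
sensitive ones fault into the monitor.

PRIVILEGED = {"HLT", "OUT", "RDMSR"}     # stand-ins for sensitive instructions

def run_guest(instructions):
    trace = []
    for insn in instructions:
        if insn in PRIVILEGED:
            trace.append("trap -> monitor emulates " + insn)   # exception path
        else:
            trace.append("native execution of " + insn)        # straight on the CPU
    return trace

for line in run_guest(["ADD", "MOV", "OUT", "SUB", "HLT"]):
    print(line)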

Lockstep instruction execution with the two machines separated by a network link is inherently VERY slow. It is a matter of simple physics.

Phillip Helbig (undress to reply)

unread,
Dec 11, 2019, 4:04:01 PM12/11/19
to
In article <qspii2$v47$1...@dont-email.me>, Dave Froble
<da...@tsoft-inc.com> writes:

> > Virtual desktops are a thing. Your "desktop" no longer is a box under your
> > desk, but a VM in a datacenter somewhere. You connect to it via your laptop.
>
> I detest laptops, tablets, and smart phones. But that's me as a
> developer. Not going to happen on devices with non-friendly input devices.

Not really relevant here. You can also connect via a normal PC (a.k.a.
fat client). Think of an X-terminal; you have a nice screen and
keyboard and mouse and whatever you want, but that is just a front end
to the real application running elsewhere. You get something which
looks like a normal PC interface with the start button and everything;
it's just not running locally.

> I do use a tablet as a book reader, and sometimes as a moving map
> navigation device when flying. I admit that such devices do well for
> users of technology.

For reading a PDF file, for internet while travelling, etc., tablets are
fine. Ebook readers are even better for ebooks, but not really relevant
here.

> > It runs a standard image. If it is badly broken, it gets automatically
> > reimaged on request with the standard image.
>
> A powerful argument.
>
> > Saves power, hardware and more importantly: support costs.

Right. That's why people do it.

> But you still got the laptop, desktop, or other user interface.

But that is just a front end, with no real meat there---no storage to be
backed up, no software to install, etc.

Phillip Helbig (undress to reply)

unread,
Dec 11, 2019, 4:12:21 PM12/11/19
to
In article <qspmnj$i63$1...@tncsrv09.home.tnetconsulting.net>, Grant Taylor
<gta...@tnetconsulting.net> writes:

> On 12/10/19 7:00 PM, Dave Froble wrote:
> > I detest laptops, tablets, and smart phones.  But that's me as a
> > developer.  Not going to happen on devices with non-friendly input devices.
>
> How much of your preferred workstation is anything other than something
> that the keyboard, mouse / trackball / etc., monitor, speakers, and
> possibly other USB devices accessories plug into?

Think X-terminal.

> What would you say if there was a possibility that you could have the
> same peripherals connected to a GUI dumb terminal, and you had the same
> (or better) performance as you have now?

Times change. There was a cartoon with a Mac and a Windows PC with the
punchline that the difference is decreasing the more things are done via
a browser. Many, many applications these days are accessed only via
http(s), both for users and for management. Similarly, in the virtual
world, all you have on your desktop is what you really need.

> The workstation above is decidedly a production device.

Old joke: A train stops at the train station. A bus stops at the bus
station. On my desk is a workstation. :-)

David Wade

unread,
Dec 11, 2019, 4:59:14 PM12/11/19
to
It's lockstep.

> VMs, as opposed to emulators, have always executed most code natively, while taking an exception for instructions which must be specially processed. Some CPUs have microcode or hardware assist for virtualization, others take a conventional processor fault and use exception routines to emulate the subject instruction. This has been true since VM/360.

VMware won't run on a CPU without the Intel virtualization hardware support.

>
> Lockstep instruction execution with the two machines separated by a network link is inherently VERY slow. It is a matter of simple physics.
>
> - Bob Gezelter, http://www.rlgsc.com
>

Dave

johnwa...@yahoo.co.uk

unread,
Dec 11, 2019, 5:49:45 PM12/11/19
to
If it's lockstep as you claim, can you (or others) clarify
*what* is maintained in lockstep?

Feel free to cite your sources, especially if the sources describe
*how* it's achieved in a useful way.

Lockstep at accessible bus-cycle level hasn't existed in the x86
world for years; cache-related issues are just one of the reasons
they died out. An x86-derived system without cache isn't a saleable
system.

On the other hand, outside the IT department, there is sometimes
still a role for high integrity systems using lockstep-capable
processors. E.g. there are some microcontrollers based around ARM
cores targeted at safety-related markets, e.g
the TI Hercules family (based around ARM Cortex-R) come with
reasonably well documented lockstep capability.

Back in the IT department, a variant of the "lockstep" claim has
applied to a subset of server-class memory system designs in the
AMD64/x86-64 world.

But you mentioned IA64 - which is generally taken to mean legacy
Itanic-family stuff. Or maybe something got misread.

Here's a sample of one server vendor's definition of "lockstep",
as applied to memory subsystems in servers:
https://support.hpe.com/hpsc/doc/public/display?docId=emr_na-c01746830
"Lockstep - provides enhanced protection while making all
installed memory available to the operating system. The server
can continue to function if a single- or multi-bit memory
failure within a single DRAM device occurs."

So, what is *your* definition of lockstep?

Mind you, here's an even more radical redefinition of lockstep:
"Technical Discussion: What is Lockstepping?

Starcraft, Age of Empires, and Warcraft 3 all use lockstepping - not this particular library, rather the same idea. Lockstepping forces all user input to be broadcast over the network and executed roughly 200ms into the future; when you click to move a unit, there will always be a 200ms delay before the unit responds to your input. This 200ms, known as the "latency window", provides enough time for that command to reach all other networked players, then for that command to execute in synch across everyone's simulation of the game."

Is there perhaps a terminology-conflict issue here?



Hans Bachner

unread,
Dec 11, 2019, 7:08:24 PM12/11/19
to
Clair,

clair...@vmssoftware.com wrote on 09.12.2019 at 01:50:
> On Sunday, December 8, 2019 at 5:21:22 PM UTC-5, Hans Bachner wrote:
>> clair...@vmssoftware.com wrote on 08.12.2019 at 21:28:
>>> On Sunday, December 8, 2019 at 3:07:00 PM UTC-5, Arne Vajhøj wrote:
>>>> On 12/8/2019 1:34 PM, Grant Taylor wrote:
>>>>> On 12/8/19 10:52 AM, clair...@vmssoftware.com wrote:
>>>>>> Yes, I am booting VMS as a Fusion guest. We don't get very far yet but
>>>>>> we will eventually get VMS up and running.
>>>>>
>>>>> [...]
>>>
>>> Yes, the goal is production environments. Fusion is a convenient debugging environment. We will likely try the PC version as well.
>>
>> Clair,
>>
>> thanks for the excellent news that VMware apparently not only climbed up
>> on your priority stack, but you succeeded with some initial steps to get
>> VMS booting in VMware Fusion.
>>
>> [...]
>>
>> Keep the good news coming...
>>
>> Best regards,
>> Hans.
>
> I guess I need to set something straight. VMware has always been at the top of our list but the VMware folks wouldn't give us the time of day.

maybe my wording wasn't accurate enough. I remember the days when you
said they weren't interested in talking to you. And VMware was at the bottom
of the list of virtualization environments, if on it at all.

> Now they are talking to us.

Congratulations for convincing them - this is a real milestone, if not a
prerequisite to get OpenVMS running in VMware.

> That is why we started working with Fusion. We need them to officially support VMS as a guest OS. That takes a business agreement between the two companies which now seems a possibility. There was no sign of that until a couple weeks ago.
>
> ESX is obviously the goal. Anything else just gives us easier ways to get a sense of how much work it will take to get there.

Great to see you making progress.

Best regards,
Hans.

Dave Froble

unread,
Dec 11, 2019, 9:31:56 PM12/11/19
to
John nails this one.

:-)

Just as there are clusters, and then clusters, I guess there is
lockstep, and then lockstep. Devil is in the details.

To me, in this context, lockstep means 2 (or more) systems, where what
happens on one, exactly happens on the other(s). Should we call them
mirrors? Nah, just more confusion.

But back to VMs. For most cases, it just doesn't matter. How many apps
need lockstep (of whatever definition)? What I've been reading here
about VMs is interesting, and even exciting.

I'm guessing, no, actually declaring, the only absolute is that THERE
WILL BE EXCEPTIONS!

IanD

unread,
Dec 12, 2019, 7:47:38 AM12/12/19
to
A point of interest, Charon I believe claim VMware certification

Not sure if that includes an OpenVMS cluster or individual OpenVMS nodes migrating, I suspect the latter

For those wanting to read a light article on what VMware basically does during a migration, read the following link

https://blogs.vmware.com/vsphere/2019/07/the-vmotion-process-under-the-hood.html

We and others often see issues around stun times. In fact we have a current issue that has gone to VMware/Dell to be addressed which almost certainly will result in a patch being released. Migrations are complex beasts

I think OpenVMS might need a bit of work fitting into a virtualized world, especially where migrations are happening. The complexity in handling a member node of an OpenVMS cluster might be very interesting indeed.

A single standalone node is one thing, migrating a cluster member might be a very different matter. You might have to come up with a formula not to migrate more than x members of an OpenVMS cluster at once etc. I don't know, I just know that when you start to deal with HA clusters things get exponentially complex very quickly

It's very good news that OpenVMS is looking to be supported in VMware, very good news indeed

Simon Clubley

unread,
Dec 12, 2019, 8:23:03 AM12/12/19
to
On 2019-12-11, johnwa...@yahoo.co.uk <johnwa...@yahoo.co.uk> wrote:
>
> But you mentioned IA64 - which is generally taken to mean legacy
> Itanic-family stuff. Or maybe something got misread.
>

IA64 is also used in some places to mean the x86-64 architecture.

Bob Gezelter

unread,
Dec 12, 2019, 9:37:52 AM12/12/19
to
Simon,

The official Intel nomenclature is that IA-64 is Itanium. IA-32 is the architecture of the x86 family processors. The 64 bit extensions are just that, 64-bit extensions.

John Reagan

unread,
Dec 12, 2019, 10:17:28 AM12/12/19
to
From the current set of Intel docs:

"IA-32" is the traditional 32-bit i386 architecture.

"Intel 64" is the 64-bit x86-64 architecture.

The title of the arch manuals is:

"Intel(r) 64 and IA-32 Architectures Software Developer's Manual"

"Intel Itanium" is current name the 64-bit Itanium architecture. Prior naming had IA-64 back when Intel thought that Itanium would be the 64-bit system of the future before AMD came along and pounded 64-bit features into the i386 architecture. That IA-64 and IA64 have carried over in various pieces of software and documentation. There was also IPF (Itanium Processor Family) as well. And then HP made it more confusing with inventing the Integrity branding which at exclusively Itanium at the start but now includes "Intel 64" systems as well.

Grant Taylor

unread,
Dec 12, 2019, 11:13:01 AM12/12/19
to
On 12/12/19 6:22 AM, Simon Clubley wrote:
> IA64 is also used in some places to mean the x86-64 architecture.

Where are you seeing that?

I've always seen IA64 (Intel Architecture 64) to be Itanium.

x86-64 or x64 has been what I've seen for generic 64-bit x86 family CPUs.

I also see AMD-64 as a specific reference to AMD's 64-bit x86
implementation. (Which as I understand it is the first 64-bit
implementation in the x86 family.)

But IA64 has always been Itanium to me.

Grant Taylor

unread,
Dec 12, 2019, 11:18:35 AM12/12/19
to
On 12/12/19 8:17 AM, John Reagan wrote:
> From the current set of Intel docs:
>
> "IA-32" is the traditional 32-bit i386 architecture.
>
> "Intel 64" is the 64-bit x86-64 architecture.

*facepalm*

So "I64" could be s shortening of that to be Intel's 64-bit x86.

The missing "A" being critical.

> The title of the arch manuals is:
>
> "Intel(r) 64 and IA-32 Architectures Software Developer's Manual"
>
> "Intel Itanium" is current name the 64-bit Itanium architecture.
> Prior naming had IA-64 back when Intel thought that Itanium would be
> the 64-bit system of the future before AMD came along and pounded
> 64-bit features into the i386 architecture. That IA-64 and IA64
> have carried over in various pieces of software and documentation.
> There was also IPF (Itanium Processor Family) as well. And then HP
> made it more confusing with inventing the Integrity branding which
> at exclusively Itanium at the start but now includes "Intel 64"
> systems as well.

Tangentially related: Solaris using "i86pc" in lieu of the more
industry standard "i386" has caused confusion for me.

Robert A. Brooks

unread,
Dec 12, 2019, 11:25:01 AM12/12/19
to
On 12/12/2019 11:18 AM, Grant Taylor wrote:
> On 12/12/19 8:17 AM, John Reagan wrote:
>> From the current set of Intel docs:
>>
>> "IA-32" is the traditional 32-bit i386 architecture.
>>
>> "Intel 64" is the 64-bit x86-64 architecture.
>
> *facepalm*
>
> So "I64" could be s shortening of that to be Intel's 64-bit x86.
>
> The missing "A" being critical.

Not necessarily.

This is on a node running a VSI version of VMS.
Yeah, we need to fix the identification string.

NCP>show exec char


Node Volatile Characteristics as of 12-DEC-2019 11:18:57

Executor node = 1.149 (BROOKS)

Identification = HP DECnet for OpenVMS I64

[...]


I suspect there are other places where "I64" is used on an IA64 system.

--

-- Rob

Stanley F. Quayle

unread,
Dec 12, 2019, 11:38:31 AM12/12/19
to
> Charon I believe claim VMware certification

Yes, it does.

> Not sure if that includes an OpenVMS cluster or individual OpenVMS nodes migrating, I suspect the latter

I haven't tried migrating a whole cluster in one big "bet the farm" fashion. I frequently migrate nodes, one-by-one, until the cluster is entirely virtual.

> I think OpenVMS might need a bit of work fitting into a virtualized world especially where migrations are happening. The complexity in handling a node member of an OpenVMS cluster member might be very interesting indeed.

A CHARON node running on Linux/Windows under VMware can be vMotioned with only a slight delay. You could make the cluster tolerant of the few-seconds delay without triggering a cluster transition by adjusting the correct VMS parameters.

Grant Taylor

unread,
Dec 12, 2019, 12:19:29 PM12/12/19
to
On 12/12/19 9:24 AM, Robert A. Brooks wrote:
> Not necessarily.

> I suspect there are other places where "I64" is used on an IA64 system.

*FACEPALM*

Dave Froble

unread,
Dec 12, 2019, 12:19:50 PM12/12/19
to
Well, as far as I know, you are correct.

But, come now Grant, these are "people" we're talking about. You know,
humans, who can be seriously lacking. Else, how did we get such names
as "hard drive"? Entities such as marketing and sales do not respect
"official". You'll see it every day.

Remember, there are those whose first experience with computers was
WEENDOZE, and sure, of course, Microsoft invented computers, right?
Heck, they didn't even invent WEENDOZE, they stole it from Xerox.

Stephen Hoffman

unread,
Dec 12, 2019, 12:51:28 PM12/12/19
to
On 2019-12-12 16:24:58 +0000, Robert A. Brooks said:

> On 12/12/2019 11:18 AM, Grant Taylor wrote:
>> On 12/12/19 8:17 AM, John Reagan wrote:
>>> From the current set of Intel docs:
>>>
>>> "IA-32" is the traditional 32-bit i386 architecture.
>>>
>>> "Intel 64" is the 64-bit x86-64 architecture.

"Intel 64" is one of various names for processor products from Intel,
and the one that Intel is currently using for its 64-bit derivatives of
the x86 processor family.

Other names that Intel and AMD and various other folks have been using
for this architecture include x86-64, x86_64, x64, Intel IA-32E, Intel
EM64T, AMD64, and probably a few others. Variously x86, generically.

Intel has settled on Intel 64 as its name for its version of the 64-bit
architecture that AMD now calls AMD64.

Of all of the names around that aren't otherwise brand-entangled,
x86-64 is probably the most commonly-used generic name.

IA-64 and I64 are used to reference Itanium on OpenVMS.

The no-pun-intended core of Intel 64 and of AMD64 is the same, though
there are increasingly divergent and vendor-specific extensions to both
available.

As for these extensions, the OpenVMS port was reportedly still dependent on
what is currently an Intel-specific extension (PCID) to Intel 64.

...

> I suspect there are other places where "I64" is used on an IA64 system.

The explanation from some of the folks that are no longer involved with
these and other related product naming decisions was that I64 was
permissible within a then-HP product name. And IA-64 and IA64 could be
used to reference the Intel Itanium product, but not as part of a
non-Intel product name. But using OpenVMS IA64 as a product name was
viewed as conflicting with an Intel trademark.

VSI seemingly hasn't picked a product name for the x86-64 port,
which'll probably make it "fun" for the development folks in the run-up
to the V9.0 release.



--
Pure Personal Opinion | HoffmanLabs LLC

Scott Dorsey

unread,
Dec 12, 2019, 1:30:12 PM12/12/19
to
On 12/12/19 9:24 AM, Robert A. Brooks wrote:
> Not necessarily.

> suspect there are other places where "I64" is used on an IA64 system.

In a deliberate attempt to confuse potential customers?
--scott
--
"C'est un Nagra. C'est suisse, et tres, tres precis."

Simon Clubley

unread,
Dec 12, 2019, 1:45:43 PM12/12/19
to
On 2019-12-12, Dave Froble <da...@tsoft-inc.com> wrote:
> On 12/12/2019 11:13 AM, Grant Taylor wrote:
>> On 12/12/19 6:22 AM, Simon Clubley wrote:
>>> IA64 is also used in some places to mean the x86-64 architecture.
>>
>> Where are you seeing that?
>>

In architecture specific parts of some open source trees. Unfortunately,
I cannot remember which ones.

I've just had a quick look around for documentation references which
refer to x86-64 as IA64 and I found this:

https://docs.microsoft.com/en-us/visualstudio/deployment/assemblyidentity-element-clickonce-deployment?view=vs-2019

Check out the processorArchitecture attribute. The wording of that section
makes it _very_ clear that describing x86-64 as IA64 is deliberate in this
case and is not an accident.

When you have someone who has never used Itanium but uses Visual Studio
every day, it comes as no surprise they might start referring to x86-64
as IA64 when they see things like that.

>
> Else, how did we get such names
> as "hard drive"?

Hard drive came about because what came before it was the floppy drive.

Stephen Hoffman

unread,
Dec 12, 2019, 2:23:19 PM12/12/19
to
On 2019-12-12 18:45:39 +0000, Simon Clubley said:

> I've just had a quick look around for documentation references which
> refer to x86-64 as IA64 and I found this:
>
> https://docs.microsoft.com/en-us/visualstudio/deployment/assemblyidentity-element-clickonce-deployment?view=vs-2019
>
>
> Check out the processorArchitecture attribute. The wording of that
> section makes it _very_ clear that describing x86-64 as IA64 is
> deliberate in this case and is not an accident.
>
> When you have someone who has never used Itanium but uses Visual Studio
> every day, it comes as no surprise they might start referring to x86-64
> as IA64 when they see things like that.

That particular article has been wrong for a while.

https://stackoverflow.com/questions/54091782/application-manifest-for-64-bit-applications


Microsoft Windows XP 64-bit Edition requires an Itanium, and I suspect
that was the source of the confusion in that section.

https://en.wikipedia.org/wiki/Windows_XP_editions#Windows_XP_64-Bit_Edition

If folks do try IA64 as a target for x86-64, they almost
certainly won't get the expected results.

Hans Bachner

unread,
Dec 12, 2019, 3:24:27 PM12/12/19
to
IanD wrote on 12.12.2019 at 13:47:
> A point of interest, Charon I believe claim VMware certification

Yes, it does.

> Not sure if that includes an OpenVMS cluster or individual OpenVMS nodes migrating, I suspect the latter

I assume with "migrating" you mean moving a VM to a different host with
vMotion.

In the context of CHARON, "migration" is used for the process of moving
a physical Alpha/VAX to a CHARON instance.

But yes, using vMotion works just fine for CHARON hosts and the OpenVMS
systems running in them. As has been mentioned before in this thread,
you usually will observe a single ping showing higher latency, but the
systems stay up and running.

I did not yet try to move a VM with a single CHARON instance acting as a
cluster member, but I have a customer who runs a two-node cluster (with
a quorum disk) on a single VMware based CHARON host. You can move the VM
around between two datacenters ~10 km apart with no visible problems,
the cluster just keeps running.

My customers routinely use vMotion for both Windows and Linux based
CHARON hosts.

> [...]
>
> I think OpenVMS might need a bit of work fitting into a virtualized world especially where migrations are happening. The complexity in handling a node member of an OpenVMS cluster member might be very interesting indeed.

The OpenVMS cluster software is sufficiently tolerant of minor network
delays (if not aggressively configured).

The only problem I have heard of, though several years ago with slower
hardware and networks, was a system running Rdb. If Rdb had to handle
serious load, its heavy use of memory caches led to the situation that
copying memory contents just wasn't fast enough to keep up with Rdb
activities and vMotion took forever.

> A single standalone node is one thing, migrating a cluster member might be a very different matter. You might have to come up with a formula not to migrate more than x members of an OpenVMS cluster at once etc. I don't know, I just know that when you start to deal with HA clusters things get exponentially complex very quickly

A VMware HA cluster in most cases won't help because it just reboots the
VM on a different host. I don't know how CHARON/OpenVMS work in an FT
configuration. VMware supported FT only for single vCPU VMs for a long
time, while CHARON required at least two (v)CPUs. I have not looked at FT
configurations since FT started supporting VMs with multiple vCPUs.

> It's very good news that OpenVMS is looking to be supported in VMware, very good news indeed

+1

Hans.

Grant Taylor

unread,
Dec 12, 2019, 3:52:34 PM12/12/19
to
On 12/12/19 12:23 PM, Stephen Hoffman wrote:
> Microsoft Windows XP 64-bit Edition requires an Itanium,

Um ... I disagree. I'm about 98% certain that there is a 64-bit version
of Windows XP that runs on 64-bit x86 / x86_64 / x64 / AMD64.

There /may/ be an Itanium (IA64) version of XP, I'm just not aware of it.

Bob Gezelter

unread,
Dec 12, 2019, 4:02:32 PM12/12/19
to
On Thursday, December 12, 2019 at 1:45:43 PM UTC-5, Simon Clubley wrote:
>
> Hard drive came about because what came before it was the floppy drive.
>
> Simon.
>
> --
> Simon Clubley, clubley@remove_me.eisner.decus.org-Earth.UFP
> Walking destinations on a map are further away than they appear.

Simon,

As you are no doubt aware, that "before" only applies to the PC space.

In the real world, hard drives are more than a decade older than floppy drives. If desired, I can dig out the patent reference for hard drives.

Stephen Hoffman

unread,
Dec 12, 2019, 5:02:41 PM12/12/19
to
On 2019-12-12 20:52:35 +0000, Grant Taylor said:

> On 12/12/19 12:23 PM, Stephen Hoffman wrote:
>> Microsoft Windows XP 64-bit Edition requires an Itanium,
>
> Um ... I disagree.

You'd be wrong. This from having had a copy of the cited product. Might
still have a DVD copy around from a used-equipment purchase of an
Itanium workstation, too.

Reposting the link to the cited product name:
https://en.wikipedia.org/wiki/Windows_XP_editions#Windows_XP_64-Bit_Edition


> I'm about 98% certain that there is a 64-bit version of Windows XP that
> runs on 64-bit x86 / x86_64 / x64 / AMD64.

You're thinking of what Microsoft called Microsoft Windows XP
Professional x64 Edition.

Here's the link to that product:
https://en.wikipedia.org/wiki/Windows_XP_editions#Windows_XP_Professional_x64_Edition


Which does (or did?) boot on x86-64 systems.

This is not what Microsoft calls Microsoft Windows XP 64-bit Edition.

Which requires Itanium.

Confusing? Sure.

The Microsoft tech writer that wrote or that edited that section of
text that Simon previously cited was probably confused, too.

But Itanium is at its last hardware generation, OpenVMS I64 will almost
certainly remain the product name for the Itanium port of OpenVMS, and
the folks at VSI will decide upon a product name for the OpenVMS x86-64
port.
And Microsoft has never been known as a bastion of product naming
clarity, nor of product design clarity, nor of product licensing and
pricing clarity. Though they do have some good ideas to borrow, and
some bad ideas to avoid.

Grant Taylor

unread,
Dec 12, 2019, 5:27:11 PM12/12/19
to
On 12/12/19 3:02 PM, Stephen Hoffman wrote:
> You're thinking of what Microsoft called Microsoft Windows XP
> Professional x64 Edition.

Okay. I can't / won't argue with that.

> Which does (or did?) boot on x86-64 systems.

Not only did it boot on x86-64 systems, but it would not boot on x86-32
systems.

Well, the boot loader would technically boot far enough to display an
error to say that it required a 64-bit CPU.

> This is not what Microsoft calls Microsoft Windows XP 64-bit Edition.

Oy vey!

> Confusing?

Yes.

Scott Dorsey

unread,
Dec 12, 2019, 5:40:57 PM12/12/19
to
In article <qsu9cb$vkj$2...@tncsrv09.home.tnetconsulting.net>,
Grant Taylor <gta...@tnetconsulting.net> wrote:
>On 12/12/19 12:23 PM, Stephen Hoffman wrote:
>> Microsoft Windows XP 64-bit Edition requires an Itanium,
>
>Um ... I disagree. I'm about 98% certain that there is a 64-bit version
>of Windows XP that runs on 64-bit x86 / x86_64 / x64 / AMD64.

Yes, but that is called Windows XP Professional x64 Edition.
Windows XP 64-Bit Edition is the Itanium version.

>There /may/ be an Itanium (IA64) version of XP, I'm just not aware of it.

Yes, it's called Windows XP 64-Bit Edition, and it's actually kind of a
different fork of XP and wasn't marketed the same way.

Craig A. Berry

unread,
Dec 12, 2019, 7:49:21 PM12/12/19
to
On 12/12/19 11:19 AM, Grant Taylor wrote:
> On 12/12/19 9:24 AM, Robert A. Brooks wrote:
>> Not necessarily.
> …
>> I suspect there are other places where "I64" is used on an IA64 system.
>
> *FACEPALM*

"OpenVMS I64" as an abbreviation for "OpenVMS Industry Standard 64" is
the official marketing name for VMS on Itanium and always has been. I
don't see the reason for the facepalm as the only people who thought it
meant something else weren't paying attention to what Intel and HP/HPE
were doing.

The name of the hardware architecture, on the other hand, has always
been IA64, and that is reflected in various places, such as:

$ write sys$output f$getsyi("arch_name")
IA64
$ write sys$output f$getsyi("NODE_HWTYPE")
IA64
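
So a command procedure can key off those lexicals when it has to behave
differently per architecture. A minimal sketch; note that the "X86_64"
string and the "X86" kit suffix are assumptions here, since VSI had not
yet published the official values for the new port:

$ arch = f$edit(f$getsyi("ARCH_NAME"),"UPCASE")
$ kit_suffix = "UNKNOWN"
$ if arch .eqs. "ALPHA"  then kit_suffix = "AXP"    ! as in DEC-AXPVMS-* kit names
$ if arch .eqs. "IA64"   then kit_suffix = "I64"    ! as in *-I64VMS-* kit names
$ if arch .eqs. "X86_64" then kit_suffix = "X86"    ! assumed; not yet announced
$ write sys$output "Architecture ''arch', kit suffix ''kit_suffix'"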

Arne Vajhøj

unread,
Dec 12, 2019, 8:08:12 PM12/12/19
to
On 12/12/2019 1:45 PM, Simon Clubley wrote:
> On 2019-12-12, Dave Froble <da...@tsoft-inc.com> wrote:
>> On 12/12/2019 11:13 AM, Grant Taylor wrote:
>>> On 12/12/19 6:22 AM, Simon Clubley wrote:
>>>> IA64 is also used in some places to mean the x86-64 architecture.
>>>
>>> Where are you seeing that?
>
> In architecture specific parts of some open source trees. Unfortunately,
> I cannot remember which ones.
>
> I've just had a quick look around for documentation references which
> refer to x86-64 as IA64 and I found this:
>
> https://docs.microsoft.com/en-us/visualstudio/deployment/assemblyidentity-element-clickonce-deployment?view=vs-2019
>
> Check out the processorArchitecture attribute. The wording of that section
> makes it _very_ clear that describing x86-64 as IA64 is deliberate in this
> case and is not an accident.

Not really.

This is just a documentation bug.

The correct values are MSIL, X86, AMD64 and IA64 where AMD64 means
x86-64 and IA64 means Itanium.
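
For reference, the attribute lives on the assemblyIdentity element of
the deployment manifest. A rough sketch, with the surrounding attributes
abbreviated and the values purely illustrative (the attribute values are
usually written in lower case in the manifest itself):

<!-- illustrative only: an identity targeting x86-64, not Itanium -->
<assemblyIdentity name="MyApp.application"
                  version="1.0.0.0"
                  publicKeyToken="0000000000000000"
                  language="neutral"
                  processorArchitecture="amd64" />
<!-- processorArchitecture="ia64" would mean Itanium -->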

They got it right elsewhere in the docs:

https://docs.microsoft.com/en-us/dotnet/framework/configure-apps/file-schema/runtime/assemblyidentity-element-for-runtime

And if you have any doubts then look at the code the XML is being
mapped to:

https://docs.microsoft.com/en-us/dotnet/api/microsoft.build.tasks.deployment.manifestutilities.assemblyidentity.processorarchitecture?view=netframework-4.8

https://docs.microsoft.com/en-us/dotnet/api/microsoft.build.utilities.processorarchitecture?view=netframework-4.8

Others have noted the bug:

https://stackoverflow.com/questions/13867188/are-ia64-and-amd64-interchangeable-in-clickonce-manifests

> When you have someone who has never used Itanium but uses Visual Studio
> every day, it comes as no surprise they might start referring to x86-64
> as IA64 when they see things like that.

Hmmmm.

First, not that many use ClickOnce. It is not a technology that is in
fashion today.

Second, anyone deploying to an Itanium with the wrong value will get an
error, and anyone using IA64 on something that is not an Itanium will
get an error.

So everybody using the wrong info will get an error.

Arne

Arne Vajhøj

unread,
Dec 12, 2019, 8:16:43 PM12/12/19
to
On 12/12/2019 7:49 PM, Craig A. Berry wrote:
> On 12/12/19 11:19 AM, Grant Taylor wrote:
>> On 12/12/19 9:24 AM, Robert A. Brooks wrote:
>>> Not necessarily.
>> …
>>> I suspect there are other places where "I64" is used on an IA64 system.
>>
>> *FACEPALM*
>
> "OpenVMS I64" as an abbreviation for "OpenVMS Industry Standard 64" is
> the official marketing name for VMS on Itanium and always has been.  I
> don't see the reason for the facepalm as the only people who thought it
> meant something else weren't paying attention to what Intel and HP/HPE
> were doing.

It is also used in file names in installation kits.

> The name of the hardware architecture, on the other hand has always been
> IA64, and that is reflected in various places, such as:
>
> $ write sys$output f$getsyi("arch_name")
> IA64
> $ write sys$output f$getsyi("NODE_HWTYPE")
> IA64

Some name conversion will be needed.

Itanium -> x86-64

seems very natural to me.

I64 -> ?
IA64 -> ?

It may be considered heretical, but I think X64 would be fine.

Three letters, no special characters, and thanks to Microsoft most
people will understand what it means.

Arne

Arne Vajhøj

unread,
Dec 12, 2019, 8:20:10 PM12/12/19
to
On 12/12/2019 8:22 AM, Simon Clubley wrote:
> On 2019-12-11, johnwa...@yahoo.co.uk <johnwa...@yahoo.co.uk> wrote:
>> But you mentioned IA64 - which is generally taken to mean legacy
>> Itanic-family stuff. Or maybe something got misread.
>
> IA64 is also used in some places to mean the x86-64 architecture.

It should not.

IA-64 is Itanium.

AMD64, EM64T, IA-32e and Intel 64 all are or have been widely used
names for x86-64.

(and the IA-32e name is really misleading!)

But nobody should use IA-64 for it.

Arne
