
Distributed Applications, Hashgraph, Automation


IanD

Feb 14, 2018, 1:52:40 PM
This is indeed interesting news

I know it was posted a while ago but I just saw it

https://www.swirlds.com/vms-software-selects-swirlds-hashgraph-as-platform-to-build-secure-distributed-applications/

There are actually a few different technologies coming online that offer fast transaction throughput. Hashgraph is probably the more proven technology, as it's already deployed in some organisations.

Iota would be another interesting one with the tangle protocol

There are certainly some forward-thinking ideas flowing for OpenVMS to get into.

Other concepts like Docker will keep gaining traction.

The whole DevOps BS mantra is also gaining ground, with that concept being pushed into other spaces, like server provisioning.

In a growing number of places, one cannot even log into the production environments with admin accounts to do an OS install or configuration.
One must use tools like Chef and do everything at arm's length.

Docker can certainly assist with the whole packaging idea, and one would think that with OpenVMS's history of app stacking it would be a fairly natural fit.

I wonder if Hashgraph has the ability to ultimately replace cluster traffic on OpenVMS? It's supposed to scale to around 200K transactions per second, according to the glossy brochures at least.

I really didn't expect to see the Hashgraph post on VSI's site. Very forward thinking, and good advertising for how far they are hoping to push OpenVMS.

At least people visiting the VSI site might not automatically think of OpenVMS as the dinosaur

Simon Clubley

Feb 14, 2018, 2:29:32 PM
On 2018-02-14, IanD <iloveo...@gmail.com> wrote:
>
> In a growing number of places, one cannot even log into the production environments with admin accounts to even do an OS install or configuration.
> One must use tools like Chef and do everything at arm's length
>

That can work both ways. While things like that are certainly needed and
VMS certainly needs a massive dose of automation work, there is one thing
I do worry about here.

If you can manage full networks with these tools, then doesn't this risk
leading to a shortfall of people with the skills to securely design and
implement the next generation of very low-level tools and libraries that
these higher-level, easy-to-use tools rely on?

IOW, these higher-level tools are needed in today's world, but where do
you get enough people with the experience to _properly_ design the next
generation of tools?

IoT and other embedded devices are a really good example of this, in that
some people think that just because they can write some high-level code,
they are qualified to handle all the low-level issues that embedded
devices require to be handled.

As the various incidents have shown, embedded devices can have massive
security issues that should never have existed in the first place.

>
> I wonder if Hashgraph has the ability to ultimately replace cluster traffic
> on OpenVMS? It's supposed to scale to I think 200K transactions per second,
> according to the glossy brouchers at least
>

Interesting question as there are clusters and then there are clusters.

There are a number of places where this can be used, but I am having a hard
time seeing how it can replace the traditional zero-data-loss,
disaster-tolerant, mission-critical transaction processing that some VMS
clusters do.

If it can do that, then it seems all you are doing is replacing one
clustering protocol with another that has the word "blockchain" in it.

IOW, I am not yet seeing what the unique selling point is for people
who are used to VMS style clustering.

>
> At least people visiting the VSI site might not automatically think of
> OpenVMS as the dinosaur

That depends on whether VSI management can take a reality based approach
to what they place on their website.

Simon.

--
Simon Clubley, clubley@remove_me.eisner.decus.org-Earth.UFP
Microsoft: Bringing you 1980s technology to a 21st century world

Arne VajhΓΈj

Feb 14, 2018, 2:53:40 PM
On 2/14/2018 1:52 PM, IanD wrote:
> This is indeed interesting news
>
> I know it was posted a while ago but I just saw it
>
> https://www.swirlds.com/vms-software-selects-swirlds-hashgraph-as-platform-to-build-secure-distributed-applications/

> I wonder if Hashgraph has the ability to ultimately replace cluster
> traffic on OpenVMS? It's supposed to scale to I think 200K
> transactions per second, according to the glossy brouchers at least
I think it is a business application thing not an OS thing.

[application clusters are in many ways doing the same as OS clusters,
but ...]

After all it is a J thing.

:-)

Arne


Kerry Main

Feb 14, 2018, 10:00:09 PM
to comp.os.vms to email gateway
This is one of those "it depends" answers.

Regardless of the OS platform, there are really only two types of
clustering architectures:

1. Shared disk (OpenVMS, Linux/GFS, z/OS, others)
2. Shared nothing (OpenVMS, Linux, Windows, *NIX, NonStop, others)

<http://www.benstopford.com/2009/11/24/understanding-the-shared-nothing-architecture/>

"So shared nothing is great for systems needing high throughput writes
if you can shard your data and stay clear of transactions that span
different shards. The trick for this is to find the right partitioning
strategy, for instance you might partition data for a online banking
system such that all aspects of a user's account are on the same
machine. If the data set can be partitioned in such a way that
distributed transactions are avoided then linear scalability, at least
for key-based reads and writes, is at your fingertips.

The counter, from the shared disk camp, is that they can use
partitioning too. Just because the disk is shared does not mean that
data can't be partitioned logically with different nodes servicing
different partitions. There is much truth to this, assuming you can set
up your architecture so that write requests are routed to the correct
machine, as this tactic will reduce the amount of lock (or block)
shipping taking place (and is exactly how you optimise databases like
Oracle RAC)."
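A minimal sketch of the key-based partitioning the article describes, assuming a hypothetical online-banking layout where everything for one account lives on one shard (the node names and hashing scheme are illustrative only):

# Minimal sketch of key-based partitioning (shared-nothing style).
# The node names and account key are made up for illustration.
import hashlib

NODES = ["node-a", "node-b", "node-c", "node-d"]  # hypothetical shard servers

def shard_for(account_id: str) -> str:
    """Route every operation for one account to the same node,
    so transactions never span shards."""
    digest = hashlib.sha256(account_id.encode("utf-8")).digest()
    return NODES[int.from_bytes(digest[:4], "big") % len(NODES)]

# All reads and writes for account "12345678" land on one machine:
print(shard_for("12345678"))

Modulo hashing is the simplest possible strategy; a production system would more likely use consistent hashing or a lookup directory so that adding a node does not reshuffle every account.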

I like the analogy that compares the shared nothing model (Windows,
Linux, OpenVMS) to a dragster and the shared everything model
(Linux/GFS, OpenVMS, Z/OS) to a Ferrari. In a quarter mile race on a
track, the dragster will win hands down every time. In a race on normal
streets, the Ferrari will win every time.


Regards,

Kerry Main
Kerry dot main at starkgaming dot com





Phillip Helbig (undress to reply)

Feb 15, 2018, 6:28:55 AM
In article <mailman.1.1518663581.1...@rbnsn.com>,
"Kerry Main" <kemain...@gmail.com> writes:

> Regardless of the OS platform, there is really only 2 types of
> clustering architectures:
>
> 1. Shared disk (OpenVMS, Linux/GFS ,z/OS, others)
> 2. Shared nothing (OpenVMS, Linux. Windows, *NIX, Non-Stop, others)

Shared disk and shared nothing.

> I like the analogy that compares the shared nothing model (Windows,
> Linux, OpenVMS) to a dragster and the shared everything model
> (Linux/GFS, OpenVMS, Z/OS) to a Ferrari. In a quarter mile race on a
> track, the dragster will win hands down every time. In a race on normal
> streets, the Ferrari will win every time.

Shared everything and shared nothing.

Kerry Main

Feb 15, 2018, 7:20:05 AM
to comp.os.vms to email gateway
> -----Original Message-----
> From: Info-vax [mailto:info-vax...@rbnsn.com] On Behalf Of
> Phillip Helbig undress to reply via Info-vax
> Sent: February 15, 2018 6:29 AM
> To: info...@rbnsn.com
> Cc: Phillip Helbig undress to reply <hel...@asclothestro.multivax.de>
> Subject: Re: [Info-vax] Distributed Applications, Hashgraph,
Automation
>
> In article <mailman.1.1518663581.18116.info-
> vax_rb...@rbnsn.com>,
Just to clarify -

While the OpenVMS community refers to its clustering architecture as shared
everything, the industry term for the same thing is shared disk.

In both cases, one could refer to these as differing strategies to share
data between multiple systems. There are pros and cons.

Another good extract from the link:
<http://www.benstopford.com/2009/11/24/understanding-the-shared-nothing-architecture/>

" Shared Disk Architectures are write-limited where multiple writer
nodes must coordinate their locks around the cluster. Shared Nothing
Architectures are write limited where writes span multiple partitions
necessitating a distributed two phase commit."

So, in terms of the new cryptocurrency considerations, the real question
is: "Regardless of the OS platform, and keeping in mind that the network is
by far the biggest contributor to overall 'solution' latency, which data
sharing strategy (shared disk or shared nothing) is a better approach for a
cryptocurrency solution?"

The second question would then be: "Given the answer to the first question,
which OS platform is better suited and has proven itself as a solid
implementation of the strategy decided on in question 1?"

[hint - try not to let OS religion drive the answer to Q1]

Phillip Helbig (undress to reply)

Feb 15, 2018, 7:22:29 AM
In article <mailman.2.1518697085.1...@rbnsn.com>,
"Kerry Main" <kemain...@gmail.com> writes:

> Just to clarify -
>
> While the OpenVMS community refer to its clustering arch as shared
> everything, the industry term for the same thing is shared disk.

Other people in the industry refer to a wide variety of configurations
as "clusters", though many offer not even the most basic of VMS cluster
benefits. In a VMS cluster with shared disks, more is usually involved
than in other "clusters" with shared disks.

Stephen Hoffman

Feb 15, 2018, 1:00:36 PM
On 2018-02-15 12:22:27 +0000, Phillip Helbig (undress to reply) said:


> Other people in the industry refer to a wide variety of configurations
> as "clusters", though many offer not even the most basic of VMS cluster
> benefits. In a VMS cluster with shared disks, more is usually
> involved than in other "clusters" with shared disks.

Shared storage is a simple model, though the volume-level granularity
of sharing makes the whole scheme somewhat less than efficient when
you're really trying to push I/O around. The locking overhead inherent
in the sharing also limits how well the whole design works. The apps
tend to know what information the apps need to share. Sharing the
whole volume certainly works, and it's fairly simple to work with, but
a volume-level shared-access file system doesn't abstract app network
access and data sharing all that well and the sharing itself is
inherently self-limiting, around performance and around details such as
online or offline backups. Yeah, with 10 GbE and SSD for locks and for
storage respectively, that ceiling is rather higher than it used to be.
But the shared storage also tends to tip over apps that don't expect
that to even be happening, including OpenVMS native apps that aren't
developed as cluster-aware.

OpenVMS needs to fully adopt LDAP, and replace SYSUAF and RIGHTSLIST.
Fully integrate LDAP with MAIL and other network services. Across
clusters.
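As a rough illustration of what LDAP-backed authentication looks like from the application side (this is not an existing OpenVMS facility; the server address, DN layout and the Python ldap3 library are assumptions for the sketch):

# Hypothetical LDAP bind check standing in for a SYSUAF password lookup.
# Server address and DN structure are examples only.
from ldap3 import Server, Connection, ALL

def authenticate(username: str, password: str) -> bool:
    server = Server("ldap.example.com", use_ssl=True, get_info=ALL)
    user_dn = f"uid={username},ou=People,dc=example,dc=com"
    conn = Connection(server, user=user_dn, password=password)
    ok = conn.bind()   # True only if the directory accepted the credentials
    conn.unbind()
    return ok

print(authenticate("jdoe", "not-a-real-password"))

The point is that the account record, password policy and group membership live in the directory, replicated to whatever hosts need them, rather than in per-cluster SYSUAF and RIGHTSLIST files.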

OpenVMS needs to make clustering vastly easier to configure, and easier
to manage. ~Dozens of manually-referenced shared files is not a user
interface, it's a hilariousness.

UICs and identifiers are a problem for adding and removing hosts.
That's all tossed at the system manager to manually de-conflict. Or to
ignore, with all the hilariousness that can then entail. Deprecate
those for most uses and UUID all the underlying parts.

The lock manager API needs a wrapper for simpler usage for common
tasks. Electing a primary process, notifications of associated apps
and processes arriving and departing, etc.
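Something like the sketch below is the sort of wrapper being suggested; it is not the DLM API, and it uses an advisory file lock purely as a stand-in coordination primitive so the election pattern (first taker becomes primary, everyone else is standby) is runnable anywhere:

# Illustrative "elect a primary" wrapper. On OpenVMS the real thing would sit
# on $ENQ/$DEQ; here an fcntl file lock stands in as the coordination primitive.
import fcntl, os, time

class PrimaryElection:
    def __init__(self, resource_name: str):
        # One lock file per named resource; the path is arbitrary for the sketch.
        self._fh = open(f"/tmp/election_{resource_name}.lock", "w")

    def try_become_primary(self) -> bool:
        """Return True if this process won the election (non-blocking)."""
        try:
            fcntl.flock(self._fh, fcntl.LOCK_EX | fcntl.LOCK_NB)
            return True
        except BlockingIOError:
            return False

    def resign(self):
        fcntl.flock(self._fh, fcntl.LOCK_UN)

election = PrimaryElection("MYAPP_PRIMARY")
if election.try_become_primary():
    print(os.getpid(), "is primary")
    time.sleep(5)
    election.resign()
else:
    print(os.getpid(), "is standby")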

Notifications need to be integrated and consolidated and not scattered
across a dozen different interfaces and APIs and files.

Task scheduling for the local host, for the cluster, and across clusters.

Message-passing needs to be available and increasingly preferred over
shared disk for apps that require performance.

File shares. Client and server. Integrated. Gotta play nice with
SMB. Current-version SMB. For most folks, SMB is what shared storage
means. Not SCS, and not what OpenVMS clustering provides. (No, I
wouldn't expect to see SCS replaced with SMB. But that's certainly an
interesting idea. NQ, Tuxera or otherwise might be a starting point
here, too.)

Ubiquitous encryption. Ubiquitous distributed authentication.
System-integrated TLS and DTLS. This also includes encrypted SCS.
Make unencrypted SCS a special-case security downgrade. Use
processor-level cryptographic support, and consider requiring
processors with that support. APIs that help developers avoid the
common errors. Fully-integrated certificate support with a maintained
root store, and with an overhauled UI.

Secured and encrypted key store for passwords and private keys,
preferably distributed.

Secured automatic software distributions for VSI and for third-party
apps. Opt-in automatic installations for priority patches.

Online backups. Easier restorations and recoveries. Not just for
integrity, but because OpenVMS servers have been breached and OpenVMS
servers will get breached, and intrusion detection is largely missing
and tends to be delayed allowing attackers access for far too long, and
server restoration and recovery as currently implemented is a horrid
process.

Fully-integrated IP. Not layered. Not add-on. Not anything resembling
the current kitting and organization, a wonderful case that dates back
to early VMS development's now-absurd antipathy toward IP. Always
present. Always installed. IPv4 and IPv6. Current services also
always present and always integrated and always configured (if not
enabled), including Apache, Tomcat, DNS, DHCPv6, etc. Message-passing,
too.

telnet, ftp, DECnet and any other insecure protocols either need to be
wrapped and updated and secured, or they need to be deprecated and
removed.

Image-install and boot and connect into the newly-installed guest
securely, and without requiring a hardware console. Making OpenVMS
easier and simpler to deploy, whether you're running your own hardware
or VMs, or you're hosting. Yes, some folks want to run OpenVMS hosted
elsewhere. Get over it. Make it easier. Make it as secure as it can
reasonably be, using SGX or otherwise where appropriate.

We're also headed toward byte-addressable non-volatile main storage,
and that's going to be a big change to storage.
https://www.usenix.org/system/files/conference/fast18/fast18-won.pdf
Etc.

Clustered is the default system configuration, and the default
installation configuration. Not standalone. Allow advanced users to
de-tune and de-configure, where that's specifically necessary. No
separate license, either. Make OpenVMS hosts trivial to connect and to
coordinate and to securely serve and share storage. Don't erect
roadblocks and impediments and complexity on the path to one of the
most powerful remaining features within the whole platform.

Or we can discuss the most heavily advertised algorithms, like what a
b-tree algorithm would have turned into had that algorithm had a
slightly better marketing agent and funding from a sketchy-looking
crypto currency acting as a close associate. Or we can endlessly
debate the features and limitations of pre-millennium clustering and
who was first with what terminology, of course. Or we can debate
marketing the current cluster-related features and support, because
that's worked out so well over the past twenty+ years. I'm sure
that'll all work out well for the future of OpenVMS, too. Or we can
realize that we're headed forward, not back to Itanium or Alpha nor to
VAX, and that we're not headed back to ISO or DECnet or wide-open
connections, nor to existing and old assumptions around staffing and
skills and software and network environments.


--
Pure Personal Opinion | HoffmanLabs LLC

Kerry Main

Feb 15, 2018, 1:15:05 PM
to comp.os.vms to email gateway
> -----Original Message-----
> From: Info-vax [mailto:info-vax...@rbnsn.com] On Behalf Of
> Phillip Helbig undress to reply via Info-vax
> Sent: February 15, 2018 7:22 AM
> To: info...@rbnsn.com
> Cc: Phillip Helbig undress to reply <hel...@asclothestro.multivax.de>
> Subject: Re: [Info-vax] Distributed Applications, Hashgraph,
Automation
>
> In article <mailman.2.1518697085.18116.info-
> vax_rb...@rbnsn.com>,
While I agree with you, the really basic difference is whether the
co-ordination of writes among multiple servers is done at the OS level
(shared disk - integrated or add-on DLM) or at the App / DB layers
(shared nothing requires App logic coding and data partitioning).

The article points out where each data sharing strategy's strengths and
weaknesses are.

Norman F Raphael

Feb 15, 2018, 1:50:05 PM
to info...@info-vax.com
-----Original Message-----
From: Stephen Hoffman via Info-vax <info...@info-vax.com>
To: info-vax <info...@info-vax.com>
Sent: Thu, Feb 15, 2018 1:10 pm
Subject: Re: [New Info-vax] Distributed Applications, Hashgraph, Automation
>
> On 2018-02-15 12:22:27 +0000, Phillip Helbig (undress to reply said:
>
> > Other people in the industry refer to a wide variety of configurations
> > as "clusters", though many offer not even the most basic of VMS cluster
> > benefits. In a VMS cluster with shared disks, more is usually
> > involved than in other "clusters" with shared disks.
> <snip>

>
> We're also headed toward byte-addressable non-volatile main storage,
> and that's going to be a big chance to storage.
> https://www.usenix.org/system/files/conference/fast18/fast18-won.pdf
> Etc.

There is a typo in here: "Figure 9: 4KB Randwom Write;" s/b "Random"

> <snip>

Norman F. Raphael
Please reply to: norman....@ieee.org
"Everything worthwhile eventually
degenerates into real work." -Murphy

Bob Koehler

Feb 15, 2018, 5:00:19 PM
In article <mailman.2.1518697085.1...@rbnsn.com>, "Kerry Main" <kemain...@gmail.com> writes:
>
> While the OpenVMS community refer to its clustering arch as shared
> everything, the industry term for the same thing is shared disk.

Great, another misleading industry standard phrase that leaves out
details so anyone can hang their coat on it.

We share a lot more things than disks in an OpenVMS cluster. Maybe
the other "shared disk" clusters do, too; or not.

Craig A. Berry

Feb 15, 2018, 5:51:43 PM
On 2/15/18 12:00 PM, Stephen Hoffman wrote:

> File shares.  Client and server.  Integrated.  Gotta play nice with
> SMB.  Current-version SMB.  For most folks, SMB is what shared storage
> means.  Not SCS, and not what OpenVMS clustering provides.  (No, I
> wouldn't expect to see SCS replaced with SMB.  But that's certainly an
> interesting idea.  NQ, Tuxera or otherwise might be a starting point
> here, too.)

I don't know much since I haven't been around a cluster in a long time,
but shouldn't "SCS" in that paragraph really read "MSCP"?

Richard Maher

Feb 15, 2018, 9:20:11 PM
On 15-Feb-18 8:17 PM, Kerry Main wrote:
>
> Just to clarify -
>
> While the OpenVMS community refer to its clustering arch as shared
> everything, the industry term for the same thing is shared disk.
>
> In both cases, one could refer to these as differing strategies to share
> data between multiple systems. There are pro's and con's.
>

I disagree and think you'll find that the third option, "shared
everything", includes shared memory. I can't believe I've forgotten what
VMS's offering for a low-latency interconnect was... Memory Channel?

Oracle Cache Fusion and Redis Cache are wide area examples.

Kerry Main

Feb 15, 2018, 10:20:05 PM
to comp.os.vms to email gateway
> -----Original Message-----
> From: Info-vax [mailto:info-vax...@rbnsn.com] On Behalf Of
> Richard Maher via Info-vax
> Sent: February 15, 2018 9:20 PM
> To: info...@rbnsn.com
> Cc: Richard Maher <maher_rj...@hotmail.com>
> Subject: Re: [Info-vax] Distributed Applications, Hashgraph,
Automation
>
mmmm.. it's a bit different, but the basics are really about how data
sharing is done between servers.

Regardless of whether disk or memory sharing, with shared disk (OpenVMS
- shared everything), there is still a DLM doing the inter-server update
coordination.

I fully agree OpenVMS has significant advantages over other shared disk
offerings - mission critical proven DLM, cluster logicals, cluster
batch, common file system (new one with significant new features cooking
as well). However, the industry really only looks at shared disk or
shared nothing.

Btw, the modern-day equivalent to Memory Channel for ultra-low-latency
data sharing is either InfiniBand or RoCEv2 (RDMA over Converged
Ethernet).

Not sure where it is at right now, but RoCEv2 is on the research slide
of the OpenVMS roadmap.

Imho, this type of cluster communications capability is critical to next
generation cluster scalability of shared disk clusters. It is how VSI
can address the biggest counter argument to shared disk clusters -
"shared disk clusters have scalability issues due to the requirement of
a distributed lock manager"

Note - RoCEv2 is supported on Linux and Microsoft environments, and that is
what VSI's competition is in the new world.

Reference:
<http://www.mellanox.com/related-docs/whitepapers/roce_in_the_data_center.pdf>
" OS bypass gives an application direct access to the network card,
allowing the CPU to communicate directly with the I/O adapter, bypassing
the need for the operating system to transition from the user space to
the kernel. With RDMA, there is no need for involvement from the OS or
driver, creating a huge savings in efficiency of the interconnect
transaction.

RDMA also allows communication without the need to copy data to the
memory buffer. This zero copy transfer enables the receive node to read
data directly from the send node's memory, thereby reducing the overhead
created from CPU involvement.

Furthermore, unlike in legacy interconnects, RDMA provides for the
transport protocol stack to be handled by the hardware. By offloading
the stack from software, there is less CPU involvement, and the
transport is more reliable.

The overall effect of the significant reduction of CPU overhead that
RDMA provides by way of OS bypass, zero copy, and CPU offloading is to
maximize efficiency in order to provide lightning fast interconnect"

Richard Maher

Feb 15, 2018, 11:16:29 PM
On 16-Feb-18 11:15 AM, Kerry Main wrote:
>> -----Original Message-----
>> From: Info-vax [mailto:info-vax...@rbnsn.com] On Behalf Of
>> Richard Maher via Info-vax
>> Sent: February 15, 2018 9:20 PM
>> To: info...@rbnsn.com
>> Cc: Richard Maher <maher_rj...@hotmail.com>
>> Subject: Re: [Info-vax] Distributed Applications, Hashgraph,
> Automation
>>
>> On 15-Feb-18 8:17 PM, Kerry Main wrote:
>>>
>>> Just to clarify -
>>>
>>> While the OpenVMS community refer to its clustering arch as shared
>>> everything, the industry term for the same thing is shared disk.
>>>
>>> In both cases, one could refer to these as differing strategies to
> share
>>> data between multiple systems. There are pro's and con's.
>>>
>>
>> I disagree and think you'll find that the third option "shared
>> everything" includes share memory. I can't believe I've forgotten what
>> VMS' offering for a low latency interconnect was Memory Channel?
>>
>> Oracle Cache Fusion and Redis Cache are wide area examples.
>
> mmmm.. it's a bit different, but the basics are really about how data
> sharing is done between servers.

IMHO Shared Everything does what it says on the tin.

>
> Regardless of whether disk or memory sharing, with shared disk (OpenVMS
> - shared everything), there is still a DLM doing the inter-server update
> coordination.

And Oracle took that beautiful tool, with its bullshit 16- then 64?-byte
LVB limitation, and created Cache Fusion, where the data moves around the
cluster WITH the lock and so much I/O is simply eliminated.

VMS engineering asleep again with their head up their arse about
DECforms :-(

>
> I fully agree OpenVMS has significant advantages over other shared disk
> offerings - mission critical proven DLM, cluster logicals, cluster
> batch, common file system (new one with significant new features cooking
> as well). However, the industry really only looks at shared disk or
> shared nothing.

It also has many disadvantages:
1) Maximum number of nodes
2) Geographical limitations
3) No PaaS capability

>
> Btw, the modern day equivalent to memory channel and ultra low latency
> data sharing is either Infiniband or RoCEv2 (RDMA over converged
> ethernet)
>
> Not sure where it is at right now, but RoCEv2 is on the research slide
> of the OpenVMS roadmap.

Goodo.

>
> Imho, this type of cluster communications capability is critical to next
> generation cluster scalability of shared disk clusters. It is how VSI
> can address the biggest counter argument to shared disk clusters -
> "shared disk clusters have scalability issues due to the requirement of
> a distributed lock manager"

Oracle's DLM seems not to have these scalability issues.

Kerry Main

Feb 16, 2018, 12:15:05 AM
to comp.os.vms to email gateway
Technically speaking - 96 servers x 64 CPUs each, with 2 TB?

> 2) Geographical limitations

If you want sync data (RPO=0), then in any multi-site environment, you
are typically limited to <100km.

> 3) No PaaS capability

That can come later .. the public cloud is just a modern hyped name for
"outsourcing lite"

Many Customers who went to public clouds and/or outsourcing are now
coming back in house.

>
> >
> > Btw, the modern day equivalent to memory channel and ultra low
> latency
> > data sharing is either Infiniband or RoCEv2 (RDMA over converged
> > ethernet)
> >
> > Not sure where it is at right now, but RoCEv2 is on the research
slide
> > of the OpenVMS roadmap.
>
> Goodo.
>
> >
> > Imho, this type of cluster communications capability is critical to
next
> > generation cluster scalability of shared disk clusters. It is how
VSI
> > can address the biggest counter argument to shared disk clusters -
> > "shared disk clusters have scalability issues due to the requirement
of
> > a distributed lock manager"
>
> Oracle's DLM seems not to have these scalability issues.
>

Well, Oracle's DLM came from Tru64 UNIX DLM, which was a watered down
version of OpenVMS DLM, so I really do not see how the Oracle DLM can be
that much different from the OpenVMS DLM.

Regardless, since few can afford Oracle clustering, it's no wonder you do
not hear of any issues.

List pricing (USD) for dual 4-CPU x86 servers with Oracle RAC (yes, big
customers get discounts):
($47K x 4 CPUs x 2 servers) x 1.5 (add for RAC) + 15% of list for mandatory
annual support

Hint - Oracle Rdb has no 50% uplift for its clustering like Oracle RAC
does.

Good news for OpenVMS customers on x86 with Oracle - the previous
formula would include an overall multiplier of 0.5 (Oracle processor factor).

In other words, moving to OpenVMS (Oracle Server or Rdb) on x86-64
should reduce those customers' Oracle pricing by 50%. That alone would
likely justify many customers moving from OpenVMS Integrity/Alpha to
OpenVMS x86-64.
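Reading that list formula literally (a back-of-the-envelope sketch using only the numbers quoted above, not an actual Oracle quote, and assuming the 15% support is charged on the RAC-inclusive figure):

# Back-of-the-envelope reading of the list-price formula above.
# Every input comes from the post; nothing here is an official Oracle price.
per_cpu_list = 47_000        # USD per CPU, list
cpus_per_server = 4
servers = 2
rac_uplift = 1.5             # +50% for RAC
support_rate = 0.15          # 15% of list per year
x86_core_factor = 0.5        # the 0.5 Oracle processor factor mentioned above

base = per_cpu_list * cpus_per_server * servers      # 376,000
with_rac = base * rac_uplift                         # 564,000
annual_support = with_rac * support_rate             # 84,600 per year
with_factor = with_rac * x86_core_factor             # 282,000 on x86-64

print(with_rac, annual_support, with_factor)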

Stephen Hoffman

Feb 16, 2018, 5:22:31 PM
On 2018-02-16 03:15:48 +0000, Kerry Main said:


> mission critical proven DLM,

Handy, definitely. In competitive configurations, other DLMs exist.

> cluster logicals,

Or as is commonly used on various other platforms, LDAP.
https://directory.apache.org/apacheds/ or VSI Enterprise Directory, or
otherwise.

> cluster batch,

Batch is not competitive as scheduling offerings go. It's a pain in
the rump, in practice. Third-party scheduling offerings for OpenVMS,
and other configurations on other platforms have vastly better
offerings. Hadoop YARN or Mesos, etc.

> common file system (new one with significant new features cooking as well).

The new file system is comparatively old, unfortunately. We're
clearly headed toward in-memory processing and byte-addressable
non-volatile storage too, and not toward main processing using HDD or
SSD storage, save as archival and recovery and overflow.

> However, the industry really only looks at shared disk or shared nothing.
>
> Btw, the modern day equivalent to memory channel and ultra low latency
> data sharing is either Infiniband or RoCEv2 (RDMA over converged
> ethernet)
>
> Not sure where it is at right now, but RoCEv2 is on the research slide
> of the OpenVMS roadmap.
>
> Imho, this type of cluster communications capability is critical to
> next generation cluster scalability of shared disk clusters. It is how
> VSI can address the biggest counter argument to shared disk clusters -
> "shared disk clusters have scalability issues due to the requirement of
> a distributed lock manager"

In terms of features and capabilities provided, RDMA is a
next-generation cluster interconnect and not a next-generation cluster.

Richard Maher

Feb 16, 2018, 7:55:30 PM
Kerry, the world, against my judgement, has decreed that "shared servers"
are a thing of the past. On any server instance only one application
shall run. This makes a mockery of your monolith proposals.

>
>> 2) Geographical limitations
>
> If you want sync data (RPO=0), then in any multi-site environment, you
> are typically limited to <100km.

Pathetic!

>
>> 3) No PaaS capability
>
> That can come later .. the public cloud is just a modern hyped name for
> "outsourcing lite"

You just can't get your head around this can you :-(
>
> Many Customers who went to public clouds and/or outsourcing are now
> coming back in house.
>
>>
>>>
>>> Btw, the modern day equivalent to memory channel and ultra low
>> latency
>>> data sharing is either Infiniband or RoCEv2 (RDMA over converged
>>> ethernet)
>>>
>>> Not sure where it is at right now, but RoCEv2 is on the research
> slide
>>> of the OpenVMS roadmap.
>>
>> Goodo.
>>
>>>
>>> Imho, this type of cluster communications capability is critical to
> next
>>> generation cluster scalability of shared disk clusters. It is how
> VSI
>>> can address the biggest counter argument to shared disk clusters -
>>> "shared disk clusters have scalability issues due to the requirement
> of
>>> a distributed lock manager"
>>
>> Oracle's DLM seems not to have these scalability issues.
>>
>
> Well, Oracle's DLM came from Tru64 UNIX DLM, which was a watered down
> version of OpenVMS DLM, so I really do not see how the Oracle DLM can be
> that much different from the OpenVMS DLM.

Educate yourself!

>
> Regardless, since few can afford Oracle Clustering, its no wonder you do
> not hear any issues.

Oh I see, the cost conscious want VMS but not Oracle?

Kerry Main

Feb 16, 2018, 9:10:04 PM
to comp.os.vms to email gateway
> -----Original Message-----
> From: Info-vax [mailto:info-vax...@rbnsn.com] On Behalf Of
> Stephen Hoffman via Info-vax
> Sent: February 16, 2018 5:22 PM
> To: info...@rbnsn.com
> Cc: Stephen Hoffman <seao...@hoffmanlabs.invalid>
> Subject: Re: [Info-vax] Distributed Applications, Hashgraph,
Automation
>
> On 2018-02-16 03:15:48 +0000, Kerry Main said:
>
>
> > mission critical proven DLM,
>
> Handy, definitely. In competitive configurations, other DLMs exist.
>

Just not as proven in mission-critical environments. Experience and
reputation do matter for components as critical as the DLM.

Yes, z/OS has a well respected mission critical DLM as well.

> > cluster logicals,
>
> Or as is commonly used on various other platforms, LDAP.
> https://directory.apache.org/apacheds/ or VSI Enterprise Directory, or
> otherwise.
>
> > cluster batch,
>
> Batch is not competitive as scheduling offerings goes. It's a pain in
> the rump, in practice. Third-party scheduling offerings for OpenVMS,
> and other configurations on other platforms have vastly better
> offerings. Hadoop YARN or Mesos, etc.
>

Most companies look to schedulers as add-on LPs, not as core OS
offerings.

The reason is that they want the same scheduler to run on all their
production platforms.

Heck, no one uses the native Windows Server batch service.

> > common file system (new one with significant new features cooking as
> well).
>
> The new file system is comparatively old, unfortunately. We're
> clearly headed toward in-memory processing and byte-addressable
> non-volatile storage too, and not toward main processing using HDD or
> SSD storage, save as archival and recovery and overflow.
>

Main memory is heading towards TB. Multiple disks (SSD and HDD) are
heading towards PB.

Apples and Oranges. There will continue to be a place for both.

> > However, the industry really only looks at shared disk or shared
> nothing.
> >
> > Btw, the modern day equivalent to memory channel and ultra low
> latency
> > data sharing is either Infiniband or RoCEv2 (RDMA over converged
> > ethernet)
> >
> > Not sure where it is at right now, but RoCEv2 is on the research
slide
> > of the OpenVMS roadmap.
> >
> > Imho, this type of cluster communications capability is critical to
> > next generation cluster scalability of shared disk clusters. It is
how
> > VSI can address the biggest counter argument to shared disk clusters
-
> > "shared disk clusters have scalability issues due to the requirement
of
> > a distributed lock manager"
>
> In terms of features and capabilities provided, RDMA is a
> next-generation cluster interconnect and not a next-generation
cluster.
>

Which is what I stated: "this type of cluster communications
capability".

Kerry Main

Feb 16, 2018, 9:30:10 PM
to comp.os.vms to email gateway
> -----Original Message-----
> From: Info-vax [mailto:info-vax...@rbnsn.com] On Behalf Of
> Richard Maher via Info-vax
> Sent: February 16, 2018 7:55 PM
> To: info...@rbnsn.com
> Cc: Richard Maher <maher_rj...@hotmail.com>
> Subject: Re: [Info-vax] Distributed Applications, Hashgraph, Automation

[snip...]

> servers"
> are a thing of the past. On any server instance only one application
> shall run. This makes a mockery of your monolith proposals.
>
> >
> >> 2) Geographical limitations
> >
> > If you want sync data (RPO=0), then in any multi-site environment, you
> > are typically limited to <100km.
>
> Pathetic!
>

Science and the speed of light.

Of course, it also depends on the R/W ratio of the application, which also impacts exactly how far apart the two sites can be.
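A rough feel for the numbers, assuming light travels at about 200,000 km/s in fibre and ignoring switch and host overhead entirely:

# Ballpark round-trip delay added to every synchronous (RPO=0) write.
SPEED_IN_FIBRE_KM_PER_S = 200_000

for distance_km in (10, 100, 500):
    round_trip_ms = (2 * distance_km / SPEED_IN_FIBRE_KM_PER_S) * 1000
    print(f"{distance_km:>4} km apart: ~{round_trip_ms:.1f} ms added per write")

At ~100 km that is already about a millisecond of unavoidable latency on every committed write, before any real-world equipment is counted.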

> >
> >> 3) No PaaS capability
> >
> > That can come later .. the public cloud is just a modern hyped name for
> > "outsourcing lite"
>
> You just can't get your head around this can you :-(

Outsourcing definition - giving all or part of your IT to a vendor to manage for a variable service fee per month.

Public cloud definition - giving all or part of your IT to a vendor to manage for a variable service fee per month.

What am I missing?

Perhaps I should be drinking more Gartner Kool-Aid?

> >
> > Many Customers who went to public clouds and/or outsourcing are
> now
> > coming back in house.
> >
> >>
> >>>
> >>> Btw, the modern day equivalent to memory channel and ultra low
> >> latency
> >>> data sharing is either Infiniband or RoCEv2 (RDMA over converged
> >>> ethernet)
> >>>
> >>> Not sure where it is at right now, but RoCEv2 is on the research
> > slide
> >>> of the OpenVMS roadmap.
> >>
> >> Goodo.

Need an Aussie dictionary for that one.

😊

> >>
> >>>
> >>> Imho, this type of cluster communications capability is critical to
> > next
> >>> generation cluster scalability of shared disk clusters. It is how
> > VSI
> >>> can address the biggest counter argument to shared disk clusters -
> >>> "shared disk clusters have scalability issues due to the requirement
> > of
> >>> a distributed lock manager"
> >>
> >> Oracle's DLM seems not to have these scalability issues.
> >>
> >
> > Well, Oracle's DLM came from Tru64 UNIX DLM, which was a watered
> down
> > version of OpenVMS DLM, so I really do not see how the Oracle DLM
> can be
> > that much different from the OpenVMS DLM.
>
> Educate yourself!
>
> >
> > Regardless, since few can afford Oracle Clustering, its no wonder you
> do
> > not hear any issues.
>
> Oh I see, the cost conscious want VMS but not Oracle?
>

Last 10 years was all about reducing HW costs.

Next 10 years will be all about reducing SW costs.

Oracle, SAP and similar App / DB players with ridiculous pricing are in for some very tough years ahead.

History - Windows/Linux X86-64 servers were never really considered technically "better" than Solaris/SPARC, OpenVMS/Tru64 Alpha etc in their prime.

However, Customers viewed Windows/Linux X86-64 as "good enough".

Same thing is coming for the big SW companies.

[snip...]

Jim Johnson

Feb 18, 2018, 12:59:23 PM
Briefly, what I've seen about the cloud has many aspects, only a few align with outsourcing. Not sure how much to go into that.


I was following the discussion on shared-everything vs. shared-nothing structures. I've used both. The RMS cache management was pretty expensive to run when I last looked at it. It was especially bad for high write, high collision rate files. This drove a different approach with the TP server in DECdtm. It is structurally a shared nothing service on top of a shared everything access substrate. It uses an unconventional leader election, in that the 'home' system always is the leader if it is alive, and the other systems elect one of themselves as the leader if it isn't.

(This was done via a pattern of use with the DLM, and I agree with Steve that either documenting the known patterns or encapsulating them for easier consumption could be useful)
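A tiny sketch of the election rule just described (pure logic only, no DLM calls; node names are invented):

# Home-preferred leader election: the 'home' node leads whenever it is alive;
# otherwise the survivors deterministically pick one of themselves.
def choose_leader(home, alive_nodes):
    if home in alive_nodes:
        return home                  # failback to home once it is up again
    if alive_nodes:
        return min(alive_nodes)      # any agreed deterministic rule will do
    return None                      # no survivors, no leader

print(choose_leader("NODEA", {"NODEA", "NODEB", "NODEC"}))  # -> NODEA
print(choose_leader("NODEA", {"NODEB", "NODEC"}))           # -> NODEB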

This produced much better write performance, and good recovery availability times. It allowed RDB to assume it had something like a cluster-wide TM without cross node overheads in the normal case. At the time I thought this was a good hybrid between the two models. There are aspects that I still think are. If you have an efficient shared writable volume this is, I think, still a good design. But I'm very aware that the aspects that matter can also be achieved with either remote storage servers (especially with rdma) or with direct replication (e.g. as you'd find with a Paxos log).

I think, but am not sure, that the Audit Server also used a leader election based shared nothing service on top of a shared volume. Fwiw.


Let me add a caveat on the above. I've been away from VMS for >15 years. All my data on VMS is very old, and is likely very out of date.

Jim.

Stephen Hoffman

Feb 18, 2018, 1:36:26 PM
Adding a new-generation memory channel (RDMA) interconnect isn't going
to change the market perception of OpenVMS clustering in any
appreciable nor meaningful fashion.

What you were discussing was the past, and with some incremental
changes to the present, with the potential addition of RDMA. Which
hopefully also includes 40 GbE and some related updates. Not about
hauling the whole environment forward. Which was what started this
thread, and what I was referencing. Where OpenVMS is now has clearly
not convinced a whole lot of folks to purchase OpenVMS and particularly
to adopt clustering as implemented.

There's more than a little work deprecating and replacing the worst of
the parts of OpenVMS while preserving the best and most of the rest.
Rethinking cluster and app configuration and control for instance.
Integrating IP, LDAP, SMB and other ubiquitous services. Updating the
DLM. Scheduling. Etc. Clusters as implemented still have a couple
of really good features, too. Logical names -- cluster or otherwise --
as configuration tools are among the worst of ideas found on OpenVMS,
and I could see replacing the whole of the V4-era design with an
LDAP-based design even for device I/O redirection, and with app
configuration tools and an API based on YAML or otherwise.

Integrating a distributed ledger as an operating system component --
which is what started off this thread -- I'm not so sure about.
Certainly distributed ledgers are very useful for a specific apps and
environments, and there's certainly ample fodder for marketing, but
there's not a whole lot of (no pun intended) consensus around which
distributed ledger schemes and how that's going to work, and there's
certainly a concern that issues arising with cryptocurrencies such as
fraud or theft could undermine the perception of distributed ledgers as
marketing fodder.

Stephen Hoffman

Feb 18, 2018, 2:09:27 PM
On 2018-02-18 17:59:21 +0000, Jim Johnson said:

> Briefly, what I've seen about the cloud has many aspects, only a few
> align with outsourcing. Not sure how much to go into that.
>
> I was following the discussion on shared-everything vs. shared-nothing
> structures. I've used both. The RMS cache management was pretty
> expensive to run when I last looked at it. It was especially bad for
> high write, high collision rate files. This drove a different approach
> with the TP server in DECdtm. It is structurally a shared nothing
> service on top of a shared everything access substrate. It uses an
> unconventional leader election, in that the 'home' system always is the
> leader if it is alive, and the other systems elect one of themselves as
> the leader if it isn't.
>
> (This was done via a pattern of use with the DLM, and I agree with
> Steve that either documenting the known patterns or encapsulating them
> for easier consumption could be useful)
>
> This produced much better write performance, and good recovery
> availability times.

That experience is typical. I've ended up splitting more than a few
apps similarly. Either at the volume level, or within the app.
While SSDs have helped substantially with I/O performance, the
coordination involved with distributed shared writes ends up limited by
how fast you can fling lock requests around. The byte-addressable
non-volatile storage that's coming on-line right now will only increase
the coordination load and the likelihood that sharding will be
considered or required, if you really want to use that memory at speed.
Faster networks also cause problems: https://lwn.net/Articles/629155/

> Let me add a caveat on the above. I've been away from VMS for >15
> years. All my data on VMS is very old, and is likely very out of date.

You're still rather current then, with a few errata. Various of the
spinlocks have been much better broken up and there've been
optimizations in the lock management communications implementation and
elsewhere, and there've been incremental increases to FC HBA speeds and
NIC speeds, but there've not been significant changes to clustering nor
to DECdtm and the DLM since the XA-era work, and the cluster
configuration and management user interface is more or less the same
though has backslid somewhat in various areas. Features such as SDN,
SMB, iSCSI, USB 3.1, UTF-8, and 40 GbE haven't yet arrived in OpenVMS.
Some are underway and some are planned:
https://www.vmssoftware.com/products_roadmap.html

Jim Johnson

Feb 18, 2018, 2:36:07 PM
Yup. It wasn't just the cost of flinging the lock requests around. The old RMS code, at least, propagated writes via the disk, rather than forking the I/O and sending a copy memory-to-memory. From what I remember, we looked at doing that, but it was complex to get right and didn't happen while I was involved. Maybe it has happened since.

One thing to highlight above is that the DLM mechanism we used produced an affinity aware election -- something that is uncommon today. One reason it is uncommon, of course, is that we had a static affinity requirement and the research today is mostly focused on deriving unknown and evolving affinity requirements to drive leader election. But still, for VMS clusters it is pretty straightforward to have failover for recovery and failback on restart in order to keep the cross node traffic down.

> Faster networks also cause problems: https://lwn.net/Articles/629155/
>
> > Let me add a caveat on the above. I've been away from VMS for >15
> > years. All my data on VMS is very old, and is likely very out of date.
>
> You're still rather current then, with a few errata. Various of the
> spinlocks have been much better broken up and there've been
> optimizations in the lock management communications implementation and
> elsewhere, and there've been incremental increases to FC HBA speeds and
> NIC speeds, but there've not been significant changes to clustering nor
> to DECdtm and the DLM since the XA-era work, and the cluster
> configuration and management user interface is more or less the same
> though has backslid somewhat in various areas. Features such as SDN,
> SMB, iSCSI, USB 3.1, UTF-8, and 40 GbE haven't yet arrived in OpenVMS.
> Some are underway and some are planned:
> https://www.vmssoftware.com/products_roadmap.html

Hmm. Ok.

I think I agree with a number of your comments then. I especially share the concerns around deployment agility and scale.

To be clear, I also think it is great that VSI has taken the baton and is pushing on this. There is a lot of good core code to work from, and some excellent engineering principles behind the system. I am still proud of the engineering in that system to this day.

Jim.
(also personal opinion :))

Kerry Main

Feb 18, 2018, 3:00:06 PM
to comp.os.vms to email gateway
> -----Original Message-----
> From: Info-vax [mailto:info-vax...@rbnsn.com] On Behalf Of Jim
> Johnson via Info-vax
> Sent: February 18, 2018 12:59 PM
> To: info...@rbnsn.com
> Cc: Jim Johnson <jjohns...@comcast.net>
> Subject: Re: [Info-vax] Distributed Applications, Hashgraph, Automation
>

[snip...]

>
> Briefly, what I've seen about the cloud has many aspects, only a few align
> with outsourcing. Not sure how much to go into that.
>

Imho - just as there are many different types of public cloud, there are many different types of outsourcing, but the basics are the same.

Some outsourcers provide a GUI-based service catalogue that has workflows supporting automation, approvals, etc.

While many like to think the GUI-based "point and click" to create some VMs is cool or new technology, this is really only a GUI-based service catalogue, which has been part of ITSM for decades.

In reality there are a number of third-party commercial add-ons to VMware that will do this exact thing for internal private clouds aka internal shared services aka the "IT Utility".

Jim - I don't think we ever met while you were at DEC, but I have had numerous "solving the problems of the world" brew sessions with J Apps and M Keyes, who speak very highly of you as being one of the industry leaders in file system / TP designs, so your feedback is more than welcome here.

😊

For those not familiar with Jim's past work, check this out:
<http://www.hpl.hp.com/hpjournal/dtj/vol8num2/vol8num2art1.pdf>

Btw, from what I understand, the new file system (VAFS?) VSI is working on right now is being designed to address some of the issues you mentioned.

You may find this interesting:
<http://www.hp-connect.se/SIG/New_File_System_VMS_Boot%20Camp_2016.pdf>

Stephen Hoffman

Feb 18, 2018, 3:29:09 PM
That implementation hasn't changed.

Somewhat related to this, writing (shadowing) data from server memory
to remote server memory is empirically faster than writing to local
HDD, which can make shadowing from server memory to server memory a
better choice than to HDD. Outboard SSD is faster than HDD, but not
enough. Add in non-volatile byte-addressable storage and this all gets
very interesting for even the folks with apps that require non-volatile
writes.

So many of the existing operating system and app designs are predicated
on the existing I/O performance hierarchy, too.

Related discussions of OpenVMS I/O performance from David Mathog from a
number of years ago:

https://groups.google.com/d/msg/comp.os.vms/4FZHjDQ1R4A/DO5xV-z-XGEJ
ftp://saf.bio.caltech.edu/pub/software/benchmarks/mybenchmark.zip

Shared write is hard to get right, and yet harder to scale up, and
it'll inherently not be competitive with the performance of unshared
and undistributed writes.

In a number of the OpenVMS-related proposals I've encountered,
clustering often ends up getting nixed on price. Folks don't see it as
enough to warrant the expense and the effort of adopting clustering.
Folks that are clustered and are looking at yet higher performance end up
adding their own workarounds for the OpenVMS and clustering limits, if
swapping in faster I/O hardware isn't enough. No
pun intended.

Stephen Hoffman

Feb 18, 2018, 3:56:43 PM
On 2018-02-18 19:56:40 +0000, Kerry Main said:

> Btw, from what I understand, the new file system (VAFS?) VSI is working
> on right now is being designed to address some of the issues you
> mentioned.

VAFS is a big step forward from ODS-2 and ODS-5, though one that should
require very few changes to applications. (Mostly compatible, though
details such as the volume size storage fields may or will need a look
within a few apps, for instance.) VAFS was designed around a decade
or so ago, and was intended for use with then-current HDDs and I/O
hardware. I suspect there'll be SSD-related changes incorporated into
VAFS such as TRIM and secure erase, but that remains to be learned.
I don't expect to see open-channel SSD support nor encrypted storage
support in VAFS very soon, though I'm quite willing to be surprised
here. And the entirety of what VAFS can provide (most) applications
is still going to be limited by the RMS APIs. VAFS will undoubtedly be
faster than current I/O performance with ODS-2 and ODS-5. The larger
volume sizes also make full mirror copies -- HBVS RAID-1 shadowing --
take correspondingly longer or require correspondingly larger
bandwidth, too.

VAFS is somewhat afield from and largely unrelated to distributed
applications, though. Most apps certainly do expect a local file
system of some sort, but whether it's VAFS or ODS-5 or ZFS or Btrfs is up
to what the operating system supports and up to what the app might
require.

Jim Johnson

Feb 18, 2018, 4:23:43 PM
Kerry, thanks much!

I don't think we met, but I recognize your name. And John & Mick were always way too kind.

The VAFS looks interesting, and I'm glad to see that Andy is associated with it.

LFS structures were pretty nascent at the time of Spiralog, and there were things that we definitely got wrong. Spiralog was a shot at leadership in the FS space. I don't want to take credit for that. I arrived late - the very cool ideas behind it predated me, and that team deserves the credit for pushing the state of the art as much as they did.

There are two problems around file access in clusters: being able to store the data reliably at scale, and being able to access the data efficiently across the cluster at scale. Just reading the slides, VAFS looks to help with the first, which is certainly a precondition to much of anything else. What Steve and I were discussing was about the second - that if you have a write mostly (and 'mostly' can be surprisingly small) workload that can be partitioned, driving that workload as, effectively, shared nothing with an HA store is better. That is mostly above the file system itself.


Fwiw, I spent the last 7 years (until I retired last month) working in the Azure infrastructure. It gave me a perspective on the cloud, biased by being part of a cloud vendor. So, this is just my personal view on the cloud - one that I know is partial.

The comparison to outsourcing is missing two aspects that are probably most front and center to me. First, it misses the dynamism about resource acquisition and release. An outsourcer, or any 'private cloud' (inside company cloud) is not going to be able to provide the ability to have quick peak and valley workloads with equally low cost. The public clouds can. That has led to workloads that are inherently transient. They spin up 100's to 1000's of VMs for a short period, use them, and then give them back. You need a lot of incoming workloads to amortize that effectively.

This also pushes on deployment and configuration agility. If you're expecting to use 1000 VMs for an hour, but you first have to spend an hour deploying to them, you're not going to be happy. So that drives deployment times to small numbers of minutes.

This is where batch has gone, from what I can see. Whether it is Azure Batch, Hadoop, or something else.

But aren't all my workloads basal (i.e. always must be there)? Maybe today. I've watched a lot of basal workloads turn into transient workloads as people have understood that there's value in doing so. It wasn't that they had to be basal, just that it was easier to express if there was no real transient resource capability. There are indeed basal workloads, but they're typically a smaller subset than people first expect.


The second aspect also has to do with agility. Again, my understanding is from thinking about software providers. Every vendor is in a repeated contest with their competitors. This means that speed of getting from requirement to product in front of the potential customer matters -- i.e. the length of their relative release cycles. A shorter release cycle matters - it lets you get ahead of your competition, showing features that are more relevant and appearing more responsive (note that this doesn't say that the engineers aren't working on about the same cool features in both places, only that the potential customer is not seeing them for the company with the longer release cycle).

And one aspect of the cycle time is the cost of release. The higher the cost, the more work that has to go into the release for it to be justified. For a traditional ('box') product, this is rarely shorter than 6 months.

These are just things that have been true.

The cloud disrupted this in a big way. The delivery mechanism and the structure of most services (including the incorporation of devops) has driven this cycle time to as low as minutes for higher level features to a few months, worst case, for deep technical changes. That means that a cloud service competing with a box product is always ahead, and often way ahead, of responding to changing customer requirements.

Note that this is a lot more than just changing the delivery channel. That is part of it. But it also requires care on the engineering processes, on the service architecture, on monitoring and telemetry, and inclusion of devops. For the last, I had a continuous love-hate relationship with devops - I loved the insight it gave me on my customers and how my service actually worked, and hated the 2AM calls. 😃

This is incomplete - it is just two top level thoughts that I had when I was reading. I honestly don't know how much of this is relevant to VMS. I'm just sharing my thoughts. YMMV. Again, just my personal opinions.

Jim.

Richard Maher

Feb 18, 2018, 6:04:51 PM
On 19-Feb-18 3:36 AM, Jim Johnson wrote:

Great to see you still writing here!

Jim Johnson

Feb 18, 2018, 7:02:58 PM
On Sunday, February 18, 2018 at 3:04:51 PM UTC-8, Richard Maher wrote:
> On 19-Feb-18 3:36 AM, Jim Johnson wrote:
>
> Great to see you still writing here!

Thanks!

Jim.

IanD

Feb 20, 2018, 2:55:56 PM
On Saturday, February 17, 2018 at 1:30:10 PM UTC+11, Kerry Main wrote:

<snip>

>
> Last 10 years was all about reducing HW costs.
>
> Next 10 years will be all about reducing SW costs.
>

...and the removal of any specific hardware awareness from the application, so that you interface with virtual layers only. Virtual IPs, virtual devices, virtual everything.

No hardware dependencies at the application level are wanted. Automated deployments, DevOps, firewalls/networks all managed from generic interfaces, with the hard work done by software underneath so humans don't need to learn the complexities of what is underneath (except when it all blows up, of course!).

Hardware lock-in begone, software lock-in begone, humans to tend things, begone

Businesses want the ability to deploy over private cloud, public cloud or physical servers, and mix and match components at will, without the need for teardown and/or specific reconfiguration.

VMS clusters have got a long way to go before they can adhere to this dynamic model

> Oracle, SAP and similar App / DB players with ridiculous pricing are in for some very tough years ahead.
>

They are being exited out the door as simply too expensive, especially when they push their own barrow. They no longer hold the exclusive ability to unite the disparate parts of the business together

> History - Windows/Linux X86-64 servers were never really considered technically "better" than Solaris/SPARC, OpenVMS/Tru64 Alpha etc in their prime.
>
> However, Customers viewed Windows/Linux X86-64 as "good enough".
>
> Same thing is coming for the big SW companies.
>

This is the same argument I've said about VMS Clusters

Linux clusters, no matter how crippled or fickle compared to VMS clusters, are considered 'good enough', because other layers have stepped in, be it application of some other like Hadoop etc

IanD

unread,
Feb 20, 2018, 2:57:23β€―PM2/20/18
to
On Thursday, February 15, 2018 at 6:53:40 AM UTC+11, Arne Vajhøj wrote:
> On 2/14/2018 1:52 PM, IanD wrote:
> > This is indeed interesting news
> >
> > I know it was posted a while ago but I just saw it
> >
> > https://www.swirlds.com/vms-software-selects-swirlds-hashgraph-as-platform-to-build-secure-distributed-applications/
>
> > I wonder if Hashgraph has the ability to ultimately replace cluster
> > traffic on OpenVMS? It's supposed to scale to I think 200K
> > transactions per second, according to the glossy brouchers at least
> I think it is a business application thing not an OS thing.
>
> [application clusters are in many ways doing the same as OS clusters,
> but ...]
>
> After all it is a J thing.
>
> :-)
>
> Arne

Yes, it is an application thing

But the throughput and the number of nodes possible were the interesting points to me

I thought if it could be adapted (and I have no idea if it can be; I've only watched some videos and read the glossy brochures) then it might be a mechanism to allow scaling of VMS clusters beyond the current 96-node limit to a few thousand nodes, which I believe Hashgraph can scale to
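
To make the scaling intuition concrete, here is a toy Python simulation of random-peer gossip, which is the dissemination mechanism hashgraph-style protocols are built on. It is emphatically not the Swirlds algorithm (no events, signatures or virtual voting), and the node counts are just examples; it only illustrates that the number of gossip rounds needed to reach every member grows roughly with log2(n), which is why memberships in the thousands are at least plausible.

# Toy simulation of epidemic gossip, the dissemination mechanism that
# hashgraph-style protocols build on.  NOT the Swirlds algorithm itself;
# it only shows that the number of rounds grows roughly as log2(n).
import random

def gossip_rounds(n_nodes: int, seed: int = 42) -> int:
    """Return the number of rounds until every node has heard the news."""
    rng = random.Random(seed)
    informed = {0}                      # node 0 starts with the new event
    rounds = 0
    while len(informed) < n_nodes:
        rounds += 1
        newly = set()
        for _ in informed:
            newly.add(rng.randrange(n_nodes))   # each informed node syncs
        informed |= newly                       # with one random peer
    return rounds

if __name__ == "__main__":
    for n in (96, 1000, 4000):
        print(f"{n:5d} nodes -> {gossip_rounds(n)} gossip rounds")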

The other point that Hoff keeps raising and I agreed wholeheartedly with is around the use of LDAP to replace VMS UAF etc

Arne Vajhøj

unread,
Feb 20, 2018, 8:57:46β€―PM2/20/18
to
On 2/20/2018 2:57 PM, IanD wrote:
> On Thursday, February 15, 2018 at 6:53:40 AM UTC+11, Arne Vajhøj wrote:
>> On 2/14/2018 1:52 PM, IanD wrote:
>>> This is indeed interesting news
>>>
>>> I know it was posted a while ago but I just saw it
>>>
>>> https://www.swirlds.com/vms-software-selects-swirlds-hashgraph-as-platform-to-build-secure-distributed-applications/
>>
>>> I wonder if Hashgraph has the ability to ultimately replace cluster
>>> traffic on OpenVMS? It's supposed to scale to I think 200K
>>> transactions per second, according to the glossy brouchers at least
>> I think it is a business application thing not an OS thing.
>>
>> [application clusters are in many ways doing the same as OS clusters,
>> but ...]
>>
>> After all it is a J thing.
>>
>> :-)
>
> Yes, it is an application thing
>
> But the throughout and number of nodes possible was the interesting point to me
>
> I thought if it could be adapted (and I have no idea if it can be,
> I've only watched some video's etc and read the glossy brochures)
> then it might be a mechanism to allow scaling of VMS clusters beyond
> the 96 to a few thousand, which I believe Hashgraph can scale to

It may very well be able to.

Most non-sharded persisting clusters with more than 1000 nodes are Java
based.

But the integration from native to Java would be a hassle.

Arne

Kerry Main

unread,
Feb 21, 2018, 6:10:09β€―AM2/21/18
to comp.os.vms to email gateway
> -----Original Message-----
> From: Info-vax [mailto:info-vax...@rbnsn.com] On Behalf Of IanD
> via Info-vax
> Sent: February 20, 2018 2:57 PM
> To: info...@rbnsn.com
> Cc: IanD <iloveo...@gmail.com>
> Subject: Re: [Info-vax] Distributed Applications, Hashgraph, Automation
>
Let's not forget that OpenVMS can be deployed in a shared nothing environment just as easily as Windows or Linux.

All of the locking and data partitioning in a shared nothing "cluster" is done at the app / db level, so why could this not also be done with OpenVMS?

Yes, the pricing, TCP/IP stack and file system on OpenVMS need to be updated, but these major changes are cooking for release with OpenVMS X86.

> The other point that Hoff keeps raising and I agreed wholeheartedly with
> is around the use of LDAP to replace VMS UAF etc
>

The right approach would be to adopt a local/domain model similar to Windows.

Windows has the concept of a local account (analogous to the SYSUAF) and an AD domain account (LDAP)
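
As a rough illustration of the "domain account" half of that model, here is a minimal Python sketch using the third-party ldap3 package. The directory host and the DN layout are invented placeholders, and this is not OpenVMS's own LDAP support; it only shows the idea of the credential check living in an external directory rather than in a local authorization file.

# Minimal sketch of directory-based ("domain account" style) authentication
# using the third-party ldap3 package (pip install ldap3).  The host name
# and DNs are placeholders; a successful simple bind means the directory
# vouches for the credentials, with no local account file involved.
from ldap3 import Server, Connection, SIMPLE

def ldap_authenticate(username: str, password: str) -> bool:
    """Return True if the directory accepts a simple bind for this user."""
    user_dn = f"uid={username},ou=people,dc=example,dc=com"  # placeholder DN
    server = Server("ldaps://directory.example.com")         # placeholder host
    conn = Connection(server, user=user_dn, password=password,
                      authentication=SIMPLE)
    ok = conn.bind()      # bind success/failure is the authentication result
    conn.unbind()
    return ok

if __name__ == "__main__":
    print(ldap_authenticate("jsmith", "not-a-real-password"))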

Kerry Main

unread,
Feb 21, 2018, 6:10:14β€―AM2/21/18
to comp.os.vms to email gateway
> -----Original Message-----
> From: Info-vax [mailto:info-vax...@rbnsn.com] On Behalf Of Arne
> Vajhøj via Info-vax
> Sent: February 20, 2018 8:58 PM
> To: info...@rbnsn.com
> Cc: Arne Vajhøj <ar...@vajhoej.dk>
> Subject: Re: [Info-vax] Distributed Applications, Hashgraph, Automation
>
Keep in mind that these very large node-count clusters are typically built from small independent servers (fewer than 12 cores each, modest memory) connected by Ethernet networking. The big challenge is keeping that many servers busy, which means very high application / data sharding complexity.

Also - by far the biggest source of overall solution latency today is network LAN latency.

That is why most next-gen designs use fewer server nodes with larger core counts, large amounts of shared memory, and InfiniBand/RoCEv2 technologies as the cluster communication interconnect.

Just ask Google - they are introducing Power9 based server solutions into their service offerings.

Btw - lack of scalability (as compared to natively compiled apps) is why supercomputers (again, very large numbers of independent servers with small core counts) do not typically use Java.
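
For anyone who has not had to build it, the toy Python sketch below shows the kind of application-level data partitioning that a shared-nothing design pushes up into the app/DB layer - a bare consistent-hash ring with no replication, rebalancing or failure handling. In a shared-disk cluster with a DLM this bookkeeping simply does not exist at the application level.

# Toy consistent-hash ring: the application decides which node owns which
# key.  Illustration only - no virtual-node weighting beyond a fixed count,
# no replication, no rebalancing protocol.
import bisect
import hashlib

def _hash(key: str) -> int:
    return int(hashlib.sha1(key.encode()).hexdigest(), 16)

class HashRing:
    def __init__(self, nodes, vnodes: int = 64):
        # Each node gets several points on the ring so keys spread evenly.
        self._ring = sorted(
            (_hash(f"{node}#{i}"), node)
            for node in nodes for i in range(vnodes)
        )
        self._keys = [h for h, _ in self._ring]

    def owner(self, key: str) -> str:
        """Which node is responsible for this key?"""
        idx = bisect.bisect(self._keys, _hash(key)) % len(self._ring)
        return self._ring[idx][1]

if __name__ == "__main__":
    ring = HashRing([f"node{i:02d}" for i in range(8)])
    for k in ("order-1001", "order-1002", "customer-77"):
        print(k, "->", ring.owner(k))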

Kerry Main

unread,
Feb 21, 2018, 6:10:23β€―AM2/21/18
to comp.os.vms to email gateway
> -----Original Message-----
> From: Info-vax [mailto:info-vax...@rbnsn.com] On Behalf Of IanD
> via Info-vax
> Sent: February 20, 2018 2:56 PM
> To: info...@rbnsn.com
> Cc: IanD <iloveo...@gmail.com>
> Subject: Re: [Info-vax] Distributed Applications, Hashgraph,
Automation
>
> On Saturday, February 17, 2018 at 1:30:10 PM UTC+11, Kerry Main wrote:
>
> <snip>
>
> >
> > Last 10 years was all about reducing HW costs.
> >
> > Next 10 years will be all about reducing SW costs.
> >
>
> ...and the removal of any specific hardware awareness from the
> application so that you interface with virtual layers only. Virtual
IP's,
> Virtual devices, virtual everything
>

This has been a best programming practice for a long time, i.e. using
logicals, FQDNs (not IPs), and aliases.

Concept is often called "service transparency".
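
A trivial Python sketch of that idea, with a placeholder FQDN: the client only ever holds a stable service name, and what that name maps to (host, site, IPv4 or IPv6) is left to DNS or the load balancer at run time. On OpenVMS the same role is often played by a logical name or a cluster alias.

# Tiny illustration of "service transparency": the client knows a stable
# service name only (a placeholder FQDN here), never a literal IP address.
import socket

SERVICE_NAME = "orders.example.com"   # placeholder; resolution is external
SERVICE_PORT = 8443

def resolve_service():
    """Return the (family, address) pairs the name currently resolves to."""
    infos = socket.getaddrinfo(SERVICE_NAME, SERVICE_PORT,
                               proto=socket.IPPROTO_TCP)
    return [(family.name, sockaddr[0]) for family, _, _, _, sockaddr in infos]

if __name__ == "__main__":
    print(resolve_service())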

> No hardware dependencies at the application level is wanted.

This has been around for a long time.

> Automated deployments, DevOPS, firewalls / networks all managed
> from generic interfaces with the hard work done by software
> underneath so humans don't need to learn the complexities of what is
> underneath (except when it all blows up of course!).
>

Support of virtual worlds like software-defined DCs is also risky
precisely because it is so easy. The firewalls are now virtual, so a
firewall admin inadvertently clicking on a wrong rule can expose all
sorts of data very easily.

> Hardware lock-in begone, software lock-in begone, humans to tend
> things, begone
>
> Business want the ability to deploy over both private, public cloud or
> physical servers and mix n match components at will without the need
> for tear down and/or to specifically reconfigure
>

Be careful here - a lot of this is Kool-Aid provided by media, cloud
(aka outsourcing) vendors and consulting orgs.

Yes, there are pros and cons with public and private clouds. One needs
to understand these carefully before drinking the Kool-Aid.

> VMS clusters have got a long way to go before they can adhere to this
> dynamic model
>

I would argue that OpenVMS clusters have supported "location independent
services" for a long time: multi-site cluster logicals, aliases, HBVS,
and a shared-disk active-active strategy where one does not need to know
which server or which site you are on - a program can run, or a job can
be submitted to batch, on any server with no node-specific logic
embedded.

Is there room for improvement?

Sure, but the same can be said for Windows and Linux, where the concept
of multi-site data is usually based on replication technologies and data
partitioning (read: lots of complexity at the App/data level).

> > Oracle, SAP and similar App / DB players with ridiculous pricing are
in for
> some very tough years ahead.
> >
>
> They are being exited out the door as simply too expensive, especially
> when they push their own barrow. They no longer hold the exclusive
> ability to unite the disparate parts of the business together
>
> > History - Windows/Linux X86-64 servers were never really considered
> technically "better" than Solaris/SPARC, OpenVMS/Tru64 Alpha etc in
> their prime.
> >
> > However, Customers viewed Windows/Linux X86-64 as "good
> enough".
> >
> > Same thing is coming for the big SW companies.
> >
>
> This is the same argument I've said about VMS Clusters
>
> Linux clusters, no matter how crippled or fickle compared to VMS
> clusters, are considered 'good enough', because other layers have
> stepped in, be it application of some other like Hadoop etc

See earlier discussions regarding the pros and cons of shared disk vs.
shared nothing clusters.

Linux/GFS is shared disk like OpenVMS (locking via DLM, no data
partitioning required, but can be optimized by doing this). Regular
Linux is shared nothing (locking handled at App / DB layer with data
partitioning required).

OpenVMS can be deployed in either model.

Bottom line - there is no guarantee any one technology will be
successful in the future. It's wide open.

Richard Maher

unread,
Feb 21, 2018, 8:19:30β€―AM2/21/18
to
Still at Microsoft?

With the death of SOAP I guess all the WS-Transaction stuff went out the
door as well?

Bring back an SSL form of TIP!!!

Jim Johnson

unread,
Feb 21, 2018, 9:59:06β€―AM2/21/18
to
Probably very boring for the main thread. Fwiw, I retired last month, and have been enjoying the transition since then. :)

I've not been involved in the SOAP/WS-T space for 10+ years, I think. I've lost track of what that team is doing, I'm afraid. I don't know where that work stands.

Jim.
These days I can be found at jjohnson4250 at comcast dot net.

Stephen Hoffman

unread,
Feb 21, 2018, 1:34:35β€―PM2/21/18
to
On 2018-02-18 21:23:40 +0000, Jim Johnson said:


> Fwiw, I spent the last 7 years (until I retired last month) working in
> the Azure infrastructure. It gave me a perspective on the cloud,
> biased by being part of a cloud vendor. So, this is just my personal
> view on the cloud - one that I know is partial.
>
> The comparison to outsourcing is missing two aspects that are probably
> most front and center to me. First, it misses the dynamism about
> resource acquisition and release. An outsourcer, or any 'private
> cloud' (inside company cloud) is not going to be able to provide the
> ability to have quick peak and valley workloads with equally low cost.
> The public clouds can. That has led to workloads that are inherently
> transient. They spin up 100's to 1000's of VMs for a short period, use
> them, and then give them back. You need a lot of incoming workloads to
> amortize that effectively.
>
> This also pushes on deployment and configuration agility. If you're
> expecting to use 1000 VMs for an hour, but you first have to spend an
> hour deploying to them, you're not going to be happy. So that drives
> deployment times to small numbers of minutes.

Kerry's business is seemingly dead-center in this market, but Kerry's
apparently not (yet?) interested in this. Gaming has its hits and its
misses, big launches and hopefully long tails, and system and network
loads can vary widely. It's also distributed, which gets into
discussions of globally-distributed locality, which haven't been
mentioned here yet. Based on what's been presented and what's been
posted here, Kerry is planning on over-provisioning and clustering
private hardware. Which also includes dealing with some of the
difficulties within OpenVMS around preferably-unattended installations
and configurations and lifecycle distributed security. A large part of
this implementation hasn't seen an overhaul in OpenVMS since ~V6.0 with
PCSI replacing VMSINSTAL.

> This is where batch has gone, from what I can see. Whether it is Azure
> Batch, Hadoop, or something else.

DECscheduler was vastly easier than dealing with home-grown batch
procedures twenty years ago, and options and alternatives have only
gotten better. OpenVMS doesn't even have something of the sheer
sophistication of cron, which is not competitive.

> But aren't all my workloads basal (i.e. always must be there)? Maybe
> today. I've watched a lot of basal workloads turn into transient
> workloads as people have understood that there's value in doing so. It
> wasn't that they had to be basal, just that it was easier to express if
> there was no real transient resource capability. There are indeed
> basal workloads, but they're typically a smaller subset than people
> first expect.
>
>
> The second aspect also has to do with agility. Again, my understanding
> is from thinking about software providers. Every vendor is in a
> repeated contest with their competitors.

The better ones are in a contest with themselves; with replacing and
updating their own products.

> This means that speed of getting from requirement to product in front
> of the potential customer matters -- i.e. the length of their relative
> release cycles. A shorter release cycle matters - it lets you get
> ahead of your competition, showing features that are more relevant and
> appearing more responsive (note that this doesn't say that the
> engineers aren't working on about the same cool features in both
> places, only that the potential customer is not seeing them for the
> company with the longer release cycle).
>
> And one aspect of the cycle time is the cost of release. The higher
> the cost, the more work that has to go into the release for it to be
> justified. For a traditional ('box') product, this is rarely shorter
> than 6 months.

VSI hasn't embraced continuous releases, though they'll likely be
thinking about that. Outside of patches.

> These are just things that have been true.
>
> The cloud disrupted this in a big way. The delivery mechanism and the
> structure of most services (including the incorporation of devops) has
> driven this cycle time to as low as minutes for higher level features
> to a few months, worst case, for deep technical changes. That means
> that a cloud service competing with a box product is always ahead, and
> often way ahead, of responding to changing customer requirements.
>
> Note that this is a lot more than just changing the delivery channel.
> That is part of it. But it also requires care on the engineering
> processes, on the service architecture, on monitoring and telemetry,
> and inclusion of devops. For the last, I had a continuous love-hate
> relationship with devops - I loved the insight it gave me on my
> customers and how my service actually worked, and hated the 2AM
> calls. 😃
>
> This is incomplete - it is just two top level thoughts that I had when
> I was reading. I honestly don't know how much of this is relevant to
> VMS. I'm just sharing my thoughts. YMMV. Again, just my personal
> opinions.

It's what I've seen from the customer side of this mess, too.
Duplicating what already exists is not competitive, and it increases
costs and testing efforts, or makes the release cycles slower, or both.
I really need a good reason to do that.

Beyond discussions of distributed operations and clustering - what
kicked the whole discussion off here - OpenVMS really needs to be
better around installing into a hosted environment, even if some of the
folks - not the least of whom is Kerry - don't plan on ever using that
approach. Even if it's updates and changes for easier installations
and configurations and operations in a private cloud sans hardware
console, for those that have reasons to (try to) own the whole stack.

DaveFroble

unread,
Feb 21, 2018, 4:21:18β€―PM2/21/18
to
As many are aware, I don't get out much, so I have no idea what percentage of
users would fit the description of needing varying resources. My experience is
more with situations where the requirements are more fixed, basically the same
every day.

Got any numbers showing the distribution of users based upon varying, or
non-varying requirements?


--
David Froble Tel: 724-529-0450
Dave Froble Enterprises, Inc. E-Mail: da...@tsoft-inc.com
DFE Ultralights, Inc.
170 Grimplin Road
Vanderbilt, PA 15486

Jim Johnson

unread,
Feb 21, 2018, 5:20:01β€―PM2/21/18
to
I've tried to be cautious in what I say about this, at least partly because I don't know what sort of use current VMS systems have. I do not want to presume relevance here.

Because of that, let's instead look at what the enablers and drivers are. You'll have to decide if they'd be at all relevant to you.

First, if an application is only a scale-up application, then this discussion isn't relevant. To get bigger you need a bigger machine, not more machines. To get smaller, you need a smaller machine, not fewer machines.

Second, if it is a scale-out application, then you'll over-provision enough to carry you through any acquisition delays. If it takes a quarter to get a new batch of machines, then you'll plan to have enough until then. Drops in usage that are smaller than your acquisition delay just don't count in your planning, as you can't respond in that time.

Third, if your cost structure is such that there is no economic benefit to giving back machines that you're not fully using, then you'll not add the complexity to do so.

So, in a traditional world where you're running on physical machines that you've purchased and installed, there's a lot of bias to running the applications as if they had fixed loads. You might get some variation between a few applications, such as you'd find between open and after-hours application runs, but overall there was little ROI in trying to closely track load.

Now, let's imagine (and I'm pulling these numbers out of the air for the purposes of discussion) that I make a few changes. I move to a VM-based workload in the cloud with a more modern configuration management system -- such that I'm not more than, say, 10m from having a new VM with a new instance of my application online and running, and not more than, say, 1m from removing an instance of my application. Furthermore, I'm now charged by the instance-minute for the resources I consume, so dumping VMs that are not being used heavily enough provides immediate payback.

A lot of interactive workloads suddenly become very interesting, especially those with diurnal patterns in a given region. Or with monthly or quarterly or yearly peaks. Or recurring, but non-continuous, analysis workloads.

The management of these can be simplified with autoscaling services that use real time monitoring and rule bases to automatically shut down or create instances based on the current demand.
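
A minimal sketch of that rule-base idea in Python. Every threshold, the cooldown and the metric source are invented for illustration; real cloud autoscaling services implement this loop, plus the hard parts (metric pipelines, flap damping, quotas), for you.

# Minimal sketch of a rule-based autoscaler: watch an aggregate load
# metric, add an instance when it runs hot, remove one when it runs cold,
# and use a cooldown so the roughly-ten-minute spin-up / one-minute
# tear-down costs never cause flapping.  All numbers and the metric
# source are invented; this is an illustration, not a real service.
import time

MIN_INSTANCES, MAX_INSTANCES = 2, 50
SCALE_OUT_AT, SCALE_IN_AT = 0.75, 0.30   # average utilisation thresholds
COOLDOWN_SECONDS = 600                   # roughly one VM spin-up time

def average_utilisation(instance_count: int) -> float:
    """Placeholder: in real life this comes from the monitoring pipeline."""
    return 0.5

def autoscale_forever() -> None:
    instances = MIN_INSTANCES
    last_change = float("-inf")          # allow an immediate first decision
    while True:
        util = average_utilisation(instances)
        now = time.monotonic()
        if now - last_change >= COOLDOWN_SECONDS:
            if util > SCALE_OUT_AT and instances < MAX_INSTANCES:
                instances += 1           # paying per instance-minute, so
                last_change = now        # only add capacity that earns it
            elif util < SCALE_IN_AT and instances > MIN_INSTANCES:
                instances -= 1           # give the VM back promptly
                last_change = now
        time.sleep(60)                   # evaluate once a minute

# autoscale_forever() runs until interrupted; call it from a service
# wrapper rather than at import time.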

Fwiw,
Jim.

Stephen Hoffman

unread,
Feb 21, 2018, 5:23:38β€―PM2/21/18
to
On 2018-02-21 21:21:17 +0000, DaveFroble said:

> As many are aware, I don't get out much, so I have no idea what
> percentage of users would fit the description of needing varying
> resources. My experience is more with situations where the
> requirements are more fixed, basically the same every day.

A number of sites have seasonal activities and/or have peak seasons,
and for any of various business-related reasons. Ask'm what their
upgrade window is, and when their systems are most heavily loaded.

Some other sites have incremental growth with plots out six months or
longer; plots with predictions of when their capacity requirements will
outgrow their current hardware.

But varying loads can also include operational-related activities such
as running backups, activities making heavy use of encryption or
compression, or of running weekly or monthly reports, optimizing a
database or local storage, or whatever, too.

> Got any numbers showing the distribution of users based upon varying,
> or non-varying requirements?

No, I don't.

What I do see are a lot of folks with lots of spare cycles on their
OpenVMS systems; with larger server configurations than
they need for their typical load. Existing supported server hardware
and software forced many (most?) OpenVMS folks into over-building and
over-provisioning their data centers.

We're all also used to the effort involved in spinning up a new OpenVMS
system instance, which gets back to integrating the pieces and parts
and core services into the base distro, of integrating IP networking,
of provisioning, of streamlining the patch process, of sandboxing and
app isolation, and other assorted details.

OpenVMS is headed into an era when that over-provisioning won't be as
centrally required, as support for x86-64 and for operating as a guest
becomes available. Where spinning up an instance can and should be a
whole lot easier and faster; more competitive. Spin up a cluster
member for running backups or whatever. Or for dealing with a surprise
increase in loads, whether due to a data center failure and fail-over
elsewhere in your organization, or due to unexpectedly-increased app
loads secondary to any number of potential reasons. Right now,
over-provisioning is often seen as easier than adapting to a changing
load, and cheaper than (for instance) clustering. But how long is that
approach going to remain competitive? For some folks with small
seasonal variations, probably quite a while. For other folks with
wider variations in app activities or with the expectation of app or
server or site fail-overs or whatever, maybe they get interested?
It's really quite nice to spin up an instance or a dozen instances for
(for instance) software testing, too.

Pricing aside - and OpenVMS Alpha diverges from past practices here,
and diverges in the right direction - cluster rolling upgrades and
clustering are still a powerful construct for end-users and for
developers. This gets back to making details such as the DLM and
deployments easier to use, as well as other enhancements that've been
mentioned in various threads.

I'm here ignoring the HPE iCAP support, as that capability hasn't
seemed particularly popular among folks.
http://h41379.www4.hpe.com/openvms/journal/v13/troubleshooting_icap.html

Collecting telemetry - opt-in, etc. - would help VSI figure some of this
out, too.

DaveFroble

unread,
Feb 21, 2018, 8:41:48β€―PM2/21/18
to
I agree that some things that make system management and monitoring easier would
be a good thing.

I've got solar panels, and the inverters include a crude web server. I can
connect with a browser and sit there and watch the energy I'm generating in real
time, and non-real time reporting. A nice concept. VMS doing similar is also a
nice concept.

But, when talking about being able to "spin up" additional resources on demand,
I have to ask first, what is the problem, and what type and amount of resources
should be thrown at the problem.

All I have to go on is my own experiences. None of our customers are running a
cluster. It's been discussed, as have SANs, and the customers don't see the need
considering the cost.

Running on a single low end VMS system is about as cheap as one can get. Not
sure that cloud services would be any cheaper.

So, define the problem. Am I the 1% who would not benefit, or am I the 99% who
would sooner see the resources used for other problems? It just seems to me
that asking for something without seeing the demand isn't the way to address things.

Robert A. Brooks

unread,
Feb 21, 2018, 9:34:17β€―PM2/21/18
to
On 2/21/2018 8:41 PM, DaveFroble wrote:

> I've got solar panels, and the inverters include a crude web server.  I can
> connect with a browser and sit there and watch the energy I'm generating in real
> time, and non-real time reporting.  A nice concept.  VMS doing similar is also a
> nice concept.

monitoring.solaredge.com ?

--
-- Rob

Kerry Main

unread,
Feb 21, 2018, 9:40:10β€―PM2/21/18
to comp.os.vms to email gateway
> -----Original Message-----
> From: Info-vax [mailto:info-vax...@rbnsn.com] On Behalf Of
> Stephen Hoffman via Info-vax
> Sent: February 21, 2018 1:35 PM
> To: info...@rbnsn.com
> Cc: Stephen Hoffman <seao...@hoffmanlabs.invalid>
> Subject: Re: [Info-vax] Distributed Applications, Hashgraph, Automation
>
Large shops inevitably use custom methods to deploy "gold" images.

A gold image puts all patches, agents, hardening in one custom image so you simply lay that image down on the HW, customize it for things like name, IP etc. and then boot it into the solution.

Reference:
<http://h41379.www4.hpe.com/openvms/journal/v15/blade_servers_ovms.pdf>

Also, reference the new Synergy HW based deployment option from HPE: (think OpenVMS X86-64)
<https://www.youtube.com/watch?v=eB59ycpv0w4>

Re: public clouds
Just to clarify, public cloud may have some benefits, but for any environment where you need to tightly control overall solution latency, public cloud is not the answer.

As a reminder, much of the public Internet is based on MPLS protocols, which give you availability, but if something internal to the MPLS cloud is unavailable it will re-route the connection and you have no idea what your new latency is going to be.

Reference:
<http://www.bbc.com/news/business-36854291>
"Onlive had many challenges. but perhaps the most difficult was latency, or time delay. It is the perceivable amount of time players wait for a game to send commands to a data centre and then send back the results."

> > This is where batch has gone, from what I can see. Whether it is Azure
> > Batch, Hadoop, or something else.
>
> DECscheduler was vastly easier than dealing with home-grown batch
> procedures twenty years ago, and options and alternatives have only
> gotten better. OpenVMS doesn't even have something of the sheer
> sophistication of cron, which is not competitive.
>

Again, most enterprise Customers would rather purchase a cross-platform scheduler with a common look and feel than have a whizz-bang scheduler on one platform that does not run on all of their other platforms.

> > But aren't all my workloads basal (i.e. always must be there)? Maybe
> > today. I've watched a lot of basal workloads turn into transient
> > workloads as people have understood that there's value in doing so. It
> > wasn't that they had to be basal, just that it was easier to express if
> > there was no real transient resource capability. There are indeed
> > basal workloads, but they're typically a smaller subset than people
> > first expect.
> >
> >
> > The second aspect also has to do with agility. Again, my understanding
> > is from thinking about software providers. Every vendor is in a
> > repeated contest with their competitors.
>
> The better ones are in a contest with themselves; with replacing and
> updating their own products.
>
> > This means that speed of getting from requirement to product in front
> > of the potential customer matters -- i.e. the length of their relative
> > release cycles. A shorter release cycle matters - it lets you get
> > ahead of your competition, showing features that are more relevant [...]

See deployment links above.

Could OpenVMS installations be better from a single server install perspective? Absolutely.

However, as noted above, large enterprises do not use the traditional install processes (on any platform).

Once OpenVMS is available in a virtual KVM environment, I would anticipate things similar to VMware's templates will be available for OpenVMS.

Kerry Main

unread,
Feb 21, 2018, 10:20:05β€―PM2/21/18
to comp.os.vms to email gateway
> -----Original Message-----
> From: Info-vax [mailto:info-vax...@rbnsn.com] On Behalf Of Jim
> Johnson via Info-vax
> Sent: February 21, 2018 5:20 PM
> To: info...@rbnsn.com
> Cc: Jim Johnson <jjohns...@comcast.net>
> Subject: Re: [Info-vax] Distributed Applications, Hashgraph, Automation
>
[snip...]

Jim - let's put this in perspective.

How many Customers require a bump of 1,000 VM servers for only an hour?

What about the storage, load balancing, network load and security (FW) services associated with spinning up 1,000 VM's?

Backups of data? AV scanning of data?

What about the complexity of data partitioning, App-data routing and data replication (assuming DR is required) across so many VM's for so short a period?

>
> I've tried to cautious in what I say about this, at least partly because I
> don't know what sort of use current VMS systems have. I do not want to
> presume relevance here.
>
> Because of that, let's instead look at what the enablers and drivers are.
> You'll have to decide if they'd be at all relevant to you.
>
> First, if an application is only a scale-up application, then this discussion
> isn't relevant. To get bigger you need a bigger machine, not more
> machines. To get smaller, you need a smaller machine, not fewer
> machines.
>

Agree. Fwiw, this is the issue HPE's HP-UX has and why HPE used to push big Superdomes for scaling up.

> Second, if it is a scale-out application, then you'll over-provision enough
> to carry you through any acquisition delays. If it takes a quarter to get a
> new batch of machines, then you'll plan to have enough until then.
> Drops in usage that are smaller than your acquisition delay just don't
> count in your planning, as you can't respond in that time.
>

Server acquisition times and costs are now a fraction of what they used to be.

Vendors also know how to take care of their big Customers. If you are a major Customer of any server vendor, they will keep supplies available locally so they can ship very quickly.

> Third, if your cost structure is such that there is no economic benefit to
> giving back machines that you're not fully using, then you'll not add the
> complexity to do so.
>

Capacity on demand (CoD) solutions are nothing new. Even OpenVMS had CoD back when the big Alpha Wildfires were shipping over 20 years ago.
<http://h41379.www4.hpe.com/doc/731final/documentation/pdf/ovms_es47_gs1280_nf_rn.pdf>
Reference sect 1.18.

With OpenVMS X86-64, perhaps this CoD might be resurrected with the new KVM virtualization capabilities being planned for OpenVMS V9.*?

> So, in a traditional world where you're running on physical machines that
> you've purchased and installed, there's a lot of bias to running the
> applications as if they had fixed loads. You might get some variation
> between a few applications, such as you'd find between open and after-
> hours application runs, but overall there was little ROI in trying to closely
> track load.
>
> Now, let's imagine (and I'm pulling these numbers out of the air for the
> purposes of discussion) that I make a few changes. I move to a VM-
> based workload in the cloud with a more modern configuration
> management system -- such that I'm not more than, say, 10m from
> having a new VM with a new instance of my application online and
> running, and not more than, say, 1m from removing an instance of my
> application. Furthermore, I'm now charged by the instance-minute for
> the resources I consume, so dumping VMs that are not being used
> heavily enough provides immediate payback.
>
> A lot of interactive workloads suddenly become very interesting,
> especially those with diurnal patterns in a given region. Or with monthly
> or quarterly or yearly peaks. Or recurring, but non-continuous, analysis
> workloads.
>
> The management of these can be simplified with autoscaling services
> that use real time monitoring and rule bases to automatically shut down
> or create instances based on the current demand.
>
> Fwiw,
> Jim.

Perhaps I am a dinosaur, but in the good ole days this was called capacity planning combined with CoD.

Having stated this, I would far prefer VSI focus on solving issues for traditional enterprise Customers and not that razor thin upper stratosphere layer of Customers that need 1,000 servers for only an hour.

😊

Jim Johnson

unread,
Feb 22, 2018, 2:01:54β€―AM2/22/18
to
Let me start by reiterating that I do not have data on what current VMS customers are doing, so I'm not willing to make a claim about how many would need any particular feature. My reason for replying here was to clarify what seem to be some misunderstandings about what goes on in today's public clouds.

For the questions around the ancillary complexity, there are roughly two answers. First, for applications of any scale, configuration and deployment are fully automated. This makes propagating, e.g. firewall rules, very straightforward. I've honestly not encountered this as an issue.

Second, around the data storage, most applications have externalized their data so that when the number of VMs scales out there is little overhead on the data management (under the covers the storage provider may be doing a lot of work, but it is not unique to the scale-out operation).

It may or may not be interesting, but https://azure.microsoft.com/en-us/features/autoscale/ gives a start for using the Azure autoscaling service. It is far from the only such service, just the one I've seen more up close than the others.

For whether or not short term workloads exist in any volume (as opposed to scaling up and down a long running workload), a place to start is to look to batch systems, such as https://azure.microsoft.com/en-us/services/batch/. Again, this is far from the only one...

For server acquisition, faster is certainly better, in that you can delay purchasing for longer. But can you also return hardware to the vendor at the same rate, with sufficient refund, and do both regularly?

Which then brings in the comparison to CoD. Yes, there is some comparison, and I could even imagine ways that it could be hooked up as a trigger for a scaling operation. But it also does not address the full lifecycle - the arrival of resources that you'll be charged for having, the use of those resources, and the return of those resources and the termination of charges for them.

I would expect CoD to cover acquiring resources you already own for use by a workload, using it, and then returning it to your pool of available resources. That's certainly helpful to triggering the more extended lifecycle above, but it, in and of itself, is not the same thing.

Again, I'm not suggesting what VSI's priorities should be. I'd expect they would have a much better handle on that than I ever would.

Fwiw, I'd assume that many of the basic items they could do to help as a scalable workload would be common with other requirements - such as improvements in configuration management complexity and time - especially around addition and removal of cluster nodes.

HTH,
Jim.

DaveFroble

unread,
Feb 22, 2018, 3:36:50β€―AM2/22/18
to
Two Fronius 5000 watt inverters. I enjoy the times the bar graph turns red,
each inverter getting a bit more than 5K. Too much more and it shuts down. Not
so good.

Simon Clubley

unread,
Feb 22, 2018, 8:29:21β€―AM2/22/18
to
On 2018-02-21, DaveFroble <da...@tsoft-inc.com> wrote:
>
> But, when talking about being able to "spin up" additional resources on demand,
> I have to ask first, what is the problem, and what type and amount of resources
> should be thrown at the problem.
>

An example problem is when your website load has a normal base load but
which can also vary dramatically depending on the season (eg: Christmas) or
marketing days (eg: Black Friday), etc.

Simon.

--
Simon Clubley, clubley@remove_me.eisner.decus.org-Earth.UFP
Microsoft: Bringing you 1980s technology to a 21st century world

Stephen Hoffman

unread,
Feb 22, 2018, 10:35:23β€―AM2/22/18
to
On 2018-02-22 07:01:50 +0000, Jim Johnson said:

> For the questions around the ancillary complexity, there are roughly
> two answers. First, for applications of any scale, configuration and
> deployment are fully automated. This makes propagating, e.g. firewall
> rules, very straightforward. I've honestly not encountered this as an
> issue.

There's not really a firewall available on OpenVMS prior to the new
stack, so that part's easy. On OpenVMS, installation and
configuration is still a substantial issue. It's all home-grown, if
it's been automated. No profiles, no provisioning. The pinnacle of
the OpenVMS installations is still the factory-installed software (FIS)
package, and not the mainline OpenVMS installer.

> Fwiw, I'd assume that many of the basic items they could do to help as
> a scalable workload would be common with other requirements - such as
> improvements in configuration management complexity and time -
> especially around addition and removal of cluster nodes.

Ayup. OpenVMS still treats and still packages IP as an optional
add-on and not as an always-available and integrated system component,
and the complexity and the installation and configuration and related
efforts unfortunately all tend to increase from there. Way too much of
a product installation is in dealing with the permutations that all
these options and all this flexibility inherently produces, or just
punting the whole problem over to the documentation and the person
installing the product kit, or installing OpenVMS itself.
Pragmatically, who still runs an OpenVMS system without an IP stack,
and - in 2020 or 2025 - is it really worth catering to those folks that
don't use IP first and foremost, rather than to everybody else that
actually uses IP networking, and simply letting the handful of OpenVMS
folks that don't use it shut off an always-present IP stack? There's a
lot of that in OpenVMS, and there are a lot of folks that are used to
the flexibility and the complexity. Some of that is good. But there's
a whole lot of the current installation and configuration and product
packaging that's little more than legacy-preserving absurdity.
There's massive amounts of incremental work here, keeping the current
folks from porting while also incorporating changes to the platform
intended to attract newer partners and newer deployments. And changes
that make for easier deployments onto Amazon, Azure or otherwise -
getting rid of the need for an out-of-band console for initial software
install and configuration, quite possibly - are almost certainly on the
VSI development list, though not at the top.

Stephen Hoffman

unread,
Feb 22, 2018, 11:22:36β€―AM2/22/18
to
On 2018-02-22 03:14:26 +0000, Kerry Main said:

>
> Perhaps I am a dinosaur, but in the good ole days this was called
> capacity planning combined with CoD.

Adding whole systems is a little different than licensing and
unlicensing some cores. iCAP / CoD was a way to transiently license
and incrementally add cores into an SMP instance. It probably would
have been preferably for DEC and Compaq to have been able to use iCAP /
CoD as a way to ship out fewer permutations of server systems and
preferably with most or all of the server boxes shipped out configured
fully populated with cores. But that clearly didn't happen,
particularly given the economics of adding cores back in that era was a
whole lot different than multi-cores are today.

Wouldn't surprise me to see iCAP / CoD return for a few folks using
OpenVMS, but it's still not particularly close to temporarily spinning
up multiple guests and clustering them, as an example. iCAP / CoD
also doesn't help with apps that have conflicting system resource
requirements (network ports, for instance), or that have conflicting
system parameter requirements or conflicting username or software
version requirements, etc.

> Having stated this, I would far prefer VSI focus on solving issues for
> traditional enterprise Customers and not that razor thin upper
> stratosphere layer of Customers that need 1,000 servers for only an
> hour.

Outside of the installed base, traditional enterprise customers aren't
all that interested in OpenVMS. Most server customers aren't
interested, for that matter. That's something VSI will be working on.

Making it (much) easier for new folks to spin up some new servers for
prototypes or some testing servers for developers might get some new
deployments, too. Even from existing sites.

Thirty years ago, it would have been really handy to spin up a few
MicroVAX systems as transient front ends or transient statistical
quality control servers for testing parts of the factory network or to
deal with transient loads, but that sort of ease and speed and
flexibility just didn't exist back then. And no, iCAP / CoD isn't a
help here, because app-stacking different apps or multiple copies of
the same apps onto the same boxes tends to disrupt things.

DaveFroble

unread,
Feb 22, 2018, 5:01:01β€―PM2/22/18
to
Simon Clubley wrote:
> On 2018-02-21, DaveFroble <da...@tsoft-inc.com> wrote:
>> But, when talking about being able to "spin up" additional resources on demand,
>> I have to ask first, what is the problem, and what type and amount of resources
>> should be thrown at the problem.
>>
>
> An example problem is when your website load has a normal base load but
> which can also vary dramatically depending on the season (eg: Christmas) or
> marketing days (eg: Black Friday), etc.
>
> Simon.
>

Not a problem, here ..

Frankly, for some needs, one can not purchase a system small/weak enough to
exactly fit the needs. So, one might have excess capability, but if that is the
smallest available, so what?

Today's HW can be extremely capable. Maybe not for everyone.

It all comes back to, who needs certain things, and who doesn't. It's that
breakdown that seems to be totally absent in these discussions.

Stephen Hoffman

unread,
Feb 22, 2018, 5:57:08β€―PM2/22/18
to
On 2018-02-22 22:00:59 +0000, DaveFroble said:

> Simon Clubley wrote:
>> On 2018-02-21, DaveFroble <da...@tsoft-inc.com> wrote:
>>> But, when talking about being able to "spin up" additional resources on
>>> demand, I have to ask first, what is the problem, and what type and
>>> amount of resources should be thrown at the problem.
>>
>> An example problem is when your website load has a normal base load but
>> which can also vary dramatically depending on the season (eg:
>> Christmas) or marketing days (eg: Black Friday), etc.
>>
>> Simon.
>
> Not a problem, here ..
>
> Frankly, for some needs, one can not purchase a system small/weak
> enough to exactly fit the needs. So, one might have excess capability,
> but if that is the smallest available, so what?
>
> Today's HW can be extremely capable. Maybe not for everyone.
>
> It all comes back to, who needs certain things, and who doesn't. It's
> that breakdown that seems to be totally absent in these discussions.

What's also missing is that over-building server configurations is
normal on OpenVMS; there's just no entry-level server available, and
there really hasn't been one for a decade or two. Of how capable or
over-capable some of that hardware really is. What's also missing is
that not spinning up new servers for testing or for prototyping is
normal on OpenVMS. That even installing and configuring a new server
is an involved task on OpenVMS. That clustering has been an
exceedingly expensive approach to license on OpenVMS. That clustering
itself hasn't been further integrated and updated in OpenVMS. That
you can't gain access to a guest or a slice or a private server at a
hosting provider fully online and within minutes, with OpenVMS. Folks
that are used to other platforms aren't used to these assumptions and
these limits and these requirements; the whole
runs-on-commodity-hardware discussion is soon in play for everybody.
OpenVMS and its apps are headed into a completely different world.
With actual entry-level hardware. Whether any business takes
profitable advantage of this? I know of a number of folks running
lightly-loaded rx2800 boxes that may well end up replacing them with
servers that are a fraction of the size (toaster-sized or cartridges or
otherwise), and at a fraction of the hardware prices of the Integrity
boxes that they'll be replacing. Or where you can spin up and run a
test system for tens of dollars a month, and somebody else deals with
the hardware and the network and the rest of that.

These sorts of differences in costs and revenues and availabilities
won't interest some accounting departments and some managers and some
developers...

For folks with under-loaded Integrity boxes, I'd be looking at a range
with some of the following at the low-end...

https://www.supermicro.com/products/system/Mini-ITX/SYS-E300-8D.cfm
https://www.supermicro.com/products/system/midtower/5028/SYS-5028L-TN2.cfm
https://www.supermicro.com/products/system/midtower/5028/SYS-5028D-TN4T.cfm

This all also depends heavily on which processors, chipsets and I/O
widgets gets supported by VSI, and what sorts of local and served
storage and storage interconnects and protocols will be available and
supported and necessary, of course. FC, maybe SCS over DTLS, iSCSI and
maybe iSER, maybe FCIP, SMB, etc.

At the middle-range of OpenVMS usage, I know of data centers running
racks of OpenVMS servers that might well end up with the whole
environment replaced by a few 3U/6U-class boxes; blades and SSDs and
all.

And I'd be surprised to not see a number of OpenVMS systems running
hosted. Some in production. Some for testing. Though this definitely
hinges on the VSI pricing and packaging practices for x86-64.

Kerry Main

unread,
Feb 24, 2018, 1:35:06β€―PM2/24/18
to comp.os.vms to email gateway
> -----Original Message-----
> From: Info-vax [mailto:info-vax...@rbnsn.com] On Behalf Of
> Stephen Hoffman via Info-vax
> Sent: February 22, 2018 5:57 PM
> To: info...@rbnsn.com
> Cc: Stephen Hoffman <seao...@hoffmanlabs.invalid>
> Subject: Re: [Info-vax] Distributed Applications, Hashgraph,
Automation
>
The same reality exists for Windows/Linux servers as well. Entry-level
ProLiant servers end up with utilization of less than 20%.

Reality is that server HW has advanced so much that the traditional
one bus app to one server model is really a waste.

Heck, even many Linux/Windows VM's we have migrated in the past are not
that much busier either.

> Of how capable or
> over-capable some of that hardware really is. What's also missing is
> that not spinning up new servers for testing or for prototyping is
> normal on OpenVMS. That even installing and configuring a new server
> is an involved task on OpenVMS.

Using the traditional "put the OS CD in and boot" approach, that is true.
Definitely room for improvement.

Of course, the same is true for Windows and Linux environments as well.

Of course, for med to large environments, no one installs new OS's that
way anyway, so this is really only true for small environments.

In med to large environments, they use "gold" images (image saveset of
pre-built master image) or templates (VMware) to deploy new OS's.

In OpenVMS, you could also build master LD containers and use an image
backup to lay down a new OS instance, then customize and reboot. Maybe
15-30 minutes start to finish?

It could also be part of a cluster with a common system disk, which most
other platforms do not have (common start-ups etc.).
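
A sketch of the "customize" step, under stated assumptions: the per-instance values (node name, IP, alias) are reduced to a tiny template that gets stamped at first boot, and everything else in the image stays identical. The file name, parameter names and template below are hypothetical; on OpenVMS the real customization would touch MODPARAMS.DAT, TCPIP$CONFIG and so on rather than this toy config file.

# Sketch of per-instance customization of an otherwise identical "gold"
# image: everything that differs between instances is reduced to a few
# parameters written out at first boot.  Names and file layout are
# hypothetical, for illustration only.
from string import Template

FIRST_BOOT_TEMPLATE = Template(
    "node_name=$node_name\n"
    "ip_address=$ip_address\n"
    "cluster_alias=$cluster_alias\n"
)

def render_first_boot_config(node_name: str, ip_address: str,
                             cluster_alias: str, path: str) -> None:
    """Write the per-instance settings the gold image deliberately omits."""
    with open(path, "w") as f:
        f.write(FIRST_BOOT_TEMPLATE.substitute(
            node_name=node_name,
            ip_address=ip_address,
            cluster_alias=cluster_alias,
        ))

if __name__ == "__main__":
    render_first_boot_config("NODE01", "192.0.2.10", "PRODCL",
                             "first_boot.conf")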

> That clustering has been an
> exceedingly expensive approach to license on OpenVMS.

Apparently to be addressed in X86-64 license model.

> That clustering
> itself hasn't been further integrated and updated in OpenVMS. That
> you can't gain access to a guest or a slice or a private server at a
> hosting provider fully online and within minutes, with OpenVMS.

Cloud marketing hype.

Please tell me of any Customer who absolutely needs VM's or OS's within
minutes and I will tell you a Customer who is not involved with their
business and does not know diddly about capacity planning.

For those 10 or fewer Customers in the world that really need OS's within
minutes - go to Amazon and Azure (hopefully consistent latency is not an
issue for their App)

> Folks
> that are used to other platforms aren't used to these assumptions and
> these limits and these requirements;

Cloud marketing hype. What you get within minutes is "IT lite", i.e. a VM
spin-up with no AV, no OS patching, no monitoring integrated
with your service desk, no backups, no security hardening, no firewalls.

Yes, you can add these on (at extra cost), but this is where cloud
providers insert "yes, additional planning is required". Duh...

Internal IT Depts provide full service OS instances per their company
policies. Does it take longer? Sure. But it's much more integrated and
"ready" for a prod environment.

> the whole
> runs-on-commodity-hardware discussion is soon in play for everybody.
> OpenVMS and its apps are headed into a completely different world.

Agree.

> With actual entry-level hardware. Whether any business takes
> profitable advantage of this? I know of a number of folks running
> lightly-loaded rx2800 boxes that may well end up replacing them with
> servers that are a fraction of the size (toaster-sized or cartridges
or
> otherwise), and at a fraction of the hardware prices of the Integrity
> boxes that they'll be replacing. Or where you can spin up and run a
> test system for tens of dollars a month, and somebody else deals with
> the hardware and the network and the rest of that.
>

Let's not forget that SW costs are typically the biggest slice of the IT
stack by far - on every OS.

A small dual-core server in a 2-node RAC cluster on Windows/Linux running
Oracle Server will likely cost about $200K for the Oracle licenses
alone.

Servers costing $3K-$10K are usually not that big a factor.

Good news for Oracle on OpenVMS Integrity or Alpha Customers - their
Oracle server (AND Rdb) license costs should drop by 50% when they move
to X86-64.

This is because the Processor Multiplication Factor which Oracle uses to
stifle their competitive HW platforms is 1.0 for Integrity, Power and
Alpha, but is only 0.5 for X86-64. It makes no difference what the OS
is.

This alone will likely make a move from OpenVMS Integrity to OpenVMS
X86-64 a no brainer.

> These sorts of differences in costs and revenues and availabilities
> won't interest some accounting departments and some managers and
> some
> developers...
>

You are comparing the now to the past, not the now to the future
announced plans.

As example - new license model (subscription based?), new virtualization
capabilities, new emulators etc.

> For folks with under-loaded Integrity boxes, I'd be looking at a range
> with some of the following at the low-end...
>
> https://www.supermicro.com/products/system/Mini-ITX/SYS-E300-
> 8D.cfm
> https://www.supermicro.com/products/system/midtower/5028/SYS-
> 5028L-TN2.cfm
> https://www.supermicro.com/products/system/midtower/5028/SYS-
> 5028D-TN4T.cfm
>
> This all also depends heavily on which processors, chipsets and I/O
> widgets gets supported by VSI, and what sorts of local and served
> storage and storage interconnects and protocols will be available and
> supported and necessary, of course. FC, maybe SCS over DTLS, iSCSI
and
> maybe iSER, maybe FCIP, SMB, etc.
>
> At the middle-range of OpenVMS usage, I know of data centers running
> racks of OpenVMS servers that might well end up with the whole
> environment replaced by a few 3U/6U-class boxes; blades and SSDs and
> all.
>
> And I'd be surprised to not see a number of OpenVMS systems running
> hosted. Some in production. Some for testing. Though this
definitely
> hinges on the VSI pricing and packaging practices for x86-64.
>

No doubt existing OpenVMS hosting and remote support providers like SCI
(and others) will jump on the HCI movement which is taking over many
large data centers. It is a way to significantly reduce the
infrastructure and associated server/storage costs typically used for
hosting very high numbers of VM's.

Note - SAN folks will not like this, as HCI uses local storage with a
hypervisor on top to serve the local drives to other cluster nodes. It
is not unlike OpenVMS clustering in a box.

Some HCI companies, like Nutanix (very popular now btw), support KVM
hypervisors, which would be a nice fit for the virtualization
features planned for OpenVMS V9+ releases.

Reference:
<www.nutanix.com>

Note - while VMware is the most popular hypervisor today, many Customers
are really complaining about the high licensing costs, so KVM solutions
could become much more popular in the future. With a small company
called Red Hat developing KVM, you can be sure KVM will only get more
popular in the future. Yes, OpenVMS and VSI can hopefully ride the wave
..

Remember my earlier point about the next 10 years being all about
reducing SW costs?

Craig A. Berry

unread,
Feb 24, 2018, 2:16:42β€―PM2/24/18
to
On 2/24/18 12:33 PM, Kerry Main wrote:

> In OpenVMS, you could also build master LD containers to image backup to
> new OS, customize and reboot. Maybe 15-30 minutes start to finish?

Then another month of tinkering to figure out how to change the node
name without breaking anything.

Kerry Main

unread,
Feb 24, 2018, 3:55:05β€―PM2/24/18
to comp.os.vms to email gateway
> -----Original Message-----
> From: Info-vax [mailto:info-vax...@rbnsn.com] On Behalf Of Craig
> A. Berry via Info-vax
> Sent: February 24, 2018 2:17 PM
> To: info...@rbnsn.com
> Cc: Craig A. Berry <craig...@nospam.mac.com>
> Subject: Re: [Info-vax] Distributed Applications, Hashgraph,
Automation
>
Never been much of an issue if it's just the OS (modparams) and TCPIP
(tcpip$config).

If Apps are involved, then it does get a bit more tricky, but that is the
same on *NIX and Windows as well.

It's certainly not a show stopper.

Kerry Main

unread,
Feb 24, 2018, 4:30:06β€―PM2/24/18
to comp.os.vms to email gateway
> -----Original Message-----
> From: Info-vax [mailto:info-vax...@rbnsn.com] On Behalf Of
> Stephen Hoffman via Info-vax
> Sent: February 22, 2018 11:23 AM
> To: info...@rbnsn.com
> Cc: Stephen Hoffman <seao...@hoffmanlabs.invalid>
> Subject: Re: [Info-vax] Distributed Applications, Hashgraph, Automation
>
> On 2018-02-22 03:14:26 +0000, Kerry Main said:
>
> >
> > Perhaps I am a dinosaur, but in the good ole days this was called
> > capacity planning combined with CoD.
>
> Adding whole systems is a little different than licensing and
> unlicensing some cores. iCAP / CoD was a way to transiently license
> and incrementally add cores into an SMP instance. It probably would
> have been preferably for DEC and Compaq to have been able to use iCAP
> /
> CoD as a way to ship out fewer permutations of server systems and
> preferably with most or all of the server boxes shipped out configured
> fully populated with cores. But that clearly didn't happen,
> particularly given the economics of adding cores back in that era was a
> whole lot different than multi-cores are today.
>
> Wouldn't surprise me to see iCAP / CoD return for a few folks using
> OpenVMS, but it's still not particularly close to temporarily spinning
> up multiple guests and clustering them, as an example. iCAP / CoD
> also doesn't help with apps that have conflicting system resource
> requirements (network ports, for instance), or that have conflicting
> system parameter requirements or conflicting username or software
> version requirements, etc.
>

Somewhat agree, but VM sprawl is one of the biggest challenges facing many large companies today. In a one bus app per OS instance culture, it's also extremely tough to address.

Without proper governance, VM's propagate like rabbits - with all of the associated management and licensing costs. It's the companies who allow internal resources to spin up new VM's in "minutes" (as stated here a few times) who are having the biggest challenges with VM sprawl.

Companies like Microsoft and Red Hat love VM sprawl because they still get their licensing and/or monthly support subscription $'s. They do not care if the OS is P or V (physical or virtual).

On the positive side - VM sprawl is also going to be good for VSI once the virtualization and emulator capabilities kick in.

> > Having stated this, I would far prefer VSI focus on solving issues for
> > traditional enterprise Customers and not that razor thin upper
> > stratosphere layer of Customers that need 1,000 servers for only an
> > hour.
>
> Outside of the installed base, traditional enterprise customers aren't
> all that interested in OpenVMS. Most server customers aren't
> interested, for that matter. That's something VSI will be working on.
>
> Making it (much) easier for new folks to spin up some new servers for
> prototypes or some testing servers for developers might get some new
> deployments, too. Even from existing sites.
>

That exists today with HP-UX Integrity VM's.
<http://h41379.www4.hpe.com/openvmsft/hpvm/integrityvm_cookbook.pdf>

> Thirty years ago, it would have been really handy to spin up a few
> MicroVAX systems as transient front ends or transient statistical
> quality control servers for testing parts of the factory network or to
> deal with transient loads, but that sort of ease and speed and
> flexibility just didn't exist back then. And no, iCAP / CoD isn't a
> help here, because app-stacking different apps or multiple copies of
> the same apps onto the same boxes tends to disrupt things.
>

Only if not well planned. The argument for "one business app per OS instance" is common in the commodity OS world not only because of technical challenges, but also because of culture, i.e. "no way am I running my business app on the same OS as another business app".

See above regarding VM sprawl.

OpenVMS customers have been running the App stack model with OpenVMS standalone and cluster environments for decades.

Mission critical factory environments using OpenVMS do planning exceptionally well and often run multiple factory apps on the same OpenVMS OS.

Can there be improvements to the OpenVMS App stacking model (e.g. enhancements to the native class scheduler)?

Absolutely.

Having stated this, imho, App stacking will be one of the future differentiators for OpenVMS.

What's old is new again.

:-)


Regards,

Kerry Main
Chief Information Officer (CIO)
Stark Gaming Inc.
613-797-4937 (cell)
613-599-6261 (fax)
Kerry...@starkgaming.com
http://www.starkgaming.com




Craig A. Berry

unread,
Feb 24, 2018, 4:33:07β€―PM2/24/18
to
On 2/24/18 2:52 PM, Kerry Main wrote:
>> -----Original Message-----
>> From: Info-vax [mailto:info-vax...@rbnsn.com] On Behalf Of Craig
>> A. Berry via Info-vax
>> Sent: February 24, 2018 2:17 PM
>> To: info...@rbnsn.com
>> Cc: Craig A. Berry <craig...@nospam.mac.com>
>> Subject: Re: [Info-vax] Distributed Applications, Hashgraph,
> Automation
>>
>> On 2/24/18 12:33 PM, Kerry Main wrote:
>>
>>> In OpenVMS, you could also build master LD containers to image
>> backup to
>>> new OS, customize and reboot. Maybe 15-30 minutes start to finish?

When you're paying for every minute the instance exists, that's a long
time fussing around before you're ready to do any computing.

>> Then another month of tinkering to figure out how to change the node
>> name without breaking anything.
>>
>
> Never been much of an issue if its just the OS (modparams) and TCPIP
> (tcpip$config).

There is quite a bit more to it than that:

<http://h41379.www4.hpe.com/faq/vmsfaq_007.html#mgmt9>

> If Apps involved, then it does get a bit more tricky, but that is the
> same on *NIX and Windows as well.
>
> Its certainly not a show stopper.

Of course it can be done with enough time and expertise. There was a
partial solution for certain HP blades, but it was never made into a
general and generally-available solution. There's a long way to go
before VMS could be ready to spin up an instance in a few seconds, work
on some compute problem for a few seconds or minutes, and then go away,
as is quite commonly done on other platforms.

Kerry Main

unread,
Feb 24, 2018, 5:20:05β€―PM2/24/18
to comp.os.vms to email gateway
There are best practices when building a homogeneous environment.

Node names - assume no DECnet, and have the start-up files use logicals and
lexical functions to determine node names (see the sketch below).

Also, unless required, assume node names are not used in things like disk
logicals and batch queue names; that is not the best strategy when one
wants a homogeneous environment anyway.

Same thing for rights identifiers based on node name - not the best
strategy in a homogeneous environment.

Licenses - not sure if this will be an issue with the new subscription
model.

Yes, one does need to ensure that script files and DCL files are not
node-name specific, but OpenVMS cluster customers are familiar with this
anyway.
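
A minimal sketch of the node-name practice above, assuming a Python helper
alongside the DCL start-up procedures (the file name and output format are
invented for illustration):

    # startup_identity.py - hypothetical helper: discover the node identity
    # at start-up time instead of hardcoding node names in start-up files.
    import socket

    def node_identity():
        """Return (short_node_name, fqdn) as seen by the running system."""
        fqdn = socket.getfqdn()
        short = fqdn.split(".")[0].upper()
        return short, fqdn

    if __name__ == "__main__":
        node, fqdn = node_identity()
        # Downstream procedures consume these values instead of literal names.
        print("NODE=" + node)
        print("FQDN=" + fqdn)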

> > If Apps involved, then it does get a bit more tricky, but that is
the
> > same on *NIX and Windows as well.
> >
> > Its certainly not a show stopper.
>
> Of course it can be done with enough time and expertise. There was a
> partial solution for certain HP blades, but it was never made into a
> general and generally-available solution. There's a long ways to go
> before VMS could be ready to spin up an instance in a few seconds,
work
> on some compute problem for a few seconds or minutes, and then go
> away.
> As is quite commonly done on other platforms.

I still see this as marketing hype for the majority of real world
production IT environments.

Spinning up VM's - what about the data partitioning model and/or perhaps
even replication (assuming data is important) associated with these
VM's?

What about AV on these VM's - should they do data processing with zero
fear of being compromised?

What about firewalls and the detailed rules between App-DB and web VM
layers?

Imho, for most large environments, I would rather see a Galaxy-like
private cloud environment at some point in the future (yes, there are
challenges in bringing this functionality forward), where larger servers
have numerous spare cores in a "pool", or even spare VMs, that can be
brought in dynamically as required based on capacity rules.

Building out with lots of little VMs is the most common compute model
today, but in the future I would rather see a model that says build up
first (more cores, larger memory), then build out as required.

It will be interesting to see how the new virtualization functionality
in OpenVMS V9+ will evolve.

Stephen Hoffman

unread,
Feb 24, 2018, 7:06:10β€―PM2/24/18
to
The implementation of host names on OpenVMS is utterly absurd, and long
overdue for an overhaul. I've had better success removing and
reinstalling certain packages when changing the host name, as that was
far faster than getting a different host name accepted by (for
instance) the Apache port.

Stephen Hoffman

unread,
Feb 24, 2018, 7:10:22β€―PM2/24/18
to
On 2018-02-24 20:52:20 +0000, Kerry Main said:

> Never been much of an issue if its just the OS (modparams) and TCPIP
> (tcpip$config).

No, it's not. Go try it. We'll wait.

> If Apps involved, then it does get a bit more tricky, but that is the
> same on *NIX and Windows as well.

When are apps not involved in a deployment?

> Its certainly not a show stopper.

Given that host names and DNS forward and reverse translations are all
involved with network security, don't bet on that.

Stephen Hoffman

unread,
Feb 24, 2018, 7:22:45β€―PM2/24/18
to
On 2018-02-24 22:15:01 +0000, Kerry Main said:

>>
>> -----Original Message-----
>> From: Info-vax [mailto:info-vax...@rbnsn.com] On Behalf Of Craig
>> A. Berry via Info-vax
>> Sent: February 24, 2018 4:33 PM
>> To: info...@rbnsn.com
>> Cc: Craig A. Berry <craig...@nospam.mac.com>
>> Subject: Re: [Info-vax] Distributed Applications, Hashgraph, Automation
>>
>> On 2/24/18 2:52 PM, Kerry Main wrote:
>>>> -----Original Message-----
>>>> From: Info-vax [mailto:info-vax...@rbnsn.com] On Behalf Of Craig
>>>> A. Berry via Info-vax
>>>> Sent: February 24, 2018 2:17 PM
>>>> To: info...@rbnsn.com
>>>> Cc: Craig A. Berry <craig...@nospam.mac.com>
>>>> Subject: Re: [Info-vax] Distributed Applications, Hashgraph, Automation
>>>
>>> Never been much of an issue if its just the OS (modparams) and TCPIP
>>> (tcpip$config).
>>
>> There is quite a bit more to it than that:
>>
>> <http://h41379.www4.hpe.com/faq/vmsfaq_007.html#mgmt9>

That list is incomplete, too. The host name(s) have been stored in
various other spots since that list was created or other storage
locations have been found. This is due to the complete lack of
system-wide shared data for these and related details. OpenVMS never
really solved this problem, and deferred it to the applications, and
the applications then all went their own unique ways, with their own
unique interpretations, tools and methods for changing the name. Which
really makes creating a monolithic master... interesting.

> There are best practices when building a homogeneous environment.

Alas, neither OpenVMS nor the layered products nor the apps follow these
(unpublished) practices.... The host names of an OpenVMS server are a complete mess.

DaveFroble

unread,
Feb 25, 2018, 2:18:15β€―AM2/25/18
to
Kerry Main wrote:

> Only if not well planned. The argument for "one bus app per OS instance" is
> common in the commodity OS world because of not only technical challenges,
> but also culture i.e. "no way I am running my bus App on the same OS as
> another Bus App".

This can be traced to the usage of PCs to get around IT departments who may or
may not have been responsive enough for users. It was anarchy then, and now it
is just a big mess.

DaveFroble

unread,
Feb 25, 2018, 2:22:01β€―AM2/25/18
to
This is the result of setting up some things once, and assuming they will not
change.

One would hope VSI takes a critical look at such, and perhaps uses some database
to contain all such things. Then there is still the question of whether such
data would be set at boot time, or could be changed on the fly. That could get
sticky. If not a re-boot, perhaps some other type of "refresh". There would be
the question of other computers "knowing" the node name, and getting confused.
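
A toy sketch of that kind of single store, in Python with SQLite, with
invented names and schema - nothing here reflects how VSI might actually
do it, only the shape of the idea of one place everything reads at boot
or on a "refresh":

    # identity_store.py - toy "one place for host identity" store.
    # Names and schema are invented for illustration.
    import sqlite3

    DB = "host_identity.db"

    def set_identity(node_name, ip_address):
        with sqlite3.connect(DB) as con:
            con.execute("CREATE TABLE IF NOT EXISTS identity "
                        "(key TEXT PRIMARY KEY, value TEXT)")
            con.executemany("REPLACE INTO identity (key, value) VALUES (?, ?)",
                            [("node_name", node_name),
                             ("ip_address", ip_address)])

    def get_identity():
        with sqlite3.connect(DB) as con:
            return dict(con.execute("SELECT key, value FROM identity"))

    if __name__ == "__main__":
        set_identity("SYS001", "10.1.2.3")
        print(get_identity())

Whether such a store could be refreshed on the fly, and how other nodes
would learn of the change, is exactly the sticky part the sketch ignores.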

johnwa...@yahoo.co.uk

unread,
Feb 25, 2018, 5:39:46β€―AM2/25/18
to
What are names used for (and useful for) in the context of
computers and applications (and, if necessary, users)?

How many of those uses are things that people outside the
IT Department should care about?

How many of them are things that should be important to
the innards of OS, rather than (say) some OS-independent
distributed naming layer on top of the OS?

Host names, for anyone outside the IT department, for
example? In a seriously distributed environment, are
host names as such not a rather dated and devalued
concept? Perhaps they should even be deprecated (for
things being designed from scratch)?

Even back in the 1980s, in a terminal-centric environment,
things like terminal servers allowed user-visible 'service
names' to be distinct from IT-visible host names. A bit of
'terminal server magic' was all that was needed. For LAT
users, or for telnet users (round robin DNS?).

What's the 'modern' equivalent, where what is needed is not
just users talking to application services but applications
talking to other applications, in a (semi?) standardised
fashion? (The obvious legacy approach is to use well-known
IP host names and well-known IP port numbers/names, but that's
not really helpful, for reasons that should be fairly obvious.)

And why do the OS internals have to get involved in this,
except to provide the necessary facilities in a suitably
robust and trustworthy way?

As a historical side note, I'm thinking that back in the
1980s, there was a VAX VMS software product that did the
*technical* stuff of changing the SCSnode and DECnet
name and stuff like that as part of deploying what Kerry
likes to call a 'golden image'. It might have been called
VAX Remote Systems Manager or something like that, and it
wasn't just intended for use within a VMScluster. No
matter. Anyway, on top of that, there was still the licencing
stuff, which DEC did one way, others did other ways (FlexLM,
dongles, etc). Three decades later there still isn't a
universally accepted licence management and enforcement
mechanism.

DHCP and friends (mDNS etc?) may be part of a modern
follow on. Or may not. But I'm struggling to see why
a host name (as such) is still important (outside the
IT department). Application service names? Different
matter; they may well need to be meaningful, or at
least pre-agreed.

To an extent, the same naming issue applies to storage
(files etc). That data someone wanted, those files
that need restoring, are they on C: or are they on
banana$dka300:[john] or /usr/users/john, or what (and
where)?

Enlightenment welcome.

Jan-Erik Soderholm

unread,
Feb 25, 2018, 6:19:55β€―AM2/25/18
to
Den 2018-02-25 kl. 11:39, skrev johnwa...@yahoo.co.uk:
> On Sunday, 25 February 2018 07:22:01 UTC, DaveFroble wrote:
>> Craig A. Berry wrote:
>>> On 2/24/18 12:33 PM, Kerry Main wrote:
>>>
>>>> In OpenVMS, you could also build master LD containers to image backup to
>>>> new OS, customize and reboot. Maybe 15-30 minutes start to finish?
>>>
>>> Then another month of tinkering to figure out how to change the node
>>> name without breaking anything.
>>
>> This is the result of setting up some things once, and assuming they will not
>> change.
>>
>> One would hope VSI takes a critical look at such, and perhaps uses some database
>> to contain all such things. Then there is still the question of whether such
>> data would be set at boot time, or could be changed on the fly. That could get
>> sticky. If not a re-boot, perhaps some other type of "refresh". There would be
>> the question of other computers "knowing" the node name, and getting confused.
>>
>> --
>> David Froble Tel: 724-529-0450
>> Dave Froble Enterprises, Inc. E-Mail: da...@tsoft-inc.com
>> DFE Ultralights, Inc.
>> 170 Grimplin Road
>> Vanderbilt, PA 15486
>
> What are names used for (and useful for) in the context of
> computers and applications (and, if necessary, users)?
>

You do need *some* reference to whatever you'd like to connect
to, I guess.

> How many of those uses are things that people outside the
> IT Department should care about?
>
> How many of them are things that should be important to
> the innards of OS, rather than (say) some OS-independent
> distributed naming layer on top of the OS?
>

Most users do not care which of the servers they are actually
"using" when they access www.google.com.

> Host names, for anyone outside the IT department, for
> example? In a seriously distributed environment, are
> host names as such not a rather dated and devalued
> concept? Perhaps they should even be deprecated (for
> things being designed from scratch)?
>

And replaced with, what?

> Even back in the 1980s, in a terminal-centric environment,
> things like terminal servers allowed user-visible 'service
> names' to be distinct from IT-visible host names. A bit of
> 'terminal server magic' was all that was needed. For LAT
> users, or for telnet users (round robin DNS?).
>

Very much like having a DNS A record for the "host IP address"
and one or more DNS alias (CNAME) records for the "service names"
(pointing to the A record). If you always use the name of an alias
record, the IP address of the A record can change with no changes
for the user.

It is just a different name. A "LAT service name" could be seen as
the equivalent of a DNS alias record.
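
A small Python illustration of that indirection, with a hypothetical alias
app.example.com: the client only ever resolves the alias, so the A record
behind it can change without touching the client.

    # connect_via_alias.py - clients use the service alias (CNAME), never the
    # host's own name or address. All names here are hypothetical.
    import socket

    SERVICE_ALIAS = "app.example.com"
    SERVICE_PORT = 443

    def connect_to_service():
        # getaddrinfo follows the alias to whatever A/AAAA record is current.
        for family, socktype, proto, _, addr in socket.getaddrinfo(
                SERVICE_ALIAS, SERVICE_PORT, type=socket.SOCK_STREAM):
            try:
                with socket.socket(family, socktype, proto) as sock:
                    sock.settimeout(5)
                    sock.connect(addr)
                    return addr
            except OSError:
                continue
        raise OSError("no address for %s worked" % SERVICE_ALIAS)

    if __name__ == "__main__":
        print(connect_to_service())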

Kerry Main

unread,
Feb 25, 2018, 10:00:06β€―AM2/25/18
to comp.os.vms to email gateway
> -----Original Message-----
> From: Info-vax [mailto:info-vax...@rbnsn.com] On Behalf Of Jan-
> Erik Soderholm via Info-vax
> Sent: February 25, 2018 6:20 AM
> To: info...@rbnsn.com
> Cc: Jan-Erik Soderholm <jan-erik....@telia.com>
> Subject: Re: [Info-vax] Distributed Applications, Hashgraph,
Automation
>
The present day term for what is being discussed here is "service
location transparency".

Users should be able to connect to a "service" (e.g. cluster alias)
without having to understand the underlying infrastructure in terms of
things like site location, node names, disk names etc.

For those with grey or disappearing hair, this is not a new concept.

Remember all the 30 year old discussions about the "IT Utility"? For
those who have forgotten, simply google "IT Utility".

I remember DEC Canada's CIO pushing this Utility concept back in the old
days. His push was that users should be able to connect to a "service"
like plugging into an electrical receptacle, without having to know
what rating, phase, or cost it incurs.

Article from 2003 (and the concept goes back much further than this)
<http://www.informit.com/articles/printerfriendly/101165>
"Utility pricing provides a customer with a financial plan to enable a
monthly cost for using a resource based on an agreed usage."

Does this sound like a "public cloud" business model?

What's old is new again.

Kerry Main

unread,
Feb 25, 2018, 10:25:05β€―AM2/25/18
to comp.os.vms to email gateway
> -----Original Message-----
> From: Info-vax [mailto:info-vax...@rbnsn.com] On Behalf Of
> DaveFroble via Info-vax
> Sent: February 25, 2018 2:18 AM
> To: info...@rbnsn.com
> Cc: DaveFroble <da...@tsoft-inc.com>
> Subject: Re: [Info-vax] Distributed Applications, Hashgraph,
Automation
>
> Kerry Main wrote:
>
> > Only if not well planned. The argument for "one bus app per OS
> instance" is
> > common in the commodity OS world because of not only technical
> challenges,
> > but also culture i.e. "no way I am running my bus App on the same OS
> as
> > another Bus App".
>
> This can be traced to the usage of PCs to get around IT departments
who
> may or
> may not have been responsive enough for users. It was anarchy then,
> and now it
> is just a big mess.
>

Yep, the centralized glass house was not responding to the needs of its
users, so when cheap IT technology became available locally, most IT
departments jumped into it with little planning other than "let's go
buy servers and do stuff".

Today, this is often referred to as the wild west of distributed
computing. HP internally had a major push to root this out and called it
"Shadow IT".

Now, when the overall dollars started to dry up and the real costs of
managing so much IT infrastructure in an uncoordinated manner (with
servers only 5-10% busy) really became known to C-level execs, that is
when products like VMware were born.

Unfortunately, while products like VMware addressed HW sprawl, they do
not address the cultural issue of every BU still wanting to do its own
IT strategy. The only difference is that they now make their own plans
with individual VMs rather than separate server HW. Hence, VM sprawl is
now a scourge in many companies, and it is a much, much tougher issue to
address.

So, to summarize, the answer is NOT totally distributed or totally
centralized compute models, but something in between that adopts the
best of both strategies.

Both have pros and cons.
- The totally distributed model: on the minus side, it is expensive to
maintain and it is difficult to deploy common standards. On the plus
side, it is more responsive to local end users' service requirements.
- The totally centralized model: on the plus side, it provides very low
management costs and makes it easy to deploy common standards. On the
minus side, it is much less responsive to local users' service
requirements.

Each company needs to find the mix that works best for their unique
needs.

Kerry Main

unread,
Feb 25, 2018, 10:50:05β€―AM2/25/18
to comp.os.vms to email gateway

> -----Original Message-----
> From: Info-vax [mailto:info-vax...@rbnsn.com] On Behalf Of
> Stephen Hoffman via Info-vax
> Sent: February 24, 2018 7:10 PM
> To: info...@rbnsn.com
> Cc: Stephen Hoffman <seao...@hoffmanlabs.invalid>
> Subject: Re: [Info-vax] Distributed Applications, Hashgraph,
Automation
>
> On 2018-02-24 20:52:20 +0000, Kerry Main said:
>
> > Never been much of an issue if its just the OS (modparams) and TCPIP
> > (tcpip$config).
>
> No, it's not. Go try it. We'll wait.
>

If one is just changing the node name and the TCP/IP address as part of
a new OS deployment (e.g. a gold-image deployment, which is what we are
talking about here), then it is not a big deal.

Done it many times.

> > If Apps involved, then it does get a bit more tricky, but that is
the
> > same on *NIX and Windows as well.
>
> When are apps not involved in a deployment?
>

In most cases I have been involved with, you deploy apps after the OS -
ideally not on the system disk, though we all know there are poorly
written apps that expect to be on the system disk.

In fact, using today's concepts, you deploy the OS image and the "app
containers" separately, on non-system-disk partitions. You just need to
change a few start-up items.
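
A minimal sketch of that "change a few start-up items" step, in Python,
with a hypothetical per-node settings file and placeholder markers - purely
illustrative, not any existing VSI or HPE tooling:

    # customize_image.py - hypothetical post-deployment step for a gold image:
    # stamp the new node name and IP address into templated start-up items.
    import json
    from pathlib import Path

    SETTINGS = Path("node_settings.json")   # e.g. {"node": "SYS042", "ip": "10.1.2.42"}
    TEMPLATES = Path("startup_templates")   # files containing @@NODE@@ / @@IP@@ markers
    OUTPUT = Path("startup")

    def render():
        cfg = json.loads(SETTINGS.read_text())
        OUTPUT.mkdir(exist_ok=True)
        for tmpl in TEMPLATES.glob("*.template"):
            text = (tmpl.read_text()
                        .replace("@@NODE@@", cfg["node"])
                        .replace("@@IP@@", cfg["ip"]))
            (OUTPUT / tmpl.stem).write_text(text)

    if __name__ == "__main__":
        render()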

> > Its certainly not a show stopper.
>
> Given that host names and DNS forward and reverse translations are all
> involved with network security, don't bet on that.
>

The topic here is changing the server name and IP address of a new gold
image deployment.

Regardless of how the server is deployed, the network implications like DNS
and firewall rules still need to be addressed, but this also applies to
VMware Windows/Linux images deployed from templates.

Btw, one idea that HP internal IT was talking about doing just before I
left in 2012 was deploying gold images based on a server based DHCP
strategy for each NIC with long TTL's e.g. 5 days.

Interesting idea, but I am not sure if they followed through on this or
not.

DaveFroble

unread,
Feb 25, 2018, 12:15:49β€―PM2/25/18
to
Some good questions.

There are actually two issues.

1) Are node names the way to ID a computer?

Well, you need to know someone's phone number, or email address, or such before you
can contact them. Sort of the same issue with computers, right? Not saying
node names are the best method, but it is the method we're used to using.

2) How to manage such identification?

On VMS there is not one central app that can do so. Nor could it be static, as
requirements change, and the app would have to be updated to include new
requirements.

Can such be managed? Yes, and I've done so in the past. If it's been a short
while since the last change, I was able to remember the correct incantations to
get the job done. After a couple of years, it was a bit more questionable. It
should not be that way.

DaveFroble

unread,
Feb 25, 2018, 12:25:14β€―PM2/25/18
to
I'd say "it just is". Not new or old.

A properly set up application does this for users.

For example, the first thing a user sees is a menu. The user then selects what
task or application to use, for example, AP or AR or GL or .... The user
doesn't have to know where the actual executables, or data, or ... are located.
However, something has to know this.

This is the way decent applications have been set up for quite some time now.
As for your "what's old is new again" - it's more like "this is how it's
always been done".

Consider what happens in such scenarios. A user logs in, and is running some
set of instructions, a command procedure, that keeps the user from ever seeing
DCL. Locations and such are for the set-up of the application, not for the users.

Kerry Main

unread,
Feb 25, 2018, 3:45:05β€―PM2/25/18
to comp.os.vms to email gateway
> -----Original Message-----
> From: Info-vax [mailto:info-vax...@rbnsn.com] On Behalf Of
> DaveFroble via Info-vax
> Sent: February 25, 2018 12:16 PM
> To: info...@rbnsn.com
> Cc: DaveFroble <da...@tsoft-inc.com>
> Subject: Re: [Info-vax] Distributed Applications, Hashgraph,
Automation
>
The bigger the IT environment, the more critical it is to get resource
naming right. The challenge is that server naming is a bit of an OS
religion in many shops, and you often have varying levels of IT experience
and maturity.

Depending on the site's IT maturity, the best place to keep track of
who supports any IT infrastructure device or app is the local help
desk / service desk system. The service desk is typically a web-based,
CMDB-backed system which can be used for tracking and maintaining all
CIs (configuration items), and this includes server and app names. If a
user cannot access a specific service, they should simply log a call
saying "unable to connect to the XYZ service". Each service has a primary
owner, who then determines whether the issue is network, server, app, or
otherwise related. If someone leaves and a new server owner is assigned,
the record can be updated transparently without telling any other IT
people.

Of course, this breaks down pretty quickly at sites with less IT maturity,
where the service desk is not kept up to date.

Re: Server Naming considerations -

Server names like POPEYE, MOE, JOE and CURLY are OK when the site
is small, but that is obviously not a scalable naming strategy.

Past mistakes also include putting the location and/or function in the node
name. Other notes:
- Always assume the server may be relocated to another location, e.g. in a
DC consolidation (some customers now have a server called TOR-001
sitting in a Montreal DC beside a server called MQO-00x).
- Think about a hacker who sees a node named PAYROLL or PROD-ERP, or a
server with DB or WEB in the name.
- Names should also align with naming on other platforms, e.g. with
NetBIOS and TCP/IP host name requirements (no illegal characters).

Think those names might provide an incentive and/or clues (web server
vs. a DB server) as to how to approach a hack?

Most large environments prefer a generic server naming approach for prod,
e.g. SYS001, SYS002, SYS00x, or MFGxxx, which makes the servers easier to
relocate and less visible as primary hacking targets. They might also
sub-divide the range: servers 1-100 are the Wintel range, servers
200-400 are Linux, OpenVMS servers are 401-500, lab/dev servers are
800-899, etc.

johnwa...@yahoo.co.uk

unread,
Feb 25, 2018, 5:12:54β€―PM2/25/18
to
You've done this before haven't you :)

Let's also remember that in some 'organisations'
(not just DEC, CPQ, and HP), there's potentially
the fun of merging (and then perhaps de-merging)
various naming conventions, the database names,
the Bill of Materials and part number
hierarchies, serial numbers, and other such
administrivia (some of which actually matter
outside HQ). Ideally with no service disruption
(please don't laugh, I've seen it happen).

In those circumstances, the node/host name used for
a given box (be it virtual or physical) really isn't
the biggest worry. It'd be nice if it wasn't a
worry at all, but...

Simon Clubley

unread,
Feb 25, 2018, 7:36:29β€―PM2/25/18
to
On 2018-02-25, DaveFroble <da...@tsoft-inc.com> wrote:
>
> Consider what happens in such scenarios. A user logs in, and is running some
> set of instructions, a command procedure, that keeps the user from ever seeing
> DCL. Locations and such are for the set-up of the application, not for the users.
>

It would be nice to think that all those menu driven users are
using accounts which are marked as captive accounts so if they
manage to break through the menu system and get to DCL, they
will get kicked off the system instead.

DaveFroble

unread,
Feb 25, 2018, 8:33:50β€―PM2/25/18
to
Simon Clubley wrote:
> On 2018-02-25, DaveFroble <da...@tsoft-inc.com> wrote:
>> Consider what happens in such scenarios. A user logs in, and is running some
>> set of instructions, a command procedure, that keeps the user from ever seeing
>> DCL. Locations and such are for the set-up of the application, not for the users.
>>
>
> It would be nice to think that all those menu driven users are
> using accounts which are marked as captive accounts so if they
> manage to break through the menu system and get to DCL, they
> will get kicked off the system instead.
>
> Simon.
>

You're in luck. It's nice, they are captive.

Kerry Main

unread,
Feb 25, 2018, 9:55:07β€―PM2/25/18
to comp.os.vms to email gateway
> -----Original Message-----
> From: Info-vax [mailto:info-vax...@rbnsn.com] On Behalf Of
> johnwallace4--- via Info-vax
> Sent: February 25, 2018 5:13 PM
> To: info...@rbnsn.com
> Cc: johnwa...@yahoo.co.uk
> Subject: Re: [Info-vax] Distributed Applications, Hashgraph, Automation
>
[snip...]
Yep, when doing DC consolidations, you get to see the real ugly "underbelly" of DCs on all platforms.

:-)

> Let's also remember that in some 'organisations'
> (not just DEC, CPQ, and HP), there's potentially
> the fun of merging (and then perhaps de-merging)
> various naming conventions, the database names,
> the Bill of Material and parts numbers
> hierarchies, serial numbersm and other such
> administrivia (some of which actually matter
> outside HQ). Ideally with no service disruption
> (please don't laugh, I've seen it happen).
>
> In those circumstances, the node/host name used for
> a given box (be it virtual or physical) really isn't
> the biggest worry. It'd be nice if it wasn't a
> worry at all, but...
>

Agree ... when doing DC consolidations, the typical mantra is "transition", NOT "transformation".

The big challenge is always scope creep, like when some BU says "let's rename [upgrade] the server as part of the migration!"

The standard answer is "you can do anything you want beforehand, as long as it is not within 2 weeks of the workload migration".

Btw, if you want to see some real fireworks with resource naming, merging large AD environments is even more fun when it comes to account name strategies and challenges.

Duplicates (John Smith, janitor, and Jayne Smith, doctor) - e.g. who gets JSMITH, when both have been using it for 10 years in different facilities? What happens if John Smith is given access to jsmith in the new AD, which happens to be associated with the doctor's account? This is real life: while at HP, we were involved with a number of vendors and partners on a large project which merged 14 provincial health industry ADs into one - 90,000 accounts across the province.

Resource naming was a huge issue.

:-)

Stephen Hoffman

unread,
Feb 26, 2018, 11:21:33β€―AM2/26/18
to
On 2018-02-25 15:48:02 +0000, Kerry Main said:

>>
>> -----Original Message-----
>> From: Info-vax [mailto:info-vax...@rbnsn.com] On Behalf Of
>> Stephen Hoffman via Info-vax
>> Sent: February 24, 2018 7:10 PM
>> To: info...@rbnsn.com
>> Cc: Stephen Hoffman <seao...@hoffmanlabs.invalid>
>> Subject: Re: [Info-vax] Distributed Applications, Hashgraph,
> Automation
>>
>> On 2018-02-24 20:52:20 +0000, Kerry Main said:
>>
>>> Never been much of an issue if its just the OS (modparams) and TCPIP
>>> (tcpip$config).
>>
>> No, it's not. Go try it. We'll wait.
>>
>
> If just changing the node name and the TCPIP address as part of a new
> OS deployment (e.g. gold image deployment - what we are talking about
> here), then this is not a big deal.
>
> Done it many times.

Go try changing a host name of a previously-installed and non-trivial
OpenVMS system. Getting a domain-change request can be a real
challenge, absent a strategy built on guests or containers.


>>> If Apps involved, then it does get a bit more tricky, but that is the
>>> same on *NIX and Windows as well.
>>
>> When are apps not involved in a deployment?
>>
>
> In most cases I have been involved with, you deploy apps after the OS.
> Ideally, not on the system disk but we all know there are poorly
> written Apps that expect to be on the system disk.
>
> In fact using todays concepts, you deploy the OS image and "App
> containers" separately on non-system disk partitions. Just need to
> change a few start-up items.

Those other platforms are better about managing their host names than
OpenVMS, rolling out servers on those platforms is easier than it is
with OpenVMS, and OpenVMS has no concept of isolating installed apps;
nor does regenerating and testing monolithic masters work very well in
environments that are getting updates as often as they're arriving now
and into the future. Among other details.

>>> Its certainly not a show stopper.
>>
>> Given that host names and DNS forward and reverse translations are all
>> involved with network security, don't bet on that.
>
> The topic here is changing the server name and IP address of a new gold
> image deployment.
>
> Regardless of how the server is deployed, the network implications like
> DNS AND FW rules still need to be addressed, but this also applies to
> VMware Windows/Linux images using templates.
>
> Btw, one idea that HP internal IT was talking about doing just before I
> left in 2012 was deploying gold images based on a server based DHCP
> strategy for each NIC with long TTL's e.g. 5 days.
>
> Interesting idea, but I am not sure if they followed through on this or not.

So... was HP / HPE IT doing that with OpenVMS?

Stephen Hoffman

unread,
Feb 26, 2018, 12:41:25β€―PM2/26/18
to
On 2018-02-25 17:15:48 +0000, DaveFroble said:

> 1) Is node names the way to ID a computer?

In OpenVMS, host names are central to clustering and a whole lot of
normal operations. In many other environments, servers are installed
with host names acquired using DHCP and DDNS, or generated host names,
and with locally-generated or ACME-acquired certificates or
install-time certificate provisioning. Certificate authentication and
SSL are centrally tied to the DNS host name and the presence of the
private key for the certificate. Generating a host name is one thing,
getting a proper set of certificates loaded (securely) can be more
entertaining. OpenVMS didn't work all that well with DHCP-provided
configuration data the last few times I've tried that; some of the
network services really wanted more traditional host names. Folks that
don't use those services however, might have an option with DHCP and
DDNS configurations.

Getting the right public certs and an appropriate public-private key
pair into a newly-installed host or into a guest in a VM or into the
app(s) in a container is its own little source of entertainment.
OpenVMS doesn't really have anything similar to RedHat Kickstart, for
instance.
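
One small piece of that provisioning problem can at least be checked
automatically. A Python sketch, with a hypothetical host name and the usual
HTTPS port assumed, that verifies a freshly provisioned host presents a
certificate matching the DNS name it was just given:

    # cert_check.py - does the newly provisioned host present a certificate
    # that actually matches its new DNS name? The host name is hypothetical.
    import socket
    import ssl

    def cert_matches(host, port=443):
        ctx = ssl.create_default_context()  # verifies the chain and the host name
        try:
            with socket.create_connection((host, port), timeout=5) as sock:
                with ctx.wrap_socket(sock, server_hostname=host) as tls:
                    return tls.getpeercert() is not None
        except OSError:  # includes ssl.SSLError and its subclasses
            return False

    if __name__ == "__main__":
        print(cert_matches("newhost.example.com"))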

Malware and adware is using domain-name generation these days. This to
avoid some of the more common detection measures and block lists. But
I digress.

> Well, you need to know someone's phone #, or email address, or such
> before you can contact them. Sort of the same issue with computers,
> right? Not saying node names is the best method, but, it is the method
> we're used to using.
>
> 2) How to manage such identification?
>
> On VMS there is not one central app that can do so. Nor could it be
> static, as requirements change, and the app would have to be updated to
> include new requirements.
>
> Can such be managed? Yes, and I've done so in the past. If it's been
> a short while since the last change, I was able to remember the correct
> incantations to get the job done. After a couple of years, it was a
> bit more questionable. It should not be that way.

DNS records (SRV, A and increasingly AAAA records) and LDAP are some of
what is commonly used for that. mDNS in a few other environments, and
for locating local network resources. OpenVMS has no support for
mDNS. There are other choices available here; some of the Apache
projects are right in this area, for instance, and not the least of
which would be Hadoop and all its adjuncts. Hadoop is something VSI
has mentioned, too. Maybe Hashicorp Nomad for dealing with all of this
across various private or public hosting providers, too.
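
To make the SRV-record idea concrete, a sketch assuming the dnspython
package (2.x) and an invented _myapp._tcp service record:

    # srv_lookup.py - locate a service by DNS SRV record instead of a fixed
    # host name. Requires dnspython 2.x; the service name is invented.
    import dns.resolver

    def locate(service="_myapp._tcp.example.com"):
        answers = dns.resolver.resolve(service, "SRV")
        # Lowest priority wins; weight-based tie-breaking is omitted for brevity.
        best = min(answers, key=lambda rr: rr.priority)
        return str(best.target).rstrip("."), best.port

    if __name__ == "__main__":
        host, port = locate()
        print("connect to %s:%d" % (host, port))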

IPv6 and 802.1x authentication and the ACME PKI certificate API, and
DHCPv6 and many other pieces and parts were not things that many
OpenVMS folks were seemingly commonly using, though. Many (most?) use
very traditional IPv4, NAT'd networks and protocols and implementations.

Note: Not enough folks are using the ACME API within OpenVMS (and the
doc for that API is... problematic), but that ACME API I'm referencing
here is very different from the PKI ACME API that is used to acquire
signed digital certificates.

To paraphrase a comment that's been around for a while... OpenVMS
systems are often managed as pets with names and individualized
configurations and care, and where other folks and other server systems
are managed more as livestock. Or the folks that are increasingly
managing multiple herds of livestock. For folks running one or two
servers or incremental manually-processed deployments, this sort of
stuff makes little sense for the servers, but might make sense for the
clients connecting into the servers. For folks running ten or a
hundred or more servers or that are automating server deployments,
things here get much more interesting.

Kerry Main

unread,
Feb 26, 2018, 9:40:07β€―PM2/26/18
to comp.os.vms to email gateway
Mmm, we are talking in this thread about changing the server and TCP/IP
host names of a gold image which was just image-copied to a system
partition?

It was already mentioned that if Apps are installed with no focus on the
environment being homogeneous and/or keeping things off the system disk,
then there will likely be challenges with changing names.
Not that it matters, since the strategy is one that can be used on any OS
platform, but HP was planning on doing this with all of its Wintel/Linux
environments (and a small amount of HP-UX as well). Again, I am not sure
whether it was ever implemented.

With the new IP stack, this should make it even less of an issue with
OpenVMS.




Stephen Hoffman

unread,
Feb 28, 2018, 6:14:36β€―PM2/28/18
to
I've yet to encounter an installer that doesn't have to change the
host name at install time, and - with some of the usual sorts of apps
installed - that usually gets entertaining. Or the deployment
configuration is headed toward thinner provisioning and installing the
packages separately, which avoids having to rename stuff but adds
install-time "fun". And the "thinner" provisioning approach is
probably more sustainable than the omnibus "gold master"
installations, but all of this is presently bespoke code and tooling.
And as was suggested earlier, please go try changing the host name of
an existing and non-trivial OpenVMS host. It gets entertaining,
particularly with Apache or some of the other packages that cache the
host name around the file system or the configuration data. OpenVMS
itself caches the host name in some "odd" places, including in the
RIGHTSLIST...
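
There is no clean fix for that, but a crude scan at least finds the
stragglers after a rename. A Python sketch with invented paths and an
invented old name:

    # find_old_name.py - after a rename, scan configuration trees for any
    # file still mentioning the old host name. Paths and name are placeholders.
    from pathlib import Path

    OLD_NAME = "OLDNODE"
    ROOTS = [Path("apache/conf"), Path("sysmgr")]

    def stale_references():
        for root in ROOTS:
            for path in root.rglob("*"):
                if not path.is_file():
                    continue
                try:
                    text = path.read_text(errors="ignore")
                except OSError:
                    continue
                if OLD_NAME.lower() in text.lower():
                    yield path

    if __name__ == "__main__":
        for hit in stale_references():
            print(hit)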

>
> With the new IP stack, this should make it even less of an issue with OpenVMS.

Feel free to try the same host-renaming exercise starting with Multinet.

Changing the host name of an OpenVMS server - whether from an omnibus
master or from an existing server - is far more work than it should be.

Deploying and redeploying and reconfiguring OpenVMS is something we all
need to be better at; all our apps, and OpenVMS itself. The
host name mess is just one small aspect of this problem, too.

DaveFroble

unread,
Feb 28, 2018, 11:35:36β€―PM2/28/18
to
As far as I can remember, none of our applications refer to the host name, or
much else about the OS. I guess some communications stuff might care, but they
should be flexible enough that it would not be a major problem.

>> With the new IP stack, this should make it even less of an issue with
>> OpenVMS.
>
> Feel free to try the same host-renaming exercise starting with Multinet.
>
> Changing the host name of an OpenVMS server β€” whether from an omnibus
> master or from an existing server β€” far more work than it should be.

Yes, this should be as simple as modify and re-boot. Anything that needs the
name, or other stuff, should load it upon startup. More work than it should be.

> Deploying and redeploying and reconfiguring OpenVMS is something we all
> need to be better at; all our apps and including OpenVMS itself. The
> host name mess is just one small aspect of this problem, too.;

Agreed

Stephen Hoffman

unread,
Mar 1, 2018, 4:57:16β€―PM3/1/18
to
I'm referring to a system installer; a so-called "gold image" or
"omnibus" software installation.

As for apps and host name references, that varies widely. Very
widely. More and more apps can and do include host names, whether
it's in the logging or embedded in the digital certificates, or in the
hosts or certificates targeted for remote connections and whether
that's using the old local configuration or DNS resolutions, or
otherwise. Existing tools such as Mail and Notes can and variously do
embed host names, too - I've had to break into Notes conferences to fix
access when the host name changes, for instance. As OpenVMS software
practices are moved (slowly) forward, even entirely-local apps will be
uploading app crash data to the vendor's or the developers' crash
servers, and that'll include host names and other details. We're not
heading back to the era of non-networked configurations, the
partitioning of OpenVMS and its networking support to the contrary.
Even for local apps. But again, my reference was to a pre-packaged
installation environment that's now being prepared for deployment.
That can get very entertaining, depending on the particular mix of
services around, and that all needs to be installed or reinstalled or
named or renamed or migrated.

Kerry Main

unread,
Mar 1, 2018, 10:05:05β€―PM3/1/18
to comp.os.vms to email gateway
> -----Original Message-----
> From: Info-vax <info-vax...@rbnsn.com> On Behalf Of Stephen
> Hoffman via Info-vax
> Sent: March 1, 2018 4:57 PM
> To: info...@rbnsn.com
> Cc: Stephen Hoffman <seao...@hoffmanlabs.invalid>
> Subject: Re: [Info-vax] Distributed Applications, Hashgraph, Automation
>
Deploying gold images is nothing new.

It's done on all platforms. In a large environment, no one sticks the OS CD in and runs setup.exe anymore - on any platform. It's now done with gold images and/or templates (VMware).

Assuming one builds the gold system image using best practices for homogeneous environments, it's very simple to change node / host names and/or customize start-up scripts.

Let's agree to disagree on this one.

Stephen Hoffman

unread,
Mar 2, 2018, 2:18:11β€―PM3/2/18
to
On 2018-03-02 03:02:02 +0000, Kerry Main said:

> Deploying gold images is nothing new.

Ponder what the limitations of that approach might be, and how your
oft-cited app stacking avoids various of those same issues.

This is ignoring the obvious issues around app isolation, app signing, app
authentication, and related steps, and around the tools for processing
and applying patches on OpenVMS, most of which is also woefully
outdated on OpenVMS.

> Its done on all platforms. In a large environment, no one sticks the OS
> CD in and selects setup.exe anymore - on any platform. Its now done
> with gold images and/or with templates (VMware)

That works if you're going old-style and can avoid having to roll out patches more
quickly, along with a whole host of other dependencies. Reloading without
clobbering what's already there gets really interesting with OpenVMS.

> Assuming one builds the gold system image using best practices for
> homogeneous environments, it’s very simple to change node / host names
> and/or customize start-up scripts.

Go change the host name of an established OpenVMS system, and tell me
it's not a complete pile of stupid. And who wants to have future
systems even require those "simple" manual steps? And they're only
"simple" to somebody that's dealt with it all before.

> Lets agree to disagree on this one.

We disagree about most things, Kerry.

Kerry Main

unread,
Mar 3, 2018, 12:10:05β€―PM3/3/18
to comp.os.vms to email gateway
> -----Original Message-----
> From: Info-vax <info-vax...@rbnsn.com> On Behalf Of Stephen
> Hoffman via Info-vax
> Sent: March 2, 2018 2:18 PM
> To: info...@rbnsn.com
> Cc: Stephen Hoffman <seao...@hoffmanlabs.invalid>
> Subject: Re: [Info-vax] Distributed Applications, Hashgraph, Automation
>
> On 2018-03-02 03:02:02 +0000, Kerry Main said:
>
> > Deploying gold images is nothing new.
>
> Ponder what the limitations of that approach might be, and how your
> oft-cited app stacking avoids various of those same issues.
>
> This ignoring the obvious issues around app isolation, app signing, app
> authentication, and related steps, and around tools for and processing
> and applying patches on OpenVMS, and most of which is also woefully
> outdated on OpenVMS.
>

We are talking about new OS implementations. The app layer is a totally different discussion.

> > Its done on all platforms. In a large environment, no one sticks the OS
> > CD in and selects setup.exe anymore - on any platform. Its now done
> > with gold images and/or with templates (VMware)
>
> If you're going old-style and can avoid having to roll out patches more
> quickly, and a whole host of other dependencies. Reloading without
> clobbering the gets really interesting with OpenVMS.
>

One simply keeps the gold images up to date. Ongoing patching is a totally different discussion.

> > Assuming one builds the gold system image using best practices for
> > homogeneous environments, it’s very simple to change node / host
> names
> > and/or customize start-up scripts.
>
> Go change the host name of an established OpenVMS system, and tell
> me
> it's not a complete pile of stupid. And who wants to have future
> systems even require those "simple" manual steps? And they're only
> "simple" to somebody that's dealt with it all before.
>

One more time: I am talking about new OS implementations, not an established environment that did not take a homogeneous strategy into consideration.

One more time: imho, an even bigger challenge exists for established Windows servers, with all the associated registry issues.

> > Lets agree to disagree on this one.
>
> We disagree about most things, Kerry.
>

Not true - see, we agree on this last statement.

:-)

Stephen Hoffman

unread,
Mar 3, 2018, 4:45:23β€―PM3/3/18
to
On 2018-03-03 17:08:11 +0000, Kerry Main said:

> We are talking about new OS implementations. The app layer is a totally
> different discussion.

Oh, okay. Here I thought the point was to roll out the app and its
underpinnings, of which the OS and its layered products are part.
Never mind.