
OT: what is old is new again?


Neil Rieck

Apr 28, 2016, 7:31:02 AM
I love these Computerphile lectures from the University of Nottingham which I try to watch as soon as they are available.

This lecture discusses "Paxos" which sounds suspiciously similar to VMS clustering software I first saw in 1987 in a VAX Cluster.

https://www.youtube.com/watch?v=s8JqcZtvnsM

Neil Rieck
Waterloo, Ontario, Canada.
http://www3.sympatico.ca/n.rieck/


John Reagan

Apr 28, 2016, 7:57:13 AM
Yep. You should post a comment on the video for the author.

I remember going to Cornell many years ago as part of a group from Digital (we gave funding to some research there) to listen to PhD students discuss their theses. One woman described a multi-node system just like this and discussed how they would fail over and handle the case where a node crashed or a network link went down, and when the node came back. She pretty much re-invented the cluster transition algorithm. I hated to tell her that her PhD work was already existing technology. Her face went very pale. I couldn't blame her, but her faculty advisor should be blamed. I mailed her some VMS manuals when I got back to ZKO. I never heard from her as to whether they granted her degree.
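The cluster transition behavior described above rests on a majority-vote quorum rule. A minimal sketch of that idea, purely illustrative and not the actual VMS connection manager code:

```python
# Illustrative majority-quorum rule of the kind a cluster connection
# manager uses (a sketch of the idea, not the actual VMS algorithm):
# the cluster keeps running only while a majority of expected votes is
# present, which is what makes a network partition safe to survive.

def quorum(expected_votes: int) -> int:
    """Minimum votes required to continue: a strict majority."""
    return expected_votes // 2 + 1

def has_quorum(present_votes: int, expected_votes: int) -> bool:
    return present_votes >= quorum(expected_votes)

# A 3-node cluster (1 vote each) survives losing one node:
assert has_quorum(2, 3)
# A 4-node cluster split 2/2 loses quorum on both sides, so neither
# half keeps running as a "split brain":
assert not has_quorum(2, 4)
```

The strict-majority requirement is what guarantees at most one partition can ever continue operating.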

Stephen Hoffman

Apr 28, 2016, 10:43:05 AM
On 2016-04-28 11:31:00 +0000, Neil Rieck said:

> I love these Computerphile lectures from the University of Nottingham
> which I try to watch as soon as they are available.
>
> This lecture discusses "Paxos" which sounds suspiciously similar to VMS
> clustering software I first saw in 1987 in a VAX Cluster.

In the OpenVMS products and APIs used by most applications, Paxos is
much closer to a database or to DECdtm than to clustering — though
there are certainly transactional aspects within the cluster connection
manager, and underneath the lock manager and related. (There are some
interesting sequencing corner cases in the lock manager too, but I
digress.)

I've referenced Paxos here on several occasions. Here is some
introductory reading:

http://the-paper-trail.org/blog/distributed-systems-theory-for-the-distributed-systems-engineer/

http://the-paper-trail.org/blog/consensus-protocols-paxos/


Some of what's available for distributed processing and clustering with
Apache tools:

Apache Kafka, Storm and Zookeeper
https://kafka.apache.org
http://storm.apache.org
https://zookeeper.apache.org/doc/trunk/zookeeperOver.html


Clustering as implemented in OpenVMS hasn't particularly changed in the
last ~twenty years in terms of its fundamental design, APIs or scaling
limits. Clustering has also seen few enhancements in these areas, and not
all that much work to simplify load balancing, sharding, or process
migration. You can do these tasks, but you're either rolling your own
based on the OpenVMS primitives, or using Apache or other such tools.
Or both.


--
Pure Personal Opinion | HoffmanLabs LLC

Kerry Main

Apr 28, 2016, 12:40:07 PM
to comp.os.vms to email gateway
I would agree with this concept, but I would also state that OpenVMS
clustering is a better architecture for those that believe system availability,
node mgmt. (add/deletes), load balancing and inter-node communications
standards should be the responsibility of the OS and NOT the applications.

This is because it is hard (not impossible) to ensure multiple different apps
from different groups will all follow a similar strategy for these topics.

A more basic question for distributed systems - is the core concept of
breaking different functions like Web, App and DB into separate dedicated
servers (P or V) still valid?

The whole concept of separating different OS instances into separate tiers
e.g. Web, App, DB etc. was based on servers that back in the 90's were
overloaded, so it made sense to separate the workloads into dedicated
servers communicating over slow(er) networks (100Mb and now primarily
1Gb).

Today, we have small blades with 64 x 2.5GHz cores, 1.5TB of local
memory, flash storage, and multiple 10+Gb blade chassis interconnections.

Now, if we ignore the fact that commodity OS's today have both technical
and cultural issues with App stacking, in order to drastically reduce delays
with network communications, would it not make sense to use clustered
apps with clustered DB's on the same OS instance? Direct IO calls between
App and DB on the same OS instance, to large memory caches, or direct
IO's to flash, are an order of magnitude faster than sending writes over the
network.
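The order-of-magnitude point can be illustrated with round, assumed latency figures; these are discussion numbers, not measurements:

```python
# Assumed, illustrative latencies for one App-to-DB operation under
# each deployment model (round figures for discussion, not benchmarks):
local_cache_ns = 1_000     # large in-memory cache hit, same OS instance
local_flash_ns = 50_000    # direct I/O to local flash, same OS instance
lan_rtt_ns     = 500_000   # request/response round trip over a 1Gb LAN

# Co-located App+DB versus a network hop per operation:
assert lan_rtt_ns // local_flash_ns == 10    # the order-of-magnitude gap
assert lan_rtt_ns // local_cache_ns == 500   # memory caches cheaper still
```

Multiplied over millions of DB calls, the per-operation gap is what drives the case for stacking tiers on one OS instance.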

Part of the challenge I see today is that distributed systems proponents view
centralized model alternatives as being one server, e.g. the mainframe. They have
not considered a model which is between centralized and distributed, i.e. the
ability to scale up initially on big blades (pick an initial number), but then scale
out by adding big blades dynamically as required with zero change in the
application code.

Imho, App developers should focus on optimizing App development - not
on how to address server mgmt., availability, or inter-node communication issues.

Sure - there may be a few IT environments that need more than 96 x 64-core
2.5GHz blade servers that each have 1.5TB of memory, but these will
be highly custom models anyway, e.g. Google looking at custom Power9+
type server solutions.

Re: 96 server limit in clusters - most here know that limit is there because
that is all that has been tested. Perhaps we will see this increased significantly
in the future with cluster "lite" protocols using RDMA over Ethernet?

What is old is new again?

:-)

Regards,

Kerry Main
Kerry dot main at starkgaming dot com







Stephen Hoffman

Apr 28, 2016, 2:53:58 PM
On 2016-04-28 16:35:18 +0000, Kerry Main said:

> but I would also state that OpenVMS clustering is a better
> architecture for those that believe system availability, node mgmt.
> (add/deletes), load balancing and inter node communications standards
> should be the responsibility of the OS and NOT the applications.

The VMS clustering design here is good for some apps and some
requirements, certainly. But then I've seen more than a few
applications on OpenVMS that do their best to ignore clustering, too —
then get tangled, and fail. Clustering on OpenVMS is not known for
scaling — last I checked, ~255 hosts was the architectural limit on
OpenVMS, and that's a rounding error in what are considered
production-scale cluster configurations in recent times. The
clustering product itself is very expensive, and with effectively no
low-end pricing for hardware or software there's no entry point for
newer and for smaller deployments, as David Dachtera was absolutely
right to reference. And the existing cluster management APIs and
cluster application development APIs are — and I'm going to be
exceedingly charitable here — complex, arcane, and clunky to use, or —
as is the case with OMS — undocumented. As for management, the SNMP
bits (via SMH) are old (no SNMPv3, insecure) and SMH itself is
under-patched at best.

The file-based implementation of the cluster shared configuration data
and the rest of the so-called node management is — as I've previously
indicated — not a design normally regarded as being particularly
robust, elegant, simple or easily-maintainable. Node management is
scattered around in twenty or thirty files with sharing that must be
manually maintained, the APIs are all over the place, the integration
with LDAP is little more than passwords, and there are no baked-in
LDAP-related command line tools and nothing beyond the
little-recognized LDAP API that appeared a while back....

But again, app stacking and clustering are not related to Paxos.
Clustering and the DLM also have little or nothing to do with what Paxos
provides applications, except as these provide some underpinning for
DECdtm.

For those that want an overview of what's involved here with app
stacking — since Kerry is so fond of that, and some of the related
pieces that are either limited or entirely missing from OpenVMS — see
the following ancient, 2004-vintage BSD write-up on the topic:

http://queue.acm.org/detail.cfm?id=1017001

If VSI is to make progress outside of the installed base and the
existing legacy applications and to appeal to newer deployments and
wholly new applications and development, there's going to be more than
a little effort involved — and hopefully a willingness to scrape off
the accreted code and design barnacles, and re-think and reimplement
the related pieces of the operating system. Adding Paxos support and
sandboxes and better management and other updates might or will be part
of that certainly, but VSI has a tremendous pile of work ahead of them
with just the x86-64 port and with getting to sustainable profits.

dgordo...@gmail.com

Apr 28, 2016, 4:13:30 PM
On Thursday, April 28, 2016 at 2:53:58 PM UTC-4, Stephen Hoffman wrote:
> Clustering on OpenVMS is not known for
> scaling -- last I checked, ~255 hosts was the architectural limit on
> OpenVMS,

The architectural limit is 252.

The "SPD" supported limit is 96 but there have been supported customers with node counts around 150 in the past. I suspect no one has node counts like that any more. When I worked in VMS Engineering the first time, the anecdotal evidence gathered was that most customers who had more than 2 nodes were between 4 and 8 inclusive.

Stephen Hoffman

Apr 28, 2016, 5:20:55 PM
There's a reason for fewer cluster systems, beyond just the increased
capabilities and improved server density...

Anecdata from around here is that mention of the cluster license price
ends more than a few of the purchase and upgrade discussions. In one
case, we bumped a two-host AlphaServer DS10 cluster up to a pair of
DS15 boxes, as that hardware upgrade was a rounding error in the
configuration price in comparison with the base and cluster and a few
other core software licenses. The combined bottom-end OpenVMS I64
BOE base license and the cluster license price presently looks
to be just under US$11K per socket, for the bottom-end two-socket
boxes. Which is lower than the Alpha DS-class config prices though, as
those were even further up into nosebleed territory for what were the
bottom-end Alpha boxes.

This is not the way to gain any added or new business in the entry- and
low-end, and no entry level means few new cluster configurations and
cluster-aware applications. There are folks that do without clustering and swap
disks around, for that matter. Folks can get some very nice server
configurations — including all hardware and software — for rather less
than that ~US$11K software price, and less than that ~US$22K software
price when populating both sockets in the box.

Whether or not there is any price elasticity here remains to be seen.
Though without achieving and maintaining profitability, VSI itself will
not remain commercially viable, either.

While on the subject of pricing and price lists:
http://www.spacex.com/about/capabilities

Phillip Helbig (undress to reply)

Apr 29, 2016, 2:24:40 AM
In article <f5618a2b-0ea5-4c18...@googlegroups.com>,
dgordo...@gmail.com writes:

> The "SPD" supported limit is 96 but there have been supported customers
> with node counts around 150 in the past. I suspect no one has node counts
> like that any more. When I worked in VMS Engineering the first time, the
> anecdotal evidence gathered was that most customers who had more than 2
> nodes were between 4 and 8 inclusive.

I think this is probably right. Almost all of the rest is probably
9--20.

Michael Moroney

Apr 29, 2016, 9:57:09 AM
In article <f5618a2b-0ea5-4c18...@googlegroups.com>,
dgordo...@gmail.com writes:

> The "SPD" supported limit is 96 but there have been supported customers
> with node counts around 150 in the past. I suspect no one has node counts
> like that any more. When I worked in VMS Engineering the first time, the
> anecdotal evidence gathered was that most customers who had more than 2
> nodes were between 4 and 8 inclusive.

The supported limit was never more than 96 members.

I have seen in the code where a field went negative, causing a crash when
it reached 128 (cluster members), overflowing a signed byte... found by a
customer. Later I heard stories of how a certain customer had a ~160-node
cluster.
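The wrap described above is easy to reproduce. A minimal demonstration of the signed-byte overflow class of bug; illustrative only, the actual VMS field and code are not shown here:

```python
# A member count held in a signed byte goes negative when the 128th
# member joins -- the same class of bug described in the anecdote.
import ctypes

count = ctypes.c_int8(127)    # largest value a signed byte can hold
count.value += 1              # the 128th cluster member arrives...
assert count.value == -128    # ...and the count wraps negative: crash path

# An unsigned byte would have counted on up to 255 before wrapping:
ucount = ctypes.c_uint8(127)
ucount.value += 1
assert ucount.value == 128
```

ctypes does no overflow checking on its fixed-width integer types, which makes it a convenient way to mimic the silent truncation of the original field.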

Sort of brings up a side question. What does VMS need to do to continue
being the best cluster type solution in a more modern environment?

Stephen Hoffman

Apr 29, 2016, 10:23:55 AM
On 2016-04-29 13:56:38 +0000, Michael Moroney said:

> Sort of brings up a side question. What does VMS need to do to continue
> being the best cluster type solution in a more modern environment?

Replace the error-prone and manually-maintained shared-RMS-indexed-file
"design" for the system and application cluster shared data, as a
starting point.

Dirk Munk

Apr 29, 2016, 10:39:33 AM
I just wonder when the cluster communication (exchange of lock
information etc.) will become a serious problem for system performance.
New solid state storage will become so fast compared with traditional
hard disks that exchanging lock information may take more time than
actual IOs.

That is why I wrote about PCIe clustering a short while ago; it is at
the moment the fastest way to exchange information between two or more
systems.

Dirk Munk

Apr 29, 2016, 10:41:57 AM
Stephen Hoffman wrote:
> On 2016-04-29 13:56:38 +0000, Michael Moroney said:
>
>> Sort of brings up a side question. What does VMS need to do to
>> continue being the best cluster type solution in a more modern
>> environment?
>
> Replace the error-prone and manually-maintained shared-RMS-indexed-file
> "design" for the system and application cluster shared data, as a
> starting point.
>

Perhaps VSI could ask Hein to modernize RMS?

David Froble

Apr 29, 2016, 10:51:56 AM
Sort of brings up the question of "what is a more modern environment"?

There is more computing to do today.

Today's computers can do a lot more.

Got to wonder, which of the above is progressing faster?

VAXman-

Apr 29, 2016, 11:23:49 AM
In article <nfvqjb$f2q$1...@dont-email.me>, Stephen Hoffman <seao...@hoffmanlabs.invalid> writes:
>On 2016-04-29 13:56:38 +0000, Michael Moroney said:
>
>> Sort of brings up a side question. What does VMS need to do to continue
>> being the best cluster type solution in a more modern environment?
>
>Replace the error-prone and manually-maintained shared-RMS-indexed-file
>"design" for the system and application cluster shared data, as a
>starting point.

Some particular file you have issue with or all RMS indexed files?

--
VAXman- A Bored Certified VMS Kernel Mode Hacker VAXman(at)TMESIS(dot)ORG

I speak to machines with the voice of humanity.

Paul Sture

Apr 29, 2016, 12:49:28 PM
On 2016-04-29, VAXman- @SendSpamHere.ORG <VAX...@SendSpamHere.ORG> wrote:
> In article <nfvqjb$f2q$1...@dont-email.me>,
> Stephen Hoffman <seao...@hoffmanlabs.invalid> writes:
>>On 2016-04-29 13:56:38 +0000, Michael Moroney said:
>>
>>> Sort of brings up a side question. What does VMS need to do to continue
>>> being the best cluster type solution in a more modern environment?
>>
>>Replace the error-prone and manually-maintained shared-RMS-indexed-file
>>"design" for the system and application cluster shared data, as a
>>starting point.
>
> Some particular file you have issue with or all RMS indexed files?

SYSUAF and RIGHTSLIST would be a good starting point. Right from the
introduction of RIGHTSLIST it was too easy to get them out of step
using AUTHORIZE itself.

o - it's not easy to add fields to SYSUAF without breaking existing
apps.
o - the RIGHTSLIST manipulation commands within AUTHORIZE have me
consulting the manual way too often.

VMSMAIL_PROFILE too. Looking at my version I see a NOSPAMM as
well as a NOSPAM account in there. NOSPAMM isn't in the SYSUAF
so I assume it got there via finger trouble, but didn't get
renamed or deleted when I corrected the entry in SYSUAF.

--
There are two hard things in computer science: cache invalidation,
naming, and off-by-one errors.

Stephen Hoffman

Apr 29, 2016, 1:03:40 PM
I'm sure that faster interconnects and a better RMS will fix this
dumpster fire of "design".

Kerry Main

Apr 29, 2016, 1:05:05 PM
to comp.os.vms to email gateway
> -----Original Message-----
> From: Info-vax [mailto:info-vax...@info-vax.com] On Behalf Of
> Stephen Hoffman via Info-vax
> Sent: 28-Apr-16 2:54 PM
> To: info...@info-vax.com
> Cc: Stephen Hoffman <seao...@hoffmanlabs.invalid>
> Subject: Re: [New Info-vax] OT: what is old is new again?
>
> On 2016-04-28 16:35:18 +0000, Kerry Main said:
>
> > but I would also state that OpenVMS clustering is a better
> > architecture for those that believe system availability, node mgmt.
> > (add/deletes), load balancing and inter node communications
> standards
> > should be the responsibility of the OS and NOT the applications.
>
> The VMS clustering design here is good for some apps and some
> requirements, certainly. But then I've seen more than a few
> applications on OpenVMS that do their best to ignore clustering, too —
> then get tangled, and fail. Clustering on OpenVMS is not known for
> scaling — last I checked, ~255 hosts was the architectural limit on
> OpenVMS, and that's a rounding error in what are considered current
> production-scale cluster configurations in recent times – the
> clustering product itself is very expensive — and with effectively no
> low-end pricing for hardware or software, there's no entry point for
> newer and for smaller deployments, and as David Dachtera was
> absolutely
> right to reference — and the existing cluster management APIs and
> cluster application development APIs are — and I'm going to be
> exceedingly charitable here — complex, arcane, and clunky to use, or —
> as is the case with OMS — undocumented. As for management, the
> SNMP
> bits (via SMH) are old (no SNMPv3, insecure) and SMH itself is
> under-patched at best.
>

While no one here would argue these tech and mgmt. issues do not need
to be improved, the same is true of other platforms as well - especially
commodity OS's.

Anyone with real experience in large companies' Operations shops knows
all of the custom crap that exists today. Each Windows, Linux, Solaris
group has their own tools, their own custom scripts and locally written
utilities.

> The file-based implementation of the cluster shared configuration data
> and rest of the so-called node management is — as I've previously
> indicated — is not a design normally regarded as being particularly
> robust, elegant, simple or easily-maintainable. Node management is
> scattered around in 20 or thirty files and with sharing that must be
> manually maintained, the APIs are all over the place, the integration
> with LDAP is little more than passwords, and there are no baked-in
> LDAP-related command line tools and nothing beyond the
> little-recognized LDAP API that appeared a while back....
>

Listen to the video and re-read the articles you provided earlier on
creating next gen apps with stateful services. These highlight some of
the issues and challenges faced by commodity OS's and other platforms
being used today.

Key point - I'd much rather deal with OpenVMS issues than the ones
described in these articles from a well-known Twitter dev and MS games
developer:

https://www.youtube.com/watch?v=H0i_bXKwujQ

http://highscalability.com/blog/2015/10/12/making-the-case-for-building-scalable-stateful-services-in-t.html

While the native capabilities do need to be improved, just like there are
commercial options on commodity OS's, the same is true for OpenVMS.

No one should expect that VSI is responsible for providing all of the
solutions and add-ons for OpenVMS. In the same way as Microsoft and RH
have active third party solution providers to address gaps or provide added
value to their native offerings (does anyone use the Windows backup
utility??), there are third party add-on options available for OpenVMS as
well. VSI has stated they are working closely with partners.

As an example, System Detective from Point Secure adds some really
nice security features to OpenVMS. Process Software and IDMworks
provide LDAP enhancements.

Comtek provides some nice mgmt. add-ons:
http://www.comtekservices.com/vms.html

> But again, app stacking and clustering are not related to Paxos.
> Clustering and the DLM also has little or nothing to do with what Paxos
> provides applications, except as these provide some underpinning for
> DECdtm.
>
> For those that want an overview of what's involved here with app
> stacking — since Kerry is so fond of that, and some of the related
> pieces that are either limited or entirely missing from OpenVMS — see
> the following ancient, 2004-vintage BSD write-up on the topic:
>
> http://queue.acm.org/detail.cfm?id=1017001
>

Not sure what the point is - the 12-year-old article is written by someone
with a UNIX mentality who is saying sharing apps on the same system is a
good thing (duh?), but that you need to use things like ACL's, group
identifiers and a good namespace scheme to do this. Ok?

In addition, the SysAdmin has a way of examining what the process in the
"jail" is doing, which is similar to what can be done with System Detective
on OpenVMS, i.e. the ability to start "watching" and/or logging what a
suspect process is doing without them knowing about it.

> If VSI is to make progress outside of the installed base and the
> existing legacy applications and to appeal to newer deployments and
> wholly new applications and development, there's going to be more than
> a little effort involved — and hopefully a willingness to scrape off
> the accreted code and design barnacles, and re-think and reimplement
> the related pieces of the operating system. Adding Paxos support and
> sandboxes and better management and other updates might or will be
> part
> of that certainly, but VSI has a tremendous pile of work ahead of them
> with just the x86-64 port and with getting to sustainable profits.
>

Yep, think the message is understood - lots of work ahead.

Having stated this, let's not fall into the trap of thinking "the grass is greener
on the other side" .. the grass on the other side has loads of work required
as well.

Jim

Apr 29, 2016, 1:17:01 PM
On Friday, April 29, 2016 at 12:49:28 PM UTC-4, Paul Sture wrote:
[...]
> VMSMAIL_PROFILE too. Looking at my version I see a NOSPAMM as
> well as a NOSPAM account in there. NOSPAMM isn't in the SYSUAF
> so I assume it got there via finger trouble, but didn't get
> renamed or deleted when I corrected the entry in SYSUAF.

Though the following may not be the case with your NOSPAMM name...

With SYSPRV, VMSMAIL permits registration of mailbox names for non-existent SYSUAF users - and this is handy when one wants to capture mail targeting some specific, but nonexistent, name and route it elsewhere.

MAIL> SET FORWARD/USER=POSTMASTER SYSTEM
MAIL> SET FORWARD/USER=ROOT m...@somewhere.domain

Stephen Hoffman

Apr 29, 2016, 1:19:27 PM
On 2016-04-29 16:49:22 +0000, Paul Sture said:

> On 2016-04-29, VAXman- @SendSpamHere.ORG <VAX...@SendSpamHere.ORG> wrote:
>> In article <nfvqjb$f2q$1...@dont-email.me>,
>> Stephen Hoffman <seao...@hoffmanlabs.invalid> writes:
>>> On 2016-04-29 13:56:38 +0000, Michael Moroney said:
>>>
>>>> Sort of brings up a side question. What does VMS need to do to continue
>>>> being the best cluster type solution in a more modern environment?
>>>
>>> Replace the error-prone and manually-maintained shared-RMS-indexed-file
>>> "design" for the system and application cluster shared data, as a
>>> starting point.
>>
>> Some particular file you have issue with or all RMS indexed files?
>
> SYSUAF and RIGHTSLIST would be good a starting point....

Ayup. It's the utter train-wreck arising from decades of accreted RMS
files, each solving the need for simple, expedient, incremental and
upward-compatible changes, that have in aggregate produced a "design" of
unmitigated stupidity and complexity for the control and configuration
and management of a cluster, but then I'm feeling rather polite today.

Stephen Hoffman

Apr 29, 2016, 1:26:26 PM
Just don't have POSTMASTER or ROOT defined anywhere system-wide. Do
have VMSMAIL_COMMON correctly defined system-wide. And have the rest
of the ~twenty or more logical names correctly established. Remember
to manually coordinate access, so that removing a SYSUAF entry from one
or more SYSUAF files present in the cluster also removes the
VMSMAIL_COMMON entry, too. And down the rabbit hole of "design" we
go....

Stephen Hoffman

Apr 29, 2016, 1:27:16 PM
On 2016-04-29 17:26:23 +0000, Stephen Hoffman said:

> VMSMAIL_COMMON

VMSMAIL_PROFILE

Michael Moroney

Apr 29, 2016, 1:34:11 PM
Stephen Hoffman <seao...@hoffmanlabs.invalid> writes:

>> Perhaps VSI could ask Hein to modernize RMS?

>I'm sure that faster interconnects and a better RMS will fix this
>dumpster fire of "design".


OK.... I realize that RMS indexed files are nowhere near as sophisticated
or snazzy as a database, but could you expand on how they are a 'dumpster
fire of "design"'?

I agree that how they wound up being used in the case of SYSUAF/RIGHTSLIST/
VMSMAIL_PROFILE is a real hack.

Dirk Munk

Apr 29, 2016, 1:39:21 PM
By the way, with faster I don't mean speed in Gb/s. What I mean is
lowest latency. There isn't much traffic involved in cluster
communication, but information should be exchanged as fast as possible,
with minimal latency.

Kerry Main

Apr 29, 2016, 3:20:05 PM
to comp.os.vms to email gateway, Dirk Munk
> -----Original Message-----
> From: Info-vax [mailto:info-vax...@info-vax.com] On Behalf Of
> Dirk Munk via Info-vax
> Sent: 29-Apr-16 1:39 PM
> To: info...@info-vax.com
> Cc: Dirk Munk <mu...@home.nl>
> Subject: Re: [New Info-vax] OT: what is old is new again?
>
That interconnect solution is called RDMA over Converged Ethernet, or RoCE:
http://www.mellanox.com/related-docs/whitepapers/roce_in_the_data_center.pdf

Think of a future RoCE cluster solution for OpenVMS as being similar to
a modern star coupler which would allow very low latency cluster
communications. The WP above shows how it achieves low latency.

I would envision each OpenVMS node in a future cluster would have
3-4 dedicated VLAN's - one of which would be the cluster VLAN. Some
sites do this today & is similar to what VMware also requires for optimized
cluster communications.

"RDMA enables sub-microsecond latency and up to 56Gb/s bandwidth,
translating to screamingly fast application performance, better storage and
data center utilization, and simplified network management. Until recently,
though, RDMA was only available in InfiniBand fabrics. With the advent of
RDMA over Converged Ethernet (RoCE), the benefits of RDMA are now
available to data centers that are based on an Ethernet or mixed-protocol
fabric as well."

Mellanox also has 100Gb NIC's available today as well as adapters which
will do functions normally associated with a firewall e.g. deep packet
inspection and compression/decompression of network loads.
http://www.mellanox.com/ethernet/adapters.php

They even have a product for VMS:
http://www.mellanox.com/page/products_dyn?product_family=164&mtag=vms

(ok, ok, - another example why VSI would never want to go back to just the
"VMS" name)

Stephen Hoffman

Apr 29, 2016, 3:47:31 PM
On 2016-04-29 17:33:39 +0000, Michael Moroney said:

> OK.... I realize that RMS indexed files are nowhere near as
> sophisticated or snazzy as a database, but could you expand how they
> are a 'dumpster fire of "design"' ?

I have no particular quarrels with RMS, for what RMS does. Other than
that it's slow and that the native API is baroque and the defaults are
antiquated, that the transactional support is an extra-cost item, the
general lack of relational capabilities, the closest thing to a data
dictionary is now third-party, and then there's the complete lack of a
FUSE layer for replacing the XQP and RMS. Other than that, RMS is
fine. I won't even ask for an object graph store or anything that
deals with marshaling or unmarshaling the data, or what are
increasingly common features on other platforms. But then RMS and
object stores are not what I'd consider the issue with a cluster.
Those are just general problems with the file system, and maybe VSI
TNFS or CFS or whatever The New File System is called might address or
improve some of these areas?

> I agree how they wound up being used in the case of SYSUAF/RIGHTSLIST/
> VMSMAIL_PROFILE is a real hack.

Ayup. SYSOAF and FRIGHTSLIST and the other dozen or two that comprise
the shared files are definitely a dumpster fire of "design". Which
was what the question I was responding to was asking — areas for
cluster improvements.

Michael Moroney

Apr 29, 2016, 5:13:51 PM
Stephen Hoffman <seao...@hoffmanlabs.invalid> writes:

>On 2016-04-29 17:33:39 +0000, Michael Moroney said:

>> OK.... I realize that RMS indexed files are nowhere near as
>> sophisticated or snazzy as a database, but could you expand how they
>> are a 'dumpster fire of "design"' ?

>I have no particular quarrels with RMS, for what RMS does. Other than
>that it's slow and that the native API is baroque and the defaults are
>antiquated, ...

Sorry, misinterpreted you. I thought you were complaining that RMS itself
was the "dumpster fire", not the SYSUAF/RIGHTSLIST/other files mess.

VAXman-

Apr 29, 2016, 5:57:58 PM
In article <itfcvc-...@news.chingola.ch>, Paul Sture <nos...@sture.ch> writes:
>On 2016-04-29, VAXman- @SendSpamHere.ORG <VAX...@SendSpamHere.ORG> wrote:
>> In article <nfvqjb$f2q$1...@dont-email.me>,
>> Stephen Hoffman <seao...@hoffmanlabs.invalid> writes:
>>>On 2016-04-29 13:56:38 +0000, Michael Moroney said:
>>>
>>>> Sort of brings up a side question. What does VMS need to do to continue
>>>> being the best cluster type solution in a more modern environment?
>>>
>>>Replace the error-prone and manually-maintained shared-RMS-indexed-file
>>>"design" for the system and application cluster shared data, as a
>>>starting point.
>>
>> Some particular file you have issue with or all RMS indexed files?
>
>SYSUAF and RIGHTSLIST would be good a starting point. Right from the
>introduction of RIGHTSLIST it was too easy to get them out of step
>using AUTHORIZE itself.

That's not a problem with RMS indexed files; it's a problem of implementation
using RMS indexed files.

Dirk Munk

Apr 29, 2016, 6:14:57 PM
Thanks Kerry,

I just don't know why I should use Ethernet for this purpose. At first
it may seem a logical choice, but I have my doubts.

For high speed low latency cluster traffic, you want a separate
interface, you don't want to use that interface for other traffic.

You may also want to use a separate switch, you don't want any
non-cluster traffic on those interfaces.

So what you want is a separate infrastructure for cluster traffic, and
then in principle the type of infrastructure doesn't matter any more. It
can be Ethernet, or Infiniband, or PCIe directly.

I would go for PCIe directly, no need for extra layers on top of PCIe.
Because that is what happens otherwise: PCIe > Ethernet on one end, then
Ethernet > PCIe on the other.

It's basically the same reason why you wouldn't use TCP/IP for RDMA
traffic, the less overhead the better.

I knew about the Mellanox adapters; they even have adapters that can be
used for Infiniband as well as Ethernet.





Kerry Main

unread,
Apr 29, 2016, 6:40:04 PM4/29/16
to comp.os.vms to email gateway
> -----Original Message-----
> From: Info-vax [mailto:info-vax...@info-vax.com] On Behalf Of
> Dirk Munk via Info-vax
> Sent: 29-Apr-16 6:15 PM
> To: info...@info-vax.com
> Cc: Dirk Munk <mu...@home.nl>
> Subject: Re: [New Info-vax] OT: what is old is new again?
>

[snip...]

>
> Thanks Kerry,
>
> I just don't know why I should use Ethernet for this purpose. At first
> it may seem a logical choice, but I have my doubts.
>
> For high speed low latency cluster traffic, you want a separate
> interface, you don't want to use that interface for other traffic.
>
> You may also want to use a separate switch, you don't want any
> non-cluster traffic on those interfaces.
>

Which is why you would use a separate VLAN for cluster communications;
and assuming a multi-site cluster with sites within 100km of each other
and fibre links, extended layer-2 VLANs could be used for this as well.

> So what you want is a separate infrastructure for cluster traffic, and
> then in principle the type of infrastructure doesn't matter any more. It
> can be Ethernet, or Infiniband, or PCIe directly.
>

As the whitepaper mentioned, dedicated proprietary switches are expensive
and DC Network folks do not want any proprietary switches. Again, think
about multi-site clusters as well.

> I would go for PCIe directly, no need for extra layers on top of PCIe.
> Because that is what happens, PCIe > Ethernet - Ethernet > PCIe.
>
> It's basically the same reason why you wouldn't use TCP/IP for RDMA
> traffic, the less overhead the better.
>

The whitepaper shows how RDMA bypasses the TCP/IP stack on both ends.

> I knew about the Mellanox adapters, they even have adapters that can
> be
> use for Infiniband as well as Ethernet.
>

See note above about DC Network folks not wanting to support dedicated
proprietary switches. Also multi-site clusters?

David Froble

unread,
Apr 29, 2016, 7:01:28 PM4/29/16
to
It's still not a fair claim. Back in 1978 what else was available? What other
methods were available?

Now, the real problem is, when better tools and such became available, there was
not any effort to take a look at using them to update VMS.

You could claim that a horse and buggy is also a "dumpster fire", but at one
time they were better than walking.

You could claim the wheel is a dumpster fire, but at one time it was better than
dragging things across the ground.

David Froble

unread,
Apr 29, 2016, 7:12:07 PM4/29/16
to
One of my rather large bitches is, who died and left the network people
(supposedly those who serve others) in charge?

It's like bean counters taking over the top position in companies. We've seen
where that can lead.

Dirk Munk

unread,
Apr 29, 2016, 7:35:25 PM4/29/16
to
Kerry Main wrote:
>> -----Original Message-----
>> From: Info-vax [mailto:info-vax...@info-vax.com] On Behalf Of
>> Dirk Munk via Info-vax
>> Sent: 29-Apr-16 6:15 PM
>> To: info...@info-vax.com
>> Cc: Dirk Munk <mu...@home.nl>
>> Subject: Re: [New Info-vax] OT: what is old is new again?
>>
>
> [snip...]
>
>>
>> Thanks Kerry,
>>
>> I just don't know why I should use Ethernet for this purpose. At first
>> it may seem a logical choice, but I have my doubts.
>>
>> For high speed low latency cluster traffic, you want a separate
>> interface, you don't want to use that interface for other traffic.
>>
>> You may also want to use a separate switch, you don't want any
>> non-cluster traffic on those interfaces.
>>
>
> Which is why you would use separate VLAN for cluster communications
> and assuming a multi-site cluster within 100km of each other and fibre
> links, could be used extended layer2 VLANS for this as well.

The latency involved with a distance of 100km is so enormous that it
would be pointless to use RDMA. Assuming the speed of light is 200,000
km/s over a fibre, a round trip of a signal would take at least
200/200,000 seconds, or 1 millisecond. That is why PCIe clustering
has a maximum distance of 100 meters; in that case a round trip will
take 0.2/200,000 seconds, or 1 microsecond!

The impact of a 1 millisecond latency on the performance of a modern high
performance computer system with high speed solid state storage would be
disastrous. That is a whole new issue for these kinds of clusters.

>
>> So what you want is a separate infrastructure for cluster traffic, and
>> then in principle the type of infrastructure doesn't matter any more. It
>> can be Ethernet, or Infiniband, or PCIe directly.
>>
>
> As whitepaper mentioned, dedicated proprietary switches are expensive
> and DC Network folks do not want any proprietary switches. Again, think
> about multi-site clusters as well.

For multi-site, no problem. Use Ethernet, and even TCP/IP. It doesn't
matter, the extra latency is of no importance. For (very) local
clusters, make a good study of advantages and disadvantages. "We don't
like that" isn't a very good argument in my opinion.

>
>> I would go for PCIe directly, no need for extra layers on top of PCIe.
>> Because that is what happens, PCIe > Ethernet - Ethernet > PCIe.
>>
>> It's basically the same reason why you wouldn't use TCP/IP for RDMA
>> traffic, the less overhead the better.
>>
>
> Whitepaper shows how RDMA bypasses the TCPIP stack on both ends.

I know, what I was trying to say is that *if* you were to use TCP/IP, you
would introduce extra overhead and latency. It's the same with iSCSI: it
adds the IP stack, and with that latency, no matter what the line speed is.
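The round-trip arithmetic above is easy to generalize; a quick back-of-the-envelope sketch in Python, assuming the same ~200,000 km/s propagation speed in fibre:

```python
# Minimum round-trip signal latency over fibre, ignoring switch and
# adapter overhead.  Assumes signals travel at roughly 200,000 km/s in
# glass (about two-thirds of c), as in the post above.

FIBRE_SPEED_KM_S = 200_000.0

def round_trip_seconds(distance_km: float) -> float:
    """Minimum round-trip time for a signal over the given one-way distance."""
    return (2.0 * distance_km) / FIBRE_SPEED_KM_S

# 100 km multi-site link: about 1 millisecond minimum round trip.
print(round_trip_seconds(100.0))
# 100 m PCIe-style local link: about 1 microsecond.
print(round_trip_seconds(0.1))
```

No amount of extra bandwidth changes these numbers; only shortening the distance does.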

Stephen Hoffman

unread,
Apr 29, 2016, 8:30:53 PM4/29/16
to
While there are always some common threads, the problems that we're all
dealing with are far larger and far more complex than back when
the VAX-11 or PDP-11 was common.

There's still a business for farriers and for horse tack, and I've used
some horse-drawn equipment. Then there was that time we needed to
upright a horse, but that's fodder for another discussion. But that
market works rather differently than the computing market, where you
either move your software forward or your ssh eventually doesn't
connect.

Will the future of OpenVMS be a competitive platform with wholly new
applications and new ISVs, and with updates and enhancements? Or will
the remaining folks still using OpenVMS in a decade be working to
maintain configurations equivalent to that Canadian nuclear-plant
PDP-11, or of your local tack shop?

The question was around what could be done to improve clustering. As
I've stated, the configuration and management and implementation and
user interface of clustering on OpenVMS has... deficiencies. (And
I've certainly been responsible for parts of that hackery and parts of
the existing doc, too. The canonical logical name documentation was
something I hacked together in the late 1990s, and that doc hasn't even
made it into the main manuals. In ~sixteen years. And that's
describing parts of the existing morass. But I digress.)

How far can VSI drag OpenVMS forward? Dunno. But if they can't, or if
they don't, well, you can guess where all this ends up, eh?

Kerry Main

unread,
Apr 29, 2016, 9:25:05 PM4/29/16
to comp.os.vms to email gateway
The purpose of a multi-site cluster, and one of the critical reasons why sites
like the Shanghai Stock Exchange chose OpenVMS as their next-gen platform
that went live back in the 2011 timeframe, was being able to provide DR with
no loss of data in a significant event at one site (often called RPO=0).

Regardless of the technology, in mission critical env's, one has to be able to
accommodate RPO=0 requirements.

In any mission critical cluster design (today and in the future), this means
some form of sync updates between at least 2 sites. The alternative is async
updates which means you are ok with losing some amount of data.

While it is heavily dependent on the App and the amount of writes vs reads,
a rule of thumb is that to maintain acceptable performance, the distance
between sites should be less than 100km. The closer the sites, the smaller
the latency, but also the bigger the risk that a single significant event might
take out BOTH sites. It's the business that typically will decide on how much
risk they are willing to take.
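The RPO=0 distinction can be made concrete; a deliberately simplified sketch (Python, with invented interfaces) of synchronous versus asynchronous write acknowledgement:

```python
# RPO=0 means a commit is acknowledged only after BOTH sites have the
# data; async replication acknowledges after one site, accepting some
# possible data loss.  The Site and queue interfaces here are invented
# purely for illustration.

class Site:
    def __init__(self, name):
        self.name, self.log = name, []

    def persist(self, record):
        self.log.append(record)

def sync_commit(primary, secondary, record):
    """RPO=0: both sites persist before the caller sees success."""
    primary.persist(record)
    secondary.persist(record)   # this wait is the inter-site round trip
    return True

def async_commit(primary, secondary, record, ship_queue):
    """RPO>0: acknowledge after the primary; ship to the secondary later."""
    primary.persist(record)
    ship_queue.append((secondary, record))  # lost if the primary site is lost
    return True
```

The inter-site round trip sits inside every synchronous commit, which is why the distance between sites matters so much.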

[snip..]



mcle...@gmail.com

unread,
Apr 29, 2016, 10:08:52 PM4/29/16
to

I'm just waiting for Stephen Hoffman to say something positive about VMS. It must be a couple of months (at least) since he's done so.

Paul Sture

unread,
Apr 30, 2016, 4:32:18 AM4/30/16
to
On 2016-04-29, VAXman- @SendSpamHere.ORG <VAX...@SendSpamHere.ORG> wrote:
> In article <itfcvc-...@news.chingola.ch>,
> Paul Sture <nos...@sture.ch> writes:
>>On 2016-04-29, VAXman- @SendSpamHere.ORG <VAX...@SendSpamHere.ORG> wrote:
>>> In article <nfvqjb$f2q$1...@dont-email.me>,
>>> Stephen Hoffman <seao...@hoffmanlabs.invalid> writes:
>>>>On 2016-04-29 13:56:38 +0000, Michael Moroney said:
>>>>
>>>>> Sort of brings up a side question. What does VMS need to do to continue
>>>>> being the best cluster type solution in a more modern environment?
>>>>
>>>>Replace the error-prone and manually-maintained shared-RMS-indexed-file
>>>>"design" for the system and application cluster shared data, as a
>>>>starting point.
>>>
>>> Some particular file you have issue with or all RMS indexed files?
>>
>>SYSUAF and RIGHTSLIST would be good a starting point. Right from the
>>introduction of RIGHTSLIST it was too easy to get them out of step
>>using AUTHORIZE itself.
>
> That's not a problem with RMS indexed files; it's a problem of
> implementation using RMS indexed files.

Correct, but only up to the point where you want to add extra fields
(arguably implementation again) or reorganise in situ and/or on the fly.

Hoff mentioned that RMS Journalling was an extra product and CDD now
belongs to someone else. I was using a product back in the eighties
which had journalling and a data dictionary built in (OK that product
wasn't cheap either and also suffered from a lack of marketing).

I don't recall much about CDD now, but in my opinion it fell short of
the product I was already using. We had things like field headings,
validation pattern matches and subroutines, help and error messages all
definable in the data dictionary, and those could be pulled in by a form
generator, which in turn could be instructed to refresh those values
after a data dictionary update.

Changing e.g. company name across all screen forms was a relative piece
of cake using a system like that. Indeed changing the length of a given
data field for the application programs themselves was relatively easy -
the hard part was converting the physical RMS files, but with COBOL's
MOVE CORRESPONDING feature, the programs to do that were trivial to
generate. OK, the hard part was testing afterwards and finding
hard-coded snippets which hadn't played by the rules :-)

What Django has now for Field Options is that basic concept extended:

<https://docs.djangoproject.com/en/1.9/ref/models/fields/>

and using a relational database you use its own tools for converting
fields (and Django generates those instructions for you).

But back to RMS, with all the good stuff you have done with Attunity you
are probably better qualified than most of us to see where the innards
of RMS fall short :-)

Dirk Munk

unread,
Apr 30, 2016, 4:56:45 AM4/30/16
to
Let there be no mistake, I absolutely agree with you on these issues.

The point I'm trying to make is that in the near future a single system
with 'memory type' solid state storage will be so fast, that clustering
such a system over any significant distance ( > 100m or even > 50m )
will slow it down enormously. The speed of light is too slow!

A read IO from a storage array with conventional hard disks takes about
10 milliseconds, a read IO from solid state storage can be thousands of
times faster in the near future. The distance between CPU and storage
will become very important.

We will have to learn how to deal with these matters, it requires a new
way of thinking.


Kerry Main

unread,
Apr 30, 2016, 8:55:05 AM4/30/16
to comp.os.vms to email gateway
> -----Original Message-----
> From: Info-vax [mailto:info-vax...@info-vax.com] On Behalf Of
> Paul Sture via Info-vax
> Sent: 30-Apr-16 4:32 AM
> To: info...@info-vax.com
> Cc: Paul Sture <nos...@sture.ch>
> Subject: Re: [New Info-vax] OT: what is old is new again?
>

[snip..]
It's interesting this discussion came up here as I was doing some basic
research into how conventional dev groups today handle what was
typically done by a data dictionary in the past.

Re: CDD - it has a had a number of product changes with DEC/Oracle:
http://www.oracle.com/technetwork/products/rdb/cdd-datasheet-088484.html

From reading a number of discussion groups, it almost seems like groups
today have just decided a data dictionary is not needed, or it's too hard to
implement, or groups are only looking at their App requirements and not
the needs of the organizations or ????

Thoughts ?

What are c.o.v. views on their past and current experiences with data
dictionaries?

Neil Rieck

unread,
Apr 30, 2016, 12:26:33 PM4/30/16
to
IIRC, Carly penned a deal with Oracle which allowed some/all of the OpenVMS clustering technology to be used at Oracle. TTBOMK, this was later marketed as Oracle RAC (real application clusters)

Neil Rieck
Waterloo, Ontario, Canada.
http://www3.sympatico.ca/n.rieck/

Neil Rieck

unread,
Apr 30, 2016, 12:56:21 PM4/30/16
to
On Friday, April 29, 2016 at 10:41:57 AM UTC-4, Dirk Munk wrote:
> Stephen Hoffman wrote:
> > On 2016-04-29 13:56:38 +0000, Michael Moroney said:
> >
> >> Sort of brings up a side question. What does VMS need to do to
> >> continue being the best cluster type solution in a more modern
> >> environment?
> >
> > Replace the error-prone and manually-maintained shared-RMS-indexed-file
> > "design" for the system and application cluster shared data, as a
> > starting point.
> >
>
> Perhaps VSI could ask Hein to modernize RMS?

At this point only VSI would be able to determine if the risk (of modernizing RMS) would be worth the reward. But I wonder if Indexed RMS's time has passed. I am certain that Indexed RMS will always have a place inside VMS (SYSUAF springs to mind) or customer databases which will never be upgraded because the calling program's source code has been lost. But many customers are moving to other solutions.

We have been using indexed RMS files in our shop since 1987 so have developed a lot of expertise (building, tuning, etc). Back in 2014 we started migrating our data away from Indexed RMS files toward MariaDB (an alternate fork of MySQL). At this point in time our databases are split 50/50 between the two technologies and it is becoming apparent that MariaDB is the way to go.

1) First off, locating then retrieving data from MariaDB is a whole lot faster than Indexed RMS.

2) Just this week a new employee notified us that the company had extended the length of official Personal Employee Identification Numbers (PEIN) without telling anyone.

3) In MariaDB the field was lengthened for the client while he was on the phone

4) We still haven't lengthened the field in the Indexed RMS part of the system since this involves doing a database conversion some Sunday morning, which also involves recompiling about 50 programs.

p.s. I have decided not to bet-the-farm on MariaDB since we are getting it from a single source. Mimer appears to be a good candidate for OpenVMS on Alpha and/or Itanium (I have heard it is very popular with Canadian companies like "Volvo Canada"). You can download free trials here:
http://developer.mimer.com/downloads/index.htm
(I have no idea how much a non-trial version would cost)
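A hypothetical sketch of the contrast in points 3 and 4 above (Python's bundled sqlite3 stands in for MariaDB here; the table and column names are invented):

```python
import sqlite3

# In MariaDB the widening in point 3 is a single online statement, e.g.:
#   ALTER TABLE employee MODIFY pein VARCHAR(12);
# Below, sqlite3 stands in to show that client code selecting columns
# by name is unaffected when the schema grows -- no recompile needed.

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE employee (pein TEXT, name TEXT)")
db.execute("INSERT INTO employee VALUES ('123456', 'A. Smith')")

def lookup(conn, pein):
    """Client code: selects by column name, knows nothing of record layout."""
    row = conn.execute(
        "SELECT name FROM employee WHERE pein = ?", (pein,)).fetchone()
    return row[0] if row else None

# Schema change: add a column.  lookup() keeps working unchanged.
db.execute("ALTER TABLE employee ADD COLUMN site TEXT")
print(lookup(db, "123456"))
```

With a fixed-length RMS record, the same change means converting the file and recompiling every program that maps the record.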

Stephen Hoffman

unread,
Apr 30, 2016, 1:05:35 PM4/30/16
to
On 2016-04-30 12:49:55 +0000, Kerry Main said:

> What are c.o.v. views on their past and current experiences with
> datadictionaries?

I usually prefer to use a relational database, which largely avoids
this mess. This being part of why SQLite or PostgreSQL are
interesting. or an object graph store or such.

Stephen Hoffman

unread,
Apr 30, 2016, 1:12:28 PM4/30/16
to
On 2016-04-30 02:08:50 +0000, mcle...@gmail.com said:

> I'm just waiting for Stephen Hoffman to say something positive about
> VMS. It must be a couple of months (at least) since he's done so.

"The reasonable man adapts himself to the world; the unreasonable one
persists in trying to adapt the world to himself. Therefore all
progress depends on the unreasonable man."

Jan-Erik Soderholm

unread,
Apr 30, 2016, 6:26:04 PM4/30/16
to
On 2016-04-30 at 19:05, Stephen Hoffman wrote:
> On 2016-04-30 12:49:55 +0000, Kerry Main said:
>
>> What are c.o.v. views on their past and current experiences with
>> datadictionaries?
>
> I usually prefer to use a relational database, which largely avoids this
> mess. This being part of why SQLite or PostgreSQL are interesting. or an
> object graph store or such.
>
>

But a major point with CDD is to do a simple COPY xx FROM DICTIONARY
in Cobol to get correct record definitions from the same source
that the Rdb database tables were created from. I do not see how
SQLite or PostgreSQL can directly replace that.

Stephen Hoffman

unread,
Apr 30, 2016, 7:36:29 PM4/30/16
to
Sure; if I were obligated to deal with COBOL code and enough scratch
for the Oracle CDD/REPOSITORY licenses and related, that combination
would make for a somewhat more manageable project.

But that still leads to the same sorts of messes that SYSUAF and the
rest suffer from around dealing with the records and with
upgrading the record layouts (RECORD, FIELD, MAP, whatever BASIC or
COBOL or other such use), and — without an object framework of some
sort — I still have to deal with marshaling and unmarshaling the data.

Dealing with RMS involves more than a little code and more than a
little complexity, whether it's BASIC or Fortran or COBOL or C or
otherwise, and tends to lead to approaches and solutions that I would
not prefer to use. Application rolling upgrades in a cluster are an
entirely home-made effort. What should be easier isn't. And what is
often effectively easier — adding another file — tends to lead to more
complex environments over time. Not simpler.

In short, RMS — with or without CDD — is simply not my preference.
When I have access to a precompiler or related or to an object store or
an object graph store, I find dealing with the data management at the
level of the RMS record and fields within the RMS records to be tedious.
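The marshaling and unmarshaling chore mentioned above looks much the same in any language; a minimal Python sketch with an invented fixed-length record layout (not SYSUAF's real one):

```python
import struct

# Hypothetical fixed-length record: 12-byte username, unsigned int UIC,
# unsigned short flags -- the kind of layout an RMS application hard-codes.
RECORD = struct.Struct("<12sIH")

def marshal(username: str, uic: int, flags: int) -> bytes:
    """Pack the fields into the on-disk byte layout (space-padded name)."""
    return RECORD.pack(username.encode("ascii").ljust(12), uic, flags)

def unmarshal(raw: bytes):
    """Unpack the byte layout back into fields."""
    name, uic, flags = RECORD.unpack(raw)
    return name.rstrip(b" ").decode("ascii"), uic, flags

rec = marshal("SYSTEM", 0x10001, 0x0004)
print(unmarshal(rec))
# Widen the name field and every marshal/unmarshal site, plus every
# stored record, must be converted in step -- the upgrade problem above.
```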

David Froble

unread,
Apr 30, 2016, 10:33:44 PM4/30/16
to
There are just too many factors that are good about such a database.

Change a table, doesn't affect a program, unless something specific was done in
the program. The change is rather painless. Expand a field. Add fields (columns).

In too many instances, with ISAM, relative, and such files, way too much
knowledge about the data is built into the programs. Not all bad, but, most of
the time more work, less flexible, and such.

There was a time when a nice tight design using files would stomp all over a
database when doing additions, updates, and such. Retrieval seems to have always
favored the database products. No more; the HW is so fast, any difference
doesn't matter.

If I were to do anything new, not working on existing systems, I'd choose a
database. That is where the problems might occur. You mention "single source"
for MariaDB. Just about anything has a single source. I think your real
concern is the reliability of that source.

It would be good if in some manner one or more database products were available
for VMS, with some good feeling that they will be around for a while. Not
saying the current ones won't. Just saying that having as much confidence as
one might have with RMS would be helpful.

David Froble

unread,
Apr 30, 2016, 10:47:27 PM4/30/16
to
I think that having what you're calling a "data dictionary" is a rather good
thing. It takes, to some extent or other, the knowledge of the format of a data
record out of the purview of the application programs. Not entirely, since the
application programs must work with the data.

The database product I designed and implemented more than 30 years ago had a
built-in data dictionary, sort of. It was not a comprehensive (monolithic) type
of product. Rather than one file handled by some database engine, it had data
files, ISAM or non-keyed, with the record definitions and such as part of the
data file. For example, first 8 blocks, record and field definitions, next x
blocks, space for keys, and the third part the data records. One of the
greatest strengths was the ability to write utility programs that could work on
any file, getting all the required information from the data file itself.

30 years is a long time, and techniques and capabilities have advanced. Still,
having the "data dictionary" separate from any applications programs is in my
opinion a rather good thing. (I could be biased.)
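A toy sketch of that self-describing-file idea (Python; the header format here is invented, not the real product's on-disk layout):

```python
import json

# Toy self-describing data file: a fixed-size header carrying the field
# definitions, followed by fixed-length records -- so a generic utility
# can read any such file with no external dictionary, as described above.

HEADER_LEN = 256  # stand-in for the "first 8 blocks" of definitions

def write_file(path, fields, rows):
    """fields: list of (name, width); rows: tuples of string values."""
    with open(path, "wb") as f:
        f.write(json.dumps(fields).encode("ascii").ljust(HEADER_LEN))
        for row in rows:
            for (name, width), val in zip(fields, row):
                f.write(val.encode("ascii").ljust(width))

def read_file(path):
    """Generic utility: learns the record layout from the file itself."""
    with open(path, "rb") as f:
        fields = json.loads(f.read(HEADER_LEN).decode("ascii"))
        recl = sum(width for _, width in fields)
        rows = []
        while rec := f.read(recl):
            out, off = [], 0
            for name, width in fields:
                out.append(rec[off:off + width].rstrip(b" ").decode("ascii"))
                off += width
            rows.append(tuple(out))
        return fields, rows
```

The point is the second function: it needs no compiled-in record definition, which is the strength described above.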

David Froble

unread,
Apr 30, 2016, 10:50:41 PM4/30/16
to
Stephen Hoffman wrote:
> On 2016-04-30 12:49:55 +0000, Kerry Main said:
>
>> What are c.o.v. views on their past and current experiences with
>> datadictionaries?
>
> I usually prefer to use a relational database, which largely avoids this
> mess. This being part of why SQLite or PostgreSQL are interesting. or
> an object graph store or such.
>
>

I don't think that is an alternative. The relational database still has the
"data dictionary" in some form. It's part of the database design. When you
design a table in a relational database, you're setting up some data dictionary
information.

David Froble

unread,
Apr 30, 2016, 10:55:15 PM4/30/16
to
Are you looking at this from the perspective of Cobol being able to use the
table definitions that exist in a database product? If so, it would have to be
something that Cobol knows about, or custom code to go and get the table
definitions. I don't know enough to address your question.

David Froble

unread,
Apr 30, 2016, 11:00:40 PM4/30/16
to
If speed is the overriding factor, sure. But at times reliability and DR will
be the overriding factors, and people will just have to respect Einstein.

It will depend on the priorities ....

Dirk Munk

unread,
May 1, 2016, 4:29:17 AM5/1/16
to
Very true David. But when the performance of a (future) system plummets
after it has been made part of a cluster, we will have to do a lot of
difficult explaining to customers, managers, etc. They may think it is
the cluster software itself that is the problem. I have seen a manager
try to solve these matters with a higher-speed network link.

My only point is that we, as technicians, must be aware of these things,
and we must learn to calculate with the effects of the limited speed of
light, and the effects on the performance of our computer systems.

Jan-Erik Soderholm

unread,
May 1, 2016, 6:45:08 AM5/1/16
to
On 2016-05-01 at 04:55, David Froble wrote:
> Jan-Erik Soderholm wrote:
>> On 2016-04-30 at 19:05, Stephen Hoffman wrote:
>>> On 2016-04-30 12:49:55 +0000, Kerry Main said:
>>>
>>>> What are c.o.v. views on their past and current experiences with
>>>> datadictionaries?
>>>
>>> I usually prefer to use a relational database, which largely avoids this
>>> mess. This being part of why SQLite or PostgreSQL are interesting. or an
>>> object graph store or such.
>>>
>>>
>>
>> But a major point with CDD is to do simle COPY xx FROM DICTIONARY
>> in Cobol to get correct record definitions from the same source
>> as the Rdb database tables was created from. I do not see how
>> SQLite or PostgreSQL can directly replace that.
>
> Are you looking at this from the perspective of Cobol...

Right, it was obviously a mistake to mention "Cobol" here. :-)
I should have said "your favourite programming language"...

Basic:
%INCLUDE %FROM %CDD "pathname"

Fortran:
DICTIONARY 'ACCOUNTS'

Pascal:
PROGRAM SAMPLE1;
TYPE
%DICTIONARY 'Pascal_SALESMAN_RECORD'

C:
#pragma dictionary "service.salary_record"

Cobol:
COPY "DEVICE:[VMS_DIRECTORY]SALES.CUSTOMER_ADDRESS_RECORD" FROM DICTIONARY.


> being able to use the table definitions that exist in a database product?

No, it enables both the database product *and* the programming language to
use the same data definitions. In the database when doing "create table",
and in the programming languages when defining the corresponding records.


> so, it would have
> to be something that Cobol knows about, or, custom code to go and get the
> table definitions.

No no no. It reads the records defined in the CDD. Not in the "database".
And it is nothing "custom"; it is built into the compilers.

> I don't know enough...

Obviously. :-)

> ...to address your question.

There was no question.
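The single-source idea above is easy to mimic even without CDD; a hypothetical sketch that emits both the SQL DDL and a COBOL record declaration from one shared definition (the field names and mapping rules here are invented for illustration):

```python
# One shared definition drives both the database DDL and the language
# record declaration -- a (very) poor man's CDD.  Names and type-mapping
# rules below are invented, not any real dictionary's.

FIELDS = [
    ("CUST_ID",   "int",  None),
    ("CUST_NAME", "char", 30),
    ("BALANCE",   "int",  None),
]

def to_sql(table, fields):
    """Emit a CREATE TABLE statement from the shared definition."""
    cols = []
    for name, kind, width in fields:
        cols.append(f"    {name} " + ("INTEGER" if kind == "int"
                                      else f"CHAR({width})"))
    return f"CREATE TABLE {table} (\n" + ",\n".join(cols) + "\n);"

def to_cobol(record, fields):
    """Emit a COBOL record declaration from the same definition."""
    lines = [f"01  {record}."]
    for name, kind, width in fields:
        pic = "PIC S9(9) COMP" if kind == "int" else f"PIC X({width})"
        lines.append(f"    05  {name.replace('_', '-')}  {pic}.")
    return "\n".join(lines)

print(to_sql("CUSTOMER", FIELDS))
print(to_cobol("CUSTOMER-RECORD", FIELDS))
```

Change a width in FIELDS and both outputs move in step, which is exactly what COPY ... FROM DICTIONARY gave you.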


Stephen Hoffman

unread,
May 1, 2016, 10:14:07 AM5/1/16
to
Ayup, and the programmer has to deal with the resulting record layouts
rather less often than with RMS, and the programmer doesn't have to
explicitly and directly resort to creating and managing separate files
nearly as often, and the programmer has to deal with creating and
maintaining export and import and online backup rather less often, etc.
Relational databases do have some downsides — a surprising number of
folks have utterly no clue about them being one of the larger — and
there are certainly also cases where the classic RMS files can be an
appropriate choice, of course.

Kerry Main

unread,
May 1, 2016, 11:20:04 AM5/1/16
to comp.os.vms to email gateway
> -----Original Message-----
> From: Info-vax [mailto:info-vax...@info-vax.com] On Behalf Of
> Neil Rieck via Info-vax
> Sent: 30-Apr-16 12:27 PM
> To: info...@info-vax.com
> Cc: Neil Rieck <n.r...@sympatico.ca>
> Subject: Re: [New Info-vax] OT: what is old is new again?
>

[snip..]

> > Re: CDD - it has a had a number of product changes with DEC/Oracle:
> > http://www.oracle.com/technetwork/products/rdb/cdd-datasheet-
> 088484.html
> >
> > >From reading a number of discussion groups, it almost seems like
> groups
> > today have just decided a data dictionary is not needed or its too hard
> to
> > implement or groups are only looking at their App requirements and
> not
> > the needs of the organizations or ????
> >
> > Thoughts ?
> >
> > What are c.o.v. views on their past and current experiences with data
> > dictionaries?
> >
> > Regards,
> >
> > Kerry Main
> > Kerry dot main at starkgaming dot com
>
> IIRC, Carly penned a deal with Oracle which allowed some/all of the
> OpenVMS clustering technology to be used at Oracle. TTBOMK, this was
> later marketed as Oracle RAC (real application clusters)
>
> Neil Rieck
> Waterloo, Ontario, Canada.
> http://www3.sympatico.ca/n.rieck/
>

Actually, I believe the agreement with Oracle was to license the Tru64
cluster code, which was itself a subset of OpenVMS clustering. The
Tru64 clustering code is what Oracle RAC was originally based on.

Btw - Not too many folks realize that Oracle Enterprise was originally
developed on/for RSX/OpenVMS.

http://www.orafaq.com/wiki/Oracle_Corporation

1978 - Oracle 1 ran on PDP-11 under RSX, 128 KB max memory. Written
in assembly language. Implementation separated Oracle code and user
code. Oracle V1 was never officially released.
1980 - Oracle 2 released - the first commercially available relational database
to use SQL. Oracle runs on DEC PDP-11 machines. Code is still written in
PDP-11 assembly language, but now ran under Vax/VMS.
1982 - Oracle 3 released, Oracle became the first DBMS to run on
mainframes, minicomputers, and PC's (portable code base). First release
to employ transactional processing. Oracle V3's server code was written
in C.
1983 - Relational Software Inc. changed its name to Oracle Corporation.
1984- Oracle 4 released, introduced read consistency, was ported to
multiple platforms, first interoperability between PC and server.
1986 - Oracle 5 released. Featured true client/server, VAX-cluster support
and distributed queries. First DBMS with distributed capabilities.

Kerry Main

unread,
May 1, 2016, 12:15:04 PM5/1/16
to comp.os.vms to email gateway
> -----Original Message-----
> From: Info-vax [mailto:info-vax...@info-vax.com] On Behalf Of
> Jan-Erik Soderholm via Info-vax
> Sent: 01-May-16 6:45 AM
> To: info...@info-vax.com
> Cc: Jan-Erik Soderholm <jan-erik....@telia.com>
> Subject: Re: [New Info-vax] OT: what is old is new again?
>
[snip]

Imho, another example of a gold nugget lost in the shuffle of companies
changing hands, and of best practices from 20+ years ago being dropped,
even though they likely would be a significant help for companies today.

Perhaps this meta data distributed and object oriented repository concept
should be re-invented for next gen systems when dealing with challenges
today like "big data"?

A few randomly selected extracts from a 1991 guide from Digital called
"Digital's Distributed Repository - Blueprint for Managing Enterprise Wide
Information" : (part of NAS, which has today been re-invented as "SOA")

"CDD/Repository provides a shared store of information about systems,
data, and projects. CDD/Repository furnishes the vehicle for centralizing,
standardizing, and integrating information about systems. This facilitates
the implementation, maintenance, management and integration of these
systems. Because CDD/Repository is object oriented, it also provides for
storage, standardization, and sharing of procedures to manage systems,
data, and projects."

"CDD/Repository is the next step beyond CDD/Plus, Digital's Common Data
Dictionary (CDD). CDD/Repository uses an object oriented architecture that
provides maximum flexibility and makes it possible to tailor the repository
to the needs of specific organizations."

"Data administrators, system planners, and systems and data architects use
CDD/Repository for strategic planning, systems planning, data modeling,
system architecture planning and impact analysis."

"Commercial and technical application developers use CDD/Repository for
strategic planning, project management, analysis & design, code generation
and maintenance, and configuration maintenance"

"System managers and administrators use CDD/Repository for systems
management, configuration management, capacity planning, and usage
analysis"

"CDD/Repository uses an object oriented model of process and data.
This object orientation is the basis for control and extensibility because
CDD/Repository contains both data and procedures for manipulating the
data. It is possible to customize CDD/Repository by adding new data and
procedure descriptions."

Evolution from CDD/Plus - "The scope is expanded. CDD/Plus managed
"data about data" records, fields, data elements, and procedures. Because
it is object oriented, CDD/Repository stores information about objects and
methods. Information of interest to software developers, maintainers,
or users is considered an object. Code modules, databases, models, files
and documentation are examples of objects. Methods are the operations
that can be performed on objects, such as creating a new version of a
module, revising a database definition, viewing a model, copying a file
or printing a document."

The above is not necessarily intended to state that CDD/Repository is the
product of choice, but it does perhaps open up the discussion here as to
what OpenVMS or for that matter, any next generation system needs in
the future.

Neil Rieck

unread,
May 1, 2016, 2:48:14 PM5/1/16
to
On Sunday, May 1, 2016 at 10:14:07 AM UTC-4, Stephen Hoffman wrote:
> On 2016-05-01 02:50:39 +0000, David Froble said:
>
> > Stephen Hoffman wrote:
> >> On 2016-04-30 12:49:55 +0000, Kerry Main said:
> >>
> >>> What are c.o.v. views on their past and current experiences with
> >>> datadictionaries?
> >>
> >> I usually prefer to use a relational database, which largely avoids
> >> this mess. This being part of why SQLite or PostgreSQL are
> >> interesting. or an object graph store or such.
> >
> > I don't think that is an alternative. The relational database still
> > has the "data dictionary" in some form. It's part of the database
> > design. When you design a table in a relational database, you're
> > setting up some data dictionary information.
>
> Ayup, and the programmer has to deal with the resulting record layouts
> rather less often than with RMS, and the programmer doesn't have to
> explicitly and directly resort to creating and managing separate files
> nearly as often, and the programmer has to deal with creating and
> maintaining export and import and online backup rather less often, etc.
> Relational databases do have some downsides -- a surprising number of
> folks have utterly no clue about them being one of the larger -- and
> there are certainly also cases where the classic RMS files can be an
> appropriate choice, of course.
>
>
> --
> Pure Personal Opinion | HoffmanLabs LLC

The big problem here is this: we old farts are in the twilight of our careers. We know how to do record layouts for both FMS and RMS (stingily counting bytes from a time when memory was both precious and expensive). We also know how to tune the files as well as do a recovery whenever something goes wrong.

But if we want this system to survive our retirements then we need to move to a technology that more people are familiar with. And this would be something SQL-compliant. Now we are quite adept at using products like Attunity to provide SQL-like access to our RMS data, and there's nothing wrong with that, but the core system still requires someone with 30-year-old special skills to keep it all going. (think Amish)

So I began experimenting with Oracle Rdb, which was a really neat product, but my boss damn near had a coronary when he learned that Oracle quoted US$30k for a 4-user license on our AS-DS20e.

So from that time on I have been trialing less expensive and free alternatives. Everyone raved about MySQL so I checked out MySQL-4 on OpenVMS (found it in the DECUS offerings) but it seemed to me to be just a proof of concept (it certainly wasn't faster than ISAM RMS).

With MySQL-5 I thought they had finally gotten to the point where we should be looking at it more seriously. Over the next few years I recall trialing three different versions of MySQL-5 on OpenVMS, and you could actually tell that each version was faster and more stable.

Oracle must have noticed this too because they acquired Sun and then began slowing development. Around the same time, the Swedes (who had sold MySQL AB to Sun) forked MySQL into MariaDB, which seemed to me like a logical continuation of MySQL-5.

We're currently running MariaDB-5.5-25, which can outperform ISAM RMS on every test we can throw at it. Meanwhile, the Swedes are under pressure from Oracle so will no longer be releasing MariaDB in lock-step with MySQL. So the latest version jumped from 5 to 10, and if you check out the developer notes it would appear that they have rewritten 50% of it.

So how do they make their money? They give it away but sell support contracts. But they do not offer anything for OpenVMS, which means we must rely on someone "to provide us with a build" or "do a build ourselves". I asked that single provider if he was willing to sell us a support contract and he responded that he was not interested in doing this.

So for me I can only see two options.

1) develop a fallback strategy based upon Mimer
2) hope that someone at VSI is considering doing a MariaDB build

Jan-Erik Soderholm

unread,
May 1, 2016, 5:27:23 PM5/1/16
to
Den 2016-05-01 kl. 20:48, skrev Neil Rieck:

>
> We're currently running MariaDB-5.5-25 which can outperform ISAM RMS on
> every test we can throw at it.

http://www3.sympatico.ca/n.rieck/docs/openvms_notes_mysql_mariardb.html

That is still in some test environment, right? You can't be seriously
thinking about production use with the kind of issues that are described
there?

Not being able to run in serializable isolation mode? And not using
serializable "because we are not a bank"? Where on the two linked pages
do they specifically refer to banks?

And ridiculously long shutdown times?

Your demo application asks on the command line for user/password.
What if you do not want that and do not want to have hardcoded
user/passwords or user/passwords stored anywhere else?

Linking "produces lots of informational and warning messages".
So you have to filter the linker output to find your "real"
linking errors (those unrelated to MariaDB).


> So for me I can only see two options.
>
> 1) develop a fallback strategy based upon Mimer 2) hope that someone at
> VSI is considering doing a MariaDB build
>

Or hope that the VSI/Oracle talks results in something good. :-)

Snowshoe

unread,
May 2, 2016, 11:28:29 AM5/2/16
to
On 4/30/2016 10:33 PM, David Froble wrote:
>>> Perhaps VSI could ask Hein to modernize RMS?
>>
>> At this point only VSI would be able to determine if the risk (of
>> modernizing RMS) would be worth the reward. But I wonder if Indexed
>> RMS's time has passed by. I am certain that Indexed RMS will always have
>> a place inside VMS (SYSUAF springs to mind) or customer databases which
>> will never be upgraded because the calling program's source code has
>> been lost. But many customers are moving to other solutions.
>>
>> We have been using indexed RMS files in our shop since 1987 so have
>> developed a lot of expertise (building, tuning, etc). Back in 2014 we
>> started migrating our data away from Indexed RMS files toward MariaDB
>> (an alternate fork of MySQL). At this point in time our databases are
>> split 50/50 between the two technologies and it is becoming apparent
>> that MariaDB is the way to go.

Does it make any sense to take some sort of database design,
optimize/VMS-ize it and make it actually part of RMS or the rumored "new
file system"? Just a crazy thought. I know it cannot do everything
people use databases for, but perhaps a "one size fits many" type
implementation is possible?

Neil Rieck

unread,
May 2, 2016, 10:28:35 PM5/2/16
to
You did see my previous post about increasing a column width in MariaDB while the complaining customer was on the phone, right?

While we had a few problems with MariaDB-5.5-25 on OpenVMS, it is so damned fast (compared to ISAM RMS) that we have moved over most everything which is not critical to our business. We've got one developer in the RMS world trying to eliminate his 4-5 second delays (this is a huge parts database) to match Maria's 1/2 second delay. It appears that he will soon give up trying.

The serialization setting (which also affects locking) was changed to deal with long shutdown times (not sure if this is due to something we are doing wrong or perhaps a bug in the port to OpenVMS). I thought we needed fancy locking like we had in the RMS world, but this turned out to be false when used from web sessions, which are really connectionless. Do a Google search on "optimistic locking" and "pessimistic locking" to see what I mean.
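The optimistic-locking idea referred to above can be sketched in a few lines. This is only an illustration: SQLite stands in for MariaDB, and the "parts" table, column names and values are invented. The pattern is a version column checked at update time rather than a lock held across the whole read-modify-write.

```python
import sqlite3

# Optimistic locking sketch. A hypothetical "parts" table carries a
# version column; SQLite stands in for MariaDB, the pattern is the same.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE parts (id INTEGER PRIMARY KEY, qty INTEGER, version INTEGER)")
con.execute("INSERT INTO parts VALUES (1, 100, 0)")
con.commit()

def update_qty(con, part_id, new_qty):
    """Read the row, then update only if nobody changed it in between."""
    qty, version = con.execute(
        "SELECT qty, version FROM parts WHERE id = ?", (part_id,)).fetchone()
    cur = con.execute(
        "UPDATE parts SET qty = ?, version = version + 1 "
        "WHERE id = ? AND version = ?", (new_qty, part_id, version))
    con.commit()
    return cur.rowcount == 1   # False means a concurrent writer won; retry

print(update_qty(con, 1, 90))   # True: no concurrent writer got in first
```

No lock is held between the SELECT and the UPDATE, which is why the pattern suits connectionless web sessions.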

As I said previously, every release of MySQL-5 was noticeably better than the one before it so I am counting on improvements just around the corner. Nobody on other platforms (Windows, Unix, Linux) is using MariaDB-5 as they have already moved to MariaDB-10. Mark Berryman is working on a port of MariaDB-10 for VMS which we hope to test soon. If that doesn't happen by June-1 I am going to experiment with Mimer, which is a product we can license along with annual support.

Jan-Erik Soderholm

unread,
May 3, 2016, 4:30:37 AM5/3/16
to
Sure. I do not know of any relational database where that
would be a major issue.

>
> While we had a few problems with MariaDB-5.5-25 on OpenVMS, it is so
> damned fast (compared to ISAM RMS) that we have moved over most
> everything which is not critical to our business. We've got one
> developer in the RMS world trying to eliminate his 4-5 second delays
> (this is a huge parts database) to match Maria's 1/2 second delay. It
> appears that he will soon give up trying.
>
> The serialization setting (which also affects locking) was changed to
> deal with with long shutdown times (not sure if this is due something we
> are doing wrong or perhaps a bug in the port to OpenVMS). I thought we
> needed fancy locking like we had in the RMS world but this turned out to
> be false when used from web sessions which are really connectionless. Do
> a google search of "optimistic locking" and "pessimistic locking" to see
> what I mean.
>
> As I said previously, every release of MySQL-5 was noticeably better
> than the one before it so I am counting on improvements just around the
> corner. Nobody on other platforms (Windows, Unix, Linux) is using
> MariaDB-5 as they have already moved to MariaDB-10. Mark Berryman is
> working on a port of MariaDB-10 for VMS which we hope to test soon. If
> that doesn't happen by June-1 I am going experiment with Mimer which is
> a product we can license along with annual support.
>
> Neil Rieck Waterloo, Ontario, Canada.
>

OK, right. I see that we have slightly different viewpoints simply
because you are comparing with plain RMS files and I am comparing
with Rdb. :-) A lot of the points you mention on your MariaDB page
sound quite weird when seen from an Rdb viewpoint.

When you say that isolation level serializable "affects locking",
is that meant in a negative way? I guess that MariaDB doesn't
have the "snapshot" functionality that Rdb does. And yes, in a
purely web-based environment, that is probably fine... :-) But
even then, you sometimes have requirements to run reports or
summaries where you expect a consistent view of the data during
the total run of multiple reports (hence "repeatable reads").

An accountant would never accept a "summary" and a "details"
report that don't display the same values on the "bottom line".

I guess your 5 sec vs. 0.5s difference is mainly due to on-disk
vs. in-memory access of the data. Hard to tell without details.

I would never accept such long shutdown times. We have had our
database open now for "(elapsed 141 19:04:43)" (since last boot),
and I know that it will shut down in 5-10 sec, including the
rollback/cleanup handling of any open transactions.


But if you are happy with the situation as described on your
MariaDB page, just go ahead. :-)

But then, maybe I'd better try MariaDB on my DS25... :-)

Jan-Erik.




Bob Koehler

unread,
May 3, 2016, 9:15:41 AM5/3/16
to
In article <ng7rmk$2rk$2...@gioia.aioe.org>, Snowshoe <n...@spam.please> writes:
>
> Does it make any sense to take some sort of database design,
> optimize/VMS-ize it and make it actually part of RMS or the rumored "new
> file system"? Just a crazy thought. I know it cannot do everything
> people use databases for, but perhaps a "one size fits many" type
> implementation is possible?

Gee, I hope not. There are already enough people spreading the
byte-stream dogma claiming they don't want something like RMS because
they don't want to have to go through a DBMS.

Besides, a real DBMS has a lot of features that I don't need and
don't want to pay for. All I need is variable length records that
don't make my code know what the line separator is, and keyed-indexed access.

Stephen Hoffman

unread,
May 3, 2016, 10:00:49 AM5/3/16
to
On 2016-05-02 15:31:03 +0000, Snowshoe said:

> Does it make any sense to take some sort of database design,
> optimize/VMS-ize it and make it actually part of RMS or the rumored
> "new file system"? Just a crazy thought. I know it cannot do everything
> people use databases for, but perhaps a "one size fits many" type
> implementation is possible?

How much experience do you have with RMS APIs, and with relational
databases? Have you used Rdb, SQLite, PostgreSQL, MySQL/MariaDB or any
of the other available relational databases?

Grafting a locally-developed, designed and maintained (presumably?)
unique relational database onto OpenVMS is an idea that is not unique.
DEC did this, with Rdb. Grafting a relational database onto RMS
would be unique, however. That's not been done before.

Downside is that the development of Rdb was a massive effort to get it
where it was. The current database market is very different from the
era when Rdb was created. More than a few folks haven't adapted to
these and other changes in the database market, too. But I digress.

Various of the typical relational operations are very different from
what the RMS APIs provide, and RMS has no concept of a data dictionary,
nor any related concepts of the format of the data within individual
records beyond the key fields, nor of complex searches, joins nor the
other, well, relational features that many folks would expect with a
relational database. These are at the core of why folks choose
relational databases over RMS, too.
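As a minimal illustration of the kind of relational operation RMS has no notion of, here is a join plus aggregate. SQLite is used only as a convenient stand-in; the tables and rows are invented.

```python
import sqlite3

# A tiny join + GROUP BY, the sort of query a relational engine answers
# directly and RMS keyed access does not. Hypothetical tables and data.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total INTEGER);
    INSERT INTO customers VALUES (1, 'Acme'), (2, 'Globex');
    INSERT INTO orders VALUES (10, 1, 250), (11, 1, 100), (12, 2, 75);
""")
rows = con.execute("""
    SELECT c.name, SUM(o.total)
    FROM customers c JOIN orders o ON o.customer_id = c.id
    GROUP BY c.name ORDER BY c.name
""").fetchall()
print(rows)   # [('Acme', 350), ('Globex', 75)]
```

With RMS the programmer would read both files by key and do the matching and summing in application code.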

But to your suggestion — if the results of that not-inconsequential
effort don't and can't exceed what SQLite or PostgreSQL or other such
provides, and if folks aren't interested in adopting this new database
— the effort would be wasted. This is particularly so given that the SQLite
and PostgreSQL databases have software licenses that appear compatible
with OpenVMS, and given that RMS APIs are already not exactly the
easiest to use, and given that most folks are probably more interested
in SQL or SQL-like access and tools, and not RMS APIs.

So.... no. AFAICT, this construct doesn't make any sense.

Now were VSI to add SQLite or PostgreSQL into the OpenVMS base
distribution — always present, always available, always started, etc —
and particularly start using it and integrating it, and also integrate
and incorporate LDAP or other databases as necessary, then that's a
viable approach for OpenVMS. But this is going back to the previous
threads that have been posted on this add-a-database topic, too.

Craig A. Berry

unread,
May 3, 2016, 8:11:12 PM5/3/16
to
On 5/2/16 10:31 AM, Snowshoe wrote:

> Does it make any sense to take some sort of database design,
> optimize/VMS-ize it and make it actually part of RMS or the rumored "new
> file system"? Just a crazy thought. I know it cannot do everything
> people use databases for, but perhaps a "one size fits many" type
> implementation is possible?

If you're thinking of a file system with database-like characteristics
(BeOS, etc.), some of those capabilities might well be in the new file
system. But that doesn't give you a general-purpose database.

If you're thinking of adding database features on top of RMS, that's
been done before, and I believe is still available via one or more third
party products. It would certainly be nice to have that capability built
in so people could move their existing RMS-based applications forward
and query their existing data more easily.

Someone declared in this newsgroup a few years ago that he was going to
take advantage of the pluggable storage engine feature of MySQL and
create an RMS-based storage engine for it. I don't think that ever
happened. Might be nice if someone were to revisit that.

Craig A. Berry

unread,
May 3, 2016, 10:33:43 PM5/3/16
to
On 5/1/16 1:48 PM, Neil Rieck wrote:

> We're currently running MariaDB-5.5-25 which can outperform ISAM RMS

> They give it away but sell support
> contracts. But they do not offer something for OpenVMS which means we
> must rely on someone "to provide us with a build" or "do a build
> ourselves". Now I asked that single provider if he was willing to sell
> us a support contract and responded as not being interested in doing this.


> So for me I can only see two options.
>
> 1) develop a fallback strategy based upon Mimer
>
> 2) hope that someone at VSI is considering doing a MariaDB build


And of course "doing a build" would be of limited help with your
situation unless they were also going to offer support for it, right?

At a TUD a few years ago, someone asked about OpenVMS Engineering
providing support for MySQL and the HP person present said, "Why do you
want us to do it?". Apparently that was a rhetorical question as he
didn't wait for an answer. It seemed pretty obvious to me that the
reason people want support for open source packages from the purveyor of
VMS is that no one else is offering it.

Jan-Erik Soderholm

unread,
May 4, 2016, 4:49:12 AM5/4/16
to
Den 2016-05-04 kl. 02:11, skrev Craig A. Berry:
> On 5/2/16 10:31 AM, Snowshoe wrote:
>
>> Does it make any sense to take some sort of database design,
>> optimize/VMS-ize it and make it actually part of RMS or the rumored "new
>> file system"? Just a crazy thought. I know it cannot do everything
>> people use databases for, but perhaps a "one size fits many" type
>> implementation is possible?
>
> If you're thinking of a file system with database-like characteristics
> (BeOS, etc.), some of those capabilities might well be in the new file
> system. But that doesn't give you a general-purpose database.
>
> If you're thinking of adding database features on top of RMS, that's
> been done before,...

"Rdb Transparent Gateway for RMS" comes to mind.

You can/could (it is still available, unsupported, for Alpha) access
and "use" RMS-based data just as if it were an Rdb database, including
doing SQL joins between Rdb and RMS data.

> and I believe is still available via one or more third
> party products. It would certainly be nice to have that capability built
> in so people could move their existing RMS-based applications forward
> and query their existing data more easily.
>

Right. Just what the above did... :-)

Stephen Hoffman

unread,
May 4, 2016, 8:22:56 AM5/4/16
to
On 2016-05-04 02:33:39 +0000, Craig A. Berry said:

> And of course "doing a build" would be of limited help with your
> situation unless they were also going to offer support for it, right?
>
> At a TUD a few years ago, someone asked about OpenVMS Engineering
> providing support for MySQL and the HP person present said, "Why do you
> want us to do it?". Apparently that was a rhetorical question as he
> didn't wait for an answer. It seemed pretty obvious to me that the
> reason people want support for open source packages from the purveyor
> of VMS is that no one else is offering it.


That, and that OpenVMS doesn't have an integrated relational database.
The folks in Bolton should want that for use within OpenVMS itself,
save that any migration out of the existing quagmire would take a
decade.

David Froble

unread,
May 4, 2016, 4:52:17 PM5/4/16
to
Well, it seems like an opportunity to me. For example, I believe it's Mark
Berryman who is building MariaDB for VMS. If VSI decided to distribute the
product as part of VMS, and partnered with Mark to provide support, and made
sure customers had easy methods of obtaining such support, ...

The "vendor" would perhaps have a sufficient revenue stream from support to
ensure the resources for such support.

Or SQLite, or PostgreSQL, or ...

The key would be for VSI to pick one or more RDBMS products and make it happen.

Neil Rieck

unread,
Nov 23, 2016, 12:27:04 PM11/23/16
to
Let me second those remarks:

First off, while I have been using Mark's port of MariaDB-5.5-25 since 2014, we have noticed a few issues. When you check the blogs most people say something like "Oh, that problem went away when we upgraded to MariaDB-10.1"

I understand Mark works for a division of HP/HPE that is being outsourced, so he is very busy doing work that results in putting food on the table. That said, with no version of MariaDB-10 on the horizon for OpenVMS, I have resorted to installing CentOS Linux on a nearby server, then installing MariaDB-10.1.19, which we access from our OpenVMS development platform via the network. (think of this experiment as an alternative form of SAN; I did this only to see if our issues just went away, which they did)

I'm not certain what the open source community did between MariaDB-5 and MariaDB-10 but everyone claims it is faster and we noticed this as well.

We are at a point where 50% of our OpenVMS data still remains on RMS and we would like to move it to MariaDB, but we do not want to be trapped in a single-source situation (which VSI could solve by including MariaDB with OpenVMS, or we could solve by sticking with the Linux alternative). Our only other alternative is to investigate something like Mimer:
http://developer.mimer.com/platforms/index.tml

Neil Rieck
Waterloo, Ontario, Canada.

p.s. do not think that SQLite is a viable alternative to MySQL, MariaDB or Postgres. While SQLite is very powerful from a single-user perspective, it introduces some new issues when used in a multi-user environment. That said, I play with it every day and believe it to be a viable alternative for small RMS-based ISAM projects. I have posted my efforts here:
http://www3.sympatico.ca/n.rieck/docs/openvms_notes_sqlite.html
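One of the multi-user issues alluded to above can be shown in a few lines. With SQLite's default rollback journal there is no server process to queue writers, so a second concurrent writer is simply refused. The table and values here are invented for illustration.

```python
import os
import sqlite3
import tempfile

# Two connections to one SQLite file: the second writer is locked out
# rather than queued, as a server-based database would do for it.
path = os.path.join(tempfile.mkdtemp(), "demo.db")
a = sqlite3.connect(path, timeout=0.1)   # short busy timeout for the demo
b = sqlite3.connect(path, timeout=0.1)
a.execute("CREATE TABLE t (x INTEGER)")
a.commit()

a.execute("BEGIN IMMEDIATE")          # writer A takes the write lock
a.execute("INSERT INTO t VALUES (1)")
try:
    b.execute("BEGIN IMMEDIATE")      # writer B cannot get in
    locked = False
except sqlite3.OperationalError:      # "database is locked"
    locked = True
a.commit()
print(locked)   # True
```

The application has to retry on its own, which is workable for small projects but awkward with many concurrent writers.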

David Froble

unread,
Nov 23, 2016, 3:05:13 PM11/23/16
to
Isn't that part of the reason for new versions, to fix problems?

> We are at a point where 50% of our OpenVMS data still remains on RMS and we
would like to move it to MariaDB but do not want to be trapped in a
single-source situation. (which VSI could solve by including MariaDB with
OpenVMS;

Now, this is where people like me, third party vendors, get confused. Whether
it's Mark, or someone else, or VSI, what's the difference? If a third party
agreed to provide support and such, then why do you insist on only the developer
of VMS?

Jan-Erik Soderholm

unread,
Nov 23, 2016, 7:02:00 PM11/23/16
to
Are you asking what the difference is between one single man and a
company of 100+ people?

Must be something I guess, since many do think there is a difference.



Craig A. Berry

unread,
Nov 23, 2016, 7:36:09 PM11/23/16
to
On 11/23/16 2:05 PM, David Froble wrote:

> Now, this is where people like me, third party vendors, get confused.
> Whether it's Mark, or someone else, or VSI, what's the difference? If a
> third party agreed to provide support and such, they why do you insist
> on only the developer of VMS?

As has been said, numerous times I think, no such third party exists.
There are multiple vendors who offer commercial support for MariaDB and
even more for MySQL. But not on VMS. There are free community edition
downloads of binary installation packages and/or instructions for
installing from source for all major platforms. But not for VMS.

Anyone wanting to sell VMS to people who don't already have it (or are
struggling to keep it) will need an answer for the question "Where can I
get X for VMS?" where X may be MariaDB, NodeJS, MongoDB, or hundreds of
other packages that can be dabbled with for free and high quality
commercial support obtained from multiple vendors. "We hope someone else
will do it" might not always be considered an adequate answer.

Yes, VSI has injected some hope into the equation. But this problem has
been around for a decade or two during which the free stuff has become
vastly more important than it used to be.

David Froble

unread,
Nov 24, 2016, 4:34:24 AM11/24/16
to
Not at all. I didn't mention numbers. There are surely larger organizations
than VSI. Regardless, what's important is expertise in the product in question.
I do seem to recall more than one statement from someone at VSI that "there is
nobody at VSI that worked on ????". Now, what good are numbers if not one of
them knows the product in question?

I ran into this many years ago when attempting to sell the Tolas ERP system,
which did not use RMS for data files. Some bigots would not consider the system
because it didn't use RMS, regardless of how good, or bad, the application
system was. Didn't make sense then, still doesn't make sense now.

Just as today's bigots will say that if an RDBMS isn't used, the application
isn't acceptable. The reality is whether the application system does the job
adequately, or not.

But let's just look at this:

"which VSI could solve by including MariaDB with OpenVMS"

So, tell me, how would this be of any help if nobody at VSI had a clue about
MariaDB? Not saying they do, or don't.

Makes no sense to me, but, then, I don't get out much ....

David Froble

unread,
Nov 24, 2016, 4:37:46 AM11/24/16
to
Yep, all true. But not really on topic. The topic was "why does it have to be
the OS vendor?"

There are two questions. First, are there any who see the need for formal
support as an opportunity? Second, are there any customers who will pay for it?

Jan-Erik Soderholm

unread,
Nov 24, 2016, 5:44:01 AM11/24/16
to
It doesn't, of course. Many are happy getting Rdb from Oracle and
have been for something like 15 years now. But that is a completely
different situation than for MariaDB...




Michael Moroney

unread,
Nov 24, 2016, 1:40:14 PM11/24/16
to
David Froble <da...@tsoft-inc.com> writes:

>I do seem to recall more than one statement from someone at VSI that "there is
>nobody at VSI that worked on ????". Now, what good is numbers, if not one of
>them knows the product in question?

...

>But let's just look at this:

>"which VSI could solve by including MariaDB with OpenVMS"

>So, tell me, how this would be of any help, if nobody at VSI had a clue about
>MariaDB? Not saying they do, or don't.

>Makes no sense to me, but, then, I don't get out much ....

We at VSI wear many hats. Some of them get shoved onto our heads. For
example, none of us really knew DECnet V. But there are customers who
use it and we have to support it. I drew the short straw so I had to
figure out how to build, test and support it, from a bunch of backup
images from HP.

If none of us know X, but we have to do X, we learn X.

To get something like MariaDB or other such freeware onto VMS, the
managers have to be convinced that doing so will sell more VMS, and that
it's worth taking someone's time away from important stuff like
X86, the 64-bit file system, etc.

I am not saying it's not a good idea, just that VSI is not ZK3-4 with its
many occupants.

Kerry Main

unread,
Nov 24, 2016, 3:10:04 PM11/24/16
to comp.os.vms to email gateway
100% agree ..

VSI needs to focus on the core functions while working with
partners / ISV's / Dev community to develop / support other value
add components. That's the model other platform providers use and
VSI is no different. Same applies to things like graphics.

Hey, opportunity knocks for someone or some company to take on
MariaDB support via paid support contracts ...

Specifically to MariaDB, while it has some industry acceptance,
the issue I have with it is similar to some open source offerings
i.e. no native OpenVMS cluster support. Ok, perhaps someone can
correct me here, but I believe there are some limited shared-nothing
MariaDB cluster offerings, on Linux only.

I would like to see more emphasis on not just getting Open Source
offerings working on OpenVMS, but also supporting native (shared
disk) clustering e.g. like the Apache Web Server port (supports
active-active native clustering).

David Froble

unread,
Nov 24, 2016, 6:40:08 PM11/24/16
to
Yep!

> Specifically to MariaDB, while it has some industry acceptance,
> the issue I have with it is similar to some open source offerings
> i.e. no native OpenVMS cluster support. Ok, perhaps someone can
> correct me here, but I believe there is some limited shared
> nothing MariaDB cluster offerings on Linux only.

I know nothing about the product, but I'm assuming some type of locking. If
it's internal to the product, then it would not have cluster-wide locking.
However, if the locking were rather modular, then replacing the locking modules
with something using the VMS DLM would allow cluster-wide locking, if the
DLM in its current state was compatible with the product's requirements.

Now if some large customer decided to get serious about using the product, then
you get into Michael's "convincing managers at VSI", or someone wants a freebie,
and TANSTAAFL.

Similar thing if you want more than cluster wide locking ....

Arne Vajhøj

unread,
Nov 24, 2016, 6:49:59 PM11/24/16
to
On 11/23/2016 12:27 PM, Neil Rieck wrote:
> First off, while I have been using Mark's port of MariaDB-5.5-25
> since 2014, we have noticed a few issues. When you check the blogs
> most people say something like "Oh, that problem went away when we
> upgraded to MariaDB-10.1"
>
> I understand Mark works for a division of HP/HPE that is being
> outsourced so he is very busy doing work that results in putting food
> on the table. That said, with no version of MariaDB-10 on the horizon
> for OpenVMS, I have resorted to installing CentOS Linux on a nearby
> server, then installing MariaDB-10.1.19 which we access from our
> OpenVMS development platform via the network. (think of this
> experiment as an alternative form of SAN; I did this only to see if
> our issues just went away, which they did)

An application on one system accessing a database on another system is
the most common setup today, so ...

Arne

Arne Vajhøj

unread,
Nov 24, 2016, 7:00:33 PM11/24/16
to
This is really the key question.

The claim that you can not buy support for MariaDB on VMS is
not true.

If someone is willing to pay enough for the service to make
it profitable to provide then it will be provided.

Most IT people and companies do not discriminate against
specific products. It is all about money.

So if the service is not available, then it is because
the demand is not there (and demand in this context
implies willingness to pay).

There may be a large pseudo-demand for VSI providing the
support included in VMS or at very low cost.

But VSI also has to look at the money. Will they really
sell that many more VMS licenses if MariaDB is available?

Arne

Arne Vajhøj

unread,
Nov 24, 2016, 7:05:20 PM11/24/16
to
On 11/24/2016 3:08 PM, Kerry Main wrote:
> Specifically to MariaDB, while it has some industry acceptance,
> the issue I have with it is similar to some open source offerings
> i.e. no native OpenVMS cluster support. Ok, perhaps someone can
> correct me here, but I believe there is some limited shared
> nothing MariaDB cluster offerings on Linux only.

MariaDB and MySQL have gotten a tremendous market share on
other platforms without it.

And rewriting a database from a failover model to a loadsharing model
is a huge task.

Why spend a lot of money creating a VMS specific fork to support
features that are not strictly needed.

> I would like to see more emphasis on not just getting Open Source
> offerings working on OpenVMS, but also supporting native (shared
> disk) clustering e.g. like the Apache Web Server port (supports
> active-active native clustering).

A web server is very different than a database in this regard.

Arne


Jan-Erik Soderholm

unread,
Nov 24, 2016, 7:07:33 PM11/24/16
to
With this complex a product, I do not expect anyone "replacing"
much in the product itself. Maybe some minor things around the build
procedures and such, but apart from that I guess that the environment
where it is expected to run has to adapt to the product.

Since MariaDB is of the server-process database type, and that process
might not be designed to run in a shared-disk environment, you should not
expect the kind of shared storage with cluster-wide locking that we
are used to.

The current cluster offering for MariaDB is called "Galera Cluster".
That is a separate product that MariaDB uses. The principle is that
you run separate MariaDB instances on separate servers and they (at
transaction commit time) *replicate* the data to the other server.

The replication in the MariaDB case is asynchronous, so if a node
fails, you risk losing data that is in the replication queue.

https://mariadb.com/kb/en/mariadb/what-is-mariadb-galera-cluster/
https://mariadb.com/kb/en/mariadb/mariadb-galera-cluster-known-limitations/
https://mariadb.com/kb/en/mariadb/about-galera-replication/

There is a huge difference between each cluster node having to *replicate*
all updates to the others in this setup, compared to the usual OpenVMS
setup where each node directly updates the common/shared storage.
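For concreteness, a minimal Galera-style configuration fragment looks roughly like this. The file location, cluster name and host names are placeholders; the linked MariaDB pages are the authoritative reference for the wsrep settings.

```ini
# Assumed fragment of /etc/my.cnf.d/galera.cnf; node names are made up.
[mysqld]
binlog_format            = ROW
default_storage_engine   = InnoDB
innodb_autoinc_lock_mode = 2

wsrep_on                 = ON
wsrep_provider           = /usr/lib/galera/libgalera_smm.so
wsrep_cluster_name       = "demo_cluster"
wsrep_cluster_address    = "gcomm://node1,node2,node3"
```

Each node runs its own full MariaDB instance with its own storage; the wsrep layer replicates committed write sets between them.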


> If the DLM in it's current state was compatable with the
> product's requirements.
>
> Now if some large customer decided to get serious about using the product...

A large customer will probably either run one of the Oracle database
variants, or simply run something other than OpenVMS.

If you realy want to run MariaDB (or similar), you can just as well
run it somewhere where it is easier to run.

Arne Vajhøj

unread,
Nov 24, 2016, 7:55:35 PM11/24/16
to
On 11/24/2016 3:08 PM, Kerry Main wrote:
> Ok, perhaps someone can
> correct me here, but I believe there are some limited
> shared-nothing MariaDB cluster offerings on Linux only.

Sharding is common, but it requires nothing from the database.
MariaDB and MySQL could do sharding on VMS out of the box
(if there are any VMS customers left with enough VMS systems
to make it relevant).
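Application-level sharding of this kind needs nothing from the database itself: the application hashes a shard key and routes each query to one of several independent MariaDB/MySQL servers. A minimal sketch (the DSN strings are hypothetical placeholders):

```python
import hashlib

# Hypothetical connection strings, one per independent MariaDB/MySQL server.
SHARDS = [
    "mysql://node-a/appdb",
    "mysql://node-b/appdb",
    "mysql://node-c/appdb",
]

def shard_for(key: str) -> str:
    """Deterministically map a shard key to one server's DSN."""
    digest = hashlib.sha256(key.encode("utf-8")).digest()
    return SHARDS[int.from_bytes(digest[:8], "big") % len(SHARDS)]
```

Every client agrees on the mapping without any coordination, which is why this works with stock MariaDB/MySQL on any platform; the cost is that cross-shard queries and rebalancing become the application's problem.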

MySQL Cluster is really a transparent & automated version of
the same sharding concept.

Galera Cluster is a third-party add-on that provides
active-active clustering, using a replication model, for
both MySQL and MariaDB.

That is what I am aware of.

Arne

Arne Vajhøj

unread,
Nov 24, 2016, 8:00:19 PM11/24/16
to
On 11/24/2016 7:07 PM, Jan-Erik Soderholm wrote:
> Since MariaDB is of the server-process database type, and that process
> might not be designed to run in a share-disk environment, you should not
> expect the kind of shared-storage with cluster wide locking that we
> are used to.

It does not seem to be needed.

A lot of databases do not support active-active with shared
storage.

And for those that do (like Oracle and DB2), it is often
not used.

Arne


Kerry Main

unread,
Nov 24, 2016, 9:45:05 PM11/24/16
to comp.os.vms to email gateway
> -----Original Message-----
> From: Info-vax [mailto:info-vax...@rbnsn.com] On Behalf
> Of Arne Vajhøj via Info-vax
> Sent: 24-Nov-16 8:00 PM
> To: info...@rbnsn.com
> Cc: Arne Vajhøj <ar...@vajhoej.dk>
> Subject: Re: [Info-vax] MariaDB (Re: OT: what is old is new
> again?)
>
Part of the reason many Oracle customers do not use Oracle RAC
(Real Application Clusters) is that RAC adds 50% PER CORE to the
overall license costs. At list $47,000 per core + annual support,
that is not a small consideration.

It means the Oracle RAC cluster costs would increase to roughly
list $70,000 per core * processor core factor.

Oracle Rdb Clustering is INCLUDED with the $47,000/core
licensing. Yes, it's still crazy pricing.

Good news for VSI OpenVMS Customers using Rdb is that the Oracle
Rdb pricing *should* drop by 50% on OpenVMS X86-64 because the
X86-64 PCF is 0.5.
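The arithmetic behind those figures, using the list prices quoted above (the uplift and core-factor values are as stated in this thread, not verified against current Oracle price lists):

```python
# Rough per-core licensing arithmetic from the figures quoted above.
EE_LIST_PER_CORE = 47_000   # Oracle list price per core (as quoted)
RAC_UPLIFT = 0.50           # RAC option adds ~50% per core (as quoted)
X86_CORE_FACTOR = 0.5       # processor core factor cited for x86-64

ee_plus_rac = EE_LIST_PER_CORE * (1 + RAC_UPLIFT)   # 70,500 per core
rdb_on_x86 = EE_LIST_PER_CORE * X86_CORE_FACTOR     # 23,500 per core
```

which matches the roughly $70,000/core figure for EE + RAC, and a halving of the effective Rdb cost on x86-64.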

Btw - there was a presentation on SharkDB at the Bootcamp. It is
a native, high-performance DB for OpenVMS that is 100% cluster
aware, written to take advantage of OpenVMS native features for
maximum scalability.

:-)

Kerry Main

unread,
Nov 24, 2016, 10:05:04 PM11/24/16
to comp.os.vms to email gateway
> -----Original Message-----
> From: Info-vax [mailto:info-vax...@rbnsn.com] On Behalf
> Of Arne Vajhøj via Info-vax
> Sent: 24-Nov-16 7:56 PM
> To: info...@rbnsn.com
> Cc: Arne Vajhøj <ar...@vajhoej.dk>
> Subject: Re: [Info-vax] MariaDB (Re: OT: what is old is new
> again?)
>
A good whitepaper posted in 2015 that looks at the pros and cons
of shared-nothing (Linux, Windows, UNIX, NonStop) databases vs.
shared-disk (OpenVMS, Linux/GFS, z/OS) can be found here:
http://bit.ly/2dScx9k

Original will likely wrap:
http://www.scaledb.com/wp-content/uploads/2015/11/Shared-Nohing-vs-Shared-Disk-WP_SDvSN.pdf

Extract: "Comparing shared-nothing and shared-disk in benchmarks
is analogous to comparing a dragster and a Porsche. The dragster,
like the hand-tuned shared-nothing database, will beat the
Porsche in a straight quarter mile race. However, the Porsche,
like a shared-disk database, will easily beat the dragster on
regular roads. If your selected benchmark is a quarter mile
straightaway that tests all out speed, like Sysbench, a
shared-nothing database will win. However, shared-disk will
perform better in real world environments."

Extract from conclusion: "The comparison between shared-disk and
shared-nothing is analogous to comparing automotive
transmissions. Under certain conditions and, in the hands of an
expert, the manual transmission provides a modest performance
improvement. But under the vast majority of real world
conditions, the automatic transmission provides a better overall
experience. Similarly, shared-nothing can be tuned to provide
superior performance, assuming you can minimize the function- and
data-shipping. Unfortunately, this is rarely a valid assumption.
Shared-disk, much like an automatic transmission, is easier to
set-up and it adjusts over time to accommodate changing usage
patterns."

Jan-Erik Soderholm

unread,
Nov 25, 2016, 5:53:25 AM11/25/16
to
He he. Of course, but that is a completely different and
non-technical question... :-)

