
A Samba alternative, could this be something for VMS?


Dirk Munk

Aug 27, 2016, 6:49:55 PM
This information about a Samba alternative that was developed together
with Microsoft was posted in a LinkedIn group. Could it be something for
VMS too?

http://ryussi.com/products/mosmb/

BillPedersen

Aug 27, 2016, 8:48:12 PM
Well, the first thing you or SOMEONE is going to have to do is get them interested in VMS. This is a proprietary licensing model - not open source.

So...

Bill.

Dirk Munk

Aug 28, 2016, 3:49:25 AM
Yes, you're right, it is a proprietary licensing model. However, THEY
don't have to be interested in VMS. "SOMEONE", VSI or a third party, has
to see such an advantage in this product over Samba that it could be a
viable base for a VMS SMB server, and pay the license fees.

David Froble

Aug 28, 2016, 12:39:55 PM
Uh ... that doesn't sound right ....

Someone is to pay the owners of the product, so that they can sell copies and
get the money from the sales?

I'd think not!

If the owners of the product want to sell into the VMS market, they should
modify the product to be usable, or, maybe pay someone to do so. After all,
they will be getting the sales revenue.

Johnny Billquist

Aug 28, 2016, 2:29:04 PM
You got things very backwards...
Just because "someone" is interested, and willing to pay, does not mean
that it suddenly works on VMS.

For that to happen, someone needs to port it. And porting it can only be
done if you have the sources. Buying a license does not give you the
sources, nor the right to modify and distribute your modified version.

So, in short, *they* have to be interested in VMS, since *they* need to
port it. Unless they actually want to give out their code to someone
else, for them to port it. In which case you also have the issues of how
to merge this with the main development tree, how to cooperate on
development, and needs that might be specifically important to VMS,
which no one else would care about, and making sure the owners still
maintain control over where their software goes, and so on...

Your view of how porting works, and what licenses mean, seems rather
broken.

Johnny

--
Johnny Billquist || "I'm on a bus
|| on a psychedelic trip
email: b...@softjar.se || Reading murder books
pdp is alive! || tryin' to stay hip" - B. Idol

Simon Clubley

Aug 28, 2016, 5:39:33 PM
On 2016-08-28, David Froble <da...@tsoft-inc.com> wrote:
>
> Uh ... that doesn't sound right ....
>
> Someone is to pay the owners of the product, so that they can sell copies and
> get the money from the sales?
>
> I'd think not!
>
> If the owners of the product want to sell into the VMS market, they should
> modify the product to be usable, or, maybe pay someone to do so. After all,
> they will be getting the sales revenue.

The only reason that Ada 95 exists on Alpha and above is that DEC paid
Adacore to port Adacore's GNAT toolkit to VMS.

IOW, DEC did exactly what you said above that people should not do
and we got Ada 95 on VMS Alpha as a result. Here's the press release
for the IA64 GNAT port when HP gave Adacore a contract to port GNAT
to IA64 (I couldn't find the original Alpha press release with a
quick search):

http://www.adacore.com/press/openvms-for-itanium/

It's still an open question about exactly what arrangement VSI will
have with ACT so that VMS can have a modern Ada compiler on x86-64.

Simon.

--
Simon Clubley, clubley@remove_me.eisner.decus.org-Earth.UFP
Microsoft: Bringing you 1980s technology to a 21st century world

Dirk Munk

Aug 28, 2016, 6:27:08 PM
Johnny Billquist wrote:
> On 2016-08-28 09:49, Dirk Munk wrote:
>> BillPedersen wrote:
>>> On Saturday, August 27, 2016 at 6:49:55 PM UTC-4, Dirk Munk wrote:
>>>> This information about a Samba alternative that was developed together
>>>> with Microsoft was posted in a LinkedIn group. Could it be something
>>>> for
>>>> VMS too?
>>>>
>>>> http://ryussi.com/products/mosmb/
>>>
>>> Well, the first thing you or SOMEONE is going to have to do is get
>>> them interested in VMS. This is a proprietary licensing model - not
>>> open source.
>>>
>>> So...
>>>
>>> Bill.
>>>
>>
>> Yes, you're right it is a proprietary licensing model. However THEY
>> don't have to be interested in VMS. "SOMEONE", VSI or a third party, has
>> to see such an advantage in this product over Samba, that it could be a
>> viable base for a VMS SMB server, and pay for the license fees.
>
> You got things very backwards...
> Just because "someone" is interested, and willing to pay, does not mean
> that it suddenly works on VMS.

I know that.

>
> For that to happen, someone needs to port it. And porting it can only be
> done if you have the sources. Buying a license does not give you
> sources, nor the right to modify and distribute you modified version.
>

The way I read their web page, they have built a base package that has
to be adapted/ported to the Unix of your choice (or VMS?). They do not
sell ready-to-use packages for any particular Unix flavour.

> So, in short, *they* have to be interested in VMS, since *they* need to
> port it. Unless they actually want to give out their code to someone
> else, for them to port it. In which case you also have the issues of how
> to merge this with the main development tree, how to cooperate on
> development, and needs that might be specifically important to VMS,
> which noone else would care about, and making sure the owners still
> maintain control over where they software goes, and so on...

All legitimate questions, but once again they are selling a half-product
that has to be ported to / incorporated in your own Unix (or VMS). That
is what they state explicitly on their web site.

David Froble

Aug 28, 2016, 6:31:15 PM
Simon Clubley wrote:
> On 2016-08-28, David Froble <da...@tsoft-inc.com> wrote:
>> Uh ... that doesn't sound right ....
>>
>> Someone is to pay the owners of the product, so that they can sell copies and
>> get the money from the sales?
>>
>> I'd think not!
>>
>> If the owners of the product want to sell into the VMS market, they should
>> modify the product to be usable, or, maybe pay someone to do so. After all,
>> they will be getting the sales revenue.
>
> The only reason that Ada 95 exists on Alpha and above is that DEC paid
> Adacore to port Adacore's GNAT toolkit to VMS.
>
> IOW, DEC did exactly what you said above that people should not do
> and we got Ada 95 on VMS Alpha as a result. Here's the press release
> for the IA64 GNAT port when HP gave Adacore a contract to port GNAT
> to IA64 (I couldn't find the original Alpha press release with a
> quick search):
>
> http://www.adacore.com/press/openvms-for-itanium/
>
> It's still an open question about exactly what arrangement VSI will
> have with ACT so that VMS can have a modern Ada compiler on x86-64.
>
> Simon.
>

I'm not sure that VSI has oodles of money to be tossing around.

Also, wasn't it DEC, and not Adacore, that then sold licenses to customers?

Craig A. Berry

Aug 28, 2016, 6:48:53 PM
On 8/28/16 4:39 PM, Simon Clubley wrote:

> It's still an open question about exactly what arrangement VSI will
> have with ACT so that VMS can have a modern Ada compiler on x86-64.

I don't really understand what the GMGPL license is, but it seems at
least possible one can have GNAT without dealing with ACT:

<http://www.dragonlace.net/questions/quest_004/>

But it's also possible that if you used this license you'd have to give
away your source code to the resulting compiler. I assume ACT releases
their code under a non-GPL license if you pay for the privilege, but I
haven't looked into it.

Simon Clubley

Aug 28, 2016, 7:36:39 PM
On 2016-08-28, David Froble <da...@tsoft-inc.com> wrote:
> Simon Clubley wrote:
>>
>> The only reason that Ada 95 exists on Alpha and above is that DEC paid
>> Adacore to port Adacore's GNAT toolkit to VMS.
>>
>> IOW, DEC did exactly what you said above that people should not do
>> and we got Ada 95 on VMS Alpha as a result. Here's the press release
>> for the IA64 GNAT port when HP gave Adacore a contract to port GNAT
>> to IA64 (I couldn't find the original Alpha press release with a
>> quick search):
>>
>> http://www.adacore.com/press/openvms-for-itanium/
>>
>> It's still an open question about exactly what arrangement VSI will
>> have with ACT so that VMS can have a modern Ada compiler on x86-64.
>>
>
> I''m not sure that VSI has oddles of money to be tossing around.
>

VSI will have to determine how important the remaining VMS Ada customers
are to it and decide based on that.

> Also, wasn't it then DEC, and not Adacore that then sold licenses to customers?

Not as far as I know unless there's something going on behind the
scenes. All the literature I have seen for GNAT Pro on VMS says
to directly contact Adacore for further information.

If you read the link above, you will see that you are directed to
contact Adacore and not HP. Unfortunately, a GNAT Pro subscription
which _starts_ at $14,000 is a little above my personal budget. :-)

(See http://www.adacore.com/press/gnat-pro-now-available-for-hp-openvms-on-hp-integrity-servers/
for where I got that figure from.)

Of course, that doesn't mean DEC/HP didn't negotiate a slice of any
GNAT Pro for VMS sales but I have not seen anything to suggest that.

Simon Clubley

Aug 28, 2016, 8:03:31 PM
On 2016-08-28, Craig A. Berry <craig...@nospam.mac.com> wrote:
> On 8/28/16 4:39 PM, Simon Clubley wrote:
>
>> It's still an open question about exactly what arrangement VSI will
>> have with ACT so that VMS can have a modern Ada compiler on x86-64.
>
> I don't really understand what the GMGPL license is, but it seems at
> least possible one can have GNAT without dealing with ACT:
>
><http://www.dragonlace.net/questions/quest_004/>
>
> But it's also possible that if you used this license you'd have to give
> away your source code to the resulting compiler.

The GMGPL licence exception to the GPL allows you to use code covered
by the GMGPL within a binary without that code causing the binary to
become covered by the GPL.

Of course, you may have other code in the binary which is pure GPL and
hence causes the binary to still fall under the requirements of the GPL.

> I assume ACT releases
> their code under a non-GPL license if you pay for the privilege, but I
> haven't looked into it.
>

That's correct. There are two core gcc code bases at play here: the
Adacore code base and the FSF code base.

Adacore push some changes from their code base into the FSF code base
every so often but what's in the FSF code base for VMS wasn't enough
(as at early 4.9.x) for you to be able to build the FSF code base for
a VMS target.

Any versions of GNAT on the Adacore website (which do not include the
VMS version) do not have the GMGPL exception and hence are pure GPL
with regards to the binaries they generate.

Pay Adacore some money and you can have GNAT Pro instead which allows
you to use it without the compiler itself imposing a GPL requirement.
(Although you could always still be using GPL libraries in your code
and hence the binary would still be covered by the GPL in that case.)

johnwa...@yahoo.co.uk

Aug 29, 2016, 4:32:11 AM
I'm pretty sure that DEC HQ wanted nothing further to do with
modern Ada at that point; HQ's priorities were elsewhere.

Customers wanting an Ada95 compiler on VMS were pointed at
Adacore and/or GreenHills, in the same way as for many other
third-party packages. That's the way I remember it anyway
(but it was a long time ago).

The stuff freely available from AdaCore is documented at and
downloadable via
http://libre.adacore.com/download/
but I make no claim to understand the licence implications.

Kerry Main

Aug 29, 2016, 11:20:04 AM
to comp.os.vms to email gateway
While having another option would certainly be great,
getting any new products to OpenVMS is only step 1.

For improved scalability, required by the centralized
direction where things are headed, we also need these
products to support the shared everything (aka shared
disk) cluster model, which is used not just by OpenVMS but
also, btw, by Linux/GFS.

Reference:
https://en.wikipedia.org/wiki/GFS2


Regards,

Kerry Main
Kerry dot main at starkgaming dot com





Dirk Munk

Aug 30, 2016, 8:55:47 AM
Stornext would be a similar product, but not open source of course.

There are others as well.

The problem with these products is that they do not know the VMS/RMS
file attributes: attributes like fixed record lengths, variable-length
records with record-length fields, indexed files, access control
lists, version numbers, etc. These are quite essential for VMS.

On the other hand, VMS has no problem reading and writing Unix or
Windows files.

So we do have a bit of a problem there...
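To illustrate why those attributes matter, here is a toy sketch (my own illustration, not the real on-disk RMS layout) of length-prefixed variable-length records, the kind of record structure a byte-stream-oriented SMB server has no notion of:

```python
import struct

# Toy sketch only -- NOT the actual on-disk RMS format. VMS-style
# variable-length records carry an explicit per-record length field,
# whereas a Unix/Windows byte stream has no record structure at all.
def pack_var_records(records):
    """Prefix each record with a 2-byte little-endian length count."""
    out = b""
    for rec in records:
        out += struct.pack("<H", len(rec)) + rec
    return out

def unpack_var_records(data):
    """Walk the length fields to recover the record boundaries."""
    recs, i = [], 0
    while i < len(data):
        (n,) = struct.unpack_from("<H", data, i)
        recs.append(data[i + 2 : i + 2 + n])
        i += 2 + n
    return recs

blob = pack_var_records([b"record one", b"a longer second record"])
print(unpack_var_records(blob))   # round-trips the original records
```

A server that only understands byte streams would hand a client the raw `blob`, length fields and all, which is exactly the attribute-loss problem described above.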



Kerry Main

Aug 30, 2016, 11:10:06 AM
to comp.os.vms to email gateway
One of the "why OpenVMS" answers is that from a developer
perspective, a properly config'ed OpenVMS cluster provides
the benefit that multiple systems appear to their App
code as a single system. It does not matter which system
the connection gets directed to. They do not need to worry
about node mgmt., data consistency, HA etc. With other
platforms, node mgmt., data consistency (replication), HA
etc is handled at the Application code level - not the OS
level.

That's a lot of code complexity that needs to be repeated
for every different application.

Note - Keep in mind that if a port supports RMS files,
then there is a good chance it will support OpenVMS
clusters.

Case in point - the Apache port to OpenVMS uses RMS files and
hence supports multiple Apache servers being deployed in
an active-active shared everything OpenVMS cluster
configuration. If the Apache Web cluster starts getting
overloaded, simply add more servers to the Web cluster -
no changes required at all in the App code.

Stephen Hoffman

Aug 30, 2016, 12:11:45 PM
On 2016-08-30 15:06:34 +0000, Kerry Main said:

> One of the "why OpenVMS" answers is that from a developer perspective,
> a properly config'ed OpenVMS cluster provides the benefit that multiple
> systems appears to their App code as a single system. It does not
> matter what system the connection gets directed to. They do not need to
> worry about node mgmt., data consistency, HA etc. With other platforms,
> node mgmt., data consistency (replication), HA etc is handled at the
> Application code level - not the OS level.
>
> That's a lot of code complexity that needs to be repeated for every
> different application.
>
> Note - Keep in mind that if a port supports RMS files, then there is a
> good chance it will support OpenVMS clusters.
>
> Case in point - Apache on OpenVMS port uses RMS files and hence
> supports multiple Apache servers being deployed in an active-active
> shared everything OpenVMS cluster configuration. If the Apache Web
> cluster starts getting overloaded, simply add more servers to the Web
> cluster - no changes required at all in the App code.

That's certainly been the classic OpenVMS marketing. You're likely
going to be learning much about that and the limits of that design and
the limits of OpenVMS-style clustering and about implementing and using
partitioning as Stark scales up and starts to distribute data
geographically, too. Though if you can stay within the limits and
within a single cluster or can otherwise avoid or isolate geographic
distribution, things will be easier. What's available in OpenVMS is
a pain to use, though. More than a little investment is needed to make
all that work, and RMS can be very much less than fun when application
rolling upgrades are involved. I'd still want some of what the open
source tools offer including the ability to tie together OpenVMS
clusters, and I'd still want a distributed database; whether
cluster-aware or independent of clustering.


--
Pure Personal Opinion | HoffmanLabs LLC

Dirk Munk

Aug 30, 2016, 1:55:34 PM
If you need to build a VMS cluster for performance reasons, then forget
geographic spreading. The latency involved with that is so enormous that
you will get far less performance instead of more performance.

The maximum distance between two nodes shouldn't be more than 50 meters
or so.

Of course this applies to all kinds of clusters, not just VMS.

> What's available in OpenVMS is a
> pain to use, though. More than a little investment is needed to make
> all that work, and RMS can be very much less than fun when application
> rolling upgrades are involved.

Why? What has that to do with RMS?

Stephen Hoffman

Aug 30, 2016, 2:33:01 PM
On 2016-08-30 17:55:32 +0000, Dirk Munk said:

> Stephen Hoffman wrote:
>>
>> That's certainly been the classic OpenVMS marketing. You're likely
>> going to be learning much about that and the limits of that design and
>> the limits of OpenVMS-style clustering and about implementing and using
>> partitioning as Stark scales up and starts to distribute data
>> geographically, too. Though if you can stay within the limits and
>> within a single cluster or can otherwise avoid or isolate geographic
>> distribution, things will be easier.
>
> If you need to build a VMS cluster for performance reasons, then forget
> geographic spreading. The latency involved with that is so enormous
> that you will get far less performance instead of more performance.

Flip your design over and think of cases when the clusters themselves
need to be distributed. There are times when you have to
geographically distribute your configurations, beyond cases of disaster
tolerance. When you have to maintain locality to the remote systems.
When the design that OpenVMS marketing has envisioned gets turned on
its head. Stark is very likely aiming to be world-wide, which means
there's no easy way around dealing with the latency, and which means
the folks gain experience around what's involved in these
configurations. This is where the classic OpenVMS marketing designs
start to require rather more local development work and/or integration
with open source clustering tools.

> The maximum distance between two nodes shouldn't be more then 50 meters or so.

The world is rather wider than that. Configurations past 50 meters
can work well for many cluster applications. Configurations past 50
meters can be mandatory for some. Though there are cases where the
latency is secondary to the storage and not to the links. FC SAN
bandwidth is low and FC SAN SSD latency is high, when compared with
networks and server DRAM. Stark is headed toward dealing with
configurations much wider than 50 meters, too. Then there's the fun
(or "fun") of keeping all that working and consistent.

> Of course this applies to all kind of clusters, not just VMS.

Ayup, the physics are certainly common. Approaches toward
availability and consistency do differ. Kerry will be hitting those
in the environment, given the clients are inherently distributed, and
which means that either long latencies from clients to distant clusters
are involved, or geographic distribution of the clusters and probably
then local software designs involving eventual consistency of the data
across those clusters.

>> What's available in OpenVMS is a pain to use, though. More than a
>> little investment is needed to make all that work, and RMS can be very
>> much less than fun when application rolling upgrades are involved.
>
> Why? What has that to do with RMS?

Ponder rolling application upgrades in a cluster configuration using
RMS-based applications, particularly when the application developers
need to change the file formats. When the cluster and the
applications have to remain available. It all gets... interesting.
Sooner or later, the application developers find themselves dealing
with something that looks rather like the morass that is the cluster
RMS file-based management — which is very far from pretty — and working
with limitations within that design that can be difficult to remove.
Making changes in these RMS file-based application environments is a
tedious slog, with the usual solution involving increasing numbers of
files, or moving to approaches that use other types of tools or
databases in preference to using RMS files.

All of these pieces — upgrades, clustering, patches, application
updates, shared RMS files — are interlinked, and it certainly appears
to have been a very long time since anybody's thought about and made
any changes around how all these pieces fit together, and whether the
existing design is sustainable and supportable. Sure, OpenVMS and
clustering works. That's certainly goodness. But — particularly if
the goal is to bring over new folks and new designs and not simply ease
OpenVMS into retirement — it's ill-documented, extremely difficult if
not impossible to upgrade without breaking compatibility, and with more
than a few other limitations and pains and misfeatures.

Jan-Erik Soderholm

Aug 30, 2016, 7:09:49 PM
Right, then do not use "RMS-based applications". You probably shouldn't
anyway, no matter if it is a cluster configuration or not.

> Sooner or later, the
> application developers find themselves dealing with something that looks
> rather like the morass that is the cluster RMS file-based management —
> which is very far from pretty — and working with limitations within that
> design that can be difficult to remove. Making changes in these RMS
> file-based application environments is a tedious slog, with the usual
> solution involving increasing numbers of files, or moving to approaches
> that use other types of tools or databases in preference to using RMS files.
>

Right, but then do use a proper database.



Kerry Main

Aug 30, 2016, 9:05:04 PM
to comp.os.vms to email gateway
> -----Original Message-----
> From: Info-vax [mailto:info-vax...@rbnsn.com] On
> Behalf Of Jan-Erik Soderholm via Info-vax
> Sent: 30-Aug-16 7:10 PM
> To: info...@rbnsn.com
> Cc: Jan-Erik Soderholm <jan-erik....@telia.com>
> Subject: Re: [Info-vax] A Samba alternative, could this be
> something for VMS?
>
> Den 2016-08-30 kl. 20:32, skrev Stephen Hoffman:
> > On 2016-08-30 17:55:32 +0000, Dirk Munk said:
> >
> >> Stephen Hoffman wrote:
> >>>
> >>> That's certainly been the classic OpenVMS marketing. You're likely
> >>> going to be learning much about that and the limits of that design and
> >>> the limits of OpenVMS-style clustering and about implementing and using
> >>> partitioning as Stark scales up and starts to distribute data
> >>> geographically, too. Though if you can stay within the limits and
> >>> within a single cluster or can otherwise avoid or isolate geographic
> >>> distribution, things will be easier.
> >>
> >> If you need to build a VMS cluster for performance reasons, then forget
> >> geographic spreading. The latency involved with that is so enormous that
> >> you will get far less performance instead of more performance.
> >

Disaster tolerance 101 - to enable synch updates, both A-A sites must be within 100km (best practice, but it depends on App R/W ratios).

If one needs to accommodate the loss of both sites, then you implement A-A-P (synch-synch-asynch), with the third site being much further away.

There are ways to mitigate overall solution latency that are not associated with just distance, e.g. QoS on MPLS network links, WAN optimizers (chatty apps), and minimizing east-west LAN traffic (server-server).
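As a rough sanity check on the 100km rule of thumb, a back-of-the-envelope sketch (my own; the ~200,000 km/s figure for signal propagation in fibre is an assumption, and real links add switch/router/firewall delays on top):

```python
# Toy sketch of the distance/latency trade-off behind the 100 km rule.
# Assumes ~200,000 km/s signal speed in fibre; real paths are slower.
def inter_site_rtt_us(distance_km, signal_km_per_s=200_000):
    """Best-case round-trip time between two sites, in microseconds."""
    return 2 * distance_km / signal_km_per_s * 1e6

# Every synchronous write must wait at least one inter-site round trip:
print(inter_site_rtt_us(100))    # ~1000 us (1 ms) floor at 100 km
print(inter_site_rtt_us(1000))   # ~10000 us (10 ms) floor at 1000 km
```

At 100 km the physics alone puts a ~1 ms floor under every synchronous write, which is why the best-practice limit sits where it does.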


> > Flip your design over and think of cases when the clusters themselves
> > need to be distributed. There are times when you have to geographically
> > distribute your configurations, beyond cases of disaster tolerance. When
> > you have to maintain locality to the remote systems. When the design that
> > OpenVMS marketing has envisioned gets turned on its head. Stark is very
> > likely aiming to be world-wide, which means there's no easy way around
> > dealing with the latency, and which means the folks gain experience around
> > what's involved in these configurations. This is where the classic
> > OpenVMS marketing designs start to require rather more local development
> > work and/or integration with open source clustering tools.
> >

Think multiple clusters in strategic locations. In gaming, what is often the case is "consistent" latency - not lowest possible latency for all gamers.

For WW play, think about the Olympics where athletes come from around the globe to compete at one location. Environmental conditions are (or made to be) the same for all.

> >> The maximum distance between two nodes shouldn't be more then 50 meters
> >> or so.
> >

DR/DT design principle is based on a single significant event not taking out both sites.

There is a trade-off between this principle and the performance hit associated with write latency between sites.

> > The world is rather wider than that. Configurations past 50 meters can
> > work well for many cluster applications. Configurations past 50 meters
> > can be mandatory for some. Though there are cases where the latency is
> > secondary to the storage and not to the links. FC SAN bandwidth is low and
> > FC SAN SSD latency is high, when compared with networks and server DRAM.

When citing networks, you should emphasize read latency. Network write latency across switches, routers, FWs and LBs is orders of magnitude greater than local flash updates.

RMS today? Agree 100%. You have to deal with file maintenance and open-file backups.

RMS with the new file system - TBD, but it looks very promising. Again, I expect the new file system will address many of the current issues with RMS today.

> Right, but then do use a proper database.
>

Agree - imho, the right DB of the future is one that takes advantage of TBs of low-latency, non-volatile memory on all nodes, and is based on a shared disk / shared everything design.

Why? Reference (shared nothing is the typical Unix/Windows/Linux/NonStop native DB model; shared disk is the OpenVMS and Linux/GFS model):
http://www.scaledb.com/wp-content/uploads/2015/11/Shared-Nohing-vs-Shared-Disk-WP_SDvSN.pdf

Dirk Munk

Aug 31, 2016, 5:14:13 AM
I started out with writing "If you need to build a VMS cluster for
*performance* reasons, then forget geographic spreading. The latency
involved with that is so enormous that you will get far less performance
instead of more performance."

So I'm explicitly *not* talking about disaster tolerance.

Those new Micron X-Point storage modules have a read latency of 10μs. How
far does an electric impulse travel in 10μs? The speed of light is
300,000 km/sec, though in a conductor it is slower. But let's keep that
300,000 km/sec for our calculation; then we get a distance of 3 meters.

Adding a distance of 100km between cluster nodes will increase the
latency by a factor of 300,000, just by the distance, not counting
switches, routers etc.

New systems with semiconductor storage will bring some very interesting
challenges to the cluster concept.

Steven Schweda

Aug 31, 2016, 7:52:08 AM
> [...] How far does an electric impulse travel in 10µs?
> [...] 300,000 km/sec [...] we get a distance of 3 meters.

Not all of us get 3m. Some of us get 3km. U.S. rule of
thumb: 1ns = 1 foot. (Or, for the rest of the world, 3ns =
1m.) Micro is considerably bigger than nano.
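The correction is easy to sanity-check with a quick calculation using the thread's own round figure of 300,000 km/s:

```python
# Back-of-the-envelope: how far does a signal travel in 10 microseconds
# at 300,000 km/s (3e8 m/s)?
c_m_per_s = 3e8      # speed of light, metres per second
t_s = 10e-6          # 10 microseconds, in seconds
distance_m = c_m_per_s * t_s
print(distance_m)    # ~3000 m, i.e. 3 km -- not 3 m
```

So a 10 μs read latency corresponds to roughly 3 km of light travel, matching Schweda's 3 ns ≈ 1 m rule of thumb.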

Kerry Main

Aug 31, 2016, 9:15:04 AM
to comp.os.vms to email gateway
> -----Original Message-----
> From: Info-vax [mailto:info-vax...@rbnsn.com] On
> Behalf Of Dirk Munk via Info-vax
> Sent: 31-Aug-16 5:14 AM
> To: info...@rbnsn.com
> Cc: Dirk Munk <mu...@home.nl>
> Subject: Re: [Info-vax] A Samba alternative, could this be
> something for VMS?
>

[snip..]
The issue is not just distance latency. The real issue is solution response times because that is what impacts the end user.

Latency associated with distance is only one part of the solution response time.

Other things that impact solution response times are things like App R/W ratios, FW delays, network hops, server-server (east-west) communications, and App design (was the App designed for a multi-site cluster, or was it simply dropped on top of one?).

As one example, a multi-site cluster could be much more than 100km apart if the number of application writes is a very small number as compared to the reads. By the same token, an application with a 50% r/w ratio may not have acceptable performance at anything over 20km.

As another example - if an application is designed so that it does not time out within the multi-site latency, and the overall response time meets formal SLAs, then the inter-site distance is not an issue.
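The R/W-ratio point can be sketched numerically (my own toy model, not from the thread): reads are served locally, so only the write fraction pays the inter-site round trip, and the average added latency is roughly write_fraction × RTT.

```python
# Toy model: only writes wait for the remote site; reads are local.
# Assumes ~200,000 km/s signal speed in fibre (my assumption).
def avg_added_latency_us(write_fraction, distance_km, signal_km_per_s=200_000):
    rtt_s = 2 * distance_km / signal_km_per_s   # best-case round trip, seconds
    return write_fraction * rtt_s * 1e6

# 1% writes at 300 km vs 50% writes at 20 km:
print(avg_added_latency_us(0.01, 300))   # ~30 us: long distance, few writes
print(avg_added_latency_us(0.50, 20))    # ~100 us: short distance, many writes
```

Under this crude model, a read-heavy application 300 km apart sees a smaller average penalty than a write-heavy one 20 km apart, which is the point about R/W ratios made above.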

[snip]

I do agree that new technology may require App developers to rethink their traditional deployment strategies. However, what this new technology will NOT do is cause most enterprise businesses to give up their DR/DT/RTO/RPO requirements.

Bottom line - any new technology needs to adapt to meet business DR/DT/RTO/RPO SLAs - not the other way around.

David Froble

Aug 31, 2016, 11:37:35 AM
Dirk Munk wrote:

> I started out with writing "If you need to build a VMS cluster for
> *performance* reasons, then forget geographic spreading. The latency
> involved with that is so enormous that you will get far less performance
> instead of more performance."
>
> So I'm explicitly *not* talking about disaster tolerance.
>
> Those new Micro X-Point storage modules has a read latency of 10μs. How
> far does an electric impulse travel in 10μs? The speed of light is
> 300,000 km/sec, however in a conductor it is slower. But let's keep that
> 300,000 km/sec. for our calculation, then we get a distance of 3 meters.
>
> Adding a distance of 100km between cluster nodes will increase the
> latency by a factor of 300,000, just by the distance, not counting
> switches, routers etc.
>
> New systems with semi-conductor storage will bring some very interesting
> challenges to the cluster concept

Ok, let's limit the discussion to performance.

Without getting into some of the original reasons to develop VMS clusters, I'd
suggest that you do not implement a VMS cluster today, if performance is your
objective.

The VAX 11/780 had, I believe, 4 rather large boards that comprised the CPU.
Basically one CPU per large cabinet. (Lets not get into the 782) So if you
needed to add CPUs, you needed 4 more large boards, the large cabinet, and
everything that came with it. Including a rather large price tag.

What do we have today? 8 or more cores on one piece of silicon. Much shorter
data paths. Add high speed NV memory, and performance unmatched in the past is
possible. You're getting this not with faster CPUs, but with both shorter data
paths, and advances in memory.

So yeah, if strictly performance is your goal, then use multi-core chip(s), very
fast NV memory, and such. No slow "rust" memory and such. One central system
with everything needed included. No network connections for internal work, no
external storage, none of that old slow stuff. Hard to argue with that concept.

Unfortunately, in most cases, the real world throws some wrenches into the
works. One might need to ensure that $10M in receivables doesn't get lost.
Maybe have it on several layers of "rust" storage. One might need to ensure
that services stay available, regardless of flood, tornado, terrorist, and such.
Different requirements. Definitely different solutions.

So, while distributed clusters are not for performance, they have other
qualities. So, no clusters when absolute performance is the goal.

David Froble

Aug 31, 2016, 11:39:45 AM
Kerry Main wrote:

> I do agree that new technology may require App developers to rethink their traditional deployment strategies. However, what this new technology will NOT do is cause most enterprise businesses to give up their DR/DT/RTO/RPO requirements.
>
> Bottom line - any new technology needs to adapt to meet business DR/DT/RTO/RPO SLAs - not the other way around.

The voice of reason ....

IanD

Sep 6, 2016, 9:07:53 PM9/6/16
to
On Wednesday, August 31, 2016 at 11:15:04 PM UTC+10, Kerry Main wrote:

>
> Bottom line - any new technology needs to adapt to meet business DR/DT/RTO/RPO SLAs - not the other way around.
>
>
> Regards,
>
> Kerry Main
> Kerry dot main at starkgaming dot com

+1

Business availability is #1 priority where I am, everything is skewed around that ideal

VMware has done well in terms of business spend here because management perceives it as giving them total availability

Our OpenVMS systems were clustered across different geographical regions, and that came with issues of its own. Stretched VLANs are the focal hate point of the network people. Latency on disk writes was somewhat annoying, but the old SCSI HSZ disks were by far the bigger pain than going across a VLAN. Being a Telco, they threw plenty of money into hefty links (this was spanning three DCs after all).

When the business decided geographical redundancy wasn't worth it anymore, because the new DC was supposedly designed with a near-100% uptime guarantee, our world got a lot smaller as we consolidated to a single DC

Charon with flash drives replaced the HSZ based 8400 and I/O rates literally went through the roof. SAN is now the bottleneck (one system still remains on physical hardware)

Redundancy in the geographical form for our business has been replaced by pushing that responsibility onto the DC to manage. Supposedly the DC is designed to survive a plane strike and other such mad things human beings want to do, so the concept of clusters spanning long distances went away for us (whether right or wrong is not for me to say)
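For what it's worth, the arithmetic behind "two modest sites vs one very good site" is just multiplying out independent downtime probabilities. A back-of-the-envelope sketch - all the percentages below are assumptions for illustration, not anyone's actual DC figures, and real sites are never perfectly independent:

```python
# Back-of-the-envelope availability math; the figures are assumptions.

def combined_availability(site_availabilities):
    """Probability that at least one independent site is up:
    1 - (product of each site's probability of being down)."""
    p_all_down = 1.0
    for a in site_availabilities:
        p_all_down *= (1.0 - a)
    return 1.0 - p_all_down

single = 0.9999                                 # one "Tier 4" DC at 99.99%
dual = combined_availability([0.999, 0.999])    # two ordinary 99.9% sites
print(f"{single:.6f}")   # 0.999900
print(f"{dual:.6f}")     # 0.999999
```

On paper two ordinary sites beat one excellent one - but only while their failures really are independent, which is exactly what a quarantine or a regional event breaks.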

I think when we see those blazingly fast memory-based systems, the business will embrace them endlessly. In the stampede for performance, all concerns about housing everything on a single bit of hardware will be thrown to the wind, until something happens, they get bitten firmly in the arse, and suddenly redundancy comes back into sharp focus

Human beings have a psychological flaw in that they are overly optimistic (can't remember the actual research article now)

<sarcasm>
Oh look, there's some new technology over there. Looks good, but what about...Stand out of the way or you'll get trampled by the mob...
</sarcasm>

Kerry Main

Sep 7, 2016, 12:35:05 AM9/7/16
to comp.os.vms to email gateway
> -----Original Message-----
> From: Info-vax [mailto:info-vax...@rbnsn.com] On
> Behalf Of IanD via Info-vax
> Sent: 06-Sep-16 9:08 PM
> To: info...@rbnsn.com
> Cc: IanD <iloveo...@gmail.com>
> Subject: Re: [Info-vax] A Samba alternative, could this be
> something for VMS?
>
Re: consolidate to one DC ... really bad idea.

Even if it is a Tier 4 facility, all it takes is one
person with SARS or some other exotic pandemic illness to
walk into the front desk area of the DC and that's it -
when the guys in white suits find out, everyone in that
entire facility physically goes home for 10 days.

This is where you find out if you can reboot, power off/on
and if you have enough remote access / VPN lines.

If a critical system crashes - too bad, the Maint folks
won't be able to get in until the quarantine is over.

Question - if no one can get into the lone DC for 10 days,
what would be the impact on your company?

And I have pointed this out before, but this actually
happened to HP in Toronto during the SARS epidemic (mind
you, they had a dual-site strategy, so the impact was not
as bad).

Reference:

http://www.cnet.com/news/sars-sends-hp-workers-home-in-canada/
http://www.itbusiness.ca/news/sars-scare-leaves-hp-canada-in-limbo/5669

Related:
http://www.computerworld.com/article/2561655/it-management/sidebar--the-sars-effect.html

David Froble

Sep 7, 2016, 9:00:07 AM9/7/16
to
Yep! Doesn't even require a small asteroid ....