
redis-cache another freeware opportunity


mah...@googlemail.com

Aug 16, 2016, 11:05:44 PM
Yesterday, I went to the Microsoft Azure talk. (Very good if, like me, you know little about Azure’s offerings. The guy giving the presentation was not a developer but the guy coming next month will be. I recommend going.)

One question I had that could not be answered was regarding SessionState or, rather more generally, how to access various IIS configuration options. The impression I got was “You can’t”. If it’s not in the Azure Portal then tough luck.

Anyway, I thought I’d ask in case one of you has had a go or done more reading than me. The way I understand it (from the links below) is:

a) In-process session state (“sticky sessions”) is not supported in Azure. Cop that!
b) The ASP.NET session-state server (StateServer mode) is not supported in Azure. Doh!
c) Azure Redis Cache is the recommended (non-database, distributed-cache) solution.

Anyone know more?

https://azure.microsoft.com/en-us/services/cache/
https://azure.microsoft.com/en-us/documentation/articles/cache-dotnet-how-to-use-azure-redis-cache/
https://blogs.msdn.microsoft.com/cie/2013/05/17/session-state-management-in-windows-azure-web-roles/
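
For anyone who hasn't touched it, the shape of the thing is roughly this - session data keyed by a session ID with a time-to-live, held in Redis instead of in the worker process. A minimal sketch, using Python and the redis-py client purely for illustration (the host, key names and TTL below are made-up assumptions, not anything Azure-specific):

    import json
    import redis  # redis-py client; any Redis client library works the same way

    # Hypothetical endpoint - with Azure Redis Cache this would be the
    # <name>.redis.cache.windows.net host plus the access key from the portal.
    r = redis.Redis(host="localhost", port=6379)

    SESSION_TTL = 20 * 60  # expire idle sessions after 20 minutes

    def save_session(session_id, data):
        # One key per session; the TTL replaces the in-process timeout.
        r.setex("session:" + session_id, SESSION_TTL, json.dumps(data))

    def load_session(session_id):
        raw = r.get("session:" + session_id)
        return json.loads(raw) if raw is not None else {}

    # Any web server behind the load balancer can now do:
    save_session("abc123", {"user": "richard", "cart": [42, 17]})
    print(load_session("abc123"))

The point is just that any web server behind the load balancer can read the same session, which is what (a) and (b) above take away.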

Cheers

Kerry Main

Aug 18, 2016, 12:50:05 PM
to comp.os.vms to email gateway
Richard - stirring the pot ..

Imho, designing stateful applications could be one of the differentiators for OpenVMS clusters - especially with the new file system now cooking.

Numerous new application groups are looking to develop stateful applications as a means to create next generation applications.

Reference:
http://highscalability.com/blog/2015/10/12/making-the-case-for-building-scalable-stateful-services-in-t.html
" Making the Case for Building Scalable Stateful Services in the Modern Era" (Oct 2015)

https://www.youtube.com/watch?v=H0i_bXKwujQ
" "Building Scalable Stateful Services" (Sept 2015) - very, very painful using todays common shared nothing, distributed model.

Question - Why do current apps need Redis distributed cache solutions?

It's because there is no common file system, and maintaining state in web/DB servers is painful and performs poorly. Hence, Redis-type caching solutions were born.

Traditionally, state was maintained either at the web-server or the DB level. The problem with the web-server approach is that, with load balancing, the session may get redirected to a different web server on a subsequent request. The problem with the DB approach is that it adds latency (app-to-DB-server delays from firewalls, switches, routers, IP stacks, NICs, etc.) and puts a much higher load on the DB server. To further complicate the solution, in a traditional shared-nothing DB these requests need to be routed by smarts coded into the app-server tier to the right DB server, depending on the data-partitioning scheme in place.
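
To illustrate that last point, the app-server tier in a shared-nothing setup ends up carrying routing logic along these lines (a rough Python sketch; the two-shard layout and hash-based partitioning scheme are made-up assumptions):

    import hashlib

    # Hypothetical shared-nothing layout: session rows are partitioned across
    # two independent DB servers by a hash of the session ID.
    DB_SHARDS = ["db-server-1.example.com", "db-server-2.example.com"]

    def shard_for(session_id):
        # The app tier must know the partitioning scheme to find the right DB
        # server - the "smarts coded in the app server tier" mentioned above.
        digest = hashlib.sha1(session_id.encode()).hexdigest()
        return DB_SHARDS[int(digest, 16) % len(DB_SHARDS)]

    print(shard_for("abc123"))  # every app server must agree on this mapping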

So instead of large numbers of small-core, low-memory servers (each of which has separate system and application local disks to manage and upgrade), why not use the best of both the distributed and centralized models? What about an approach that scales up first with far fewer big-core (32+) servers with big memory (TB+), and only then scales out? In addition, use a common file system (think of the new file system being developed) across all nodes in both sites (if DT/DR is required).

The state can be maintained on the local file system, so no matter which server node in what DC the session is directed to, the state is available locally via direct reads to local HW cache or the local file system that is common to both sites.
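
As a rough sketch of that idea (plain Python; the mount point below is a made-up stand-in for the cluster-common file system):

    import json
    from pathlib import Path

    # Hypothetical path on a file system that every node in both sites mounts.
    STATE_DIR = Path("/cluster_common/session_state")

    def save_state(session_id, data):
        STATE_DIR.mkdir(parents=True, exist_ok=True)
        # A plain file per session; whichever node the load balancer picks
        # next reads the same file locally instead of calling a cache server.
        (STATE_DIR / (session_id + ".json")).write_text(json.dumps(data))

    def load_state(session_id):
        path = STATE_DIR / (session_id + ".json")
        return json.loads(path.read_text()) if path.exists() else {}

    save_state("abc123", {"user": "kerry", "step": 3})
    print(load_state("abc123"))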

You can further reduce the overall latency by installing the App, DB and middleware all on the same OS instance to take advantage of the big memory, large HW caches. This also enhances server utilization levels.

[now putting on hard hat ..]

:-)

Regards,

Kerry Main
Kerry dot main at starkgaming dot com






Richard Maher

Aug 18, 2016, 6:34:55 PM
On 19-Aug-16 12:47 AM, Kerry Main wrote:
> Interesting stuff . . .
>
>

Kerry, I agree with you but I am not the jury or a cloud service
provider :-(

I'm imagining VMS as a purchasable option in Azure or AWS

Arne Vajhøj

Aug 18, 2016, 9:18:24 PM
On 8/16/2016 11:05 PM, mah...@googlemail.com wrote:
> Yesterday, I went to the Microsoft Azure talk. (Very good if, like
> me, you know little about Azure’s offerings. The guy giving the presentation [...]

Redis is really a database.

it is an in-memory NoSQL database of the key-value store type with
optional near real time persistence of transaction log.

That is rather typical of what is used today for cluster wide state.
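
A minimal illustration of that key-value usage, assuming a reachable Redis server (Python with the redis-py client; the key names are invented):

    import redis  # assumes a Redis server reachable on the default port

    r = redis.Redis(host="localhost", port=6379, decode_responses=True)

    # Cluster-wide state is just keys and values; a hash groups related
    # fields under one key, and every node talking to the store sees it.
    r.hset("state:order:1001", mapping={"status": "picked", "node": "web03"})
    print(r.hgetall("state:order:1001"))   # {'status': 'picked', 'node': 'web03'}
    r.expire("state:order:1001", 3600)     # state can be given a lifetime, too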

Arne


Arne Vajhøj

Aug 18, 2016, 9:39:27 PM
On 8/18/2016 12:47 PM, Kerry Main wrote:
> Imho, designing stateful applications could be one of the
> differentiators for OpenVMS clusters - especially with the new file
> system now cooking.
>
> Numerous new application groups are looking to develop stateful
> applications as a means to create next generation applications.

????

Stateful applications are totally industry standard today.

IBM WXS, Oracle Coherence, JBoss Infinispan, Hazelcast, memcached,
redis etc..

It is off the shelf functionality.

> Reference:
> http://highscalability.com/blog/2015/10/12/making-the-case-for-building-scalable-stateful-services-in-t.html

That seems to be more about the application frameworks to make
developing easy than about how to share the state.

> Question - Why do current apps need Redis distributed cache
> solutions?

> So instead of large numbers of small core, low memory servers (each
> of which have separate system and appl local disks to manage /
> upgrade), why not use the best of both distributed and centralized
> models? What about an approach that uses much fewer scale up first
> big core (32+) servers with big memory (TB+), then scale out. In
> addition, use a common file system (think new file system being
> developed) across all nodes in both sites (if DT/DR required)?

A few high-end systems are much more expensive than many low-end systems.

For many cases it is simply not competitive.

> The state can be maintained on the local file system, so no matter
> which server node in what DC the session is directed to, the state is
> available locally via direct reads to local HW cache or the local
> file system that is common to both sites.

Accessing cache servers with in-memory databases will be a couple
of orders of magnitude faster than a shared file system and lock manager.

> You can further reduce the overall latency by installing the App, DB
> and middleware all on the same OS instance to take advantage of the
> big memory, large HW caches. This also enhances server utilization
> levels.

Poor security, a performance benefit that decreases with
scale-out, and a risk of interference between applications
do not add up to an attractive solution.

Arne

Kerry Main

Aug 18, 2016, 10:50:04 PM
to comp.os.vms to email gateway
> -----Original Message-----
> From: Info-vax [mailto:info-vax...@rbnsn.com] On
> Behalf Of Arne Vajhøj via Info-vax
> Sent: 18-Aug-16 9:39 PM
> To: info...@rbnsn.com
> Cc: Arne Vajhøj <ar...@vajhoej.dk>
> Subject: Re: [Info-vax] redis-cache another freeware
> opportunity
>
> On 8/18/2016 12:47 PM, Kerry Main wrote:
> > Imho, designing stateful applications could be one of the
> > differentiators for OpenVMS clusters - especially with the new file
> > system now cooking.
> >
> > Numerous new application groups are looking to develop stateful
> > applications as a means to create next generation applications.
>
> ????
>
> Stateful applications are totally industry standard today.
>
> IBM WXS, Oracle Coherence, JBoss Infinispan, Hazelcast, memcached,
> redis etc..
>
> It is off the shelf functionality.
>

All based on very high LAN network latency.

In relative terms, CPU-to-cache, CPU-to-memory and CPU-to-flash-disk
access times are seconds/minutes/days, whereas a persistent update
across the LAN network is on the order of months.
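
To put rough numbers behind that analogy, here is the usual scaling exercise. The latency figures are approximations taken from the commonly quoted "latency numbers every programmer should know" list, not measurements:

    # Scale everything so that one main-memory reference (~100 ns) = 1 second,
    # then see what the other latencies become on a human time scale.
    approx_latencies_ns = {
        "L1 cache reference":             0.5,
        "main memory reference":          100,
        "SSD random read":                150_000,
        "round trip within a datacenter": 500_000,
        "disk seek":                      10_000_000,
    }

    scale = 1.0 / 100  # seconds of "human time" per nanosecond
    for name, ns in approx_latencies_ns.items():
        seconds = ns * scale
        if seconds < 60:
            print(f"{name:32s} ~{seconds:8.3f} s")
        elif seconds < 3600:
            print(f"{name:32s} ~{seconds / 60:8.1f} min")
        else:
            print(f"{name:32s} ~{seconds / 3600:8.1f} h")

On that scale a memory reference is one second, an SSD read is roughly 25 minutes, and a single in-datacenter round trip is over an hour - before any firewall, load balancer or application processing is added on top.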

> > Reference:
> > http://highscalability.com/blog/2015/10/12/making-the-case-for-building-scalable-stateful-services-in-t.html

That is old technology thinking. Big blade systems are a fraction of the
cost they were 10+ years ago. In addition, they are much better
integrated than they were 10+ years ago.

Yes, I do agree OpenVMS pricing does need to evolve to
become more competitive with other X86-64 platforms.

> > The state can be maintained on the local file system, so no matter
> > which server node in what DC the session is directed to, the state is
> > available locally via direct reads to local HW cache or the local
> > file system that is common to both sites.
>
> Accessing cache servers with in memory databases will be a couple
> of magnitudes faster than a shared file system and lock manager.
>

Which is why I suggested putting App servers and DB's on
the same OS instance with TB scale local memory. If a
reference is not in memory, then it is a local IO, not a
remote IO across the low latency LAN network.

> > You can further reduce the overall latency by installing the App, DB
> > and middleware all on the same OS instance to take advantage of the
> > big memory, large HW caches. This also enhances server utilization
> > levels.
>
> Poor security and a performance benefit that decreases with
> scale out and a risk for interference between applications
> is not an attractive solution.
>
> Arne
>

OpenVMS Customers have been doing this for decades. With
proper planning, this is not an issue. Yes, you do need to
do proper capacity and security planning, and use
technologies like the native class scheduler etc.

I know a certain 2 node prod OpenVMS Alpha cluster in the
Lottery business that brings in about $2B/year. It has at
least 10 different applications running on both systems.
Another 8 node active-active Alpha OpenVMS cluster used in
a large ISP currently supports approx. 4M users (that was
a few years ago, so the user count may have gone up).

I do agree technologies like RoCEv2 (RDMA over Ethernet)
would be beneficial to further enhance OpenVMS cluster
scalability with very low latency inter-node
communications.

Kerry Main

Aug 18, 2016, 11:00:04 PM
to comp.os.vms to email gateway
> -----Original Message-----
> From: Kerry Main [mailto:kemain...@gmail.com]
> Sent: 18-Aug-16 10:46 PM
> To: 'comp.os.vms to email gateway' <info-
> v...@rbnsn.com>
> Subject: RE: [Info-vax] redis-cache another freeware
> opportunity
>

[snip..]

> > Accessing cache servers with in memory databases will be a couple
> > of magnitudes faster than a shared file system and lock manager.
> >
>
> Which is why I suggested putting App servers and DB's on
> the same OS instance with TB scale local memory. If a
> reference is not in memory, then it is a local IO, not a
> remote IO across the low latency LAN network.
>

Crap - typo - last paragraph should state " not a remote
IO across the HIGH latency LAN network"

[snip]

Stephen Hoffman

Aug 19, 2016, 9:24:33 AM
On 2016-08-19 01:39:23 +0000, Arne Vajhøj said:

> On 8/18/2016 12:47 PM, Kerry Main wrote:
>> Imho, designing stateful applications could be one of the
>> differentiators for OpenVMS clusters - especially with the new file
>> system now cooking.
>>
>> Numerous new application groups are looking to develop stateful
>> applications as a means to create next generation applications.
>
> ????
>
> Stateful applications are totally industry standard today.
>
> IBM WXS, Oracle Coherence, JBoss Infinispan, Hazelcast, memcached, redis etc..
>
> It is off the shelf functionality.

Ayup. Been that way for a while, too.

>> Reference:
>> http://highscalability.com/blog/2015/10/12/making-the-case-for-building-scalable-stateful-services-in-t.html
>>
>
> That seems to be more about the application frameworks to make
> developing easy than about how to share the state.

Or about how to build or buy a modern cluster, depending on how you read it.

>> Question - Why do current apps need Redis distributed cache solutions?

As mentioned... redis is a distributed memory database. So it can
work for caching, or general database operations, or whatever.

>> So instead of large numbers of small core, low memory servers (each of
>> which have separate system and appl local disks to manage / upgrade),
>> why not use the best of both distributed and centralized models? What
>> about an approach that uses much fewer scale up first big core (32+)
>> servers with big memory (TB+), then scale out.

With AMD's recent demonstration, 32 cores is soon to be a single-socket
configuration.

http://www.pcworld.com/article/3109327/hardware/let-the-cpu-wars-begin-amd-shows-its-zen-cpu-can-compete-with-intels-best.html


That design is supposedly due 2017, which means Intel will be up in
that range just as soon as they can manage that.

All discussions of adding cores, clock rate, aggregate performance and
TDP aside — variable clock rates are now common, and having
more cores active means more heat, which means throttling can be
necessary.

>> In addition, use a common file system (think new file system being
>> developed) across all nodes in both sites (if DT/DR required)?

If looking for speed, why would anybody hit a file system for anything
other than recovery, and then only after mirroring the data to a remote
and battery-protected server?

> Few high end systems are much more expensive than many low end systems.
>
> For many cases it is simply not competitive.

Ayup. HPE folks were reporting 80% of their server sales were
two-socket boxes, too. Increasing core counts means smaller and
denser boxes and fewer sockets overall, too. Yes, there'll always be
bigger boxes and specialized boxes available. But AFAIK OpenVMS I64
never got around to supporting SD2, and other high-end support was via
HPVM, and that's not on current versions. Which certainly implies
that there wasn't much demand for such configurations with OpenVMS,
i.e. configurations with 32+ cores and TB-sized memory. In short, I'd
expect VSI is looking at two-socket x86-64 server boxes as their
biggest potential hardware market. That'll likely get them 64 cores
and 128 threads as soon as next year, too.

>> The state can be maintained on the local file system, so no matter
>> which server node in what DC the session is directed to, the state is
>> available locally via direct reads to local HW cache or the local file
>> system that is common to both sites.
>
> Accessing cache servers with in memory databases will be a couple of
> magnitudes faster than a shared file system and lock manager.

LAN latency is pretty good, even compared with local SSD:
https://gist.github.com/jboner/2841832

Most folks that want persistence in addition to server-level
replication and something akin to RDMA are looking at SSD data
storage — if they can't afford full-on flash, soon at the 3D XPoint
stuff — and maybe also at something akin to PostgreSQL BDR for asynch
replication. Remote synch replication means having the data across
the LAN link, so a typical OpenVMS cluster with shadowing is going to
take the hit of LAN traffic and synchronous completion for all writes.

HBVS also has the issue of not compressing the data — it'll be
interesting to see if the new file system provides that for RMS and XQP
operations on disk, and whether OpenVMS uses memory compression to
speed paging and swapping operations — and then there's the fact that
HBVS and RAID-1 in general don't have a good way to reduce the volume
of data replicated and then shuffled around as the disks get larger.

OpenVMS has no support for checkpointing for backups or replication or
such, so you're rolling your own using existing APIs (DECdtm, HBVS, I/O
flush calls, etc), or you're using a database that has these
capabilities integrated, or maybe some of the third-party add-on code
that ties into RMS and the XQP for related purposes. That code tends
to be tricky, too.

>> You can further reduce the overall latency by installing the App, DB
>> and middleware all on the same OS instance to take advantage of the big
>> memory, large HW caches. This also enhances server utilization levels.
>
> Poor security and a performance benefit that decreases with scale out
> and a risk for interference between applications is not an attractive
> solution.

That, and — if I'm looking seriously at reducing application latency —
why would I want to share the processor caches among disparate
applications? That's just asking for contention and unpredictable and
odd performance characteristics. NUMA maybe, but you're still dealing
with the cache coordination across all processors in a NUMA box that'll
support OpenVMS.

But the design all depends on the parameters, and most folks doing
high-end configurations probably aren't using OpenVMS in any case.
The folks looking for entry-level configurations definitely aren't.



--
Pure Personal Opinion | HoffmanLabs LLC

Kerry Main

Aug 19, 2016, 1:50:05 PM
to comp.os.vms to email gateway
> -----Original Message-----
> From: Info-vax [mailto:info-vax...@rbnsn.com] On
> Behalf Of Stephen Hoffman via Info-vax
> Sent: 19-Aug-16 9:24 AM
> To: info...@rbnsn.com
> Cc: Stephen Hoffman <seao...@hoffmanlabs.invalid>
> Subject: Re: [Info-vax] redis-cache another freeware
> opportunity
>
As I previously stated, large systems with large physical memory would mean large caches and/or in-memory database content.

File system access would occur only if the initial reference was not in memory. A local file IO update to flash disk is still orders of magnitude faster than a distributed network update, i.e. a network update to all servers via a distributed cache.
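
A toy version of that read path - check the big local memory first, and fall back to the local file system only on a miss (Python sketch; the dictionary stands in for a real in-memory cache/database and the path is made up):

    import json
    from pathlib import Path

    # Stand-in for TBs of local physical memory holding the hot data.
    in_memory_cache = {}

    # Hypothetical cluster-common file system path for the cold copy.
    DATA_DIR = Path("/cluster_common/data")

    def read_record(key):
        # 1. Memory first: no IO at all if the working set fits in RAM.
        if key in in_memory_cache:
            return in_memory_cache[key]
        # 2. Miss: a local file read (flash), not a round trip over the LAN.
        path = DATA_DIR / (key + ".json")
        value = json.loads(path.read_text()) if path.exists() else None
        if value is not None:
            in_memory_cache[key] = value
        return value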

> > Few high end systems are much more expensive than many low end systems.
> >
> > For many cases it is simply not competitive.
>
> Ayup. HPE folks were reporting 80% of their server sales were
> two-socket boxes, too. Increasing core counts means smaller and
> denser boxes and fewer sockets overall, too. Yes, there'll always be
> bigger boxes and specialized boxes available. But AFAIK OpenVMS I64
> never got around to supporting SD2, and other high-end support was via
> HPVM and that's now not current versions. Which certainly implies
> that there wasn't much demand for such configurations with OpenVMS; of
> configurations with 32+ cores and TB-sized memory. In short, I'd
> expect VSI is looking at two-socket x86-64 server boxes as their
> biggest potential hardware market. That'll likely get them 64 cores
> and 128 threads as soon as next year, too.
>

The big servers I am talking about are blade servers.

> >> The state can be maintained on the local file system, so no matter
> >> which server node in what DC the session is directed to, the state is
> >> available locally via direct reads to local HW cache or the local file
> >> system that is common to both sites.
> >
> > Accessing cache servers with in memory databases will be a couple of
> > magnitudes faster than a shared file system and lock manager.
>
> LAN latency is pretty good, even compared with local SSD:
> https://gist.github.com/jboner/2841832

That is old data, and it does not include write updates to the remote systems' local disks. You need to consider the remote system's local disk write time plus firewall / switch / router / load-balancer propagation delays.

All Intel and AMD server architectures today - and, as far as I can tell, future architectures - are NUMA based. I suspect the same is true for their desktop designs as well.

As a reminder:
- OpenVMS has been handling NUMA type issues like RAD's, remote vs. local memory etc. since GS160 days
- OpenVMS can define process affinity to specific cores

http://h41379.www4.hpe.com/openvms/journal/v16/rad.html
"With RAD soft-affinity, the OpenVMS Scheduler attempts to schedule a process on a CPU from the Home RAD of that process. This is a complementary mechanism to ensure that when a process is scheduled on a CPU, the memory references for that process become more local. Any CPU hard-affinity set for a particular process to particular CPU(s) takes precedence over the RAD soft-affinity."

"When a process is ready to be scheduled, if the scheduler does not find an idle CPU from the Home RAD of that process, it skips the scheduling of that process for skip count iterations; beyond that the Scheduler chooses a CPU from a remote RAD."

In addition, even a memory access from a core to memory in a remote RAD is still an order of magnitude faster than a remote network read.

> But the design all depends on the parameters, and most folks doing
> high-end configurations probably aren't using OpenVMS in any case.
> The folks looking for entry-level configurations definitely aren't.
>

High-end configurations today, like Google's highly distributed network architectures, were put together 5-10+ years ago. They work, but with the hidden cost of managing thousands of very small systems - each with its own system and local app disks.

These designs do not take into account all of the CPU, memory and storage hardware improvements of the last 5 years. The one component that has not improved is local LAN latency (propagation delays due to switches, routers, firewalls, LB's, NICs, the TCPIP stack, etc.).

I suspect this is why Google is looking at PowerX so closely these days - perhaps they are looking at more centralized configurations? Perhaps with a view to solving their multiple-DC data-consistency challenges?

Richard Maher

Aug 19, 2016, 8:19:15 PM
On 19-Aug-16 9:18 AM, Arne Vajhøj wrote:

>
> Redis is really a database.
>
> it is an in-memory NoSQL database of the key-value store type with
> optional near real time persistence of transaction log.
>
> That is rather typical of what is used today for cluster wide state.
>
> Arne
>
>


Then why is Microsoft touting it as their non-database distributed cache
solution for session-state management in Azure?

This is the alternative to the .NET SessionState process solution,
remember? There is no need for persistence at all.

Personally, I loved Oracle's Cache Fusion (15 years old?) where the data
travels with the lock. In this case all access should be
sequential/queued, but there is no need to go to disk.

Confused.

Arne Vajhøj

Aug 20, 2016, 5:15:05 PM
On 8/19/2016 8:19 PM, Richard Maher wrote:
> On 19-Aug-16 9:18 AM, Arne Vajhøj wrote:
>
>>
>> Redis is really a database.
>>
>> it is an in-memory NoSQL database of the key-value store type with
>> optional near real time persistence of transaction log.
>>
>> That is rather typical of what is used today for cluster wide state.
>>
>> Arne
>>
>>
>
>
> Then why is Microsoft touting it as their non-database distributed cache
> solution for session-state management in Azure?

http://redis.io/

<quote>
Redis is an open source (BSD licensed), in-memory data structure store,
used as database, cache and message broker.
</quote>

> This is the alternative for the .NET SessionState process solution
> remember?

Yes. And it is fine for that.

> There is no need for persistence at all.

Persistence of the transaction log to disk is optional. MS probably does
not enable that.

But databases do not necessarily need to be persisted to disk.

It is a good idea if data need to survive a server crash, but ...

There are a whole bunch of these. One of the better known is
probably Oracle TimesTen (and it is actually relational not NoSQL!).
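
For what it's worth, a client can at least see which persistence options a given Redis server is running with (redis-py sketch; whether a managed service such as Azure exposes the CONFIG command is a separate question):

    import redis  # assumes a reachable server and permission to run CONFIG

    r = redis.Redis(host="localhost", port=6379, decode_responses=True)

    # RDB snapshots ("save") and the append-only log ("appendonly") are both
    # optional; with both disabled the store is purely in-memory.
    print(r.config_get("appendonly"))   # e.g. {'appendonly': 'no'}
    print(r.config_get("save"))         # snapshot schedule; empty if disabled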

Arne



Stephen Hoffman

Aug 21, 2016, 8:22:28 AM
On 2016-08-20 21:14:59 +0000, Arne Vajhøj said:

> But databases does not necessarily need to be persisted to disk.
>
> It is a good idea if data need to survive a server crash, but ...

Ayup... That's one of the reasons why folks replicate data to other
servers, and use transaction logs or journals when persisting data to
disk past what's provided by redundant distributed servers.
Clustering, but without the slow parts, and with rather less data
persisted to slow parts. That combines the speed of DRAM memory with
the low latency and high bandwidth of 40 GbE connections, and the
persisted data is effectively one long sequential write.
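
That "one long sequential write" is essentially an append-only journal. A stripped-down sketch of the idea (Python; the record format and file location are made up, and no replication is shown):

    import json
    import time

    class Journal:
        """Append-only log: each update is one sequential write plus a flush."""

        def __init__(self, path):
            # Open in append mode; the file only ever grows at the end.
            self._f = open(path, "a", encoding="utf-8")

        def record(self, entry):
            line = json.dumps({"ts": time.time(), "entry": entry})
            self._f.write(line + "\n")
            self._f.flush()  # hand it to the OS...
            # os.fsync(self._f.fileno()) would force it to stable storage;
            # how far to go depends on what a "persisted" write must survive.

    j = Journal("/tmp/state.journal")
    j.record({"key": "session:abc123", "op": "set", "value": {"cart": [42]}})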

On other platforms, there's more than a little software available here
and in use in these configurations, too — redis is just one of various
packages and tools targeting these sorts of modern cluster
configurations.

As for hardware... Folks around here are used to HDDs, even though the
tech itself — if you really look at how a HDD works — is insanely
complex and fragile tech. SSDs are starting to become more acceptable
for many OpenVMS sites. (Not that the internals of flash are entirely
sane, but at least SSDs are built using what was once commonly called
solid state electronics.) OpenVMS hasn't gotten around to PCIe flash
storage, and many sites might not ever. Though that hardware is
available. No compressed memory or compressed disk storage yet,
either. We'll see 3D XPoint and similar arriving here over the next
few years (past the flash DIMMs that are becoming available), but
adoption of that will undoubtedly be slow.

Arrays of battery-protected servers are a little new for some folks'
preferences, just as 3D XPoint will have some folks wondering about
stability and corruption resistance and related factors for a while.

Kerry Main

Aug 21, 2016, 10:40:05 AM
to comp.os.vms to email gateway
> -----Original Message-----
> From: Info-vax [mailto:info-vax...@rbnsn.com] On
> Behalf Of Stephen Hoffman via Info-vax
> Sent: 21-Aug-16 8:22 AM
> To: info...@rbnsn.com
> Cc: Stephen Hoffman <seao...@hoffmanlabs.invalid>
> Subject: Re: [Info-vax] redis-cache another freeware
> opportunity
>
> On 2016-08-20 21:14:59 +0000, Arne Vajhøj said:
>
> > But databases does not necessarily need to be persisted to disk.
> >
> > It is a good idea if data need to survive a server crash, but ...
>
> Ayup... That's one of the reasons why folks replicate data to other
> servers, and use transaction logs or journals when persisting data to
> disk past what's provided by redundant distributed servers.
> Clustering, but without the slow parts, and with rather less data
> persisted to slow parts. That combines the speed of DRAM memory with
> the low latency and high bandwidth of 40 GbE connections, and the
> persisted data is effectively one long sequential write.
>

While true, let's not forget that the challenge for network data updates is not bandwidth, but rather the latency associated with switches, routers, firewalls (deep packet inspection), LB's, NICs, TCPIP stacks, etc. This latency has not changed in the last 5 years. However, CPU / memory / storage technologies and their associated latencies have all improved exponentially.

Imho, we are going to start seeing next generation designs which do not depend so heavily on networks with all of this associated high network latencies.

For lack of a better term, I call it network tier consolidation.

Why would compressed memory even be a consideration when TBs of non-volatile local physical memory are (or soon will be) available (the 3D XPoint you mentioned)?

Imho, the only thing that will slow large local memory adoption down will be cost and that may indeed become a factor for some Customers.

> Arrays of battery-protected servers are a little new for some folks'
> preferences, just as 3D XPoint will have some folks wondering about
> stability and corruption resistance and related factors for a while.
>

Battery-protected servers are, imho, rapidly going to become last year's technology.