IBM Power7+ vs. Oracle SPARC T5


DrQ

Sep 1, 2012, 6:40:43 PM
to guerrilla-cap...@googlegroups.com

steve jenkin

Sep 2, 2012, 12:48:39 AM
to guerrilla-cap...@googlegroups.com
Wonderful to see the Dinosaurs still duking it out.

IBM's use of eDRAM for the L3 cache is a big move - like when they figured out how to use copper interconnects in chips.

I suspect we're seeing a replay of the late-1980s demise of mainframes... Not fast, not universal, not complete, but 90+% of the business goes away, killing weak supporting businesses.

Everyone but IBM's System Z and Unisys ClearPath went away - or into emulation. [ClearPath = emulation on Xeon of the 2200 & B-series?]

In my view, two forces have converged to push these high-end niche processors into irrelevance:
 - Patterson's Brick Wall (2006): Power Wall + Memory Wall + ILP Wall = Brick Wall
 - "infinite" IO/Sec and virtual-RAM with PCI-SSD. eg Fusion-IO

With cheap PCI-SSD by the Terabyte, the majority of Apps/enterprises don't need:
 - Big Iron Databases
 - Big Iron Storage Arrays and supporting SAN's
 - Big Iron multi-chip fast uniprocessing cores.

A lot of the complexity of Big Iron DB's (like Oracle) is aimed at achieving "speed" in the face of low-performing HDD's... [slow IO/sec, not streaming throughput]

If the whole of a relational DB (tables) fits in memory (or fast Virtual memory), then doesn't the DB become *very* simple, modulo ACID tests and writing "commits" to persistent, high-reliability storage?

Which means we might start seeing a bunch of in-memory DB's, like NoSQL, but for normal-sized DB's (1-5GB) not large collections.
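
To make that concrete, here's a minimal sketch (Python, purely illustrative -
not any particular product) of the shape such a DB takes: the tables are
ordinary in-memory dicts, and the only thing that ever touches persistent
storage is an append-only commit log, fsync'd on each commit and replayed at
startup.

# in_memory_db.py - illustrative sketch only, not a real DBMS.
# Tables are plain dicts; durability comes from an append-only,
# fsync'd commit (redo) log. Recovery = replay the log.
import json, os

class TinyDB:
    def __init__(self, log_path="commit.log"):
        self.tables = {}                      # {table_name: {key: row}}
        self.log = open(log_path, "a+", encoding="utf-8")
        self.log.seek(0)
        for line in self.log:                 # recovery: replay the log
            self._apply(json.loads(line))

    def _apply(self, rec):
        self.tables.setdefault(rec["table"], {})[rec["key"]] = rec["row"]

    def commit(self, table, key, row):
        rec = {"table": table, "key": key, "row": row}
        self.log.write(json.dumps(rec) + "\n")
        self.log.flush()
        os.fsync(self.log.fileno())           # the one slow step: a durable, sequential append
        self._apply(rec)

    def get(self, table, key):
        return self.tables.get(table, {}).get(key)

db = TinyDB()
db.commit("accounts", "alice", {"balance": 100})
print(db.get("accounts", "alice"))            # -> {'balance': 100}

Everything that makes a disk-based engine complicated - the buffer pool,
B-trees, checkpointing to hide random IO - simply isn't there; what's left is
the commit log and the ACID bookkeeping.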

There's an economic rule on product substitution that led to the relatively quick decline in IBM mainframe sales:
 - when the capital expenditure on a substitute is less than the operational costs of the current system, barriers to adoption are removed.

Hardware maintenance fees are typically 15-20% of capital costs.
Whilst Oracle licensing costs are beyond me (I don't track them), they are becoming a major operational cost in their own right.

How many "little" DB applications could succeed with just two low-cost ($10k) servers, 3 SATA drives each in simple RAID and 1 Fusion-IO board, run as H/A with an in-memory DB?

If organisations can build a complete, high-performance, high-availability, simple-admin solution for $20-$30k per group of DB's, they can afford to deploy them immediately based on direct maintenance savings.
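
To put rough numbers on that substitution rule (every figure below is an
assumption for illustration, not a quote):

# Illustrative payback calculation - all figures are assumptions.
legacy_capex     = 250_000                    # original Big Iron purchase price ($)
maint_rate       = 0.175                      # 15-20% p.a. hardware maintenance, midpoint
legacy_opex      = legacy_capex * maint_rate  # ~$43,750/yr, before any Oracle licence fees
substitute_capex = 25_000                     # two $10k servers + SSD/RAID, per the question above

payback_years = substitute_capex / legacy_opex
print(f"Payback from maintenance savings alone: {payback_years:.1f} years")
# -> about 0.6 years; licence savings would shorten it further

Once the payback sits inside a single budget year, the barrier the
substitution rule describes has effectively gone.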


Link for Patterson:
<http://www.eecs.berkeley.edu/Pubs/TechRpts/2006/EECS-2006-183.pdf>

DrQ wrote on 2/09/12 8:40 AM:




-- 
Steve Jenkin, Info Tech, Systems and Design Specialist.
0412 786 915 (+61 412 786 915)
PO Box 48, Kippax ACT 2615, AUSTRALIA

stev...@gmail.com http://members.tip.net.au/~sjenkin

M. Edward (Ed) Borasky

Sep 2, 2012, 10:38:55 PM
to guerrilla-cap...@googlegroups.com
On Sat, Sep 1, 2012 at 9:48 PM, steve jenkin <stev...@gmail.com> wrote:

[snip]

> In my view, two forces have converged to push these high-end niche
> processors into irrelevance:
> - Patterson's Brick Wall (2006): Power Wall + Memory Wall + ILP Wall =
> Brick Wall
> - "infinite" IO/Sec and virtual-RAM with PCI-SSD. eg Fusion-IO

Intel are looking over their shoulder at becoming dinosaurs. Maybe I
won't live to see it, but ARM servers could very well do in the
x86_64.

[snip]

> If the whole of a relational DB (tables) fits in memory (or fast Virtual
> memory), then doesn't the DB become *very* simple, modulo ACID tests and
> writing "commits" to persistent, high-reliability storage?
>
> Which means we might start seeing a bunch of in-memory DB's, like NoSQL, but
> for normal-sized DB's (1-5GB) not large collections.

1 - 5 GB of RAM is small scale for some of the really heavy real-time
folks. Redis (redis.io) is the 800-pound gorilla here, and there are
ways you can get ACID if you're willing to write an AOF file to SSD or
even iron disks for every operation. There are also configurations
with memcached bolted onto CouchDB, MongoDB or Riak.
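
For reference, the AOF settings Ed is describing look like this - a sketch
using the redis-py client against a local instance (the same two directives
can also go in redis.conf):

# Sketch: push Redis toward maximum durability via the append-only file.
# Assumes the redis-py package and a Redis server on localhost.
import redis

r = redis.Redis(host="localhost", port=6379)
r.config_set("appendonly", "yes")       # log every write command to the AOF
r.config_set("appendfsync", "always")   # fsync per command: strongest durability, slowest option
r.set("reading:sensor42", "19.7")       # durably logged before the reply comes back

The usual compromise is appendfsync everysec, which bounds the loss window to
roughly one second.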

--
Twitter: http://twitter.com/znmeb; Computational Journalism Publishers
Workbench: http://j.mp/QCsXOr

How the Hell can the lion sleep with all those people singing "A weem
oh way!" at the top of their lungs?

steve jenkin

Sep 3, 2012, 7:23:39 PM
to guerrilla-cap...@googlegroups.com
M. Edward (Ed) Borasky wrote on 3/09/12 12:38 PM:
> Intel are looking over their shoulder at becoming dinosaurs. Maybe I
> won't live to see it, but ARM servers could very well do in the
> x86_64.
Ed,

And so they should for most general purpose computing.

I remember seeing John Mashey of MIPS give a talk in 1988 in which he plotted
CPU speed for each of the ECL, bipolar and CMOS technologies. ECL had already
been overtaken by then, and bipolar was due to lose the lead within a few years.
The 486 in 1991 was a complete system-on-a-chip and changed the landscape.

It answers the question "Where did all the supercomputers go?"
A: Inside Intel. [and Power and SPARC. possibly Z series]

The Intel chips seek "maximum performance" - they pull all the tricks
that super-computer designs used, and it is that technology that is
approaching Patterson's "Brick Wall" [heat, memory, ILP].

And as an aside, GPU's are filling the "vector processor" niche of CDC
and Cray.

ARM has pursued a very different strategy, more based around
'efficiency': MIPS/Watt

So, while I agree with you, I think the situation is nuanced.

ARM processors are obvious choices for low-power and mobile/battery devices.
Because of design simplicity (small PSU, no CPU-fan) and smaller size,
they'll become more interesting for low-end PC's.

There is a company, Calxeda, now producing high-density ARM boards for
servers.
They are hoping to leverage MIPS/Watt for highly-parallelisable loads,
like web-servers.

But I can't see anyone taking on Intel soon in the
supercomputer-on-a-chip market.
It's not just servers, especially for large DB's, but workstations and
'performance' laptops.

The problem for Intel in that evolution of the market is ARM taking
sales from multiple market segments. Seeing that Windows 8 will run on
ARM, we might see the end of WinTel for low-end & mid-tier laptops.

As a company, can Intel survive such a radical change in demand for its
major product line?
Will its work on MLC flash fill the financial void?

I've no idea how that will go.
But like you said, ARM is going to shake up even the Intel server market.

The "secret sauce" of the ARM architecture is that it's a *licensed* design.
Although chip-design companies might not own, or be able to access, chip
FABs within 2 or 3 design cycles of Intel's, they can produce highly
optimised, use-case-targeted chips.

Which Intel can't do. They are focussed on the bleeding edge of CPU
performance and FAB design.

Manufacturers like Apple (A5) and Calxeda can produce ARM-based designs
that outperform Intel-based systems by an order of magnitude on
non-MIPS metrics.

As Apple has shown, there are very big markets where raw MIPS isn't the
"figure of merit" in designs.

cheers
steve

Baron Schwartz

Sep 3, 2012, 8:09:19 PM
to guerrilla-cap...@googlegroups.com
Hi,

> How many "little" DB applications could succeed with just two low-cost ($10k)
> servers, 3 SATA drives each in simple RAID and 1 Fusion-IO board, run as H/A
> with an in-memory DB?

Most people I know using Fusion-IO aren't doing it with in-memory
databases, but with databases much larger than memory. They do it
because they are overwhelmingly disk-bound on reads. If you're using
an in-memory database, spending a large amount of money on that
caliber of storage is likely to be a waste. An in-memory database
pretty much needs durable storage for a (relatively) occasional and
(mostly) sequential write workload, which can be handled quite well by
a good RAID controller with a battery-backed write cache -- with
performance comparable to, or better than, PCIe flash storage. The
exception would be an extremely high write volume that exceeds the
throughput of spindles, but that isn't characteristic of in-memory
databases.
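
That write pattern is easy to see in a toy measurement - a sketch, not a
benchmark, and the absolute numbers will vary enormously with the device and
controller:

# Sketch: sequential, fsync-per-commit appends - the workload an in-memory
# DB's durability path actually sends to storage.
import os, time

N, record = 1_000, b"x" * 512                 # 1,000 commits of ~512 bytes each
with open("commitlog.bin", "ab") as log:
    start = time.perf_counter()
    for _ in range(N):
        log.write(record)
        log.flush()
        os.fsync(log.fileno())                # one durable write per commit
    elapsed = time.perf_counter() - start

print(f"{N / elapsed:.0f} commits/sec of purely sequential appends")
# A battery-backed RAID write cache acknowledges each fsync from cache,
# which is why spindles cope fine with this pattern.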

- Baron