PAPER: The Landscape of Parallel Computing Research


Saifi Khan

Jul 3, 2009, 9:48:25 PM
to twin...@yahoogroups.com, bigc...@googlegroups.com
Hi all:

While there is plenty of buzz about 'parallelism research' these days,
I'd like to recommend a serious paper on the subject.

This is the paper that introduced the '13 Dwarfs' concept.

The Landscape of Parallel Computing Research:
A View from Berkeley

by
Asanovic, Bodik, Catanzaro, Gebis, Husbands, Keutzer, Patterson,
Plishker, Shalf, Williams and Yelick


Quick take-home
Our view is that this evolutionary approach to parallel hardware
and software may work from 2 or 8 processor systems, but is
likely to face diminishing returns as 16 and 32 processor
systems are realized, just as returns fell with greater
instruction-level parallelism.


Abstract
The recent switch to parallel microprocessors is a milestone in
the history of computing. Industry has laid out a roadmap for
multicore designs that preserves the programming paradigm of the
past via binary compatibility and cache coherence. Conventional
wisdom is now to double the number of cores on a chip with each
silicon generation.

A multidisciplinary group of Berkeley researchers met nearly two
years to discuss this change. Our view is that this evolutionary
approach to parallel hardware and software may work from 2 or 8
processor systems, but is likely to face diminishing returns as
16 and 32 processor systems are realized, just as returns fell
with greater instruction-level parallelism.

We believe that much can be learned by examining the success of
parallelism at the extremes of the computing spectrum, namely
embedded computing and high performance computing. This led us
to frame the parallel landscape with seven questions, and to
recommend the following:

* The overarching goal should be to make it easy to write
programs that execute efficiently on highly parallel computing
systems

* The target should be 1000s of cores per chip, as these chips
are built from processing elements that are the most efficient
in MIPS (Million Instructions per Second) per watt, MIPS per
area of silicon, and MIPS per development dollar.
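A back-of-the-envelope sketch of that efficiency argument (my own
illustration; the core specs below are made-up round numbers, not figures
from the paper):

```python
# Hypothetical illustration of the paper's MIPS-per-watt argument:
# many simple cores can beat a few complex ones on efficiency.
# All numbers below are invented for the example.

def mips_per_watt(mips, watts):
    """Efficiency metric: instruction throughput per watt of power."""
    return mips / watts

# One design with a few big out-of-order cores, one with many small
# in-order cores (both hypothetical).
big_core = {"mips": 10_000, "watts": 25.0, "count": 4}
small_core = {"mips": 1_000, "watts": 0.5, "count": 1000}

for name, c in [("4 big cores", big_core), ("1000 small cores", small_core)]:
    total_mips = c["mips"] * c["count"]
    total_watts = c["watts"] * c["count"]
    print(f"{name}: {total_mips:,} MIPS at {total_watts:,.0f} W "
          f"-> {mips_per_watt(total_mips, total_watts):,.0f} MIPS/W")
```

With these invented numbers the small-core chip delivers both more
aggregate MIPS and far better MIPS/watt, which is the shape of the
argument for 1000s of cores.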

* Instead of traditional benchmarks, use 13 "Dwarfs" to design
and evaluate parallel programming models and architectures. (A
dwarf is an algorithmic method that captures a pattern of
computation and communication.)
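For concreteness, here is a minimal sketch (my own, not from the paper) of
the first dwarf, dense linear algebra: a naive matrix multiply, whose
regular triple-loop compute and predictable data-movement pattern is
exactly what the dwarf abstraction captures.

```python
# Sketch of the "dense linear algebra" dwarf: computation is a regular
# triple loop over dense arrays, communication is predictable streaming
# of rows and columns.

def matmul(a, b):
    """Naive dense matrix multiply: C[i][j] = sum_k A[i][k] * B[k][j]."""
    n, k, m = len(a), len(b), len(b[0])
    assert len(a[0]) == k, "inner dimensions must match"
    c = [[0.0] * m for _ in range(n)]
    for i in range(n):
        for j in range(m):
            s = 0.0
            for p in range(k):
                s += a[i][p] * b[p][j]
            c[i][j] = s
    return c

print(matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]]))
# -> [[19.0, 22.0], [43.0, 50.0]]
```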

* "Autotuners" should play a larger role than conventional
compilers in translating parallel programs.
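A toy sketch of the autotuning idea (my own illustration, not the
machinery of real autotuners like ATLAS or FFTW): instead of trusting a
compiler's static cost model, time several candidate implementations of
the same kernel on the actual machine and keep the fastest.

```python
import timeit

# Toy autotuner: empirically select the fastest variant of a kernel on
# the machine it will actually run on.

def sum_builtin(xs):
    return sum(xs)

def sum_loop(xs):
    total = 0
    for x in xs:
        total += x
    return total

def autotune(variants, data, repeats=5):
    """Return the variant with the best measured runtime on `data`."""
    best, best_time = None, float("inf")
    for fn in variants:
        t = min(timeit.repeat(lambda: fn(data), number=100, repeat=repeats))
        if t < best_time:
            best, best_time = fn, t
    return best

data = list(range(10_000))
fastest = autotune([sum_builtin, sum_loop], data)
print("selected:", fastest.__name__)
```

Real autotuners search much larger spaces (tile sizes, loop orders,
vectorization strategies), but the principle is the same: measure, don't
predict.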

* To maximize programmer productivity, future programming models
must be more human-centric than the conventional focus on
hardware or applications.

* To be successful, programming models should be independent of the
number of processors.

* Traditional operating systems will be deconstructed and
operating system functionality will be orchestrated using
libraries and virtual machines.

* To explore the design space rapidly, use system emulators
based on Field Programmable Gate Arrays (FPGAs) that are highly
scalable and low cost.

Since real world applications are naturally parallel and
hardware is naturally parallel, what we need is a programming
model, system software, and a supporting architecture that are
naturally parallel. Researchers have the rare opportunity to
re-invent these cornerstones of computing, provided they
simplify the efficient programming of highly parallel systems.

Download
http://www.eecs.berkeley.edu/Pubs/TechRpts/2006/EECS-2006-183.pdf


thanks
Saifi.
