Seminário de Arquitetura

Rodolfo Azevedo

Nov 22, 2014, 9:27:25 AM
to Seminários de Arquitetura e Compiladores, people-lsc
Next Monday: 24/11 - 10 am - Room 351

Gennady Pekhimenko, Vivek Seshadri, Yoongu Kim, Hongyi Xin, Onur Mutlu, Phillip B. Gibbons, Michael A. Kozuch, and Todd C. Mowry. 2013. Linearly compressed pages: a low-complexity, low-latency main memory compression framework. In Proceedings of the 46th Annual IEEE/ACM International Symposium on Microarchitecture (MICRO-46). ACM, New York, NY, USA, 172-184. DOI=10.1145/2540708.2540724
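
For context, the paper's central idea is to compress every cache line
within a page to the same fixed size, so that the main-memory address
of a compressed line becomes a simple linear function of its index
instead of requiring a scan over variable-size lines.  A minimal
sketch of that address calculation follows (the struct and field
names are hypothetical, not from the paper):

/* Sketch of the LCP address calculation: with all lines in a page
 * compressed to one fixed size, locating line i is a multiply-add
 * rather than a walk over variable-size lines.  Names are
 * hypothetical. */
#include <stdint.h>

struct lcp_page {
    uintptr_t base;            /* physical base of the compressed page */
    uint32_t  comp_line_size;  /* fixed compressed size of every line */
};

/* Linear address of compressed cache line i within the page. */
static inline uintptr_t lcp_line_addr(const struct lcp_page *p, uint32_t i) {
    return p->base + (uintptr_t)i * p->comp_line_size;
}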

--
Rodolfo Jardim de Azevedo
http://www.ic.unicamp.br/~rodolfo
IC - University of Campinas - UNICAMP

Rodolfo Azevedo

Nov 27, 2014, 9:51:11 AM
to Seminários de Arquitetura e Compiladores, people-lsc
Next Monday, 01/12, 10 am, Room 351

TITLE: Hybrid Programming Challenges for Extreme Scale Software

SPEAKER: Professor Vivek Sarkar, E.D. Butcher Chair in Engineering, Rice University

ABSTRACT:

It is widely recognized that computer systems in the next decade will
be qualitatively different from current and past computer systems.
Specifically, they will be built using homogeneous and heterogeneous
many-core processors with hundreds of cores per chip, their performance
will be driven by parallelism (million-way parallelism just for a
departmental server), and constrained by energy and data movement.
They will also be subject to frequent faults and failures.  Unlike
previous generations of hardware evolution, these Extreme Scale
systems will have a profound impact on future software.  The software
challenges are further compounded by the need to support new workloads
and application domains that have traditionally not had to worry about
parallel computing.

In general, a holistic redesign of the entire software stack is needed
to address the programmability and performance requirements of Extreme
Scale systems.  This redesign will need to span programming models,
languages, compilers, runtime systems, and system software.  A major
challenge in this redesign arises from the fact that current
programming systems have their roots in execution models that focused
on homogeneous forms of parallelism: OpenMP and Java are rooted in
SMP parallelism, MPI and Hadoop in cluster parallelism, and CUDA and
OpenCL in GPU parallelism.  This in turn
leads to the "hybrid programming" challenge for application
developers, as they are forced to explore approaches to combine two or
all three of these models in the same application.  Despite early
experience and attempts by some of these programming systems to
broaden their scope (e.g., the addition of accelerator pragmas to
OpenMP), hybrid
programming remains an open problem and a major obstacle for
application enablement on future systems.
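
As a concrete illustration of the hybrid-programming pattern
described above, the sketch below combines two of the three models:
MPI distributes work across cluster nodes while OpenMP parallelizes
within each node.  This is an illustrative stand-in, not code from
the talk; it assumes an MPI installation with threading support and
reduces the per-node kernel to a trivial sum.

/* Minimal hybrid MPI+OpenMP sketch.  Compile with e.g.
 * `mpicc -fopenmp hybrid.c`; run with e.g. `mpirun -np 4 ./a.out`
 * and OMP_NUM_THREADS set per node. */
#include <mpi.h>
#include <omp.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int provided, rank, nprocs;
    /* Ask for an MPI library that tolerates OpenMP threads. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    const long N = 1 << 24;      /* total elements */
    long chunk = N / nprocs;     /* per-rank share (assumes divisibility) */
    double local = 0.0, global = 0.0;

    /* Second level of parallelism: OpenMP threads within the rank. */
    #pragma omp parallel for reduction(+:local)
    for (long i = rank * chunk; i < (rank + 1) * chunk; i++)
        local += 1.0 / (double)(i + 1);   /* stand-in for a real kernel */

    /* First level: combine per-rank partial sums across the cluster. */
    MPI_Reduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0)
        printf("sum = %f (ranks=%d, threads/rank=%d)\n",
               global, nprocs, omp_get_max_threads());

    MPI_Finalize();
    return 0;
}

The difficulty the talk highlights is that the two levels above come
from unrelated execution models, so the programmer, not the system,
must decide how work and data are split between them.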

In this talk, we summarize experiences with hybrid programming in the
Habanero Extreme Scale Software Research project [1] which targets a
wide range of homogeneous and heterogeneous manycore processors in
both single-node and cluster configurations.  We focus on key
primitives in the Habanero execution model that simplify hybrid
programming, while also enabling a unified runtime system for
heterogeneous hardware.  Some of these primitives are also being
adopted by the new Open Community Runtime (OCR) open source project
[2].  These primitives for hybrid programming have been validated in a
range of applications studied in the NSF Expeditions Center for
Domain-Specific Computing (CDSC) [3], which include medical imaging
applications and Hadoop-style parallelization of machine learning
algorithms.
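
For readers unfamiliar with the Habanero execution model: its core
task primitives, async (spawn a child task) and finish (wait for all
transitively spawned children), originate in X10.  The rough analogue
below expresses the same idiom with OpenMP tasks (OpenMP's taskwait
joins only direct children, which suffices here); it is an
illustration of the primitives, not the Habanero API.

/* async/finish idiom via OpenMP tasks: `#pragma omp task` plays the
 * role of async and `#pragma omp taskwait` the role of finish.
 * Compile with e.g. `gcc -fopenmp fib.c`. */
#include <omp.h>
#include <stdio.h>

/* Each call "asyncs" two children, then "finishes" (waits) before
 * combining their results. */
static long fib(int n) {
    if (n < 2) return n;
    long a, b;
    #pragma omp task shared(a)   /* async: compute fib(n-1) in a child task */
    a = fib(n - 1);
    #pragma omp task shared(b)   /* async: compute fib(n-2) in a child task */
    b = fib(n - 2);
    #pragma omp taskwait         /* finish: wait for both children */
    return a + b;
}

int main(void) {
    long r;
    #pragma omp parallel         /* create the worker team */
    #pragma omp single           /* one thread seeds the task tree */
    r = fib(20);
    printf("fib(20) = %ld\n", r);   /* expect 6765 */
    return 0;
}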

Background material for this talk will be drawn in part from the DARPA
Exascale Software Study report [4] led by the speaker.  This talk will
also draw from a recent (March 2013) study led by the speaker on
Synergistic Challenges in Data-Intensive Science and Exascale
Computing [5] for the US Department of Energy's Office of Science.  We
would like to acknowledge the contributions of all participants in
both studies, as well as the contributions of all members of the
Habanero, OCR, and CDSC projects.

BIO:

Vivek Sarkar conducts research in multiple aspects of parallel
software including programming languages, program analysis, compiler
optimizations and runtimes for parallel and high performance computer
systems.  He currently leads the Habanero Extreme Scale Software Research
project at Rice University, and serves as Associate Director of the
NSF Expeditions project on the Center for Domain-Specific Computing.
Prior to joining Rice in July 2007, Vivek was Senior Manager of
Programming Technologies at IBM Research.  His responsibilities at IBM
included leading IBM's research efforts in programming model, tools,
and productivity in the PERCS project during 2002-2007 as part of the
DARPA High Productivity Computing System program.  His past projects
include the X10 programming language, the Jikes Research Virtual
Machine for the Java language, the MIT RAW multicore project, the ASTI
optimizer used in IBM's XL Fortran product compilers, the PTRAN
automatic parallelization system, and profile-directed partitioning
and scheduling of Sisal programs.  Vivek holds a B.Tech. degree from
the Indian Institute of Technology, Kanpur, an M.S. degree from
University of Wisconsin-Madison, and a Ph.D. from Stanford University.
He became a member of the IBM Academy of Technology in 1995, the
E.D. Butcher Chair in Engineering at Rice University in 2007, and was
inducted as an ACM Fellow in 2008.  Vivek has been serving as a member
of the US Department of Energy's Advanced Scientific Computing
Advisory Committee (ASCAC) since 2009.  He has also served as chair
of the Computer Science Department at Rice University since July 2013.

REFERENCES:
[1] Habanero Extreme Scale Software Research project.  http://habanero.rice.edu.
[2] Open Community Runtime (OCR) open source project.  https://xstackwiki.modelado.org/Open_Community_Runtime.
[3] Center for Domain-Specific Computing (CDSC).  http://cdsc.ucla.edu.
[4] DARPA Exascale Software Study report, September 2009.  http://users.ece.gatech.edu/~mrichard/ExascaleComputingStudyReports/ECS_reports.htm.
[5] DOE report on Synergistic Challenges in Data-Intensive Science and Exascale Computing, March 2013. http://science.energy.gov/~/media/ascr/ascac/pdf/reports/2013/ASCAC_Data_Intensive_Computing_report_final.pdf.