11/24 Reviews - EnergyProportional


Rodrigo

unread,
Nov 23, 2009, 6:57:11 PM11/23/09
to CSCI2950-u Fall 09 - Brown
Please post your reviews here.
This review is short, no need for evaluation, reproducibility, etc.
Mainly summary, questions, criticism.

Kevin Tierney

unread,
Nov 23, 2009, 9:05:59 PM11/23/09
to brown-cs...@googlegroups.com
Barroso and Hölzle argue that servers do not currently have suitable
energy efficiency for the loads they are given. They observe that
servers are most energy efficient near peak load, yet in terms of CPU
utilization they tend to operate somewhere in the middle of their
range. Given the large number of machines in data centers operating at
these medium loads, there is great potential for saving energy.

How was this data gathered? The authors are a bit sloppy here: they
show a histogram of average CPU utilization over a six-month period
(Figure 1) without giving any further details. How often was
utilization sampled? This could greatly affect the measurements: full
load for one minute followed by idleness for the next averages out to
50% load, but in terms of power efficiency it is quite different from
a steady 50% load. I do not doubt that the data show a need for more
efficient servers, but it would be nice to have a clearer view of it.
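
To make this concrete, here is a quick Python sketch. The power model is
entirely made up (a non-linear curve with idle power at half of peak,
loosely in the spirit of the paper's figures), but it shows how two
traces with the same average utilization can end up with different
energy efficiency depending on how the load is distributed:

# Illustrative only: an assumed server power model where power does not
# scale linearly with CPU utilization. Peak power is normalized to 1.0.
IDLE_FRACTION = 0.5     # servers often draw roughly half of peak power at idle
POWER_EXPONENT = 1.5    # hypothetical curvature, not from the paper

def power(u):
    """Assumed power draw at utilization u in [0, 1]."""
    return IDLE_FRACTION + (1.0 - IDLE_FRACTION) * (u ** POWER_EXPONENT)

def efficiency(trace):
    """Work done per unit of energy for a trace sampled at equal intervals."""
    work = sum(trace)
    energy = sum(power(u) for u in trace)
    return work / energy

steady = [0.5, 0.5]     # constant 50% load
bursty = [1.0, 0.0]     # full load one interval, idle the next; also 50% on average

print(round(efficiency(steady), 3))   # ~0.739 with these made-up numbers
print(round(efficiency(bursty), 3))   # ~0.667

With a strictly linear power model the two traces would come out
identical, so whether the distinction matters depends on the real power
curve -- which is exactly why the sampling interval behind Figure 1
deserves more detail.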

Marcelo Martins

unread,
Nov 24, 2009, 12:44:20 AM11/24/09
to brown-cs...@googlegroups.com
Paper Title "The Case for Energy-Proportional Computing"

Author(s) Luiz Andre Barroso and Urs Holzle

Date December 2007

Novel Idea

Computers that leverage energy proportionality would enable large energy
savings. To promote this, memory and disk technologies must be improved
to offer the same features as energy-proportional CPUs: a wide dynamic
power range and active low-power modes.

Main Result(s)

Apart from a simulation showing the promise of an energy-proportional
server compared to today's standards, no solid results are presented.

Impact

Provisioning energy proportionality in server hardware requires changes
to today's manufacturing processes for electronic components. Such
changes would bring innovation to the IT industry. More importantly,
energy-proportional hardware would reduce energy's role as a dominant
factor in the total cost of ownership. In addition, it would soften the
strict requirements on huge data centers, such as proximity to a power
plant, special cooling infrastructure, etc.

Evidence

Energy-efficient CPUs have shown that it is possible to reduce peak
power draw and to provide efficient energy-saving modes with low
wake-up penalties.

Questions

What are the major technical challenges that prevent memory and disk
subsystems from becoming as efficient as the CPU?

Criticism

1.) The authors base their proposal on the wishful thinking that, in the
future, the memory and disk components of servers will be capable of
performing as energy-efficiently as CPUs do today. However, no evidence
or references to the architectural changes that would be required are
presented in support of such a transition.

Andrew Ferguson

unread,
Nov 23, 2009, 7:00:27 PM11/23/09
to brown-cs...@googlegroups.com
Paper Title
"The Case for Energy-Proportional Computing"

Authors
Luiz André Barroso and Urs Hölzle

Date
December 2007, IEEE Computer magazine

Novel Idea
The authors call upon hardware manufacturers to develop hardware which
is more energy efficient under the usage profile of servers. Servers
do not fit the usage profile of laptops, and thus have not benefitted
as much from energy-efficient designs. The authors suggest that
manufacturers develop active low-power modes and give hardware a wider
dynamic power range.

Impact
Hopefully, due to the size and prestige of Google, this essay will
have a large impact on hardware vendors.

Evidence
The authors present an appropriate amount of evidence. They show that
server CPUs are generally in the 10-50% utilization range, which has
poor energy efficiency. Figure 3 shows that the share of total energy
cost in a server "charged" to the CPU has gone down over the last few
years. The enormous cooling & electric bills for internet-scale server
farms are well known.

Prior Work
Prior work on energy efficient computing has mostly been constrained
to the arenas of mobile and embedded devices. The current generations
of Intel laptop CPUs with TurboBoost are a prime example.

Criticism
I have two criticisms: first, the authors did not consider
technologies outside the server domain, and second, some of the
authors' statements seemed too fanciful. Technologies such as flash
drives or phase-change memory could drastically change the energy
footprint of servers, but the authors remain focused on traditional
disk drives in their analysis. In Figure 4, the authors illustrate how
a theoretical, 90-percent-efficient machine would reduce power usage
to 50%; but it seems to me that the authors just adjusted the red
curve willy-nilly until the green curve was interesting -- not a great
argument.
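
For what it's worth, here is a back-of-the-envelope version of the
Figure 4 comparison in Python. It assumes simple straight-line power
curves and ballpark numbers (a conventional server idling at about 50
percent of peak power, the hypothetical energy-proportional one at
about 10 percent), averaged over the 10-50 percent utilization band; it
is not the authors' actual model:

# Rough sketch, not the paper's model: linear power curves with two different
# idle fractions, averaged over the 10-50% utilization band where the authors
# say servers spend most of their time.
def power(u, idle_frac):
    # normalized power at utilization u, assuming a straight line from idle to peak
    return idle_frac + (1.0 - idle_frac) * u

band = [u / 100.0 for u in range(10, 51)]                     # 10% .. 50% utilization
conventional = sum(power(u, 0.50) for u in band) / len(band)
proportional = sum(power(u, 0.10) for u in band) / len(band)

print("conventional: %.2f of peak power" % conventional)     # ~0.65
print("proportional: %.2f of peak power" % proportional)     # ~0.37
print("reduction:    %.0f%%" % (100 * (1 - proportional / conventional)))  # ~43%

Under these assumptions the savings land in the 40-45 percent range, so
the headline claim is at least plausible even if the exact curves look
hand-tuned.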

Ideas for further work
Measuring energy usage, improving energy efficiency, raising
awareness, the list goes on and on... One of the projects I am working
on relates to partitioning more-frequently-accessed and less-
frequently-accessed data, so that data which does not need to be
available immediately can be stored on lower-powered systems.
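
Here is a minimal sketch of that partitioning idea; the function, the
threshold, and the keys are all hypothetical, just to illustrate the
split:

from collections import Counter

# Hypothetical sketch: rank keys by access frequency and split them into a hot
# set (kept on always-on storage) and a cold set (candidates for low-power,
# higher-latency storage).
def partition_by_access(access_log, hot_fraction=0.2):
    """access_log is a sequence of accessed keys; returns (hot_keys, cold_keys)."""
    counts = Counter(access_log)
    ranked = [key for key, _ in counts.most_common()]
    cutoff = max(1, int(len(ranked) * hot_fraction))
    return set(ranked[:cutoff]), set(ranked[cutoff:])

hot, cold = partition_by_access(["a", "b", "a", "c", "a", "b", "d", "e"])
print(hot)    # most frequently accessed keys stay immediately available
print(cold)   # the rest could live on lower-powered systems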

James Tavares

unread,
Nov 23, 2009, 9:54:34 PM11/23/09
to brown-cs...@googlegroups.com
November 24, 2009

*EnergyProportional*

Paper Title: The Case for Energy-Proportional Computing

Author(s): Luiz André Barroso and Urs Hölzle

Date: IEEE Computer, 2007

The authors make an argument for the need for energy-proportional
computing in the data center. As their primary evidence the authors cite
statistics which show that servers seldom operate near their maximum or
minimum utilization levels, instead operating more typically between 10%
and 50% utilization (Figure 1). This is precisely the range in which
they are most inefficient! A naïve assessment would be that data centers
are underutilized and therefore some machines should be powered off to
raise the utilization levels of the remaining machines. However, the
authors argue that many applications make it difficult to simply "power
off" idle machines (e.g., if nodes contain data which must remain
accessible), and that sustaining underutilized servers is often
desirable for traffic-engineering purposes.

Citing significant advances made by CPU manufacturers in recent years in
giving their CPUs the ability to adjust power consumption under varying
loads, the authors argue that system designers should strive to make
similar inroads in DRAM, disk, and networking equipment as well. The
authors essentially argue that energy proportionality is a feature
critically needed to ensure stable and efficient data center expansion.

I'm not sure if I agree in entirety. For one, I find it a little bit
ironic that this paper originated from Google, whose previous works have
stressed the use of black-box commodity hardware while simultaneously
placing all critical functionality into their software instead.
Operating at or near saturation is probably a difficult regime to be in,
but could higher utilization levels be attained by using virtual
machines & crafty "global" scheduling algorithms? I suppose this
wouldn't address the issue of GFS nodes, but surely solutions exist
there as well (maybe network-attached disks with embedded processors
just fast enough to shuffle data, but too stupid for anything else).
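
As a thought experiment, the kind of crafty "global" scheduling I mean
could be as simple as a first-fit-decreasing consolidation pass.
Everything below is hypothetical Python and ignores data locality, which
is of course the hard part for GFS-style nodes:

# Hypothetical sketch: pack VM loads onto as few hosts as possible so the
# remaining hosts could be powered off or put into a deep low-power state.
def consolidate(vm_loads, host_capacity=0.8):
    """vm_loads are fractions of one host's capacity; returns one list of
    placed loads per host that stays powered on."""
    hosts = []
    for load in sorted(vm_loads, reverse=True):       # first-fit decreasing
        for host in hosts:
            if sum(host) + load <= host_capacity:
                host.append(load)
                break
        else:
            hosts.append([load])                      # no host fits; bring up a new one
    return hosts

placement = consolidate([0.3, 0.2, 0.25, 0.1, 0.4, 0.15])
print(len(placement), "hosts needed instead of 6")    # 2 with these numbers
print(placement)

The point is just that packing work onto fewer hosts trades the awkward
mid-utilization regime for a mix of busy machines and machines that
could genuinely be powered down.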

Dongbo Wang

unread,
Nov 23, 2009, 11:17:39 PM11/23/09
to brown-cs...@googlegroups.com
Main Summary: The paper summarizes the energy-efficiency problem for computer clusters and data centers. First, it compares servers with ordinary mobile devices and shows that servers have a particular characteristic: they are never fully idle the way an end-user mobile device usually is. This makes it hard to directly apply the existing energy-saving methods that are highly effective in mobile devices. The paper then introduces the term energy efficiency, defined as utilization divided by the power consumed. Currently the difference between peak power and the lowest power draw is only about 50 percent of peak, so energy efficiency is low when server utilization is low. Lastly, the paper describes the energy-proportional features of modern CPUs: a wide dynamic power range and active low-power modes.

Question & Criticism: none





Steve Gomez

unread,
Nov 23, 2009, 8:41:43 PM11/23/09
to CSCI2950-u Fall 09 - Brown
Title: "The Case for Energy-Proportional Computing"
Authors: Luiz Andre Barroso and Urs Holzle
Date: December 2007, in IEEE Computer magazine

This article looks at energy consumption in servers, surveying past
progress and drawing on efficiency boosts in mobile devices to inform
the argument for 'energy-proportional' behavior in server operating
modes.

The authors motivate their call for more energy efficiency by pointing
out the environmental and economic challenge of running inefficient
hardware. They point out that in servers "the lowest energy-efficiency
region corresponds to their most common operating mode," meaning that
these machines spend the majority of their time in their least
efficient power state.

One overlooked point that the authors bring up is how recent
improvements in CPU energy efficiency have not been mirrored in other
types of hardware, which have narrower dynamic power ranges or fewer
active low-power modes. An example is disks that have to spin up when
transitioning from inactivity to more active modes.

The authors ask whether it is possible for applications developers and
system architects to use these resources more effectively in low-
activity modes, as a way of eliminating a costly transition between
modes (that only eats power without doing real work). I took two
important questions away from this:
1) Can some work be done in the 'off-time' for systems well
provisioned but simply waiting for work?
2) Can low-activity modes (if they can't be more utilized) use less
power and be more efficient stand-bys?

A slight criticism I have with the emphasis on proportional efficiency
is that it may be a distraction from making generally energy-efficient
systems (even if the most common mode is the least efficient). The
authors don't address this perspective. It makes a lot of sense that
the most common operating mode could be the most energy consuming:
systems are usually designed to perform specific tasks (not to idle),
and these modes are probably doing more (e.g., trying to maximize
performance), or must be provisioned to do more if needed, than the
other modes.

Still, the article makes a valid point -- especially for mostly idle
machines (like infrequently visited web servers) that burn through a
lot of energy just waiting to handle traffic. This will be an
escalating problem as data centers expand, so it will be interesting
to see how engineers handle these issues.

qiao xie

unread,
Nov 23, 2009, 11:31:47 PM11/23/09
to brown-cs...@googlegroups.com

This article talks about the need for energy-proportional computers. Energy bills have become a major cost for IT companies. The flaw of current servers is that their most common operating mode has the worst energy efficiency, and this mismatch results in a large amount of wasted energy. The solution lies in redesigning system components so that energy consumption is proportional to the workload; the goal is gradual growth in energy consumption as the workload increases.

Most power-saving techniques from mobile devices are not suitable for servers, because mobile devices spend much of their time idle while servers have to be up most of the time. Studies from biology, such as human-body energy consumption, may help cultivate new ideas for designing computers. Design ideas from the CPU are also worth imitating in other system components.

In conclusion, servers spend most of their time at utilizations of 10 to 50 percent, where they have poor energy efficiency. Therefore the advent of energy-proportional computers would greatly cut electricity bills.




小柯

unread,
Nov 23, 2009, 11:42:05 PM11/23/09
to brown-cs...@googlegroups.com
Paper Title:    The Case for Energy-Proportional Computing
Authors:        Luiz Andre Barroso
                    Urs Holzle

Date:           2007

Novel Idea:
    The paper introduces many energy-saving strategies applied to today's servers, laptops, mobile and embedded systems. It points out that the future focus for reducing energy consumption in distributed systems should be disk and memory rather than the CPU, and it shows some observations and a comparison between an energy-proportional design and a normal one.

Main Result:
    Some observations, discussion, and an introduction to basic energy issues and solutions. The authors suggest that defining many energy-usage modes would be one approach to achieving an energy-proportional design.

Impact:
    Many researchers now focus on more energy-efficient designs, not only for mobile systems but also for servers.

Evidence:
    The authors start the discussion by comparing mobile systems to servers, showing their different energy-consumption patterns. They then state that the CPU is no longer the primary cause of energy consumption and explain which technologies are applied to improve CPU energy usage. Finally, the authors discuss disk and memory energy issues.

Question:

Criticism:



Spiros E.

unread,
Nov 23, 2009, 6:58:09 PM11/23/09
to CSCI2950-u Fall 09 - Brown
The article discusses the current state of energy-efficient server
hardware, how energy consumption in servers relates to the mobile
market, and the unique challenges servers present to energy
efficiency.

Whereas most mobile devices experience peaks of high usage followed by
long stretches of idle time, servers tend to operate at somewhere
between 10 and 50 percent of utilization. Servers cannot take
advantage of the techniques used to reduce power consumption in mobile
devices, such as low-performance or sleep modes that consume close to
no power, for two reasons. First, long stretches of idle time are
uncommon in servers, and second, the overhead of switching between
these efficiency modes usually outweighs the benefits of switching.

The paper points out that the only server hardware components whose
power consumption has been reduced in recent years are the processor
and the power supply. The rest of the server has remained for the most
part untouched.

It seems as though the network interface of a server in a MapReduce
cluster is only utilized at the beginning of the job and during the
shuffle. Could we take advantage of this somehow to reduce power
consumption in MapReduce clusters?

Dan Rosenberg

unread,
Nov 23, 2009, 11:08:09 PM11/23/09
to brown-cs...@googlegroups.com
Paper Title
The Case for Energy-Proportional Computing

Authors
Luiz Andre Barroso and Urs Holzle

Date
2007

Novel Idea
The energy-efficiency model currently employed meshes poorly with actual
performance demands of servers.

Main Result
The article notes that CPUs have incorporated many desirable energy saving
properties, and this efficiency should be sought in disk drives and RAM.

Impact
Perhaps the article will increase awareness and encourage research in
energy-efficient server technology.

Evidence
N/A

Prior Work
N/A

Reproducibility
N/A

Criticism
This article presents the problem clearly but does little in the way of
proposing solutions, other than "we need to be better at this".

Questions/Ideas for Further Work
This entire paper serves as motivation for future work in this field.

Juexin Wang

unread,
Nov 24, 2009, 12:36:39 AM11/24/09
to brown-cs...@googlegroups.com
Energy costs have become a big burden for IT companies. This article talks about the state of current computer components: they lack energy efficiency, which results in a large amount of wasted energy.

The paper proposes a solution: redesign system components so that energy consumption is proportional to the workload, achieving gradual growth in energy consumption as the workload increases. The authors point out that we should especially redesign the components used in servers, since the techniques used for mobile devices are not suitable: mobile devices spend much of their time idle, while servers have to be up most of the time. Some related ideas, such as CPU power management, can be referenced when designing other system components.

We can conclude that servers spend most of their time at utilizations of 10 to 50 percent and have low energy efficiency. But I don't think the different usage patterns of servers and mobile devices call for entirely different ways of reducing power consumption, because what we really care about should be the efficiency/power ratio.





Xiyang Liu

unread,
Nov 24, 2009, 10:40:07 AM11/24/09
to CSCI2950-u Fall 09 - Brown
Paper Title
The Case for Energy-Proportional Computing

Author(s)
Luiz André Barroso and Urs Hölzle

Date
December 2007

Main Idea
The article shows that current servers achieve only about half of their
peak energy efficiency in the 20 to 30 percent utilization range, which
is their most common operating region. The authors therefore propose an
energy-proportional design for computing systems to improve energy-usage
efficiency and reduce overall energy consumption. Breaking down server
power consumption into sub-components and exploring hardware energy
proportionality can help achieve this on real machines. Energy-saving
schemes are a useful software technique for reducing energy consumption
even when the mode-transition penalty is high.

Question
A distributed-system scheduler designed for energy proportionality
conflicts with the current goal of load balancing. How can the two
designs be combined so as to improve both the system's robustness and
its energy efficiency?

Criticism
System-level energy-saving techniques are not explored, but they might
contribute more to current data center architectures.

