On 11/23/2021 9:50 PM, Bart wrote:
> On 23/11/2021 19:59, Manfred wrote:
>> On 11/23/2021 5:51 PM, David Brown wrote:
>>> On 23/11/2021 16:50, Kenny McCormack wrote:
>>>> In article <snivri$15p1$1...@gioia.aioe.org>, Manfred
>>>> <non...@add.invalid> wrote:
>>>>> On 11/22/2021 11:19 PM, Lynn McGuire wrote:
>>>>>> Our results show interesting findings, such as, slower/faster
>>>>>> languages consuming less/more energy, and how memory usage influences
>>>>>> energy consumption
>>>>>
>>>>> I find the correlation between slower/faster languages and less/more
>>>>> energy quite confusing. In fact I believe it is the opposite.
>>>>
>>>> Yes. I think this was misstated/sloppily-written in the original text.
>>>>
>>>> It depends, of course, on what exactly you mean by a "slower" language.
>>>> It is true that if you run the CPU at a slower speed (and that would
>>>> make
>>>> for a slower processing model), then you will use less energy.
>>>>
>>>
>>> It is /not/ true that running the CPU at a slower speed uses less energy
>>> - at least, it is often not true. It is complicated.
>>>
>>> There are many aspects that affect how much energy is taken for a given
>>> calculation.
>>>
>>> Regarding programming languages, it is fairly obvious that a compiled
>>> language that takes fewer instructions to do a task, using optimised
>>> assembly, is going to use less energy than a language that is less well
>>> optimised and needs more instructions, or that does some kind of
>>> interpretation. Thus C (and other optimised compiled languages like
>>> C++, Rust or Ada) are going to come out on top.
>>>
>>> It is less obvious how the details matter. Optimisation flags have an
>>> effect, as do instruction choices, since functional blocks (such as SIMD
>>> units or floating-point units) may be dynamically enabled. For some
>>> target processors, unrolling a loop to avoid branches will reduce energy
>>> consumption - on others, rolled loops to avoid cache misses will be
>>> better. Some compilers targeting embedded systems (where power usage is
>>> often more important) have "optimise for power" as a third option to the
>>> traditional "optimise for speed" and "optimise for size".
>>>
>>> The power consumption for a processor is the sum of the static power and
>>> the dynamic power. Dynamic power is proportional to the frequency and
>>> the square of the voltage. And energy usage is power times time.
>>>
>>> A processor that is designed to run at high frequency is likely to have
>>> high-leakage transistors, and therefore high static power - when the
>>> circuits are enabled. But the faster you get the work done, the higher
>>> a proportion of the time you can have in low-power modes with minimal
>>> static power. On the other hand, higher frequencies may need higher
>>> voltages.
>>>
>>> As a rule of thumb, it is better to run your cpu at its highest
>>> frequency - or at the highest it can do without raising the voltage -
>>> and get the calculation done fast. Then you can spend more time in
>>> low-power sleep modes. However, entering and exiting sleep modes takes
>>> time and energy, so you don't want to do it too often - hence the
>>> "big.LITTLE" processor combinations, where you have a slower core that
>>> can be switched on and off more efficiently.
>>>
>>
>> I am not sure about that "rule" - in general, modern integrated
>> electronics waste most energy during state transitions, which is
>> directly coupled to clock frequency, but I admit there may be more
>> to it than I know.
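A back-of-envelope sketch of the two strategies may help. All numbers below (capacitance, voltages, leakage and sleep power) are made-up assumptions for illustration, not measurements; the model is just E = P * t with P = P_static + C*f*V^2, as described above:

```python
# "Race to idle" vs "slow and steady" for a fixed amount of work.
# Illustrative, assumed parameters - not real silicon data.

WORK_CYCLES = 1e9   # cycles needed for the task
WINDOW = 2.0        # seconds available before the next task arrives
C = 1e-9            # effective switched capacitance (farads), assumed
P_STATIC = 0.5      # static (leakage) power while active, watts, assumed
P_SLEEP = 0.01      # power in a low-power sleep state, watts, assumed

def task_energy(freq_hz, volts):
    """Energy (joules) to do WORK_CYCLES at the given frequency and
    voltage, then sleep for the remainder of WINDOW."""
    t_active = WORK_CYCLES / freq_hz
    p_active = P_STATIC + C * freq_hz * volts**2
    t_sleep = WINDOW - t_active
    return p_active * t_active + P_SLEEP * t_sleep

# Fast core: 1 GHz at 1.2 V, finishes in 1 s, sleeps for the other 1 s.
fast = task_energy(1e9, 1.2)
# Slow core: 0.5 GHz, but the lower frequency permits a lower voltage.
slow = task_energy(0.5e9, 0.9)

print(f"race-to-idle: {fast:.3f} J, slow-and-steady: {slow:.3f} J")
```

With these particular numbers the slower, lower-voltage run comes out slightly ahead; nudge the leakage or sleep power up and race-to-idle wins instead. Which is exactly the point: the outcome depends on how voltage scales with frequency and on how much leakage you pay while active.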
>>
>> One consideration about the balance between clock speed and efficiency
>> is that it is entirely relative to how old the technology is. I am
>> pretty confident that an old Pentium is much less efficient than a
>> modern i7 (to consider popular PC stuff only), even if the latter
>> runs faster than the former.
>> The reason is not just the weight that environmental footprint carries
>> in today's public opinion; the fact is that heat dissipation is a
>> major bottleneck in IC technology, so in order for a modern processor
>> to perform to today's standards and not melt after a few minutes of
>> operation, it /has/ to be built with efficient technology.
>
>> But then comes "modern" programming, with all its fuzz of managed
>> code, JIT gibber, thousands of DLL dependencies, etc., not to forget
>> "modern" OS's that eat Tflops just to perform even the simplest of
>> tasks, and there goes all of that efficiency.
>
> This is why talk of the greenest language is nonsense.
Not really, this study does show a significant difference between
language choices for the same task.
>
> How about writing efficient applications?
Yes, and that includes choosing the right language.
> And efficient OSes (there
> might be 1000 processes on my PC right now).
That's a different matter from applications. It still affects overall
energy consumption, of course. The choice of language still matters
for OS processes, but general OS design has a major impact as well.
My criticism is aimed at components that are shipped with many OS's,
yet are not technically part of the core OS infrastructure. Think e.g.
of the Windows Update infrastructure - it's probably one of the most
bloated features I've ever seen.
>
> Access any webpage, and the fast download speeds mask how much data is
> being transmitted, which all still requires code to process it. Hardware
> can barely keep up.
>
> Or, how about efficient compilers? David Brown always contemptuously
> dismisses mine even though they are fast, yet they show that the basic
> job of source->binary translation can be done 1-2 orders of magnitude
> faster than the heavy-duty compilers he favours.
As others have pointed out, this does not matter, since compilers are
not part of a typical runtime environment.
>
> (For a couple of years one of my compilers was run as interpreted,
> dynamic bytecode. It was still double the speed of gcc!)