
"C Is The Greenest Programming Language" by: Chris Lott


Lynn McGuire

Nov 22, 2021, 5:19:45 PM
"C Is The Greenest Programming Language" by: Chris Lott
https://hackaday.com/2021/11/18/c-is-the-greenest-programming-language/

"Have you ever wondered if there is a correlation between a computer’s
energy consumption and the choice of programming languages? Well, a
group Portuguese university researchers did and set out to quantify it.
Their 2017 research paper entitled Energy Efficiency across Programming
Languages / How Do Energy, Time, and Memory Relate? may have escaped
your attention, as it did ours."
https://greenlab.di.uminho.pt/wp-content/uploads/2017/10/sleFinal.pdf

"Abstract: This paper presents a study of the runtime, memory usage and
energy consumption of twenty-seven well-known software languages. We
monitor the performance of such languages using ten different
programming problems, expressed in each of the languages. Our results
show interesting findings, such as slower/faster languages consuming
less/more energy, and how memory usage influences energy consumption.
We show how to use our results to provide software engineers support to
decide which language to use when energy efficiency is a concern."

Lynn

Chris M. Thomasson

Nov 22, 2021, 5:59:16 PM
Interesting. However, I can write something in C that uses all CPUs at
100%. So, green, okay... but what it's actually doing is another matter, so to speak.

Richard Damon

Nov 22, 2021, 6:26:22 PM
Well, since one of the goals of the C language was to enable programmers
to write fast code, it can make sense for C to be 'Green', at least by
some measures.

Fast code will be more energy efficient as processors basically use
power based on the number of instructions executed (and memory accessed).

What also needs to be considered is the time/energy needed to initially
WRITE the code and get it working, and then factor in how many times the
program will be used compared to the effort to write it.
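The amortization Richard describes can be put as a one-line break-even model. A minimal sketch (all names and numbers here are hypothetical, purely to illustrate the trade-off):

```cpp
#include <cassert>

// Lifetime energy = one-off development energy + per-run energy times
// the number of runs. A "cheap to write, expensive to run" program
// loses to a carefully optimised one once runs are numerous enough.
double lifetime_energy(double dev_kwh, double per_run_kwh, long runs) {
    return dev_kwh + per_run_kwh * runs;
}
```

With made-up figures, a quickly written script can still win overall at a thousand runs, while the optimised program dominates at a million.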

Juha Nieminen

Nov 23, 2021, 2:10:24 AM
In comp.lang.c++ Chris M. Thomasson <chris.m.t...@gmail.com> wrote:
> Interesting. However, I can write something in C that uses all CPU's
> 100%.

I don't think that's the point.

Juha Nieminen

Nov 23, 2021, 2:21:47 AM
In comp.lang.c++ Richard Damon <Ric...@damon-family.org> wrote:
> Well, since one of the goals of the C language was to enable programmers
> to write fast code, it can make sense for C to be 'Green', at least by
> some measures.
>
> Fast code will be more energy efficient as processors basically use
> power based on the number of instructions executed (and memory accessed).

I think that for as long as programmable computers have existed, there has
been a direct correlation between the "levelness" ("higher-level" vs.
"low-level") of a language and its disregard for the resources it
uses. As computers have become faster and faster, and the amount
of resources (primarily RAM) has increased, this indifference towards
resource consumption in higher-level languages has only increased.

The farther away the design of the language has been from the details of
the underlying hardware, the more the question of "how efficient is
the language, and how much RAM does it consume?" has been (implicitly
or explicitly) answered with, essentially, "it doesn't matter" and
"who cares?"

Take pretty much any scripting language, or any other interpreted
language (like the original BASIC and most of its subsequent variants).
Pretty much nobody cared how fast they are, or how much RAM they
consume. When someone is, let's say, implementing something in PHP
they seldom stop to think how much memory it consumes or how fast it is.
When someone is implementing something in shell script, one most
definitely does not think about speed or memory consumption.

Most object-oriented languages don't really care about memory
consumption in particular. Garbage-collected OO languages especially
don't give a flying F about memory consumption. So what if an object
needs something like 16 or 32 bytes of bookkeeping data? Who cares? That's
nothing! Trivial and inconsequential! Modern computers have gigabytes
of RAM! Why should you care if an object takes 32 bytes in addition
to whatever is inside it? What a silly notion!

Ironically, the programming world has in the last couple of decades awakened
to the harsh realization that this complete disregard of memory usage
actually causes inefficiencies in modern CPU architectures (because
cache misses are very expensive).

Oh well. As long as it works, who cares? Buy a faster computer.

Otto J. Makela

Nov 23, 2021, 7:18:14 AM
Lynn McGuire <lynnmc...@gmail.com> wrote:

> "C Is The Greenest Programming Language" by: Chris Lott
> https://hackaday.com/2021/11/18/c-is-the-greenest-programming-language/

Perhaps one contributing factor is that not so much development is any
longer done using C or C++, and the programs that are run (and still
being developed) were originally created in the era when CPU power was
much lower than these days, so code optimization was more important.
--
/* * * Otto J. Makela <o...@iki.fi> * * * * * * * * * */
/* Phone: +358 40 765 5772, ICBM: N 60 10' E 24 55' */
/* Mail: Mechelininkatu 26 B 27, FI-00100 Helsinki */
/* * * Computers Rule 01001111 01001011 * * * * * * */

Bonita Montero

Nov 23, 2021, 8:39:25 AM
Developing in C is an order of magnitude more effort than in C++, and
with the right programming style you get the same code speed as in C.
Sometimes you're even faster in C++, and in C you'd shoot yourself in
the head before implementing the complexity you can handle in C++ in
minutes.

Manfred

Nov 23, 2021, 10:03:33 AM
On 11/22/2021 11:19 PM, Lynn McGuire wrote:
> Our results show interesting findings, such as slower/faster
> languages consuming less/more energy, and how memory usage influences
> energy consumption

I find the stated correlation of slower/faster languages with,
respectively, less/more energy quite confusing. In fact I believe it is
the opposite.

An interpreted language (à la Java :O) may require orders of magnitude
more CPU instructions than C to perform the same task. This makes the
former slower and more energy-consuming than the latter.

Slower or faster /hardware/ is a totally different thing, of course.

Kenny McCormack

Nov 23, 2021, 10:50:49 AM
In article <snivri$15p1$1...@gioia.aioe.org>, Manfred <non...@add.invalid> wrote:
>On 11/22/2021 11:19 PM, Lynn McGuire wrote:
>> Our results show interesting findings, such as slower/faster
>> languages consuming less/more energy, and how memory usage influences
>> energy consumption
>
>I find the stated correlation of slower/faster languages with,
>respectively, less/more energy quite confusing. In fact I believe it is
>the opposite.

Yes. I think this was misstated/sloppily-written in the original text.

It depends, of course, on what exactly you mean by a "slower" language.
It is true that if you run the CPU at a slower speed (and that would make
for a slower processing model), then you will use less energy.

--
https://en.wikipedia.org/wiki/Mansplaining

It describes comp.lang.c to a T!

Bart

Nov 23, 2021, 11:14:51 AM
On 23/11/2021 15:03, Manfred wrote:
> On 11/22/2021 11:19 PM, Lynn McGuire wrote:
>> Our results show interesting findings, such as slower/faster
>> languages consuming less/more energy, and how memory usage influences
>> energy consumption
>
> I find the stated correlation of slower/faster languages with,
> respectively, less/more energy quite confusing. In fact I believe it is
> the opposite.
>
> An interpreted language (á la Java :O) may require orders of magnitude
> more CPU instructions than C to perform the same task.

Java is probably not a good example.

Properly interpreted code may need 1-2 orders of magnitude more
instructions, as it has to perform the task indirectly (this is with
dynamic typing).

But this is only relevant if the processor is executing 100% indirect
code versus 100% direct code.

In practice it will be a mix, which, if done properly, means the
overheads of interpretation are not significant.

More significant is overall design: you can write slow, bloated,
inefficient programs in C too!

Also many interpreted languages are now JIT-accelerated, to close the gap.
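The indirection overhead being discussed can be seen in a toy dispatch loop. This is a minimal sketch, not any real VM: every "useful" operation also pays for an opcode fetch and a switch dispatch.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// A toy stack machine: the extra fetch/decode work per operation is
// where interpretation overhead lives compared to direct machine code.
enum Op { PUSH, ADD, MUL, HALT };

int interpret(const std::vector<int>& code) {
    std::vector<int> stack;
    std::size_t pc = 0;
    for (;;) {
        switch (code[pc]) {
        case PUSH: stack.push_back(code[pc + 1]); pc += 2; break;
        case ADD: { int b = stack.back(); stack.pop_back();
                    stack.back() += b; ++pc; break; }
        case MUL: { int b = stack.back(); stack.pop_back();
                    stack.back() *= b; ++pc; break; }
        case HALT: return stack.back();
        }
    }
}
```

Evaluating (2 + 3) * 4 costs five dispatches here; a compiler would emit roughly three arithmetic instructions, which is the gap a JIT closes.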

David Brown

Nov 23, 2021, 11:51:26 AM
On 23/11/2021 16:50, Kenny McCormack wrote:
> In article <snivri$15p1$1...@gioia.aioe.org>, Manfred <non...@add.invalid> wrote:
>> On 11/22/2021 11:19 PM, Lynn McGuire wrote:
>>> Our results show interesting findings, such as slower/faster
>>> languages consuming less/more energy, and how memory usage influences
>>> energy consumption
>>
>> I find the stated correlation of slower/faster languages with,
>> respectively, less/more energy quite confusing. In fact I believe it is
>> the opposite.
>
> Yes. I think this was misstated/sloppily-written in the original text.
>
> It depends, of course, on what exactly you mean by a "slower" language.
> It is true that if you run the CPU at a slower speed (and that would make
> for a slower processing model), then you will use less energy.
>

It is /not/ true that running the CPU at a slower speed uses less energy
- at least, it is often not true. It is complicated.

There are many aspects that affect how much energy is taken for a given
calculation.

Regarding programming languages, it is fairly obvious that a compiled
language that takes fewer instructions to do a task using optimised
assembly is going to use less energy than a language that has less
optimisation and more assembly instructions, or does some kind of
interpretation. Thus C (and other optimised compiled languages like
C++, Rust or Ada) are going to come out top.

It is less obvious how the details matter. Optimisation flags have an
effect, as do choices of instruction, since functional blocks (such as
SIMD units or floating-point units) may be dynamically enabled. For some
target processors, unrolling a loop to avoid branches will reduce energy
consumption - on others, rolled loops to avoid cache misses will be
better. Some compilers targeting embedded systems (where power usage is
often more important) have "optimise for power" as a third option to the
traditional "optimise for speed" and "optimise for size".

The power consumption for a processor is the sum of the static power and
the dynamic power. Dynamic power is proportional to the frequency and
the square of the voltage. And energy usage is power times time.

A processor that is designed to run at high frequency is likely to have
high-leakage transistors, and therefore high static power - when the
circuits are enabled. But the faster you get the work done, the higher
a proportion of the time you can have in low-power modes with minimal
static power. On the other hand, higher frequencies may need higher
voltages.

As a rule of thumb, it is better to run your cpu at its highest
frequency - or at the highest it can do without raising the voltage -
and get the calculation done fast. Then you can spend more time in
low-power sleep modes. However, entering and exiting sleep modes takes
time and energy, so you don't want to do it too often - hence the
"big-little" processor combinations where you have a slower core that
can be switched on and off more efficiently.
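The point that frequency alone doesn't change the dynamic energy of a fixed workload (only voltage does) follows directly from the formulas above: dynamic power is proportional to frequency times voltage squared, and energy is power times time. A sketch with made-up component values:

```cpp
#include <cassert>
#include <cmath>

// Dynamic energy for a fixed workload of N switching cycles:
//   P = C * V^2 * f  (watts),  t = N / f  (seconds),  E = P * t.
// The frequency cancels: E = C * V^2 * N, so at a fixed voltage,
// racing through the work costs no extra *dynamic* energy.
double dynamic_energy(double cap_farads, double volts, double freq_hz,
                      double cycles) {
    double power = cap_farads * volts * volts * freq_hz;
    double time  = cycles / freq_hz;
    return power * time;
}
```

This covers only dynamic power; static leakage accrues with wall-clock time, which is what makes "race to idle" a win in practice, and lowering the voltage (possible at lower frequencies) is what actually shrinks the dynamic term.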

Dozi...@thekennel.co

Nov 23, 2021, 11:54:02 AM
On Tue, 23 Nov 2021 14:17:50 +0200
o...@iki.fi (Otto J. Makela) wrote:
>Lynn McGuire <lynnmc...@gmail.com> wrote:
>
>> "C Is The Greenest Programming Language" by: Chris Lott
>> https://hackaday.com/2021/11/18/c-is-the-greenest-programming-language/
>
>Perhaps one contributing factor is that not so much development is any
>longer done using C or C++, and the programs that are run (and still
>being developed) were originally created in the era when CPU power was
>much lower than these days, so code optimization was more important.

It's good to see the attitude of just throwing more CPU at something instead of
optimising it is still around. These days, when server farms are taking up
a significant percentage of the planet's electrical output, it's incumbent on
programmers to make their code as efficient as is reasonable.

Dozi...@thekennel.co

Nov 23, 2021, 11:55:45 AM
On Tue, 23 Nov 2021 14:39:08 +0100
Bonita Montero <Bonita....@gmail.com> wrote:
>Developing in C is a magnitude more effort than in C++, and if you've

That depends on the problem. If you're writing code that needs to store a
lot of structured data then C wouldn't be your first choice of language. But
if you're writing something that simply interfaces with system calls then
there's probably not much if any extra effort in using C over C++.

Guillaume

unread,
Nov 23, 2021, 11:57:24 AM11/23/21
to
Le 23/11/2021 à 08:21, Juha Nieminen a écrit :
> Oh well. As long as it works, who cares? Buy a faster computer.

This sentence shows that you quite obviously got what "green" means.

Dozi...@thekennel.co

Nov 23, 2021, 12:00:25 PM
On Tue, 23 Nov 2021 17:51:09 +0100
David Brown <david...@hesbynett.no> wrote:
>The power consumption for a processor is the sum of the static power and
>the dynamic power. Dynamic power is proportional to the frequency and
>the square of the voltage. And energy usage is power times time.
>
>A processor that is designed to run at high frequency is likely to have
>high-leakage transistors, and therefore high static power - when the
>circuits are enabled. But the faster you get the work done, the higher
>a proportion of the time you can have in low-power modes with minimal
>static power. On the other hand, higher frequencies may need higher
>voltages.
>
>As a rule of thumb, it is better to run your cpu at its highest
>frequency - or at the highest it can do without raising the voltage -
>and get the calculation done fast. Then you can spend more time in
>low-power sleep modes. However, entering and exiting sleep modes takes
>time and energy, so you don't want to do it too often - hence the
>"big-little" processor combinations where you have a slower core that
>can be switched on and off more efficiently.

Why do many battery-powered systems throttle the CPU to save the battery when
it's getting low, then, and why does undertaking CPU-intensive tasks deplete the
battery faster? You seem to be claiming that you can get something for nothing.

Richard Damon

Nov 23, 2021, 12:28:40 PM
I would disagree. With a decent compiler, C code can generate close to
assembly-level optimizations for most problems. (Maybe it doesn't have
good support for defining single-instruction, multiple-datapath
sequences, but a GOOD compiler may be able to detect and generate these.)

ANYTHING in terms of data-structures that another language can generate,
you can generate in C.

The big disadvantage is that YOU as the programmer need to deal with a
lot of the issues rather than the compiler doing things for you, but that
is exactly why you can possibly do things more efficiently than the
compiler. You could have always generated the same algorithm that the
compiler did.

The one point where the compiler can do better is things like
instruction sequencing to optimize performance, but that is where
the C language gives the implementation the freedom to adjust things to
allow for that.

The C language, with the common extensions, gives you the power to do as
well as any other language.

Now, if you want to talk of the efficiency of WRITING the code, (as
opposed to executing it) all this power it gives you is a negative,
which seems to be what you are talking about.

If we want to talk 'Greenness', we need to define the development/usage
life cycle of the code.

Richard Damon

Nov 23, 2021, 12:37:16 PM
CPUs, at a given operating voltage, will consume an approximately fixed
amount of energy per instruction.

One effect of slowing down the processor is that if it was running at
25% utilization, 75% of the instructions executed did no 'useful' work,
so slowing down the processor to make instructions take longer means you
execute fewer of these wasteful cycles.

Some processors have the ability that when they get to those 'wasteful'
instructions they automatically 'stop' and drop their power consumption
until something happens that needs running again.

The other effect is that in many cases if you slow down the processor
speed, you can slightly drop the voltage you are running the processor
at, and the power consumed turns out to go largely as the square of the
voltage (as the dynamic power is consumed in charging and discharging
tiny capacitances through the system).


So, through various tricks in the system, you can sometimes save some
power when the processor is 'idle' or running 'slower'. Battery-powered
systems especially try to implement these sorts of capabilities.

Bonita Montero

Nov 23, 2021, 1:05:17 PM
> ANYTHING in terms of data-structures that another language can generate,
> you can generate in C.

With a lot of effort compared to C++.

> The big disadvantage is that YOU as the programmer need to deal with a
> lot of the issues rather than to compiler doing things for you, but that
> is exactly why you can do things possibly more efficiently then the
> compiler. You could have always generated the same algorithm that the
> compiler did.

Why should one use something different from std::vector<>, std::string,
std::unordered_map<> ...? There are no opportunities to make the
same functionality more efficient.

David Brown

Nov 23, 2021, 1:25:11 PM
If you have to do a certain calculation or set of calculations, it is
/usually/ more efficient to do them quickly and then let the processor
sleep deeper and longer.

Modern cpus generally do /not/ throttle the cpu speed to save battery
power. It only makes sense to slow down the processor if you have large
leakage currents that you can't turn off, in which case a slow clock can
mean lower energy overall. With modern devices, clock gating lets you
turn off all or parts of the core much more effectively.

Richard Damon

Nov 23, 2021, 1:42:21 PM
Except where there are.

For instance, I regularly use a variant of std::string whose char
const* constructor checks if the input is from 'read-only' memory, and
if so reuses that data instead of making a copy, at least until it
wants to change it. This saves me a LOT of memory in embedded systems
where most of my 'string' data is constant, but some specific cases need
to dynamically compute the string.

The C++ standard library is very good code for the general case. There
can be cases where specific application requirements make an alternative
better.

As I said, the fundamental issue is the trade-off of final execution
efficiency against efficiency in writing the code.

C++ keeps a lot of the efficiencies of C, and adds some significant
'power' to the expressiveness. But sometimes implementing the C++
features directly in C, while needing more coding by the programmer, can
make some things more efficient.

For example, rather than letting the C++ class system implicitly handle
the 'vtable' for a class, there are tricks you can do to make some
operations more efficient in C with an explicit vtable (at the expense
of adding all the explicit code). Things like changing the 'type' of a
structure to that of another compatible type with just a change of the
vtable pointer.
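The explicit-vtable trick can be sketched in plain C-style code (all type and function names here are hypothetical). Note how "retyping" an object is a single pointer assignment:

```cpp
#include <cassert>

// Hand-rolled vtable: one struct of function pointers shared by every
// instance of a "type"; dispatch is an explicit indirect call.
struct Shape;
struct ShapeVtbl { int (*area)(const Shape*); };
struct Shape { const ShapeVtbl* vtbl; int w, h; };

static int rect_area(const Shape* s) { return s->w * s->h; }
static int tri_area(const Shape* s)  { return s->w * s->h / 2; }

static const ShapeVtbl rect_vtbl = { rect_area };
static const ShapeVtbl tri_vtbl  = { tri_area };

// The "virtual call": follow the object's vtable pointer.
int shape_area(const Shape* s) { return s->vtbl->area(s); }
```

Swapping `s.vtbl` from `&rect_vtbl` to `&tri_vtbl` converts the object to a layout-compatible type in one store, which the implicit C++ vtable does not let you do directly.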

Richard Damon

Nov 23, 2021, 1:50:52 PM
Not sure if this holds for desktop/laptop-caliber processors, but in the
embedded world, many processors can have their core voltage dropped
when running at slower speeds, and since switching energy is
proportional to V^2, slowing the processor and waiting less CAN save power.

I believe this applies to processors with a 'Turbo' or 'Overclocked'
mode in the desktop/laptop space: to run their fastest, they need to
boost their power supplies at the cost of increased power consumption,
but faster speeds become available.

Then there is the fact that the 'simple' idle modes don't stop
everything, so you are still burning the higher-frequency power in
parts of the circuit even when idle, and switching to a deeper
power-saving mode can take a bit of time and actually cost power (as
you power down and then back up sections of the die). This means that
running slower, at higher utilization, can use less power.

The problem is that this also limits your PEAK operational speed, so you
need to balance those needs with your power consumption.

Bonita Montero

Nov 23, 2021, 1:53:20 PM
Am 23.11.2021 um 19:41 schrieb Richard Damon:

> For instance, I regularly use a variant of std::string whose char
> const* constructor checks if the input is from 'read-only' memory, and
> if so reuses that data instead of making a copy, at least until it
> wants to change it. ...

Then use string_view.
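For the read-only case, std::string_view (C++17) gives this behaviour out of the box: it is a non-owning (pointer, length) pair, so constructing one from a string literal copies nothing. A minimal sketch (the function name is made up):

```cpp
#include <cassert>
#include <string_view>

// A view over a string literal: the characters stay in read-only
// storage; no heap allocation and no copy is ever made.
constexpr std::string_view greeting() {
    return "hello, embedded world";
}
```

The difference from the copy-on-write variant described above is that a string_view can never modify the data; code that sometimes needs to mutate still has to copy into a std::string first.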

David Brown

Nov 23, 2021, 2:02:44 PM
That depends on the class of embedded system. If you are talking about
large embedded processors - running embedded Linux, for instance - then
that's true. But even there you usually aim for high speed and lots of
sleep if you can. However, as I mentioned, entering and exiting sleep
modes takes some time - if you are doing it too frequently, that
overhead becomes dominant and it is better to run the whole thing at a
low clock rate.

Smaller embedded systems rarely change the voltage to the core.

>
> I believe this applies to processors with a 'Turbo' or 'Overclocked'
> mode in the desktop/laptop space: to run their fastest, they need to
> boost their power supplies at the cost of increased power consumption,
> but faster speeds become available.
>
> Then there is the fact that the 'simple' idle modes don't stop
> everything, so you are still burning the higher-frequency power in
> parts of the circuit even when idle, and switching to a deeper
> power-saving mode can take a bit of time and actually cost power (as
> you power down and then back up sections of the die). This means that
> running slower, at higher utilization, can use less power.
>
> The problem is that this also limits your PEAK operational speed, so you
> need to balance those needs with your power consumption.

It is all a complicated balance, and a subject of continuous development
and improvement - no one choice fits everything. (And the ideal power
management decisions need to know what a process is going to do before
it does it.)

Manfred

Nov 23, 2021, 2:29:20 PM
On 11/23/2021 4:50 PM, Kenny McCormack wrote:
> In article <snivri$15p1$1...@gioia.aioe.org>, Manfred <non...@add.invalid> wrote:
>> On 11/22/2021 11:19 PM, Lynn McGuire wrote:
>>> Our results show interesting findings, such as slower/faster
>>> languages consuming less/more energy, and how memory usage influences
>>> energy consumption
>>
>> I find the stated correlation of slower/faster languages with,
>> respectively, less/more energy quite confusing. In fact I believe it is
>> the opposite.
>
> Yes. I think this was misstated/sloppily-written in the original text.
>
> It depends, of course, on what exactly you mean by a "slower" language.
> It is true that if you run the CPU at a slower speed (and that would make
> for a slower processing model), then you will use less energy.
>

Well, CPU speed is not a property of the language...

Manfred

Nov 23, 2021, 2:59:52 PM
I am not sure about that "rule" - in general, modern integrated
electronics waste most energy during state transitions, and that is
directly coupled with clock frequency, but I admit there may be more to
it than I know.

One consideration about the balance between clock speed and efficiency
is that it is totally relative to how old the technology is. I am pretty
confident that an old Pentium is much less efficient than a modern i7
(just to consider popular PC stuff only), even if the latter runs faster
than the former.
The reason is not just the added value of the environmental
footprint in today's public opinion; the fact is that heat dissipation
is a major bottleneck in IC technology, so in order for a modern
processor to perform according to today's standards and not melt after
a few minutes of operation, it /has/ to be built with efficient technology.
But then comes "modern" programming, with all its fuzz of managed code,
JIT gibber, thousands of DLL dependencies, etc., not to forget "modern"
OSes that eat Tflops just to perform even the simplest of tasks, and
there goes all of that efficiency.

Bart

Nov 23, 2021, 3:50:42 PM
This is why talk of the greenest language is nonsense.

How about writing efficient applications? And efficient OSes (there
might be 1000 processes on my PC right now).

Access any webpage, and the fast download speeds mask how much data is
being transmitted, which all still requires code to process it. Hardware
can barely keep up.

Or, how about efficient compilers? David Brown always contemptuously
dismisses mine even though they are fast, yet they show that the basic
job of source->binary translation can be done 1-2 orders of magnitude
faster than the heavy-duty compilers he favours.

(For a couple of years one of my compilers was run as interpreted,
dynamic bytecode. It was still double the speed of gcc!)

Chris M. Thomasson

Nov 23, 2021, 4:13:07 PM
Humm... I was just thinking along the lines of 'what does the C code
actually do?' If it ends up blasting the system, how green is it? Well,
yeah, that is a different point. You are right, Juha.

Chris M. Thomasson

Nov 23, 2021, 4:24:10 PM
On 11/22/2021 11:10 PM, Juha Nieminen wrote:
Side note: Humm... Perhaps we can say that lock-free is "greener" than
using locks. Say a computation using a nice lock-free algo takes 5
minutes, vs. something using locks that takes 30 minutes to get the same
result. I remember back in the day when I had to terminate the
lock-based tests because they took way too long.
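The contrast can be sketched with a shared counter (a toy workload, not any real benchmark): the lock-free path is a single atomic read-modify-write on which no thread ever blocks, while the mutex path can park waiters.

```cpp
#include <atomic>
#include <cassert>
#include <mutex>
#include <thread>
#include <vector>

std::atomic<long> lock_free_count{0}; // incremented without any lock
long locked_count = 0;                // incremented under a mutex
std::mutex mtx;

void worker(int iters) {
    for (int i = 0; i < iters; ++i) {
        // Lock-free: one atomic RMW; no thread is ever descheduled here.
        lock_free_count.fetch_add(1, std::memory_order_relaxed);
        // Lock-based: contended waiters may be parked and rescheduled.
        std::lock_guard<std::mutex> guard(mtx);
        ++locked_count;
    }
}

long run_workers(int nthreads, int iters) {
    std::vector<std::thread> threads;
    for (int t = 0; t < nthreads; ++t) threads.emplace_back(worker, iters);
    for (auto& th : threads) th.join();
    return lock_free_count.load();
}
```

Both counters end up correct; the difference is how much wall-clock time (and therefore energy) is spent blocked rather than computing, which grows with contention.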

Ian Collins

Nov 23, 2021, 5:18:19 PM
On 24/11/2021 09:50, Bart wrote:
>
> Or, how about efficient compilers? David Brown always contemptuously
> dismisses mine even though they are fast, yet they show that the basic
> job of source->binary translation can be done 1-2 orders of magnitude faster than
> the heavy-duty compilers he favours.
>
> (For a couple of years one of my compilers was run as interpreted,
> dynamic bytecode. It was still double the speed of gcc!)
>

Well if it generates unoptimised code, it is still not very "green" is
it? Unless you are compiling often and running once...

--
Ian.

David Brown

Nov 23, 2021, 5:35:42 PM
That is correct. There are two aspects to the energy usage of the
switching - there is the charge transfer that depends on the capacitance
of the gates and the voltage, and there is the leakage current during
switching while the gate is half-open. The first is pretty much
independent of frequency of switching, so the same transitions (i.e.,
the same calculations) take the same energy regardless of how fast they
are done. The second is dependent on the switching speed, which goes up
with the voltage (that's why you increase the voltage for higher
frequencies, to get shorter switching times - a higher voltage pulls the
electrons across the gaps faster). The switching time is basically
independent of the switching frequency, except that it has to be shorter
to support higher frequencies. I believe (and this is getting a bit
outside my knowledge) that for CPU internals, it is the charge transfer
that dominates, not the losses during the switching period.

(I have plenty of practical experience with low-power devices, but that
is primarily for microcontrollers. If people really want to go into
depth about the power consumption of "big" processors, it would probably
make more sense to start a thread in comp.arch rather than in C and C++
groups. There are folks over there who have worked on serious cpu
design and know a lot about this stuff.)

> One consideration about the balance between clock speed and efficiency
> is that it is totally relative to how old the technology is. I am pretty
> confident that an old Pentium is much less efficient than a modern i7
> (just to consider popular PC stuf only), even if the latter runs faster
> than the former.

Yes, mostly. Modern devices have smaller geometries - that means faster
switching times and less capacitance, so smaller charge transfers and
lower dynamic power for the work done. But it also means more leakage
and thus more static power. And in order to get higher clock
frequencies and more instructions per clock, pipelines are longer, paths
are wider, there is more speculative execution, branch prediction,
caching, and all the rest of it. These features don't actually
contribute anything to the work being done. Thus an old Pentium running
a given piece of code will involve perhaps orders of magnitude fewer
transistor switches than you would have in a modern i7. (This is
another reason for the big-little pairings for low-power processors -
the simpler "little" cpu will have fewer switches and less energy than
the "big" cpu for the same real work, regardless of frequency.)

Modern devices also have much smarter power and clock gating that was
simply not used in older devices - otherwise the static leakage power
would be far too high.

> The reason is not just because of the added value of the environmental
> footprint in today's public opinion; the fact is that heat dissipation
> is a major bottleneck in IC technology, so in order for a modern
> processor to perform according to nowadays' standards and not melt after
> a few minutes of operation, it /has/ to be built with efficient technology.

Absolutely. I remember reading that the first Itanium processors had
higher power densities than the core of a nuclear reactor - these
devices handle a lot of power in a small space, and it all ends up as heat.

> But then comes "modern" programming, with all its fuzz of managed code,
> JIT gibber, thousands of DLL dependencies, etc., not to forget "modern"
> OS's that eat Tflops just to perform even the simplest of tasks, and
> there goes all of that efficiency.

Wirth's law - "Software gets slower faster than hardware gets faster" -
is over 20 years old, and there is no sign of a change yet. That's one
of the reasons I like programming small microcontrollers - it's all /my/
code, and there's no fuzz or gibber unless I put it in myself.

David Brown

Nov 23, 2021, 5:46:36 PM
On 23/11/2021 21:50, Bart wrote:
> On 23/11/2021 19:59, Manfred wrote:

>> But then comes "modern" programming, with all its fuzz of managed
>> code, JIT gibber, thousands of DLL dependencies, etc., not to forget
>> "modern" OS's that eat Tflops just to perform even the simplest of
>> tasks, and there goes all of that efficiency.
>
> This is why talk of the greenest language is nonsense.
>

Agreed (mostly).

> How about writing efficient applications? And efficient OSes (there
> might be 1000 processes on my PC right now).

Yes - it is the software that takes energy, not the language. It's fair
to say that the software does a lot more now than it used to, but it's
equally fair to question whether that extra work is necessary.

>
> Access any webpage, and the fast download speeds mask how much data is
> being transmitted, which all still requires code to process it. Hardware
> can barely keep up.
>
> Or, how about efficient compilers? David Brown always contemptuously
> dismisses mine even though they are fast, yet they show that the basic
> job of source->binary translation can be done 1-2 magnitutes faster than
> the heavy-duty compilers he favours.
>

Yes. But that is because I understand the difference between compiling
code and /running/ compiled code. As long as your compilation process
is not so slow that it hinders the development process, it doesn't
matter. If a program is only going to be used on a few systems or by a
few people, then the dominant cost is the programmer (and their energy
needs massively outweigh the compiler's).  If it is going to be used a
lot, then the dominant cost is the target systems' runtime (and thus you
want as efficient results as you can get from the source code). At no
point is the effort or energy required by the compiler at all relevant
in the total sum.

Bart

Nov 23, 2021, 5:59:10 PM
On 23/11/2021 22:18, Ian Collins wrote:
> On 24/11/2021 09:50, Bart wrote:
>>
>> Or, how about efficient compilers? David Brown always contemptuously
>> dismisses mine even though they are fast, yet they show that the basic
>> job of source->binary translation can be done 1-2 magnitudes faster than
>> the heavy-duty compilers he favours.
>>
>> (For a couple of years one of my compilers was run as interpreted,
>> dynamic bytecode. It was still double the speed of gcc!)
>>
>
> Well if it generates unoptimised code, it is still not very "green" is
> it?

Well, so was gcc-O0, which was how the comparison was made. Otherwise
the disparity would have been greater.

And aside from the speed, my stuff also tends to be small, so also
economising on storage, loading, storing, downloading, memory...

Here are compilation rates for my tools (build from source), using my
current compiler:

mm 10 Hz (main compiler)
pc 14 Hz (IL project, my version of llvm)
aa 20 Hz (assembler/linker)
qq 10 Hz (interpreter)
bcc 9 Hz (C compiler)

10 Hz means I can build the project 10 times in one second (and using
one core). I'd be interested to see the equivalent figure for gcc or llvm.

If it was only running my stuff, my PC would have very little to do!

> Unless you are compiling often and running once...

During development that's exactly what you do.

For a production program, then yes you can go to town on optimising,
because you don't do it repeatedly.

Lynn McGuire

Nov 23, 2021, 8:00:26 PM
On 11/23/2021 1:21 AM, Juha Nieminen wrote:
> In comp.lang.c++ Richard Damon <Ric...@damon-family.org> wrote:
>> Well since one of the goals of the C Language was to enable programmers
>> to write fast code, it can make sense for C to be 'Green', at least by
>> some measures.
>>
>> Fast code will be more energy efficient as processors basically use
>> power based on the number of instructions executed (and memory accessed).
>
> I think that for as long as programmable computers have existed, there has
> been a direct correlation between the "levelness" ("higher-level" vs.
> "low-level") of a language and disregard towards how much resources the
> language uses. As computers have become faster and faster, and the amount
> of resources (primarily RAM) has increased, this indifference towards
> resource consumption in higher-level languages has only likewise
> increased.
>
> The farther away the design of the language has been from the details of
> the underlying hardware, the more the question of "how efficient is
> the language, and how much RAM does it consume?" has been (implicitly
> or explicitly) answered with, essentially, "it doesn't matter" and
> "who cares?"
>
> Take pretty much any scripting language, or any other interpreted
> language (like the original BASIC and most of its subsequent variants).
> Pretty much nobody cared how fast they are, or how much RAM they
> consume. When someone is, let's say, implementing something in PHP
> they seldom stop to think how much memory it consumes or how fast it is.
> When someone is implementing something in shell script, one most
> definitely does not think about speed or memory consumption.
>
> Most object-oriented languages don't really care about, especially,
> memory consumption. Especially garbage-collected OO languages don't
> give a flying F about memory consumption. So what if an object needs
> something like 16 or 32 bytes of bookkeeping data? Who cares? That's
> nothing! Trivial and inconsequential! Modern computers have gigabytes
> of RAM! Why should you care if an object takes 32 bytes in addition
> to whatever is inside it? What a silly notion!
>
> Ironically, the programming world has in the last couple of decades awakened
> to the harsh realization that this complete disregard of memory usage
> actually causes inefficiencies in modern CPU architectures (because
> cache misses are very expensive).
>
> Oh well. As long as it works, who cares? Buy a faster computer.

Not in California.

https://nichegamer.com/high-end-gaming-pcs-banned-in-six-us-states-after-california-energy-bill-limits-sales-on-high-performance-pcs/

Lynn

Lynn McGuire

Nov 23, 2021, 8:01:57 PM
On 11/23/2021 6:17 AM, Otto J. Makela wrote:
> Lynn McGuire <lynnmc...@gmail.com> wrote:
>
>> "C Is The Greenest Programming Language" by: Chris Lott
>> https://hackaday.com/2021/11/18/c-is-the-greenest-programming-language/
>
> Perhaps one contributing factor is that not so much development is any
> longer done using C or C++, and the programs that are run (and still
> being developed) were originally created in the era when CPU power was
> much lower than these days, so code optimization was more important.

I write C++ and Fortran code just about every day for our software products.

Lynn

Richard Damon

Nov 23, 2021, 9:27:50 PM
Doesn't work.

I have a lot of long lived object that store a string with a 'name' of
the object.

Most are created with a fixed compile-time name that I pass as a const
char* pointer to a read-only literal.

A few cases need to dynamically create a name based on specific usages,
so these need that name stored as a dynamic value, but that memory wants
to be reclaimed if/when the long lived object does go away.

string_view doesn't help here, as when I create the few dynamic names,
there is no place to keep that char array that is holding the name.

Otto J. Makela

Nov 24, 2021, 4:33:47 AM
Do you consider this to be a common situation?
--
/* * * Otto J. Makela <o...@iki.fi> * * * * * * * * * */
/* Phone: +358 40 765 5772, ICBM: N 60 10' E 24 55' */
/* Mail: Mechelininkatu 26 B 27, FI-00100 Helsinki */
/* * * Computers Rule 01001111 01001011 * * * * * * */

Otto J. Makela

Nov 24, 2021, 4:41:15 AM
I keep rereading that first sentence, is the meaning reversed?

Philipp Klaus Krause

Nov 24, 2021, 5:43:51 AM
Am 23.11.21 um 00:25 schrieb Richard Damon:
>
> Well since one of the goals of the C Language was to enable programmers
> to write fast code, it can make sense for C to be 'Green', at least by
> some measures.
>
> Fast code will be more energy efficient as processors basically use
> power based on the number of instructions executed (and memory accessed).

The paper mentions that that is a common assumption, but not true.

Looking at table 4, we see that C is in the top spot both when it comes
to fast code and energy efficient code. And many other languages also
rank similarly in energy consumption as they do in speed. But there are
others. E.g.:

Go is fast (only 183% slower than C), but energy inefficient (323% more
energy consumed than C).

Lisp is slow (240% slower than C) but energy efficient (127% more energy
consumed than C).

Juha Nieminen

Nov 24, 2021, 5:54:19 AM
In comp.lang.c++ Chris M. Thomasson <chris.m.t...@gmail.com> wrote:
> On 11/22/2021 11:10 PM, Juha Nieminen wrote:
>> In comp.lang.c++ Chris M. Thomasson <chris.m.t...@gmail.com> wrote:
>>> Interesting. However, I can write something in C that uses all CPU's
>>> 100%.
>>
>> I don't think that's the point.
>>
>
> Humm... I was just thinking along the lines of 'what does the C code
> actually do?' If it ends up blasting the system, how green is it? Well,
> yeah, that is a different point. You are right Juha.

I am assuming that the article considers the cases where the CPU is at
full load in all the cases, with the program having been implemented
with different programming languages. The faster the program is done,
the less overall energy it will require. If a C program does the job
in 1 second and a PHP program does it in 30 seconds, it's obvious that
the C program is going to consume less energy for the same task.

So it's not a question of "how many % of the CPU can I use with this
programming language", but "how long do I need to use the CPU in
order to perform this task".

Juha Nieminen

Nov 24, 2021, 5:55:19 AM
In comp.lang.c++ Guillaume <mes...@bottle.org> wrote:
> Le 23/11/2021 à 08:21, Juha Nieminen a écrit :
>> Oh well. As long as it works, who cares? Buy a faster computer.
>
> This sentence shows that you quite obviously got what "green" means.

I have no idea what you are talking about.

Maybe you missed the fact that that sentence you quoted is sarcastic?

Philipp Klaus Krause

Nov 24, 2021, 5:56:28 AM
Am 23.11.21 um 16:03 schrieb Manfred:
> On 11/22/2021 11:19 PM, Lynn McGuire wrote:
>> Our results show interesting findings, such as, slower/faster
>> languages consuming less/more energy, and how memory usage influences
>> energy consumption
>
> I find the correlation slower/faster language respectively less/more
> energy quite confusing. In fact I believe it is the opposite.
>

Well, they do state in the paper that the common assumption is that
faster languages are more energy efficient. And their data supports that
this assumption actually holds for many languages.

But the surprising, and thus interesting result from their paper is that
this is not universally true. You can see one example when looking at
the data for Go and Lisp in the paper:

Philipp Klaus Krause

Nov 24, 2021, 6:02:03 AM
Am 24.11.21 um 11:54 schrieb Juha Nieminen:
>
> So it's not a question of "how many % of the CPU can I use with this
> programming language", but "how long do I need to use the CPU in
> order to perform this task".
>

On the other hand, the energy consumption of that CPU during that time
also depends on what it is doing (and to some degree on what data it is
processing). People have been doing side-channel attacks on cryptography
by very accurately measuring the energy consumption of the computers
doing the encrypting or decrypting.

Long ago, for fun, I did measure the energy consumption of a Z80 (in a
ColecoVision) with reasonably high time resolution (current probe on
oscilloscope). From the trace of the energy consumption, I could see
which instructions were executed.

David Brown

Nov 24, 2021, 7:58:44 AM
I used to debug my assembly code on my ZX Spectrum (with a Z80 cpu) by
the sound of the power supply.

However, with modern cpus there is so much going on that you are
unlikely to get much detail by tracking power consumption. You could
perhaps distinguish when blocks such as SIMD or the FPU are active, but
otherwise there are so many instructions in flight at a time that you
could not identify them.  Differential power analysis for cryptanalysis
is almost entirely a thing of the past. (It could be practical for
simpler processors and dedicated cryptography devices, except that
designers of these things are usually aware of such attacks and they are
easily blocked.)


Öö Tiib

Nov 24, 2021, 8:54:38 AM
He didn't, he just wanted to troll; not worth a reply.

Bart

Nov 24, 2021, 8:59:32 AM
On 23/11/2021 22:46, David Brown wrote:
> On 23/11/2021 21:50, Bart wrote:

>> Or, how about efficient compilers? David Brown always contemptuously
>> dismisses mine even though they are fast, yet they show that the basic
>> job of source->binary translation can be done 1-2 magnitudes faster than
>> the heavy-duty compilers he favours.
>>
>
> Yes. But that is because I understand the difference between compiling
> code and /running/ compiled code. As long as your compilation process
> is not so slow that it hinders the development process,

I find compilers such as gcc slow enough that they would hinder /me/.
I can build my 3 main language tools, about 120Kloc and 100 files, in
1/4 second, the same time it takes gcc to build hello.c, about 4 lines
in one file.

> it doesn't
> matter. If a program is only going to be used on a few systems or by a
> few people, then the dominant cost is the programmer (and their energy
> needs massively outweigh the compiler's). If it is going to be used a
> lot, then the dominant cost is the target systems' runtime (and thus you
> want as efficient results as you can get from the source code). At no
> point is the effort or energy required by the compiler at all relevant
> in the total sum.

It was an example of the sort of program that I can make run efficiently
by making some extra effort, and also keeping things small, which
actually was not written in C.

It isn't about language, except that everything else being equal, a C
implementation of a computationally intensive application like mine is
going to be faster than an equivalent CPython version, and most likely
still faster than PyPy.

So if gcc was implemented in CPython, compiling hello.c might take 10
seconds instead of 0.25 seconds, but tcc does the job in 0.03 seconds.

Consider that on my machine, even an empty program takes 0.02 seconds to
run...

Öö Tiib

Nov 24, 2021, 9:04:12 AM
On Wednesday, 24 November 2021 at 12:56:28 UTC+2, Philipp Klaus Krause wrote:
>
> But the surprising, and thus interesting result from their paper is that
> this is not universally true. You can see one example when looking at
> the data for Go and Lisp in the paper:
> Go is fast (only 183% slower than C), but energy inefficient (323% more
> energy consumed than C).
>
> Lisp is slow (240% slower than C) but energy efficient (127% more energy
> consumed than C).

That is probably because of different concurrency support. A language a
dozen years old (like Go) can be expected to be better designed for
concurrency than one five dozen years old (like Lisp).

Manfred

Nov 24, 2021, 9:31:03 AM
On 11/23/2021 11:35 PM, David Brown wrote:
> On 23/11/2021 20:59, Manfred wrote:
[...]
>> The reason is not just because of the added value of the environmental
>> footprint in today's public opinion; the fact is that heat dissipation
>> is a major bottleneck in IC technology, so in order for a modern
>> processor to perform according to nowadays' standards and not melt after
>> a few minutes of operation, it /has/ to be built with efficient technology.
>
> Absolutely. I remember reading that the first Itanium processors had
> higher power densities than the core of a nuclear reactor - these
> devices handle a lot of power in a small space, and it all ends up as heat.
>
About anecdotes..
I remember during my university days a professor showing the difference
in power management for different technologies; something like a 1MB RAM
chip built on TTL would have to dissipate kilowatts of power....

Bart

Nov 24, 2021, 9:43:37 AM
On 24/11/2021 14:30, Manfred wrote:
> On 11/23/2021 11:35 PM, David Brown wrote:
>> On 23/11/2021 20:59, Manfred wrote:
> [...]
>>> The reason is not just because of the added value of the environmental
>>> footprint in today's public opinion; the fact is that heat dissipation
>>> is a major bottleneck in IC technology, so in order for a modern
>>> processor to perform according to nowadays' standards and not melt after
>>> a few minutes of operation, it /has/ to be built with efficient
>>> technology.
>>
>> Absolutely.  I remember reading that the first Itanium processors had
>> higher power densities than the core of a nuclear reactor - these
>> devices handle a lot of power in a small space, and it all ends up as
>> heat.
>>
> About anecdotes..
> I remember during my university days a professor showing the difference
> in power management for different technologies; something like a 1MB RAM
> chip built on TTL would have to dissipate kilowatts of power....

Well, using the 74LS189 device, which contains 64 bits of RAM, you
would have more practical problems first, such as needing 130,000 of the
things to make 1MB, plus all the address decoding circuitry.

It might be quite a few kilowatts you'd need.


Dozi...@thekennel.co

Nov 24, 2021, 10:48:54 AM
On Tue, 23 Nov 2021 12:28:12 -0500
Richard Damon <Ric...@Damon-Family.org> wrote:
>On 11/23/21 11:55 AM, Dozi...@thekennel.co wrote:
>> On Tue, 23 Nov 2021 14:39:08 +0100
>> Bonita Montero <Bonita....@gmail.com> wrote:
>>> Developing in C is a magnitude more effort than in C++, and if you've
>>
>> That depends on the problem. If you're writing code that needs to store a
>> lot of structured data then C wouldn't be your first choice of language. But
>> if you're writing something that simply interfaces with system calls then
>> there's probably not much if any extra effort in using C over C++.
>>
>
>I would disagree. With a decent compiler, C code can generate close to
>assembly level optimizations for most problems. (Maybe it doesn't have

So can a C++ compiler, and with modern additions such as constexpr, C++ can optimise
in ways that C simply can't.

>ANYTHING in terms of data-structures that another language can generate,
>you can generate in C.

True, but you have to re-invent the wheel each time because frankly the level
of complex data structure support in the standard library in C is
woeful. E.g. the hsearch() and bsearch() functionality is rubbish (only
1 tree/hash per process!) and Berkeley DB is a PITA to use.

>The big disadvantage is that YOU as the programmer need to deal with a
>lot of the issues rather than to compiler doing things for you, but that
>is exactly why you can do things possibly more efficiently then the
>compiler. You could have always generated the same algorithm that the
>compiler did.

I doubt I could for example write a RB tree system better than that implemented
in the C++ STL. It has 30 years of refining behind it.

>Now, if you want to talk of the efficiency of WRITING the code, (as
>opposed to executing it) all this power it gives you is a negative,
>which seems to be what you are talking about.

Indeed, but your argument that C always produces more efficient code than C++
probably hasn't been true for 20 years.

Dozi...@thekennel.co

Nov 24, 2021, 10:50:01 AM
On Wed, 24 Nov 2021 11:40:54 +0200
o...@iki.fi (Otto J. Makela) wrote:
>Dozi...@thekennel.co wrote:
>
>> On Tue, 23 Nov 2021 14:17:50 +0200
>> o...@iki.fi (Otto J. Makela) wrote:
>>>Lynn McGuire <lynnmc...@gmail.com> wrote:
>>>
>>>> "C Is The Greenest Programming Language" by: Chris Lott
>>>> https://hackaday.com/2021/11/18/c-is-the-greenest-programming-language/
>>>
>>>Perhaps one contributing factor is that not so much development is any
>>>longer done using C or C++, and the programs that are run (and still
>>>being developed) were originally created in the era when CPU power was
>>>much lower than these days, so code optimization was more important.
>>
>> Its good to see the attitude of just throw more CPU at something
>> instead of optimising it is still around. These days when server
>> farms are taking up a siginificant percentage of the planet's
>> electrical output its beholder on programmers to make their code
>> as efficient as is reasonable.
>
>I keep rereading that first sentence, is the meaning reversed?

No.

Scott Lurndal

Nov 24, 2021, 11:04:32 AM
And Hawaii, Colorado, Oregon, Vermont and Washington State.

Why did you single out California?

And one can certainly purchase the most powerful computers in all of those
states by assembling them personally - note also that servers aren't covered
by the regulation, only desktop systems.

Did you actually read the article you posted a link to?

Here's an article without the panic, and with actual, you know, facts.

https://www.makeuseof.com/why-the-california-ban-on-power-hogging-pcs-is-a-good-thing/

Richard Damon

Nov 24, 2021, 11:55:07 AM
On 11/24/21 10:48 AM, Dozi...@thekennel.co wrote:
> On Tue, 23 Nov 2021 12:28:12 -0500
> Richard Damon <Ric...@Damon-Family.org> wrote:
>> On 11/23/21 11:55 AM, Dozi...@thekennel.co wrote:
>>> On Tue, 23 Nov 2021 14:39:08 +0100
>>> Bonita Montero <Bonita....@gmail.com> wrote:
>>>> Developing in C is a magnitude more effort than in C++, and if you've
>>>
>>> That depends on the problem. If you're writing code that needs to store a
>>> lot of structured data then C wouldn't be your first choice of language. But
>>> if you're writing something that simply interfaces with system calls then
>>> there's probably not much if any extra effort in using C over C++.
>>>
>>
>> I would disagree. With a decent compiler, C code can generate close to
>> assembly level optimizations for most problems. (Maybe it doesn't have
>
> So can a C++ compiler and with modern additions such constexpr C++ can optimise
> in ways that C simply can't.
>
>> ANYTHING in terms of data-structures that another language can generate,
>> you can generate in C.
>
> True, but you have to re-invent the wheel each time because frankly the level
> of complex data structure library support in the standard library in C is
> woeful. Eg the hsearch() and bsearch() functionality are frankly rubbish (only
> 1 tree/hash per process!) and Berkeley DB is a PITA to use.

Right, I never said it was the BEST way, especially if you include
programmer productivity.

If your goal is ABSOLUTELY TOP CPU EFFICIENCY, then 'standard libraries'
become just optional (use only if it IS the best for YOUR job).

Yes, this might mean 3 years of programming to save a millisecond of
execution, but it still puts the hand-crafted code ahead of the higher
level language in RAW efficiency.

>
>> The big disadvantage is that YOU as the programmer need to deal with a
>> lot of the issues rather than to compiler doing things for you, but that
>> is exactly why you can do things possibly more efficiently then the
>> compiler. You could have always generated the same algorithm that the
>> compiler did.
>
> I doubt I could for example write a RB tree system better than that implemented
> in the C++ STL. It has 30 years of refining behind it.

So you can directly implement that algorithm in C. Yes, it gets messier,
but you CAN do it.

And, yes, it also means you need to research (or look at implementations
and 'borrow') the best methods. Lots of programmer effort to get the
greenest code.

>
>> Now, if you want to talk of the efficiency of WRITING the code, (as
>> opposed to executing it) all this power it gives you is a negative,
>> which seems to be what you are talking about.
>
> Indeed, but your argument that C always produces more efficient code than C++
> probably hasn't been true for 20 years.
>

FLAW: Not ALWAYS, but CAN, big difference.

Yes, C++ idioms can generate complex code faster than C, and allow
things to be put into libraries that C can't readily do, and thus an
expert can write the library and 'lend' that expertise to an ordinary
coder who uses the library. A custom-coded C version can do just as
well, because you can express the same instruction sequences in C as
were generated from C++, you just need to be more explicit about it.



Bonita Montero

Nov 24, 2021, 12:32:26 PM
A class having a variant<string_view, string> member and an interface
to get or modify that string ? F.e. you could add a modify-interface
that does a copy-on-write modification of the string_view state,
changing it into a string-state?


Richard Damon

Nov 24, 2021, 12:55:05 PM
You're getting very heavy now.

It was much simpler to just implement the version of string I wanted,
which kept a pointer to the data, and an allocation size.

When constructed from a char const* value, it checked if this pointer
was to the read only block of memory, and if so didn't copy that string,
but just kept the original pointer, with an allocated size of 0 (to make
it easy to see if it was a read-only string or a writable one).

If your use case doesn't fit what the library supports well, you can
make your own.
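A minimal sketch of that scheme in C++ (the class and function names here are illustrative, not from the poster's actual code; instead of detecting the read-only data segment at runtime, which is not portable, this version makes the choice explicit at the call site):

```cpp
#include <cstddef>
#include <cstdlib>
#include <cstring>

// Pointer plus allocation size; alloc == 0 marks a non-owned,
// read-only string (e.g. a literal) that must never be freed.
class NameString {
    const char *ptr;
    std::size_t alloc;                  // 0 => literal, never freed
public:
    // Compile-time names: keep the pointer, copy nothing.
    explicit NameString(const char *literal) : ptr(literal), alloc(0) {}

    // Dynamically built names: take ownership of a heap copy.
    static NameString owned(const char *s) {
        NameString n("");
        n.alloc = std::strlen(s) + 1;
        char *copy = static_cast<char *>(std::malloc(n.alloc));
        std::memcpy(copy, s, n.alloc);
        n.ptr = copy;
        return n;
    }

    bool read_only() const { return alloc == 0; }
    const char *c_str() const { return ptr; }

    NameString(NameString &&o) noexcept : ptr(o.ptr), alloc(o.alloc) {
        o.ptr = "";
        o.alloc = 0;
    }
    NameString(const NameString &) = delete;
    NameString &operator=(const NameString &) = delete;

    ~NameString() {
        if (alloc != 0)
            std::free(const_cast<char *>(ptr));
    }
};
```

So `NameString fixed("uart0")` costs nothing beyond the pointer, while the few dynamic names go through `NameString::owned(buf)` and are freed on destruction.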



Keith Thompson

Nov 24, 2021, 1:14:24 PM
What does "183% slower" mean? I presume it doesn't run backwards at 83%
of C's speed.

For that matter, does "323% more energy consumed" mean 3.23 times as
much energy or 4.23 times as much energy?  (Consider that "10% more"
means "1.1 times as much".)

> Lisp is slow (240% slower than C) but energy efficient (127% more energy
> consumed than C).

--
Keith Thompson (The_Other_Keith) Keith.S.T...@gmail.com
Working, but not speaking, for Philips
void Void(void) { Void(); } /* The recursive call of the void */

David Brown

Nov 24, 2021, 1:56:06 PM
On 24/11/2021 19:14, Keith Thompson wrote:
> Philipp Klaus Krause <p...@spth.de> writes:
>> Am 23.11.21 um 16:03 schrieb Manfred:
>>> On 11/22/2021 11:19 PM, Lynn McGuire wrote:
>>>> Our results show interesting findings, such as, slower/faster
>>>> languages consuming less/more energy, and how memory usage influences
>>>> energy consumption
>>>
>>> I find the correlation slower/faster language respectively less/more
>>> energy quite confusing. In fact I believe it is the opposite.
>>>
>>
>> Well, they do state in the paper that the common assumption is that
>> faster languages are more energy efficient. And their data supports that
>> this assumption actually holds for many languages.
>>
>> But the surprising, and thus interesting result from their paper is that
>> this is not universally true. You can see one example when looking at
>> the data for Go and Lisp in the paper:
>>
>> Go is fast (only 183% slower than C), but energy inefficient (323% more
>> energy consumed than C).
>
> What does "183% slower" mean? I presume it doesn't run backwards at 83%
> of C's speed.
>

I would guess it means it takes 1.83 times as long to handle the same
basic task. I can't really see any logical alternative explanation.
But "183% slower" is not exactly a good way to express that!

> For that matter, does "323% more energy consumed" mean 3.23 times as
> much energy or 4.23 times as much energy?  (Consider that "10% more"
> means "1.1 times as much".)
>

I would guess 3.23 times as much energy, based on similarity to my
interpretation of "183% slower". Again, I agree that it's a poor way to
express it.

Bonita Montero

Nov 24, 2021, 2:53:17 PM
>> A class having a variant<string_view, string> member and an interface
>> to get or modify that string ? F.e. you could add a modify-interface
>> that does a copy-on-write modification of the string_view state,
>> chaning it into a string-state ?

> Your getting very heavy now.

Ok, developing a variant-string-class with all constructors,
methods and operators would be complex. But having a simple
variant<string, string_view> is easy.
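A minimal sketch of that variant idea (the struct and member names here are illustrative assumptions, not code from this thread): start as a cheap string_view over a literal, and copy into an owned std::string only if the name is ever modified.

```cpp
#include <string>
#include <string_view>
#include <variant>

// Holds either a non-owning view of a literal or an owned string.
struct Name {
    std::variant<std::string_view, std::string> v;

    explicit Name(std::string_view literal) : v(literal) {}

    std::string_view get() const {
        // Both alternatives convert cheaply to a string_view.
        return std::visit(
            [](const auto &s) { return std::string_view(s); }, v);
    }

    // Copy-on-write style modification: promote view -> string first.
    void append(std::string_view suffix) {
        if (const auto *sv = std::get_if<std::string_view>(&v))
            v = std::string(*sv);           // the one-time copy
        std::get<std::string>(v).append(suffix);
    }
};
```

`Name n("obj"); n.append("-x");` pays for an allocation only on the first modification; names that are never modified stay as views.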

Philipp Klaus Krause

Nov 24, 2021, 3:12:17 PM
Am 24.11.21 um 19:14 schrieb Keith Thompson:
> Philipp Klaus Krause <p...@spth.de> writes:
>> Am 23.11.21 um 16:03 schrieb Manfred:
>>> On 11/22/2021 11:19 PM, Lynn McGuire wrote:
>>>> Our results show interesting findings, such as, slower/faster
>>>> languages consuming less/more energy, and how memory usage influences
>>>> energy consumption
>>>
>>> I find the correlation slower/faster language respectively less/more
>>> energy quite confusing. In fact I believe it is the opposite.
>>>
>>
>> Well, they do state in the paper that the common assumption is that
>> faster languages are more energy efficient. And their data supports that
>> this assumption actually holds for many languages.
>>
>> But the surprising, and thus interesting result from their paper is that
>> this is not universally true. You can see one example when looking at
>> the data for Go and Lisp in the paper:
>>
>> Go is fast (only 183% slower than C), but energy inefficient (323% more
>> energy consumed than C).
>
> What does "183% slower" mean? I presume it doesn't run backwards at 83%
> of C's speed.

By "183% slower", I meant "takes 183% more time to execute", i.e. 2.83
times the execution time.

>
> For that matter, does "323% more energy consumed" mean 3.23 times as
> much energy or 4.23 times as much energy?  (Consider that "10% more"
> means "1.1 times as much".)

That was a mistake, that should have been "223% more energy consumed",
i.e. 3.23 times the energy of the C program.

Philipp

Philipp Klaus Krause

Nov 24, 2021, 3:13:26 PM
Am 24.11.21 um 19:55 schrieb David Brown:
> Again, I agree that it's a poor way to
> express it.

Yes. I should have written that differently.

Richard Damon

Nov 24, 2021, 3:13:48 PM
Building a variant<string, string_view> may be easy but would be a HOG.

My variant-string class implements all the functionality I needed, and
none of the stuff I didn't, and saved a LOT of memory, as the standard
string class includes a LOT of virtual functions to do things I don't
need, and my implementation isn't smart enough to do the whole-program
recompile to trim out unneeded virtual functions.

For 1 day of work I saved about 25% of my program's size because I could
drop all the locale support that I didn't need.

The first-cut string version was to fix a program-too-big error for an
embedded system, because std::string pulled in WAY too much code just to
exist.

The second cut added the const char* string optimization to save a lot
of ram.

Situational awareness is key here.

Philipp Klaus Krause

Nov 24, 2021, 3:17:59 PM
Let's do this again:

When normalizing to the energy use and execution time of C (i.e. C is
1.0 for both energy use and execution time):

Go code takes 3.23 energy, and 2.83 execution time.

Lisp code takes 2.27 energy, and 3.40 execution time.

Bart

Nov 24, 2021, 3:42:30 PM
Do you have direct links/locations to those figures?

The OP's PDF link seems to be making using the Shootout benchmarks,
which I don't rate that highly.

There can be half a dozen different versions even in the same language,
using different algorithms.

The actual tasks are dubious too; a benchmark based around one
bottleneck is a long way from real code.

(Also, how did Lisp manage to be only 3 times as slow as C? It was both
interpreted /and/ dynamically typed last I looked.)

Lynn McGuire

Nov 24, 2021, 4:28:55 PM
On 11/24/2021 3:33 AM, Otto J. Makela wrote:
> Lynn McGuire <lynnmc...@gmail.com> wrote:
>
>> On 11/23/2021 6:17 AM, Otto J. Makela wrote:
>>> Lynn McGuire <lynnmc...@gmail.com> wrote:
>>>> "C Is The Greenest Programming Language" by: Chris Lott
>>>> https://hackaday.com/2021/11/18/c-is-the-greenest-programming-language/
>>> Perhaps one contributing factor is that not so much development
>>> is any longer done using C or C++, and the programs that are run
>>> (and still being developed) were originally created in the era
>>> when CPU power was much lower than these days, so code
>>> optimization was more important.
>>
>> I write C++ and Fortran code just about every day for our software
>> products.
>
> Do you consider this to be a common situation?

I really have no idea.

Lynn

Chris M. Thomasson

Nov 24, 2021, 5:12:25 PM
On 11/22/2021 2:19 PM, Lynn McGuire wrote:
> "C Is The Greenest Programming Language" by: Chris Lott
>    https://hackaday.com/2021/11/18/c-is-the-greenest-programming-language/
>
> "Have you ever wondered if there is a correlation between a computer’s
> energy consumption and the choice of programming languages? Well, a
> group Portuguese university researchers did and set out to quantify it.
> Their 2017 research paper entitled Energy Efficiency across Programming
> Languages / How Do Energy, Time, and Memory Relate?  may have escaped
> your attention, as it did ours."
>    https://greenlab.di.uminho.pt/wp-content/uploads/2017/10/sleFinal.pdf
>
> "Abstract: This paper presents a study of the runtime, memory usage and
> energy consumption of twenty seven well-known software languages. We
> monitor the performance of such languages using ten different
> programming problems, expressed in each of the languages. Our results
> show interesting findings, such as, slower/faster languages consuming
> less/more energy, and how memory usage influences energy consumption.
> We show how to use our results to provide software engineers support to
> decide which language to use when energy efficiency is a concern."
>

Humm... Perhaps, a main question can be, is there such a thing as a
"green" programmer? One can write crappy and bloated code in C, that
goes much slower than, say Python. However, one can write really
efficient code in C... Therefore, the quality of a programmer seems to
just want to come into play, so to speak...

Using undefined behavior wrt the standard can be a key aspect to writing
faster, more efficient code. Possible scenario: This OS has built-in
User-space Read-Copy-Update (URCU). However, if I use it then my program
is no longer portable...

Can things be greener with a talented programmer and a low-level
language... Yes!

Can things be greener with a crappy programmer and a low-level
language... Humm, No? By chance? Unknown...?

Philipp Klaus Krause

Nov 24, 2021, 5:25:33 PM
On 24.11.21 at 21:42, Bart wrote:
> On 24/11/2021 20:17, Philipp Klaus Krause wrote:
>> Let's do this again:
>>
>> When normalizing to the energy use and execution time of C (i.e. C is
>> 1.0 for both energy use and execution time):
>>
>> Go code takes 3.23 energy, and 2.83 execution time.
>>
>> Lisp code takes 2.27 energy, and 3.40 execution time.
>>
>
> Do you have direct links/locations to those figures?

Table 4 in the paper at the link the OP gave
(https://greenlab.di.uminho.pt/wp-content/uploads/2017/10/sleFinal.pdf).

Manfred

Nov 25, 2021, 6:00:27 AM
On 11/24/2021 11:12 PM, Chris M. Thomasson wrote:
> On 11/22/2021 2:19 PM, Lynn McGuire wrote:
>> "C Is The Greenest Programming Language" by: Chris Lott
>>
>> https://hackaday.com/2021/11/18/c-is-the-greenest-programming-language/
>>
>> "Have you ever wondered if there is a correlation between a computer’s
>> energy consumption and the choice of programming languages? Well, a
>> group Portuguese university researchers did and set out to quantify
>> it. Their 2017 research paper entitled Energy Efficiency across
>> Programming Languages / How Do Energy, Time, and Memory Relate?  may
>> have escaped your attention, as it did ours."
>>     https://greenlab.di.uminho.pt/wp-content/uploads/2017/10/sleFinal.pdf
>>
>> "Abstract: This paper presents a study of the runtime, memory usage
>> and energy consumption of twenty seven well-known software
>> languages. We monitor the performance of such languages using ten
>> different programming problems, expressed in each of the languages.
>> Our results show interesting findings, such as, slower/faster
>> languages consuming less/more energy, and how memory usage influences
>> energy consumption. We show how to use our results to provide
>> software engineers support to decide which language to use when energy
>> efficiency is a concern."
>>
>
> Humm... Perhaps, a main question can be, is there such a thing as a
> "green" programmer? One can write crappy and bloated code in C, that
> goes much slower than, say Python.

Everything is possible, but achieving that requires some specialized
skillset(!)

> However, one can write really
> efficient code in C... Therefore, the quality of a programmer seems to
> just want to come into play, so to speak...
>
> Using undefined behavior wrt the standard can be a key aspect to writing
> faster, more efficient code. Possible scenario: This os has built-in
> User-space Read-Copy-Update (URCU). However, if I use it then my program
> is no longer portable...
>
> Can things be greener with a talented programmer and a low-level
> language... Yes!
>
> Can things be greener with a crappy programmer and a low-level
> language... Humm, No? By chance? Unknown...?
>

The issue is about idiomatic constructs, i.e. the mass of code running
out there. I think that special circumstances and perverse cases do not
really matter.
Even more considering that C code that is so bad as to manage to perform
worse than some interpreted language is most likely pretty much
unmaintainable too, so doomed to fast extinction in the digital ecosystem.

Manfred

Nov 25, 2021, 6:12:21 AM
On 11/23/2021 9:50 PM, Bart wrote:
> On 23/11/2021 19:59, Manfred wrote:
>> On 11/23/2021 5:51 PM, David Brown wrote:
>>> On 23/11/2021 16:50, Kenny McCormack wrote:
>>>> In article <snivri$15p1$1...@gioia.aioe.org>, Manfred
>>>> <non...@add.invalid> wrote:
>>>>> On 11/22/2021 11:19 PM, Lynn McGuire wrote:
>>>>>> Our results show interesting findings, such as, slower/faster
>>>>>> languages consuming less/more energy, and how memory usage influences
>>>>>> energy consumption
>>>>>
>>>>> I find the correlation slower/faster language respectively less/more
>>>>> energy quite confusing. In fact I believe it is the opposite.
>>>>
>>>> Yes.  I think this was misstated/sloppily-written in the original text.
>>>>
>>>> It depends, of course, on what exactly you mean by a "slower" language.
>>>> It is true that if you run the CPU at a slower speed (and that would
>>>> make
>>>> for a slower processing model), then you will use less energy.
>>>>
>>>
>>> It is /not/ true that running the CPU at a slower speed uses less energy
>>> - at least, it is often not true.  It is complicated.
>>>
>>> There are many aspects that affect how much energy is taken for a given
>>> calculation.
>>>
>>> Regarding programming languages, it is fairly obvious that a compiled
>>> language that takes fewer instructions to do a task using optimised
>>> assembly is going to use less energy than a language that has less
>>> optimisation and more assembly instructions, or does some kind of
>>> interpretation.  Thus C (and other optimised compiled languages like
>>> C++, Rust or Ada) are going to come out top.
>>>
>>> It is less obvious how the details matter.  Optimisation flags have an
>>> effect, as do choices of instruction (as functional blocks such as SIMD
>>> units or floating point units) may be dynamically enabled.  For some
>>> target processors, unrolling a loop to avoid branches will reduce energy
>>> consumption - on others, rolled loops to avoid cache misses will be
>>> better.  Some compilers targeting embedded systems (where power usage is
>>> often more important) have "optimise for power" as a third option to the
>>> traditional "optimise for speed" and "optimise for size".
>>>
>>> The power consumption for a processor is the sum of the static power and
>>> the dynamic power.  Dynamic power is proportional to the frequency and
>>> the square of the voltage.  And energy usage is power times time.
>>>
>>> A processor that is designed to run at high frequency is likely to have
>>> high-leakage transistors, and therefore high static power - when the
>>> circuits are enabled.  But the faster you get the work done, the higher
>>> a proportion of the time you can have in low-power modes with minimal
>>> static power.  On the other hand, higher frequencies may need higher
>>> voltages.
>>>
>>> As a rule of thumb, it is better to run your cpu at its highest
>>> frequency - or at the highest it can do without raising the voltage -
>>> and get the calculation done fast.  Then you can spend more time in
>>> low-power sleep modes.  However, entering and exiting sleep modes takes
>>> time and energy, so you don't want to do it too often - hence the
>>> "big-little" processor combinations where you have a slower core that
>>> can be switched on and off more efficiently.
>>>
>>
>> I am not sure about that "rule" - in general modern integrated
>> electronics waste most energy during state transitions, and that is
>> directly coupled with clock frequency, but I admit there may be more
>> to it that I don't know.
>>
>> One consideration about the balance between clock speed and efficiency
>> is that it is totally relative to how old the technology is. I am
>> pretty confident that an old Pentium is much less efficient than a
>> modern i7 (just to consider popular PC stuf only), even if the latter
>> runs faster than the former.
>> The reason is not just because of the added value of the environmental
>> footprint in today's public opinion; the fact is that heat dissipation
>> is a major bottleneck in IC technology, so in order for a modern
>> processor to perform according to nowadays' standards and not melt
>> after a few minutes of operation, it /has/ to be built with efficient
>> technology.
>
>> But then comes "modern" programming, with all its fuzz of managed
>> code, JIT gibber, thousands of DLL dependencies, etc., not to forget
>> "modern" OS's that eat Tflops just to perform even the simplest of
>> tasks, and there goes all of that efficiency.
>
> This is why talk of the greenest language is nonsense.

Not really, this study does show a significant difference between
language choices for the same task.

>
> How about writing efficient applications?
Yes, and that includes choosing the right language.

> And efficient OSes (there
> might be 1000 processes on my PC right now).
That's a different matter than applications. It still affects overall
energy consumption of course. The choice of the language still matters
for OS processes, but general OS design has a major impact as well.
My criticism is aimed at components that are shipped with many OSes,
yet are not technically part of the core OS infrastructure. Think e.g.
of the Windows Update infrastructure - it's probably one of the most
bloated features I've ever seen.

>
> Access any webpage, and the fast download speeds mask how much data is
> being transmitted, which all still requires code to process it. Hardware
> can barely keep up.
>
> Or, how about efficient compilers? David Brown always contemptuously
> dismisses mine even though they are fast, yet they show that the basic
job of source->binary translation can be done 1-2 magnitudes faster than
> the heavy-duty compilers he favours.

As others have pointed out, this does not matter, since compilers are
not part of a typical runtime environment.

>
> (For a couple of years one of my compilers was run as interpreted,
> dynamic bytecode. It was still double the speed of gcc!)

Dozi...@thekennel.co

Nov 25, 2021, 6:19:35 AM
On Wed, 24 Nov 2021 11:54:50 -0500
Richard Damon <Ric...@Damon-Family.org> wrote:
>On 11/24/21 10:48 AM, Dozi...@thekennel.co wrote:
>> I doubt I could for example write a RB tree system better than that
>implemented
>> in the C++ STL. It has 30 years of refining behind it.
>
>So you can directly implement that algorithm in C. yes, it gets messier,
>but you CAN do it.

But there's no guarantee that it would be any faster.

>> Indeed, but your argument that C always produces more efficient code than C++
>
>> probably hasn't been true for 20 years.
>>
>
>FLAW: Not ALWAYS, but CAN, big difference.
>
>Yes, C++ idioms can generate complex code faster than C, and allows
>things to be put into libraries that C can't readily do, and thus an
>expert can write the library and 'lend' that expertise to an ordinary
>coder by them using the library. A custom coded C version can do just as
>well, because you can express the same instruction sequences in C as
>were generated in C++, you just need to be more explicit about it.

Not always. eg:

#include <iostream>

constexpr int func(int cnt)
{
    int val = 1;
    while (--cnt) val *= 2;
    return val;
}

int main()
{
    constexpr int val = func(10);
    std::cout << val << std::endl;
    return 0;
}


In C++14 and onwards func() will literally be run by the compiler at compile
time and the result hard coded into the binary. No C compiler can do that not
because the language couldn't do it in theory, but because its never been
specified in the standard.

Juha Nieminen

Nov 25, 2021, 6:47:13 AM
In comp.lang.c++ Dozi...@thekennel.co wrote:
> constexpr int func(int cnt)
> {
>     int val = 1;
>     while (--cnt) val *= 2;
>     return val;
> }
>
>
> int main()
> {
>     constexpr int val = func(10);
>     std::cout << val << std::endl;
>     return 0;
> }
>
>
> In C++14 and onwards func() will literally be run by the compiler at compile
> time and the result hard coded into the binary. No C compiler can do that not
> because the language couldn't do it in theory, but because its never been
> specified in the standard.

Actually gcc (and probably clang) will inline that function and calculate it
at compile time in C. You'll need a much more complicated constexpr function
for it to not be calculated at compile time by the compiler in C.

Richard Damon

Nov 25, 2021, 7:05:21 AM
But you as the C programmer can do that too.

Rather than write the expression func(10), I can manually compute the
value myself, since for this to work in C++ that 10 MUST be a known
constant anyway.

Again, the claim is that you can write the optimal code in C, but it may
be a lot of work. It can be optimal for the use of CPU, but may well not
be optimal for the use of Programmer.

Dozi...@thekennel.co

Nov 25, 2021, 10:58:09 AM
Possibly if you set optimisation quite high (and remove the constexpr keyword),
but you can't rely on it, whereas you can in C++ because it's in the standard.

Dozi...@thekennel.co

Nov 25, 2021, 10:59:43 AM
You could manually compute logarithm and trig tables but I imagine you'd
prefer to use log() and sin() instead.

>Again, the claim is that you can write the optimal code in C, but it may
>be a lot of work. It can be optimal for the use of CPU, but may well not
>be optimal for the use of Programmer.

Well quite.

Bonita Montero

Nov 25, 2021, 1:13:53 PM
> My variant-string class implement all the functionality I needed, and
> none of the stuff I wanted, and saved a LOT of memory as the standard
> string class include a LOT of virtual functions to do things I don't
> need, ...

basic_string<> doesn't include virtual functions, and if basic_string<>
did use virtual functions, they would cost only one VMT pointer per
object.

Paavo Helde

Dec 5, 2021, 3:52:04 PM
23.11.2021 00:19 Lynn McGuire wrote:
> "C Is The Greenest Programming Language" by: Chris Lott
>    https://hackaday.com/2021/11/18/c-is-the-greenest-programming-language/
>
> "Have you ever wondered if there is a correlation between a computer’s
> energy consumption and the choice of programming languages? Well, a

In my experience, any efforts towards more efficient software are easily
defeated by users throwing more data to the program.

Wow, so you can fully analyze a 1024x1024 microscope image in less than
a second! Wonderful, let's upgrade our hardware to take 2160x2160 images
in the next version!

So, you managed to speed up the calculation time of morphological
features by 50%. Fine, now we want to calculate 1000 features instead of
the former 20!

Thus I'm afraid any hopes for a "green" programming language are at best
naïve.

Chris M. Thomasson

Dec 5, 2021, 10:21:03 PM
Fwiw, a hardcore blow up is when I have to render volumes. 256^3 fine,
512^3 okay, 1024^3 slow, 2048^3 slow as shit...

4096^3 slower than a snail hiking up a mountain of salt!

Yikes!

Manfred

Dec 6, 2021, 6:28:38 AM
True, but this doesn't make inefficient languages any better, does it?

Paavo Helde

Dec 6, 2021, 6:40:55 AM
No, it does not.

Bonita Montero

Dec 6, 2021, 10:22:01 AM
On 05.12.2021 at 21:51, Paavo Helde wrote:

> In my experience, any efforts towards more efficient software
> are easily defeated by users throwing more data to the program.

And that should determine the absolute possible efficiency of a
programming language? I think not.

anti...@math.uni.wroc.pl

Dec 11, 2021, 8:26:08 AM
In comp.lang.c Bart <b...@freeuk.com> wrote:
> Consider that on my machine, even an empty program takes 0.02 seconds to
> run...

Get a better system if you care about such runtimes. A _statically_
compiled program on Linux needs less than 300 µs to run (of course
this assumes file caching works as intended; if you have a spinning
disk, first access still needs seek time).

--
Waldek Hebisch

anti...@math.uni.wroc.pl

Dec 11, 2021, 9:07:46 AM
In comp.lang.c Bart <b...@freeuk.com> wrote:
>
> (Also, how did Lisp manage to be only 3 times as slow as C? It was both
> interpreted /and/ dynamically typed last I looked.)

Lisp may be compiled to machine code, and when speed is important
you will choose an implementation with a good compiler. Concerning
"dynamically typed": Lisp has type declarations which are
optional. Without declarations your types can be anything, so
you get true dynamic typing. With declarations you can declare
that your variable contains machine integers or doubles, and a
good Lisp compiler will generate machine code directly.

It is debatable how much C's features contribute to the speed of the
resulting machine code. In Lisp each data structure has an
identifying header and you get efficient code only for arrays
of "primitive types", there is no way in Lisp to get something
with memory layout of C array of structs. So in principle
in C you may have more compact data structures that keep
related items together, which leads to better cache use
and consequently less time spent on memory accesses.
When it comes to computations, it is mostly matter of
effort spent on compiler. There was enormous effort spent
on C compilers and compilers like gcc are hard to beat.
Still, it depends on specific code: for simple code with
memory accesses that hit cache using Lisp I usually get about
half of speed of gcc-compiled C. More tricky cases may be
somewhat slower. For example I noted that Lisp compiler that
I use will use multiplication when sequentlially accessing
row of two dimensional array in a loop. This lead to machine
code that is 4-5 times slower than code generated by gcc.
When code is memory-intensive (say pointer chaising) speed
is essentially indepenent of language (of course assuming
that language can express reasonanably efficient pattern
of memory accesses).

Looking at generated machine code I would expect that Lisp
will give me better or equal runtime speed compared to
your C compiler.

As a little anecdote, I wrote code compiled via a Lisp compiler
that was intended to be speed-competitive with code in an
interpreted language calling optimized C libraries to
do the main work. AFAIK my code was faster, mainly because
I was able to create a combined operation which saved essentially
half the work compared to separate library calls. That was
enough to compensate for the C compiler's better code generator.
Of course, coding my routines in C I would have had both the gain
from the combined operation and the gain from better compiler
optimization. OTOH, the affected routines were about 60 lines
of code, while the whole thing was about 6000 lines.
Writing it all in C would take _much_ more effort and the
result would be less useful, as my routines were integrated
into an existing program (which would be harder to do with
C code).

--
Waldek Hebisch

Bart

Dec 11, 2021, 9:55:14 AM
You wouldn't normally care about 20ms overhead to start and terminate a
process, but when a program takes 60ms anyway for example, then it can
be significant if you want to know or compare actual performance.

Actually that 20ms isn't accurate. Measuring a block of 100 empty
programs (in both cases from a BAT or Bash script, both with SSD) gives
7ms per invocation on Windows, and 3ms [real time] on WSL.

3ms is 3000 us; not far off your 300 us figure, considering your machine
is likely faster and running true Linux.

Allowing for that 7ms on Windows, then 'tcc hello.c' takes 0.11ms; bcc
takes 18ms, and gcc takes 280ms.

Bart

Dec 11, 2021, 1:46:42 PM
To me the most apt description of Lisp seems to be 'shape-shifting'.

The language is dynamically typed when it suits, but can switch to
being statically typed when someone complains of the speed (though it
needs someone to revise the program).

It also can be run-from-source, or it can compile to some interpreted
internal form, or to native code (or, I guess, use some sort of JIT).

It also seems to be capable of anything. Don't like how it does
assignment? There are several other ways it can pull out of the hat.
That loop not powerful enough? There are half a dozen other forms with
any number of parameters.

You just can't pin it down. And therefore it seems immune to any
criticism; there is always a way to change it.

> As little anecdote, I wrote code compiled via Lisp compiler
> that was intended to be speed competivive with code in
> interpreted language calling to optimized C libraries to
> do main work. AFAIK my code was faster, mainly because
> I was able to create combined operation which saved essentally
> half work compared to separate library calls. That was
> enough to compensate better code generator from C compiler.
> Of course, coding my routine in C I would have both gain
> form combined operation and gain from better compiler
> optimization. OTOH affected routines were about 60 lines
> of code, while the whole thing was about 6000 lines.
> Writing all in C would take _much_ more effort and the
> result would be less useful as my routines were integrated
> into existing program (which would be harder to do with
> C code).

I always reckoned that the right balance of dynamic code plus static
code should not be more than 1-2 times as slow as 100% static (save for
programs with short runtimes).

And, sometimes, as you found, it can be a bit faster!

Ben Bacarisse

Dec 11, 2021, 3:11:38 PM
Bart <b...@freeuk.com> writes:

> To me the most apt description of Lisp seems to be 'shape-shifting'.
>
> The language is dynamically typed when it suits, but can switch to
> static typed when someone complains of the speed (but it needs someone
> to revise the program).

Handy that. Rapid prototyping and robust production code. It's the
same language though; you just use more of it as and when needed.

> It also can be run-from-source, or it can compile to some interpreted
> internal form, or to native code (or, I guess, use some sort of JIT).

Yes, there are a wide variety of implementations to suit different
needs.

> It also seems to be capable of anything;

It is. That is obviously literally correct (Turing completeness and all
that) but modern Common Lisp is extraordinarily capable.

> don't like how it does assignment; there several other ways it can
> pull out of the hat. That loop not powerful enough, there are half a
> dozen other forms with any number of parameters.

There /is/ a lot to learn. That's the perennial language design
dilemma. You can have small and a bit rigid, or large and very
flexible.

> You just can't pin it down. And therefore it seems immune to any
> criticism; there is always a way to change it.

Actually it is pretty well pinned down. There is a stable standard for
Common Lisp which is widely implemented and does not change half as much
as, say, the C standard.

--
Ben.

Tim Rentsch

Dec 16, 2021, 10:55:59 AM
Juha Nieminen <nos...@thanks.invalid> writes:

> [...] Most object-oriented languages don't really care about,
> especially, memory consumption. Especially garbage-collected OO
> languages don't give a flying F about memory consumption. [...]

A humorous statement, considering that there were OOP systems,
including garbage collection, running years before even the
predecessor to C++, "C with Classes", existed. And these systems
ran on hardware platforms that didn't have enough memory to run
probably any C++ compiler of the last 20+ years.

Note by the way that using garbage collection very likely reduced
the overall memory footprint, because in all the places where there
would have been code to free memory, no code is needed; it is replaced
by a single piece of code in the memory manager.

Richard

Dec 16, 2021, 11:24:38 AM
[Please do not mail me a copy of your followup]

Tim Rentsch <tr.1...@z991.linuxsc.com> spake the secret code
<861r2c4...@linuxsc.com> thusly:

>A humorous statement, considering that there were OOP systems,
>including garbage collection, running years before even the
>predecessor to C++, "C with Classes", existed.

Are you thinking of Smalltalk or LISP?

>Note by the way that using garbage collection very likely reduced
>the overall memory footprint, because all the places where there
>would have been code to free memory no code is needed, replaced
>by a single piece of code in the memory manager.

An interesting hypothesis, but I'm not convinced without data.
--
"The Direct3D Graphics Pipeline" free book <http://tinyurl.com/d3d-pipeline>
The Terminals Wiki <http://terminals-wiki.org>
The Computer Graphics Museum <http://computergraphicsmuseum.org>
Legalize Adulthood! (my blog) <http://legalizeadulthood.wordpress.com>

Tim Rentsch

Dec 16, 2021, 10:46:28 PM
legaliz...@mail.xmission.com (Richard) writes:

> Tim Rentsch <tr.1...@z991.linuxsc.com> spake the secret code
> <861r2c4...@linuxsc.com> thusly:
>
>> A humorous statement, considering that there were OOP systems,
>> including garbage collection, running years before even the
>> predecessor to C++, "C with Classes", existed.
>
> Are you thinking of Smalltalk or LISP?

Smalltalk.

>> Note by the way that using garbage collection very likely reduced
>> the overall memory footprint, because all the places where there
>> would have been code to free memory no code is needed, replaced
>> by a single piece of code in the memory manager.
>
> An interesting hypothesis, but I'm not convinced without data.

Probably it depends on what applications are being programmed, given
that Smalltalk is an interactive programming and development system.
But surely it becomes true at some point as more code is written,
since the amount of non-automatic memory management code would grow
as additional applications are written, but code (and so the amount
of code) in the garbage collector stays fixed regardless of how much
application code is added.

Juha Nieminen

Dec 17, 2021, 1:30:24 AM
Tim Rentsch <tr.1...@z991.linuxsc.com> wrote:
> A humorous statement, considering that there were OOP systems,
> including garbage collection, running years before even the
> predecessor to C++, "C with Classes", existed. And these systems
> ran on hardware platforms that don't have enough memory to run
> probably any C++ compiler of the last 20+ years.
>
> Note by the way that using garbage collection very likely reduced
> the overall memory footprint, because all the places where there
> would have been code to free memory no code is needed, replaced
> by a single piece of code in the memory manager.

What systems had so little memory that they couldn't even run a C++
compiler, yet had an operating system that offered a memory manager
with GC? And, ostensibly, was a multitasking OS (given that you are
arguing the programs had an overall smaller memory footprint).

Tim Rentsch

Dec 18, 2021, 10:45:31 AM
There was no operating system in the conventional sense of the
term. Garbage collection was part of the VM underlying the
OOP environment. Multitasking was done in and by the OOP
environment itself, with "stack frames" being objects, and so
could be queued for later resumption, switched between, etc.

Richard

Dec 30, 2021, 7:02:14 AM
[Please do not mail me a copy of your followup]

Tim Rentsch <tr.1...@z991.linuxsc.com> spake the secret code
<86v8zl2...@linuxsc.com> thusly:

>There was no operating system in the conventional sense of the
>term. Garbage collection was part of the VM underlying the
>OOP environment. Multitasking was done in and by the OOP
>environment itself, with "stack frames" being objects, and so
>could be queued for later resumption, switched between, etc.

Smalltalk and LISP Machine environments are really quite different
from what we assume computers are like today. For small resource
machines, FORTH probably comes close, although AFAIK it never had a
stock GUI, like the Smalltalk and LISPM environments typically did.

Tim Rentsch

Jan 16, 2022, 3:22:58 PM
legaliz...@mail.xmission.com (Richard) writes:

> Tim Rentsch <tr.1...@z991.linuxsc.com> spake the secret code
> <86v8zl2...@linuxsc.com> thusly:
>
>> There was no operating system in the conventional sense of the
>> term. Garbage collection was part of the VM underlying the
>> OOP environment. Multitasking was done in and by the OOP
>> environment itself, with "stack frames" being objects, and so
>> could be queued for later resumption, switched between, etc.
>
> Smalltalk and LISP Machine environments are really quite different
> from what we assume computers are like today. [...]

Lisp machines are special purpose hardware. Smalltalk VMs
run on standard hardware, much like Java bytecode interpreters
do today. Also, the memory management code runs inside the VM,
so it is very much like standard code running on conventional
hardware of today (after taking into account the more limited
memory space and 16-bit registers, etc). The key point is
that the system as a whole is very careful with how it uses
memory, unlike the earlier claim about OOP environments using
memory with wild abandon.

Richard

Jan 17, 2022, 4:28:34 PM
[Please do not mail me a copy of your followup]

Tim Rentsch <tr.1...@z991.linuxsc.com> spake the secret code
<86lezft...@linuxsc.com> thusly:

>legaliz...@mail.xmission.com (Richard) writes:
>
>> Smalltalk and LISP Machine environments are really quite different
>> from what we assume computers are like today. [...]
>
>Lisp machines are special purpose hardware. Smalltalk VMs
>run on standard hardware, much like Java bytecode interpreters
>do today.

I was thinking of the Smalltalk workstations from the 80s like those
made by Tektronix. I don't think they used a VM, but ran on the bare
metal.

Tim Rentsch

Jan 17, 2022, 6:48:39 PM
legaliz...@mail.xmission.com (Richard) writes:

> Tim Rentsch <tr.1...@z991.linuxsc.com> spake the secret code
> <86lezft...@linuxsc.com> thusly:
>
>> legaliz...@mail.xmission.com (Richard) writes:
>>
>>> Smalltalk and LISP Machine environments are really quite different
>>> from what we assume computers are like today. [...]
>>
>> Lisp machines are special purpose hardware. Smalltalk VMs
>> run on standard hardware, much like Java bytecode interpreters
>> do today.
>
> I was thinking of the Smalltalk workstations from the 80s like those
> made by Tektronix. I don't think they used a VM, but ran on the bare
> metal.

As I recall Tektronix was one of four companies licensed by Xerox
to port a Smalltalk-80 VM to other systems. One motivation for
offering these licenses was to help debug the writing in
"Smalltalk-80: The language and its implementation", where most
of the implementation part was about how the VM works and how to
write one. Given that, it would be strange if the Tektronix
effort did not use a VM but instead ran a standard Smalltalk
image on bare hardware.

Richard

Jan 18, 2022, 1:48:07 AM
[Please do not mail me a copy of your followup]

Tim Rentsch <tr.1...@z991.linuxsc.com> spake the secret code
<861r15s...@linuxsc.com> thusly:

>As I recall Tektronix was one of four companies licensed by Xerox
>to port a Smalltalk-80 VM to other systems. One motivation for
>offering these licenses was to help debug the writing in
>"Smalltalk-80: The language and its implementation", where most
>of the implementation part was about how the VM works and how to
>write one. Given that, it would be strange if the Tektronix
>effort did not use a VM but instead ran a standard Smalltalk
>image on bare hardware.

Interesting! I didn't know that! I found a PDF of the book on
archive.org, so I'm going to take a look at that! (If others are
interested: <https://archive.org/details/smalltalk80langu00gold>)