
Is reference counting slower than GC?


Sky89
May 2, 2018, 4:15:55 PM

Hello..


Is reference counting slower than GC?

I think that Java and other garbage-collected interpreters and
compilers are not general purpose. Read, for example, this:


"Total memory matters

GC tends to result in more memory being used in total. The trigger for
scanning is often low memory conditions, meaning that a lot of memory
needs to be allocated prior to a scanning operation. This has
significant OS level implications: memory used by the app must be
swapped in/out, and also prevents other apps, and file caches, from
using the memory.

It’s my feeling, from experience, that it’s this aspect of GC’d
applications that cause the biggest performance impact. The overall
system just goes slower due to the large amount of memory involved. It
doesn’t seem like it needs to be a fundamental problem of a GC though.
The structure of a standard library, and typical programming paradigms,
contributes a lot to this effect."


Read more here:

https://mortoray.com/2016/05/24/is-reference-counting-slower-than-gc/




Thank you,
Amine Moulay Ramdane.


Siri Cruise
May 2, 2018, 5:06:15 PM

In article <pcd69g$aqh$3...@dont-email.me>, Sky89 <Sk...@sky68.com> wrote:

> Is reference counting slower than GC?

Reference counting distributes the cost over the entire execution, while GC tends
to split it between program execution and collection.

> I think that Java and other garbage-collected interpreters and
> compilers are not general purpose. Read, for example, this:

GC is more general. It can manage any kind of cyclic or acyclic graph. Reference
counting only works on cyclic graphs if the programmer distinguishes forward edges
from back edges as they are made, which can put a considerable burden on the
programmer. With GC you just add edges as needed, and then you can use something
like Tarjan's algorithm to distinguish forward and back edges if needed.
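
To put that in C++ terms, here is a rough sketch (Node, child and parent are
made-up names for illustration): a back edge held through a shared_ptr would
keep the cycle alive forever, while marking it as a weak_ptr breaks the cycle.
A tracing GC would reclaim the cycle either way.

#include <memory>

// Hypothetical parent/child pair, purely illustrative.
struct Node {
    std::shared_ptr<Node> child;   // forward edge: owning reference
    std::weak_ptr<Node>   parent;  // back edge: non-owning, breaks the cycle
};

int main() {
    auto a = std::make_shared<Node>();
    auto b = std::make_shared<Node>();
    a->child  = b;    // a owns b
    b->parent = a;    // back edge; were this a shared_ptr, a and b would keep
                      // each other alive and neither would ever be destroyed
}   // a and b go out of scope here and both Nodes are freed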

> GC tends to result in more memory being used in total. The trigger for

Typical modern computers have lots of memory. I've been using Boehm for years
without a problem, while Safari Web Content, presumably reference counted because
It Came From Apple, regularly crashes on a memory leak, sometimes taking out the
whole system.

Apple went with reference counting despite the cyclic graph problem because they
were more concerned about an interface freezing during a collection phase than
about having to deal with cyclic graphs. My code generally has to be prepared for
cyclic graphs.

--
:-<> Siri Seal of Disavowal #000-001. Disavowed. Denied. Deleted. @
'I desire mercy, not sacrifice.' /|\
I'm saving up to buy the Donald a blue stone This post / \
from Metebelis 3. All praise the Great Don! insults Islam. Mohammed

Sky89
May 2, 2018, 5:34:22 PM

On 5/2/2018 5:06 PM, Siri Cruise wrote:
> In article <pcd69g$aqh$3...@dont-email.me>, Sky89 <Sk...@sky68.com> wrote:
>
>> Is reference counting slower than GC?
>
> Reference counting distributes the cost over the entire execution, while GC
> tends to split it between program execution and collection.
>
>> I think that Java and other garbage-collected interpreters and
>> compilers are not general purpose. Read, for example, this:
>
> GC is more general. It can manage any kind of cyclic or acyclic graph.
> Reference counting only works on cyclic graphs if the programmer distinguishes
> forward edges from back edges as they are made, which can put a considerable
> burden on the programmer. With GC you just add edges as needed, and then you
> can use something like Tarjan's algorithm to distinguish forward and back
> edges if needed.
>
>> GC tends to result in more memory being used in total. The trigger for
>
> Typical modern computers have lots of memory. I've been using Boehm for years
> without a problem, while Safari Web Content, presumably reference counted
> because It Came From Apple, regularly crashes on a memory leak, sometimes
> taking out the whole system.
>
> Apple went with reference counting despite the cyclic graph problem because
> they were more concerned about an interface freezing during a collection phase
> than about having to deal with cyclic graphs. My code generally has to be
> prepared for cyclic graphs.
>


Hello,

"Perhaps the most significant problem is that programs that rely on
garbage collectors often exhibit poor locality (interacting badly with
cache and virtual memory systems), occupy more address space than the
program actually uses at any one time, and touch otherwise idle pages.
These may combine in a phenomenon called thrashing, in which a program
spends more time copying data between various grades of storage than
performing useful work. They may make it impossible for a programmer to
reason about the performance effects of design choices, making
performance tuning difficult. They can lead garbage-collecting programs
to interfere with other programs competing for resources"

Vir Campestris
May 3, 2018, 4:57:54 PM

On 02/05/2018 21:15, Sky89 wrote:
> Is reference counting slower than GC?

That depends. On lots of things. This is pretty widely studied.

GC needs more memory than reference counting, but on the other hand it
neatly solves the cyclic graph problem Siri Cruise mentioned.

A lot of small programs terminate before they need to run GC - and for
them it's probably faster.

Andy

Alf P. Steinbach
May 3, 2018, 7:54:32 PM

It might be a tragedy of the commons. For each individual program it can
be faster to just leak memory, provided it terminates before running out
of memory. But when most or all programs do that, I can imagine that
things will run much more slowly on the whole.

So that brings in the idea of government (OS) regulation and compliance
enforcement.

Everything is politics. :-p


Cheers!,

- Alf

Jorgen Grahn
May 4, 2018, 7:21:50 AM

On Thu, 2018-05-03, Vir Campestris wrote:
> On 02/05/2018 21:15, Sky89 wrote:
>> Is reference counting slower than GC?
>
> That depends. On lots of things. This is pretty widely studied.

It's worth noting that C++ code typically uses neither. (Specifically,
a few std::shared_ptr here and there, versus Python's "everything is a
refcounted object".)
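
A rough sketch of what "neither" tends to look like in practice (Config and
Connection are made-up types): mostly plain values and unique_ptr, with
shared_ptr reserved for the few places where ownership really is shared.

#include <memory>
#include <string>
#include <vector>

struct Config     { std::string name; };   // made-up types, purely illustrative
struct Connection { int fd = -1; };

int main() {
    std::vector<std::string> lines;              // plain value: no count, no GC
    auto conn = std::make_unique<Connection>();  // single owner, freed deterministically
    auto cfg  = std::make_shared<Config>();      // counted only where ownership
                                                 // really is shared
    (void)lines; (void)conn; (void)cfg;
}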

/Jorgen

--
// Jorgen Grahn <grahn@ Oo o. . .
\X/ snipabacken.se> O o .

Chris M. Thomasson
May 4, 2018, 4:05:40 PM

On 5/2/2018 1:15 PM, Sky89 wrote:
> Hello..
>
>
> Is reference counting slower than GC?

[...]

What type of reference counting? Proxy reference counting can get around
cycles because the objects do not need to count references to one another.
They are covered by a proxy count, so each object does not need its own
counter.
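
Here is one very simplified reading of the idea, not a real lock-free proxy
collector: the objects live inside a region that carries the only count, so
they can point at each other freely, cycles included, and everything is
reclaimed together when the proxy count drops to zero. (Region, Obj and make
are made-up names.)

#include <memory>
#include <vector>

struct Obj {
    Obj* next = nullptr;   // raw links between objects, even cyclic ones, are
                           // fine: no per-object counter exists
};

struct Region {
    std::vector<std::unique_ptr<Obj>> objects;
    Obj* make() {
        objects.push_back(std::make_unique<Obj>());
        return objects.back().get();
    }
};   // destroying the Region frees every Obj at once, cycles included

int main() {
    auto region = std::make_shared<Region>();   // the single, shared proxy count
    Obj* a = region->make();
    Obj* b = region->make();
    a->next = b;
    b->next = a;   // a cycle, but it costs nothing to reclaim
}   // last reference to region dropped here, so all objects are freed together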

Juha Nieminen
May 7, 2018, 2:04:15 AM

Sky89 <Sk...@sky68.com> wrote:
> Is reference counting slower than GC?

GC implementations tend to make allocation and deallocation significantly
more efficient (in the immediate sense at least). If you allocate and
deallocate a million individual small objects in C++ at random, it will
be really slow (which is why you generally want to avoid doing that
like the plague), while in languages like Java and C# it's a complete
non-issue.
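
A rough way to see the difference from the C++ side (Small is a made-up type,
and the timings obviously depend on the allocator and platform): a million
individual heap allocations versus the same objects in one contiguous block.

#include <chrono>
#include <cstdio>
#include <memory>
#include <vector>

struct Small { int payload[4]; };   // a made-up small object

int main() {
    using clock = std::chrono::steady_clock;
    constexpr int N = 1000000;

    auto t0 = clock::now();
    {
        // One heap allocation (and later one deallocation) per object.
        std::vector<std::unique_ptr<Small>> individual;
        individual.reserve(N);
        for (int i = 0; i < N; ++i)
            individual.push_back(std::make_unique<Small>());
    }   // all N objects are deleted here, one by one
    auto t1 = clock::now();
    {
        // The same objects stored by value in one contiguous block.
        std::vector<Small> flat(N);
    }   // a single deallocation here
    auto t2 = clock::now();

    auto ms = [](auto d) {
        return static_cast<long long>(
            std::chrono::duration_cast<std::chrono::milliseconds>(d).count());
    };
    std::printf("per-object new/delete: %lld ms, one block: %lld ms\n",
                ms(t1 - t0), ms(t2 - t1));
}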

Also, handling reference-counted pointers adds a layer of overhead
which doesn't need to exist in a GC'd environment (each time you copy
or assign a reference-counted pointer, there's an additional step
involved, which isn't so in a GC'd environment). This is also why you
generally want to avoid reference-counted smart pointers in C++
if it's reasonable to do so, at least in situations where such
pointers are copied and assigned around a lot.
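
In C++ terms, the usual way to avoid that extra step is simply not to copy the
counted pointer when the callee does not take ownership. A small sketch
(Widget is a made-up type):

#include <memory>

struct Widget { int value = 0; };

// Copying the shared_ptr bumps the (atomic) count on entry and drops it on exit.
int by_copy(std::shared_ptr<Widget> w) { return w->value; }

// A const reference (or a plain reference/pointer when no ownership is passed)
// involves no count manipulation at all.
int by_ref(const std::shared_ptr<Widget>& w) { return w->value; }
int by_raw(const Widget& w)                  { return w.value; }

int main() {
    auto w = std::make_shared<Widget>();
    by_copy(w);   // one increment plus one decrement of the reference count
    by_ref(w);    // no count traffic
    by_raw(*w);   // no count traffic, and no dependence on the pointer type
}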

Also, many/most GC language implementations allow for optimization
techniques that are simply not possible in C/C++, such as memory
compaction (which helps with memory fragmentation and cache
locality).

A lot of work has been done over the decades to make the garbage
collection sweeping process as light-weight as possible, and have
as little impact as possible on the performance of the program.
I haven't followed how well they have succeeded in this, however.

That being said, there are of course compromises with GC. Programs
written in GC'd languages tend to consume more memory than, e.g.,
C++ programs that have even a modicum of memory usage optimality
(because in C++ you can handle objects by value instead of being
forced to allocate every single object dynamically). C++ often
allows for low-level ("hacker") optimizations to make programs
more efficient in terms of speed and memory usage, which many
"higher-level" GC'd languages simply don't support.

OTOH, many advocates of GC'd (so-called "safe") languages are
completely ready to pay that small price for the convenience
of not having to think about memory management, letting the
language and runtime environment take care of it, even if
it means slightly increased memory usage and perhaps a
somewhat slower program.

Öö Tiib
May 7, 2018, 2:27:53 AM

On Monday, 7 May 2018 09:04:15 UTC+3, Juha Nieminen wrote:
>
> That being said, there are of course compromises with GC. Programs
> written in GC'd languages tend to consume more memory than, e.g.,
> C++ programs that have even a modicum of memory usage optimality
> (because in C++ you can handle objects by value instead of being
> forced to allocate every single object dynamically). C++ often
> allows for low-level ("hacker") optimizations to make programs
> more efficient in terms of speed and memory usage, which many
> "higher-level" GC'd languages simply don't support.

The choice of handling some objects by value and others by reference
does not have to be as low-level as in C++. For example, in Swift
(the idea perhaps came from D), if a type is declared as a "struct"
then its instances are handled by value, while instances of "class"
types are handled by reference.
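
In C++, by contrast, that choice is made at each use site rather than in the
type declaration. A small sketch (Widget is a made-up type):

#include <memory>

struct Widget { int id = 0; };   // one type; the user decides how to hold it

int main() {
    Widget a{1};                           // value semantics: independent copy below
    Widget b = a;
    b.id = 2;                              // does not affect a

    auto c = std::make_shared<Widget>();   // reference semantics, reference counted
    c->id = 3;
    auto d = c;                            // d and c refer to the same object
    d->id = 4;                             // visible through c as well

    return (a.id == 1 && c->id == 4) ? 0 : 1;
}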