Manual deallocation


Pablo Zubieta

Jul 7, 2014, 4:56:35 PM
to julia...@googlegroups.com
Let's say I have some large matrices that I use to get some results in the first part of a computation. In the second part I use those results, but I no longer need the initial matrices. Suppose also that I preallocate those matrices.

Would it be ok to bind the names of those matrices to `nothing` (or something similar) from the moment I stop using them, or should I leave the deallocation work to the GC?

Ivar Nesje

Jul 8, 2014, 2:04:43 AM
to julia...@googlegroups.com
In Julia we don't say you shouldn't do something that could give better performance (if you really want it). The thing is that Julia uses automatic garbage collection because manual memory management is a pain, and so you have to live with the semantics of a garbage collector.

If your program is not really constrained by memory in the second part, I would guess it is unlikely to matter to your program when the arrays are released. Freeing memory in Julia (and other GC-based languages) is about ensuring that no references remain to the allocated object.

If it is a global variable, you can assign `nothing`. If it is a global constant, you cannot change its type, so you must reassign it to a smaller array with the same dimensionality and element type, and ensure that you don't have local variables that reference the same array.

If it is a local variable, I'm not sure there are other options than to arrange the function boundaries so that the large array goes out of scope when it is not needed any more.
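
A minimal sketch of the three cases (the array sizes and function names here are made up for illustration):

```julia
# Case 1: non-constant global. Rebinding to `nothing` drops the reference,
# so the array becomes collectible at the next GC.
big_global = zeros(1000, 1000)
# ... use big_global ...
big_global = nothing

# Case 2: `const` global. Its type can't change, so you would rebind it to a
# tiny array of the same type instead (not shown here; redefining a const at
# top level works in practice but prints a warning).

# Case 3: local variable. Arrange function boundaries so the large array
# goes out of scope once only the small result is needed.
function first_part()
    A = zeros(1000, 1000)     # large temporary, local to this function
    return sum(A)             # only the small scalar result escapes
end

function second_part(result)
    # the matrix A is unreachable here and free to be collected
    return result + 1
end

second_part(first_part())
```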

Stefan Karpinski

Jul 8, 2014, 12:18:37 PM
to Julia Users
Writing `A = nothing` in Julia will not cause the memory used by A to be freed immediately. That happens in reference counted systems, which many dynamic languages traditionally have been, but which Julia is not. Instead, the memory for A will be freed the next time a garbage collection occurs. This consists of the language runtime stopping everything it's doing, tracing through the graph of all objects in memory, marking the ones it can still reach, and freeing all the rest. So if doing `A = nothing` causes there to be no more reachable references to the object that A used to point at, then that object will be freed when the next garbage collection occurs.

Normally, garbage collection occurs automatically when the system tries to allocate something and doesn't have enough memory to do so: it runs the garbage collector and then tries again. You can, however, call gc() to force garbage collection to occur now. This is generally not necessary or recommended.
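
For example (using the modern `GC.gc()` spelling; in 2014-era Julia this was the bare `gc()`):

```julia
A = rand(1000, 1000)   # ~8 MB allocation
# ... use A ...
A = nothing            # the memory is NOT freed here, only unreferenced
GC.gc()                # forces a stop-the-world collection right now
# Normally you never call this: collection runs automatically under
# allocation pressure, and forcing it is usually a pessimization.
```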

Abraham Egnor

Jul 8, 2014, 12:31:12 PM
to julia...@googlegroups.com
Are there any plans to document the Julia GC (i.e. your comment implies that it's a stop-the-world GC) and/or add performance tuning knobs?

Stefan Karpinski

Jul 8, 2014, 12:33:29 PM
to Julia Users
It is currently stop-the-world, although that could change. What kind of documentation would you be interested in? There's really nothing interesting about Julia's GC at this point: it's completely vanilla stop-the-world, mark-and-sweep.
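
For intuition only, the mark-and-sweep idea can be sketched over a hand-built object graph (this toy is in no way how Julia's actual collector is implemented):

```julia
# Each toy object carries a mark bit and outgoing references.
mutable struct Obj
    marked::Bool
    refs::Vector{Obj}
end
Obj() = Obj(false, Obj[])

# Mark phase: trace everything reachable from a root.
# The early return on already-marked objects also handles cycles.
function mark!(root::Obj)
    root.marked && return
    root.marked = true
    foreach(mark!, root.refs)
end

# Sweep phase: drop unmarked objects, then clear marks for the next cycle.
function sweep!(heap::Vector{Obj})
    filter!(o -> o.marked, heap)
    foreach(o -> o.marked = false, heap)
    return heap
end

# Tiny heap: root -> a -> b, plus an unreachable c.
root, a, b, c = Obj(), Obj(), Obj(), Obj()
push!(root.refs, a); push!(a.refs, b)
heap = [root, a, b, c]

mark!(root)      # stop-the-world: trace from the root set
sweep!(heap)     # c is unreachable and gets swept; root, a, b survive
```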

Patrick O'Leary

Jul 8, 2014, 1:26:05 PM
to julia...@googlegroups.com
You may also find this pull request interesting: https://github.com/JuliaLang/julia/pull/5227

Abraham Egnor

Jul 8, 2014, 3:46:13 PM
to julia...@googlegroups.com
Mostly I was looking for documentation on performance characteristics, although I'd settle for anything at all :) The Julia home page and manual have zero mention of memory management that I could find, except for a FAQ entry about unbinding large values in the REPL. I inferred GC, but the runtime could also have been using refcounting, region inference, or something more exotic.

Stefan Karpinski

Jul 8, 2014, 3:58:16 PM
to Julia Users
Yes, we should probably have a manual chapter on memory management.

Stefan Karpinski

Jul 8, 2014, 4:00:32 PM
to Julia Users

Pablo Zubieta

Jul 8, 2014, 9:08:40 PM
to julia...@googlegroups.com
Thanks for the answers. I agree that a manual chapter would be helpful.

Jorge Fernández de Cossío Díaz

Apr 27, 2016, 8:39:13 PM
to julia-users
Why is Julia not reference counted?
Probably someone has written an explanation of this that I can read, so if anyone can point me in the right direction, that would be great.

Stefan Karpinski

Apr 27, 2016, 9:01:21 PM
to Julia Users
Performance. If you want to be as fast as C, reference counting doesn't cut it.

Yichao Yu

Apr 27, 2016, 9:18:20 PM
to Julia Users
On Wed, Apr 27, 2016 at 9:00 PM, Stefan Karpinski <ste...@karpinski.org> wrote:
> Performance. If you want to be as fast as C, reference counting doesn't cut
> it.

With slightly more detail: RC has relatively low latency but also low throughput. The issue is that RC adds a lot of overhead to common operations like stack and heap stores. (Naively you need an atomic increment and an atomic decrement per store, which is a huge cost.)

Of course there are ways to optimize this. What's interesting, though, is that tracing collectors sometimes implement something similar to RC (in the form of write barriers) in order to minimize latency, and good RC systems implement optimizations that are very similar to tracing collectors (effectively delaying the counting and doing it in batches) in order to improve throughput and handle cyclic references.
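
The per-store cost can be sketched with a toy refcount (all names here are made up; this is not Julia's runtime):

```julia
using Base.Threads: Atomic, atomic_add!, atomic_sub!

# A toy reference-counted object: the count must be atomic because any
# thread holding a reference may store or drop it.
mutable struct RCObject
    count::Atomic{Int}
    data::Vector{Float64}
end
RCObject(data) = RCObject(Atomic{Int}(1), data)

# A single pointer "store" into a slot: one atomic increment for the
# incoming reference, one atomic decrement for the outgoing one.
function rc_store!(slot::Ref{RCObject}, new::RCObject)
    atomic_add!(new.count, 1)            # retain the incoming reference
    old = slot[]
    slot[] = new
    if atomic_sub!(old.count, 1) == 1    # atomic_sub! returns the OLD value
        # count dropped to zero: reclaim eagerly (this is RC's low latency)
        empty!(old.data)
    end
    return nothing
end

x = RCObject(zeros(3)); y = RCObject(ones(3))
slot = Ref(x)
rc_store!(slot, y)   # x's count hits zero and its storage is reclaimed at once
```

Two atomic operations on every store is exactly the tax the message above describes; a tracing collector pays nothing here and defers all the work to collection time.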

Cedric St-Jean

Apr 28, 2016, 8:53:23 AM
to julia-users
I'd like to know the cost of weak references / dicts. In SBCL, they had a catastrophic O(N^2) impact on GC.

Yichao Yu

Apr 28, 2016, 9:35:40 AM
to Julia Users


On Apr 28, 2016 8:53 AM, "Cedric St-Jean" <cedric...@gmail.com> wrote:
>
> I'd like to know the cost of weak references / dicts. In SBCL, they had a catastrophic O(N^2) impact on GC.

What's N? Heap size? Live cells? Dead cells? Number of weak refs?

Cedric St-Jean

Apr 28, 2016, 10:10:25 AM
to julia-users
Number of weak references. That was 7-8 years ago; I can't find a source now. Maybe it was fixed. Maybe I misunderstood - O(N^2) doesn't make much sense. But I clearly remember using them a lot to "add fields" to existing objects, and profiling showed >90% of the time in GC.

Yichao Yu

Apr 28, 2016, 10:15:21 AM
to Julia Users
On Thu, Apr 28, 2016 at 10:10 AM, Cedric St-Jean
<cedric...@gmail.com> wrote:
> Number of weak references. That was 7-8 years ago, I can't find a source
> now. Maybe it was fixed. Maybe I misunderstood - O(N^2) doesn't make much
> sense. But I clearly remember using them a lot to "add fields" to existing
> objects, and profiling to see >90% GC time.

The Julia implementation of weak refs should add an O(number of weak refs) cost to each GC, which is already O(number of (young) live objects). I'm not sure how a bad implementation would cause the problem you described, although I haven't looked at too many GC implementations either. Please report a performance bug if you see a similar issue.

Cedric St-Jean

Apr 28, 2016, 12:11:17 PM
to julia-users
Thanks!