haxe memory management best practices


Pier Bover

Oct 8, 2014, 3:15:22 PM
to haxe...@googlegroups.com
Sorry if this has been discussed before, but I did a few google searches and couldn't find any practical info.

If I understand correctly, the GC is implemented differently for each target. I presume the C++ target must be using shared pointers or something similar to achieve this, and so on for every other target.

So what are the best practices for dealing with the Haxe GC? Is there a way to control when / how it works (for example when the app is idle)?

My usual pattern for dealing with GC in other languages is creating a destroy() method that nulls any member vars that could otherwise keep objects reachable. Then I simply null the object itself. Is this OK in Haxe? Is there something else I should be aware of?
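
To illustrate, something like this (Enemy and Player are just made-up names):

class Player {
    public function new() {}
}

class Enemy {
    var target:Player;
    var buffer:haxe.io.Bytes;

    public function new(target:Player) {
        this.target = target;
        buffer = haxe.io.Bytes.alloc(1024 * 1024);
    }

    // Drop member references so this instance no longer keeps them alive.
    public function destroy():Void {
        target = null;
        buffer = null;
    }
}

class Main {
    static function main() {
        var enemy = new Enemy(new Player());
        // ... use it ...
        enemy.destroy();
        enemy = null; // then drop the object itself
    }
}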


Juraj Kirchheim

Oct 8, 2014, 3:30:12 PM
to haxe...@googlegroups.com
AFAIK the hxcpp runtime uses a Boehm GC, as described here: http://en.wikipedia.org/wiki/Boehm_garbage_collector

You can force the GC to run like so: http://api.haxe.org/cpp/vm/Gc.html#run
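
Roughly like this on the cpp target (going from memory here, so double-check against the API docs):

import cpp.vm.Gc;

class Main {
    static function main() {
        // Pause collections around a latency-sensitive stretch...
        Gc.enable(false);
        // ... do the time-critical work ...
        Gc.enable(true);

        // ...and force a full (major) collection when the app is idle.
        Gc.run(true);
    }
}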

The best practice for any kind of performance issue (which includes dealing with the GC) is to profile the app to find bottlenecks and then deal with those accordingly.

Having destroy methods such as the one you describe is unlikely to have much of an effect with a mark-and-sweep GC. Such a GC is able to collect isolated object graphs all the same, so you may actually be wasting time by setting all the references to null. Not only that, you are also introducing a source of bugs, since you are explicitly putting objects in an invalid state. If you are willing to take on the responsibility of handling allocation and deallocation yourself, then I suggest using a pool, as explained here: https://groups.google.com/forum/#!topic/haxelang/nkRDqO6ul9s
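
A bare-bones pool is really just this (a rough sketch; the linked thread has more complete implementations):

class Pool<T> {
    var items:Array<T>;
    var create:Void->T;

    public function new(create:Void->T) {
        this.create = create;
        items = [];
    }

    // Reuse a returned instance if we have one, otherwise allocate a new one.
    public function get():T {
        return items.length > 0 ? items.pop() : create();
    }

    // Hand the instance back instead of letting it become garbage.
    public function put(item:T):Void {
        items.push(item);
    }
}

class Bullet {
    public var x:Float = 0;
    public var y:Float = 0;
    public function new() {}
}

class Main {
    static function main() {
        var bullets = new Pool(function() return new Bullet());
        var b = bullets.get();
        // ... use b, then hand it back ...
        bullets.put(b);
    }
}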

So anyway, measure first, then take action. It may be that in some cases even a destroy method like the one you have actually speeds things up. You never know. That is, unless you measure ;)

Best,
Juraj


Raoul Duke

Oct 8, 2014, 3:32:40 PM
to haxe...@googlegroups.com
> AFAIK the hxcpp runtime uses a Boehm GC, as described here:
> http://en.wikipedia.org/wiki/Boehm_garbage_collector

https://www.google.com/search?q=hxcpp+gc+immix

Juraj Kirchheim

Oct 8, 2014, 3:41:36 PM
to haxe...@googlegroups.com
Oh, ok, thanks. Immix it is. The rest still applies though ;)

Pier Bover

Oct 8, 2014, 3:42:36 PM
to haxe...@googlegroups.com
So what you are saying is that the GC is smart enough to know that an object that still has references can still be garbage collected?
--
Pier Bover
y...@pierbover.com

Raoul Duke

Oct 8, 2014, 3:44:52 PM
to haxe...@googlegroups.com
> So what you are saying is that the GC is smart enough to know that an object
> that still has references can still be garbage collected?

if they are cycles, yes, that is the whole selling point of GC over
ref-counting :-)

note that no matter what, all forms of memory management suck one way
or another. i've used 'em all, son, lemme tell ya... :-{

Luca

Oct 8, 2014, 3:45:34 PM
to haxe...@googlegroups.com
I wouldn't be so quick to say that, back2dos :) We've hit a lot of issues at work trying to convince the GC in Firefox/Chrome for JS, and similar issues in the past with Flash: when you have a really heavy app using hundreds of MB, up towards a GB of RAM, even if you're object pooling almost everything, you still get serious GC slowdowns and have things not being garbage collected over time even when you *do* set every single reference to them to null. However, in general, setting things to null makes many GCs' job a little easier and their slowdowns more predictable. Certainly the Flash runtime used to (and may still) have issues with object graphs being too large, leading to the GC simply 'giving up' on trying to collect the graph even if it *is* disconnected at the root.

David Elahee

Oct 8, 2014, 3:45:57 PM
to haxe...@googlegroups.com
From my experience profiling is a part of it, but it is very limited, as you will hit hidden-cost walls that few know how to detect and solve *emphasis on that*.

Having experience with optimising for the GC is a must.
Our own top tips (in order):

- ensure there are no byte buffer copies on the way from the data to the hardware
- check the state-of-the-art optimisation hints from hardware nerds
- use inline constructors, as they eliminate many tiny allocs (see the sketch below)
- avoid Dynamic
- focus on the interactive parts
- eliminate fat allocs; anything bigger than 32 bytes is best pooled
- have empathy with your hardware and be cache aware (your algorithms too)
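
For the inline constructor point, something along these lines (a sketch; whether the alloc really disappears depends on the target and compiler version):

class Point {
    public var x:Float;
    public var y:Float;

    // Inline constructor: short-lived, non-escaping instances
    // can be turned into plain local variables, no heap alloc.
    public inline function new(x:Float, y:Float) {
        this.x = x;
        this.y = y;
    }
}

class Main {
    static function main() {
        var sum = 0.0;
        for (i in 0...1000) {
            var p = new Point(i, i * 2); // candidate for elimination
            sum += p.x + p.y;
        }
        trace(sum);
    }
}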

Good luck.





--
David Elahee


Pier Bover

Oct 8, 2014, 3:50:26 PM
to haxe...@googlegroups.com
So what is the most appropriate way to say in Haxe "I don't want to use this object anymore, delete it from memory" ?

Juraj Kirchheim

Oct 8, 2014, 3:52:01 PM
to haxe...@googlegroups.com
Well, my main point was: measure ;)

So yes, big object graphs may cause the GC to fail. OTOH destroying all refs manually has other issues. You want to hit the sweet spot. Now, the question for 100 points: how do you know when you have hit the sweet spot? :D
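
If you want a quick and dirty starting point before reaching for a real profiler, something like this (just a sketch; the Gc call is cpp-only):

class Main {
    static function main() {
        var start = haxe.Timer.stamp();

        // ... the workload you suspect of causing GC pressure ...
        var garbage = [];
        for (i in 0...100000)
            garbage.push({x: i, y: i * 2});

        trace('workload took ${haxe.Timer.stamp() - start}s');

        #if cpp
        var gcStart = haxe.Timer.stamp();
        cpp.vm.Gc.run(true); // force a full collection to see the pause
        trace('gc pause: ${haxe.Timer.stamp() - gcStart}s');
        #end
    }
}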


Raoul Duke

Oct 8, 2014, 4:01:09 PM
to haxe...@googlegroups.com
> So what is the most appropriate way to say in Haxe "I don't want to use this
> object anymore, delete it from memory" ?

even with GC you have to take some responsibility. you can easily have
memory leaks because of references that are still around that you
forgot to remove. the classic thing is forgetting to remove event
listeners (and/or not having them be weak references) in flash or
javascript or anything :-)

so basically yes you have to make sure references to the object are
set to null. (*the special case* of having 2 things pointing to each
other *but nothing else* pointing to them is handled by GC.)

Pier Bover

Oct 8, 2014, 4:07:00 PM
to haxe...@googlegroups.com
A last question...

Do you guys think the same memory management approach can be applied to all targets?




--
Pier Bover
y...@pierbover.com

David Elahee

Oct 9, 2014, 1:11:42 AM
to haxe...@googlegroups.com

The best way to help the GC is to null out big variables as soon as you no longer need them.

And usually yes: allocation, cache awareness and fragmentation being non-linear problems, having a good memory policy will always pay off.
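
For example (sys-target sketch, the file name is made up):

class Main {
    static var levelData:haxe.io.Bytes;

    static function main() {
        levelData = sys.io.File.getBytes("level.dat"); // big temporary buffer
        var entityCount = parse(levelData);
        levelData = null; // done with it, don't keep megabytes alive for the whole session

        trace(entityCount);
        // ... the rest of the app runs for a long time ...
    }

    static function parse(bytes:haxe.io.Bytes):Int {
        // stand-in for real parsing
        return bytes.length;
    }
}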
