Automatic garbage collection is enabled!

talksmall

Nov 1, 2008, 3:44:44 PM
to Strongtalk-general
I've just checked in a change (r154) to add automatic garbage
collection to the VM. Strictly speaking, the code that automates
garbage collection is Smalltalk code invoked in the fail blocks of the
various allocation primitives; the change to the VM just allows a
primitive to fail when insufficient space is available to fulfil the
allocation.
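
Roughly, the VM side of the change looks like this - a minimal C++
sketch with made-up names rather than the actual Strongtalk code - an
allocation that cannot be satisfied is simply reported back, so the
fail block in the image decides what to do:

#include <cstddef>
#include <cstdio>

// Illustrative only: a toy newgen and primitive, not the real VM classes.
struct NewGen {
  static constexpr std::size_t kCapacity = 1024;
  unsigned char space[kCapacity];
  std::size_t top = 0;

  void* allocate(std::size_t bytes) {
    if (top + bytes > kCapacity) return nullptr;  // insufficient space: report failure
    void* obj = space + top;
    top += bytes;
    return obj;
  }
};

// Stand-in for the marked symbol the primitive answers so that the
// image's fail block is invoked.
static void* const kAllocationFailedMarker = nullptr;

void* primitive_new(NewGen& gen, std::size_t bytes) {
  void* obj = gen.allocate(bytes);
  return obj != nullptr ? obj : kAllocationFailedMarker;  // no forced scavenge here any more
}

int main() {
  NewGen gen;
  void* r = primitive_new(gen, 2048);
  std::printf("%s\n", r ? "allocated" : "primitive failed -> fail block drives the GC");
  return 0;
}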

To use this code you will need to download the
strongtalk-autogc-r154.tgz file from the Downloads tab of the project
website and replace your image and source directory with its contents.

I've updated the relevant issue with some brief details of the
changes.

Regards, talksmall

David Griswold

Nov 1, 2008, 4:21:14 PM
to strongtal...@googlegroups.com
That's great!  Something that has been needed for a long time. 

Question: how does this work for things like block closures that are not allocated explicitly, and thus have no fail block in the image code?
-Dave

talksmall

Nov 1, 2008, 7:39:00 PM
to Strongtalk-general
Dave,
Anything that the VM allocates internally uses the old logic and I
haven't touched the behaviour for block and context allocation.

I've added some parameters to the allocate methods on the various
klass subclasses to indicate whether a scavenge is allowed when newgen
is full (and therefore, implicitly, expansion of oldgen if it, too, is
full). If not, the methods just return NULL, which the primitives
catch, returning a marked symbol to invoke the failure blocks.
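
As a rough C++ sketch of the shape (the names, defaults and exact
flags here are illustrative, not the real klass code):

#include <cstddef>

typedef void* oop;  // stand-in for the VM's tagged object pointer type

class klass_sketch {
 public:
  // With both flags defaulted to true the old behaviour is unchanged;
  // passing false lets the caller see NULL and fail instead.
  oop allocate_object(std::size_t size,
                      bool permit_scavenge = true,
                      bool permit_expansion = true) {
    oop obj = try_allocate(size);
    if (obj != nullptr) return obj;
    if (!permit_scavenge) return nullptr;   // newgen full, scavenge not allowed
    scavenge();
    obj = try_allocate(size);
    if (obj != nullptr) return obj;
    if (!permit_expansion) return nullptr;  // oldgen full too, expansion not allowed
    expand_old_generation();
    return try_allocate(size);
  }

 private:
  oop try_allocate(std::size_t) { return nullptr; }  // placeholders for heap access
  void scavenge() {}
  void expand_old_generation() {}
};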

The two extra flags have default values that equate to the previous
behaviour, so the only changes I made were in the new primitives and
an amendment to the optimised allocation primitives used by the
compiler. Since the name of the replacement allocation primitive for
non-indexables matches the same pattern as the old name, it gets
replaced by the PrimInliner in just the same way. I had to patch the
primitiveNew[0-9] code so that it returns a marked symbol rather than
calling scavenge, but that seems to work just fine.
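
The patched primitive path is, in spirit, something like this
(hypothetical names again; the real marked-symbol machinery is the
VM's, this just shows the control flow):

#include <cstddef>

typedef void* oop;

// Stubs standing in for the real VM services.
static oop allocate_without_gc(std::size_t) { return nullptr; }  // pretend newgen is full
static oop markSymbol(const char* /*name*/) {
  static char failed_allocation_tag;
  return &failed_allocation_tag;  // any distinguishable value will do for the sketch
}

oop primitive_new_sketch(std::size_t size) {
  oop obj = allocate_without_gc(size);
  if (obj == nullptr)
    return markSymbol("FailedAllocation");  // previously: scavenge and retry inside the VM
  return obj;
}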

Regards, Steve

David Griswold

Nov 2, 2008, 7:50:22 AM
to strongtal...@googlegroups.com
Hm, that is not ideal.  It would be highly desirable not to have holes in the GC policy coverage.  Ideally, we want a policy that covers all cases, regardless of what causes the allocation; otherwise we will get really strange behavior, such as code that does no explicit allocation but implicitly allocates contexts or blocks triggering totally different VM GC behavior than code that allocates explicitly.

talksmall

Nov 2, 2008, 10:22:53 AM
to Strongtalk-general
Dave,
You are right, it is not ideal, but I wanted to avoid adding a
dependency in the VM on behaviour in the Smalltalk code in the image.

Another factor is that, to minimise duplication between the failure
code for the various allocation primitive variants, the failure blocks
end up allocating a couple of closures that specify the allocation
code and the expansion code for each variant. Other methods called by
the failure-handling code also allocate closures. This could, of
course, be inlined to avoid the closure creations, but that would make
the code much uglier, and hence harder to read and maintain. I figured
that, in the worst case, this could involve another scavenge and
possibly another expansion in code that had already invoked the
failure block anyway.

Actually, that is not strictly accurate, since the indexable
allocations create the closures for every allocation - something I'm
going to clean up right now.
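
For what it's worth, the shape of the shared failure handling is
roughly this (the real code is Smalltalk in the image; this
C++-flavoured sketch with std::function just illustrates the
structure, and all the names are made up):

#include <functional>

typedef void* oop;

static void scavenge_sketch() {}  // stand-in for the common scavenge step

// Each primitive variant supplies just two closures: one that retries
// the allocation and one that expands the heap.
oop handle_allocation_failure(const std::function<oop()>& retry_allocation,
                              const std::function<void()>& expand_heap) {
  scavenge_sketch();                    // try to free newgen first
  if (oop obj = retry_allocation()) return obj;
  expand_heap();                        // still no room: grow the old generation
  return retry_allocation();            // NULL here means the failure is fatal
}

// Usage: a non-indexable variant might call
//   handle_allocation_failure([&] { return klass_allocate_no_gc(); },
//                             [&] { expand_old_gen(); });
// where both lambdas are hypothetical.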

Applying the same policy to the context and block allocations involved
in creating closures would not work, since it would necessarily lead
to infinite recursion unless we avoided using blocks in the allocation
failure code generally, or at least had separate failure code to
handle failed block and context allocations.

Other VM-related allocations are more difficult to handle without
coupling the VM to code in the image, something I've tried to avoid.
As mentioned above, the failure code is parameterised by a couple of
closures that handle repeated attempts to allocate the object at
various stages of the failure handling. A different approach would be
needed if we were to use this mechanism directly from VM code. A
couple of options occur to me, though, obviously, I'm open to any
others.

1. Have the VM continue with the existing behaviour, but set a flag
in the VM - NeedsGC - whenever expansion occurs due to a failed
tenured allocation. Test the flag at suitable execution points (such
as interpreter method entry code, nmethod prologues, backwards
branches) and, if set, invoke a collectAndShrink method on the VM
class (see the sketch after this list). This would retain most of the
essence of the memory policy while involving minimal changes to the VM.
2. Have all VM-driven allocations call methods on the VM class to
handle the various types of object allocation failure that could
occur in the VM. This would inevitably inflate the interface to the
allocation code (which could probably already do with being factored
out of the VM class, possibly into a separate MemoryPolicy). It would
also be error-prone, since we would need to be sure that we had caught
all of the places where allocations occur within the VM and applied
the new policy.
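
To make option 1 concrete, here is a rough sketch (names are
hypothetical, not existing VM entry points):

// The allocation path only raises a flag when expansion was forced by
// a failed tenured allocation; the execution engine polls the flag at
// safe points, and the actual collection is a single send to the
// image-side VM class.
struct VMPolicySketch {
  static bool needs_gc;

  // Called from VM allocation code when a tenured allocation forced oldgen expansion.
  static void note_forced_expansion() { needs_gc = true; }

  // Polled at interpreter method entry, nmethod prologues and backwards branches.
  static void check_at_safe_point() {
    if (needs_gc) {
      needs_gc = false;
      collect_and_shrink();  // the one narrow dependency on the image
    }
  }

  static void collect_and_shrink() { /* delegate to the Smalltalk VM class */ }
};

bool VMPolicySketch::needs_gc = false;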

Either way we would have an additional dependency on the image code,
though in the first case the dependency is relatively narrow - a
single message send - while in the second case it would be
significantly broader.

Any other suggestions? As I say, I am open to ideas, but I wanted to
get the code out there. I think that the current solution will suffice
for the moment. There are other stability issues that I feel are more
pressing now that we have a solution for GC in place, even if it is
less than perfect.

Regards, Steve
