Intent to implement: Blink heap compaction


Sigbjorn Finne

Nov 28, 2016, 5:29:12 AM
to blink-dev

CL: https://codereview.chromium.org/2531973002/


* Contact emails

sigb...@opera.com


* Summary

The Blink GC infrastructure (“Oilpan”) shipped in M50 and has settled in
well. However, the run-time behavior of its heaps is a cause of some
concern: fragmentation increases over time, leaving unused holes on
heap pages, which drives up memory usage and lowers overall
performance.


Based on experiments done at Opera and a subsequent implementation, this
intent proposes adding a simple and effective heap compaction pass to Blink,
periodically removing fragmentation from the heaps where it manifests
itself the most.


* Motivation

Having a managed heap that performs well for longer-lived renderer
processes is clearly in everyone’s interest.

An explainer / design document that motivates the need in general, along
with data:
https://docs.google.com/document/d/1k-vivOinomDXnScw8Ew5zpsYCXiYqj76OCOYZSvHkaU


* Interoperability and Compatibility Risk

Engine-level change; does not apply.


* Ongoing technical constraints

“None.”


* Will this feature be supported on all six Blink platforms (Windows,
Mac, Linux, Chrome OS, Android, and Android WebView)?

Yes.

* Requesting approval to ship?

No.

Jochen Eisinger

Nov 28, 2016, 5:32:47 AM
to Sigbjorn Finne, blink-dev
Wow, very exciting!

I expect that this won't collide with the incremental marking of V8/Blink wrappers as it's done atomically?


Sigbjorn Finne

Nov 28, 2016, 5:38:37 AM
to Jochen Eisinger, blink-dev
On 11/28/2016 11:32, Jochen Eisinger wrote:
> Wow, very exciting!
>
> I expect that this won't collide with the incremental marking of V8/Blink
> wrappers as it's done atomically?
>

That's one thing to make certain of; I don't think the object
compaction & movement here would be observable to that delayed marking,
but I haven't followed the trace-wrapper implementation CL-by-CL.

--sigbjorn

Jochen Eisinger

Nov 28, 2016, 5:43:10 AM
to Sigbjorn Finne, blink-dev
The marking deque has pointers into to-be-marked Oilpan objects, but I expect that those pointers would get updated on compaction.

Michael Lippautz

Nov 28, 2016, 5:57:42 AM
to Jochen Eisinger, Sigbjorn Finne, blink-dev
Awesome stuff! As far as I can see, we currently don't need any fix-up passes, as wrapper tracing cannot transparently handle containers, i.e., we never put them on the marking deque. In the future this might change, requiring an additional pass over the marking deque. I've added this note to the document.

Kentaro Hara

Nov 28, 2016, 6:36:23 AM
to Michael Lippautz, Jochen Eisinger, Sigbjorn Finne, blink-dev
This is a big improvement to Oilpan's infrastructure! LGTM.

Memory consumption of long-running apps is something we haven't investigated much yet. Thanks for exploring the problem space and optimizing the engine for those scenarios :)


On Mon, Nov 28, 2016 at 7:57 PM, Michael Lippautz <mlip...@chromium.org> wrote:
Awesome stuff! As far as I can see, we currently don't need any fix-up passes, as wrapper tracing cannot transparently handle containers, i.e., we never put them on the marking deque. In the future this might change, requiring an additional pass over the marking deque. I've added this note to the document.

Yes, the proposed heap compaction only makes the backing stores of on-heap collections (i.e., HeapVector, HeapHashTable and HeapLinkedHashSet) movable. It should not be observable to users of the on-heap collections.

(We might want to make normal objects movable in the future, but that's totally a separate story.)
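
To make "not observable" concrete, here is a minimal sketch (Widget and Holder are hypothetical types, not code from the CL): references held in Member<> fields, and element access that goes through the collection itself, remain valid across a compacting GC; only raw pointers or iterators into the backing store would be invalidated.

// Illustrative sketch only; Widget and Holder are hypothetical.
class Holder : public GarbageCollected<Holder> {
 public:
  void add(Widget* widget) { m_widgets.append(widget); }

  Widget* first() const {
    // Safe: indexing goes through the HeapVector, which always knows where
    // its (possibly relocated) backing store currently lives.
    return m_widgets.isEmpty() ? nullptr : m_widgets[0].get();
  }

  DEFINE_INLINE_TRACE() { visitor->trace(m_widgets); }

 private:
  // The backing store of this collection may be moved by compaction; holding
  // a raw pointer or iterator into it across a GC would be unsafe.
  HeapVector<Member<Widget>> m_widgets;
};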

 



--
Kentaro Hara, Tokyo, Japan

Sigbjorn Finne

Jan 3, 2017, 2:32:18 PM
to Kentaro Hara, Michael Lippautz, Jochen Eisinger, blink-dev
Hi,

heap compaction has now been available behind a flag for 3-4 weeks, with
no stability or other problems encountered. Hence it is time to extend
our testing, to more accurately determine whether it is ready to advance
towards being a stable feature.

Our plan is to cycle the feature to enabled over the upcoming weekend,
for a couple of Canary builds, and then revert and assess where we stand
across Chrome's various targets (and perf tests).

Comments on 'shipping' plans most welcome; either on this thread or via
https://crbug.com/672030

thanks
--sigbjorn

Dave Tapuska

Jan 3, 2017, 2:59:18 PM
to Sigbjorn Finne, Kentaro Hara, Michael Lippautz, Jochen Eisinger, blink-dev
Why isn't this launched via a Finch trial?





Sigbjorn Finne

Jan 3, 2017, 3:42:05 PM
to Dave Tapuska, Kentaro Hara, Michael Lippautz, Jochen Eisinger, blink-dev

It could perhaps be arranged, if that's the preferable and prudent route,
but that's hard to drive from the outside as someone external to Google.

--sigbjorn

(ftr, heap compaction has shipped as a stable feature in Opera for 6+
months.)

Kentaro Hara

Jan 3, 2017, 4:21:15 PM
to Sigbjorn Finne, Dave Tapuska, Michael Lippautz, Jochen Eisinger, blink-dev
Given that this is not a web-exposed change and that it has been stable in Opera for 6+ months, I guess it would be okay to enable it on ToT and try to fix crash reports as they come in.



Philip Rogers

Jan 3, 2017, 4:21:40 PM
to Sigbjorn Finne, Dave Tapuska, Kentaro Hara, Michael Lippautz, Jochen Eisinger, blink-dev
An option for the Finch trial is to launch the feature, then have a Googler run a reverse Finch trial that disables the launched feature for a small percentage of users in order to collect metrics.

Chris Harrelson

Jan 3, 2017, 4:30:18 PM
to Philip Rogers, Sigbjorn Finne, Dave Tapuska, Kentaro Hara, Michael Lippautz, Jochen Eisinger, blink-dev
+1 to a Finch trial. The main problem is not stability; it's the lack of data on the impact of this change on performance and other metrics. I think there should be a reverse Finch trial to measure this change. We often learn quite a bit from such data, in many cases justifying the work to implement the feature and celebrating its success!

Kentaro Hara

Jan 3, 2017, 4:33:00 PM
to Chris Harrelson, Philip Rogers, Sigbjorn Finne, Dave Tapuska, Michael Lippautz, Jochen Eisinger, blink-dev
Sounds reasonable. keishi@ or I will handle the Finch trial :)


Keishi Hattori

Jan 25, 2017, 4:02:19 AM
to Kentaro Hara, Chris Harrelson, Philip Rogers, Sigbjorn Finne, Dave Tapuska, Michael Lippautz, Jochen Eisinger, blink-dev
I ran the Finch trial and here are my findings.

It looks like we can expect a 7% reduction in peak BlinkGC committed memory. In the average case, heap compaction takes 5.6% of the time spent on BlinkGC marking, while reducing memory by 750KB.

I think this confirms that heap compaction's effect on GC pause time is minimal and that we will see a clear reduction in BlinkGC memory usage.


--
- Keishi

Sigbjorn Finne

Jan 25, 2017, 1:37:48 PM
to Keishi Hattori, Kentaro Hara, Chris Harrelson, Philip Rogers, Dave Tapuska, Michael Lippautz, Jochen Eisinger, blink-dev

Thanks keishi@ for running the experiment, much appreciated.

I won't repeat it here, but https://crbug.com/672030#c11 has my
interpretation, which aligns with the one below.

If that experiment addresses everyone's concerns sufficiently,
I could prepare a CL to always-enable the feature next.

--sigbjorn

Kentaro Hara

Jan 25, 2017, 1:41:43 PM
to Sigbjorn Finne, Keishi Hattori, Chris Harrelson, Philip Rogers, Dave Tapuska, Michael Lippautz, Jochen Eisinger, blink-dev
The number looks pretty promising. Non-owner LGTM.




Jochen Eisinger

Jan 25, 2017, 1:59:02 PM
to Kentaro Hara, Sigbjorn Finne, Keishi Hattori, Chris Harrelson, Philip Rogers, Dave Tapuska, Michael Lippautz, blink-dev
it's not really web visible, so I don't know whether we need an intent to ship? I'd totally just turn it on


Charles Harrison

Jan 25, 2017, 2:00:51 PM
to Jochen Eisinger, Kentaro Hara, Sigbjorn Finne, Keishi Hattori, Chris Harrelson, Philip Rogers, Dave Tapuska, Michael Lippautz, blink-dev
Are there any CPU workloads measured by UMA that we would expect to improve with a less fragmented Oilpan heap (i.e., better cache locality)?


Sigbjorn Finne

Jan 25, 2017, 2:26:50 PM
to Charles Harrison, Jochen Eisinger, Kentaro Hara, Keishi Hattori, Chris Harrelson, Philip Rogers, Dave Tapuska, Michael Lippautz, blink-dev

That's unknown to me; I'd also be interested to hear if there are worthwhile
signals. However, I believe the arch team is (or soon will be) doing
some work on understanding the performance and behavior of
longer-running renderer processes, so relevant UMAs might appear as a
result.

--sigbjorn

Charles Harrison

Jan 25, 2017, 2:45:27 PM
to Sigbjorn Finne, Jochen Eisinger, Kentaro Hara, Keishi Hattori, Chris Harrelson, Philip Rogers, Dave Tapuska, Michael Lippautz, blink-dev
Maybe Jochen can give a better metric, but I just looked at V8.Execute on Android Canary + Dev for 28 days, and the data looks very suggestive of an improvement, though I think it is still technically not statistically significant. I would recommend keeping a holdout group with the feature turned off through to the stable channel, to evaluate the wins on a larger population.








Jochen Eisinger

Jan 25, 2017, 2:48:27 PM
to Charles Harrison, Sigbjorn Finne, Kentaro Hara, Keishi Hattori, Chris Harrelson, Philip Rogers, Dave Tapuska, Michael Lippautz, blink-dev
I'd be surprised if V8.Execute were influenced by changes in Blink's heap layout (and looking at the histogram, it doesn't seem to have much valuable data in it anyway).

I don't know of good histograms that would give you an overall idea of how we are doing performance-wise, as it's not possible to factor out e.g. changes in the scripts people see on average, etc.









Kentaro Hara

Jan 25, 2017, 3:33:16 PM
to Jochen Eisinger, Charles Harrison, Sigbjorn Finne, Keishi Hattori, Chris Harrelson, Philip Rogers, Dave Tapuska, Michael Lippautz, blink-dev
Are there any CPU workloads measured by UMA that we would expect to improve
with a less fragmented Oilpan heap (i.e., better cache locality)?

In theory it can improve CPU performance, but the impact would be negligible. Note that heap compaction happens only when heap fragmentation exceeds some threshold.

The main goal of heap compaction is to reduce the memory footprint of long-running apps.
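
For intuition only, the gating decision has roughly this shape (hypothetical names and threshold values, not the actual Oilpan heuristic, which is defined by the implementation CL and the design doc):

// Hypothetical sketch of a fragmentation-gated compaction decision; the real
// bookkeeping and threshold live in the implementation, not here.
struct ArenaStats {
  size_t allocatedBytes;  // bytes on heap pages handed out by this arena
  size_t liveBytes;       // bytes still reachable after marking
};

bool shouldCompactArena(const ArenaStats& stats) {
  const size_t kMinHeapSize = 1024 * 1024;  // skip tiny heaps (assumed value)
  const double kFreeRatioThreshold = 0.5;   // assumed value

  if (stats.allocatedBytes < kMinHeapSize)
    return false;
  double freeRatio =
      1.0 - static_cast<double>(stats.liveBytes) / stats.allocatedBytes;
  return freeRatio > kFreeRatioThreshold;  // compact only when fragmentation is high
}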










Sigbjorn Finne

Jan 26, 2017, 2:42:05 AM
to Jochen Eisinger, Kentaro Hara, Keishi Hattori, Chris Harrelson, Philip Rogers, Dave Tapuska, Michael Lippautz, blink-dev

On 1/25/2017 19:58, Jochen Eisinger wrote:
> it's not really web visible, so I don't know whether we need an intent to
> ship? I'd totally just turn it on
>

Yes, the feature is ready for it.

I'll give people a chance to comment for another couple of days (busy
days for all atm), but https://crrev.com/2653413002 alters the status.

--sigbjorn

Jeremy Roman

Jan 26, 2017, 11:27:55 AM
to Sigbjorn Finne, Jochen Eisinger, Kentaro Hara, Keishi Hattori, Chris Harrelson, Philip Rogers, Dave Tapuska, Michael Lippautz, blink-dev
I previously mentioned this on corp G+, but I probably should have mentioned it on this thread:


Is there clear documentation? I could imagine there being other pointers to the vector's backing allocation for a number of reasons, though it's believable to me that we don't currently have any.

For instance: the following otherwise seems like reasonable code to me (and was before):

// Makes a list of widgets and asynchronously frobnicates them.
//
// WTF::Vector promises that iterators remain valid unless the vector is mutated,
// so this was previously legal code. It becomes illegal if compaction moves the
// backing allocation.
class Frobnicator : public GarbageCollected<Frobnicator> {
 public:
  Frobnicator() {
    // populate m_widgets with some widgets
    m_iterator = m_widgets.begin();
  }

  void frobnicate() {
    for (; m_iterator != m_widgets.end(); ++m_iterator) {
      if (shouldYield()) {
        postTask(WTF::bind(&Frobnicator::frobnicate, wrapPersistent(this)));
        return;
      }
      (*m_iterator)->frobnicate();
    }
  }

  DEFINE_INLINE_TRACE() { visitor->trace(m_widgets); }

 private:
  bool shouldYield() const;

  HeapVector<Member<Widget>> m_widgets;
  // Holding an iterator into the backing store across yields is exactly the
  // pattern that breaks once compaction can move that backing store.
  HeapVector<Member<Widget>>::iterator m_iterator;
};
Similarly, it was previously legal (if rare) to take pointers into the vector's backing buffer for other reasons, so long as you knew the vector wouldn't be modified afterwards.

If we're not going to support rewriting these generally (it does seem hard to do so), could we clearly document in Vector.h when this can happen? Might also be worth a separate blink-dev PSA of pitfalls once this turns on.

Sigbjorn Finne

Jan 26, 2017, 11:44:55 AM
to Jeremy Roman, Jochen Eisinger, Kentaro Hara, Keishi Hattori, Chris Harrelson, Philip Rogers, Dave Tapuska, Michael Lippautz, blink-dev

Good point; we don't allow such unsafe iterators to be kept on heap
objects following https://codereview.chromium.org/2588943002/

--sigbjorn

On 1/26/2017 17:27, Jeremy Roman wrote:
> I previously mentioned this on corp G+, but I probably should have
> mentioned it on this thread:
>
>
> Is there clear documentation? I could imagine there being other pointers to
> the vector's backing allocation for a number of reasons, though it's
> believable to me that we don't currently have any.

Jeremy Roman

Jan 26, 2017, 1:26:49 PM
to Sigbjorn Finne, Jochen Eisinger, Kentaro Hara, Keishi Hattori, Chris Harrelson, Philip Rogers, Dave Tapuska, Michael Lippautz, blink-dev
On Thu, Jan 26, 2017 at 11:44 AM, Sigbjorn Finne <sigb...@opera.com> wrote:

Good point; we don't allow such unsafe iterators to be kept on heap objects following https://codereview.chromium.org/2588943002/

Ah, good. That at least takes care of the most likely case (it's still possible to take other kinds of pointers into the buffer, though). Hopefully reviewers can handle any others that come up.

Sigbjorn Finne

Jan 26, 2017, 1:41:34 PM
to Jeremy Roman, Jochen Eisinger, Kentaro Hara, Keishi Hattori, Chris Harrelson, Philip Rogers, Dave Tapuska, Michael Lippautz, blink-dev
On 1/26/2017 19:26, Jeremy Roman wrote:
> On Thu, Jan 26, 2017 at 11:44 AM, Sigbjorn Finne <sigb...@opera.com>
> wrote:
>
>>
>> Good point; we don't allow such unsafe iterators to be kept on heap
>> objects following https://codereview.chromium.org/2588943002/
>
>
> Ah, good. That at least takes care of the most likely case (it's still
> possible to take other kinds of pointers into the buffer, though).
> Hopefully reviewers can handle any others that come up.
>

Why would you want to keep a pointer into backing stores except in a
"typeful" manner via iterators? If such use cases come up in
practice, we'll definitely have to consider them carefully and address them.
The GC infrastructure has to be sound (and have a usable programming model).

--sigbjorn

Jeremy Roman

Jan 26, 2017, 2:41:17 PM
to Sigbjorn Finne, Jochen Eisinger, Kentaro Hara, Keishi Hattori, Chris Harrelson, Philip Rogers, Dave Tapuska, Michael Lippautz, blink-dev
I haven't had such a need for GC containers, but I can contrive ones that don't seem absurd to me (though there are easy workarounds if you're aware it's a potential issue; relying on data not moving already relies on the vector not resizing). Something like:

void fetchOneExample(const String& name, WTF::Function<void(Example*)> callback);

class Example : public GarbageCollected<Example> { ... };

class ExampleFetcher : public GarbageCollected<ExampleFetcher> {
 public:
  void fetchExamples() {
    const char* exampleNames[] = {"first", "second", "third"};
    m_examples.resize(3);
    for (int i = 0; i < 3; i++) {
      // Hazard: &m_examples[i] points into m_examples' backing store, which
      // compaction may move, leaving this pointer stale.
      fetchAsync(exampleNames[i], &m_examples[i]);
    }
    fetchAsync("other", &m_otherExample);
  }

 private:
  void fetchAsync(const String& name, Member<Example>* destination) {
    fetchOneExample(name, WTF::bind(&ExampleFetcher::fetchDone, wrapPersistent(this), WTF::unretained(destination)));
    ++m_examplesRemaining;
  }
  void fetchDone(Member<Example>* destination, Example* example) {
    *destination = example;
    if (--m_examplesRemaining == 0) notifyDone();
  }
  void notifyDone() { /* uses m_examples and m_otherExample */ }

  HeapVector<Member<Example>> m_examples;
  Member<Example> m_otherExample;
  int m_examplesRemaining = 0;
};

I admit my example is contrived (I haven't seen this myself), and I don't think it should prevent shipping backing-store compaction, but I do think we should clearly document that HeapVector backing stores may move (contrary to the usual vector invariants) and that pointers into them aren't valid across a GC. The [blink-gc] check for iterators seems like a great way of catching the more likely case.
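
One of those easy workarounds, sketched as a hypothetical rework of the contrived ExampleFetcher above (m_otherExample dropped for brevity; not code from any CL): pass an index into the HeapVector instead of a pointer into its backing store, and dereference through the collection when the callback fires. Indices survive a compacting GC because element access always goes through the collection itself.

class ExampleFetcher : public GarbageCollected<ExampleFetcher> {
 public:
  void fetchExamples() {
    const char* exampleNames[] = {"first", "second", "third"};
    m_examples.resize(3);
    for (size_t i = 0; i < 3; i++)
      fetchAsync(exampleNames[i], i);
  }

  DEFINE_INLINE_TRACE() { visitor->trace(m_examples); }

 private:
  void fetchAsync(const String& name, size_t index) {
    fetchOneExample(name, WTF::bind(&ExampleFetcher::fetchDone,
                                    wrapPersistent(this), index));
    ++m_examplesRemaining;
  }
  void fetchDone(size_t index, Example* example) {
    // Safe: indexes through the collection, wherever its backing store is now.
    m_examples[index] = example;
    if (--m_examplesRemaining == 0) notifyDone();
  }
  void notifyDone() { /* uses m_examples */ }

  HeapVector<Member<Example>> m_examples;
  int m_examplesRemaining = 0;
};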

Kentaro Hara

Jan 26, 2017, 2:52:29 PM
to Jeremy Roman, Sigbjorn Finne, Jochen Eisinger, Keishi Hattori, Chris Harrelson, Philip Rogers, Dave Tapuska, Michael Lippautz, blink-dev
I agree that we should document it, but in general it is not allowed to hold a pointer into the middle of an object in Oilpan. It's unsafe anyway, because the object won't be traced.

Note: It seems like there's a lot of discussion on this thread, but basically heap compaction is *just* a very internal (but amazing) performance optimization in Oilpan.
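
To make the rule concrete (a sketch with hypothetical Node/Owner types): on-heap objects should refer to each other only through traced references such as Member<>, never through a raw pointer to a field inside another on-heap object.

class Node;  // some GarbageCollected<Node> type (hypothetical)

class Owner : public GarbageCollected<Owner> {
 public:
  DEFINE_INLINE_TRACE() { visitor->trace(m_child); }

 private:
  Member<Node> m_child;  // OK: a traced reference to the start of an object
  // int* m_childField;  // Not allowed: an interior pointer into another
  //                     // on-heap object; it is never traced, so nothing
  //                     // keeps the pointee alive or fixes the pointer up.
};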



Dominic Cooney

Feb 2, 2017, 9:40:55 PM
to Kentaro Hara, Jeremy Roman, Sigbjorn Finne, Jochen Eisinger, Keishi Hattori, Chris Harrelson, Philip Rogers, Dave Tapuska, Michael Lippautz, blink-dev
This is probably obvious, and I didn't see it in the design document, but this compaction will only happen when the GC is running in its "no pointers on stack" mode, right?

On Thu, Jan 26, 2017 at 11:51 AM, Kentaro Hara <har...@chromium.org> wrote:
I agree that we should document it, but in general it is not allowed to hold a pointer into the middle of an object in Oilpan. It's unsafe anyway, because the object won't be traced.

It's not safe if on the stack, or if paired with a pointer to the start of the object ;) FWIW I'm not aware of anywhere in dom/ or html/ that does this.

Dominic Cooney

Feb 2, 2017, 9:41:36 PM
to Kentaro Hara, Jeremy Roman, Sigbjorn Finne, Jochen Eisinger, Keishi Hattori, Chris Harrelson, Philip Rogers, Dave Tapuska, Michael Lippautz, blink-dev

It's not safe if on the stack, or if paired with a pointer to the start of the object ;) FWIW I'm not aware of anywhere in dom/ or html/ that does this.

Sorry, it's not *un*safe ...

Kentaro Hara

Feb 2, 2017, 10:46:05 PM
to Dominic Cooney, Jeremy Roman, Sigbjorn Finne, Jochen Eisinger, Keishi Hattori, Chris Harrelson, Philip Rogers, Dave Tapuska, Michael Lippautz, blink-dev
This is probably obvious, and I didn't see it in the design document, but this compaction will only happen when the GC is running in its "no pointers on stack" mode, right?

Right. Heap compaction can happen only in a precise GC.
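
(For context, a conceptual sketch of why, with assumed names rather than the exact Oilpan API: a conservative GC finds potential heap pointers on the native stack, and those cannot be relocated or fixed up, so compaction is only attempted when the collection starts with no heap pointers on the stack.)

// Conceptual sketch only; the enum and function are assumptions for
// illustration, not the exact Oilpan API.
enum class StackState { NoHeapPointersOnStack, HeapPointersOnStack };

bool compactionAllowed(StackState stackState) {
  // Conservatively-found stack pointers into backing stores could not be
  // fixed up after objects move, so compact only in precise collections.
  return stackState == StackState::NoHeapPointersOnStack;
}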