Moving Oilpan development to Blink trunk


Mads Sig Ager

Dec 3, 2013, 7:58:29 AM
to blink-dev
Hi all,

the Oilpan project (replacing reference counting in Blink with a tracing garbage collector) has been on a branch for 6 months and things are looking good. Therefore, we would like to move Oilpan development to Blink trunk. In preparation for that, here is a design document for the Oilpan system, including a guide to writing code for Blink with Oilpan:

https://docs.google.com/document/d/1y7_0ni0E_kxvrah-QtnreMlzCDKN3QP4BN1Aw7eSLfY/edit?usp=sharing

Comments and questions are very welcome.

We would like to land Oilpan incrementally in trunk instead of landing in one go from a branch. Therefore, we are planning on adding a compile-time flag for switching between the current build and a build with Oilpan. The plan is to enable and ship Oilpan step by step for self-contained modules first (indexeddb comes to mind).

We will start preparing the first patches for trunk. Landing incrementally will lead to some inconvenience in the transition period. There will be a couple of new types that Blink programmers will have to get familiar with, and there will have to be different destructor code (and therefore some ifdefs in the code) for the Oilpan build and for the current build. We will move as fast as we can to make the transition period as short as possible and we will appreciate help from the rest of the team.

Let's get a real memory management system for Blink!

Cheers,    -- Mads

Tony Gentilcore

Dec 3, 2013, 9:44:58 AM
to Mads Sig Ager, blink-dev
I understand there's a performance waterfall that compares oilpan to
default. What benchmarks does it run and how are the numbers looking?

-Tony

Mads Sig Ager

Dec 3, 2013, 10:09:46 AM
to Tony Gentilcore, blink-dev
It is here:


It is linked from the design document as well. Overall, it is looking very good. Most things are on par, with a few things better and a few things worse. One of the explicit goals of moving development to trunk is to be able to get on the normal performance tracking infrastructure. Currently the branch is too old to track everything easily. By moving the work to trunk, we can set up a full set of infrastructure where the only change in setup is compiling with the Oilpan compile-time flag. That will make it much easier to make sure that performance is where it should be when we ship individual parts with Oilpan.

Cheers,    -- Mads

Paweł Hajdan, Jr.

Dec 3, 2013, 11:05:41 AM
to Mads Sig Ager, Tony Gentilcore, blink-dev
Yay, this is so exciting!

Congratulations on getting Oilpan working and ready to begin landing on trunk.

I'm looking forward to the improved memory management and better security.

Paweł

Daniel Bratell

Dec 3, 2013, 11:40:45 AM
to blink-dev, Mads Sig Ager
On Tue, 03 Dec 2013 13:58:29 +0100, Mads Sig Ager <ag...@chromium.org>
wrote:

> Hi all,
>
> the Oilpan project (replacing reference counting in Blink with a tracing
> garbage collector) has been on a branch for 6 months and things are
> looking good. Therefore, we would like to move Oilpan development to
> Blink trunk. In preparation for that, here is a design document for the
> Oilpan system, including a guide to writing code for Blink with Oilpan:
>
>
>
> https://docs.google.com/document/d/1y7_0ni0E_kxvrah-QtnreMlzCDKN3QP4BN1Aw7eSLfY/edit?usp=sharing
>
>
> Comments and questions are very welcome.

Very nice document.

I wonder about some things regarding memory usage. In general, garbage
collection delays the reuse of memory, causing higher memory usage, and on
many pages the script and DOM environments are a large part of the memory used.

At the same time you write "On the page cyclers, our measurements show no
difference in terms of both time and space when comparing to the branch
point.", which I interpret as "no noticeable memory usage change".

How confident are you in those numbers and what magic did you use to make
the results so promising? :-)

/Daniel

Adam Barth

Dec 3, 2013, 12:28:49 PM
to Daniel Bratell, blink-dev, Mads Sig Ager
On Tue, Dec 3, 2013 at 8:40 AM, Daniel Bratell <bra...@opera.com> wrote:
On Tue, 03 Dec 2013 13:58:29 +0100, Mads Sig Ager <ag...@chromium.org> wrote:
the Oilpan project (replacing reference counting in Blink with a tracing garbage collector) has been on a branch for 6 months and things are looking good. Therefore, we would like to move Oilpan development to Blink trunk. In preparation for that, here is a design document for the Oilpan system, including a guide to writing code for Blink with Oilpan:

https://docs.google.com/document/d/1y7_0ni0E_kxvrah-QtnreMlzCDKN3QP4BN1Aw7eSLfY/edit?usp=sharing

Comments and questions are very welcome.

Very nice document.

Yes, thanks for the detailed doc.  I left a few minor questions in the doc.  The results from the branch look very encouraging!

Adam

Vyacheslav Egorov

Dec 3, 2013, 2:07:28 PM
to Daniel Bratell, Mads Sig Ager, blink-dev

It is true that GC systems do not have prompt finalization guarantees. However, even in the current system you lose these guarantees as soon as a single node of your DOM tree gets a JavaScript wrapper, because the lifetime of that wrapper (and, by transitive closure, of the whole tree) is controlled by the JS VM's GC.

Hence we think the situation does not change for the worse in the most important real-world scenarios.

Emil A Eklund

Dec 3, 2013, 5:20:15 PM
to Mads Sig Ager, blink-dev
Could you talk about the programming model changes in a bit more detail please?

The document claims that it'll simplify things without going into any
details, and when we last talked about this a couple of months ago that
wasn't the case; it required magic constructs to get decent performance.
What has changed since and what type of changes to the programming
model do you envision?

I'm concerned that it'll add complexity and introduce hard to track
down bugs and regressions but would love to be proven wrong :)

Vyacheslav Egorov

Dec 3, 2013, 5:54:29 PM
to e...@chromium.org, Mads Sig Ager, blink-dev
What has changed since and what type of changes to the programming model do you envision?

We changed to conservative stack scanning from precise stack scanning.

Precise stack scanning required each pointer in each activation frame to be made known to the GC via a special smart pointer (aka Handle). After multiple experiments we determined that, with the current state of the C++ programming language and C++ compilers, it is impossible to implement precise stack scanning for C++ code in a portable way without overhead.

This means the Handle/Result/HandleScope things you saw before are not needed anymore, and neither a cognitive nor a performance penalty is introduced.

So I would say there is almost no change in the way you would write code. For local references you use raw pointers and things just work.

If I were to highlight important changes they would be: 

1) the way you declare new classes of heap-allocated objects: you need to inherit from the right class and ensure that pointers from this class to other ones are traced correctly;

2) the way to rely on weakness to perform cleanup that would previously require back pointers and special logic in destructors.

Both of these things are described in detail in the document.

Once you have declared your class in the right way you can work with it just using raw pointers. You no longer have to care about doing something like RefPtr protect(this) and other terrible hacks: the GC will manage the lifetime of local objects and everything reachable from them completely transparently.




Vyacheslav Egorov

Emil A Eklund

Dec 3, 2013, 6:15:45 PM
to Vyacheslav Egorov, Mads Sig Ager, blink-dev
On Wed, Dec 4, 2013 at 6:54 AM, Vyacheslav Egorov <veg...@chromium.org> wrote:
>> What has changed since and what type of changes to the programming model
>> do you envision?
>
> We changed to conservative stack scanning from precise stack scanning.
>
> Precise stack scanning required each pointer in each activation frame
> to be made known to the GC via a special smart pointer (aka Handle). After
> multiple experiments we determined that, with the current state of the C++
> programming language and C++ compilers, it is impossible to implement precise
> stack scanning for C++ code in a portable way without overhead.
>
> This means the Handle/Result/HandleScope things you saw before are not needed
> anymore, and neither a cognitive nor a performance penalty is introduced.

That is great news and makes me very happy!

> If I were to highlight important changes they would be:
>
> 1) the way you declare new classes of heap-allocated objects: you need to
> inherit from the right class and ensure that pointers from this class to
> other ones are traced correctly;
>
> 2) the way to rely on weakness to perform cleanup that would previously
> require back pointers and special logic in destructors.
>
> Both of these things are described in detail in the document.
>
> Once you have declared your class in the right way you can work with it just
> using raw pointers. You no longer have to care about doing something like
> RefPtr protect(this) and other terrible hacks: the GC will manage the lifetime
> of local objects and everything reachable from them completely transparently.

Makes sense and seems entirely reasonable. Could you point out a
typical class in the branch that might serve as a good example and
that you think demonstrates the model well? The examples in the doc
are great but getting a little more context would help.

Thanks!

--
Emil

Adam Barth

Dec 3, 2013, 6:16:44 PM
to Vyacheslav Egorov, Emil A Eklund, Mads Sig Ager, blink-dev
On Tue, Dec 3, 2013 at 2:54 PM, Vyacheslav Egorov <veg...@chromium.org> wrote:
What has changed since and what type of changes to the programming model do you envision?

We changed to conservative stack scanning from precise stack scanning.

Precise stack scanning required each pointer in each activation frame to be made known to the GC via a special smart pointer (aka Handle). After multiple experiments we determined that, with the current state of the C++ programming language and C++ compilers, it is impossible to implement precise stack scanning for C++ code in a portable way without overhead.

This means the Handle/Result/HandleScope things you saw before are not needed anymore, and neither a cognitive nor a performance penalty is introduced.

So I would say there is almost no change in the way you would write code. For local references you use raw pointers and things just work.

That is a bit of a change in how we write the code.  Instead of using a mix of RefPtr, PassRefPtr, and raw pointers, you use the following rules:

1) If you're holding the value on the stack (e.g., in a local variable), you always use a raw pointer (i.e., Foo* or Foo&).
2) If you're holding the value as a member variable of another HeapAllocated object, you use either Member<Foo> or WeakMember<Foo> depending on whether you want a strong or a weak reference.
3) If you're holding the value as a member variable of a non-HeapAllocated object, you use Persistent<Foo>, which is a strong reference.  (Maybe we should have WeakPersistent too?)

There are some other changes:

A) Objects that subclass HeapAllocated need to implement a |trace| function that visits each of their members so the GC can trace the object's pointers to other objects.
B) Objects are no longer finalized in a deterministic order, which means we need to be a bit more careful in how we write destructors for HeapAllocated objects.  Specifically, we shouldn't access other HeapAllocated objects in destructors of HeapAllocated objects.
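
To make those rules concrete, here is a minimal sketch (the class names are made up for illustration, and the exact Oilpan signatures may differ slightly):

class Child : public GarbageCollected<Child> {
public:
    void trace(Visitor*) { }
};

class Parent : public GarbageCollectedFinalized<Parent> {
public:
    // Rule A: a trace method that visits every traced member.
    void trace(Visitor* visitor)
    {
        visitor->trace(m_child);        // rule 2: strong reference to another heap object
        visitor->trace(m_weakSibling);  // weak reference, cleared when its target dies
    }

private:
    Member<Child> m_child;
    WeakMember<Parent> m_weakSibling;
};

// Rule 3: a non-heap-allocated object keeps a heap object alive through Persistent<>.
class OffHeapController {
private:
    Persistent<Parent> m_parent;
};

// Rule 1: on the stack (locals, arguments) you just use raw pointers;
// conservative stack scanning keeps whatever they point to alive.
void useOnStack(Parent* parent, Child* child);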

Adam

Dominic Cooney

Dec 3, 2013, 6:36:25 PM
to Adam Barth, Vyacheslav Egorov, Emil A Eklund, Mads Sig Ager, blink-dev
I'm looking forward to Oilpan landing. I'd really like to use it in Custom Elements, which has both JavaScript entanglements and lots of weak-lifetime-observation of documents, elements, etc.

I understand on your branch you took an incremental approach using something which melded RefCounted and HeapAllocated. Will that exist on trunk?

Dominic

Kentaro Hara

Dec 3, 2013, 7:13:58 PM
to e...@chromium.org, Vyacheslav Egorov, Mads Sig Ager, blink-dev
Makes sense and seems entirely reasonable. Could you point out a
typical class in the branch that might serve as a good example and
that you think demonstrates the model well? The examples in the doc
are great but getting a little more context would help.

For example, this is a CL that moves several IndexedDB classes to Oilpan.

The CL does the almost mechanical replacement that Adam described below.

1) If you're holding the value on the stack (e.g., in a local variable), you always use a raw pointer (i.e., Foo* or Foo&).
2) If you're holding the value as a member variable of another HeapAllocated object, you use either Member<Foo> or WeakMember<Foo> depending on whether you want a strong or a weak reference.
3) If you're holding the value as a member variable of a non-HeapAllocated object, you use Persistent<Foo>, which is a strong reference.  (Maybe we should have WeakPersistent too?) 
 
There are some other changes: 
 
A) Objects that subclass HeapAllocated need to implement a |trace| function that visits each of their members so the GC can trace the object's pointers to other objects.
B) Objects are no longer finalized in a deterministic order, which means we need to be a bit more careful in how we write destructors for HeapAllocated objects.  Specifically, we shouldn't access other HeapAllocated objects in destructors of HeapAllocated objects.



--
Kentaro Hara, Tokyo, Japan

Kentaro Hara

Dec 3, 2013, 7:23:04 PM
to Dominic Cooney, Adam Barth, Vyacheslav Egorov, Emil A Eklund, Mads Sig Ager, blink-dev
I understand on your branch you took an incremental approach using something which melded RefCounted and HeapAllocated. Will that exist on trunk?

Making objects both RefCounted and HeapAllocated affects performance, since we have to keep a Persistent handle internally whenever the reference count is 1 or more. This is OK for non-performance-sensitive objects (e.g., DOM objects in modules/), so we're going to use RefCounted-and-HeapAllocated for them in a transition period and make the migration incremental. On the other hand, I'm not sure whether the performance is OK for Node & CSS objects. If the performance is OK, we can take the incremental approach there too, but otherwise we have to develop behind a compile-time flag.
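
Roughly, one way to picture how such a RefCounted-and-HeapAllocated object could work (a hedged sketch; the names below are illustrative, not the actual Oilpan transition classes):

template <typename T>
class RefCountedHeapObject : public GarbageCollectedFinalized<T> {
public:
    RefCountedHeapObject() : m_refCount(0), m_keepAlive(0) { }

    void ref()
    {
        // While the reference count is non-zero, pin the object with a
        // Persistent handle so the garbage collector treats it as a root.
        if (!m_refCount++)
            m_keepAlive = new Persistent<T>(static_cast<T*>(this));
    }

    void deref()
    {
        // When the last reference goes away, drop the Persistent; from then
        // on the object lives or dies purely by tracing.
        if (!--m_refCount) {
            delete m_keepAlive;
            m_keepAlive = 0;
        }
    }

private:
    int m_refCount;
    Persistent<T>* m_keepAlive;
};

The Persistent bookkeeping on the 0-to-1 and 1-to-0 transitions, plus the extra GC root, is roughly where the overhead mentioned above comes from.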

Mads Sig Ager

Dec 4, 2013, 2:29:24 AM
to e...@chromium.org, Vyacheslav Egorov, blink-dev
One example that could serve as an illustration is core/css/StyleSheetContents.[h,cpp].

It holds a strong pointer to another heap-allocated object, two vectors of strong pointers to other heap-allocated objects, and a set of weak pointers to other heap-allocated objects. So it uses both strong and weak pointers.

In the implementation file, there used to be a 'RefPtr<StyleSheetContents> protect(this)' in checkLoaded. That is no longer needed. All pointers to Document, Node, StyleSheetContents and other objects that are allocated in the oilpan heap are just raw pointers.
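
To give a feel for what that looks like, here is a rough sketch (assuming heap collection types along the lines of HeapVector and HeapHashSet; the class and field names are illustrative, not the exact StyleSheetContents members):

class StyleRuleLike;
class ClientLike;

class SheetContentsLike : public GarbageCollectedFinalized<SheetContentsLike> {
public:
    void trace(Visitor* visitor)
    {
        visitor->trace(m_ownerRule);
        visitor->trace(m_importRules);
        visitor->trace(m_childRules);
        visitor->trace(m_clients);
    }

private:
    Member<StyleRuleLike> m_ownerRule;                  // strong pointer to a heap object
    HeapVector<Member<StyleRuleLike> > m_importRules;   // vectors of strong pointers
    HeapVector<Member<StyleRuleLike> > m_childRules;
    HeapHashSet<WeakMember<ClientLike> > m_clients;     // set of weak pointers
};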

Cheers,    -- Mads


Andy Wingo

Dec 4, 2013, 4:50:21 AM
to Vyacheslav Egorov, e...@chromium.org, Mads Sig Ager, blink-dev
Hi,

Just a curious question :)

On Tue 03 Dec 2013 23:54, Vyacheslav Egorov <veg...@chromium.org> writes:

> We changed to conservative stack scanning from precise stack scanning.
>
> Precise stack scanning required each pointer in each activation
> frame to be made known to the GC via a special smart pointer (aka Handle).
> After multiple experiments we determined that, with the current state of
> the C++ programming language and C++ compilers, it is impossible to implement
> precise stack scanning for C++ code in a portable way without overhead.
>
> This means the Handle/Result/HandleScope things you saw before are not
> needed anymore, and neither a cognitive nor a performance penalty is
> introduced.

Clearly it's possible to do precise stack scanning, as you already
implemented it, and it is what is done in V8 itself. (I understand that
would also make it possible to compact; losing that must have been quite
irritating!) But do I understand correctly that it was just too slow?
By "the state of the C++ compilers" you mean that there are
handle-related optimizations that the compilers could have made but
aren't making? What was the penalty?

Curious minds want to know ;-)

Cheers,

Andy

Mads Sig Ager

Dec 4, 2013, 5:13:45 AM
to Andy Wingo, Vyacheslav Egorov, e...@chromium.org, blink-dev
On Wed, Dec 4, 2013 at 10:50 AM, Andy Wingo <wi...@igalia.com> wrote:
Hi,

Just a curious question :)

Happy to answer! :-) 
 
On Tue 03 Dec 2013 23:54, Vyacheslav Egorov <veg...@chromium.org> writes:

> We changed to conservative stack scanning from precise stack scanning.
>
> Precise stack scanning required each pointer in each activation
> frame to be made known to the GC via a special smart pointer (aka Handle).
> After multiple experiments we determined that, with the current state of
> the C++ programming language and C++ compilers, it is impossible to implement
> precise stack scanning for C++ code in a portable way without overhead.
>
> This means the Handle/Result/HandleScope things you saw before are not
> needed anymore, and neither a cognitive nor a performance penalty is
> introduced.

Clearly it's possible to do precise stack scanning, as you already
implemented it, and it is what is done in V8 itself.  (I understand that
would also make it possible to compact; losing that must have been quite
irritating!)

Compaction with a stack was going to be really hard even with handles to keep track of pointers on the stack. The only way to make that safe would be to make sure that all uses of the |this| pointer are explicit and always go through a handle for |this|. There are a lot of implicit accesses to |this| in C++ code, so even with precise stack information I find it unlikely that we would ever attempt to compact with an active Blink stack.

The good news is that we can postpone most GCs to the event loop. At the event loop there is no Blink stack and therefore no Blink pointers on the stack. That also means that there is no running code that will rely on the |this| pointer not changing. Therefore, at the event loop we should be able to compact with conservative stack scanning because we know there is nothing on the stack that we need to update.
 
 But do I understand correctly that it was just too slow?
 
What was the penalty?

Yes, it was very slow. We went for a handle system that was as easy to work with as they get for programmers. That type of handle turns every pointer on the stack into an indirect pointer through a handle stack that the garbage collector knows about. Allocating slots in the handle stack and the extra indirection are really expensive. We saw slowdowns on the order of 2-10x on micro benchmarks.

There are more efficient ways to do handles where the handles are on the stack, but they are pretty hostile to the programmer. So we didn't want to go there. It would make programming Blink a lot harder.

(As an aside, the reason why handles are not a problem in language runtimes such as V8 is that the language runtime will do everything it can to stay in generated code, where the compiler generates stack maps that give precise information about the stack (or tags everything on the stack). When running generated code you therefore do not pay for handles. Going into the runtime system is expensive and there you pay for the handles. However, most of the time you can avoid it and stay in generated code. That option is not available in Blink.)
 
By "the state of the C++ compilers" you mean that there are
handle-related optimizations that the compilers could have made but
aren't making?  

We experimented with various ways to make the Handle implementation faster by relying on the compiler to optimize certain patterns. As an example, we had a simple class that was just wrapping a raw pointer and had a couple of constructors. We expected compilers to be able to treat instances of such a class as a raw pointer and pass it in registers. However, that is not the case (at least not on 32-bit and not at all on Windows I believe). In the end we concluded that we couldn't make our handle implementation fast enough. Conservative scanning is working really well for us in this setting and it makes it much easier to use the system for the Blink programmer.
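
Roughly, the kind of wrapper in question (a hypothetical sketch, not the actual branch code):

template <typename T>
class Handle {
public:
    Handle() : m_ptr(0) { }
    Handle(T* ptr) : m_ptr(ptr) { }

    T* operator->() const { return m_ptr; }
    T& operator*() const { return *m_ptr; }
    T* get() const { return m_ptr; }

private:
    T* m_ptr; // just a raw pointer; one would hope it gets passed in a register like T* itself
};

Even a wrapper this thin ends up treated differently from a plain T* by the calling conventions, especially on 32-bit targets.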

Cheers,    -- Mads

Erik Corry

Dec 4, 2013, 7:38:04 AM
to Mads Sig Ager, Andy Wingo, Vyacheslav Egorov, e...@chromium.org, blink-dev
One of the deal breakers was the calling conventions.  All 32-bit calling conventions treat smart pointers as second-class citizens, producing much poorer code than for raw pointers when returning things from and passing things to functions.  However good an optimizer you have, it's hard to do anything about the calling convention.

Another big issue was C++ iterators.  C++ iterators are not heavyweight objects like Java iterators that are created once per loop.  The normal style for C++ iterators is to have them as very lightweight value objects that are created and destructed extremely frequently.  You want the iterators to keep the collections alive (and to make weak collections strong to avoid surprises), but if you give iterators a constructor or destructor, you lose a lot of performance.  In the conservatively collected scenario, on-stack iterators are just raw pointers into their backing collections, which works fine.





--
Erik Corry
Google Denmark ApS
Sankt Petri Passage 5, 2. sal
1165 København K - Denmark - CVR nr. 28 86 69 84

Eric Seidel

Dec 9, 2013, 2:04:19 AM
to Mads Sig Ager, blink-dev
Congrats! I'm very excited to see this moving closer towards shipping!

My primary concerns are:
- Lack of short-term performance regression.
- Potential for long-term performance gain.
- Not making things harder for developers.

Your doc seems to largely address those.


I have some further concerns of:
- These changes are basically irreversible (once we remove all our
delicate cycle avoidance, getting it back would be extremely
difficult).
- Making sure all of Blink knows how to develop with (and ideally even
improve on) Oilpan. (Oilpan was largely developed by folks outside of
the historical core/OWNERS. Hopefully that means we're getting more
core/OWNERS or if not, at least that the core/OWNERS are ready to
maintain should the Oilpanners move on to greener pastures.)

I'm pretty convinced Blink is ready to take the plunge. But as for
spreading the knowledge: I hope that there are continued discussions
in the coming months about Oilpan best practices and future plans. I
would also encourage moving (as much as possible) all oilpan related
discussions to blink-dev instead of any oilpan specific lists now that
oilpan is moving from a branch onto trunk.

Thanks again for all this awesome work!

Mads Sig Ager

Dec 9, 2013, 2:13:41 AM
to Eric Seidel, blink-dev
Thanks Eric!

I understand and agree with your concerns. We will be responsible owners of Oilpan and do all we can to spread knowledge about it.

Cheers,    -- Mads 


Kentaro Hara

Dec 9, 2013, 4:07:17 AM
to Mads Sig Ager, Eric Seidel, blink-dev
Today I gave a tech talk of Oilpan in Tokyo. Here is a slide:

https://docs.google.com/a/google.com/presentation/d/1YtfurcyKFS0hxPOnC3U6JJroM8aRP49Yf0QWznZ9jrk/edit#slide=id.p

Hope it will be helpful to understand Oilpan. Feel free to add comments to the slide :)


David Turner

Dec 9, 2013, 4:43:47 AM
to Kentaro Hara, Mads Sig Ager, Eric Seidel, blink-dev
Thanks, that's really fascinating.

Quick question: did you try to perform performance measurements on ARM? Given the vast differences in CPU performance compared with desktop-grade hardware, it would be interesting to see how the results change for mobile (and many Chromebooks :-)).

- Digit

Mads Sig Ager

Dec 9, 2013, 4:53:52 AM
to David Turner, Kentaro Hara, Eric Seidel, blink-dev
We did not get performance measurements for ARM yet. I tried to get the branch building for Android a couple of weeks ago and failed because I couldn't get a checkout of a version old enough to be compatible with our oilpan Blink branch. We will make sure to get coverage on all platforms while developing under a compile-time flag so we have full information before turning this on and shipping it.

Cheers,    -- Mads

dongseo...@intel.com

Dec 9, 2013, 7:14:33 AM
to blin...@chromium.org, Mads Sig Ager, Eric Seidel
Very good slides that make complicated content easy to understand!

One question. Will Oilpan affect code related to OwnPtr also? I mean, will we keep managing OwnPtr off the Oilpan heap as-is?

- DS

Kentaro Hara

Dec 9, 2013, 7:27:24 AM
to dongseo...@intel.com, blink-dev, Mads Sig Ager, Eric Seidel
One question. Will Oilpan affect code related to OwnPtr also? I mean, will we keep managing OwnPtr off the Oilpan heap as-is?

Good question. Oilpan sometimes affects OwnPtrs.

We need to move OwnPtr to Oilpan's heap if the OwnPtr produces a cycle. For example, the following X and Y produce a cycle.

  class X {  // This is off-heap.
    Persistent<Y> m_y;
  };

  class Y : public GarbageCollectedFinalized<Y> {  // This is on-heap.
    OwnPtr<X> m_x;
  };

In this case, we need to move X to Oilpan's heap and change the code to:

  class X : public GarbageCollected<X> {  // This is on-heap.
    void trace(Visitor* visitor) { visitor->trace(m_y); }  // Member fields must be traced.
    Member<Y> m_y;
  };

  class Y : public GarbageCollected<Y> {  // This is on-heap.
    void trace(Visitor* visitor) { visitor->trace(m_x); }
    Member<X> m_x;
  };

dongseo...@intel.com

Dec 9, 2013, 7:45:18 AM
to blin...@chromium.org, dongseo...@intel.com, Mads Sig Ager, Eric Seidel
Thank you for the quick answer. It's a very reasonable answer :)

I guess OwnPtr can live on until Blink allows C++11 unique_ptr.

- DS

David Turner

Dec 9, 2013, 9:29:10 AM
to Mads Sig Ager, Kentaro Hara, Eric Seidel, blink-dev
On Mon, Dec 9, 2013 at 10:53 AM, Mads Sig Ager <ag...@chromium.org> wrote:
We did not get performance measurements for ARM yet. I tried to get the branch building for Android a couple of weeks ago and failed because I couldn't get a checkout of a version old enough to be compatible with our oilpan Blink branch. We will make sure to get coverage on all platforms while developing under a compile-time flag so we have full information before turning this on and shipping it.

Thanks, that's really good to hear. Good luck, and let us know if you have Android-specific issues :-)

Chris Harrelson

Dec 9, 2013, 12:59:17 PM
to Mads Sig Ager, Eric Seidel, blink-dev
On Sun, Dec 8, 2013 at 11:13 PM, Mads Sig Ager <ag...@chromium.org> wrote:
Thanks Eric!

I understand and agree with your concerns. We will be responsible owners of Oilpan and do all we can to spread knowledge about it.

Hi Mads,

Looking at the slides and reading this discussion, it appears there are 6 different ways to write a pointer in Oilpan. And also you have to write a (somewhat non-trivial I guess?) trace() method.

It wasn't that clear from reading the slides what the consequences of doing it wrong are, or what tools we have to make sure we don't mess it up, so I figured I would just ask. Is it as complicated to get right as it sounds? I do agree that it is probably no more complicated than the existing system, but it would be even nicer to be less complicated..

Chris

Mads Sig Ager

Dec 9, 2013, 1:28:54 PM
to Chris Harrelson, Eric Seidel, blink-dev
On Mon, Dec 9, 2013 at 6:59 PM, Chris Harrelson <chri...@chromium.org> wrote:

On Sun, Dec 8, 2013 at 11:13 PM, Mads Sig Ager <ag...@chromium.org> wrote:
Thanks Eric!

I understand and agree with your concerns. We will be responsible owners of Oilpan and do all we can to spread knowledge about it.

Hi Mads,

Looking at the slides and reading this discussion, it appears there are 6 different ways to write a pointer in Oilpan. And also you have to write a (somewhat non-trivial I guess?) trace() method.

It wasn't that clear from reading the slides what the consequences of doing it wrong are, or what tools we have to make sure we don't mess it up, so I figured I would just ask. Is it as complicated to get right as it sounds? I do agree that it is probably no more complicated than the existing system, but it would be even nicer to be less complicated..

I don't find it particularly complicated and we have made it as simple as we could. As Haraken points out in his slides, there are clear rules on when to use which type of pointer. Also, I find that there are a lot of cases where it actually clarifies the code. Currently in the code, when you have a raw pointer you do not know if it is a weak pointer or if it is only raw in order to break a ref cycle. With the Member/WeakMember distinction it is now clear what the meaning of the pointer is. Either it is a strong traced pointer or it is a weak pointer that does not keep what it is pointing to alive and which will be cleared when what is being pointed to is no longer reachable.

It sounds complicated to have to write a trace method. However, in reality, most of the trace methods just trace all of the members that have pointers to other heap objects and have the overall form:

void trace(Visitor* visitor)
{
    visitor->trace(m_firstMember);
    visitor->trace(m_secondMember);
    ...
}

Because this is so stylized we are going to write a clang plugin to verify that all fields of a class whose type contains Member or WeakMember are traced in the trace method. Tracing correctly is very important. If you get it wrong the garbage collector will reuse the memory and you get a use-after-free situation. Therefore, the clang plugin is a high priority for us to help developers avoid this pitfall.

There are other classes of problems that you can get into. Using Persistent in something that is in the oilpan heap is disallowed (because it can lead to leaks - you should use Member and trace the pointer in the trace method). Again, this is something that can be detected with the clang plugin.
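
As a sketch of the disallowed pattern the plugin would flag (hypothetical class names):

class Leaky : public GarbageCollected<Leaky> {
private:
    // Disallowed: a Persistent inside a heap-allocated object keeps its target
    // (and, through cycles, possibly the holder itself) alive indefinitely.
    Persistent<Leaky> m_other;

    // Correct: use Member and visit it in trace().
    // Member<Leaky> m_other;
};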

Overall, it is hard to answer the question of "is this more complicated than before". There are new types and you do have to understand what they mean and how to use them. However, we have designed the system so there are clear rules about which type to use when. We are going to write clang plugins to verify the usage of the types. I guess the real question is if the benefits (no need to worry about cycles, fewer dangling pointers, simplified language bindings) are worth the additional cognitive load. In my mind the answer to that question is a clear yes. :-)

Cheers,    -- Mads

Chris Harrelson

Dec 9, 2013, 1:41:32 PM
to Mads Sig Ager, Eric Seidel, blink-dev
On Mon, Dec 9, 2013 at 10:28 AM, Mads Sig Ager <ag...@chromium.org> wrote:
On Mon, Dec 9, 2013 at 6:59 PM, Chris Harrelson <chri...@chromium.org> wrote:

On Sun, Dec 8, 2013 at 11:13 PM, Mads Sig Ager <ag...@chromium.org> wrote:
Thanks Eric!

I understand and agree with your concerns. We will be responsible owners of Oilpan and do all we can to spread knowledge about it.

Hi Mads,

Looking at the slides and reading this discussion, it appears there are 6 different ways to write a pointer in Oilpan. And also you have to write a (somewhat non-trivial I guess?) trace() method.

It wasn't that clear from reading the slides what the consequences of doing it wrong are, or what tools we have to make sure we don't mess it up, so I figured I would just ask. Is it as complicated to get right as it sounds? I do agree that it is probably no more complicated than the existing system, but it would be even nicer to be less complicated..

I don't find it particularly complicated and we have made it as simple as we could. As Haraken points out in his slides, there are clear rules on when to use which type of pointer. Also, I find that there are a lot of cases where it actually clarifies the code. Currently in the code, when you have a raw pointer you do not know if it is a weak pointer or if it is only raw in order to break a ref cycle. With the Member/WeakMember distinction it is now clear what the meaning of the pointer is. Either it is a strong traced pointer or it is a weak pointer that does not keep what it is pointing to alive and which will be cleared when what is being pointed to is no longer reachable.

The Member/WeakMember distinction I can see being inevitable. The part that worries me most is the other flavors of pointer depending on whether the pointer destination is on/off heap, or the pointer sits in the stack.

From your commentary below, it sounds like there are several cases where if you pick the wrong one you can have a serious memory error? I'm also wondering if there is a way to better utilize the type system to disallow some categories of bad usage.
 

It sounds complicated to have to write a trace method. However, in reality, most of the trace methods just trace all of the members that have pointers to other heap objects and have the overall form:

void trace(Visitor* visitor)
{
    visitor->trace(m_firstMember);
    visitor->trace(m_secondMember);
    ...
}

Because this is so stylized we are going to write a clang plugin to verify that all fields of a class whose type contains Member or WeakMember are traced in the trace method. Tracing correctly is very important. If you get it wrong the garbage collector will reuse the memory and you get a use-after-free situation. Therefore, the clang plugin is a high priority for us to help developers avoid this pitfall.

There are other classes of problems that you can get into. Using Persistent in something that is in the oilpan heap is disallowed (because it can lead to leaks - you should use Member and trace the pointer in the trace method). Again, this is something that can be detected with the clang plugin.

Great. Is it reasonable to wait for the plugin to be ready before executing the bulk of the conversion?
 

Overall, it is hard to answer the question of "is this more complicated than before". There are new types and you do have to understand what they mean and how to use them. However, we have designed the system so there are clear rules about which type to use when. We are going to write clang plugins to verify the usage of the types. I guess the real question is if the benefits (no need to worry about cycles, fewer dangling pointers, simplified language bindings) are worth the additional cognitive load. In my mind the answer to that question is a clear yes. :-)

+1 to the benefits being real and nice improvements.

Mads Sig Ager

Dec 9, 2013, 2:18:55 PM
to Chris Harrelson, Eric Seidel, blink-dev
On Mon, Dec 9, 2013 at 7:41 PM, Chris Harrelson <chri...@chromium.org> wrote:
On Mon, Dec 9, 2013 at 10:28 AM, Mads Sig Ager <ag...@chromium.org> wrote:
On Mon, Dec 9, 2013 at 6:59 PM, Chris Harrelson <chri...@chromium.org> wrote:

On Sun, Dec 8, 2013 at 11:13 PM, Mads Sig Ager <ag...@chromium.org> wrote:
Thanks Eric!

I understand and agree with your concerns. We will be responsible owners of Oilpan and do all we can to spread knowledge about it.

Hi Mads,

Looking at the slides and reading this discussion, it appears there are 6 different ways to write a pointer in Oilpan. And also you have to write a (somewhat non-trivial I guess?) trace() method.

It wasn't that clear from reading the slides what the consequences of doing it wrong are, or what tools we have to make sure we don't mess it up, so I figured I would just ask. Is it as complicated to get right as it sounds? I do agree that it is probably no more complicated than the existing system, but it would be even nicer to be less complicated..

I don't find it particularly complicated and we have made it as simple as we could. As Haraken points out in his slides, there are clear rules on when to use which type of pointer. Also, I find that there are a lot of cases where it actually clarifies the code. Currently in the code, when you have a raw pointer you do not know if it is a weak pointer or if it is only raw in order to break a ref cycle. With the Member/WeakMember distinction it is now clear what the meaning of the pointer is. Either it is a strong traced pointer or it is a weak pointer that does not keep what it is pointing to alive and which will be cleared when what is being pointed to is no longer reachable.

The Member/WeakMember distinction I can see being inevitable. The part that worries me most is the other flavors of pointer depending on whether the pointer destination is on/off heap, or the pointer sits in the stack.

From your commentary below, it sounds like there are several cases where if you pick the wrong one you can have a serious memory error? I'm also wondering if there is a way to better utilize the type system to disallow some categories of bad usage.

There are a couple of bad usages. However, we can guard against them with a clang plugin. I don't think we can use the type system directly to disallow more categories of bad usage. At least I don't see how at this point.
 
It sounds complicated to have to write a trace method. However, in reality, most of the trace methods just trace all of the members that have pointers to other heap objects and have the overall form:

void trace(Visitor* visitor)
{
    visitor->trace(m_firstMember);
    visitor->trace(m_secondMember);
    ...
}

Because this is so stylized we are going to write a clang plugin to verify that all fields of a class whose type contains Member or WeakMember are traced in the trace method. Tracing correctly is very important. If you get it wrong the garbage collector will reuse the memory and you get a use-after-free situation. Therefore, the clang plugin is a high priority for us to help developers avoid this pitfall.

There are other classes of problems that you can get into. Using Persistent in something that is in the oilpan heap is disallowed (because it can lead to leaks - you should use Member and trace the pointer in the trace method). Again, this is something that can be detected with the clang plugin.

Great. Is it reasonable to wait for the plugin to be ready before executing the bulk of the conversion?

Yes. It will take a bit of time to land the basic infrastructure for oilpan. While we are doing that we will also be working on the clang plugin so that we can get the verification in as we start doing the actual Blink changes that use Oilpan. 

Erik Corry

Dec 9, 2013, 3:06:51 PM
to dongseo...@intel.com, blink-dev, Mads Sig Ager, Eric Seidel
In the long run, I expect OwnPtr, RefPtr and PassRefPtr all to go away.  They will remain in the short run for the benefit of objects that have not been moved to the GCed heap yet.  The same applies to raw pointers and references anywhere but the stack.  Persistents will be used as sparingly as possible, but they will never go away completely, since they are the roots from which the transient reachability tracing starts.



Dongseong Hwang

Dec 9, 2013, 5:15:29 PM
to blin...@chromium.org, dongseo...@intel.com, Mads Sig Ager, Eric Seidel
Could I ask why OwnPtr will also go away? IMO, the reasons for Oilpan are not related to removing OwnPtr: cycle references, binding hacks, read-after-delete, etc.
In addition, removing OwnPtr may affect performance badly, because the GC would need to trace objects that are currently just owned through OwnPtr.


On the other hand, how about introducing some macro to remove the boilerplate trace()?

class X : GarbageCollected<X> {
...
TRACABLE_MEMBER_BEGIN
    Foo m_firstMember;
    Member<Bar> m_secondMember;
TRACABLE_MEMBER_END
...
}

INSTEAD OF

void trace(Visitor* visitor)
{
    visitor->trace(m_firstMember);
    visitor->trace(m_secondMember);
    ...
}

- DS

Kentaro Hara

Dec 9, 2013, 7:55:40 PM
to Mads Sig Ager, Chris Harrelson, Eric Seidel, blink-dev
Yes. It will take a bit of time to land the basic infrastructure for oilpan. While we are doing that we will also be working on the clang plugin so that we can get the verification in as we start doing the actual Blink changes that use Oilpan. 

Regarding correctness verification, we're planning to use a couple of verification tools to make sure that Oilpan changes won't cause any use-after-frees or memory leaks.

- The clang plugin will catch most programming errors related to Oilpan. This is static verification.

- Oilpan supports ASAN. Oilpan poisons freed memory so that ASAN can catch use-after-frees that take place in Oilpan's heap. ASAN can also detect destructors that rely on destruction order. This is dynamic verification.

- A leak detector will verify that Oilpan changes won't cause memory leaks. The leak detector loads each layout test into an <iframe> and counts the number of DOM objects before and after the load. Specifically, the detector (1) calls GC, (2) counts the number of DOM objects, (3) loads one layout test into the <iframe>, (4) clears the <iframe>, (5) calls GC, and (6) counts the number of DOM objects again. If there is any difference between the counts in (2) and (6), the detector reports that there is a leak.

Dominic Cooney

Dec 9, 2013, 8:00:28 PM
to Kentaro Hara, Mads Sig Ager, Chris Harrelson, Eric Seidel, blink-dev
Can someone from V8 give some indication of when a v8::Value and internal field can point to an Oilpan heap object with the same semantics as a Member<T> and WeakMember<T> (and *not* Persistent<T>)?

Some of the benefits I imagined this having for the Custom Elements implementation rely on also simplifying cross Oilpan-V8 heap cycles. Is there a rough timetable for that?

Dominic

Kentaro Hara

Dec 9, 2013, 8:06:42 PM
to Dongseong Hwang, blink-dev, Mads Sig Ager, Eric Seidel
Could I ask why OwnPtr will also go away? IMO, the reasons for Oilpan are not related to removing OwnPtr: cycle references, binding hacks, read-after-delete, etc.
In addition, removing OwnPtr may affect performance badly, because the GC would need to trace objects that are currently just owned through OwnPtr.

Yeah, I don't think we want to move all OwnPtrs to Oilpan's heap. Our first target would be all RefPtrs and OwnPtrs that need to be migrated to Oilpan's heap for a good reason (breaking cycles, removing binding hacks, etc.). I think we can leave other OwnPtrs as is (and replace them with C++11 unique_ptr eventually).


Kentaro Hara

Dec 9, 2013, 8:26:10 PM
to Dominic Cooney, Mads Sig Ager, Chris Harrelson, Eric Seidel, blink-dev
Can someone from V8 give some indication of when a v8::Value and internal field can point to an Oilpan heap object with the same semantics as a Member<T> and WeakMember<T> (and *not* Persistent<T>)?
 
Some of the benefits I imagined this having for the Custom Elements implementation rely on also simplifying cross Oilpan-V8 heap cycles. Is there a rough timetable for that?

[Adding a context of the question]
We need to make sure that Oilpan's object X is kept alive as long as X's wrapper is alive. Currently we're realizing it by simply storing a Persistent<X> into the internal field of X's wrapper. This implies that although cycles inside Oilpan's heap are OK, cycles that cross Oilpan and V8 are not allowed. To allow cycles that cross Oilpan and V8, we need to store Member<X> to the internal field of X's wrapper and implement a good GC interaction between V8's GC and Oilpan's GC. (Specifically, we need to implement a GC interaction so that two GCs can exchange cross-boundary pointers iteratively until they mark everything.) Once we implement the GC interaction, we can dramatically simplify a lot of binding architectures, including Custom Elements, Promises, and other features that use ScriptValues or [GenerateIsReachable] etc.

Thus we do want to implement the GC interaction in the near future, but it won't happen in the first iteration of Oilpan because of the complexity. For example, we need to consider how to make a minor GC workable in the GC interaction (It's doable but needs consideration). I'm not sure about the timetable but I think it will happen after we stabilize Oilpan's architecture in the Blink side (That's the first priority at this point).

Mads might have some idea about the timeline.

Mads Sig Ager

Dec 10, 2013, 1:53:33 AM
to Dongseong Hwang, blink-dev, Eric Seidel
On Mon, Dec 9, 2013 at 11:15 PM, Dongseong Hwang <dongseo...@intel.com> wrote:
Could I ask why OwnPtr will also go away? IMO, the reasons for Oilpan are not related to removing OwnPtr: cycle references, binding hacks, read-after-delete, etc.
In addition, removing OwnPtr may affect performance badly, because the GC would need to trace objects that are currently just owned through OwnPtr.

Time will tell if OwnPtr will go away. In many situations the owned object will end up pointing to other heap objects. When that happens we need to trace through it anyway and it becomes simpler to make the owned object garbage collected as well so that the structure is clear and everything is just traced. There are many simple objects as well that are currently owned with OwnPtr and for those I don't think it makes much of a difference if they stay OwnPtr or if we put them in the oilpan heap.
 

On the other hand, how about introducing some macro to remove the boilerplate trace()?

class X : GarbageCollected<X> {
...
TRACABLE_MEMBER_BEGIN
    Foo m_firstMember;
    Member<Bar> m_secondMember;
TRACABLE_MEMBER_END
...
}

INSTEAD OF

void trace(Visitor* visitor)
{
    visitor->trace(m_firstMember);
    visitor->trace(m_secondMember);
    ...
}

We considered using macros, but it quickly becomes ugly and unclear what is going on. With the explicit trace method it is at least clear what is going on, and we can build tooling to catch cases where you forget to visit something.

Mads Sig Ager

Dec 10, 2013, 2:09:21 AM
to Dominic Cooney, Kentaro Hara, Chris Harrelson, Eric Seidel, blink-dev
On Tue, Dec 10, 2013 at 2:00 AM, Dominic Cooney <domi...@chromium.org> wrote:
Can someone from V8 give some indication of when a v8::Value and internal field can point to an Oilpan heap object with the same semantics as a Member<T> and WeakMember<T> (and *not* Persistent<T>)?

Some of the benefits I imagined this having for the Custom Elements implementation rely on also simplifying cross Oilpan-V8 heap cycles. Is there a rough timetable for that?

We haven't started working on that part yet. For the time being we are focussing on making Blink garbage collected and are keeping the bindings mostly as is. There are nasty life-time management hacks where reference structures that should be on the Blink side are being pushed into JavaScript because Blink cannot deal with cycles. Those are the V8 interactions that we will be simplifying first by making Blink capable of dealing with these cycles on its own.

Once we are well under way with making Blink garbage collected we will start working on having the GCs of V8 and Blink interact. That work requires *all* of the Blink objects that have wrappers to be in the oilpan heap and it requires V8 and Blink to perform GCs together and exchange pointer information until they agree on reachability. There are a number of challenges involved in that. One challenge is making sure that we do not slow down any of the systems too much because it has to wait for the other one to finish its GC marking phase.

I'm really looking forward to this work. It will be fun and it will allow us to greatly simplify a lot of structures. However, it will probably be a couple of quarters before we can really get started on that.

Cheers,    -- Mads

Dominic Cooney

Dec 10, 2013, 4:03:02 AM
to Mads Sig Ager, Kentaro Hara, Chris Harrelson, Eric Seidel, blink-dev
Thanks for the detailed reply Mads.

Re: moving reference cycles that Blink "exports" to the V8 heap back into the Blink side, was this feasible in practice? Because the pattern of references on the Blink side are basically controlled by people hacking Blink C++ code, but on the V8 side the page author can always create references between arbitrary JavaScript wrappers with expando properties.

When the cycles and the author's ad-hoc references are on the V8 side, V8 GC can deal with them; if the cycles are on the Oilpan heap but the author's ad-hoc references remain on the V8 side, won't this create a cross-heap cycle and leak?

Or are there existing cycles that never include an object that can have a JavaScript wrapper?

Dominic

Mads Sig Ager

Dec 10, 2013, 4:19:32 AM
to Dominic Cooney, Kentaro Hara, Chris Harrelson, Eric Seidel, blink-dev
On Tue, Dec 10, 2013 at 10:03 AM, Dominic Cooney <domi...@chromium.org> wrote:
Thanks for the detailed reply Mads.

Re: moving reference cycles that Blink "exports" to the V8 heap back into the Blink side, was this feasible in practice? Because the pattern of references on the Blink side are basically controlled by people hacking Blink C++ code, but on the V8 side the page author can always create references between arbitrary JavaScript wrappers with expando properties.

When the cycles and the author's ad-hoc references are on the V8 side, V8 GC can deal with them; if the cycles are on the Oilpan heap but the author's ad-hoc references remain on the V8 side, won't this create a cross-heap cycle and leak?

Or are there existing cycles that never include an object that can have a JavaScript wrapper?

The situations I'm thinking of are situations where you would like to have a cycle on the Blink C++ side but instead of doing that you create the cycle through V8 instead. If you have two objects A and B that should keep each other alive but you cannot easily express that on the Blink side because of reference cycles, sometimes artificial wrappers are created and references put there instead. Then A and B will keep each other alive because A keeps B alive on the Blink side and if B is alive, then the wrapper for B keeps the wrapper for A alive and therefore A will be kept alive as well.

     V8                                   Blink

  AWrapper   <--------------------->  A
         ^                                   |
         |                                    v
  BWrapper   <--------------------->  B

All pointers from Blink into the V8 heap are weak pointers and therefore the cross-heap cycles in these situations will not create leaks. If they did there would be a lot of leaks already because you can always make wrappers point to each other. However, when the wrappers are no longer reachable from JavaScript the wrappers can go away and therefore they will no longer keep the Blink side objects alive.

For these special situations, the pointer structure in V8 is only there to represent something that reference cycles makes hard to express in Blink and those we can move back into Blink.
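
As a sketch (with hypothetical classes A and B), under Oilpan such a cycle can simply be expressed on the Blink side, because a tracing collector handles cycles directly:

class B;

class A : public GarbageCollectedFinalized<A> {
public:
    void trace(Visitor*);
private:
    Member<B> m_b;  // strong reference from A to B
};

class B : public GarbageCollectedFinalized<B> {
public:
    void trace(Visitor* visitor) { visitor->trace(m_a); }
private:
    Member<A> m_a;  // strong reference from B to A, closing the cycle
};

void A::trace(Visitor* visitor) { visitor->trace(m_b); }

Neither A nor B needs to route its reference through a wrapper anymore; when the pair becomes unreachable, the garbage collector reclaims both.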

Cheers,    -- Mads

Kentaro Hara

Dec 10, 2013, 4:35:47 AM
to Mads Sig Ager, Dominic Cooney, Chris Harrelson, Eric Seidel, blink-dev
     V8                                   Blink
  AWrapper   <--------------------->  A
         ^                                   |
         |                                    v
  BWrapper   <--------------------->  B 
 
All pointers from Blink into the V8 heap are weak pointers and therefore the cross-heap cycles in these situations will not create leaks. If they did there would be a lot of leaks already because you can always make wrappers point to each other.

In the current binding architecture, ScriptValue is the only class that can have a strong reference from Blink to V8. We've gotten rid of most of the ScriptValues and taken care of the remaining ones carefully so that they won't cause leaks. That's why we don't observe the leaks described above.

Daniel Bratell

Dec 10, 2013, 10:18:12 AM
to blin...@chromium.org
On Mon, 09 Dec 2013 10:07:17 +0100, Kentaro Hara <har...@chromium.org>
wrote:

> Today I gave a tech talk of Oilpan in Tokyo. Here is a slide:
>
> https://docs.google.com/a/google.com/presentation/d/1YtfurcyKFS0hxPOnC3U6JJroM8aRP49Yf0QWznZ9jrk/edit#slide=id.p
>
>
>
> Hope it will be helpful to understand Oilpan. Feel free to add comments
> to the slide :)

It's a very nice set of slides. And maybe one of the first time I've seen
someone baking pastries for inclusion in a tech-talk. :-)

There is one area where I wonder if terminology will end up confusing
people though. Heap. The slides have a very distinct idea of what the
"heap" is, and it's not fully compatible with the traditional definition (
http://en.wikipedia.org/wiki/Heap_(programming) I'd guess, or one Knuth).

As objects at the border between scripts and the rest of chromium will
interact both with the C++ Heap and the V8 Heap, I wonder if someone could
think of some excellent new naming scheme to reduce future confusion.

/Daniel

Dominic Cooney

Dec 10, 2013, 10:59:58 PM
to Mads Sig Ager, Kentaro Hara, Chris Harrelson, Eric Seidel, blink-dev
On Tue, Dec 10, 2013 at 6:19 PM, Mads Sig Ager <ag...@chromium.org> wrote:
On Tue, Dec 10, 2013 at 10:03 AM, Dominic Cooney <domi...@chromium.org> wrote:
Thanks for the detailed reply Mads.

Re: moving reference cycles that Blink "exports" to the V8 heap back into the Blink side, was this feasible in practice? Because the pattern of references on the Blink side are basically controlled by people hacking Blink C++ code, but on the V8 side the page author can always create references between arbitrary JavaScript wrappers with expando properties.

When the cycles and the author's ad-hoc references are on the V8 side, V8 GC can deal with them; if the cycles are on the Oilpan heap but the author's ad-hoc references remain on the V8 side, won't this create a cross-heap cycle and leak?

Or are there existing cycles that never include an object that can have a JavaScript wrapper?

The situations I'm thinking of are situations where you would like to have a cycle on the Blink C++ side but instead of doing that you create the cycle through V8 instead. If you have two objects A and B that should keep each other alive but you cannot easily express that on the Blink side because of reference cycles, sometimes artificial wrappers are created and references put there instead. Then A and B will keep each other alive because A keeps B alive on the Blink side and if B is alive, then the wrapper for B keeps the wrapper for A alive and therefore A will be kept alive as well.

     V8                                   Blink

  AWrapper   <--------------------->  A
         ^                                   |
         |                                    v
  BWrapper   <--------------------->  B

All pointers from Blink into the V8 heap are weak pointers and therefore the cross-heap cycles in these situations will not create leaks. If they did there would be a lot of leaks already because you can always make wrappers point to each other. However, when the wrappers are no longer reachable from JavaScript the wrappers can go away and therefore they will no longer keep the Blink side objects alive.

For these special situations, the pointer structure in V8 is only there to represent something that reference cycles makes hard to express in Blink and those we can move back into Blink.

Cheers,    -- Mads


Got it, thanks Mads. I forgot that the Blink->V8 references are strategically made weak during V8 GC.

Erik Corry

Dec 11, 2013, 3:51:06 AM
to Daniel Bratell, blink-dev
The Wiki page you linked to says:

Memory requests are satisfied by allocating portions from a large pool of memory called the heap. At any given time, some parts of the heap are in use, while some are "free" (unused) and thus available for future allocations.

That sounds like the same way we are using it.  Could you elaborate on why you don't think we are using it the same way?





Daniel Bratell

Dec 11, 2013, 5:39:45 AM
to Erik Corry, blink-dev
On Wed, 11 Dec 2013 09:51:06 +0100, Erik Corry <erik...@google.com> wrote:

The Wiki page you linked to says:

Memory requests are satisfied by allocating portions from a large pool of memory called the heap. At any given time, some parts of the heap are in use, while some are "free" (unused) and thus available for future allocations.

That sounds like the same way we are using it.  Could you elaborate on why you don't think we are using it the same way?

I'm not saying it's wrong to call the V8 memory space the heap (we do the same in Presto Carakan), but it's called the heap in contrast to "off-heap". Off-heap seems to refer to non-V8-heap memory, yet that memory is still a heap in many other respects. To a C++ programmer, the heap is mostly the thing you use with malloc() and new, so calling that "off-heap" is confusing.

It all depends on your point of view, and I merely wanted to point out that the terminology is slightly confusing/misleading when looked at from a non-v8 point of view.

/Daniel

Nico Weber

unread,
Dec 12, 2013, 3:23:12 PM12/12/13
to Mads Sig Ager, blink-dev
Hi Mads,

I didn't find anything in the document about gc pauses. Did I just miss it? Is that not something worth worrying about?

Nico

Mads Sig Ager

unread,
Dec 13, 2013, 1:53:03 AM12/13/13
to Nico Weber, blink-dev
Hi Nico,

yes, that is definitely something that we need to keep track of and it should have been part of the document!

In our measurements, the typical Blink heap when moving the DOM and CSS hierarchies into it is fairly small (if I remember correctly we are at about 20-30MB for a full-day GMail session), and therefore the pause times are also small. In micro-benchmarks that stress the allocation of Blink objects we have seen very long pauses (100-150ms). We haven't (yet) found a case where Oilpan regresses something like animation performance, probably because most of that is driven by JavaScript rendering into a canvas, so Blink allocations are rare.

Also, we don't have a good baseline for this. Ref counting leads to pauses too, when deref storms and a lot of object deallocation happen, and I haven't been able to find any benchmarks that measure what the current pause times are, so it is hard to compare. We are very interested in finding such benchmarks so we can do a proper comparison - if you know of any, do let us know.

One of the things that should help us compared to ref counting is that we reduce the amount of work that needs to be done in finalizers: there is a lot of clearing of fields in destructors in current Blink that we do not have to perform. We also haven't done much to optimize the garbage collector itself at this point; that is on our plate as well.
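
As a purely illustrative before/after sketch of that last point (the class, method and field names here are hypothetical, not taken from the branch): under ref counting, destructors often have to unregister from peers or keep raw back-pointers in sync, whereas with Oilpan a traced weak member is cleared by the GC, so many classes need no destructor and therefore no finalizer at all:

  // Ref-counted version: manual bookkeeping in the destructor.
  class FontLoader : public RefCounted<FontLoader> {
  public:
      ~FontLoader()
      {
          if (m_document)
              m_document->unregisterLoader(this);  // must run, even during a deref storm
      }
  private:
      Document* m_document;  // raw back-pointer that has to be kept in sync by hand
  };

  // Oilpan version: the back-pointer is a weak, traced member that the GC
  // clears when the Document dies, so no destructor (finalizer) is needed.
  class FontLoader : public GarbageCollected<FontLoader> {
  public:
      void trace(Visitor* visitor) { visitor->trace(m_document); }
  private:
      WeakMember<Document> m_document;
  };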

Cheers,    -- Mads

Pavel Feldman

unread,
Dec 13, 2013, 2:11:28 AM12/13/13
to Mads Sig Ager, blink-dev, Nico Weber

Mads, sorry if I missed it, but could you share links to the branch with good and bad examples of migrating some core Blink code? That would help in assessing the changes and seeing how the code will look "after". (Why use artificial small examples if we have the luxury of taking a look into the future right away?)

Many parts of the plan resemble the humble attempt of the inspector team to traverse the Blink heap and collect native memory stats half a year ago. That attempt proved to bloat the code and to make authors of incremental changes more involved with memory management, i.e. it added a fair bit of complexity.

I do understand that it was primarily due to lack of awareness, and I think you are doing a great job of educating people upfront. But it would be great to see whether complexity in fact rises and, if so, by how much.

Regards
Pavel

Mads Sig Ager

unread,
Dec 13, 2013, 2:32:44 AM12/13/13
to Pavel Feldman, blink-dev, Nico Weber
This thread is becoming hard to keep track of. :-)

One good example is StyleSheetContents:


It holds a strong pointer to another heap-allocated object, two vectors of strong pointers to other heap-allocated objects, and a set of weak pointers to other heap-allocated objects, so it uses both strong and weak pointers. In the implementation file, there used to be a 'RefPtr<StyleSheetContents> protect(this)' in checkLoaded; that is no longer needed. All pointers to Document, Node, StyleSheetContents and other objects that are allocated in the oilpan heap are just raw pointers.
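
Roughly, the Oilpan version of the class looks like this - a sketch with abbreviated, illustrative field names rather than the exact members from the branch, using the Member/WeakMember/heap-collection conventions from the design document:

  class StyleSheetContents : public GarbageCollectedFinalized<StyleSheetContents> {
  public:
      void trace(Visitor* visitor)
      {
          visitor->trace(m_ownerRule);    // strong pointer to another heap object
          visitor->trace(m_importRules);  // vector of strong pointers
          visitor->trace(m_childRules);   // vector of strong pointers
          visitor->trace(m_clients);      // set of weak pointers
      }

  private:
      Member<StyleRuleImport> m_ownerRule;
      HeapVector<Member<StyleRuleImport> > m_importRules;
      HeapVector<Member<StyleRuleBase> > m_childRules;
      HeapHashSet<WeakMember<CSSStyleSheet> > m_clients;
  };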

The worst example is probably Document:


It is such a central structure that there is a lot of stuff going on. There are many members that need to be visited and there is a bunch of non-trivial weakness going on as well. Therefore, the trace method ends up tracing a lot of fields (we are currently writing a clang plugin and have verified that it will complain if we leave out one of the lines) and registering an extra weak callback to perform additional weak processing. We should be able to remove a lot of the extra weak processing steps by moving the weak processing to the objects that actually hold the weak pointers. We should do that when moving this to trunk.
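
To sketch what "moving the weak processing to the holders" means (the holder class and field name below are hypothetical, used only for illustration): instead of Document registering an extra weak callback that nulls out back-pointers, the object that actually holds the weak reference declares it as a WeakMember and lets the GC clear it:

  class NodeObserver : public GarbageCollected<NodeObserver> {
  public:
      void trace(Visitor* visitor) { visitor->trace(m_node); }  // registers the weak field

      // May return null after a GC if the Node has been collected.
      Node* node() const { return m_node; }

  private:
      WeakMember<Node> m_node;  // cleared automatically; Document needs no weak callback for it
  };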

Cheers,    -- Mads

Eric Seidel

unread,
Jan 27, 2014, 5:58:17 PM1/27/14
to Mads Sig Ager, David Turner, Kentaro Hara, blink-dev
I'm curious if there was any update to the ARM measurements? The more
I work on mobile performance, the more I'm surprised at how different
it can be from Desktop.

I just recently got my own profiling setup working again on Android and
am happy to help one of the Oilpan folks do the same.

https://code.google.com/p/chromium/wiki/ProfilingContentShellOnAndroid
are the instructions I use.
(There is also an internal version of that doc with a couple
short-cuts for flashing your device at
http://goto.google.com/cr-android-perf-howto.)

On Mon, Dec 9, 2013 at 1:53 AM, Mads Sig Ager <ag...@chromium.org> wrote:
> We did not get performance measurements for ARM yet. I tried to get the
> branch building for Android a couple of weeks ago and failed because I
> couldn't get a checkout of a version old enough to be compatible with our
> oilpan Blink branch. We will make sure to get coverage on all platforms
> while developing under a compile-time flag so we have full information
> before turning this on and shipping it.
>
> Cheers, -- Mads
>
>
> On Mon, Dec 9, 2013 at 10:43 AM, David Turner <di...@chromium.org> wrote:
>>
>> Thanks, that's really fascinating,
>>
>> Quick question: Did you try to perform performance measurements on ARM?
>> Given the vast differences in CPU performance with desktop-grade, it would
>> be interesting to see how the results change for mobile (and many
>> Chromebooks :-)).
>>
>> - Digit
>>
>>
>> On Mon, Dec 9, 2013 at 10:07 AM, Kentaro Hara <har...@chromium.org>
>> wrote:
>>>
>>> Today I gave a tech talk of Oilpan in Tokyo. Here is a slide:
>>>
>>>
>>> https://docs.google.com/a/google.com/presentation/d/1YtfurcyKFS0hxPOnC3U6JJroM8aRP49Yf0QWznZ9jrk/edit#slide=id.p
>>>
>>> Hope it will be helpful to understand Oilpan. Feel free to add comments
>>> to the slide :)
>>>
>>>
>>>
>>>
>>> On Mon, Dec 9, 2013 at 4:13 PM, Mads Sig Ager <ag...@chromium.org> wrote:
>>>>
>>>> Thanks Eric!
>>>>
>>>> I understand and agree with your concerns. We will be responsible owners
>>>> of Oilpan and do all we can to spread knowledge about it.
>>>>
>>>>
>>>>
>>>
>>>
>>>
>>> --
>>> Kentaro Hara, Tokyo, Japan
>>>
>>
>>
>

Mads Sig Ager

unread,
Jan 27, 2014, 6:42:46 PM1/27/14
to Eric Seidel, David Turner, Kentaro Hara, blink-dev
We haven't moved any objects to the oilpan heap on trunk yet and therefore we have no measurements. The branch is too old and I have been unable to build it for Android. We'll get perf numbers on mobile when we have moved Blink objects to the oilpan heap under the compile-time flag.

Thanks for the pointers. Those will be useful when we get there! :-)

Cheers,    -- Mads