[sorry for previous early send error]
Hi,
Both of these use cases are largely equivalent. I've been furiously
against weak refs based on this use case, because I believe (admittedly
without concrete proof) that good API design can achieve the same
benefits without having to expose GC non-determinism.
I acknowledge that in large codebases, good APIs are hard to retrofit
and make good use of in multi-layered software. Harder, it seems, than
retrofitting weak refs, but doable nonetheless.
In any case, I stopped being against weak refs after a message by Mark
Miller and the use case he described:
https://mail.mozilla.org/pipermail/es-discuss/2013-April/029598.html
At first, I didn't understand any of it ("distributed acyclic garbage
collector"? wat?!?), so I felt I needed to understand what it was about
before being against it. After discussing with Mark Miller, my
understanding is as follows.
The problem solved by what Mark Miller describes is cross-vat object
references (a vat is a unit of computation with its own unshared memory,
so a process or a machine in practice). In object-oriented programming
and object capabilities (an extreme form of object orientation, most
often used as a security framework), objects and what they represent
usually only exist in one process. We usually resort to sending data
across units that don't share memory when such units need to
collaborate, but this breaks the notion of object reference and the
benefits that come with it (like solving the grant matcher problem [1]).
This is also true of RPC systems (those I know of, at least), where one
machine speaks to another without any smaller granularity: object
granularity is lost.
To keep object granularity across machines, in E they created a
protocol (CapTP) that, in essence, stretches object references across
machines. Cross-machine wrappers, if you will.
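As a rough illustration of what such a wrapper could look like in
JavaScript (the message shape and `transport` API below are invented
for this sketch; CapTP's actual wire protocol is far richer):

```javascript
// Hypothetical sketch of a "stretched" object reference: a local wrapper
// that forwards method calls as messages over a transport instead of
// sharing memory.
function makeRemoteRef(objectId, transport) {
  return new Proxy({}, {
    get(_target, method) {
      // Every property access yields a function that serializes the call.
      return (...args) => transport.send({ objectId, method, args });
    }
  });
}

// Example: a fake transport that just records outgoing messages.
const sent = [];
const ref = makeRemoteRef("bob", { send: (msg) => sent.push(msg) });
ref.deposit(10);
// sent[0] is now { objectId: "bob", method: "deposit", args: [10] }
```

The point is only that the caller keeps using ordinary method-call
syntax on what looks like an object reference, even though the object
lives in another vat.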
When designing such a protocol, at some point comes the problem of GC.
Distributed GC... Sometimes a machine holds a reference to a remote
object, and at some point all refs to this object on that machine are
dropped. The remote machine needs to know that this machine no longer
holds a reference. My understanding is that there are roughly two ways
to solve this problem: either the protocol is built into the language
and all of this can be taken care of under the hood, or one has to
implement it oneself, and that requires a form of weak ref (so you know
when to tell the remote machine what it needs to know).
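To make the second option concrete, here is a minimal sketch of how a
user-level wrapper layer could use weak refs for this, written with
today's WeakRef/FinalizationRegistry API (which postdates this
discussion); `sendToRemote` and the import table are names I invented
for the sketch, not part of any real CapTP implementation:

```javascript
// A stand-in transport that records outgoing protocol messages.
const outgoing = [];
function sendToRemote(msg) { outgoing.push(msg); }

// Import table: wrappers for remote objects, held weakly so local GC
// can reclaim a wrapper once nothing in this vat references it.
const imports = new Map(); // importId -> WeakRef(wrapper)

// When a wrapper is collected, tell the exporting vat it may drop
// its corresponding export-table entry.
const registry = new FinalizationRegistry((importId) => {
  imports.delete(importId);
  sendToRemote({ type: "drop", id: importId });
});

function importRemoteObject(importId) {
  // A real wrapper would forward calls over the wire; this is a stub.
  const wrapper = { importId };
  imports.set(importId, new WeakRef(wrapper));
  registry.register(wrapper, importId);
  return wrapper;
}
```

Note that the finalization callback fires at a time of the GC's
choosing, which is exactly the non-determinism this debate is about:
the "drop" message is sent eventually, not at a predictable point.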
I tried to be as brief as possible in explaining the use case, feel free
to ask me more if I was too brief or unclear.
I believe this use case is valuable (there is an interesting discussion
to be had here; Mark Miller's message was a bit dry as is. Preferably,
let's have it on es-discuss, so other interested parties can
participate... and it'll re-happen there anyway). Adding a built-in
cross-machine protocol to JavaScript shouldn't happen: first, it's too
much work to get right; second, there might be other protocols with
different characteristics that make sense in different contexts, so no
need to force one into the language. It may be better to let people
implement their own and experiment with different protocols.
That requires weak refs in some way, I'm afraid. I'm interested in
whether other solutions can be proposed to achieve object-granularity
communication across machines (I'm sure Mark Miller would be interested
as well).
One use case of cross-vat communication is the remote debugger protocol
implemented in Firefox/OS. I haven't taken the time to go over the
related bugs and follow its development (and probably won't for now,
because it'd be a lot of work), but it would be interesting to think
about how a cross-vat protocol would have made its implementation
easier/safer/less error-prone/less leak-prone.
Does anyone know how leaky the remote debugger protocol is now?
David
[1]
http://www.erights.org/elib/equality/grant-matcher/index.html