Thread model: the standard, multi-threaded with only one active thread at a time, i.e. the standard combination of a green thread VM architecture with native threads, as seen in Strongtalk. Different threads must use a lock on the entire VM to use it. There is a preemptive scheduler to share the VM between threads waiting on the lock.
Parses direct to an AST.
Generates native code directly from AST.
No bytecode intermediate form.
Inlines simple arithmetic ops (see codegen-ia32.cc/codegen-arm.cc) when one side or the other is a literal.
Has monomorphic & megamorphic inline caches, but no polymorphic inline caches (see e.g. MONOMORPHIC in globals.h).
I see no evidence of adaptive optimization/speculative inlining, performance counters et al. It seems to generate stubs to defer code generation, so perhaps some deferred optimization is done at send/link time, but it's not obvious to me.
There are counters, but I think these are for understanding the dynamic behaviour, not yet for optimization.
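The inline-cache states mentioned above can be illustrated with a toy sketch in JavaScript. To be clear, this is invented for illustration (the name `makeCallSite` and the use of a constructor as a stand-in for a hidden class are my assumptions); V8's real caches are patched machine-code stubs, not JavaScript objects:

```javascript
// Toy model of an inline cache at a single call site.
// States: UNINITIALIZED -> MONOMORPHIC (one cached shape) -> MEGAMORPHIC.
function makeCallSite(methodName) {
  const site = { state: "UNINITIALIZED", cachedShape: null, cachedMethod: null };
  site.call = function (receiver) {
    const shape = receiver.constructor; // stand-in for a hidden class
    if (site.state === "MONOMORPHIC" && shape === site.cachedShape) {
      return site.cachedMethod.call(receiver); // fast path: no lookup
    }
    const method = receiver[methodName]; // slow path: generic lookup
    if (site.state === "UNINITIALIZED") {
      site.state = "MONOMORPHIC"; // cache the first shape seen
      site.cachedShape = shape;
      site.cachedMethod = method;
    } else if (site.state === "MONOMORPHIC" && shape !== site.cachedShape) {
      site.state = "MEGAMORPHIC"; // second shape seen: give up caching
    }
    return method.call(receiver);
  };
  return site;
}
```

The monomorphic case pays one shape comparison instead of a full method lookup; once two different shapes have been seen, the site stays megamorphic and always does the generic lookup.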
> Thread model: the standard, multi-threaded with only one active
> thread at a time, i.e. the standard combination of a green thread VM
> architecture with native threads, as seen in Strongtalk. Different
> threads must use a lock on the entire VM to use it. There is a
> preemptive scheduler to share the VM between threads waiting on the
> lock.
One thing to note here is that the semantics of Javascript don't
include concurrency at all. Javascript code only runs in response to
sequential events generated from outside the Javascript context, so it
makes perfect sense for V8 to be implemented this way.
Colin
>
> On Sep 3, 11:57 am, Colin Putney <cput...@wiresong.ca> wrote:
>
>> One thing to note here is that the semantics of Javascript don't
>> include concurrency at all. Javascript code only runs in response to
>> sequential events generated from outside the Javascript context, so
>> it
>> makes perfect sense for V8 to be implemented this way.
>
> True ... except that you can spawn off timeouts and intervals that act
> in some ways similarly to threads, except (at least in current
> implementations) with no parallel execution - the VM schedules them
> sequentially - and with no suspension or synchronization mechanism, so
> one task (event handling function) needs to complete before another
> can be scheduled.
True, you can schedule events from Javascript. But I think the
differences you point out are more important than the similarity -
which is only that you can schedule code to be executed without having
to do an immediate synchronous function call.
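That run-to-completion behaviour is easy to observe: a timer scheduled with `setTimeout` never interrupts running code, and only fires once all currently-executing synchronous code has finished (this sketch assumes a JavaScript environment with timers, e.g. a browser or Node.js):

```javascript
// A zero-delay timer does not run immediately; it is queued as an event
// and only fires once the current synchronous code has run to completion.
const order = [];
setTimeout(() => order.push("timer"), 0);
order.push("sync-1");
for (let i = 0; i < 1e6; i++) {} // even a long busy loop is not interrupted
order.push("sync-2");
setTimeout(() => console.log(order.join(",")), 0); // "sync-1,sync-2,timer"
```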
Not to belabour it too much, but I actually think this nitpicky
distinction is quite important. All the action in computer science
these days is in figuring out how to deal with multiprocessing. Shared-
state concurrency, as traditionally used by OO languages like
Smalltalk, Java, Objective-C, C++ or Ruby, can be problematic, so I'm
interested in alternatives.
What's cool about V8 is that it's designed for multi-threading on the
outside, and single threading on the inside. That is, the VM supports
being used in a multithreaded application, but it doesn't attempt to
execute Javascript concurrently. At the same time, it supports
multiple Javascript execution contexts within the same VM. Maybe that
could be used to implement the same sort of concurrency model that's
used in E and Croquet.
Colin
Dave: regarding PICs, I'm pretty sure it's critical that you have a
monomorphic send attempt followed by a megamorphic send for the case
where a call site is nearly-monomorphic.
In essence, this is a
degenerate PIC of length 1, and I guess it seems natural to allow
bigger PICs.
[...] It would actually be
interesting to benchmark Strongtalk with various settings (no PICs,
PICs limited to various sizes). If PICs of length 1 are found to be
enough, then the PIC code could indeed be much simpler.
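A bounded PIC of the sort being discussed can be sketched the same way (again a toy in JavaScript with invented names; real PICs are generated stub code): with a limit of 1 it reduces to the monomorphic-then-megamorphic behaviour, while larger limits cache several receiver shapes before giving up:

```javascript
// Toy polymorphic inline cache with a size limit.
// maxEntries = 1 gives the degenerate "PIC of length 1" case.
function makePicSite(methodName, maxEntries) {
  const site = { entries: [], megamorphic: false };
  site.call = function (receiver) {
    const shape = receiver.constructor; // stand-in for a hidden class
    if (!site.megamorphic) {
      for (const e of site.entries) {
        if (e.shape === shape) return e.method.call(receiver); // PIC hit
      }
    }
    const method = receiver[methodName]; // generic lookup
    if (!site.megamorphic) {
      if (site.entries.length < maxEntries) {
        site.entries.push({ shape, method }); // grow the PIC
      } else {
        site.megamorphic = true; // too many shapes: go megamorphic
      }
    }
    return method.call(receiver);
  };
  return site;
}
```

Benchmarking would then amount to varying `maxEntries` and measuring hit rates, which is essentially the experiment proposed above.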
> [...] suggest can't be distinguished from a truly megamorphic send. Such a send
> would be slower for megamorphic sends [...]
By the way, I don't know who coined the word "megamorphic", but he was
clearly a barbarian (originally, someone who doesn't speak Greek).
"Monomorphic" and "polymorphic" are fine, but "megamorphic" would mean
having a big form, as in "You've grown rather megamorphic since you
left school."
Can we please switch to "perissomorphic"? It's better Greek and only
a little bit longer. We already have the rare words "perissology"
(too many words) and "perissosyllabic" (having too many syllables), as
well as the more common word "perissodactyl" (ungulates that have an
odd number of toes, such as horses, from a second sense of Greek
_perissos_, 'odd (in mathematics)').
"Megamorphic": kill it before it spreads!
--
GMail doesn't have rotating .sigs, but you can see mine at
http://www.ccil.org/~cowan/signatures