Evil threads


Jorge

Nov 8, 2011, 9:52:55 PM
to nod...@googlegroups.com
pith.rb
test01.js

Matt

Nov 8, 2011, 11:45:08 PM
to nod...@googlegroups.com
Nope, didn't get it. What is this trying to demonstrate? That V8 is faster than Ruby? That's pretty much a given...

On Tue, Nov 8, 2011 at 9:52 PM, Jorge <jo...@jorgechamorro.com> wrote:
http://javascriptology.com/evilthreads.mov

257x faster than Ruby
2.6x faster than MacRuby

And it's a beta... :-)
--
Jorge.
--
Job Board: http://jobs.nodejs.org/
Posting guidelines: https://github.com/joyent/node/wiki/Mailing-List-Posting-Guidelines
You received this message because you are subscribed to the Google
Groups "nodejs" group.
To post to this group, send email to nod...@googlegroups.com
To unsubscribe from this group, send email to
nodejs+un...@googlegroups.com
For more options, visit this group at
http://groups.google.com/group/nodejs?hl=en


Mark Hahn

Nov 9, 2011, 1:24:34 AM
to nod...@googlegroups.com
I couldn't see anything except a frozen screen with some green columns and a small box with two cpu history graphs.  Maybe it's a windows thing.

Jorge

Nov 9, 2011, 5:58:52 AM
to nod...@googlegroups.com
On 09/11/2011, at 07:24, Mark Hahn wrote:

> I couldn't see anything except a frozen screen with some green columns and a small box with two cpu history graphs. Maybe it's a windows thing.

http://www.youtube.com/watch?v=lnCjk2_3WLQ
--
Jorge.

Christophe Eymard

Nov 9, 2011, 6:06:04 AM
to nod...@googlegroups.com
Still don't get it...

Did you bring threads to the javascript world ?

Are they usable ? 

Jorge

Nov 9, 2011, 6:19:10 AM
to nod...@googlegroups.com

> Did you bring threads to the javascript world ?

Yes.

> Are they usable ?

Yes. The API is just like that of an I/O call:

nuThread= Threads.give_me_a_thread( function, data, callback );

You simply provide a function to run, the data it's going to receive as a parameter, and a callback, and give_me_a_thread() runs it in a separate thread (a true operating system pthread running a v8 isolate) in parallel with the main thread. When the function ends, it returns a value, and that's what you get in the callback. If you've got, say, a cpu with six cores, and you launch 6 threads, each one is going to run flat-out in its core... That's truly getting all the juice out of your cpu cores !

Awesome, isn't it ? :-)
--
Jorge.
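A minimal sketch of what a call might look like, based only on the description above. This is an assumption, not the real (unreleased) module: `Threads` here is a stub that runs the function on the next event-loop turn via `setImmediate`, purely to show the shape of the API; only the name `give_me_a_thread` comes from the post.

```javascript
// Hypothetical usage sketch of the API described above. The real module
// is unreleased, so this stub runs the function on the next event-loop
// turn instead of in a background pthread + v8 isolate.
var Threads = {
  give_me_a_thread: function (fn, data, callback) {
    setImmediate(function () { callback(fn(data)); });
    return {}; // the real call presumably returns a thread handle
  }
};

// A CPU-bound function, the data it receives, and a callback:
function reverse (s) { return s.split('').reverse().join(''); }

Threads.give_me_a_thread(reverse, 'evil threads', function (result) {
  console.log(result); // -> 'sdaerht live'
});
```

Note that with the real module the callback would still fire on the main thread's event loop, just as an I/O callback does.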

Arnout Kazemier

Nov 9, 2011, 6:24:48 AM
to nod...@googlegroups.com
Got any code we can gaze at? :)

Diogo Resende

Nov 9, 2011, 6:32:48 AM
to nod...@googlegroups.com
On Wed, 9 Nov 2011 12:19:10 +0100, Jorge wrote:
> Yes. The API is just like that of an I/O call:
>
> nuThread= Threads.give_me_a_thread( function, data, callback );
>
> You simply provide a function to run, the data it's going to receive
> as a parameter, and a callback, and give_me_a_thread() runs it in a
> separate thread (a true operating system pthread running a v8
> isolate)
> in parallel with the main thread. When the function ends, it returns
> a
> value, and that's what you get in the callback. If you've got say, a
> cpu with six cores, and you launch 6 threads, each one is going to
> run
> flat-out in its core... That's truly getting all the juice out of you
> cpu cores !

Will it be possible to use it like an EventEmitter? I mean, having the
possibility to emit/call the callback from inside the thread more than
once?

> Awesome, isn't it ? :-)

Yes :)

---
Diogo R.

Axel Kittenberger

Nov 9, 2011, 6:56:16 AM
to nod...@googlegroups.com
And you just don't care about semaphores? Must be new to threaded
coding, eh? :-) Sounds like opening that Pandora's box again, full of
race conditions, reciprocal lockups, mashed-up variables, lockup
circumvention code, and the endless horrors of debugging said problems,
which might only happen after every 3 hours of running if you get one
tiny semaphore anywhere a tad wrong. Been there, done that.

V8 has pthreads built in, so it isn't much of a problem to enable
these, but there is a reason why e.g. node-fibers puts a global lock
around everything, effectively making them like green threads (just
unfortunately with the cost of full context switching), and why node
out of the box comes with child_process.fork() instead.

Sorry, but isn't the whole idea of node NOT to have threads and pay
their memory and context switching costs? Because microseconds DO
matter?

Dominic Tarr

Nov 9, 2011, 7:54:44 AM
to nod...@googlegroups.com
he did admit that they were evil.

Bruno Jouhier

Nov 9, 2011, 11:29:11 AM
to nodejs
Hi Jorge,

If I understand correctly, you are enabling something like:

doSomethingBig(arg1, arg2, function(err, result) { ... });

where doSomethingBig runs from a V8 isolate in a different thread
while the main thread continues its normal business and doSomethingBig
returns its result to the main thread via a regular callback.

As the thread runs in a V8 isolate, this is safe (and more efficient
than spawning processes). But this also implies that arg1, arg2, err
and result must be serializable. How do you pass them from one thread
to the other (JSON?)? It would be nice to be able to pass big data as
immutable data without copying. Did you crack this one too?

Also, what's the cost of creating the thread + isolate? Do you
recreate them on every call, or do you maintain a pool (which could be
problematic if doSomethingBig modifies globals)?

Seeing the API would help lift some of the mystery. Is this somewhere
on GitHub?

Bruno

Jorge

Nov 9, 2011, 12:09:52 PM
to nod...@googlegroups.com
On 09/11/2011, at 17:29, Bruno Jouhier wrote:
> On Nov 9, 11:58 am, Jorge <jo...@jorgechamorro.com> wrote:
>>
>>
>> http://www.youtube.com/watch?v=lnCjk2_3WLQ
>
> Hi Jorge,

Hi !

> If I understand well, you are enabling something like:
>
> doSomethingBig(arg1, arg2, function(err, result) { ... });
>
> where doSomethingBig runs from a V8 isolate in a different thread
> while the main thread continues its normal business and doSomethingBig
> returns its result to the main thread via a regular callback.

Yes, exactly.

> As the thread runs in a V8 isolate, this is safe (and more efficient
> than spawning processes).

Yes, exactly, too. People hear thread and start vomiting nonsenses, it's so funny !

> But this also implies that arg1, arg2, err
> and result be serializable. How do you pass them from one thread to
> the other (JSON?).

At the moment (this is a beta!) the only typeof data is string, and yes, it's passed by copy. To pass an object you'd have to serialize it :-( , and yes, most likely, you'd use JSON.stringify() for that.
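Since only strings cross the thread boundary, passing an object means serializing it on one side and reviving it on the other. A sketch of that round trip (the `pack`/`unpack` helper names are made up for illustration, not part of any published API):

```javascript
// Only strings cross the thread boundary, so objects are JSON
// round-tripped. pack/unpack are hypothetical helper names.
function pack (obj) { return JSON.stringify(obj); }
function unpack (str) { return JSON.parse(str); }

var msg = pack({ job: 'fib', n: 35 });  // main-thread side: object -> string
var job = unpack(msg);                  // worker side: string -> fresh copy
console.log(job.n); // 35 — it's a copy, so mutating it can't race the original
```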

> It would be nice to be able to pass big data as
> immutable data without copying. Did you crack this one too?

Not yet, but I'm going to open Pandora's box, and I want it to allow passing buffers by reference :-)) SHARED STATE PARTY !

> Also, what's the cost of creating the thread + isolate.

The pthread is cheap (+30k per second in my mac core...@2.5GHz), but an isolate + its context is not (about 2.5k per second). With thousands of concurrent threads, it quickly allocates several gigabytes of virtual memory...

> Do you
> recreate them on every call,

No.

> or do you maintain a pool

Yes :-)

> (could be
> problematic if doSomethingBig modifies globals)?

Yes, but I see that as a feature. Say you want to 'install' some utilities in the thread: you can then reuse them again and again.

> Seeing the API would help lift some of the mystery. Is this somewhere
> on GitHub?

No, it's not ready yet, but if you're interested send me a pm.

Cheers,
--
Jorge.

Axel Kittenberger

Nov 9, 2011, 12:26:11 PM
to nod...@googlegroups.com
> Yes, exactly, too. People hear thread and start vomiting nonsenses, it's so funny !

With all due respect: if you don't bother to describe what you are doing
(is writing that hard?) in normal text and just put up some .mov
(.mov?) with green screens, I'd rethink accusing people of vomiting
nonsenses.

Axel Kittenberger

Nov 9, 2011, 12:27:28 PM
to nod...@googlegroups.com
> No, it's not ready yet, but if you're interested send me a pm.

Finish it first. Put it online. Then bother us. I don't see what you
want to achieve with this thread.

Jeff Fifield

Nov 9, 2011, 1:49:52 PM
to nod...@googlegroups.com
How is your proposal better/worse/same/different than webworkers or just communicating with a pool of worker processes?

Is the callback at thread completion thread safe?

On Tue, Nov 8, 2011 at 7:52 PM, Jorge <jo...@jorgechamorro.com> wrote:
http://javascriptology.com/evilthreads.mov

257x faster than Ruby
2.6x faster than MacRuby

And it's a beta... :-)

jacombs

Nov 9, 2011, 2:58:35 PM
to nod...@googlegroups.com
Wasn't Vomiting Nonsenses an 80's punk band?

Bruno Jouhier

Nov 9, 2011, 3:52:21 PM
to nodejs
> > It would be nice to be able to pass big data as
> > immutable data without copying. Did you crack this one too?
>
> Not yet, but I'm going to open the pandora box, and want it to allow passing buffers by reference :-)) SHARED STATE PARTY !

If I remember well you posted a little sample a while ago which
demonstrated that buffers passed to node's core APIs may also be
altered concurrently with JS code.
So you would not be doing anything fundamentally new, just following a
trend!

You can probably assume that people who use this feature
will be smart enough to understand a simple rule like "the buffer does
not belong to you any more". There is a point where preventing people
from shooting themselves in the foot becomes counterproductive. You have
to start trusting the programmer when raw performance really matters.

>
> > Also, what's the cost of creating the thread + isolate.
>
> The pthread is cheap (+30k per second in my mac core2...@2.5GHz), but the isolates + its context are not (about 2.5k per second). With thousands of concurrent threads, it quickly allocates several gigabytes of virtual memory...
>
> > Do you
> > recreate them on every call,
>
> No.
>
> > or do you maintain a pool
>
> Yes :-)
>
> > (could be
> > problematic if doSomethingBig modifies globals)?
>
> Yes, but I see that as a feature. Say if you want to 'install' some utilities in the thread, you can then reuse them again an again.

So the main thread can hold a handle on a thread and pass it things to
do. Cool.

>
> > Seeing the API would help lift some of the mystery. Is this somewhere
> > on GitHub?
>
> No, it's not ready yet, but if you're interested send me a pm.

Will do.

>
> Cheers,
> --
> Jorge.

Jorge

Nov 9, 2011, 7:09:29 PM
to nod...@googlegroups.com

> I mean, having the possibility to emit/call the callback from inside the thread more than once?

Yes, as many times as you want.
--
Jorge.

Jorge

Nov 9, 2011, 7:18:34 PM
to nod...@googlegroups.com
On 09/11/2011, at 19:49, Jeff Fifield wrote:

> How is your proposal better/worse/same/different than webworkers or just communicating with a pool of worker processes?

I'm not sure, we'll see.

> Is the callback at thread completion thread safe?

Totally. The thread sends an async event to node's event loop, so the callback is run just as any other JS code in node, in its single main thread.
--
Jorge.

Diogo Resende

Nov 9, 2011, 7:44:57 PM
to nod...@googlegroups.com

> Yes, as many times as you want.

You just made my day :) I'm anxiously waiting for this to be released.

---
Diogo R.

Jorge

Nov 10, 2011, 3:59:29 AM
to nod...@googlegroups.com
On 10/11/2011, at 01:18, Jorge wrote:
> On 09/11/2011, at 19:49, Jeff Fifield wrote:
>
>> How is your proposal better/worse/same/different than webworkers or just communicating with a pool of worker processes?
>
> I'm not sure, we'll see.

One more thing:

The ability to launch a pool of node *processes* (not threads), plus the ability to have them intercommunicate, with a good API that makes it easy (and fast), is at least as important as (if not more important than) the ability to run an in-process cpu-bound task asynchronously, in the background, without blocking the event loop.

The former is being developed by ry and co. (RyCo, SF), it's the .fork() thingy.
But the latter is not, and that's what I'm trying to address with this.

There's a reason for wanting the latter: if you delegate a long running cpu-bound task to a node worker *process*, you're gonna block its event loop.
That means that it will in fact go 'offline' for the duration of the task!
Not so with b/g threads.

OTOH, the OS resources available to a single node process, such as file descriptors, are not unlimited: it's easy to hit the limit if you try e.g. to open 100k files at once. This kind of problem is solved by a pool of (well) interconnected node processes, but not with threads.
--
Jorge.

Liam

Nov 10, 2011, 7:45:57 AM
to nodejs
On Nov 10, 12:59 am, Jorge <jo...@jorgechamorro.com> wrote:
> There's a reason for wanting the latter: if you delegate a long running cpu-bound task to a node worker *process*, you're gonna block its event loop.
> That means that it will in fact go 'offline' for the duration of the task!
> Not so with b/g threads.

If I had a CPU-bound task, I'd consider implementing it in a C++
module, which can perform work in a thread and access Buffer memory
using the native APIs.

Jorge, what's the memory cost of a V8 Isolate?

Jorge

Nov 12, 2011, 5:34:11 AM
to nod...@googlegroups.com

> If I had a CPU-bound task, I'd consider implementing it in a C++
> module, which can perform work in a thread and access Buffer memory
> using the native APIs.

Perhaps. But you may prefer to write it in five minutes in JS.

> Jorge, what's the memory cost of a V8 Isolate?

Ok, I've got some numbers now. Keep in mind that it's creating a thread plus a clean, new JS context per isolate. The contexts in these figures have nearly nothing in them: The tests were done passing an empty function () {} to each thread:

function ƒ () {}
function cb () {}
var i = 1000; // the number of threads to launch
while (i--) Threads.give_me_a_thread( ƒ, '', cb);

So really, in terms of memory, these are the absolute minimum possible costs per thread. If an ƒ in a thread allocates a zillion objects, that thread will use zillions of bytes, obviously :-P

The threads are long lived, once you create one it will exist forever until you explicitly .destroy() it. So once it finishes with its initial ƒ, you can reuse it as many times as you want with other ƒ's, and that's much faster than creating/destroying threads+isolates+contexts again and again.
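A sketch of the reuse pattern just described. Only `give_me_a_thread()` and `.destroy()` are named in this thread; the `run()` method and the single-threaded stub below are illustrative assumptions, not the real API:

```javascript
// Hypothetical long-lived, reusable thread object. This stub stays on
// the main event loop; the real version would keep one pthread +
// isolate + context alive per object until .destroy() is called.
function makeThread () {
  return {
    run: function (fn, data, cb) {
      setImmediate(function () { cb(fn(data)); });
    },
    destroy: function () { /* would tear down the pthread + isolate */ }
  };
}

var t = makeThread();
t.run(function (s) { return s.length; }, 'first job', function (r1) {
  // reuse the same thread with another ƒ instead of creating a new one...
  t.run(function (s) { return s.toUpperCase(); }, 'second job', function (r2) {
    console.log(r1, r2); // 9 'SECOND JOB'
    t.destroy();         // ...and free it explicitly when done
  });
});
```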

Here are the numbers. They are in MB except where noted; the columns are # of threads, real mem, virtual mem:

0,    12,      40
1,    13,      50
10,   24,      136
100,  135,     1014
500,  631,     4.83 GB
1000, 1.22 GB, 9.62 GB
2000, 2.43 GB, 19.21 GB

These figures are very, very good news in fact, because node *processes* take 4 times as much. IOW, a thread not only launches much much faster but in addition it takes only 1/4 the memory of a node process :-)
--
Jorge.


Thomas Fritz

Nov 16, 2011, 4:30:42 AM
to nod...@googlegroups.com
Hi,

I am really looking forward to getting my hands dirty with your code! ;)
When will you publish it?

Regards

---
Thomas FRITZ
web http://fritzthomas.com
twitter http://twitter.com/thomasf

2011/11/12 Jorge <jo...@jorgechamorro.com>:

Ryan Dahl

Nov 16, 2011, 4:21:59 PM
to nod...@googlegroups.com
On Thu, Nov 10, 2011 at 12:59 AM, Jorge <jo...@jorgechamorro.com> wrote:
> On 10/11/2011, at 01:18, Jorge wrote:
>> On 09/11/2011, at 19:49, Jeff Fifield wrote:
>>
>>> How is your proposal better/worse/same/different than webworkers or just communicating with a pool of worker processes?
>>
>> I'm not sure, we'll see.
>
> One more thing:
>
> The ability to launch a pool of node *processes* (not threads), + the ability to inter communicate them, with a good API that makes it easy (and fast), is at least as important as (if not more than) the need to run an in-process cpu-bound task asynchronously, without blocking the event loop, in the background.
>
> The former is being developed by ry and co. (RyCo, SF), it's the .fork() thingy.
> But the latter is not, and that's what I'm trying to address with this.
>
> There's a reason for wanting the latter: if you delegate a long running cpu-bound task to a node worker *process*, you're gonna block its event loop.
> That means that it will in fact go 'offline' for the duration of the task!
> Not so with b/g threads.

I'm not sure what you mean about blocking here...

> OTOH, the OS resources such a file descriptors available to a single node process are not unlimited: it's easy to hit the limit if you try e.g. to open 100k files at once. This kind of problems are solved by a pool of (well) interconnected node processes, but not with threads.


We're also adding Isolate support for v0.8. We should combine forces.
Let's talk on IRC.

Diogo Resende

Nov 16, 2011, 4:29:38 PM
to nod...@googlegroups.com
On Wed, 16 Nov 2011 13:21:59 -0800, Ryan Dahl wrote:
> We're also adding Isolate support for v0.8. We should combine forces.
> Let's talk on IRC.

+1000

---
Diogo R.

Jorge

Dec 1, 2011, 4:51:28 AM
to nod...@googlegroups.com
b00_fibonacci_server_with_threads.js
b01_fibonacci_server_no_threads.js

Jake Verbaten

Dec 1, 2011, 7:40:28 AM
to nod...@googlegroups.com
Now do it again but use `--cluster` on the NO_THREADS example.

Also, it looks like you're running 2 threads but you have super-linear speedup. That is clearly madness.

Jorge

Dec 1, 2011, 8:18:49 AM
to nod...@googlegroups.com
On 01/12/2011, at 13:40, Jake Verbaten wrote:

> Now do it again but use `--cluster` on the NO_THREADS example.

$ node -v
v0.6.1

$ node --cluster benchmarks/b01_fibonacci_server_no_threads.js
Error: unrecognized flag --cluster
Try --help for options

With node v0.6.3:

$ cd nodev0.6.3 && git pull origin master && ./configure && etc ...
$ JOBS=4 make install
`make install` is not implemented yet. Bug bnoordhuis about it in #node.js

@ bnoordhuis, help me please !

> Also it looks like your running 2 threads but you have super linear speedup. That is clearly madness.

It goes flat-out... 3x faster with just 2 threads in a Mac with 2 cores.
--
Jorge.

Jake Verbaten

Dec 1, 2011, 8:26:32 AM
to nod...@googlegroups.com
$ node --cluster benchmarks/b01_fibonacci_server_no_threads.js
Error: unrecognized flag --cluster

When did we remove the --cluster flag? It doesn't seem to be there anymore. In that case a proper comparison would be using the cluster API to start two workers.
 
It goes flat-out... 3x faster with just 2 threads in a Mac with 2 cores.

You understand that super-linear speedup is madness, right? You're using 2 threads and you managed to get 3 times the amount of work done.

This implies something is horribly wrong with your benchmark. 

Axel Kittenberger

Dec 1, 2011, 8:30:53 AM
to nod...@googlegroups.com
On Thu, Dec 1, 2011 at 10:51 AM, Jorge <jo...@jorgechamorro.com> wrote:
> See:
>
> http://youtube.com/watch?v=_wyQvafohlE

Can you please write text? You know, like in the good old days when we
weren't all functionally illiterate, when one wrote a report on what
s/he has done, the results, a conclusion, bla.

Jann Horn

Dec 1, 2011, 8:39:18 AM
to nod...@googlegroups.com
2011/12/1 Jorge <jo...@jorgechamorro.com>:

> With node v0.6.3:
>
> $ cd nodev0.6.3 && git pull origin master && ./configure && etc ...
> $ JOBS=4 make install
> `make install` is not implemented yet. Bug bnoordhuis about it in #node.js
>
> @ bnoordhuis, help me please !

That's not node v0.6.3. That's git HEAD. Hell, if you can't even tell
which version of node you're installing, I guess your benchmark isn't
THAT useful.

Jorge

Dec 1, 2011, 8:44:29 AM
to nod...@googlegroups.com
On 01/12/2011, at 14:26, Jake Verbaten wrote:

> On 01/12/2011, at 14:18, Jorge wrote:
>> It goes flat-out... 3x faster with just 2 threads in a Mac with 2 cores.
>
> You understand that super linear speedup is madness right? Your using 2 threads and you managed to get 3 times the amount of work done.
>
> This implies something is horribly wrong with your benchmark.

No... we are using 3 or perhaps 4 threads: node is using one (serving the 'quick' responses as fast as it can) and perhaps 2 (is it also using one of libeio's threads for something?), and fib()s are running in parallel in two additional threads.
--
Jorge.

Jake Verbaten

Dec 1, 2011, 8:50:58 AM
to nod...@googlegroups.com
>> It goes flat-out... 3x faster with just 2 threads in a Mac with 2 cores.
>
> You understand that super linear speedup is madness right? Your using 2 threads and you managed to get 3 times the amount of work done.
>
> This implies something is horribly wrong with your benchmark.

fib()s are running in parallel in two additional threads.

If the fibs are the bottleneck (which they should be for this example) and you're running two fibs in threads instead of one, then the speedup should be 2x at most.

Your benchmark implies your threading does some more optimisations / black magic internally.

And again, don't compare threads to node; that's a waste of time. Of course multiple cores are faster than one.

Make a comparison between threads and multiple node processes using the standard cluster API. 

Ilya Dmitrichenko

Dec 1, 2011, 8:54:01 AM
to nod...@googlegroups.com
On 1 December 2011 13:30, Axel Kittenberger <axk...@gmail.com> wrote:
> Please can you write text? You know like in the good days we weren't
> all functional illiterates, where one wrote a report what s/he has
> done, the results, a conclusio, bla.

I absolutely agree; this type of video post cannot be considered
proof of any reasonable kind. If one sees a problem of this nature, he
or she should write a formal report describing all the steps with the
aid of block diagrams and graphs. This should include results of
testing on different hardware and OS platforms.

Jake Verbaten

Dec 1, 2011, 8:55:20 AM
to nod...@googlegroups.com
http://youtube.com/watch?v=_wyQvafohlE


Node isn't cancer. Threads aren't evil.

Thoughts ?

As an aside, the music is good. Is that you beat boxing? \o/ 

Jorge

Dec 1, 2011, 9:03:13 AM
to nod...@googlegroups.com
On 01/12/2011, at 14:26, Jake Verbaten wrote:
On 01/12/2011, at 14:18, Jorge wrote:
>> $ node --cluster benchmarks/b01_fibonacci_server_no_threads.js
>> Error: unrecognized flag --cluster
>
> When did we remove the --cluster flag? It doesn't seem to be there anymore. In that case a proper comparison would be using the cluster API to start two workers.

How would that be ?
I'd like to benchmark that against threads.
--
Jorge.

Jorge

Dec 1, 2011, 9:13:07 AM
to nod...@googlegroups.com
On 01/12/2011, at 14:50, Jake Verbaten wrote:

On 01/12/2011, at 14:44, Jorge wrote:
> >> It goes flat-out... 3x faster with just 2 threads in a Mac with 2 cores.
> >
> > You understand that super linear speedup is madness right? Your using 2 threads and you managed to get 3 times the amount of work done.
> >
> > This implies something is horribly wrong with your benchmark.
>
> fib()s are running in parallel in two additonal threads.
>
> If the fibs are the bottleneck ( which they should be for the example ). And your running two fibs in threads instead of one then the speed up should be 2 at most.
>
> Your benchmark implies your threading does some more optimisations / black magic internally.

No, it does not do black magic, I promise :-P

It simply lets the node event loop turn freely and continue accepting connections and doing what it's supposed to do instead of staying blocked, while the threads do the blocking jobs in parallel.

> And again don't compare threads to node, that's a waste of time. Of multiple cores are faster then one.
>
> Make a comparison between threads and multiple node processes using the standard cluster API.

How would that be, please ?

If someone writes it, I promise to post the results in another video. With more good music :-)
--
Jorge.

Jake Verbaten

Dec 1, 2011, 9:19:13 AM
to nod...@googlegroups.com
> And again don't compare threads to node, that's a waste of time. Of multiple cores are faster then one.
>
> Make a comparison between threads and multiple node processes using the standard cluster API.

How would that be, please ?

If someone writes it, I promise to post the results in another video. With more good music :-)

http://nodejs.org/docs/v0.6.3/api/cluster.html I haven't used the cluster API. If your benchmark was OS I would fork it and fix it for you.

billywhizz

Dec 1, 2011, 9:27:50 AM
to nodejs
i think you should be running ab from a separate machine and
dedicating at least 2 cpu's to the server being tested. you should
also use the -k flag with ab to use keepalive connections so you can
eliminate the overhead of creating and tearing down connections on
every request.

will be interesting to have a look at this when it's released.

Jorge

Dec 1, 2011, 9:31:49 AM
to nod...@googlegroups.com
On 01/12/2011, at 15:19, Jake Verbaten wrote:

> On 01/12/2011, at 15:13, Jorge wrote:
>> How would that be, please ?
>>
>> I someone writes it I promise to post the results in another video. With more good music :-)
>
> http://nodejs.org/docs/v0.6.3/api/cluster.html I havn't used the cluster API. If you benchmark was OS I would fork it and fix it for you.

It's totally OS. Free as in free beer, and free as in freedom of speech:

function fib (n) {
  if (n < 2) {
    return 1;
  }
  else {
    return fib(n-2) + fib(n-1);
  }
}

var i= 0;
var n= 35;
function ƒ (req, res) {
  res.writeHead(200, {'Content-Type': 'text/plain'});
  if (i++ % 10) {
    res.end('quick\n');
    process.stdout.write('·');
  }
  else {
    res.end(fib(n)+ '\n');
    process.stdout.write('•');
  }
}

var ip= "127.0.0.1";
var port= +process.argv[2] || 1234;
require('http').createServer(ƒ).listen(port, ip);
console.log('Fibonacci server (NO THREADS) running @'+ ip+ ':'+ port);


b01_fibonacci_server_no_threads.js
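The threaded counterpart (b00, attached earlier but not pasted) presumably moves the fib(n) call off the event loop. A hedged sketch of what that handler might look like, with the unreleased `Threads` module stubbed via `setImmediate` so the snippet is self-contained:

```javascript
// Hypothetical sketch of the threaded handler: instead of computing
// fib(35) inline (blocking the event loop), hand it to a background
// thread and reply from the callback. The stub below is single-threaded;
// the real module would run fib in a pthread + v8 isolate.
var Threads = {
  give_me_a_thread: function (fn, data, cb) {
    setImmediate(function () { cb(fn(data)); });
  }
};

function fib (n) {
  return n < 2 ? 1 : fib(n - 2) + fib(n - 1);
}

function handler (req, res) {
  res.writeHead(200, { 'Content-Type': 'text/plain' });
  Threads.give_me_a_thread(fib, 35, function (result) {
    res.end(result + '\n'); // the event loop stayed free while fib ran
  });
}
```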

Jorge

Dec 1, 2011, 9:38:17 AM
to nod...@googlegroups.com
On 01/12/2011, at 15:27, billywhizz wrote:

> i think you should be running ab from a separate machine and
> dedicating at least 2 cpu's to the server being tested. you should
> also use the -k flag with ab to use keepalive connections so you can
> eliminate the overhead of creating and tearing down connections on
> every request.
>
> will be interesting to have a look at this when it's released.

For now, all I can send is a precompiled threads module to anyone who wants to try it.

I just need to know what version of node you have, and what OS you want me to compile it for.

It's not ready for Windows (yet), only for UNIX.
--
Jorge.

Jake Verbaten

Dec 1, 2011, 9:53:21 AM
to nod...@googlegroups.com
> http://nodejs.org/docs/v0.6.3/api/cluster.html I havn't used the cluster API. If you benchmark was OS I would fork it and fix it for you.

It's totally OS. Free as in free beer, and free as in freedom of speech:

That's not how comparisons work. Comparing the no-threads version on my machine to the threads version on your machine is silly.

I need the threads version to run a real comparative benchmark. 

billywhizz

Dec 1, 2011, 9:53:58 AM
to nodejs
this should do the trick: https://gist.github.com/1417302

this does 21 reqs/sec on my setup with a single core. scales pretty
much linearly as i add cores - 82 req/sec using all 4 cores on the
machine.

ab -k -c 10 -n 1000 http://127.0.0.1:2222/


Concurrency Level:      10
Time taken for tests:   12.085 seconds
Complete requests:      1000
Failed requests:        897
   (Connect: 0, Receive: 0, Length: 897, Exceptions: 0)
Write errors:           0
Keep-Alive requests:    0
Total transferred:      70309 bytes
HTML transferred:       6309 bytes
Requests per second:    82.75 [#/sec] (mean)
Time per request:       120.845 [ms] (mean)
Time per request:       12.085 [ms] (mean, across all concurrent requests)
Transfer rate:          5.68 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.0      0       1
Processing:     0  119 194.8      1     892
Waiting:        0  119 194.8      1     892
Total:          0  119 194.8      1     892

Percentage of the requests served within a certain time (ms)
  50%      1
  66%     20
  75%    177
  80%    454
  90%    463
  95%    471
  98%    474
  99%    475
 100%    892 (longest request)

On Dec 1, 2:31 pm, Jorge <jo...@jorgechamorro.com> wrote:
> On 01/12/2011, at 15:19, Jake Verbaten wrote:
>
> > On 01/12/2011, at 15:13, Jorge wrote:
> >> How would that be, please ?
>
> >> I someone writes it I promise to post the results in another video. With more good music :-)
>

> > http://nodejs.org/docs/v0.6.3/api/cluster.html I haven't used the cluster API. If your benchmark was OS I would fork it and fix it for you.
>
> It's totally OS. Free as in free beer, and free as in freedom of speech:
>
> function fib (n) {
>   if (n < 2) {
>     return 1;
>   }
>   else {
>     return fib(n-2) + fib(n-1);
>   }
>
> }
>
> var i= 0;
> var n= 35;
> function ƒ (req, res) {
>   res.writeHead(200, {'Content-Type': 'text/plain'});
>   if (i++ % 10) {
>     res.end('quick\n');
>     process.stdout.write('·');
>   }
>   else {
>     res.end(fib(n)+ '\n');
>     process.stdout.write('•');
>   }
>
> }
>
> var ip= "127.0.0.1";
> var port= +process.argv[2] || 1234;
> require('http').createServer(ƒ).listen(port, ip);
> console.log('Fibonacci server (NO THREADS) running @'+ ip+ ':'+ port);
>

>  b01_fibonacci_server_no_threads.js
>
>
>
> I want to know how it compares to threads. Please, please, fix it for me.
>
> Thanks in advance,
> --
> Jorge.

billywhizz

Dec 1, 2011, 9:55:45 AM
to nodejs
oops. i just realised there are a bunch of failed requests. will see
what is going on...

On Dec 1, 2:53 pm, billywhizz <apjohn...@gmail.com> wrote:
> this should do the trick: https://gist.github.com/1417302
>
> this does 21 reqs/sec on my setup with a single core. scales pretty
> much linearly as i add cores - 82 req/sec using all 4 cores on the
> machine.
>

> ab -k -c 10 -n 1000 http://127.0.0.1:2222/

> > > http://nodejs.org/docs/v0.6.3/api/cluster.html I haven't used the cluster API. If your benchmark was OS I would fork it and fix it for you.

billywhizz

Dec 1, 2011, 10:07:22 AM
to nodejs
i've changed the gist now so it doesn't send any output and just calls
the fib function. 2.17 req/sec with one core, 8.56 with four cores...

On Dec 1, 2:53 pm, billywhizz <apjohn...@gmail.com> wrote:

> this should do the trick: https://gist.github.com/1417302
>
> this does 21 reqs/sec on my setup with a single core. scales pretty
> much linearly as i add cores - 82 req/sec using all 4 cores on the
> machine.
>

> ab -k -c 10 -n 1000 http://127.0.0.1:2222/

> > > http://nodejs.org/docs/v0.6.3/api/cluster.html I haven't used the cluster API. If your benchmark was OS I would fork it and fix it for you.

Bruno Jouhier

Dec 1, 2011, 10:54:00 AM
to nodejs

+3: node isn't cancer, threads aren't evil, and the music is good!

Jorge

Dec 1, 2011, 12:12:17 PM
to nod...@googlegroups.com
On 01/12/2011, at 16:07, billywhizz wrote:

> i've changed the gist now so it doesn't send any output and just calls
> the fib function. 2.17 req/sec with one core, 8.56 with four cores...

Hi billywhizz,

Thank you very much!

I've run the benchmarks, and my threads beat their cluster two to one...

http://www.youtube.com/watch?v=OSF-tUH3gDo

Threads_a_gogo is 2x faster... :-)

P.S.
This is the cluster code I've used: perhaps there's something wrong with it ?

var cluster = require('cluster');

function fib (n) {
  if (n < 2) {
    return 1;
  }
  else {
    return fib(n-2) + fib(n-1);
  }
}

var i= 0;
var n= 35;

function srv (req, res) {
  res.writeHead(200, {'Content-Type': 'text/plain'});
  if (i++ % 10) {
    res.end('quick\n');
    process.stdout.write('·');
  }
  else {
    res.end(fib(n)+ '\n');
    process.stdout.write('•');
  }
}

if (cluster.isMaster) {
  var numCPUs = process.argv[3] || 1;
  for (var i = 0; i < numCPUs; i++) {
    cluster.fork();
  }

  cluster.on('death', function(worker) {
    console.log('worker ' + worker.pid + ' died');
  });
} else {
  var port= + process.argv[2] || 1234;
  require('http').createServer(srv).listen(port);
  console.log('Fibonacci server (CLUSTERED) listening: ' + port);
}

Cheers,
--
Jorge.

Jake Verbaten

unread,
Dec 1, 2011, 12:16:52 PM12/1/11
to nod...@googlegroups.com
I've run the benchmarks, and my threads beat their cluster two to one...

http://www.youtube.com/watch?v=OSF-tUH3gDo

Cluster version has 450 failures compared to the threaded version.

There must be a bug in the code somewhere. I don't think it's a fair comparison until it's fixed. 

Jorge

unread,
Dec 1, 2011, 12:25:45 PM12/1/11
to nod...@googlegroups.com
I think that's normal: as soon as a node process hits a fib() it stops responding (and accept()ing connections), and there are only 2 processes, but concurrency was set to 4 in ab.

In any case, at the end, both have completed the 500 requests... but one of them twice as fast as the other :-))
-- 
Jorge.

vincentcr

unread,
Dec 1, 2011, 12:30:12 PM12/1/11
to nod...@googlegroups.com
>> threads aren't evil

no, but shared, mutable state + threads is a most evil brew. 

i really, really hope that "isolates" means what i think it means, because this idea that i'm hearing here of sharing buffers and globals is just, well, evil. 

if you can't guarantee that your data is immutable, it should never, ever, be shareable across threads.

because next thing you know people will be asking for semaphores and atomic writes and why can't i synchronize my threads and i have this weird bug that only happens once every 3 billion iterations and my god i hate node!!!

strings only, please!
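The "strings only" rule can be sketched in plain JavaScript: if every message crossing the thread boundary is serialized, the receiver gets a private copy and shared mutable state is impossible. A minimal illustration (the "channel" here is just a function call standing in for a real thread boundary):

```javascript
// Simulate a "strings only" channel: every message is serialized on send
// and deserialized on receive, so the two sides never share an object.
function makeChannel(onMessage) {
  return {
    send: function (obj) {
      var wire = JSON.stringify(obj);  // the only thing that crosses
      onMessage(JSON.parse(wire));     // receiver gets a fresh copy
    }
  };
}

var received;
var channel = makeChannel(function (msg) { received = msg; });

var original = { job: 'fib', n: 35 };
channel.send(original);

received.n = 999;          // mutating the receiver's copy...
console.log(original.n);   // ...leaves the sender's object alone: 35
```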

billywhizz

unread,
Dec 1, 2011, 12:32:20 PM12/1/11
to nodejs
you are getting a lot of failed requests. i would change the test to
just do the fibonacci and return the same http response every time
(see my gist) and also use keepalive connections so you are spending
as little time doing http stuff as possible. you should stop
writing stuff to console too.

frankly, i don't see the point in continuing to post these benchmarks
up here unless you can release the code so we can test it for
ourselves. i don't believe that threads should beat the clustering
solution 2 to 1 in a case like this as the load balancing of requests
should all be taken care of by the OS kernel in the clustered
application. in fact i would wager that the clustered solution would
be faster as it shouldn't have any context switching...

Nathan Rajlich

unread,
Dec 1, 2011, 12:34:10 PM12/1/11
to nod...@googlegroups.com
Jorge, why are you holding out on the isolates code? That's the stuff we all want to look at ;)


Jorge

unread,
Dec 1, 2011, 12:35:58 PM12/1/11
to nod...@googlegroups.com

One more thing:

The cluster has needed 12.9+12.8+9 = 34.7 MB, but the threads_a_gogo one only 16.9 MB at its peak.
--
Jorge.

Jorge

unread,
Dec 1, 2011, 12:42:30 PM12/1/11
to nod...@googlegroups.com

On 01/12/2011, at 18:32, billywhizz wrote:

> you are getting a lot of failed requests. i would change the test to
> just do the fibonnaci and return the same http response every time
> (see my gist) and also use keepalive connections so you are spending
> as little time doing http stuff as possible. you should also stop
> writing stuff to console too.
>
> frankly, i don't see the point in continuing to post these benchmarks
> up here unless you can release the code so we can test it for
> ourselves. i don't believe that threads should beat the clustering
> solution 2 to 1 in a case like this as the load balancing of requests
> should all be taken care of by the OS kernel in the clustered
> application. in fact i would wager that the clustered solution would
> be faster as it shouldn't have any context switching...

With the cluster, as soon as you get two fib()s running at once, the whole cluster is dead, blocked, finito, kaputt.

With threads, I can keep serving requests ad infinitum, because it doesn't matter how many fib()s are running at once, node's event loop is never blocked.

That's why background threads are a much better solution for this problem. And they're faster too. And they're less expensive (need less memory).
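The blocking described above is easy to observe in stock node, no threads needed: a timer due in 10 ms cannot fire until a synchronous fib() returns. A minimal sketch (the exact delay printed depends on your machine):

```javascript
// While fib() runs, the event loop is starved: the 10 ms timer
// cannot fire until the synchronous computation returns.
function fib(n) {
  return n < 2 ? 1 : fib(n - 2) + fib(n - 1);
}

var scheduled = Date.now();
setTimeout(function () {
  console.log('timer fired ' + (Date.now() - scheduled) + ' ms after scheduling');
}, 10);

var result = fib(30);               // long enough to be noticeable
console.log('fib(30) = ' + result); // prints "fib(30) = 1346269"
```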
--
Jorge.

Bruno Jouhier

unread,
Dec 1, 2011, 12:48:27 PM12/1/11
to nodejs

Agreed (almost): there should not be any primitives to synchronize
threads and communication should be with immutable data. I would make
one single exception for buffers so that people can communicate
efficiently with threads. Passing a buffer to a thread should be
subject to the same rules as passing a buffer to a core I/O function.
When you do this, the I/O function may write to the buffer while your
JS code is executing. So there is (very little and well hidden) shared
mutable state in node today!

The very important point is that globals and mutable objects aren't
shared and that everything except buffers be safe.
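The buffer exception already shows up in single-threaded node: a slice() of a Buffer is a view over the same memory, not a copy, so writes through one reference are visible through the other. A small demonstration (plain node, no threading involved; Buffer.from is the modern spelling of what was `new Buffer(...)` in 2011-era node):

```javascript
// A Buffer slice is a view onto the same underlying memory, not a copy:
// mutating the view mutates the original.
var buf = Buffer.from([1, 2, 3, 4]);
var view = buf.slice(1, 3);   // shares memory with buf

view[0] = 99;                 // write through the view...

console.log(buf[1]);          // ...is visible through the original: 99
```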

Bruno

Jorge

unread,
Dec 1, 2011, 12:50:02 PM12/1/11
to nod...@googlegroups.com
On 01/12/2011, at 18:34, Nathan Rajlich wrote:

> Jorge, why are you holding out on the isolates code? That's the stuff we all want to look at ;)

I'm not. I and the beta testers (hi beta testers !) :-P have got all the source code, but it's not well finished yet.

And we're still trying to discover what would be the best API for this... for example, should it be sync or async looking ? (I'm kidding :-P)

Do you want to try it for yourself ?
--
Jorge.

Liam

unread,
Dec 1, 2011, 1:15:02 PM12/1/11
to nodejs
Hi Jorge! o/ Yes, we keep bugging him for more features, and he's
changing it every 15 minutes.

Jorge

unread,
Dec 1, 2011, 2:22:46 PM12/1/11
to nod...@googlegroups.com

And a cup of tea with butter cookies.
--
Jorge.

Bruno Jouhier

unread,
Dec 1, 2011, 5:25:20 PM12/1/11
to nodejs
Hi Jorge,

Beta tester #2 :-) had a bit of time to play with the latest version.
Cool stuff!

Bruno

Ted Young

unread,
Dec 1, 2011, 5:28:53 PM12/1/11
to nod...@googlegroups.com
Isn't spawning v8 isolates coming to node core soon? That's
essentially what this is, correct?

Liam

unread,
Dec 1, 2011, 5:56:04 PM12/1/11
to nodejs
It's on the 0.8 roadmap, yes.

Bruno Jouhier

unread,
Dec 1, 2011, 6:15:54 PM12/1/11
to nodejs
Yes, and this is cool too. Jorge's benchmarks show that threads have
their place. The great thing is that we'll get only the "good parts"
of threads.

Bruno

Brandon Benvie

unread,
Dec 2, 2011, 2:04:56 AM12/2/11
to nod...@googlegroups.com
V8 3.8 uses isolates like crazy. It also comes with a lot of multipurpose machinery (bindable to JS via C++) for doing naughty things between processes to facilitate isolates. The thing about isolates is that they allow you to run multiple separate contexts in different threads as if they were... isolated, but obviously the whole benefit is that they live inside a single overlord process. The magic behind isolates is that they require the JavaScript VM itself to have ways to deal with the whole concurrency/threading thing instead of just handing it off to something like libuv to do the dirty work. That brings those tools a lot closer to being usable from the JavaScript itself.

From what I can understand, the strategy for implementing isolates and letting JavaScript stuff move between them comes down to a couple of related parts. The first is taking the concept of Proxies and lazy objects and going all the way with it. V8 has "MaybeObjects" already, but the idea is expanded so that everything is lazy: you only instantiate what you need, for just as long as something looks at it. It's like the inverse of garbage collection: don't make the thing until something tries to look at it. Don't make its arms if only its feet are being looked at. In this way you could create a, from JavaScript's standpoint, "full" context for nearly no cost. An object is represented entirely by a 32-bit uint turned into a string, and that's all it costs until something tries to use it.

The other half is having very fine-grained control over, and responsiveness to, freezing and unfreezing things. You have to be able to represent things exactly as they otherwise would be. This also becomes important when you're transporting user-created things. You need to be able to cryogenically freeze an object or function and then turn it into a real thing as soon as either it or something else wants it to be real. The same set of requirements comes into play in the transportation of complex objects and contexts across the gap between, say, threads and isolated contexts. Some of it can be done by serializing data, some with really clever use of pointers and carefully controlled shared memory.

Vyacheslav Egorov

unread,
Dec 2, 2011, 7:52:06 AM12/2/11
to nod...@googlegroups.com
> V8 has "MaybeObjects" already

MaybeObject has nothing to do with isolates or proxies or lazyness.
MaybeObject is a value that can be either normal Object or a Failure
(allocation failure usually or exception).

In a sense it's similar to Maybe a in Haskell:

data Maybe a = Nothing | Just a
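For readers not fluent in Haskell, the same shape can be mimicked in JavaScript with a tagged wrapper. This is only an analogy for what MaybeObject encodes (a value that is either a result or a failure), not V8's actual C++ API:

```javascript
// A JS analogue of `data Maybe a = Nothing | Just a`: a computation
// returns either a tagged value or a tagged failure, and the caller
// must unwrap it explicitly instead of getting an exception.
var Nothing = { tag: 'Nothing' };
function Just(value) { return { tag: 'Just', value: value }; }

function safeDiv(a, b) {
  return b === 0 ? Nothing : Just(a / b);
}

var ok = safeDiv(10, 2);
var bad = safeDiv(10, 0);

console.log(ok.tag, ok.value);   // Just 5
console.log(bad.tag);            // Nothing
```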

--
Vyacheslav Egorov

vincentcr

unread,
Dec 2, 2011, 8:14:05 AM12/2/11
to nod...@googlegroups.com
a bit of a side question, but what happens with modules and types in a multithreaded node? 

a require'd module will be loaded once for each thread, not once per process, right? So whatever my module exports does not become a singleton across threads?

and what about builtin types? is there only one instance of the String prototype?

Vyacheslav Egorov

unread,
Dec 2, 2011, 8:20:36 AM12/2/11
to nod...@googlegroups.com
> and what about builtin types? is there only one instance of the String
> prototype?

No. Every context (and thus every isolate) has its own builtin
objects. They are called isolates because nothing in their JS worlds
is shared.
--
Vyacheslav Egorov

Number 9

unread,
Dec 2, 2011, 6:44:26 AM12/2/11
to nodejs
Sounds great
Are you using OpenMP (open multiprocessing)?
It would be great to get that support in node core
so other native heavy lifters can be implemented cleanly

Can't wait for your release!!

On Nov 9, 11:19 am, Jorge <jo...@jorgechamorro.com> wrote:
> On 09/11/2011, at 12:06, Christophe Eymard wrote:
>
> > On Wed, Nov 9, 2011 at 11:58, Jorge <jo...@jorgechamorro.com> wrote:
> > On 09/11/2011, at 07:24, Mark Hahn wrote:
>
> > > I couldn't see anything except a frozen screen with some green columns and a small box with two cpu history graphs.  Maybe it's a windows thing.
>
> >http://www.youtube.com/watch?v=lnCjk2_3WLQ
>
> > Still don't get it...
>
> > Did you bring threads to the javascript world ?
>
> Yes.
>
> > Are they usable ?
>
> Yes. The API is just like that of an I/O call:
>
> nuThread= Threads.give_me_a_thread( function, data, callback );
>
> You simply provide a function to run, the data it's going to receive as a parameter, and a callback, and give_me_a_thread() runs it in a separate thread (a true operating system pthread running a v8 isolate) in parallel with the main thread. When the function ends, it returns a value, and that's what you get in the callback. If you've got say, a cpu with six cores, and you launch 6 threads, each one is going to run flat-out in its core... That's truly getting all the juice out of your cpu cores !
>
> Awesome, isn't it ? :-)
> --
> Jorge.

Brandon Benvie

unread,
Dec 2, 2011, 5:26:57 PM12/2/11
to nod...@googlegroups.com
Work is in progress to have isolates in Node for 0.8, and there's a branch on github that maybe kind of works. As for MaybeObjects, I was referencing how there's a kind of genesis or continuum of "existence" and "what a thing is". A MaybeObject is the first step, and the new thing I vaguely described is the next: the least complete thing that is confirmed to exist, referred to in V8 as an "smi" (small immediate integer, since the only thing that actually exists is its ID or token or whatever it's called).

Jorge

unread,
Dec 3, 2011, 3:27:43 AM12/3/11
to nod...@googlegroups.com

By the way, the similarity of this code (above) with C code using pid_t fork( void ) (man 2 fork), is what gives node's .fork() method its name, isn't it ?
--
Jorge.

Jorge

unread,
Mar 7, 2012, 2:23:49 PM3/7/12
to nod...@googlegroups.com
On Nov 16, 2011, at 10:21 PM, Ryan Dahl wrote:
> On Thu, Nov 10, 2011 at 12:59 AM, Jorge <jo...@jorgechamorro.com> wrote:
>> On 10/11/2011, at 01:18, Jorge wrote:
>>> On 09/11/2011, at 19:49, Jeff Fifield wrote:
>>>
>>>> How is your proposal better/worse/same/different than webworkers or just communicating with a pool of worker processes?
>>>
>>> I'm not sure, we'll see.
>>
>> One more thing:
>>
>> The ability to launch a pool of node *processes* (not threads), + the ability to inter communicate them, with a good API that makes it easy (and fast), is at least as important as (if not more than) the need to run an in-process cpu-bound task asynchronously, without blocking the event loop, in the background.
>>
>> The former is being developed by ry and co. (RyCo, SF), it's the .fork() thingy.
>> But the latter is not, and that's what I'm trying to address with this.
>>
>> There's a reason for wanting the latter: if you delegate a long running cpu-bound task to a node worker *process*, you're gonna block its event loop.
>> That means that it will in fact go 'offline' for the duration of the task!
>> Not so with b/g threads.
>
> I'm not sure what you mean about blocking here...
>
>> OTOH, the OS resources such a file descriptors available to a single node process are not unlimited: it's easy to hit the limit if you try e.g. to open 100k files at once. This kind of problems are solved by a pool of (well) interconnected node processes, but not with threads.
>
>
> We're also adding Isolate support for v0.8. We should combine forces.
> Let's talk on IRC.

Today is the D day, finally: JavaScript threads for Node.js (using v8 isolates):

https://github.com/xk/node-threads-a-gogo

npm install threads_a_gogo

Special thanks to: Bruno Jouhier and Liam Breck.

Enjoy!
--
Jorge.

Bradley Meck

unread,
Mar 7, 2012, 3:37:19 PM3/7/12
to nod...@googlegroups.com
Really like the work! Shared nothing (well, message passing) is a great way to handle concurrency inside a process for node. I would suggest a means to prioritize threads if possible (consumers and producers can have different priorities). Is there an easy way to just disable the 'puts' variable or just override it for every eval?

Axel Kittenberger

unread,
Mar 7, 2012, 3:45:37 PM3/7/12
to nod...@googlegroups.com
No more music videos with green screens?

IMHO it should be possible to share deeply frozen structures without
having to serialize/deserialize, storing the pointer to them in a
container that guarantees atomicness. The Clojure layout is something
to seriously consider, and IMHO you do not have to take over the full
functional style to adopt its sharing of immutables by default through
atomic references.

PS: the "More examples" links in Readme.md point to
https://github.com/xk/private_threads_a_gogo/

Bert Belder

unread,
Mar 7, 2012, 6:36:28 PM3/7/12
to nodejs
Just curious, what does this add over node-webworkers and similar
approaches?

Matt

unread,
Mar 7, 2012, 7:51:27 PM3/7/12
to nod...@googlegroups.com
It makes fibonacci sequences really fast :-)

Jann Horn

unread,
Mar 8, 2012, 4:43:45 AM3/8/12
to nod...@googlegroups.com
On Wednesday, 07.03.2012, at 20:23 +0100, Jorge wrote:
> Today is the D day, finally: JavaScript threads for Node.js (using v8 isolates):
>
> https://github.com/xk/node-threads-a-gogo
>
> npm install threads_a_gogo
>
> Special thanks to: Bruno Jouhier and Liam Breck.

Whoah, that looks really cool! A few questions:

- Is it possible to use require() inside those isolated things, for
example by using fs.readFile from the root isolate and blocking the
child until the callback fires?

- Is there a simple API for making events in the root isolate that get
handled by the first free child isolate only?

- Can I use Buffers in the child, and can I pass them between isolates?

- How efficient would it be to use a proxy object inside of the isolate
to access things outside?

- Is there an API for catching uncaughtExceptions in the isolates?

- When an isolate in a pool fails really hard, will a new one be
created automatically?


Brandon Benvie

unread,
Mar 8, 2012, 12:17:39 PM3/8/12
to nod...@googlegroups.com
It should be possible to transfer objects between isolates, I'm pretty sure this is an exposed API in Chromium. In this case you aren't sharing access, you're transferring ownership, which is plenty useful in a lot of cases.

Jorge

unread,
Mar 8, 2012, 12:33:53 PM3/8/12
to nod...@googlegroups.com
On Mar 7, 2012, at 9:37 PM, Bradley Meck wrote:

> Really like the work! Shared nothing (well, message passing) is a great way to handle concurrency inside a process for node.

Thanks :-)

> I would suggest a means to prioritize threads if possible (consumers and producers can have different priorities).

Yes, that's possible, istm, both in Linux and in OSX.

> Is there an easy way to just disable the 'puts' variable or just override it for every eval?

All threads are initially .created() with a puts() global. You just have to disable it once (per thread or threadPool) with a thread.eval('delete this.puts'), and it will be gone forever.
--
Jorge.

Jorge

unread,
Mar 8, 2012, 12:36:55 PM3/8/12
to nod...@googlegroups.com
On Mar 7, 2012, at 9:45 PM, Axel Kittenberger wrote:

> No more music videos with green screens?

Oh, yes, there's a video of the event loop with music, but it isn't green:

<http://youtube.com/v/D0uA_NOb0PE?autoplay=1>

> IMHO it should be possible to share deeply frozen structures without
> having to serialize/deserialize and store the pointer to them into a
> container that guarentees atomicness.

Yeah, perhaps with proxies, and (hopefully) not necessarily frozen.

> The Clojure layout is something
> to seriously consider, and IMHO you do not have to take over full
> functional style to adept to its sharing immutables by default through
> atomic references.
>
> PS: the "More examples" links in Readme.md point to
> https://github.com/xk/private_threads_a_gogo/

Thanks, fixed.
--
Jorge.

Jorge

unread,
Mar 8, 2012, 12:50:35 PM3/8/12
to nod...@googlegroups.com
On Mar 8, 2012, at 10:43 AM, Jann Horn wrote:
> Am Mittwoch, den 07.03.2012, 20:23 +0100 schrieb Jorge:
>> Today is the D day, finally: JavaScript threads for Node.js (using v8 isolates):
>>
>> https://github.com/xk/node-threads-a-gogo
>>
>> npm install threads_a_gogo
>>
>> Special thanks to: Bruno Jouhier and Liam Breck.
>
> Whoah, that looks really cool! A few questions:

Thank you :-)

> - Is it possible to use require() inside those isolated things, for
> example by using fs.readFile from the root isolate and blocking the
> child until the callback fires?

Well, I think (I may be wrong) that the require() functionality can be emulated with a thread.load().

> - Is there a simple API for making events in the root isolate that get
> handled by the first free child isolate only?

Yes, when you create a pool, if you use pool.any.emit() it will emit() in the first thread of the pool that becomes idle.

> - Can I use Buffers in the child, and can I pass them between isolates?

Not yet...

> - How efficient would it be to use a proxy object inside of the isolate
> to access things outside?

That's in the roadmap.

> - Is there an API for catching uncaughtExceptions in the isolates?

yes, when you do an .eval(program, cb) you'll get any errors in the cb(err, result)

> - When an isolate in a pool fails really hard, will a new one be
> created automatically?

Nope.
--
Jorge.

Jorge

unread,
Mar 8, 2012, 12:58:16 PM3/8/12
to nod...@googlegroups.com
On Mar 8, 2012, at 6:17 PM, Brandon Benvie wrote:

> It should be possible to transfer objects between isolates, I'm pretty sure this is an exposed API in Chromium. In this case you aren't sharing access, you're transferring ownership, which is plenty useful in a lot of cases.

That would be what I call "pass and forget". Do you think that that's possible, today, in v8 ? I would love to have that, and I'd love to know how !

If not, I think I'd have to build some kind of extraneous, custom, proxy objects that have that magic, and I'm not sure how well would that work, or even if it's feasible.
--
Jorge.

Shripad K

unread,
Mar 8, 2012, 1:22:42 PM3/8/12
to nod...@googlegroups.com
Can you give an example of how you would emulate require() via thread.load()? If a module has inner requires (which it probably will) it would throw ReferenceError(s).

Btw really cool module. :)

Jorge

unread,
Mar 8, 2012, 1:36:26 PM3/8/12
to nod...@googlegroups.com
On Mar 8, 2012, at 7:22 PM, Shripad K wrote:

> Can you give an example as to how you would emulate require() via thread.load() ? if a module has inner requires (which it probably will) it would throw up ReferenceError(s).

Yes, you're right, it would throw. I've not given much thought to that... yet. Patches and pull requests are very much (needed and) welcome :-))

> Btw really cool module. :)

Thank you :-)
--
Jorge.

Axel Kittenberger

unread,
Mar 8, 2012, 3:02:56 PM3/8/12
to nod...@googlegroups.com
>> - Can I use Buffers in the child, and can I pass them between isolates?
>
> Not yet...

It would be nice if there were an API that stored the PID of the
thread the buffer currently "belongs" to; this would make it more
fool-proof.

>> IMHO it should be possible to share deeply frozen structures without
>> having to serialize/deserialize and store the pointer to them into a
>> container that guarentees atomicness.
>
> Yeah, perhaps with proxies, and (hopefully) not necessarily frozen.

I understand the approach: it's up to the next layer to decide which
access mechanism to apply. My opinion on this is that the bad
reputation threads have comes from going the semaphore way, which was
long (and still is?) the proposed standard approach for multiple
threads working on the same data structure. However, deeply frozen
data structures with atomic references are much easier to get right,
or harder to break. They sound less efficient, rebuilding a whole tree
every time you change a little thing, but in practice it isn't that
dramatic. It's not JavaScript's strong point, but immutable data
structures also open all kinds of new possibilities for optimization.
Also, the current Moore's Law gives us more cores and more memory, but
hardly more linear CPU speed, so it fits well with hardware
development to go for algorithms that don't mind too much using
memory. To cut a long story short, please make it easy for the user to
go the immutables/atomic route if s/he decides so.
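The "rebuild a whole tree every time" cost is usually just the path from the root to the change; untouched branches are reused by reference. A minimal sketch of such a persistent update in plain JavaScript (frozen objects standing in for immutable structures; `assoc` is a hypothetical helper, named after Clojure's):

```javascript
// Persistent update: return a new frozen tree with one leaf changed,
// copying only the nodes on the path to the change and sharing the rest.
function assoc(obj, path, value) {
  if (path.length === 0) return value;
  var copy = {};
  for (var k in obj) copy[k] = obj[k];  // shallow copy of this node only
  copy[path[0]] = assoc(obj[path[0]], path.slice(1), value);
  return Object.freeze(copy);
}

var tree = Object.freeze({
  config: Object.freeze({ port: 1234, host: 'localhost' }),
  stats:  Object.freeze({ requests: 0 })
});

var tree2 = assoc(tree, ['stats', 'requests'], 1);

console.log(tree.stats.requests);           // 0  (original untouched)
console.log(tree2.stats.requests);          // 1
console.log(tree2.config === tree.config);  // true: untouched branch shared
```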
