http://javascriptology.com/evilthreads.mov
257× faster than Ruby
2.6× faster than MacRuby
And it's a beta... :-)
--
Jorge.
--
Job Board: http://jobs.nodejs.org/
Posting guidelines: https://github.com/joyent/node/wiki/Mailing-List-Posting-Guidelines
You received this message because you are subscribed to the Google
Groups "nodejs" group.
To post to this group, send email to nod...@googlegroups.com
To unsubscribe from this group, send email to
nodejs+un...@googlegroups.com
For more options, visit this group at
http://groups.google.com/group/nodejs?hl=en
> I couldn't see anything except a frozen screen with some green columns and a small box with two cpu history graphs. Maybe it's a windows thing.
Yes.
> Are they usable ?
Yes. The API is just like that of an I/O call:
nuThread= Threads.give_me_a_thread( function, data, callback );
You simply provide a function to run, the data it's going to receive as a parameter, and a callback, and give_me_a_thread() runs it in a separate thread (a true operating-system pthread running a V8 isolate) in parallel with the main thread. When the function ends, it returns a value, and that's what you get in the callback. If you've got, say, a CPU with six cores and you launch 6 threads, each one is going to run flat-out on its own core... That's truly getting all the juice out of your CPU cores !
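A minimal mock of that calling convention, for illustration only: the name give_me_a_thread and the callback shape come from the description above, but the inline execution here is just a stand-in for the real pthread.

```javascript
// Mock of the Threads API described above (an assumption, not the real
// module): the real give_me_a_thread() runs `fn` in an OS pthread with
// its own V8 isolate; this stub runs it inline just to show the shape.
const Threads = {
  give_me_a_thread(fn, data, callback) {
    const result = fn(data); // real version: runs in a parallel thread
    callback(result);        // real version: fires back on node's event loop
    return {};               // real version: returns a thread handle (nuThread)
  }
};

Threads.give_me_a_thread(
  (s) => s.toUpperCase(),          // function to run in the thread
  'hello',                         // data it receives as a parameter
  (result) => console.log(result)  // gets the function's return value
);
```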
Awesome, isn't it ? :-)
--
Jorge.
Will it be possible to use it similarly to an EventEmitter? I mean,
having the possibility to emit/call the callback from inside the thread
more than once?
> Awesome, isn't it ? :-)
Yes :)
---
Diogo R.
V8 has pthread support built in, so enabling them isn't much of a
problem, but there's a reason why e.g. node-fibers puts a global lock
around everything, effectively making them like green threads (just
unfortunately with the cost of full context switching), and why node
out of the box comes with child_process.fork() instead.
Sorry, but isn't the whole idea of node NOT to have threads and pay
their memory and context switching costs? Because microseconds DO
matter?
Hi !
> If I understand well, you are enabling something like:
>
> doSomethingBig(arg1, arg2, function(err, result) { ... });
>
> where doSomethingBig runs from a V8 isolate in a different thread
> while the main thread continues its normal business and doSomethingBig
> returns its result to the main thread via a regular callback.
Yes, exactly.
> As the thread runs in a V8 isolate, this is safe (and more efficient
> than spawning processes).
Yes, exactly, too. People hear "threads" and start spouting nonsense, it's so funny !
> But this also implies that arg1, arg2, err
> and result be serializable. How do you pass them from one thread to
> the other (JSON?).
At the moment (this is a beta!) the only supported data type is string, and yes, it's passed by copy. To pass an object you'd have to serialize it :-( and yes, most likely you'd use JSON.stringify() for that.
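Since only strings cross the thread boundary, passing an object means serializing on one side and reviving a copy on the other; a small sketch of that round trip:

```javascript
// Only strings cross the thread boundary, so objects go through
// JSON.stringify()/JSON.parse(): the thread gets a copy, never a reference.
const job = { n: 35, label: 'fib' };

const wire = JSON.stringify(job);   // main thread: serialize
const received = JSON.parse(wire);  // inside the thread: revive

console.log(received.n);        // 35
console.log(received === job);  // false -- it's a copy
```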
> It would be nice to be able to pass big data as
> immutable data without copying. Did you crack this one too?
Not yet, but I'm going to open Pandora's box, and I want it to allow passing buffers by reference :-)) SHARED STATE PARTY !
> Also, what's the cost of creating the thread + isolate.
The pthread is cheap (30k+ per second on my Mac core...@2.5GHz), but the isolates + their contexts are not (about 2.5k per second). With thousands of concurrent threads, it quickly allocates several gigabytes of virtual memory...
> Do you
> recreate them on every call,
No.
> or do you maintain a pool
Yes :-)
> (could be
> problematic if doSomethingBig modifies globals)?
Yes, but I see that as a feature. Say you want to 'install' some utilities in the thread: you can then reuse them again and again.
> Seeing the API would help lift some of the mystery. Is this somewhere
> on GitHub?
No, it's not ready yet, but if you're interested send me a pm.
Cheers,
--
Jorge.
With all due respect, if you do not bother to describe what you are
doing (is writing that hard?) in normal text and just put up some .mov
(.mov?) with green screens, I'd rethink accusing people of spouting
nonsense.
Finish it first. Put it online. Then bother us. I don't see what you
want to achieve with this thread.
Yes, as many times as you want.
--
Jorge.
> How is your proposal better/worse/same/different than webworkers or just communicating with a pool of worker processes?
I'm not sure, we'll see.
> Is the callback at thread completion thread safe?
Totally. The thread sends an async event to node's event loop, so the callback is run just as any other JS code in node, in its single main thread.
--
Jorge.
You just made my day :) I'm anxiously waiting for this to be released.
---
Diogo R.
One more thing:
The ability to launch a pool of node *processes* (not threads), plus the ability to have them intercommunicate, with a good API that makes it easy (and fast), is at least as important as (if not more important than) the need to run an in-process CPU-bound task asynchronously in the background, without blocking the event loop.
The former is being developed by ry and co. (RyCo, SF), it's the .fork() thingy.
But the latter is not, and that's what I'm trying to address with this.
There's a reason for wanting the latter: if you delegate a long running cpu-bound task to a node worker *process*, you're gonna block its event loop.
That means that it will in fact go 'offline' for the duration of the task!
Not so with b/g threads.
OTOH, the OS resources available to a single node process, such as file descriptors, are not unlimited: it's easy to hit the limit if you try e.g. to open 100k files at once. These kinds of problems are solved by a pool of (well) interconnected node processes, but not with threads.
--
Jorge.
Perhaps. But you may prefer to write it in five minutes in JS.
> Jorge, what's the memory cost of a V8 Isolate?
Ok, I've got some numbers now. Keep in mind that it's creating a thread plus a clean, new JS context per isolate. The contexts in these figures have nearly nothing in them: The tests were done passing an empty function () {} to each thread:
function ƒ () {}
function cb () {}
var i= N; // N = number of threads to launch
while (i--) Thread.give_me_a_thread( ƒ, '', cb);
So really, in terms of memory, these are the absolute minimum possible costs per thread. If an ƒ in a thread allocates a zillion objects, that thread will use zillions of bytes, obviously :-P
The threads are long lived, once you create one it will exist forever until you explicitly .destroy() it. So once it finishes with its initial ƒ, you can reuse it as many times as you want with other ƒ's, and that's much faster than creating/destroying threads+isolates+contexts again and again.
These are the numbers, in MB except where noted; the columns are # of threads, real mem, virtual mem:
0, 12, 40
1, 13, 50
10, 24, 136
100, 135, 1014
500, 631, 4.83 GB
1000, 1.22 GB, 9.62 GB
2000, 2.43 GB, 19.21 GB
These figures are very, very good news in fact, because node *processes* take four times as much. IOW, a thread not only launches much, much faster, it also takes only 1/4 the memory of a node process :-)
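A quick back-of-the-envelope check on those figures, comparing the 2000-thread row against the 0-thread baseline:

```javascript
// Marginal cost per thread, from the table above: subtract the 0-thread
// baseline and divide by the thread count (all values converted to MB).
const baseline = { real: 12, virt: 40 };                    // MB, 0 threads
const at2000   = { real: 2.43 * 1024, virt: 19.21 * 1024 }; // GB -> MB

const perThreadReal = (at2000.real - baseline.real) / 2000;
const perThreadVirt = (at2000.virt - baseline.virt) / 2000;

console.log(perThreadReal.toFixed(2)); // ~1.24 MB real per thread
console.log(perThreadVirt.toFixed(2)); // ~9.82 MB virtual per thread
```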
--
Jorge.
I am really looking forward to getting my hands dirty with your code! ;)
When will you publish your code?
Regards
---
Thomas FRITZ
web http://fritzthomas.com
twitter http://twitter.com/thomasf
2011/11/12 Jorge <jo...@jorgechamorro.com>:
I'm not sure what you mean about blocking here...
> OTOH, the OS resources available to a single node process, such as file descriptors, are not unlimited: it's easy to hit the limit if you try e.g. to open 100k files at once. These kinds of problems are solved by a pool of (well) interconnected node processes, but not with threads.
We're also adding Isolate support for v0.8. We should combine forces.
Let's talk on IRC.
+1000
---
Diogo R.
> Now do it again but use `--cluster` on the NO_THREADS example.
$ node -v
v0.6.1
$ node --cluster benchmarks/b01_fibonacci_server_no_threads.js
Error: unrecognized flag --cluster
Try --help for options
With node v0.6.3:
$ cd nodev0.6.3 && git pull origin master && ./configure && etc ...
$ JOBS=4 make install
`make install` is not implemented yet. Bug bnoordhuis about it in #node.js
@ bnoordhuis, help me please !
> Also it looks like you're running 2 threads but you have super-linear speedup. That is clearly madness.
It goes flat-out... 3x faster with just 2 threads in a Mac with 2 cores.
--
Jorge.
> $ node --cluster benchmarks/b01_fibonacci_server_no_threads.js
> Error: unrecognized flag --cluster
> It goes flat-out... 3x faster with just 2 threads in a Mac with 2 cores.
Please, can you write text? You know, like in the good old days, when
we weren't all functionally illiterate, and one wrote a report on what
s/he has done, the results, a conclusion, etc.
That's not node v0.6.3. That's git HEAD. Hell, if you can't even tell
which version of node you're installing, I guess your benchmark isn't
THAT useful.
No... we are using 3 or perhaps 4 threads: node is using one (serving the 'quick' responses as fast as it can) and perhaps 2 (is it also using one of libeio's threads for something ?), and the fib()s are running in parallel in two additional threads.
--
Jorge.
>> It goes flat-out... 3x faster with just 2 threads in a Mac with 2 cores.
>
> You understand that super-linear speedup is madness, right? You're using 2 threads and you managed to get 3 times the amount of work done.
>
> This implies something is horribly wrong with your benchmark.
The fib()s are running in parallel in two additional threads.
I absolutely agree: this type of video post cannot be considered proof
of any reasonable kind. If one sees a problem of this nature, he or
she should write a formal report describing all the steps, with the
aid of block diagrams and graphs. This should include results of
testing on different hardware and OS platforms.
http://youtube.com/watch?v=_wyQvafohlE
Node isn't cancer. Threads aren't evil.
Thoughts ?
How would that be ?
I'd like to benchmark that against threads.
--
Jorge.
No, it does not do black magic, I promise :-P
It simply lets the node event loop turn freely and continue accepting connections and doing what it's supposed to do instead of staying blocked, while the threads do the blocking jobs in parallel.
> And again, don't compare threads to node, that's a waste of time. Of course multiple cores are faster than one.
>
> Make a comparison between threads and multiple node processes using the standard cluster API.
How would that be, please ?
If someone writes it, I promise to post the results in another video. With more good music :-)
--
Jorge.
> And again, don't compare threads to node, that's a waste of time. Of course multiple cores are faster than one.
>How would that be, please ?
> Make a comparison between threads and multiple node processes using the standard cluster API.
If someone writes it, I promise to post the results in another video. With more good music :-)
will be interesting to have a look at this when it's released.
It's totally OS. Free as in free beer, and free as in freedom of speech:
function fib (n) {
  if (n < 2) {
    return 1;
  }
  else {
    return fib(n-2) + fib(n-1);
  }
}

var i= 0;
var n= 35;

function ƒ (req, res) {
  res.writeHead(200, {'Content-Type': 'text/plain'});
  if (i++ % 10) {
    res.end('quick\n');
    process.stdout.write('·');
  }
  else {
    res.end(fib(n)+ '\n');
    process.stdout.write('•');
  }
}

var ip= "127.0.0.1";
var port= +process.argv[2] || 1234;
require('http').createServer(ƒ).listen(port, ip);
console.log('Fibonacci server (NO THREADS) running @'+ ip+ ':'+ port);
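Note how the i++ % 10 test in ƒ routes the traffic: one request in ten hits the expensive fib(35), the other nine get the quick response. The routing can be checked in isolation:

```javascript
// The i++ % 10 test above sends exactly one request in ten down the
// slow fib() path (whenever the counter is a multiple of 10).
let i = 0;
const kinds = [];
for (let r = 0; r < 20; r++) {
  kinds.push(i++ % 10 ? 'quick' : 'fib');
}
console.log(kinds.filter(k => k === 'fib').length); // 2 of 20 are slow
```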
> i think you should be running ab from a separate machine and
> dedicating at least 2 cpu's to the server being tested. you should
> also use the -k flag with ab to use keepalive connections so you can
> eliminate the overhead of creating and tearing down connections on
> every request.
>
> will be interesting to have a look at this when it's released.
For now, all I can send is a precompiled threads module to anyone who wants to try it.
I just need to know what version of node you have, and what OS you want me to compile it for.
It's not ready for Windows (yet), only for UNIX.
--
Jorge.
> http://nodejs.org/docs/v0.6.3/api/cluster.html I haven't used the cluster API. If your benchmark was OS I would fork it and fix it for you.

It's totally OS. Free as in free beer, and free as in freedom of speech:
this does 21 reqs/sec on my setup with a single core. scales pretty
much linearly as i add cores - 82 req/sec using all 4 cores on the
machine.
ab -k -c 10 -n 1000 http://127.0.0.1:2222/
Concurrency Level: 10
Time taken for tests: 12.085 seconds
Complete requests: 1000
Failed requests: 897
(Connect: 0, Receive: 0, Length: 897, Exceptions: 0)
Write errors: 0
Keep-Alive requests: 0
Total transferred: 70309 bytes
HTML transferred: 6309 bytes
Requests per second: 82.75 [#/sec] (mean)
Time per request: 120.845 [ms] (mean)
Time per request: 12.085 [ms] (mean, across all concurrent requests)
Transfer rate: 5.68 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 0 0.0 0 1
Processing: 0 119 194.8 1 892
Waiting: 0 119 194.8 1 892
Total: 0 119 194.8 1 892
Percentage of the requests served within a certain time (ms)
50% 1
66% 20
75% 177
80% 454
90% 463
95% 471
98% 474
99% 475
100% 892 (longest request)
On Dec 1, 2:31 pm, Jorge <jo...@jorgechamorro.com> wrote:
> [snip: full quote of the message above]
> I want to know how it compares to threads. Please, please, fix it for me.
>
> Thanks in advance,
> --
> Jorge.
On Dec 1, 2:53 pm, billywhizz <apjohn...@gmail.com> wrote:
> this should do the trick: https://gist.github.com/1417302
>
> this does 21 reqs/sec on my setup with a single core. scales pretty
> much linearly as i add cores - 82 req/sec using all 4 cores on the
> machine.
>
> ab -k -c 10 -n 1000 http://127.0.0.1:2222/
> > > http://nodejs.org/docs/v0.6.3/api/cluster.html I haven't used the cluster API. If your benchmark was OS I would fork it and fix it for you.
+3: node isn't cancer, threads aren't evil, and the music is good!
> i've changed the gist now so it doesn't send any output and just calls
> the fib function. 2.17 req/sec with one core, 8.56 with four cores...
Hi billywhizz,
Thank you very much!
I've run the benchmarks, and my threads beat their cluster two to one...
http://www.youtube.com/watch?v=OSF-tUH3gDo
Threads_a_gogo is 2x faster... :-)
P.S.
This is the cluster code I've used: perhaps there's something wrong with it ?
var cluster = require('cluster');

function fib (n) {
  if (n < 2) {
    return 1;
  }
  else {
    return fib(n-2) + fib(n-1);
  }
}

var i= 0;
var n= 35;

function srv (req, res) {
  res.writeHead(200, {'Content-Type': 'text/plain'});
  if (i++ % 10) {
    res.end('quick\n');
    process.stdout.write('·');
  }
  else {
    res.end(fib(n)+ '\n');
    process.stdout.write('•');
  }
}

if (cluster.isMaster) {
  var numCPUs = process.argv[3] || 1;
  for (var i = 0; i < numCPUs; i++) {
    cluster.fork();
  }
  cluster.on('death', function(worker) {
    console.log('worker ' + worker.pid + ' died');
  });
} else {
  var port= +process.argv[2] || 1234;
  require('http').createServer(srv).listen(port);
  console.log('Fibonacci server (CLUSTERED) listening: ' + port);
}
Cheers,
--
Jorge.
> I've run the benchmarks, and my threads beat their cluster two to one...
> http://www.youtube.com/watch?v=OSF-tUH3gDo
frankly, i don't see the point in continuing to post these benchmarks
up here unless you can release the code so we can test it for
ourselves. i don't believe that threads should beat the clustering
solution 2 to 1 in a case like this as the load balancing of requests
should all be taken care of by the OS kernel in the clustered
application. in fact i would wager that the clustered solution would
be faster as it shouldn't have any context switching...
--
Job Board: http://jobs.nodejs.org/
Posting guidelines: https://github.com/joyent/node/wiki/Mailing-List-Posting-Guidelines
You received this message because you are subscribed to the Google
Groups "nodejs" group.
To post to this group, send email to nod...@googlegroups.com
To unsubscribe from this group, send email to
nodejs+un...@googlegroups.com
For more options, visit this group at
http://groups.google.com/group/nodejs?hl=en?hl=en
One more thing:
The cluster has needed 12.9+12.8+9 = 34.7 MB, but the threads_a_gogo one only 16.9 MB at its peak.
--
Jorge.
> you are getting a lot of failed requests. i would change the test to
> just do the fibonacci and return the same http response every time
> (see my gist) and also use keepalive connections so you are spending
> as little time doing http stuff as possible. you should also stop
> writing stuff to console too.
>
> frankly, i don't see the point in continuing to post these benchmarks
> up here unless you can release the code so we can test it for
> ourselves. i don't believe that threads should beat the clustering
> solution 2 to 1 in a case like this as the load balancing of requests
> should all be taken care of by the OS kernel in the clustered
> application. in fact i would wager that the clustered solution would
> be faster as it shouldn't have any context switching...
With the cluster, as soon as you get two fib()s running at once, the whole cluster is dead: blocked, finito, kaputt.
With threads, I can keep serving requests ad infinitum, because no matter how many fib()s are running at once, node's event loop is never blocked.
That's why background threads are a much better solution for this problem. And they're faster, too. And they're less expensive (they need less memory).
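The blocking effect is easy to demonstrate without any module at all: a synchronous fib() holds node's single thread, so even a tiny timer can't fire until it returns.

```javascript
// Why a CPU-bound task blocks node: this timer is due in 10 ms, but it
// cannot fire until the synchronous fib() call below has returned.
function fib (n) { return n < 2 ? 1 : fib(n - 2) + fib(n - 1); }

const t0 = Date.now();
setTimeout(() => {
  console.log('timer fired after ' + (Date.now() - t0) + ' ms'); // >> 10 ms
}, 10);

fib(32); // keeps the event loop busy; nothing else runs meanwhile
```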
--
Jorge.
Agreed (almost): there should not be any primitives to synchronize
threads and communication should be with immutable data. I would make
one single exception for buffers so that people can communicate
efficiently with threads. Passing a buffer to a thread should be
subject to the same rules as passing a buffer to a core I/O function.
When you do this, the I/O function may write to the buffer while your
JS code is executing. So there is (very little and well hidden) shared
mutable state in node today!
The very important point is that globals and mutable objects aren't
shared, and that everything except buffers is safe.
Bruno
> Jorge, why are you holding out on the isolates code? That's the stuff we all want to look at ;)
I'm not. The beta testers (hi, beta testers !) :-P and I have got all the source code, but it's not well finished yet.
And we're still trying to discover what would be the best API for this... for example, should it be sync or async looking ? (I'm kidding :-P)
Do you want to try it for yourself ?
--
Jorge.
And a cup of tea with butter cookies.
--
Jorge.
Beta tester #2 :-) had a bit of time to play with the latest version.
Cool stuff!
Bruno
MaybeObject has nothing to do with isolates or proxies or lazyness.
MaybeObject is a value that can be either normal Object or a Failure
(allocation failure usually or exception).
In a sense it's similar to Maybe a in Haskell:
data Maybe a = Nothing | Just a
--
Vyacheslav Egorov
No. Every context (and thus every isolate) has its own built-in
objects. They are called isolates because nothing in their JS worlds
is shared.
--
Vyacheslav Egorov
Can't wait for your release!!
On Nov 9, 11:19 am, Jorge <jo...@jorgechamorro.com> wrote:
> On 09/11/2011, at 12:06, Christophe Eymard wrote:
>
> > On Wed, Nov 9, 2011 at 11:58, Jorge <jo...@jorgechamorro.com> wrote:
> > On 09/11/2011, at 07:24, Mark Hahn wrote:
>
> > > I couldn't see anything except a frozen screen with some green columns and a small box with two cpu history graphs. Maybe it's a windows thing.
>
> http://www.youtube.com/watch?v=lnCjk2_3WLQ
>
> > Still don't get it...
>
> > Did you bring threads to the javascript world ?
>
> Yes.
>
> > Are they usable ?
>
> Yes. The API is just like that of an I/O call:
>
> nuThread= Threads.give_me_a_thread( function, data, callback );
>
> You simply provide a function to run, the data it's going to receive as a parameter, and a callback, and give_me_a_thread() runs it in a separate thread (a true operating-system pthread running a V8 isolate) in parallel with the main thread. When the function ends, it returns a value, and that's what you get in the callback. If you've got, say, a CPU with six cores and you launch 6 threads, each one is going to run flat-out on its own core... That's truly getting all the juice out of your CPU cores !
>
> Awesome, isn't it ? :-)
> --
> Jorge.
By the way, the similarity of this code (above) with C code using pid_t fork( void ) (man 2 fork) is what gives node's .fork() method its name, isn't it ?
--
Jorge.
Today is D-day, finally: JavaScript threads for Node.js (using V8 isolates):
https://github.com/xk/node-threads-a-gogo
npm install threads_a_gogo
Special thanks to: Bruno Jouhier and Liam Breck.
Enjoy!
--
Jorge.
IMHO it should be possible to share deeply frozen structures without
having to serialize/deserialize, storing the pointer to them in a
container that guarantees atomicity. The Clojure layout is something
to seriously consider, and IMHO you do not have to adopt full
functional style to adapt its sharing of immutables by default through
atomic references.
PS: the "More examples" links in Readme.md point to
https://github.com/xk/private_threads_a_gogo/
Whoah, that looks really cool! A few questions:
- Is it possible to use require() inside those isolated things, for
example by using fs.readFile from the root isolate and blocking the
child until the callback fires?
- Is there a simple API for making events in the root isolate that get
handled by the first free child isolate only?
- Can I use Buffers in the child, and can I pass them between isolates?
- How efficient would it be to use a proxy object inside of the isolate
to access things outside?
- Is there an API for catching uncaughtExceptions in the isolates?
- When an isolate in a pool fails really hard, will a new one be
created automatically?
> Really like the work! Shared nothing (well, message passing) is a great way to handle concurrency inside a process for node.
Thanks :-)
> I would suggest a means to prioritize threads if possible (consumers and producers can have different priorities).
Yes, that's possible, istm, both in Linux and in OSX.
> Is there an easy way to just disable the 'puts' variable or just override it for every eval?
All threads are initially .created() with a puts() global. You just have to disable it once (per thread or thread pool) with a thread.eval('delete this.puts'), and it will be gone forever.
--
Jorge.
> No more music videos with green screens?
Oh, yes, there's a video of the event loop with music, but it isn't green:
<http://youtube.com/v/D0uA_NOb0PE?autoplay=1>
> IMHO it should be possible to share deeply frozen structures without
> having to serialize/deserialize and store the pointer to them into a
> container that guarantees atomicity.
Yeah, perhaps with proxies, and (hopefully) not necessarily frozen.
> The Clojure layout is something
> to seriously consider, and IMHO you do not have to take over full
> functional style to adapt its sharing immutables by default through
> atomic references.
>
> PS: the "More examples" links in Readme.md point to
> https://github.com/xk/private_threads_a_gogo/
Thanks, fixed.
--
Jorge.
Thank you :-)
> - Is it possible to use require() inside those isolated things, for
> example by using fs.readFile from the root isolate and blocking the
> child until the callback fires?
Well, I think (I may be wrong) that the require() functionality can be emulated with a thread.load().
> - Is there a simple API for making events in the root isolate that get
> handled by the first free child isolate only?
Yes, when you create a pool, if you use pool.any.emit() it will emit() in the first thread of the pool that becomes idle.
> - Can I use Buffers in the child, and can I pass them between isolates?
Not yet...
> - How efficient would it be to use a proxy object inside of the isolate
> to access things outside?
That's in the roadmap.
> - Is there an API for catching uncaughtExceptions in the isolates?
Yes: when you do an .eval(program, cb), you'll get any errors in the cb(err, result).
> - When an isolate in a pool fails really hard, will a new one be
> created automatically?
Nope.
--
Jorge.
> It should be possible to transfer objects between isolates, I'm pretty sure this is an exposed API in Chromium. In this case you aren't sharing access, you're transfering ownership, which is plenty useful in a lot of cases.
That would be what I call "pass and forget". Do you think that's possible, today, in V8 ? I would love to have that, and I'd love to know how !
If not, I think I'd have to build some kind of extraneous, custom proxy objects that have that magic, and I'm not sure how well that would work, or even whether it's feasible.
--
Jorge.
> Can you give an example as to how you would emulate require() via thread.load() ? if a module has inner requires (which it probably will) it would throw up ReferenceError(s).
Yes, you're right, it would throw. I've not given much thought to that... yet. Patches and pull requests are very much (needed and) welcome :-))
> Btw really cool module. :)
Thank you :-)
--
Jorge.
It would be nice if there were an API that stored the PID of the
thread the buffer currently "belongs" to; this would make it more
fool-proof.
>> IMHO it should be possible to share deeply frozen structures without
>> having to serialize/deserialize and store the pointer to them into a
>> container that guarantees atomicity.
>
> Yeah, perhaps with proxies, and (hopefully) not necessarily frozen.
I understand the approach: it's up to the next layer to decide which
access mechanism to apply. My opinion on this is that the bad
reputation threads have comes from going the semaphore way, which was
long (and still is?) the proposed standard approach to multiple
threads working on the same data structure. However, deeply frozen
data structures with atomic references are much easier to get right,
or harder to break. They sound less efficient, rebuilding a whole tree
every time you change a little thing, but in practice it isn't that
dramatic. It's not JavaScript's strong point, but immutable data
structures also open all kinds of new possibilities for optimization.
Also, the current Moore's Law gives us more cores and more memory, but
hardly more linear CPU speed, so it also fits hardware development to
go for algorithms that don't mind using memory. To cut a long story
short, please make it easy for the user to go the immutables/atomic
route if s/he decides so.