If you want to add something to core it'll need to be a GitHub issue.
For all the same reasons that non-blocking APIs fail on platforms that traditionally use threading, a threading API in Node will fail just the same. It's a matter of community: the two approaches are not compatible, and that means the code in the community divides along these lines.
It's not good for the community to divide into incompatible camps. Node code runs in node, period. If you want threads you'll need to fork, and you'll need to call it something other than node, because it's not Node anymore, it's something else. Maybe it's better, but it's not Node.
You can talk all day about the "possibilities" but you're just talking about technology and waxing about what is possible. It doesn't really matter; it's not why things gain traction or grow a community, which is necessary for success. If you want to take on this science project, go for it, but take it off the list, this isn't productive.
On Sep 17, 2012, at 2:39 AM, Jorge <jo...@jorgechamorro.com> wrote:
> On 17/09/2012, at 11:12, Ben Noordhuis wrote:
>> On Mon, Sep 17, 2012 at 10:49 AM, Jorge <jo...@jorgechamorro.com> wrote:
>>> On 17/09/2012, at 06:35, Ben Noordhuis wrote:
>>>> As for transferring objects, you don't need threads for that, just
>>> If the processes are sharing memory, then they *too* "need to serialize access to data structures"...
>> Yes, but the big difference - and I hate to spell it out because it
> But that's a different problem. If you want to make a program that's abusing globals thread-safe, then yes, you're ~ fucked.
> On the other hand it didn't take too long for the V8 guys to fix exactly that in the isolates branch...
>>>> We'll probably implement that someday but don't expect too much from
>>> That's the problem for transferable objects: there's no way to grab an object reference from isolate A to use it on isolate B.
>>>> By the way, if you want to hasten that day, post (non-contrived)
>>> If the processes can communicate via shared memory -which is always a given for threads- then IPC is fast.
>>> But if they can not then you've got to copy the data and speed becomes a function of ( data.length ) which might be *irremediably* slow.
>>> Big data.length copies also flush other data from the caches, which results in extra slowdowns.
>>> And as the memory bus is a shared resource, under high loads these (many) unnecessary big.data.length copies will (pretty soon) have a global impact on the performance of *all* the rest of the system (à la `cat /dev/zero > /dev/null` memory bus bandwidth exhaustion).
>> No doubt. Now show me the numbers. :-)
> Ok. Please tell me the secret :-P
> Because to me it's obvious that a copy of (sizeof void*) length is faster than a copy of anything much larger than that.
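The copy-vs-reference distinction in the quoted argument can be sketched in plain Node.js. This is a minimal illustration, not a benchmark with numbers: the payload size and names are made up, and it only shows the semantic difference — process-style IPC hands the receiver its own O(data.length) copy, while thread-style sharing hands over a pointer-sized reference to the same bytes (which is why access then has to be serialized).

```javascript
// Illustrative sketch (assumption: plain Node.js; size is arbitrary).
const big = Buffer.alloc(16 * 1024 * 1024, 0xab); // 16 MB payload

// Process-style IPC: the receiver gets its own copy -> O(data.length) work,
// and both copies are resident in memory while they live.
const copied = Buffer.from(big);

// Thread-style sharing: the "transfer" is a pointer-sized assignment -> O(1),
// but both sides now see (and must coordinate access to) the same bytes.
const shared = big;

console.log(copied !== big);     // true: separate allocation
console.log(shared === big);     // true: same allocation
console.log(copied.equals(big)); // true: identical contents after the copy
```

The copy's cost grows with the payload; the reference's cost does not — which is the whole of Jorge's point, and the serialization of access is the whole of Ben's.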