NNTP-Posting-Date: Tue, 12 Jun 2007 03:19:54 -0500
Date: Tue, 12 Jun 2007 01:19:47 -0700
From: "Daniel C. Wang" <danwan...@gmail.com>
User-Agent: Thunderbird 184.108.40.206 (Windows/20070326)
Subject: Re: Concurrent GC: a good thing?
References: <email@example.com> <466E203A.firstname.lastname@example.org> <email@example.com> <firstname.lastname@example.org> <email@example.com>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
If you have a single list producer that sends the components of each
pair to two consumers, you're just marshalling an int. So if you
marshal on demand, in a lazy fashion, I see no reason why sharing
"x" would be more efficient.
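A minimal sketch of that producer/consumer arrangement (in Python rather than the thread's Haskell, purely so it's runnable; the names producer/summer are illustrative):

```python
import queue
import threading

def producer(pairs, qa, qb):
    # Marshal on demand: each component of a pair is just an int
    # placed on a channel; no whole-structure serialization needed.
    for a, b in pairs:
        qa.put(a)
        qb.put(b)
    qa.put(None)  # end-of-stream sentinel for each consumer
    qb.put(None)

def summer(q, results, key):
    # A consumer: drains its channel, accumulating a running sum.
    total = 0
    while True:
        v = q.get()
        if v is None:
            break
        total += v
    results[key] = total

pairs = [(i, i + 1) for i in range(100)]
qa, qb = queue.Queue(), queue.Queue()
results = {}
threads = [
    threading.Thread(target=producer, args=(pairs, qa, qb)),
    threading.Thread(target=summer, args=(qa, results, 'a')),
    threading.Thread(target=summer, args=(qb, results, 'b')),
]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(results['a'], results['b'])  # 4950 5050
```

Each consumer only ever sees a stream of ints; the pair structure never crosses the channel at all.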
Remember, in a shared memory system the data goes over the memory bus to
the cache of each processor. The shared memory model is really just an
interface to message passing hardware. In the end it's all about bits
going over wires. There's nothing magical about shared memory.
I can always decompose my app into a "memory server" that reads
addresses and returns values, and a bunch of concurrent threads that
request locations and receive values back. This gives me the same
algorithmic complexity as any shared memory system.
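The "memory server" decomposition can be sketched like this (again Python for runnability; memory_server and reader are illustrative names, and queue.Queue stands in for the message channels):

```python
import queue
import threading

def memory_server(mem, requests):
    # Owns the data; serves read requests until it sees the stop sentinel.
    while True:
        req = requests.get()
        if req is None:
            break
        addr, reply = req
        reply.put(mem[addr])  # "read address, return value"

def reader(requests, addrs, out):
    # A client thread: sends a location, receives the value back,
    # exactly as a cache miss would go out over the memory bus.
    reply = queue.Queue()
    total = 0
    for a in addrs:
        requests.put((a, reply))
        total += reply.get()
    out.append(total)

mem = {i: i * i for i in range(10)}  # the "shared" store
requests = queue.Queue()
server = threading.Thread(target=memory_server, args=(mem, requests))
server.start()

out = []
clients = [
    threading.Thread(target=reader, args=(requests, range(0, 10, 2), out)),
    threading.Thread(target=reader, args=(requests, range(1, 10, 2), out)),
]
for t in clients:
    t.start()
for t in clients:
    t.join()
requests.put(None)  # stop the server
server.join()
print(sorted(out))  # [120, 165]
```

Every access is one request/reply round trip through the server, which is also why this decomposition becomes the bottleneck the next paragraph describes.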
Of course you could decompose your app in a smarter way and get
potentially better algorithmic complexity and scalability. However, if
you use shared memory you're basically limited to the "memory server"
model I described. Of course the hardware does a good job of making it
fast. But for really parallel apps you hit a scaling limit and end up
rolling your own message passing in the end to get beyond it.
MPI exists for a reason. BTW, this message from the caml-list is
illustrative of my point.
Paul Rubin wrote:
> Vityok <bob...@ua.fm> writes:
>> It seems to me, that marshalling can be worked around by implementing
>> specific serializer/deserializer functions for data structures being
> I'm more familiar with Haskell than ML so forgive me for using Haskell
> notation. Let's say I have a list of pairs (a,b):
> x :: [(Integer, Integer)]
> I want to add up all the a's, and I also want to add up all the b's:
> sum(map fst x)
> sum(map snd x)
> Because x is a very long list, I want to use separate threads or
> processes for these summations, to do them in parallel.
> Can any method involving serialization do anywhere near as well as
> just letting both threads access x directly?
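For comparison, Paul's shared-"x" version looks like this (Python stands in for his Haskell; sum_fst/sum_snd are illustrative names, and note CPython's GIL means this particular sketch shows the access pattern, not a real speedup):

```python
import threading

# x stands in for Paul's x :: [(Integer, Integer)].
x = list(zip(range(1000), range(1000, 2000)))

results = {}

def sum_fst():
    # Both threads read x in place; nothing is marshalled anywhere.
    results['fst'] = sum(a for a, _ in x)

def sum_snd():
    results['snd'] = sum(b for _, b in x)

threads = [threading.Thread(target=sum_fst), threading.Thread(target=sum_snd)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(results['fst'], results['snd'])  # 499500 1499500
```

The question in the thread is whether a lazy, component-at-a-time marshalling scheme can match this direct-access version once the hardware's own message passing is accounted for.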