
C++ inventor Bjarne Stroustrup answers the Multicore Proust Questionnaire


gremlin
Sep 27, 2008, 11:55:00 AM

asm23
Sep 27, 2008, 11:39:42 PM

Chris M. Thomasson
Sep 28, 2008, 12:12:25 AM

"gremlin" <gre...@rosetattoo.com> wrote in message
news:v5KdncJt27DKykPV...@comcast.com...
> http://www.cilk.com/multicore-blog/bid/6703/C-Inventor-Bjarne-Stroustrup-answers-the-Multicore-Proust-Questionnaire

I get a not found error:

The requested URL
/multicore-blog/bid/6703/C-Inventor-Bjarne-Stroustrup-answers-the-Multicore-Proust-Questionnaire
was not found on this server.

Where is the correct location?

Ian Collins
Sep 28, 2008, 12:11:39 AM

The link is the correct location; I just tried it.

--
Ian Collins.

Chris M. Thomasson
Sep 28, 2008, 12:22:06 AM

"Ian Collins" <ian-...@hotmail.com> wrote in message
news:6k8eftF...@mid.individual.net...

Hey, it works now! Weird; perhaps a temporary server glitch. Who knows.

;^/

Ian Collins
Sep 28, 2008, 12:24:29 AM

Well, it is running on Windows. :)

--
Ian Collins.

Chris M. Thomasson
Sep 28, 2008, 12:31:03 AM

"Chris M. Thomasson" <n...@spam.invalid> wrote in message
news:GxDDk.13217$VS3....@newsfe12.iad...


Q: The most important problem to solve for multicore software:

A: How to simplify the expression of potential parallelism.


Hmm... What about scalability? That's a very important problem to solve.
Perhaps the most important. STM simplifies the expression of parallelism, but
it's not really scalable at all.

I guess I would have answered:


CT-A: How to simplify the expression of potential parallelism __without
sacrificing scalability__.


Q: My worst fear about how multicore technology might evolve:

A: Threads on steroids.


Well, threads on steroids and proper distributed algorithms can address the
scalability issue. Nothing wrong with threading on steroids. Don't be
afraid!!!!! I am a threading freak, so I am oh so VERY BIASED!!! ;^|

Oh well, that's my 2 cents.

blargg
Sep 28, 2008, 1:12:59 AM

In article <2GDDk.13223$VS3...@newsfe12.iad>, "Chris M. Thomasson"
<n...@spam.invalid> wrote:

> "Chris M. Thomasson" <n...@spam.invalid> wrote in message
> news:GxDDk.13217$VS3....@newsfe12.iad...
> > "Ian Collins" <ian-...@hotmail.com> wrote in message
> > news:6k8eftF...@mid.individual.net...
> >> Chris M. Thomasson wrote:
> >>> "gremlin" <gre...@rosetattoo.com> wrote in message
> >>> news:v5KdncJt27DKykPV...@comcast.com...
> >>>> http://www.cilk.com/multicore-blog/bid/6703/C-Inventor-Bjarne-Stroustrup-answers-the-Multicore-Proust-Questionnaire

[...]


> Q: The most important problem to solve for multicore software:
>
> A: How to simplify the expression of potential parallelism.
>
>
> Hmm... What about scalability? That's a very important problem to solve.
> Perhaps the most important. STM simplifies the expression of parallelism, but
> it's not really scalable at all.
>
> I guess I would have answered:
>
> CT-A: How to simplify the expression of potential parallelism __without
> sacrificing scalability__.
>
>
> Q: My worst fear about how multicore technology might evolve:
>
> A: Threads on steroids.
>
>
> Well, threads on steroids and proper distributed algorithms can address the
> scalability issue. Nothing wrong with threading on steroids. Don't be
> afraid!!!!! I am a threading freak, so I am oh so VERY BIASED!!! ;^|

Sounds like Stroustrup wants to minimize extra notation in source code
relating to parallel execution, so that it doesn't take on a life of its own.
Exceptions might be an example of minimal impact, where lots of code
doesn't require any explicit mention of exception handling.

James Kanze
Sep 28, 2008, 3:34:39 AM

On Sep 28, 6:31 am, "Chris M. Thomasson" <n...@spam.invalid> wrote:
> "Chris M. Thomasson" <n...@spam.invalid> wrote in
> message news:GxDDk.13217$VS3....@newsfe12.iad...
> > "Ian Collins" <ian-n...@hotmail.com> wrote in message

> >news:6k8eftF...@mid.individual.net...
> >> Chris M. Thomasson wrote:
> >>> "gremlin" <grem...@rosetattoo.com> wrote in message
> >>>news:v5KdncJt27DKykPV...@comcast.com...
> >>>>http://www.cilk.com/multicore-blog/bid/6703/C-Inventor-Bjarne-Stroust...

[...]


> Q: The most important problem to solve for multicore software:

> A: How to simplify the expression of potential parallelism.

> Hmm... What about scalability? That's a very important
> problem to solve. Perhaps the most important. STM simplifies
> the expression of parallelism, but it's not really scalable at
> all.

And what do you think simplifying the expression of potential
parallelism achieves, if not scalability?

[...]


> Q: My worst fear about how multicore technology might evolve:

> A: Threads on steroids.

> Well, threads on steroids and proper distributed algorithms
> can address the scalability issue. Nothing wrong with
> threading on steroids. Don't be afraid!!!!! I am a threading
> freak, so I am oh so VERY BIASED!!! ;^|

I'm not too sure what Stroustrup was getting at here, but having
to write explicitly multithreaded code (with e.g. manual locking
and synchronization) is not a good way to achieve scalability.
Futures are probably significantly easier to use, and in modern
Fortran, if I'm not mistaken, there are special constructs to
tell the compiler that certain operations can be parallelized.
And back some years ago, there was a fair amount of research
concerning automatic parallelization by the compiler; I don't
know where it is now.
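
Futures of this kind were eventually standardized as std::async/std::future (C++11). A minimal sketch, assuming a conforming C++11 compiler, of how a future hides explicit thread creation, joining, and result passing:

```cpp
#include <cassert>
#include <future>
#include <numeric>
#include <vector>

// Sum a vector in two halves; std::async hides thread creation,
// joining, and passing the result back to the caller.
long parallel_sum(const std::vector<long>& v) {
    auto mid = v.begin() + static_cast<std::ptrdiff_t>(v.size() / 2);
    std::future<long> lower = std::async(std::launch::async, [&v, mid] {
        return std::accumulate(v.begin(), mid, 0L);
    });
    long upper = std::accumulate(mid, v.end(), 0L);
    return lower.get() + upper;  // get() blocks until the async half finishes
}
```

Compared with spawning a std::thread by hand, there is no shared result variable and no explicit join; the future carries the value (and any exception) back to the caller.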

Of course, a lot depends on the application. In my server,
there's really nothing that could be parallelized in a given
transaction, but we can run many transactions in parallel. For
that particular model of parallelization, classical explicit
threading works fine.

--
James Kanze (GABI Software) email:james...@gmail.com
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34

Ian Collins
Sep 28, 2008, 3:46:23 AM

James Kanze wrote:
> On Sep 28, 6:31 am, "Chris M. Thomasson" <n...@spam.invalid> wrote:
>
>> A: Threads on steroids.
>
>> Well, threads on steroids and proper distributed algorithms
>> can address the scalability issue. Nothing wrong with
>> threading on steroids. Don't be afraid!!!!! I am a threading
>> freak, so I am oh so VERY BIASED!!! ;^|
>
> I'm not too sure what Stroustrup was getting at here, but having
> to write explicitly multithreaded code (with e.g. manual locking
> and synchronization) is not a good way to achieve scalability.
> Futures are probably significantly easier to use, and in modern
> Fortran, if I'm not mistaken, there are special constructs to
> tell the compiler that certain operations can be parallelized.
> And back some years ago, there was a fair amount of research
> concerning automatic parallelization by the compiler; I don't
> know where it is now.
>
We, along with Fortran and C programmers, can use OpenMP, which, from my
limited experience, works very well.
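
As a sketch of why OpenMP fits here: potential parallelism is expressed with a single pragma, and a compiler invoked without OpenMP support (e.g., GCC without -fopenmp) simply ignores it, so the loop runs serially with the same result:

```cpp
#include <cassert>
#include <cstddef>

// Scale an array in place. With OpenMP enabled the iterations are split
// across threads; without it the pragma is ignored and the loop runs
// serially, producing the same result either way.
void scale(double* a, std::size_t n, double k) {
    #pragma omp parallel for
    for (long i = 0; i < static_cast<long>(n); ++i)
        a[i] *= k;
}
```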

--
Ian Collins.

Szabolcs Ferenczi
Sep 28, 2008, 10:46:03 AM

On Sep 27, 5:55 pm, "gremlin" <grem...@rosetattoo.com> wrote:
> http://www.cilk.com/multicore-blog/bid/6703/C-Inventor-Bjarne-Stroust...

Q: The most important problem to solve for multicore software:
A: How to simplify the expression of potential parallelism.

That is interesting. Especially since the C++0x committee aims at the
very low level of library-based parallelism. They will not be able
to "simplify the expression of potential parallelism" that way. Not in
C++0x, at least.

Best Regards,
Szabolcs

Daniel T.
Sep 28, 2008, 1:00:10 PM

James Kanze <james...@gmail.com> wrote:

> > Q: My worst fear about how multicore technology might evolve:
>
> > A: Threads on steroids.
>
> > Well, threads on steroids and proper distributed algorithms
> > can address the scalability issue. Nothing wrong with
> > threading on steroids. Don't be afraid!!!!! I am a threading
> > freak, so I am oh so VERY BIASED!!! ;^|
>
> I'm not too sure what Stroustrup was getting at here, but having
> to write explicitly multithreaded code (with e.g. manual locking
> and synchronization) is not a good way to achieve scalability.

Agreed. I have played around with Occam's expression of Communicating
Sequential Processes (CSP). I would like to see CSP explored further in
C++.

What I have found so far is
http://www.twistedsquare.com/cppcspv1/docs/index.html
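
The core CSP idea can be sketched in standard C++ (this is not the C++CSP library's API, just the underlying concept: processes share no state and communicate only over channels; `Channel` is a hypothetical name):

```cpp
#include <cassert>
#include <condition_variable>
#include <mutex>
#include <queue>
#include <thread>

// A minimal CSP-style channel: the only way data moves between
// "processes" is by sending a message through it.
template <class T>
class Channel {
public:
    void send(T v) {
        { std::lock_guard<std::mutex> lk(m_); q_.push(std::move(v)); }
        cv_.notify_one();
    }
    T receive() {  // blocks until a message is available
        std::unique_lock<std::mutex> lk(m_);
        cv_.wait(lk, [this] { return !q_.empty(); });
        T v = std::move(q_.front());
        q_.pop();
        return v;
    }
private:
    std::mutex m_;
    std::condition_variable cv_;
    std::queue<T> q_;
};
```

A producer thread calls send() while a consumer blocks in receive(); neither ever touches the other's data directly.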

Chris M. Thomasson
Sep 28, 2008, 4:51:06 PM

"Szabolcs Ferenczi" <szabolcs...@gmail.com> wrote in message
news:184232bc-2c49-42ce...@x35g2000hsb.googlegroups.com...

> Q: The most important problem to solve for multicore software:
> A: How to simplify the expression of potential parallelism.

> That is interesting. Especially since the C++0x committee aims at the
> very low level of library-based parallelism.

You're mistaken. The language and the library are highly integrated and depend
on one another. It's definitely NOT a purely library-based solution. Similar
to the way a POSIX-compliant compiler interacts with a PThread library.

> ---They will not be able
> to "simplify the expression of potential parallelism" that way. Not in
> C++0x, at least.

C++ is not about simplification, as it's a low-level systems language.
However, due to its low-level nature you can certainly use C++0x to create a
brand new language that attempts to "simplify the expression of potential
parallelism".

Chris M. Thomasson
Sep 28, 2008, 5:02:52 PM

"James Kanze" <james...@gmail.com> wrote in message
news:4a3be8d6-4291-4068...@m73g2000hsh.googlegroups.com...

On Sep 28, 6:31 am, "Chris M. Thomasson" <n...@spam.invalid> wrote:
> > "Chris M. Thomasson" <n...@spam.invalid> wrote in
> > message news:GxDDk.13217$VS3....@newsfe12.iad...
> > > "Ian Collins" <ian-n...@hotmail.com> wrote in message
> > >news:6k8eftF...@mid.individual.net...
> > >> Chris M. Thomasson wrote:
> > >>> "gremlin" <grem...@rosetattoo.com> wrote in message
> > >>>news:v5KdncJt27DKykPV...@comcast.com...
> > >>>>http://www.cilk.com/multicore-blog/bid/6703/C-Inventor-Bjarne-Stroust...

[...]
> > Q: The most important problem to solve for multicore software:

> > A: How to simplify the expression of potential parallelism.

> > Hmm... What about scalability? That's a very important
> > problem to solve. Perhaps the most important. STM simplifies
> > the expression of parallelism, but it's not really scalable at
> > all.

> And what do you think simplifying the expression of potential
> parallelism achieves, if not scalability?

Take one attempt at simplifying the expression of potential parallelism:
STM. Unfortunately, it's not really able to scale. The simplification can
introduce overhead which interferes with scalability. IMVHO, message-passing
has potential. At least I know how to implement it in a way that basically
scales to any number of processors.


[...]
> > Q: My worst fear about how multicore technology might evolve:

> > A: Threads on steroids.

> > Well, threads on steroids and proper distributed algorithms
> > can address the scalability issue. Nothing wrong with
> > threading on steroids. Don't be afraid!!!!! I am a threading
> > freak, so I am oh so VERY BIASED!!! ;^|

> I'm not too sure what Stroustrup was getting at here, but having
> to write explicitly multithreaded code (with e.g. manual locking
> and synchronization) is not a good way to achieve scalability.

That's relative to the programmer. I have several abstractions packaged into
a library which allows one to create highly scalable programs using
threads. However, if the programmer is not skilled in the art of
multi-threading, well, then it's not going to do any good!

;^(


> Futures are probably significantly easier to use,

They have some "caveats". I have implemented futures, and know that a truly
scalable implementation needs to use distributed queuing which does not really
follow true global FIFO. There can be ordering anomalies that the programmer
does not know about, and they will bite them in the a$% if some of their
algorithms depend on certain orders of actions.


> and in modern
> Fortran, if I'm not mistaken, there are special constructs to
> tell the compiler that certain operations can be parallelized.
> And back some years ago, there was a fair amount of research
> concerning automatic parallelization by the compiler; I don't
> know where it is now.

No silver bullets in any way, shape, or form. Automatic parallelization
sometimes works for a narrow type of algorithm. Usually, breaking up arrays
across multiple threads. But the programmer is not out of the woods,
because they will still need to manually implement enhancements that are KEY
to scalability (e.g., cache-blocking). No silver bullets indeed.
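
For instance, cache-blocking restructures a loop nest to work on tiles that fit in cache, exactly the kind of manual enhancement an auto-parallelizer will not insert for you. A sketch (transpose_blocked is a hypothetical helper):

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <vector>

// Blocked (tiled) transpose of an n x n row-major matrix. Working in
// B x B tiles keeps both the source rows and destination columns hot in
// cache; a plain i/j loop strides through memory on one side.
void transpose_blocked(const std::vector<double>& in,
                       std::vector<double>& out,
                       std::size_t n, std::size_t B = 32) {
    for (std::size_t ii = 0; ii < n; ii += B)
        for (std::size_t jj = 0; jj < n; jj += B)
            for (std::size_t i = ii; i < std::min(ii + B, n); ++i)
                for (std::size_t j = jj; j < std::min(jj + B, n); ++j)
                    out[j * n + i] = in[i * n + j];
}
```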


> Of course, a lot depends on the application. In my server,
> there's really nothing that could be parallelized in a given
> transaction, but we can run many transactions in parallel. For
> that particular model of parallelization, classical explicit
> threading works fine.

Absolutely.

Chris M. Thomasson
Sep 28, 2008, 5:05:28 PM

"Daniel T." <dani...@earthlink.net> wrote in message
news:daniel_t-C81B2C...@earthlink.vsrv-sjc.supernews.net...

> James Kanze <james...@gmail.com> wrote:
>
>> > Q: My worst fear about how multicore technology might evolve:
>>
>> > A: Threads on steroids.
>>
>> > Well, threads on steroids and proper distributed algorithms
>> > can address the scalability issue. Nothing wrong with
>> > threading on steroids. Don't be afraid!!!!! I am a threading
>> > freak, so I am oh so VERY BIASED!!! ;^|
>>
>> I'm not too sure what Stroustrup was getting at here, but having
>> to write explicitly multithreaded code (with e.g. manual locking
>> and synchronization) is not a good way to achieve scalability.
>
> Agreed. I have played around with Occam's expression of Communicating
> Sequential Processes (CSP). I would like to see CSP explored further in
> C++.

IMO, CSP is WAY too high-level to be integrated into the language. However,
you can definitely use C++0x to fully implement Occam and/or CSP. If you
want to use CSP out of the box, well, C++ is NOT for you; period. Keep in
mind, C++ is a low-level systems language.

Szabolcs Ferenczi
Sep 28, 2008, 6:19:52 PM

On Sep 28, 7:00 pm, "Daniel T." <danie...@earthlink.net> wrote:

> [...]


> Agreed. I have played around with Occam's expression of Communicating
> Sequential Processes (CSP). I would like to see CSP explored further in
> C++.
>
> What I have found so far is http://www.twistedsquare.com/cppcspv1/docs/index.html

Well, CSP is a very nice language concept and OCCAM is an interesting
instance of it.

However, CSP does not match very well with objects and shared
resources. If you want to map processes in the CSP sense to objects,
you will end up with what you would call a single-threaded object. The
object has its private state space and any request comes via events.
The object can await several potential events, but whenever one or more
events are applicable, it selects one non-deterministically and performs
the corresponding action on its own private data space.

It is worth mentioning that at the time CSP was published, there
appeared another elegant language concept: Distributed Processes (DP).
This is something that can be mapped very naturally to objects and
gives you a high level of potential parallelism. In DP, an object starts
its own thread which operates on its own data space and other objects
can call its methods asynchronously. The initial thread and the called
methods then are executed in an interleaved manner. If the initial
thread finishes, the object continues to serve the potentially
simultaneous method calls, i.e. it becomes a (shared) passive object.
http://brinch-hansen.net/papers/1978a.pdf
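
A literal reading of that model can be sketched in post-C++0x terms (a hypothetical Counter class, not Brinch Hansen's notation): the object owns a thread, keeps its state private, and other code delivers asynchronous method calls through a request queue:

```cpp
#include <cassert>
#include <condition_variable>
#include <functional>
#include <future>
#include <mutex>
#include <queue>
#include <thread>

// One thread per object: private state, served by queued requests.
class Counter {
public:
    Counter() : worker_([this] { run(); }) {}
    ~Counter() {
        { std::lock_guard<std::mutex> lk(m_); done_ = true; }
        cv_.notify_one();
        worker_.join();  // pending calls are drained before the thread exits
    }
    void add(int n) {  // asynchronous method call: returns immediately
        post([this, n] { count_ += n; });
    }
    int value() {  // query goes through the same queue, so it sees prior adds
        std::promise<int> p;
        post([this, &p] { p.set_value(count_); });
        return p.get_future().get();
    }
private:
    void post(std::function<void()> call) {
        { std::lock_guard<std::mutex> lk(m_); calls_.push(std::move(call)); }
        cv_.notify_one();
    }
    void run() {  // the object's own thread: serves one request at a time
        for (;;) {
            std::function<void()> call;
            {
                std::unique_lock<std::mutex> lk(m_);
                cv_.wait(lk, [this] { return done_ || !calls_.empty(); });
                if (calls_.empty()) return;  // done_ set and queue drained
                call = std::move(calls_.front());
                calls_.pop();
            }
            call();
        }
    }
    std::mutex m_;
    std::condition_variable cv_;
    std::queue<std::function<void()>> calls_;
    bool done_ = false;
    int count_ = 0;        // touched only by the worker thread
    std::thread worker_;   // declared last so state exists before it starts
};
```

add() returns immediately; value() is answered by the object's own thread, so it observes all previously queued adds.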

Well, both language proposals are much higher-level with respect to
parallelism than what is planned for the brave new C++0x.

Best Regards,
Szabolcs

Chris M. Thomasson
Sep 29, 2008, 6:06:13 AM

"Szabolcs Ferenczi" <szabolcs...@gmail.com> wrote in message
news:0154714b-388b-4c47...@t54g2000hsg.googlegroups.com...

On Sep 28, 7:00 pm, "Daniel T." <danie...@earthlink.net> wrote:

> > [...]
> > Agreed. I have played around with Occam's expression of Communicating
> > Sequential Processes (CSP). I would like to see CSP explored further in
> > C++.
> >
> > What I have found so far
> > is http://www.twistedsquare.com/cppcspv1/docs/index.html

> Well, CSP is a very nice language concept and OCCAM is an interesting
> instance of it.

[...]

> It is worth mentioning that at the time CSP was published, there
> appeared another elegant language concept: Distributed Processes (DP).
> This is something that can be mapped very naturally to objects and
> gives you high level of potential parallelism. In DP an object starts
> its own thread which operates on its own data space and other objects
> can call its methods asynchronously.

Each object starts its own thread? lol, NO WAY! I can definitely implement a
high-performance version of DP in which a pool of threads multiplexes
multiple objects. N number of threads for O number of objects. N = 4; O =
10000. Sure. No problem. DP does not have to work like you explicitly
suggest. Sorry, but you've made a huge mistake.


> The initial thread and the called
> methods then are executed in an interleaved manner. If the initial
> thread finishes, the object continues to serve the potentially
> simultaneous method calls, i.e. it becomes a (shared) passive object.
> http://brinch-hansen.net/papers/1978a.pdf

> Well, both language proposals are much higher level with respect to
> parallelism than what is planned into the new brave C++0x.

DP is NOT as expensive as you claim it is. E.g., each object starts its OWN
thread. No way!

;^|

Chris M. Thomasson
Sep 29, 2008, 8:05:38 AM

[`comp.lang.c++' added; this message was multi-posted

Here is link to context which does not show up on `comp.lang.c++':

http://groups.google.com/group/comp.programming.threads/browse_frm/thread/a4a668e20be49644

]


"gremlin" <gre...@rosetattoo.com> wrote in message

news:aNGdnZOrE-PTXX3V...@comcast.com...


>
> "Chris M. Thomasson" <n...@spam.invalid> wrote in message

> news:_XRDk.1159$FV4...@newsfe07.iad...


>> "gremlin" <gre...@rosetattoo.com> wrote in message

>> news:AeGdnQFC5KvsMELV...@comcast.com...
>>> http://www.cilk.com/multicore-blog/bid/6703/C-Inventor-Bjarne-Stroustrup-answers-the-Multicore-Proust-Questionnaire
>>
>> Why did you multi-post this?
>>
>> http://groups.google.com/group/comp.lang.c++/browse_frm/thread/58c6635e208a36bf
>
> thought it might be of interest to several programming groups.

Well, if you forgot a group, fine. However, it's better to simply cross-post
instead of multi-post. I added `comp.lang.c++' and all follow-ups will
include that group. See, responders on this group will not be able to see
correspondence on other groups that you posted to (e.g., comp.lang.c++).


> bad move?

Nah; you most likely forgot to add a C++ group in the message broadcast.


;^)

Szabolcs Ferenczi
Oct 20, 2008, 12:47:25 PM

On Sep 29, 12:06 pm, "Chris M. Thomasson" <n...@spam.invalid> wrote:
> "Szabolcs Ferenczi" <szabolcs.feren...@gmail.com> wrote in message

>
> news:0154714b-388b-4c47...@t54g2000hsg.googlegroups.com...
> On Sep 28, 7:00 pm, "Daniel T." <danie...@earthlink.net> wrote:
>
> > > [...]
> > > Agreed. I have played around with Occam's expression of Communicating
> > > Sequential Processes (CSP). I would like to see CSP explored further in
> > > C++.
>
> > > What I have found so far
> > > is http://www.twistedsquare.com/cppcspv1/docs/index.html
> > Well, CSP is a very nice language concept and OCCAM is an interesting
> > instance of it.
>
> [...]
>
> > It is worth mentioning that at the time CSP was published, there
> > appeared another elegant language concept: Distributed Processes (DP).
> > This is something that can be mapped very naturally to objects and
> > gives you high level of potential parallelism. In DP an object starts
> > its own thread which operates on its own data space and other objects
> > can call its methods asynchronously.
>
> Each object starts its own thread? lol, NO WAY!

Hmm... Yes, *logically* "Each object starts its own thread". That is
the way it is defined by the author of the concept.

> I can definitely implement a
> high-performance version of DP in which a pool of threads multiplexes
> multiple objects.

Obviously, you do not know what you are talking about. You go and read
about the programming concept Distributed Processes (DP) before you
start claiming how you would like to hack it.
http://brinch-hansen.net/papers/1978a.pdf

> N number of threads for O number of objects. N = 4; O =
> 10000. Sure. No problem. DP does not have to work like you explicitly
> suggest. Sorry, but you make huge mistake.

Well, it is not me who suggests it that way but the author of the
programming language concept. Again, you may perhaps try to read about
it first.
http://brinch-hansen.net/papers/1978a.pdf

> > The initial thread and the called
> > methods then are executed in an interleaved manner. If the initial
> > thread finishes, the object continues to serve the potentially
> > simultaneous method calls, i.e. it becomes a (shared) passive object.
> >http://brinch-hansen.net/papers/1978a.pdf
> > Well, both language proposals are much higher level with respect to
> > parallelism than what is planned into the new brave C++0x.
>
> DP is NOT as expensive as you claim it is. E.g. each object starts it's OWN
> thread. No way!

I did not claim it was expensive. You have concluded that, but it is just
because of your ignorance. It is just because you talk about it
without knowing anything about it.

I hope I could help, though.

Best Regards,
Szabolcs

Chris M. Thomasson
Oct 20, 2008, 11:16:36 PM

"Szabolcs Ferenczi" <szabolcs...@gmail.com> wrote in message
news:8a367db1-4fe8-4938...@t18g2000prt.googlegroups.com...

On Sep 29, 12:06 pm, "Chris M. Thomasson" <n...@spam.invalid> wrote:
> > "Szabolcs Ferenczi" <szabolcs.feren...@gmail.com> wrote in message
> >
> > news:0154714b-388b-4c47...@t54g2000hsg.googlegroups.com...
> > On Sep 28, 7:00 pm, "Daniel T." <danie...@earthlink.net> wrote:
> >
> > > > [...]
> > > > Agreed. I have played around with Occam's expression of
> > > > Communicating
> > > > Sequential Processes (CSP). I would like to see CSP explored further
> > > > in
> > > > C++.
> >
> > > > What I have found so far
> > > > is http://www.twistedsquare.com/cppcspv1/docs/index.html
> > > Well, CSP is a very nice language concept and OCCAM is an interesting
> > > instance of it.
> >
> > [...]
> >
> > > It is worth mentioning that at the time CSP was published, there
> > > appeared another elegant language concept: Distributed Processes (DP).
> > > This is something that can be mapped very naturally to objects and
> > > gives you high level of potential parallelism. In DP an object starts
> > > its own thread which operates on its own data space and other objects
> > > can call its methods asynchronously.
> >
> > Each object starts its own thread? lol, NO WAY!

> Hmm... Yes, *logically* "Each object starts its own thread". That is
> the way it is defined by the author of the concept.

Right. I corrected your major mistake. You stated that each object starts its
own thread. Well, your idea on how to implement DP is not scalable. I
corrected you; why don't you try learning from it? Wow.


> > I can definitely implement a
> > high-performance version of DP in which a pool of threads
> > multiplexes
> > multiple objects.

> Obviously, you do not know what you are talking about.

Wrong. Try again.


> You go and read
> about the programming concept Distributed Processes (DP) before you
> start claiming how you would like to hack it.
> http://brinch-hansen.net/papers/1978a.pdf

Listen, you would implement it with an object per thread because that's what
you explicitly said. I know for a fact that I can do it in a way which
is scalable. Multiple objects can be bound to a single thread and
communication between them is multiplexed. DP can be single-threaded in this
context. However, I would do it with a thread-pool.
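
That thread-pool approach can be sketched as follows (a hypothetical Pool class, not a complete DP implementation): N worker threads multiplex a single queue of messages, so the number of objects served is decoupled from the number of threads:

```cpp
#include <atomic>
#include <cassert>
#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

// N worker threads service one queue of messages; the number of objects
// handled is independent of the thread count.
class Pool {
public:
    explicit Pool(unsigned n) {
        for (unsigned i = 0; i < n; ++i)
            workers_.emplace_back([this] {
                for (;;) {
                    std::function<void()> job;
                    {
                        std::unique_lock<std::mutex> lk(m_);
                        cv_.wait(lk, [this] { return done_ || !work_.empty(); });
                        if (work_.empty()) return;  // done_ set and queue drained
                        job = std::move(work_.front());
                        work_.pop();
                    }
                    job();
                }
            });
    }
    ~Pool() {  // drains remaining work, then joins all workers
        { std::lock_guard<std::mutex> lk(m_); done_ = true; }
        cv_.notify_all();
        for (auto& w : workers_) w.join();
    }
    void post(std::function<void()> job) {
        { std::lock_guard<std::mutex> lk(m_); work_.push(std::move(job)); }
        cv_.notify_one();
    }
private:
    std::mutex m_;
    std::condition_variable cv_;
    std::queue<std::function<void()>> work_;
    bool done_ = false;
    std::vector<std::thread> workers_;
};
```

Per-object serialization (so two messages to one object never run concurrently) would need per-object queues layered on top, but the N-threads-for-O-objects decoupling is the point here.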

> > N number of threads for O number of objects. N = 4; O =
> > 10000. Sure. No problem. DP does not have to work like you explicitly
> > suggest. Sorry, but you make huge mistake.

> Well, it is not me who suggest it that way but the author of the
> programming language concept.

He is smarter than you are.


> Again, you may perhaps try to read about
> it first.
http://brinch-hansen.net/papers/1978a.pdf

IMVHO, the author SURELY knows that multiplexing can be used to implement
DP; you do not. Sorry, but that's the way it is. You think that a thread
per object is needed. Well, it's not; you're mistaken.


> > > The initial thread and the called
> > > methods then are executed in an interleaved manner. If the initial
> > > thread finishes, the object continues to serve the potentially
> > > simultaneous method calls, i.e. it becomes a (shared) passive object.
> > >http://brinch-hansen.net/papers/1978a.pdf
> > > Well, both language proposals are much higher level with respect to
> > > parallelism than what is planned into the new brave C++0x.
> >
> > DP is NOT as expensive as you claim it is. E.g., each object starts its
> > OWN thread. No way!

> I did not claim it was expensive.

Yes you did. You said that an object creates a thread. Dare me to quote you?
Well, what if there are 100,000 objects?


> You have concluded it but it is just
> because of your ignorance. It is just because you talk about it
> without knowing anything about it.

> > I hope, I could help, though.

You helped me confirm my initial thoughts. Sorry, but DP can be implemented
via a thread-pool and multiplexing. No object-per-thread crap is needed. I
quote you:


"> Well, it is not me who suggest it that way but the author of the
> programming language concept. "

Sorry. But the author knows that DP can be implemented through
message-passing, a thread-pool, and multiplexing such that it can be
single-threaded, or run on a ten-thousand-processor system. You need to
understand that fact. I suggest that you learn about how to create scalable
algorithms. DP is one of them. However, the way you describe it is
detestable at best.

:^|

Chris M. Thomasson
Oct 20, 2008, 11:18:34 PM

"Chris M. Thomasson" <n...@spam.invalid> wrote in message
news:cJbLk.6826$ys6....@newsfe02.iad...

I corrected you by informing you that a thread-pool and multiplexing can
implement DP using a bounded number of threads. Your answer was that I
don't know what I am writing about. Well, you make me laugh, Szabolcs. One
thing you know how to do is make me laugh. Thanks.

Chris M. Thomasson
Oct 20, 2008, 11:49:13 PM

I was perhaps WAY too harsh... Let me sum things up... I know for a FACT
that:

Distributed Processes
http://brinch-hansen.net/papers/1978a.pdf

can be implemented with a thread-pool, message-passing, and multiplexing such
that N threads can handle O objects. Think N == 4; O == 1,000,000. I KNOW
the author understands this fact; it's COMMON SENSE. So be it. However...

Szabolcs Ferenczi seemed to suggest that an object needs to have its own
thread. Well, if I take that to another extreme, an object needs its own
personal process. I think I flamed him too harshly. He is in need of
information, not flames. Well, I am sorry!

;^/
