
Future language support for concurrency


Joe Seigh

Jun 17, 2007, 2:54:55 PM
I was going to write up a straw man proposal for better Java
support for concurrency based on my experiment with STM, e.g.
better pointer abstraction and enclosure support, and then
thought the better of it.

I think the future of threading will be with customized languages
for concurrency. Of course the downside is not everyone is
good at language design, compiler writing, *and* concurrency. The
other downside is that anyone who goes through the extra trouble to
write a compiler will likely have an agenda. So be prepared to
have languages that only support Hoare style monitors or message
passing for example. And you can expect some language quirkiness
as well.

--
Joe Seigh

When you get lemons, you make lemonade.
When you get hardware, you make software.

Dmitriy Vyukov

Jun 17, 2007, 3:14:53 PM
On Jun 17, 22:54, Joe Seigh <jseigh...@xemaps.com> wrote:
> I was going to write up a straw man proposal for better Java
> support for concurrency based on my experiment with STM, e.g.
> better pointer abstraction and enclosure support, and then
> thought the better of it.

Will STM be the main feature and primitive in the language? Are there
other features/primitives?

See the following links:

Intel: Ct: Nested Data Parallelism in C/C++
http://dataparallel.googlegroups.com/web/Intel%20CGO%20DP%202007.pdf?gda=XR1lTUYAAAAztYsAcjQSKguM0LtUcGuWz35mYHL4rxREV0Hpfhl5wWG1qiJ7UbTIup-M2XPURDTz97EvWjT7iMn6XcvbZIUmw_CLgMRG3JLY4Hqs0tvQuQ

Microsoft: Accelerator: Using Data-Parallelism to Program GPUs and
Multi-cores
http://dataparallel.googlegroups.com/web/Microsoft%20CGO%20DP%202007.pdf?gda=bCIBF0oAAAAztYsAcjQSKguM0LtUcGuWz35mYHL4rxREV0Hpfhl5wWG1qiJ7UbTIup-M2XPURDRvXLp5qROlv4bX3Rc6SW7ucbcL-0dMwNIvNEoYyIGGjw

The RapidMind Development Platform and Data-Parallel Programming
http://dataparallel.googlegroups.com/web/RapidMind%20CGO%20DP%202007.pdf?gda=jUpUjEoAAAAztYsAcjQSKguM0LtUcGuWz35mYHL4rxREV0Hpfhl5wWG1qiJ7UbTIup-M2XPURDTFUhqbhB3c4tYzHQvS8AiHcbcL-0dMwNIvNEoYyIGGjw


Dmitriy V'jukov

Joe Seigh

Jun 17, 2007, 3:33:17 PM

You'd have to ask whoever is writing these languages. Note that there are
various compilers/languages that do this now. The big change will be
making business decisions to base your product or application on one of
these languages rather than a mainstream language like C/C++ or Java.

Szabolcs Ferenczi

Jun 17, 2007, 3:53:14 PM
On Jun 17, 8:54 pm, Joe Seigh <jseigh...@xemaps.com> wrote:
> I was going to write up a straw man proposal for better Java
> support for concurrency based on my experiment with STM, e.g.
> better pointer abstraction and enclosure support, and then
> thought the better of it.

What have you defined so far?

You are just lucky because Java does not provide too much language
support for traditional concurrency either. So I am just curious what
do you have in mind.

Best Regards,
Szabolcs

Joe Seigh

Jun 17, 2007, 4:30:58 PM

Nothing. I'm not a compiler person. I just think that's where the
future of concurrency will be, not Java or C++. We'll see when
it happens.

Dmitriy Vyukov

Jun 17, 2007, 4:33:50 PM
On Jun 17, 23:33, Joe Seigh <jseigh...@xemaps.com> wrote:

And what about this:

> > Will STM be the main feature and primitive in the language? Are there
> > other features/primitives?

> You'd have to ask whoever is writing these languages. Note that there are
> various compilers/languagess that do this now. The big change will be
> making business decisions to base your product or application on one of
> these languages rather than a mainstream language like C/C++ or Java.
>

I think that companies like Intel and Microsoft are targeting C++/C#
in the end (I hope Microsoft is not going to create a new
mainstream language in the near future :) )
And RapidMind is an extension to C++.

Dmitriy V'jukov

Chris Thomasson

Jun 17, 2007, 7:46:44 PM
"Joe Seigh" <jsei...@xemaps.com> wrote in message
news:Y8mdnXapf6jzHujb...@comcast.com...

>I was going to write up a straw man proposal for better Java
> support for concurrency based on my experiment with STM, e.g.
> better pointer abstraction and enclosure support, and then
> thought the better of it.
>
> I think the future of threading will be with customized languages
> for concurrency. Of course the downside is not everyone is
> good at language design, compiler writing, *and* concurrency.

[...]

Well, I guess you could design a language that would focus the programming
model around the reader/writer problem. For instance, the language could have
constructs with which you can build a class of reader and writer "entities".


Here is a sketch of what code in such a lang might look like:

entity Foo {

    struct(slist) MyNode {
        proc(reader) DoSomething() {
            output(console) "Hello From Reader\n";
        }

        proc(writer) DoSomething() {
            output(console) "Hello From Writer\n";
        }
    };


    member(pdr) MyNode TheList;


    task(reader) MyTask {
        proc(entry) main() {
            // reader entry for task
            pdr(reader::iterate) MyNode i = TheList {
                i.DoSomething();
            }
        }
    };


    task(writer) MyTask {
        proc(entry) main() {
            bool Switch = false;
            for(;;) {
                if (! Switch) {
                    // add new node
                    MyNode i = new MyNode;
                    pdr(writer::push) TheList = i;

                    Switch = true;

                } else {
                    // remove a node
                    MyNode i = pdr(writer::pop) TheList;
                    if (i) {
                        i.DoSomething();
                    }

                    Switch = false;
                }
            }
        }
    };


    proc(entry) main() {
        exec(join) {
            MyTask::reader::MyTask ReadTasks[3];
            MyTask::writer::MyTask WriteTasks[3];
        }
    }
};


The entity Foo can be executed like:


// main for the entire app...
app(entry) main() {
    exec(join) {
        Foo MyFoo;
    }
}

The idea is to build a pdr and a lot of lock-free pdr based reader/writer
data-structures into the language itself... Well, what do you think? Am I
off my rocker!

;^)

Chris Thomasson

Jun 17, 2007, 7:56:47 PM
Let me try to clarify some things wrt the source code for my "fictional"
language:


[...]

This defines a structure that represents a singly-linked list called MyNode:

> struct(slist) MyNode {
> proc(reader) DoSomething() {
> output(console) "Hello From Reader\n";
> }
>
> proc(writer) DoSomething() {
> output(console) "Hello From Writer\n";
> }
> };

The forward link is automatically generated by the compiler...

This defines a pdr instance of MyNode as a member of Foo called TheList:

> member(pdr) MyNode TheList;

This defines a reader task in Foo called MyTask:


> task(reader) MyTask {
> proc(entry) main() {
> // reader entry for task

This does a pdr read-side iteration of TheList:

> pdr(reader::iterate) MyNode i = TheList {

We drop into here for every node in TheList.

This calls the reader version of MyNode::DoSomething():
> i.DoSomething();
> }
> }
> };

This defines a writer task in Foo called MyTask:

> task(writer) MyTask {
> proc(entry) main() {
> bool Switch = false;
> for(;;) {
> if (! Switch) {
> // add new node
> MyNode i = new MyNode;


This uses pdr to push i into TheList:


> pdr(writer::push) TheList = i;
>
> Switch = true;
>
> } else {
> // remove a node


This uses pdr to try and pop a node from TheList:


> MyNode i = pdr(writer::pop) TheList;
> if (i) {


This calls the writer version of MyNode::DoSomething():


> i.DoSomething();
> }
>
> Switch = false;
> }
> }
> }
> };
>
>
> proc(entry) main() {

This creates tasks and joins them all before exiting the scope


> exec(join) {
> MyTask::reader::MyTask ReadTasks[3];
> MyTask::writer::MyTask WriteTasks[3];
> }
> }
> };
>
>

[...]

Chris Thomasson

Jun 17, 2007, 8:05:19 PM
[...]

> proc(entry) main() {
> exec(join) {
> MyTask::reader::MyTask ReadTasks[3];
> MyTask::writer::MyTask WriteTasks[3];

^^^^^^^^^^^^^^^^^

that should be:

Foo::reader::MyTask ReadTasks[3];
Foo::writer::MyTask WriteTasks[3];


> }
> }
> };

[...]


Oh crap!


I think I just corrected code for a language that doesn't exist!

lol.

Chris Thomasson

Jun 17, 2007, 10:42:11 PM
"Chris Thomasson" <cri...@comcast.net> wrote in message
news:rK-dnXXcl7huW-jb...@comcast.com...

> "Joe Seigh" <jsei...@xemaps.com> wrote in message
> news:Y8mdnXapf6jzHujb...@comcast.com...
> [...]
>
> Well, I guess you could do a Lang would focus the programming model around
> the reader/writer problem. For instance, the language can have constructs
> in which you can build a class of reader and writer "entities".
>
>
> Here is a sketch of what code in such a lang might look like:

[...]

Humm... A programming language that organizes applications into multiple
lock-free entities that contain multiple lock-free reader/writer tasks, with
excellent performance for the reader side... Could be useful in certain
scenarios... Humm, I guess the name of the language should be "PDR" or
something?

lol. :^)

Chris Thomasson

Jun 17, 2007, 10:46:56 PM
"Chris Thomasson" <cri...@comcast.net> wrote in message
news:F6Odnf0OQcWPbejb...@comcast.com...

> "Chris Thomasson" <cri...@comcast.net> wrote in message
> news:rK-dnXXcl7huW-jb...@comcast.com...
>> "Joe Seigh" <jsei...@xemaps.com> wrote in message
>> news:Y8mdnXapf6jzHujb...@comcast.com...
>> [...]
>>
>> Well, I guess you could do a Lang would focus the programming model
>> around the reader/writer problem. For instance, the language can have
>> constructs in which you can build a class of reader and writer
>> "entities".
>>
>>
>> Here is a sketch of what code in such a lang might look like:
>
> [...]
>
> Humm... A programming language that organizes applications into multiple
> lock-free entities that contain multiple lock-free reader/writer tasks,
> with excellent performance for the reader side...

This seems to map well to multi-core arch's... The entities could be bound
to a CPU as a whole, and the tasks could be bound to their cores... It would
simplify the automatic pdr epoch detection scheme that would be built-in to
the language runtime...


Joe Seigh

Jun 18, 2007, 6:11:09 AM

The only thing Microsoft has going on that I know of is STM which is
problematic to say the least.

Intel has as much as admitted that auto-parallelizing won't be the
silver bullet for concurrency:
http://softwareblogs.intel.com/2007/06/15/its-nearly-impossible-to-say-ebay-in-pig-latin/

Chris Thomasson

Jun 18, 2007, 10:39:34 PM
"Joe Seigh" <jsei...@xemaps.com> wrote in message
news:9s6dnb6ID-C6x-vb...@comcast.com...
> Dmitriy Vyukov wrote:
[...]

>
> The only thing Microsoft has going on that I know of is STM which is
> problematic to say the least.
>
> Intel has admitted as much that auto parallelizing won't be the
> silver bullet for concurrency
> http://softwareblogs.intel.com/2007/06/15/its-nearly-impossible-to-say-ebay-in-pig-latin/

They say something like:

_________________
programming these things is so complex that it will cause brain damage or
something... Oh yea, they said it would make your brains hurt.
_________________


They are selling these things along with a defeatist attitude! Not good.
Perhaps they should be upbeat and talk more about all of the original work
that is going on on USENET. Humm... They have to drop that whole "threads
and concurrency is too complex for any programmer" mentality because it
could have a marked negative effect on their company's bottom line, so to
speak...

;^)

Vladimir Frolov

Jun 19, 2007, 2:44:43 AM
Joe Seigh wrote:

> I think the future of threading will be with customized languages
> for concurrency. Of course the downside is not everyone is
> good at language design, compiler writing, *and* concurrency. The
> other downside is that anyone who goes through the extra trouble to
> write a compiler will likely have an agenda. So be prepared to
> have languages that only support Hoare style monitors or message
> passing for example. And you can expect some language quirkiness
> as well.

I believe that thread programming will not become mainstream practice
as long as programmers must manipulate threads, locks and waits explicitly.
Thus a programming language should do for multithreading the same thing
that garbage collection does for memory management. As I see it, the next
generation of languages should parallelize and synchronize code
without any programmer effort.

I think such automatic parallelization can be built on the dependency
injection principle. A language with garbage collection has information
about all dependencies inside a program, and this information is enough
to parallelize control flow. (I mean something like Google Guice but
with the ability to parallelize programs automatically.)
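The idea of declared dependencies driving parallelism can be loosely sketched with futures: each task names what it depends on, and tasks whose dependencies don't overlap run concurrently. This is only an illustration in Python; the `Dataflow` class and all its names are invented for the sketch, not any real framework:

```python
# Sketch: declared dependencies, not explicit locks, drive the scheduling.
from concurrent.futures import ThreadPoolExecutor

class Dataflow:
    """Run each task as soon as the tasks it depends on have finished."""
    def __init__(self, workers=4):
        self._pool = ThreadPoolExecutor(max_workers=workers)
        self._futures = {}

    def task(self, name, fn, deps=()):
        # Capture the dependencies' futures now; the task waits on them,
        # so independent tasks are free to execute in parallel.
        dep_futures = [self._futures[d] for d in deps]
        def run():
            args = [f.result() for f in dep_futures]
            return fn(*args)
        self._futures[name] = self._pool.submit(run)

    def result(self, name):
        return self._futures[name].result()

flow = Dataflow()
flow.task("a", lambda: 2)
flow.task("b", lambda: 3)                              # "a" and "b" may overlap
flow.task("sum", lambda x, y: x + y, deps=("a", "b"))  # waits for both
print(flow.result("sum"))  # 5
```

The programmer still states the dependencies, which is exactly the "some effort" point raised later in the thread.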

Furthermore, stack-oriented flow of control was designed to serve
single-threaded execution, and it should be partially abandoned
in such a parallel programming practice and replaced with something
else that can support massive parallelism.

Thus I believe that such a language should not expose concepts like
monitors and message passing to the programmer.

---
With respect,
Vladimir Frolov

Chris Thomasson

Jun 19, 2007, 4:28:10 AM
"Vladimir Frolov" <void...@gmail.com> wrote in message
news:1182235483.6...@k79g2000hse.googlegroups.com...
> Joe Seigh wrote:
[...]

> As I see it, the next
> generation of languages should parallelize and synchronize code
> without any programmer effort.

Are you joking?

Steve Watt

Jun 19, 2007, 2:42:17 PM
In article <XZydne0RH7k4D-rb...@comcast.com>,

No, I don't think he is.

A well-defined work scheduling language runtime should be able to
do this effectively without much programmer overhead beyond some
manner of declaring dependencies.

For example, a multi-core processor that I'm rather familiar with
has an ordering unit. A piece of work has a tag associated with
it, and the ordering unit assures that only a single thread can
be working on a piece of work with a given tag value. It's a
very simple and powerful concept. For example, in a packet
processing environment the tag could be generated from a hash of
the 5-tuple (src ip, dst ip, proto, src port, dest port), and
then all processing that has a protocol-control-block-like structure
doesn't need to worry about locking the PCB; the hardware has
assured that no other thread can access the PCB while the
thread is working on its packet.

The same concept is fairly easy to do in software as well.
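A software version of that ordering unit can be sketched as a lock table keyed by tag; the `TagOrderer` name and the 5-tuple hashing below are illustrative only, not the actual hardware interface:

```python
import threading
from collections import defaultdict

class TagOrderer:
    """Software stand-in for the hardware ordering unit: at most one
    thread at a time may run work carrying a given tag value."""
    def __init__(self):
        self._guard = threading.Lock()
        self._tag_locks = defaultdict(threading.Lock)

    def run(self, tag, work):
        with self._guard:                 # protect the lock table itself
            lock = self._tag_locks[tag]
        with lock:                        # serialize per tag; work with
            return work()                 # distinct tags proceeds in parallel

def packet_tag(src_ip, dst_ip, proto, src_port, dst_port):
    # Tag from a hash of the 5-tuple, as in the packet-processing example.
    return hash((src_ip, dst_ip, proto, src_port, dst_port))

orderer = TagOrderer()
handled = []
def handle(pkt_id):
    # Two flows -> two tags; packets of the same flow never race on the PCB.
    tag = packet_tag("10.0.0.1", "10.0.0.2", 6, 1000 + pkt_id % 2, 80)
    orderer.run(tag, lambda: handled.append(pkt_id))

threads = [threading.Thread(target=handle, args=(i,)) for i in range(8)]
for t in threads: t.start()
for t in threads: t.join()
```

The per-tag lock gives mutual exclusion, not queue order; the hardware unit described above additionally keeps work in order.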
--
Steve Watt KD6GGD PP-ASEL-IA ICBM: 121W 56' 57.5" / 37N 20' 15.3"
Internet: steve @ Watt.COM Whois: SW32-ARIN
Free time? There's no such thing. It just comes in varying prices...

Ian Collins

Jun 19, 2007, 7:21:53 PM

That's done now within the limited context of OpenMP and loop
parallelisation. Doing the work at runtime would be a bit more challenging.
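The loop-level case OpenMP handles can be sketched as a parallel-for that farms independent iterations out to a pool; a minimal Python illustration, assuming the loop body really is independent per index:

```python
from concurrent.futures import ThreadPoolExecutor

def parallel_for(n, body, workers=4):
    # In the spirit of an OpenMP "parallel for": iterations have no
    # cross-dependencies, so they can run on any worker; map() preserves
    # result order regardless of completion order.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(body, range(n)))

squares = parallel_for(8, lambda i: i * i)
print(squares)  # [0, 1, 4, 9, 16, 25, 36, 49]
```

The hard part OpenMP leaves to the programmer, and a runtime system would have to discover, is proving the iterations really are independent.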

--
Ian Collins.

Joe Seigh

Jun 19, 2007, 9:02:23 PM
Vladimir Frolov wrote:
> Joe Seigh wrote:
>
>
>>I think the future of threading will be with customized languages
>>for concurrency. Of course the downside is not everyone is
>>good at language design, compiler writing, *and* concurrency. The
>>other downside is that anyone who goes through the extra trouble to
>>write a compiler will likely have an agenda. So be prepared to
>>have languages that only support Hoare style monitors or message
>>passing for example. And you can expect some language quirkiness
>>as well.
>
>
> I believe that thread programming will not become mainstream practice
> while programmers must manipulate threads, locks and waits explicitly.
> Thus programming language should do for multithreading the same thing
> which garbage collection do for memory management. As I can see next
> generation of languages should parallelize and synchronize code
> without any programmer effort.
>
> I think such automatic parallelization can be built on dependency
> injection principle. Language with garbage collection has information
> about all dependencies inside program and this information is enough
> to parallelize control flow. (I mean something like google guice but
> with ability to parallelize programs automatically)

Intel is not too sure that's possible. They wouldn't mind too
terribly if someone proved them wrong. Proof by working example
though.

>
> Furthermore, stack-oriented execution of control was designed to serve
> single thread execution purposes and it should be partially rejected
> for such parallel programming practice and replaced with something
> else that can assure massive parallelism.
>
> Thus I believe that language should not provide such concepts as
> monitors and message passing for programmer.
>

It's reasonable to come up with new concurrency constructs without
having to do a full blown compiler. Though whatever it is it will
have to be generally applicable to all problems, not just specific
problem spaces. Otherwise it will never take off.

Ian Collins

Jun 19, 2007, 9:12:18 PM
Joe Seigh wrote:
>
> It's reasonable to come up with new concurrency constructs without
> having to do a full blown compiler. Though whatever it is it will
> have to be generally applicable to all problems, not just specific
> problem spaces. Otherwise it will never take off.
>
Have we reached the point with concurrent programming where the level of
complexity is too great for a one size fits all language? I can see a
need for more specialised languages targeting different problem spaces
without having to compromise in order to support others.

This may be through advances in aspect-oriented programming, or through
specialised small languages.

--
Ian Collins.

Chris Thomasson

Jun 19, 2007, 10:47:21 PM
"Steve Watt" <steve.re...@Watt.COM> wrote in message
news:f59829$1bn$1...@wattres.Watt.COM...

> In article <XZydne0RH7k4D-rb...@comcast.com>,
> Chris Thomasson <cri...@comcast.net> wrote:
>>"Vladimir Frolov" <void...@gmail.com> wrote in message
>>news:1182235483.6...@k79g2000hse.googlegroups.com...
>>> Joe Seigh wrote:
>>[...]
>>
>>> As I see it, the next
>>> generation of languages should parallelize and synchronize code
>>> without any programmer effort.
>>
>>Are you joking?
>
> No, I don't think he is.


> A well-defined work scheduling language runtime should be able to
> do this effectively without much programmer overhead beyond some
> manner of declaring dependencies.

There has to be some effort on the programmer's part wrt feeding the
compiler with the information it needs... IMHO, that makes the statement
'without any programmer effort' false.


> For example, a multi-core processor that I'm rather familiar with
> has an ordering unit. A piece of work has a tag associated with
> it, and the ordering unit assures that only a single thread can
> be working on a piece of work with a given tag value. It's a
> very simple and powerful concept.

Built-in message passing synchronization scheme? Seems like built-in
scalable mutexes in that it synchronizes access to work by only allowing a
single thread to be working on it at any one time... Does it have a
msg-passing interface, kind of like using DMA on the Cell to communicate
between the SPUs?

Chris Thomasson

Jun 19, 2007, 10:51:24 PM
"Ian Collins" <ian-...@hotmail.com> wrote in message
news:5drd7iF...@mid.individual.net...

> Joe Seigh wrote:
>>
>> It's reasonable to come up with new concurrency constructs without
>> having to do a full blown compiler. Though whatever it is it will
>> have to be generally applicable to all problems, not just specific
>> problem spaces. Otherwise it will never take off.
>>
> Have we reached the point with concurrent programming where the level of
> complexity is too great for a one size fits all language? I can see a
> need for more specialized languages targeting different problem spaces

> without having to compromise in order to support others.
[...]

I could see some possible uses for a language that specializes in solving
the reader-writer problem and providing good performance for the readers...
Something kind of like the pseudo-code I posted in this thread wrt my
'fictional' language...
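One way such a reader-specialized construct might behave can be sketched in an existing language: readers take an unlocked snapshot while writers copy-on-write under a lock. A toy Python illustration, RCU-like in spirit only; `SnapshotList` is invented for the sketch, and in CPython the GIL makes the reference swap safe:

```python
import threading

class SnapshotList:
    """Writers copy-on-write under a lock; readers just grab the current
    list reference, so read-side iteration never blocks or waits."""
    def __init__(self):
        self._items = []              # replaced wholesale, never mutated
        self._write_lock = threading.Lock()

    def push(self, item):
        with self._write_lock:
            self._items = self._items + [item]   # fresh list; old readers keep theirs

    def pop(self):
        with self._write_lock:
            if not self._items:
                return None
            head, *rest = self._items
            self._items = rest
            return head

    def snapshot(self):
        return self._items            # reader side: no lock at all

s = SnapshotList()
s.push("a"); s.push("b")
view = s.snapshot()   # a reader's stable view
s.pop()               # writer removes "a"
print(view)           # ['a', 'b'] -- the old snapshot is untouched
print(s.snapshot())   # ['b']
```

A real PDR scheme would reclaim the old copies via epoch/quiescence detection instead of leaning on a garbage collector, which is precisely what the fictional language's runtime was imagined to automate.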

Chris Thomasson

unread,
Jun 19, 2007, 11:13:22 PM6/19/07
to
"Joe Seigh" <jsei...@xemaps.com> wrote in message
news:Y8mdnXapf6jzHujb...@comcast.com...

>I was going to write up a straw man proposal for better Java
> support for concurrency based on my experiment with STM, e.g.
> better pointer abstraction and enclosure support, and then
> thought the better of it.
>
> I think the future of threading will be with customized languages
> for concurrency.

[...]

Well, I hope they will be using C/C++/Assembly Languages to create the
runtimes for those customized languages...


Steve Watt

Jun 20, 2007, 12:34:59 AM
In article <SMadnWT7__XfCeXb...@comcast.com>,

Chris Thomasson <cri...@comcast.net> wrote:
>"Steve Watt" <steve.re...@Watt.COM> wrote in message
>news:f59829$1bn$1...@wattres.Watt.COM...
>> In article <XZydne0RH7k4D-rb...@comcast.com>,
>> Chris Thomasson <cri...@comcast.net> wrote:
>>>"Vladimir Frolov" <void...@gmail.com> wrote in message
>>>news:1182235483.6...@k79g2000hse.googlegroups.com...
>>>> Joe Seigh wrote:
>>>[...]
>>>
>>>> As I see it, the next
>>>> generation of languages should parallelize and synchronize code
>>>> without any programmer effort.
>>>
>>>Are you joking?
>>
>> No, I don't think he is.
>
>
>> A well-defined work scheduling language runtime should be able to
>> do this effectively without much programmer overhead beyond some
>> manner of declaring dependencies.
>
>There has to be some effort on the programmer wrt feeding the compiler with
>the information it needs... IMHO, that makes the statement 'without any
>programmer effort' false.

Perhaps I'm being a bit optimistic in my assumption that the
programmer will have already understood the data dependencies in
their system.

>> For example, a multi-core processor that I'm rather familiar with
>> has an ordering unit. A piece of work has a tag associated with
>> it, and the ordering unit assures that only a single thread can
>> be working on a piece of work with a given tag value. It's a
>> very simple and powerful concept.
>
>Built-in message passing synchronization scheme? Seems like built in
>scaleable mutexs in that it synchronizes access to work by only allowing a
>single thread to be working on it ant any one time... Does it have a
>msg-passing interface, kind of like using DMA on the Cell to communicate
>between the SPUS?

Yes, any core can inject work (a message if you will) into the queue.
All 16 cores have independent L1 caches, share an L2 cache.

Szabolcs Ferenczi

Jun 20, 2007, 5:46:43 PM
On Jun 17, 10:30 pm, Joe Seigh <jseigh...@xemaps.com> wrote:
> Szabolcs Ferenczi wrote:
...

> > What have you defined so far?
...

> Nothing. I'm not a compiler person. I just think that's where the
> future of concurrency will be, not Java or C++. We'll see when
> it happens.

We'll see when it happens, however, you have started this thread with
this statement: "I was going to write up a straw man proposal for
better Java support for concurrency based on my experiment with STM".

I am well aware of the language means that have been proposed so far
but that are missing from the current mainstream languages. Those are
not for STM, however.

So what are the language means, based on your experiment, that you
would like to have in a future language? I am curious.

Best Regards,
Szabolcs

Szabolcs Ferenczi

Jun 20, 2007, 6:04:32 PM
On Jun 20, 4:47 am, "Chris Thomasson" <cris...@comcast.net> wrote:

> Built-in message passing synchronization scheme?

Talking about future language support for concurrency, it is worth
mentioning that a very novel programming language has been proposed
with a built-in message passing communication scheme. It was
the OCCAM programming language, which was the language form of the
mathematically sound CSP proposal (Communicating Sequential
Processes).

It was so novel, however, that programmers could not keep up with it.

Of course, part of the failure is that the hardware background of it,
namely the Transputer, was not a business success.

Nevertheless, the novel programming language with built-in message
passing is defined and implemented.
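For flavor, OCCAM-style channel communication (`chan ! x` to send, `chan ? v` to receive) can be loosely imitated with blocking queues; a Python sketch, noting that real OCCAM channels are unbuffered rendezvous, which a bounded queue only approximates:

```python
import threading
import queue

def producer(chan):
    for i in range(3):
        chan.put(i)          # roughly OCCAM's  chan ! i
    chan.put(None)           # end-of-stream marker (this sketch's convention)

def consumer(chan, out):
    while True:
        v = chan.get()       # roughly OCCAM's  chan ? v
        if v is None:
            break
        out.append(v * 10)

chan = queue.Queue(maxsize=1)    # tiny buffer, closest cheap analog to rendezvous
out = []
threads = [threading.Thread(target=producer, args=(chan,)),
           threading.Thread(target=consumer, args=(chan, out))]
for t in threads: t.start()
for t in threads: t.join()
print(out)   # [0, 10, 20]
```

What the queue cannot imitate is OCCAM's ALT (choice over several channels), which is a large part of what made the language expressive.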

Best Regards,
Szabolcs

Joe Seigh

Jun 20, 2007, 8:16:55 PM


You can do STM without any of the things I could propose. It's
more of an ease of use or natural use kind of thing that would
make the difference between something being used or not.
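The kind of "natural use" in question might be an atomic block that reads like ordinary code instead of an explicit transaction API. A toy Python sketch of the ergonomics only; this one just takes a single global lock and is not transactional memory:

```python
import threading

_global_lock = threading.RLock()   # stand-in: real STM tracks read/write sets

class atomic:
    """Toy 'atomic block'. The point is the syntax at the call site,
    not the implementation, which here is just one global lock."""
    def __enter__(self):
        _global_lock.acquire()
    def __exit__(self, *exc):
        _global_lock.release()
        return False

account_a, account_b = 100, 0
def transfer(amount):
    global account_a, account_b
    with atomic():               # reads like plain code, vs. an explicit
        account_a -= amount      # txn.begin()/read()/write()/commit() API
        account_b += amount

transfer(40)
print(account_a, account_b)   # 60 40
```

An STM bolted onto a language as a library forces the second, explicit style; language-level support is what makes the first style possible.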

I'm not the first nor in the minority on thinking that
Java anonymous inner classes are an extremely painful form
of enclosure. Ditto on some other Java language features.
The point isn't that I don't think my feature wish list
would get incorporated in to Java. It's that Java will
just progress too slowly to be significant in the concurrency
field.

If you have a novel concurrency scheme in mind, it will be a lot
easier to implement your own language than it will be to
get minimal support for it in some existing language.

Microsoft might have an edge here. They can put in whatever they
want into their languages. They're working on STM so you
might see STM support at some point. Of course it will be
their version and implementation of STM.

Chris Thomasson

Jun 21, 2007, 11:50:43 AM
"Szabolcs Ferenczi" <szabolcs...@gmail.com> wrote in message
news:1182377072.4...@p77g2000hsh.googlegroups.com...

> On Jun 20, 4:47 am, "Chris Thomasson" <cris...@comcast.net> wrote:
>
>> Built-in message passing synchronization scheme?
[...]

> Of course, part of the failure is that the hardware background of it,
> namely the Transputer, was not a business success.

The DMA mechanism wrt the Cell architecture is kind of Transputer-like:

http://groups.google.com/group/comp.arch/msg/2e990f75f60c69d2

Humm...

Chris Thomasson

Jun 21, 2007, 11:51:25 AM

Chris Thomasson

Jun 21, 2007, 12:04:13 PM
"Steve Watt" <steve.re...@Watt.COM> wrote in message
news:f5aapj$fsd$1...@wattres.Watt.COM...

> In article <SMadnWT7__XfCeXb...@comcast.com>,
> Chris Thomasson <cri...@comcast.net> wrote:
[...]

>>There has to be some effort on the programmer wrt feeding the compiler
>>with
>>the information it needs... IMHO, that makes the statement 'without any
>>programmer effort' false.
>
> Perhaps I'm being a bit optimistic in my assumption that the
> programmer will have already understood the data dependencies in
> their system.

;^)


>>> For example, a multi-core processor that I'm rather familiar with
>>> has an ordering unit.

[...]

>>Built-in message passing synchronization scheme?

[...]


>> Does it have a
>>msg-passing interface, kind of like using DMA on the Cell to communicate
>>between the SPUS?
>
> Yes, any core can inject work (a message if you will) into the queue.
> All 16 cores have independent L1 caches, share an L2 cache.

What architecture are you referring to? Is it something like:

http://www.crn.com/white-box/192201685
(16-core cpu by Movidis...)

http://news.com.com/Sun+puts+16+cores+on+its+Rock+chip/2100-1006_3-6141961.html
(16-core "Rock" processor by Sun..)

?

P.S.
________

The latter link referring to the 'Rock' processor should have a 'KCSS'
instruction, or full-blown STM:

http://groups.google.com/group/comp.arch/msg/5d607bc7a2433d2c
(refer to last sentence in msg...)


So, at least one company is embedding multi-word CAS functionality directly
in the hardware... Humm... Not too sure how I feel about that:

http://groups.google.com/group/comp.arch/browse_frm/thread/91cb3fbfa2eb362a


Any thoughts?

blm...@myrealbox.com

Jun 23, 2007, 8:21:44 PM
In article <1182377072.4...@p77g2000hsh.googlegroups.com>,

The way I remember it, defining new languages for concurrent
programming was a popular research area for a while -- I'm thinking
1980s/1990s, but there is probably both earlier and later work.
One I know about (because I knew some of the people who worked
on it) was called PCN (Program Composition Notation). It also
was apparently too novel for most programmers and wasn't widely
adopted, though it had nice syntax for specifying concurrency.

I wonder if current work is really new, or whether different
kinds of wheels are being reinvented ....

--
B. L. Massingill
ObDisclaimer: I don't speak for my employers; they return the favor.

Szabolcs Ferenczi

Jun 24, 2007, 4:52:18 PM
On Jun 24, 2:21 am, blm...@myrealbox.com <blm...@myrealbox.com> wrote:

> The way I remember it, defining new languages for concurrent
> programming was a popular research area for a while -- I'm thinking
> 1980s/1990s, but there is probably both earlier and later work.

Yes, and I think the basic language constructs were found and proposed
at that time. However, it did not become very popular perhaps because
of the required big leap in thinking. Just as in spoken languages, a
programming notation determines the thinking in programming as well.
Today, the majority of the programmers suffer from thinking in
sequential constructs when trying to work with threads. There is
almost no language support for multi-threading in the current
languages.

That is why I mentioned OCCAM because for instance it forced the
programmer to think about whether two actions should strictly follow
each other (SEQ) or whether parallel execution was allowed (PAR).
There was no default for it in that language (in most languages the
default sequential composition is the semicolon or the writing order).
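The SEQ/PAR distinction can be mimicked as explicit composition operators; a toy Python sketch, not OCCAM semantics, just the idea that sequencing and parallelism are both spelled out rather than sequencing being the silent default:

```python
import threading

def SEQ(*actions):
    # Explicit sequential composition: each action strictly follows the last.
    for a in actions:
        a()

def PAR(*actions):
    # Explicit parallel composition: actions may overlap in time;
    # PAR terminates only when all of its actions have terminated.
    ts = [threading.Thread(target=a) for a in actions]
    for t in ts: t.start()
    for t in ts: t.join()

trace = []
PAR(lambda: trace.append("a"),
    lambda: trace.append("b"))      # "a"/"b" land in either order
SEQ(lambda: trace.append("c"),
    lambda: trace.append("d"))      # "c" strictly before "d"
```

Forcing the programmer to pick SEQ or PAR at every composition point is exactly the "big leap in thinking" described above.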

It is interesting to note that even one of the most basic
proposals for concurrent languages, the critical region, has not been
properly adopted in modern popular languages (e.g. Java, C#). Not to
mention the conditional critical region or the monitor.

By the way, having a quick check on the net, an adapted form of the
critical region and the conditional critical region is proposed for
this STM stuff as well. Again, only partly.

> I wonder if current work is really new, or whether different
> kinds of wheels are being reinvented ....

Me too. That is why I was curious whether the starter of this
discussion thread has something in mind after claiming that he has a
lot of experience in STM already.

Best Regards,
Szabolcs

Joe Seigh

Jun 24, 2007, 7:29:41 PM

Experienced in how awkward Java api's for STM can be anyway.

Steve Watt

Jun 25, 2007, 12:42:46 PM
In article <DM2dnRHkkMkdPefb...@comcast.com>,

Chris Thomasson <cri...@comcast.net> wrote:
>"Steve Watt" <steve.re...@Watt.COM> wrote in message
>news:f5aapj$fsd$1...@wattres.Watt.COM...
>> In article <SMadnWT7__XfCeXb...@comcast.com>,
>> Chris Thomasson <cri...@comcast.net> wrote:
>[...]
>
>
>>>> For example, a multi-core processor that I'm rather familiar with
>>>> has an ordering unit.
>[...]
>
>>>Built-in message passing synchronization scheme?
>[...]
>>> Does it have a
>>>msg-passing interface, kind of like using DMA on the Cell to communicate
>>>between the SPUS?
>>
>> Yes, any core can inject work (a message if you will) into the queue.
>> All 16 cores have independent L1 caches, share an L2 cache.
>
>What architecture are you referring to? Is it something like:
>
>http://www.crn.com/white-box/192201685
>(16-core cpu by Movidis...)

That's the CPU, but the CPU's not by Movidis, only the box is.

Nah, I prefer to work on stuff that's shipping as opposed to
futureware.

>P.S.
>________
>
>The latter link referring to the 'Rock' processor should have a 'KCSS'
>instruction, or full-blown STM:
>
>http://groups.google.com/group/comp.arch/msg/5d607bc7a2433d2c
>(refer to last sentence in msg...)

I guess I'm so far behind in comp.arch that I've been missing juicy
details. Gotta do something about that. Maybe next year. :)

>So, at least one company is embedding multi-word CAS functionality directly
>in the hardware... Humm... Not too sure how I feel about that:
>
>http://groups.google.com/group/comp.arch/browse_frm/thread/91cb3fbfa2eb362a


>Any thoughts?

I'm still a fan of load-linked/store-conditional, probably because I've
been a fan of the MIPS architecture since I first encountered it back
in '92. Their compiler technology and outstanding instructions/cycle
rates really brought the RISC theories to good application. They did
later backslide on the purely orthogonal instruction set, but that's
a discussion for comp.arch, not here.

Threading-wise, I find it easiest to express lock-free ideas with ll/sc
combined with Cavium's memory-barrier extensions. Dunno how much I
can talk about the extensions, though.
