
Questions on "Realtime specification for Java"


Bo Sanden

05.04.2001, 18:55:53
I have studied the "Real-time specification for Java" in quite some
detail and have several questions. For example, the discussion of
AsynchronouslyInterruptedException and asynchronous transfer of control
contains one or two typos that make the meaning quite unclear. (To be
specific, a line seems to be missing after the first line on page 141,
and the example on page 141 contains the comment "//do something about
it - abort or retry" at a place that does not seem to make sense
syntactically.)

Is anybody prepared to field questions on the specification?

Is there any ongoing activity? My understanding is that a reference
implementation is underway somewhere.

Bo Sanden
Colorado Tech. University

Manas Saksena

06.04.2001, 10:13:21
To answer your questions:

(1) Yes, there is (hectic) ongoing activity!!
(2) TimeSys is developing a reference implementation for the specification,
and this will be made available shortly.
(3) There are (minor) parts of the specification that have changed -- in
many cases as a result of discussions between the RI team and the experts
group.

Feel free to post questions on the newsgroup, or if you want to take it
off-line, I will be happy to answer them. I am travelling the next two
weeks, so my responses may not be prompt -- especially if you post to the
newsgroup. It is possible that there are members of the expert group who
read these newsgroups and may respond.

Manas Saksena
Director of Product Development
TimeSys Corporation


"Bo Sanden" <bsa...@acm.org> wrote in message
news:3ACCF78E...@acm.org...


> I have studied the "Real-time specification for Java" in quite some
> detail and have several questions. For example, the discussion of
> AsynchronouslyInterruptedException and asynchronous transfer of control
> contains one or two typos that make the meaning quite unclear. (To be
> specific, a line seems to be missing after the first line on page 141,

> and the example on page 141 contains the comment "//do something

Michael E Thomadakis

27.04.2001, 13:41:08
I am wondering if it would be meaningful and/or worthwhile to let Java threads
provide the same (or a ``good'' subset) of the functionality of POSIX Pthreads.
There has been a long-standing effort to develop the POSIX interfaces, and Java
could benefit from the experience of the POSIX standard and its many (good)
implementations. POSIX is an open standard and several people are looking into
standards to develop their systems for portability and well-defined-ness
reasons. Are there people considering the possibility of reconciling the
functionality of the RT Java and Pthreads APIs somehow?

thanks,

M. Thomadakis
Texas A&M University

Manas Saksena

27.04.2001, 20:45:08
I am not sure if you have looked at the Real-time Specification for Java,
which is available at http://www.rtj.org. Note that this effort is different
from the JConsortium's real-time Java effort. When the spec was developed,
there were inputs from many experts, including people who were involved in
developing the POSIX specs.

You will notice that the spec does provide what you get from POSIX threads,
and a lot more. All of that is available through additional thread classes --
called RealtimeThread and NoHeapRealtimeThread -- along with support for
POSIX-style fixed-priority preemptive scheduling, etc. Note that while there
is similarity in semantics, there are obvious differences in syntax, which
was inevitable.

regards,
Manas Saksena


"Michael E Thomadakis" <mi...@cs.tamu.edu> wrote in message
news:3AE9AF34...@cs.tamu.edu...

Konrad Schwarz

30.04.2001, 05:55:36

Michael E Thomadakis wrote:


>
> I am wondering if it would be meaningful and/or worthwhile to let Java threads
> provide the same (or a ``good'' subset) of the functionality of POSIX Pthreads.

Thread support in Java is very similar to the Pthread model.

Java groups threads into thread groups, allows threads to be suspended and
resumed, and defines the ThreadDeath exception, which is supposed to destroy
the thread. This functionality is a superset of that provided by Pthreads.

On the other hand, mutexes and condition variables are subsumed into monitors,
allowing only one condition variable per mutex. Furthermore, monitors are
tied to objects and can't exist independently of them, and regions
of mutual exclusion are enforced syntactically. The InterruptedException is
a mechanism with which thread cancelation can be implemented.

Dave Butenhof

01.05.2001, 08:05:10
Konrad Schwarz wrote:

> Michael E Thomadakis wrote:
> >
> > I am wondering if it would be meaningful and/or worthwhile to let Java threads
> > provide the same (or a ``good'' subset) of the functionality of POSIX Pthreads.
>
> Thread support in Java is very similar to the Pthread model.
>
> Java groups threads into thread groups, allows threads to be suspended and
> resumed, and defines the ThreadDeath exception, which is supposed to destroy
> the thread. This functionality is a superset of that provided by Pthreads.

No, suspend/resume and "thread death" are BUGS, not "functionality". Asynchronous
suspend and resume cannot be used correctly without knowing precisely what each
target thread is doing at each instant. If you catch a target while it holds a
resource, you risk irresolvable deadlock. The only safe form of suspend and resume
is a "suspend self" that can only suspend the calling thread in a known state.
POSIX has this: it's called pthread_cond_wait(). The ability to asynchronously kill
a thread without cleanup capabilities leaves the entire process in an undefined and
unusable state. (Again, unless you know the exact state of the target thread, and/or
have intimate knowledge of the state such that you can unravel whatever mess you've
made.) In any but a monolithic embedded application, there's no way to continue
operation after such a kill. Instead, you should kill the entire process and restart
it cleanly. (You can do this automatically by having a monitor process that forks
the threaded child, and then simply waits for it to die and recreates it.)

These additional Java "features" are merely proof that the designers of Java paid no
attention to the experience of those researching and using threads over the past 30
years or so. (Or else they somehow thought they were writing a low-level
implementation language for monolithic embedded system development; despite the fact
that they were actually coding a large, complicated, and opaque interpreter for
large, complicated, and opaque systems of widely varied nature.)

Thread groups are different, but also fairly trivial "sugar coating" anyone could
trivially implement on top of POSIX. The same is true of Java "monitors", which
really aren't monitors in any useful sense as they have no semantic awareness of the
shared data states being protected.

> On the other hand, mutexes and condition variables are subsumed into monitors,
> allowing only one condition variable per mutex. Furthermore, monitors are
> tied to objects and can't exist independently of them and regions
> of mutual exclusion are enforced syntactically. The InterruptedException is
> a mechanism with which thread cancelation can be implemented.

InterruptedException can easily be implemented on top of pthread_cancel() in any
system that's correctly written to use a system exception package. (Therefore,
implementing cancellation on top of InterruptedException is trivial but rather
convoluted.)

Java adds some nice syntactic support for threads and synchronization. Still, that
support is shallow and trivial, and often more limiting than enabling.

I'm not trying to argue that Java is pointless or useless. Though it's hardly the
perfect model of a portable interpretive programming language or a general threaded
programming language, it "gets the job done". (And NOTHING is perfect.) However,
anyone trying to push Java as a "higher model of threading" is jumping around rather
dangerously on an awfully thin branch.

/------------------[ David.B...@compaq.com ]------------------\
| Compaq Computer Corporation POSIX Thread Architect |
| My book: http://www.awl.com/cseng/titles/0-201-63392-2/ |
\-----[ http://home.earthlink.net/~anneart/family/dave.html ]-----/

Tom Tromey

01.05.2001, 23:56:23
>>>>> "Dave" == Dave Butenhof <David.B...@compaq.com> writes:

Dave> InterruptedException can easily be implemented on top of
Dave> pthread_cancel() in any system that's correctly written to use a
Dave> system exception package.

How can you do this? I thought the target thread of pthread_cancel
could not detect the cancellation, only ignore it.

In libgcj we eventually decided that pthread_cancel couldn't work for
this. It would be interesting to be wrong. It might not change our
implementation (though I note that what we have now isn't actually
correct).

Tom

Kaz Kylheku

02.05.2001, 01:22:12
On 01 May 2001 21:56:23 -0600, Tom Tromey <tro...@redhat.com> wrote:
>>>>>> "Dave" == Dave Butenhof <David.B...@compaq.com> writes:
>
>Dave> InterruptedException can easily be implemented on top of
>Dave> pthread_cancel() in any system that's correctly written to use a
>Dave> system exception package.
>
>How can you do this? I thought the target thread of pthread_cancel
>could not detect but ignore the cancellation.

What Dave probably means is that if cancelation is done in terms of
a system exception package---a package that is part of the system ABI---
then a thread experiences cancelation as a special exception,
and cleanup handlers are executed as part of stack unwinding.

>In libgcj we eventually decided that pthread_cancel couldn't work for
>this.

Probably because this library is intended to be portable to pthread_cancel
in systems other than ones that are ``correctly written to use a system
exception package'', by which Dave means Digital UNIX. ;)

Dave Butenhof

02.05.2001, 08:56:35
Kaz Kylheku wrote:

> On 01 May 2001 21:56:23 -0600, Tom Tromey <tro...@redhat.com> wrote:
> >>>>>> "Dave" == Dave Butenhof <David.B...@compaq.com> writes:
> >
> >Dave> InterruptedException can easily be implemented on top of
> >Dave> pthread_cancel() in any system that's correctly written to use a
> >Dave> system exception package.
> >
> >How can you do this? I thought the target thread of pthread_cancel
> >could not detect but ignore the cancellation.
>
> What Dave probably means is that if cancelation is done in terms of
> a system exception package---a package that is part of the system ABI---
> then a thread experiences cancelation as a special exception,
> and cleanup handlers are executed as part of stack unwinding.

Exactly. Cleanup handlers should be, were intended to be, and MUST be, just
another kind of callframe-scoped exception handler. Anything else is an
inexcusable (though frighteningly common) bug. (I'm not trying to offend anyone
-- well, not much -- with strong language, but this really does "bug" me, and
people have avoided facing the issue long enough. So I'm just venting a little
"encouragement" here. ;-) )

> >In libgcj we eventually decided that pthread_cancel couldn't work for
> >this.
>
> Proably because this libary is intended portable to pthread_cancel in systems
> other than ones that are ``correctly written to use a system exception
> package'', by which Dave means Digital UNIX. ;)

Actually, Kaz, that's "Compaq Tru64 UNIX". ;-) Also OpenVMS, and I believe
OS/400. Unless someone busted the spec I worked so hard to fix, it also applies
to any system that conforms to The Open Group's ABI for IA-64 UNIX 98 systems.
(A broken ABI would be particularly bad, since it would preclude correct
implementations.) And I'm hoping that bit of sanity will gradually percolate up
into API conforming implementations, though I'm sure that won't happen until
implementors are forced kicking and screaming into the 1980s either by standards
or by users... and one of my subtle purposes in continuing to harp on this is to
educate users so that they'll stop expecting mediocrity.

POSIX threads (despite some silly attempts to deny the obvious) was principally
influenced by our CMA threads. CMA, and in particular its "alerts", were based
on pervasive and generic support for exceptions in the underlying system and
languages. POSIX almost refused to accept cancellation for that reason, but some
bright people devised the current cancellation and cleanup model as a stand-in.
The minimal requirements, as written, provide the support necessary for
standalone threaded C code; and they're carefully written to allow (and even to
encourage) implementation on a pervasive generalized system exception mechanism.

The fact is that any system with "out of the box" support for Ada or C++ already
has this mechanism. Pull out the exception part of either's runtime, remove the
language-specific restrictions (which really isn't a big job), and you've got a
general runtime that anyone can use. Add a few trivial extensions to the
platform's C compiler, and C code can use it, and you can easily build the
standard pthread_cleanup_push and pthread_cleanup_pop on top of that. You do
need to make sure that all languages generate correct exception code range
information in their output... but you need to do that anyway if ANY language
deals with exceptions and you've got any intent at all to allow people to mix .o
files from various languages (like, say, C and C++).

Compaq C happens to support Microsoft's try/except syntax; while I don't think
that's particularly elegant or easy to use, that's all that's necessary. Quite
honestly, it's pointless and wasteful for any platform NOT to do this, because
Ada, C++, Java, threads, and others are all going off and reinventing the same
trivial little wheel in isolation, just to make sure they won't work together.
How much does it take to get them to pool resources and do it ONCE, correctly,
for everyone to share?

If we could come up with a standard API for the exception library, like the one
in the IA-64 ABI, that'd be cool; but it's no big deal to the average
programmer, who just wants a system that works. (I certainly would not hold up
our libexc, or the OpenVMS condition handling mechanism, as a model for a
standard API; they get the job done, but they're ugly. On the other hand, they
embody and support certain critical features, such as supporting various
language exception models, "cleanup" vs "handling", dynamic runtime exception
scopes as from generated code, etc.)

(This is the end of Dave's emergency exception rant. We will now return you to
your regular programming...)

Tom Payne

02.05.2001, 10:55:22
In comp.programming.threads Dave Butenhof <David.B...@compaq.com> wrote:
: Kaz Kylheku wrote:

:> On 01 May 2001 21:56:23 -0600, Tom Tromey <tro...@redhat.com> wrote:
:> >>>>>> "Dave" == Dave Butenhof <David.B...@compaq.com> writes:
:> >
:> >Dave> InterruptedException can easily be implemented on top of
:> >Dave> pthread_cancel() in any system that's correctly written to use a
:> >Dave> system exception package.
:> >
:> >How can you do this? I thought the target thread of pthread_cancel
:> >could not detect but ignore the cancellation.
:>
:> What Dave probably means is that cancelation is done in terms of
:> a system exception package---a package that is part of the system ABI---
:> then the a thread experiences cancelation as a special exception,
:> and cleanup handlers are executed as part of stack unwinding.

: Exactly. Cleanup handlers should be, were intended to be, and MUST be, just
: another kind of callframe-scoped exception handler. Anything else is an
: inexcusable (though frighteningly common) bug.

[...]
: POSIX threads (despite some silly attempts to deny the obvious) was principally
: influenced by our CMA threads. CMA, and in particular its "alerts", were based
: on pervasive and generic support for exceptions in the underlying system and
: languages. POSIX almost refused to accept cancellation for that reason, but some
: bright people devised the current cancellation and cleanup model as a standin.
: The minimal requirements, as written, provide the support necessary for
: standalone threaded C code; and they're carefully written to allow (and even to
: encourage) implementation on a pervasive generalized system exception mechanism.

: The fact is that any system with "out of the box" support for Ada or C++ already
: has this mechanism.

[...]
: Compaq C happens to support Microsoft's try/except syntax; while I don't think
: that's particularly elegant or easy to use, that's all that's necessary.

Are you speaking only of exceptions that are deliberately and
synchronously "thrown" within the process, or are you also including
external events -- signals, traps, and interrupts -- which can be
asynchronous and non-deliberate? Per the C and C++ standards, signal
handlers can't throw exceptions (or do much else except sometimes set
flags). I'm told that allowing signal handlers to throw exceptions
would be extremely difficult (even impossible) to implement correctly
and efficiently across a reasonable spectrum of architectures. (I'm
not convinced.)

In some applications, I need ways to asynchronously kill threads at
will. But only the thread itself can put its affairs in order, and
not leave the application in an inappropriate state. Signals are the
standard way to asynchronously connect an outside will to a running
program, and Ada- and C++-style exceptions are a good way for a thread
to order its affairs when it receives a surprise.

I would very much welcome a standard by which the two could work
together. (Are you saying that it already exists?)

Regards,
Tom Payne


Konrad Schwarz

02.05.2001, 13:15:59
Dave Butenhof wrote:

>
> Konrad Schwarz wrote:
>
> > Java groups threads into thread groups, allows threads to be suspended and
> > resumed, and defines the ThreadDeath exception, which is supposed to destroy
> > the thread. This functionality is a superset of that provided by Pthreads.
>
> No, suspend/resume and "thread death" are BUGS, not "functionality".
[...]

Ok, ok.

> These additional Java "features" are merely proof that the designers of Java paid no
> attention to the experience of those researching and using threads over the past 30
> years or so. (Or else they somehow thought they were writing a low-level
> implementation language for monolithic embedded system development; despite the fact
> that they were actually coding a large, complicated, and opaque interpreter for
> large, complicated, and opaque systems of widely varied nature.)

I agree. The designers clearly didn't have heaps of experience with multi-threading.
The decision to unify mutexes, condition variables, and objects, and to
make mutex locking part of the syntax, has loads of disadvantages.
On the other hand, these disadvantages are similar to those imposed by
the object-oriented approach in general. Java will never reach the speed
of C, just as C will never approach the speed of hand-crafted assembly language.

[...]


>
> Java adds some nice syntactic support for threads and synchronization. Still, that
> support is shallow and trivial, and often more limiting than enabling.
>
> I'm not trying to argue that Java is pointless or useless. Though it's hardly the
> perfect model of a portable interpretive programming language or a general threaded
> programming language, it "gets the job done". (And NOTHING is perfect.) However,
> anyone trying to push Java as a "higher model of threading" is jumping around rather
> dangerously on an awfully thin branch.

Well, I certainly am not.

Gerald Hilderink

03.05.2001, 07:28:25

"Dave Butenhof" <David.B...@compaq.com> wrote in message
news:3AEEA676...@compaq.com...

> Konrad Schwarz wrote:
>
> These additional Java "features" are merely proof that the designers of
> Java paid no attention to the experience of those researching and using
> threads over the past 30 years or so. (Or else they somehow thought they
> were writing a low-level implementation language for monolithic embedded
> system development; despite the fact that they were actually coding a
> large, complicated, and opaque interpreter for large, complicated, and
> opaque systems of widely varied nature.)
>
> Thread groups are different, but also fairly trivial "sugar coating"
> anyone could trivially implement on top of POSIX. The same is true of
> Java "monitors", which really aren't monitors in any useful sense as
> they have no semantic awareness of the shared data states being
> protected.

Why do Java "monitors" have no semantic awareness of the shared data states?
They are conditional, aren't they? The condition could express their states?

>
> > On the other hand, mutexes and condition variables are subsumed into
> > monitors, allowing only one condition variable per mutex. Furthermore,
> > monitors are tied to objects and can't exist independently of them and
> > regions of mutual exclusion are enforced syntactically. The
> > InterruptedException is a mechanism with which thread cancelation can
> > be implemented.
>
> InterruptedException can easily be implemented on top of pthread_cancel()
> in any system that's correctly written to use a system exception package.
> (Therefore, implementing cancellation on top of InterruptedException is
> trivial but rather convoluted.)
>
> Java adds some nice syntactic support for threads and synchronization.
> Still, that support is shallow and trivial, and often more limiting than
> enabling.

And that is the best part of Java.

Java disables much of the hazardous stuff that you usually can do with POSIX.
POSIX is not for everyone; it is meant for the specialists among us. IMHO, we
need multithreading (e.g., using POSIX), but multithreading is best
understood by the processor, and the concept is difficult for us designers
and architects to grasp. In general, multithreading is not
object-oriented. In Java, they tried to make multithreading object-oriented
by coupling synchronization to objects. They failed. Again, multithreading
is control-oriented, not object-oriented. For example, expressing
multithreading in the UML is hard and makes software unnecessarily complex and
harder to understand. Identifying the thread boundaries is a complex
process. Furthermore, multithreading makes formalizing the UML even harder.
I wonder if multithreading is formal or has a mathematical foundation.
Analyzing multithreading with Petri Nets is formal, but Petri Nets are good
for analysis, not for design.

POSIX is a good standard for low-level multithreading. At a higher level of
abstraction, i.e., the level of design and analysis (and implementation), we
need something else -- something that is object-oriented, simple, elegant,
and has a sound (formal) foundation. This something should deal with
exception handling between threads in a correct and secure way (not using
InterruptedException or pthread_cancel()). I think the answers can be found in
Ada and Occam, which have a common root derived from CSP (Communicating
Sequential Processes).

I remember that Tony Hoare (he "invented" the monitor) moved away from
monitors to CSP, because monitors were too difficult for reasoning about the
correctness of your program. He invented channel synchronization and other
nice compositional operators between processes (active objects) that allow
you to describe and analyze concurrency. At Oxford (UK), the parallel
programming language Occam was developed as an implementation of a subset of
CSP. Other such programming languages are Handel-C, ParC, Limbo, and Ada. The
silicon version was the good old transputer, which executed Occam programs
while maintaining their concurrency.

I have developed CSP packages for C, C++ and Java. These are developed for
real-time embedded systems. (http://www.rt.el.utwente.nl/javapp). Students
at our control laboratory build high-quality concurrent control software
using these CSP packages without being specialists in programming. The
learning cycle is small and the time-to-market is low. The Java version is
used by several universities worldwide. Furthermore, CSP provides an
excellent foundation for RT-Java and RT-UML. Also, an operating system is
not required (as with Ada).

With CSP, I am no longer using monitors or semaphores. It is great to build
my software in a plug-and-play way, which I call "compositional
programming".

>
> I'm not trying to argue that Java is pointless or useless. Though it's
> hardly the perfect model of a portable interpretive programming language
> or a general threaded programming language, it "gets the job done". (And
> NOTHING is perfect.) However, anyone trying to push Java as a "higher
> model of threading" is jumping around rather dangerously on an awfully
> thin branch.

I agree! However, the CSP package for Java provides a higher model of
threading for Java.

Gerald Hilderink


Dave Butenhof

03.05.2001, 08:12:46
Tom Payne wrote:

To say that I'm "not convinced" would be something of an understatement, since I
regularly use several systems that do it right. A Tru64 UNIX exception (or OpenVMS
"signalled condition") properly unwinds the scope of a signal handler (or OpenVMS AST
handler) to get back into normal program mode. Of course, whether the APPLICATION
correctly handles an asynchronous exception is a different matter, since exception
handlers aren't necessarily written to deal with asynchronous interruption. That's
one reason that POSIX cancellation, by default, occurs only at well defined
(synchronous) points. (While asynchronous cancelability can be enabled, you're not
allowed to do much of anything with it on, except to turn it off.)

Our implementation of POSIX threads, by default, converts "thread directed" signals
(that is, SIGSEGV, SIGBUS, SIGFPE) into native exceptions. (These are, effectively,
synchronous exceptions "thrown" by the application, though through actions perhaps a
bit more subtle than a "throw" statement.) That's a legacy from our old CMA and DCE
thread implementations, which did so (at the insistence of OSF, though they later
changed their minds because it complicated core file analysis, and arbitrarily
changed their "standard" in mid-stream after we'd already released and formed a
binary compatibility commitment). Since the behavior of an unhandled exception is to
kill the process with the original signal, there's little difference for a
"conforming" POSIX application without exception handlers, and any
application-installed signal handler will override the default. Whether this is a
good idea or a bad idea has been subject to much debate over the years; really, I'd
prefer to get rid of it, but that's not a realistic option. In any case, I wouldn't
expect any standard to require this. (In fact, I'd argue against such a requirement.
I'm not sure I'd argue against a standard that precluded it, though that might be
inconvenient for some of our users, if anyone actually depends on the current
behavior.)

> In some applications, I need ways to asynchronously kill threads at
> will. But only the thread itself can put its affairs in order, and
> not leave the application in an inappropriate state. Signals are the
> standard way to asynchronously connect an outside will to a running
> program, and Ada- and C++-style exceptions are a good way for a thread
> to order its affairs when it receives a surprise.

You NEVER need to "asynchronously kill threads". Though perhaps you really mean that
you can asynchronously REQUEST that a thread "clean up and terminate on its own at
the next convenient opportunity", which isn't the same thing. Whether that request
can be HANDLED asynchronously depends on the design of the thread you're killing. If
you're careful to design all code in all threads subject to such requests such that
an asynchronous exception at any point will properly clean up all state, you'd be
welcome to run with asynchronous cancelability enabled except when making external
calls. (Hardly any calls are async cancel safe, for the excellent reason that very
few can be made async cancel safe except by disabling async cancelability; which
would be a rather trivial and useless sense of "safety".)

> I would very much welcome a standard by which the two could work together. (Are
> you saying that it already exists?)

Depends on what you mean. As I said, the IA-64 ABI specification for UNIX 98 ought to
require that thread cancellation be based on a cross-language exception facility,
shared at least by C++. (But then, I never saw the final approved copy.) But of
course that only applies to systems implemented on IA-64 and claiming conformance to
UNIX 98 and the ABI, which certainly doesn't help you a bit unless you're writing
(exclusively) for IA-64. While I'd love to see the sanity infect other standards,
that hasn't happened. It's hard to see where else it could take hold. Cross-language
(and thread) issues aren't approached by any language standard, and the broad scope
of language support issues (syntax, runtime interfaces and packaging, object
language, runtime code region descriptors) are way below the level of source API
standards like POSIX.

This may remain only in the standard of common sense and a reasonable engineering
commitment to help customers instead of confusing them. In other words, this is a
"quality of implementation" issue. Now, if there were any evidence that the "quality"
implementations were substantially preferred by paying customers over the others,
that might help. T'ain't the case, though, so vendors will continue doing whatever
they feel like doing.

(Perhaps I should hope, explicitly and for the record, that "someone", and there are
several likely someones who follow this newsgroup, will take up the challenge of
fixing Linux and then wave it as a banner in everyone's faces. ;-) )

Konrad Schwarz

03.05.2001, 08:26:07

Gerald Hilderink wrote:


> POSIX is a good standard for low-level multithreading. At a higher-level of
> abstraction, i.e, the level of design and analysis (and implementation), we
> need something else .. something that is object-oriented, simple, elegant,
> and has a sound foundation (formal). This something should deal with
> exception handling between threads in a correct and secure way (not using
> InterruptExeption or pthread_cancel()). I think the answers can be found in
> Ada and Occam that have a common root derived from CSP (Communicating
> Sequential Processes).

I had a fair amount of exposure to Occam while at university and I
never really enjoyed it.

> [praise of Communicating Sequential Processes]

The disadvantage of message passing models is that they are
more expensive than shared memory models. With a shared memory model,
I need a mutex to ensure exclusive access to some piece of data. With message
passing, I need a separate thread (which gets and returns requests) for each
piece. Entering a mutex requires a lock operation, which can be performed
in user mode with no context switch in the uncontested case. The send/receive
pair that implements an "atomic transaction" similar to the transaction done
within the scope of a locked mutex requires two context switches.

Deadlock is possible in both.

Priority inversion is possible in both.

Pat Rogers

03.05.2001, 08:44:39
"Gerald Hilderink" <g.h.hi...@home.nl> wrote in message news:thbI6.78137

<snip>

> Also, an operating system is not required (as with Ada).

Just to clear up any misconceptions here -- this isn't the real point of
your post, by any means -- but Ada doesn't require an operating system.
There are plenty of bare machine systems running Ada out there.

You are probably thinking of a Run-Time System Environment, which is sort of
like an OS but is limited to providing what the language requires (but not
general things like a file system, sockets, etc.). If the programmer limits
the parts of the language they use, even the RTSE can be removed. This can
be useful for safety-critical certification, for example.

<snip>

---
Patrick Rogers Consulting and Training in:
http://www.classwide.com Real-Time/OO Languages
pro...@classwide.com Hard Deadline Schedulability Analysis
(281)648-3165 Software Fault Tolerance


Pat Rogers

03.05.2001, 09:27:46
"Konrad Schwarz" <konradDO...@mchpDOTsiemens.de> wrote in message
news:3AF14E5F...@mchpDOTsiemens.de...


At the risk of sounding like a skit from an old Saturday Night Live, "you're
both right".* When direct thread-to-thread communication is required,
nothing beats the rendezvous. However, when communication need not be
direct, that approach is too costly in terms of context switches. That is
why Ada 95 has both: it retains the rendezvous for use when appropriate, but
also has a high-level data-oriented synchronization mechanism that does not
invoke context switching (because the mechanism is not a thread, any more
than a semaphore or a mutex is a thread). Data-oriented synchronization is
provided by "protected objects".

The nice thing about the rendezvous is that it is very robust and high
level -- the synchronization is built in, so there is no risk of
"forgetting" to handle everything right, say when an exception occurs, and
communication is done via parameter passing. That is the same interface as
for monitors. It is very easy to use. I prefer the asymmetric
communication model to the symmetric one though, because it is more
flexible -- I don't want to specify both endpoints.

What makes protected objects special, when compared to mutexes and condition
variables, is the greater expressive power at comparable (or less) cost.
The interface is procedural, with mutual exclusion provided automatically by
the implementation as in monitors, but the programmer can also express
condition synchronization directly by associating boolean expressions with
the routines. Callers of routines with these condition expressions are
delayed until the expression becomes true. There are no race conditions on
these conditions, so no need to write the typical looping structure we have
to use with condition variables -- when a calling task is allowed to
continue because the condition has become true, the condition remains true.
That simplifies things. Too, like monitors, all the routines that reference
the protected data are in one place, so there is no need to scour the entire
program for references when changing things.
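
The "typical looping structure" mentioned above, which a protected object's entry barriers make unnecessary, looks like this in Java (a minimal sketch; the class and names are invented for illustration):

```java
// Minimal one-slot buffer showing the condition-variable idiom: the guard
// must be retested in a loop after every wakeup, because nothing guarantees
// it still holds when the waiter finally runs (unlike an Ada entry barrier).
public class Slot {
    private Integer value = null;

    public synchronized void put(int v) {
        while (value != null) {              // loop, not "if"
            try { wait(); } catch (InterruptedException ignored) { }
        }
        value = v;
        notifyAll();
    }

    public synchronized int take() {
        while (value == null) {
            try { wait(); } catch (InterruptedException ignored) { }
        }
        int v = value;
        value = null;
        notifyAll();
        return v;
    }

    // Tiny demonstration: a producer thread hands one value to the caller.
    public static int demo() {
        Slot s = new Slot();
        Thread producer = new Thread(() -> s.put(42));
        producer.start();
        int v = s.take();
        try { producer.join(); } catch (InterruptedException ignored) { }
        return v;
    }
}
```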

I say "less cost" above because, on a bare machine with the Real-Time
Systems Annex implemented, priorities are defined and integrated with the
semantics of tasks and protected objects such that no protected object lock
is actually required. On an operating system or RTOS, they will (probably)
be a *little* slower than a native semaphore. However, the performance will
be very close and the expressive power is much, much greater.

By the way, using the RTS Annex, deadlock is precluded and priority
inversions within the application are bounded because the Annex
requires/implements the Immediate Priority Ceiling Protocol. POSIX supports
a ceiling protocol too, of course, but we are stuck with the low-level
facilities of mutexes and condition variables.

* "It's a floor wax! It's a dessert topping!" circa 1976? Sorry if that is
too colloquial. :-)

Gerald Hilderink

03.05.2001, 09:47:56

"Konrad Schwarz" <konradDO...@mchpDOTsiemens.de> wrote in message
news:3AF14E5F...@mchpDOTsiemens.de...
>
>
> Gerald Hilderink wrote:
> > POSIX is a good standard for low-level multithreading. At a higher-level of
> > abstraction, i.e., the level of design and analysis (and implementation), we
> > need something else .. something that is object-oriented, simple, elegant,
> > and has a sound foundation (formal). This something should deal with
> > exception handling between threads in a correct and secure way (not using
> > InterruptedException or pthread_cancel()). I think the answers can be found in
> > Ada and Occam that have a common root derived from CSP (Communicating
> > Sequential Processes).
>
> I had a fair amount of exposure to Occam while at university and I
> never really enjoyed it.

Didn't you enjoy the language, the concept, or both? I didn't really enjoy
the language, but I did enjoy the concept.

>
> > [praise of Communicating Sequential Processes]
>
> The disadvantage of message passing models is that they are
> more expensive than shared memory models. With a shared memory model,
> I need a mutex to ensure exclusive access to some piece of data. With message
> passing, I need a separate thread (which gets and returns requests) for each
> piece. To enter a mutex requires a lock operation, which can be programmed
> in user mode and no context switch in the uncontested case. The send/receive
> pair that implements an "atomic transaction" similar to the transaction done
> within the scope of a locked mutex requires two context switches.

It is not this simple to compare CSP message passing with shared memory
models. It is not one or the other. CSP supports shared memory models in a
different way. Current channel implementations perform one context switch
per communication. A buffered channel performs no context switch until the
buffer empty or full states are reached. At least the competitive processes
get fairly scheduled. The context switch overhead is in most cases not
significant.
With shared memory models I sometimes have the feeling that concurrency is
killed by some degree of starvation between threads; one thread may continue
much longer than the other. This can be rather unfair. Time slicing solves
this matter, but I doubt that time slicing is a good real-time paradigm.

In a shared memory model, each time a mutex must be executed to access some
piece of shared data (even if it does not lock) it consumes some amount of
time. Channel communication is only necessary when communication is
required; until then, data is used exclusively and needs no synchronization.
The network of communicating processes determines the efficiency of the
application. I have experienced that some amount of (passive) buffering
using buffered channels can increase the throughput, and that full
(asynchronous) buffering decreases the throughput of the application. The
latter is comparable with shared memory models, I believe.
CSP is lightweight, not heavyweight. Channels perform context switches that
are very fast, because the lock/unlock implementations are highly optimized
within the channel implementation. Furthermore, semaphores can be used with
channel communication if necessary, for example when an array of data is
shared between processes.

Furthermore, the alternative construct (ALT) performs non-busy polling on
availability of channel communication. This usually eliminates complex
polling constructs. The ALT supports timeouts and exception handling in a
way that is simple and elegant (like a state machine).
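
One way to picture the ALT's timeout guard in Java terms (a loose approximation under invented names; this is not the CSP package's actual interface): multiplex the alternatives onto a single queue of tagged events and block, without busy waiting, using a bounded wait for the timeout case.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.TimeUnit;

// Rough ALT-like selection: whichever tagged event arrives first is chosen;
// if none arrives within the timeout, the timeout "guard" fires instead.
public class AltSketch {
    static String select(BlockingQueue<String> events, long timeoutMs) {
        try {
            String e = events.poll(timeoutMs, TimeUnit.MILLISECONDS); // non-busy wait
            return (e != null) ? e : "TIMEOUT";
        } catch (InterruptedException ie) {
            return "INTERRUPTED";
        }
    }

    static String demo() {
        BlockingQueue<String> events = new ArrayBlockingQueue<>(4);
        events.offer("ch1:sensor-reading");   // this alternative is already ready
        return select(events, 100);
    }

    static String demoTimeout() {
        return select(new ArrayBlockingQueue<>(1), 10); // nothing ready: timeout
    }
}
```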

The parallel construct (PAR) performs barrier synchronization, which is also
very lightweight compared to forks and joins.

I consider CSP to be a bunch of design patterns for building concurrent
software at a high level of abstraction. Ultimately, multithreading is
hidden under the hood of these patterns. CSP can even be burned onto
silicon.

>
> Deadlock is possible in both.
>

I agree. With CSP, deadlock is traceable and simple (formal) rules can
prevent deadlock at the level of design or implementation. CSP tools are
available for tracing deadlock. I have not yet used these tools though,
because the rules or guidelines were sufficient to build deadlock-free
software.

> Priority inversion is possible in both.

Yes, I agree, but CSP needs no priority inheritance protocol to solve this
matter. A simple buffered channel may solve the problem. The chance that a
CSP program suffers from priority inversion is small, because priority
inversion can easily be identified and normally indicates a bad design.
Nevertheless, I agree it is possible with CSP.

Thanks for your comments.

Gerald.

Gerald Hilderink

03.05.2001, 10:08:26

"Pat Rogers" <pro...@classwide.com> wrote in message
news:YocI6.334$0V.1...@nnrp1.sbc.net...

> "Gerald Hilderink" <g.h.hi...@home.nl> wrote in message news:thbI6.78137
>
> <snip>
>
> > Also, an operating system is not required (as with Ada).
>
> Just to clear up any misconceptions here -- this isn't the real point of
> your post, by any means -- but Ada doesn't require an operating system.
> There are plenty of bare machine systems running Ada out there.

Yes. The CSP package provides an embedded scheduler that does not require an
operating system and runs on bare processors. The scheduler is based on the
ideas of Alan Burns and Andy Wellings, who have contributed to Occam and Ada
95.

>
> You are probably thinking of a Run-Time System Environment, which is sort of
> like an OS but is limited to providing what the language requires (but not
> general things like a file system, sockets, etc.). If the programmer limits
> the parts of the language they use, even the RTSE can be removed. This can
> be useful for safety-critical certification, for example.

Yes, this is possible. If no CSP constructs and no CSP channels are used
then no scheduler (or RTSE) is included. Or, if no channel for the file
system is used then no file system is loaded. Furthermore, if no priorities
are used then the priority mechanism can be removed from the dispatcher and
queues. This results in a more optimized kernel.


Konrad Schwarz

03.05.2001, 11:05:25

Gerald Hilderink wrote:

> Didn't you enjoy the language, the concept, or both? I didn't really enjoy
> the language, but I did enjoy the concept.

It was a long time ago, but...
I didn't like the fact that recursion was no longer possible.
Pretty soon I bumped against the limitations of the compiler's theorem
prover (which was supposed to prevent certain kinds of errors)
and had to turn it off.
Using m4, I managed to create a program which caused the entire system
to freeze completely. Even the debugger didn't know what was
going on. It showed everything as normal.

There were all sorts of other annoyances. (The debugger was an example:
if I remember correctly, Occam channels were mapped directly to Transputer channels.
Since the debugger had to use these same channels, it needed to wrap
the application program's messages somehow. This had some sort
of negative consequence I can't remember anymore. etc. etc.)

So I guess I didn't enjoy the language. I liked reading Hoare's book.
But it was really far away from what I do now.

> It is not this simple to compare CSP message passing with shared memory
> models. It is not one or the other. CSP supports shared memory models in a
> different way. Current channel implementations perform one context switch
> per communication. A buffered channel performs no context switch until the
> buffer empty or full states are reached. At least the competitive processes
> get fairly scheduled. The context switch overhead is in most cases not
> significant.

I guess that this is true. However, buffering requires additional resources.
And if a response is required, two context switches are required, regardless
of buffering. Furthermore, you disregard the space aspects of requiring
a thread for each shared datum. Each of those threads requires its own
stack (not in Occam).

I can have thousands of mutexes easily, but not thousands of threads (for some
value of thousands).

> With shared memory models I have sometime the feeling that it kills
> concurrency caused by some degree of starvation between threads; one thread
> may continue much longer than the other. This can be rather unfair. Time
> slicing solves this matter, but I doubt if time slicing is a good real-time
> paradigm.

I looked at your "Wot, no Chickens?" example and I think
you criticise it unjustly. It solves
the problem "every chicken is consumed exactly once, chickens are
consumed if there is a hungry philosopher, if chickens run out,
the cook will make some more". It does not solve
the problem "No philosopher goes hungry" but there should be no
reason for anyone to believe that it does.

The following idea might do that better: give each philosopher
his own check-out area (i.e., queue) and have the cook service
all of the queues. (Another solution is to explicitly queue the
philosophers, which your CSP on Java solution probably does.)
So there are plenty of ways of ensuring fairness if it is required.

However, the question is whether, in the real world, the situations
requiring fairness are a sufficiently large proportion of all situations
to warrant placing the burden of fairness upon every situation.



> In a shared memory model, each time a mutex must be executed to access some
> piece of shared data (even if it does not lock) it consumes some amount of
> time. Channel communication is only necessary when communication is
> required.

I don't see any fundamental difference between the time required to lock a mutex
and the time required to queue a message in a channel.

> Until then, synchronization happens otherwise data is used
> exclusively. The network of communicating processes determines the
> efficiency of the application. I have experienced that some amount of
> (passive) buffering by using buffered channels could increase the throughput
> and that fully buffering (asynchronous) decreases the throughput of the
> application. The latter is comparable with shared memory models, I belief.
> CSP is lightweight not heavyweight. Channels perform context switches that
> are very fast because the lock/unlock implementations are highly optimized
> within the channel implementation.

But certainly no faster than the context switch potentially done by a mutex,
since the mutex is doing less.

> Furthermore, semaphores can be used with
> channel communication if necessary, for example, in case an array of data is
> shared between processes.

Ok, but we're arguing philosophies here.



> Furthermore, the alternative construct (ALT) performs non-busy polling on
> availability of channel communication. This usually eliminates complex
> polling constructs. The ALT supports timeouts and exception handling in a
> way that is simple and elegant (like a state machine).

Obviously, a wait must not busy-wait, however it is spelled.
But how is the ALT implemented internally? Surely as some kind of logic
that must test the individual cases, basically seeing if the protocol
field (is that the right word?) matches a compile-time constant.
Condition variables are much more powerful, since the user can
use arbitrary logic.

Gerald Hilderink

03.05.2001, 13:23:38

"Konrad Schwarz" <konradDO...@mchpDOTsiemens.de> wrote in message
news:3AF173B5...@mchpDOTsiemens.de...

>
>
> Gerald Hilderink wrote:
> > Didn't you enjoy the language, the concept, or both? I didn't really enjoy
> > the language, but I did enjoy the concept.
>
> It was a long time ago, but...
> I didn't like the fact that recursion was no longer possible.
> Pretty soon I bumped against the limitations of the compiler's theorem
> prover (which was supposed to prevent certain kinds of errors)
> and had to turn it off.

The Occam 2.1 compiler has all kinds of fixes and I believe an Occam version
2.2 may become available in the near future. I think that recursion is one
of the new features. I have to say that I am not up-to-date with the
developments of the Occam compiler, since I am not using Occam anymore.

>
> So I guess I didn't enjoy the language. I liked reading Hoare's book.
> But it was really far away from what I do now.
>
> > It is not this simple to compare CSP message passing with shared memory
> > models. It is not one or the other. CSP supports shared memory models in a
> > different way. Current channel implementations perform one context switch
> > per communication. A buffered channel performs no context switch until the
> > buffer empty or full states are reached. At least the competitive processes
> > get fairly scheduled. The context switch overhead is in most cases not
> > significant.
>
> I guess that this is true. However, buffering requires additional resources.
> And if a response is required, two context switches are required, regardless
> of buffering. Furthermore, you disregard the space aspects of requiring
> a thread for each shared datum. Each of those threads requires its own
> stack (not in Occam).

With active buffering (as in Occam, using a buffer process) 5 context
switches per communication are performed. Our implementation of the buffered
channel behaves as a wrapper around a monitor. In case you use a shared
channel, two monitors are used (one exclusive reader/writer monitor and
a mutex). Thus a buffered channel is passive and no extra threads are
created. This is equal to a protected buffer object in Ada. The CSP package
also supports call-channels, which are similar to protected objects and
entry/accept tasks in Ada. Call-channels are primitives according to the
semantics of CSP.
Thus, the CSP package implements some aspects of the Occam 3.0 specification
and has some features that are found in Ada.

>
> I can have thousands of mutexes easily, but not thousands of threads (for some
> value of thousands).
>
> > With shared memory models I have sometime the feeling that it kills
> > concurrency caused by some degree of starvation between threads; one thread
> > may continue much longer than the other. This can be rather unfair. Time
> > slicing solves this matter, but I doubt if time slicing is a good real-time
> > paradigm.
>
> I looked at your "Wot, no Chickens?" example and I think
> you criticise it unjustly. It solves
> the problem "every chicken is consumed exactly once, chickens are
> consumed if there is a hungry philosopher, if chickens run out,
> the cook will make some more". It does not solve
> the problem "No philosopher goes hungry" but there should be no
> reason for anyone to believe that it does.
>
> The following idea might do that better: give each philosopher
> his own check-out area (i.e., queue) and have the cook service
> all of the queues. (Another solution is to explicitly queue the
> philosophers, which your CSP on Java solution probably does.)
> So there are plenty of ways of ensuring fairness if it is required.

The author of "Wot, no Chickens?" is Peter Welch. The example illustrates
that the Java monitor is not real-time and becomes dangerously unfair under
certain timing conditions. The philosophers are clients that want to be
served by the canteen process. Chickens are always served and consumed by a
client, but one client may *never* be served and dies of starvation.
This is something we do not want in a mission-critical real-time
application. Imagine a wing-flap on an airplane that should respond to some
sensor input, but some other sensors are triggering at a certain frequency
that could overrule this sensor. The airplane could become unstable because
of this tiny failure. Usually, correct queuing would prevent this problem,
but that's the point of the example. The queuing mechanism of the Java
monitor is based on a single queue, whose ordering is wrong for real-time
systems. If Java monitors were correctly implemented according to Hoare's
description, everything would be fine.
The channel approach shows that the concurrent design of this example can
easily be mapped onto its implementation and that we can easily reason about
its concurrent behavior.
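
As an illustration of the fairness point (a hedged sketch only; this is not Welch's code, and all names are invented): because the built-in Java monitor's wait set is unordered, first-come-first-served service has to be forced on top of it with an explicit FIFO of waiter tickets, so the head of the queue is always served first.

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Sketch: the built-in monitor wait set is unordered, so a waiter can be
// overtaken forever; an explicit ticket queue restores first-come-first-served.
public class FairCanteen {
    private int chickens = 0;
    private final Queue<Object> waiters = new ArrayDeque<>();

    public synchronized void cook(int n) {
        chickens += n;
        notifyAll();                       // wake all; only the head may proceed
    }

    public synchronized void getChicken() {
        Object ticket = new Object();
        waiters.add(ticket);               // join the explicit FIFO
        while (chickens == 0 || waiters.peek() != ticket) {
            try { wait(); } catch (InterruptedException ignored) { }
        }
        waiters.remove();                  // served strictly in arrival order
        chickens--;
        notifyAll();
    }

    public synchronized int available() {
        return chickens;
    }
}
```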

> But certainly no faster than the context switch potentially done by a mutex,
> since the mutex is doing less.

Channels represent highly optimized rendezvous or buffer patterns. Our
channels use highly optimized versions of Hoare's monitor. Using channels is
much simpler than explicitly building monitor synchronization constructs.
Using channels is like reusing objects that implement a commonly used
monitor synchronization construct that performs communication between
threads. As with monitors, channels are universal too. The performance of
channels is comparable to monitor constructs.

> Obviously, a wait must not busy-wait, however it is spelled.
> But how is the ALT implemented internally? Surely as some kind of logic
> that must test the individual cases, basically seeing if the protocol
> field (is that the right word?) matches a compile-time constant.
> Condition variables are much more powerful, since the user can
> use arbitrary logic.

The transputer implementation must test the individual cases. We use a
queuing algorithm that is in accordance with a prioritized
first-come-first-served principle. The one on top of the queue gets served
first. The queue is priority-ordered based on arrival time and urgency;
urgency is more important than arrival time. This algorithm eliminates a
source of priority inversion. The sorting algorithm is fast. The urgency
represents the priority that is assigned to each guard. The ALT assigns
equal priority to its guards and a PRIALT assigns declining priority to its
guards. A nested ALT/PRIALT allows yet other logic.
Every guard has a condition variable in the form of a boolean that is true
or false. This condition is evaluated before the alternative construct is
executed. While the alternative construct is waiting, this boolean can be
updated by the ones who change the condition. After a channel becomes ready,
the alternative construct is resumed and checks the condition of the top
element (the guard that is ready). This is based on a dynamic scheme.

The notion of priority has never been fully implemented in Occam. The CSP
package supports priority by providing dynamic ALTs and real PRIPARs. Even
channels have a notion of priority, which reduces (but does not eliminate)
the danger of priority inversion. This is why channels are able to
eliminate one context switch.

As you see things have changed for CSP.

Bil Lewis

03.05.2001, 17:41:50
Dave,

> No, suspend/resume and "thread death" are BUGS, not "functionality". ...

That's the reason that there is no suspend/resume or async cancellation in
Java 2. As a matter of fact, you're part of the reason those things were
deprecated. (It was after talking with you in... '96? that I started leaning
on the Java guys. I won't claim that's the only reason they were deprecated,
but they did indeed get deprecated.)


> Thread groups are different, but also fairly trivial "sugar coating" anyone could
> trivially implement on top of POSIX. The same is true of Java "monitors", which
> really aren't monitors in any useful sense as they have no semantic awareness of the
> shared data states being protected.

Thread Groups were invented for a different reason & just exist as legacy. I
don't think they're used much at all. (I recommend against using them.)
Nobody's claiming Java does monitors a la Modula. Java just provides a nice
interface to locks which is very convenient in most cases. I like it.

> InterruptedException can easily be implemented on top of pthread_cancel()

I assume that's how it's done, though I've not looked.

> I'm not trying to argue that Java is pointless or useless.

Actually, Java IS pointer-less. (Har, har, har)

I'd suggest that Java does an excellent job of providing exactly
the required functionality (modulo a few warts which are best avoided)
at minimal cost. It's not as powerful as PThreads, but it's sufficient
to 91.3% of the tasks it's put to.

-Bil
--
================
B...@LambdaCS.com

http://www.LambdaCS.com
Lambda Computer Science
555 Bryant St. #194
Palo Alto, CA, 94301

Phone/FAX: (650) 328-8952 St Petersburg: +7 812 966 5359

Dave Butenhof

04.05.2001, 09:21:54
Bil Lewis wrote:

> Dave,
>
> > No, suspend/resume and "thread death" are BUGS, not "functionality". ...
>
> That's the reason that there is no suspend/resume, async cancellation in
> Java 2. As a matter of fact, you're part of the reason those things were
> deprecated.
> (It was after talking with you in... '96? that I started leaning on the Java
> guys. I won't claim that's only reason they were deprecated, but they did indeed
> get deprecated.)

My understanding was that they were indeed "deprecated", but that they hadn't been
removed. (I admit I haven't studied the more recent specs in detail.) Sure, compatibility
is important, and once they stuck themselves with these idiocies (though there was no
good excuse for having done so), they had to tread carefully in fixing the problems. I've
had reason to wonder how widely the deprecation is known (much less understood) among the
ranks of Java programmers. Certainly, a long time went by between their initial
acknowledgement of the problems and ANY obvious admission to the general population.

> > Thread groups are different, but also fairly trivial "sugar coating" anyone could
> > trivially implement on top of POSIX. The same is true of Java "monitors", which
> > really aren't monitors in any useful sense as they have no semantic awareness of the
> > shared data states being protected.
>
> Thread Groups were invented for a different reason & just exist as legacy. I don't
> think they're used much at all. (I recommend against using them.) Nobody's claiming
> Java does monitors ala Modula. Java just provides a nice interface to locks which is
> very convient in most cases. I like it.

For a certain model of locking, where either lock granularity is coarse, (generally
inefficient), or methods are kept extremely simple with no interrelationships, (usually
impractical), Java synchronized methods are indeed seductively simple. I'm not entirely
convinced that's good, but it's clearly convenient.

The interaction between "synchronized" as a method/block qualifier and "notify/wait" as
explicit language operations, is "odd", and I think results in some of the nastier
consequences. The worst part is the management of nested synchronization scopes as,
essentially, "recursive mutexes", and the folly of defining wait to "spin out" the
recursion depth to 0. (That is, releasing the lock rather than decreasing the depth by
1.) We've talked about this before. As long as all nested synchronization scopes are
entered with no invariants broken, this isn't harmful; however, the language provides no
support to the programmer in managing this requirement, and does not even document it.
When a lock is held, the system must presume that invariants are broken. (There's no
other reason to hold a lock.)
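
A small sketch of the hazard described above (invented names; illustration only): wait() releases the monitor entirely, even when it was entered through nested synchronized blocks, so another thread can get in while the waiter's "transaction" is half done.

```java
// wait() "spins out" the full recursion depth: the poker thread acquires the
// monitor while outer() is parked inside two nested synchronized blocks.
public class NestedWait {
    private final Object m = new Object();
    private boolean flag = false;
    private volatile boolean sawLock = false;

    void outer() {
        synchronized (m) {            // depth 1
            synchronized (m) {        // depth 2: recursive acquisition
                while (!flag) {
                    try { m.wait(); } // releases m completely, not one level
                    catch (InterruptedException ignored) { }
                }
            }
        }
    }

    void poker() {
        synchronized (m) {            // succeeds because wait() released m
            sawLock = true;
            flag = true;
            m.notifyAll();
        }
    }

    static boolean demo() {
        NestedWait n = new NestedWait();
        Thread t = new Thread(n::outer);
        t.start();
        n.poker();                    // runs concurrently with outer()
        try { t.join(); } catch (InterruptedException ignored) { }
        return n.sawLock;
    }
}
```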

Holding a lock across a call (or making a call with broken invariants) is almost always a
bad idea anyway... but with synchronization being a qualifier rather than an operation,
and with an OO language where nearly anything can be a call, this is a difficult goal.
Having no language support in managing this, even to the level of being able to diagnose
errors, is a disaster waiting to happen. (Actually, it probably isn't waiting. It's
probably happened a million times in all sorts of Java code that "seems to work most of
the time", where sporadic failures are simply written off as "flukes".)

None of these problems are unique to Java, but the design of the language is like a steep
sided bowl; not only leading everyone to the depths, but making it difficult and awkward
to avoid.

The worst problems could easily have been avoided had Java syntax granted to the compiler
knowledge of the protected data invariants. I.e., if it had anything remotely worthy of
the name "monitor". The locking and synchronization model is designed as if it had
monitors, but without any of the real substance to make that meaningful.

> > InterruptedException can easily be implemented on top of pthread_cancel()
>
> I assume that's how it's done, though I've not looked.
>
> > I'm not trying to argue that Java is pointless or useless.
>
> Actually, Java IS pointer-less. (Har, har, har)

I could pursue this line, but there's clearly no OBJECT to continuing.

> I'd suggest that Java does an excellent job of providing exactly
> the required functionality (modulo a few warts which are best avoided)
> at minimal cost. It's not as powerful as PThreads, but it's sufficient
> to 91.3% of the tasks it's put to.

I very much doubt that it's sufficient to any more than 91.299% of the tasks to which
it's applied; but I doubt that POSIX threads, ANSI C, ANSI C++, or even LISP, are really
any different. (Emacs, on the other hand, is a different matter. ;-) )

Everything has warts. (Well, I haven't, ever since my bizarre encounter with a projecting
screw on my childhood swingset, but that's a different story...)

My objection to Java's threading support is that it was clearly an afterthought, and done
quickly and superficially without much thought (or understanding) of what it all meant.
It looks good on the surface, but accomplishes little and causes as many problems as it
solves. Hey, they didn't screw it up as badly as Microsoft, nor did they ignore nearly as
many years of experience and research. Still, they could and should have done better, and
that irks me. Call it sour coffee beans.

Kaz Kylheku

04.05.2001, 11:49:48
On Fri, 04 May 2001 09:21:54 -0400, Dave Butenhof <David.B...@compaq.com>
wrote:

>My objection to Java's threading support is that it was clearly an
>afterthought, and done quickly and superficially without much thought (or
>understanding) of what it all meant. It looks good on the surface, but
>accomplishes little and causes as many problems as it solves. Hey, they didn't
>screw it up as badly as Microsoft, nor did they ignore nearly as many years of
>experience and research. Still, they could and should have done better, and
>that irks me. Call it sour coffee beans.

How about just Starbucks? ;)

Patrick TJ McPhee

05.05.2001, 12:22:01
In article <3AF2ACF2...@compaq.com>,
Dave Butenhof <David.B...@compaq.com> wrote:

% My objection to Java's threading support is that it was clearly an
% afterthought, and done
% quickly and superficially without much thought (or understanding) of
% what it all meant.

But it was then promoted as an integral feature of the language design.
Part of the problem is marketing.

--

Patrick TJ McPhee
East York Canada
pt...@interlog.com
