
CQS and multithreading


Greg C

Sep 17, 2001, 2:28:44 PM
I stumbled across a discussion on command-query separation in the
context of multithreading in the comp.object newsgroup, and came up
with the following thoughts (which are admittedly half-baked).

The criticism of CQS in an MT environment is this: CQS implies that
locking of a shared resource has to be transferred to the client and
that the object can no longer take responsibility. On the face of it,
this argument makes sense since the object managing the shared
resource has no notion of when a client will be "done" with its
resources.

Consider a simple stack. In the CQS model, a consumer client of the
stack might have code like this:

from
until
   stack.is_empty
loop
   do_something (stack.item)
   stack.remove
end

If two consumers were running in separate threads, and we go along
with the argument above, the code might change to look like this:

from
   lock (stack) -- apply a lock before testing is_empty
until
   stack.is_empty
loop
   do_something (stack.item)
   stack.remove
   unlock (stack) -- give other threads a chance
   lock (stack) -- and take another turn
end
unlock (stack) -- release the lock held when is_empty finally succeeds

But are these changes really necessary? Would it be possible to create
a stack class, say THREADSAFE_STACK, that can handle multithreading
internally and correctly?

THREADSAFE_STACK would maintain a unique internal state for each
thread accessing it. Intuitively, this seems possible to me. For the
example above, the stack would manage information like the current
item, and whether or not the stack was empty, on a per-thread basis
and execute correctly with the old client code. Each feature called
would have to determine which thread was active, perform any necessary
locking, and present consistent state back to that thread.
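
A sketch of what such a class might look like, assuming a MUTEX class
along the lines of the EiffelThread library and leaving the
thread-identity query deferred, since how to obtain it depends on the
thread library used:

deferred class THREADSAFE_STACK [G]

feature {NONE} -- Implementation

   mutex: MUTEX
         -- Guards `storage' and `reserved'

   storage: LINKED_STACK [G]
         -- The shared items

   reserved: HASH_TABLE [G, INTEGER]
         -- Item currently presented to each thread, keyed by thread id

   current_thread_id: INTEGER is
         -- Identifier of the calling thread
      deferred
      end

feature -- Status report

   is_empty: BOOLEAN is
         -- Does the calling thread see an empty stack?
      do
         mutex.lock
         Result := storage.is_empty and not reserved.has (current_thread_id)
         mutex.unlock
      end

feature -- Access

   item: G is
         -- Item reserved for the calling thread; reserve one if none yet
      do
         mutex.lock
         if reserved.has (current_thread_id) then
            Result := reserved.item (current_thread_id)
         else
            Result := storage.item
            storage.remove
            reserved.put (Result, current_thread_id)
         end
         mutex.unlock
      end

feature -- Removal

   remove is
         -- Consume the item reserved for the calling thread.
      do
         mutex.lock
         reserved.remove (current_thread_id)
         mutex.unlock
      end

end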

Take the case of a stack with a single entry, shared by two threads.
It seems to me that the correct behavior is for one thread to see the
stack as having the single element, which it can process and consume,
and the other sees the stack as empty. In other words, items in the
stack are never shared, only the stack itself.

Most likely, the stack would be composed of references. How does one
guarantee that the items referenced aren't being acted upon by other
threads? Or would a thread-safe stack only be able to hold thread-safe
items?

Stacks can be copied. What happens in this case? Should a thread-safe
stack be uncopyable?

I've not taken the time to explore what ISE has done in their MT code,
or the SmallEiffel MT extensions. I wanted to toss this out and see
what kind of holes others find in this approach.

Greg

Franck Arnaud

Sep 18, 2001, 1:48:43 PM
Greg C:

> But are these changes really necessary? Would it be possible to create
> a stack class, say THREADSAFE_STACK, that can handle multithreading
> internally and correctly?

BM's solution in OOSC is to change the semantics of assertions:
instead of correctness conditions, assertions become blocking
conditions (so in your stack example 'item' blocks until is_empty
becomes false). It may well be a case of throwing the baby out with
the bathwater.
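
For reference, a consumer under that model would read roughly as
follows; the precondition on the separate argument acts as a wait
condition rather than a correctness check. (STACK and do_something
stand in for whatever the client actually uses; MESSAGE is a
placeholder element type.)

consume_one (a_stack: separate STACK [MESSAGE]) is
      -- Process and remove one item; the call waits until
      -- `a_stack' is non-empty instead of failing.
   require
      not a_stack.is_empty
   do
      do_something (a_stack.item)
      a_stack.remove
   end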

> Most likely, the stack would be composed of references. How does one
> guarantee that the items referenced aren't being acted upon by other
> threads?

I think you're spot on. Given aliasing, you can't really do concurrency
in a safe way, so any typical implementation of threading in Eiffel
is doomed to be at odds with Eiffel's emphasis on correctness: it
becomes much more difficult to reason about the correctness of code.

The only way I can see to do concurrency in a way compatible with the
Eiffel spirit is message passing: there is (in the model) no shared
data between 'threads'. It's a bit unfashionable and has its own
issues (although lesser ones than concurrency with sharing, IMHO).
Agents may make it more conveniently implementable.

Otherwise there's the referential transparency route (all objects
are immutable so sharing is OK) but that's at odds with the
imperative nature of Eiffel.

> I've not taken the time to explore what ISE has done in their MT code,
> or the SmallEiffel MT extensions.

I don't think they address the problems you mention (in a more
transparent way than being careful and taking care of everything
explicitly). I'm afraid there's no magic solution, or it would
have been found and published (it's not for lack of trying that
it was not found).

Patrick Doyle

Sep 18, 2001, 5:01:17 PM
In article <62340e2a.01091...@posting.google.com>,

Greg C <gmc...@yahoo.com> wrote:
>
>The criticism of CQS in an MT environment is this: CQS implies that
>locking of a shared resource has to be transferred to the client and
>that the object can no longer take responsibility. On the face of it,
>this argument makes sense since the object managing the shared
>resource has no notion of when a client will be "done" with its
>resources.

This is the beginning of what I like to call "Synchronization
by Contract", which is an idea I have been tossing around. It is
not very well developed yet, but the premise is that shared resources
are protected not by synchronization per se, but by synchronization
contracts.

For instance, your stack thing might look like this:

deferred class STACK [ITEM]

feature

   lock: LOCK is
      deferred
      end

   top: ITEM is
      deferred
      end

   pop is
      require
         lock.is_mine
      deferred
      ensure
         lock.is_mine
      end

end

(The LOCK.is_mine query returns true iff the lock is owned by the
current thread.)

Implementations of STACK can decide, with a great deal of flexibility,
what LOCK is used to synchronize access to the STACK. For instance,
each stack could have its own lock; a stack could be part of a bigger
structure that is covered by a lock; a stack used in a sequential
program could use a dummy LOCK which is always considered to be
locked by the current (and only) thread; and so on. With the proper
optimizations, the STACK may not even need to keep track of the LOCK
when assertions are turned off. Thus, though this interface is
thread-safe, it doesn't preclude any implementation that I can think of.
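
To make the idea concrete, LOCK itself might look like this; a sketch
only, and the `acquire'/`release' names are mine:

deferred class LOCK

feature

   acquire is
         -- Block until the current thread owns the lock.
      deferred
      ensure
         is_mine
      end

   release is
         -- Give up ownership.
      require
         is_mine
      deferred
      end

   is_mine: BOOLEAN is
         -- Is the lock owned by the current thread?
      deferred
      end

end

class DUMMY_LOCK

inherit
   LOCK

feature

   acquire is
         -- Nothing to do in a single-threaded program.
      do
      end

   release is
         -- Nothing to do.
      do
      end

   is_mine: BOOLEAN is
         -- Always owned by the one and only thread.
      do
         Result := True
      end

end

A sequential program plugs in DUMMY_LOCK and pays nothing; a concurrent
one effects `acquire', `release' and `is_mine' in terms of whatever the
underlying thread library provides.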

Synchronization by Contract (let's call it SbC) provides the same
benefits over the usual self-contained synchronization as DbC
provides over defensive programming: the responsibility for
synchronization is cleanly spelled out, thus reducing redundant
synchronization the way DbC reduces redundant error checking.
Also, just as DbC does not disallow error checking inside a
routine if that is the best design, neither does SbC disallow
the usual internal synchronization idiom if that is the best design.

This is as far as I have gone in my thinking. I'd like to hear
people's comments if they have any, especially if I am not the
first to have thought of this. :-)

--
Patrick Doyle
doy...@eecg.toronto.edu

Daniel Yokomiso

Sep 19, 2001, 12:36:05 AM
doy...@eecg.toronto.edu (Patrick Doyle) wrote in message news:<GJvME...@ecf.utoronto.ca>...

Hi,

I think that your idea is good, but it does not answer Greg's
problems. It seems to me that Greg's problem is with doing more than
one operation on an object. In the STACK example he wanted to ensure
that the call "do_something (stack.item)" and the call "stack.remove"
operate on the same item. If they don't, he can remove something very
different from the STACK. I don't think there is an easy solution to
this problem: with enough operations, a client can call any number of
them in any conceivable order during a "transaction", and I don't
believe there is a way to anticipate every option.

You could define a pair of lock and unlock operations on any shared
object in order to achieve that, but it would require explicit calls
to lock and unlock, and that can be error-prone. If you use SCOOP
semantics you can write a dummy feature to put together the calls that
you want synchronized, and call this feature each time you need it. Or
you can define in the supplier a do_as_transaction feature which takes
an agent expression as a parameter; the agent will hold the operations
in the required order, and the supplier will execute it in a
synchronized way. I think that last one is the more elegant approach.
I thought about it today, when dealing with an SQL transaction: you
send an expression to an object (the RDBMS) and tell it to execute it
as a transaction. The same can be done to synchronize multiple
operations without explicit locking. I don't know if this is what
Franck Arnaud was thinking of when he talked about agents, but it
looks like the best approach to me, and it is simple (you just need to
change the class you want to work with ;-)). Like Dr. Meyer said:
"...all it takes is an original perspective".

In http://www-106.ibm.com/developerworks/library/j-king.html, if you
don't mind going through some Java stuff (it's about the Java
threading model and its problems), Allen Holub talks about
improvements to make using threads in an OO way safe. I think it has
several good answers for a safe, language-independent threading model.

And, I just thought about it: if you let SbC follow the same assertion
redefinition rules, anyone could take that "lock.is_mine" clause out
of the preconditions, but no one could add another lock to them. Just
a thought.
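
A minimal sketch of that do_as_transaction feature, assuming a MUTEX
class in the style of the EiffelThread library (creation and locking
feature names may differ):

class TRANSACTIONAL_STACK [G]

creation
   make

feature {NONE} -- Implementation

   mutex: MUTEX
         -- Guards `storage'

   storage: LINKED_STACK [G]
         -- The shared items (the usual stack features are elided)

feature -- Initialization

   make is
      do
         !!mutex.make
         !!storage.make
      end

feature -- Basic operations

   do_as_transaction (a_operations: PROCEDURE [ANY, TUPLE]) is
         -- Execute `a_operations' while holding the internal
         -- lock, releasing it even if an exception occurs.
      do
         mutex.lock
         a_operations.call ([])
         mutex.unlock
      rescue
         mutex.unlock
      end

end

A client would then write something like
my_stack.do_as_transaction (agent consume_top), where consume_top is a
hypothetical routine bundling the item/remove pair; the whole sequence
runs under one lock without the client ever touching it.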

Daniel Yokomiso.

Patrick Doyle

Sep 19, 2001, 12:57:18 AM
In article <61ce6e0a.01091...@posting.google.com>,

Franck Arnaud <fra...@nenie.org> wrote:
>
>I think you're spot on. Given aliasing, you can't really do concurrency
>in a safe way

Aliasing can blow DbC out of the water even without concurrency.
Look:

increment_two_counters (x: COUNTER; y: COUNTER) is
   do
      x.increment
      y.increment
   ensure
      x.value = old x.value + 1
      y.value = old y.value + 1
   end

This fails if x=y. Yes, you could write all your contracts to deal
with aliasing, but then couldn't you do that for the concurrent
case too?
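
For instance, an aliasing-aware version of the same contract needs
only one extra clause:

increment_two_counters (x: COUNTER; y: COUNTER) is
   require
      no_aliasing: x /= y
   do
      x.increment
      y.increment
   ensure
      x.value = old x.value + 1
      y.value = old y.value + 1
   end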

>I'm afraid there's no magic solution, or it would
>have been found and published

To believe that, you'd have to believe that everything which is
possible has already been discovered. :-)

Franck Arnaud

Sep 19, 2001, 7:24:47 AM
Patrick Doyle:

> Aliasing can blow DbC out of the water even without concurrency.
> Look:

Yes, but I think the difference is that aliasing errors without
concurrency are deterministic, if a pain. E.g., in your example
it always blows up for x = y. With concurrency, reproducing those
errors is likely to be much harder, so unless they occur with
high enough frequency to make the painful debugging worth it,
you'll just live with occasional crashes or fuzzy behaviour, or
try to clumsily patch around it.

Franck Arnaud

Sep 19, 2001, 7:45:46 AM
Patrick Doyle:

> pop is
>    require
>       lock.is_mine
>    deferred
>    ensure
>       lock.is_mine
>    end

Maybe some issues:

The assertions do not check that someone does not take and
give back the lock during the body of the call (maybe it's
unlikely?).

It's also more likely than ordinary assertions not to be
caught during testing, as the possibility of errors depends
on thread scheduling, a problem made worse because it is
influenced by the fact that you are debugging (your code
does not run at the same speed with assertions enabled).
Maybe that problem could be attacked with a debugging
scheduler designed to produce maximum coverage of
possible thread-scheduling scenarios? (Sounds hard.)

Generally, threading adds a pseudo-random generator to
the problem of extensive assertion coverage. In itself,
a bit of randomness could help detect cases not otherwise
detected, but the number of possible states is multiplied
compared to the non-concurrent case.

Patrick Doyle

Sep 19, 2001, 12:53:00 PM
In article <a1ae5139.01091...@posting.google.com>,

Daniel Yokomiso <daniel_...@yahoo.com.br> wrote:
>doy...@eecg.toronto.edu (Patrick Doyle) wrote in message news:<GJvME...@ecf.utoronto.ca>...
>>
>> This is the beginning of what I like to call "Synchronization
>> by Contract", which is an idea I have been tossing around.
>
[...]

>
> I think that your idea is good, but it does not answer Greg's problems.
>It seems to me that Greg's problem is with doing more than one
>operation on an object. In the STACK example he wanted to ensure that
>the call "do_something (stack.item)" and the call "stack.remove"
>operate on the same item.

With the SbC approach, it becomes clear from the contracts that the
client is responsible for synchronization of the STACK. From there,
it's only a short mental leap to understand that, in Greg's example,
the client must do both stack.item and stack.remove between acquiring
and releasing the lock.
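
Concretely, Greg's consumer loop under SbC might become the following
(assuming LOCK offers `acquire' and `release'; those names are not in
the original sketch):

from
   stack.lock.acquire
until
   stack.is_empty
loop
   do_something (stack.item)
   stack.remove
end
stack.lock.release

The contracts on `item' and `remove' then document, and check, that the
pair executes under one continuous ownership of the lock.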

>I don't think there is an easy solution to this
>problem: with enough operations, a client can call any number of them
>in any conceivable order during a "transaction".

I was hoping that SbC would make it clear that, in that case, the
responsibility is on the client to acquire the lock for the duration
of the transaction.

>Or you can define in the supplier a
>do_as_transaction feature which takes an agent expression as a
>parameter; the agent will hold the operations in the required order,
>and the supplier will execute it in a synchronized way. I think that
>last one is the more elegant approach.

Yes, I think that is quite an elegant approach.

>And, I just thought
>about it: if you let SbC follow the same assertion redefinition rules,
>anyone could take that "lock.is_mine" clause out of the preconditions,
>but no one could add another lock to them. Just a thought.

To me, that makes sense. If the implementation turns out not to
need synchronization--perhaps it uses volatile variables instead,
or makes use of some underlying atomic operations--then this requirement
can be removed in the subclass.

And it's hard to think of a case where you'd want to require that the
caller has two locks. :-)
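
Eiffel's redefinition rules give exactly that behaviour: a
redeclaration can weaken a precondition with `require else' but can
only strengthen a postcondition with `ensure then'. A hypothetical
lock-free descendant of the STACK sketch might read (using the
DUMMY_LOCK from earlier in the thread):

class LOCK_FREE_STACK [ITEM]

inherit
   STACK [ITEM]

feature

   lock: LOCK is
         -- A dummy lock that always reports ownership, so the
         -- inherited postconditions hold trivially
      once
         !DUMMY_LOCK! Result
      end

   top: ITEM
         -- Top element (storage details elided)

   pop is
         -- Remove the top element using an atomic primitive.
      require else
         no_lock_needed: True
      do
         -- (implementation using atomic operations, elided)
      end

end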

Patrick Doyle

Sep 19, 2001, 1:13:00 PM
In article <61ce6e0a.01091...@posting.google.com>,
Franck Arnaud <fra...@nenie.org> wrote:
>Patrick Doyle:
>
>> pop is
>>    require
>>       lock.is_mine
>>    deferred
>>    ensure
>>       lock.is_mine
>>    end
>
>Maybe some issues:
>
>The assertions do not check that someone does not take and
>give back the lock during the body of the call (maybe it's
>unlikely?).

I'm not sure I see what you mean. The LOCK class is supposed
to be a mutual exclusion mechanism. If the lock is mine, then
there is no way another thread could get the lock unless I release
it first.

Granted, if the thread accidentally releases the lock at the wrong
time, then you'll have problems; but this is exactly the same risk
you have with any contract. Nondeterminism doesn't enter into it.

>It's also more likely than ordinary assertions not to be
>caught during testing as the possibility of errors depends
>on thread scheduling, a problem made worse because it is
>influenced by the fact that you are debugging (your code
>does not run at the same speed with assertions enabled).

This risk is minimal. The predicate "lock.is_mine" has been
chosen precisely because it cancels the effects of nondeterminism.
A system would have to be very carefully broken in order to pass
this test and still have synchronization problems.

I think that these synchronization contracts are no more susceptible
to false positives or false negatives than any other contract. Of
course, I would be delighted if someone could prove me wrong. :-)

Roger Browne

Sep 19, 2001, 2:37:10 PM
Patrick Doyle:

> > pop is
> >    require
> >       lock.is_mine
> >    deferred
> >    ensure
> >       lock.is_mine
> >    end

Franck Arnaud:


> The assertions do not check that someone does not take and
> give back the lock during the body of the call (maybe it's
> unlikely?).

These issues are quite thoroughly explored in Microsoft's "Vault"
language, where a synchronization lock is just one kind of "resource
key":

http://research.microsoft.com/projects/Vault/

Vault is a C-like language with a C-like DBC, i.e. a DBC that addresses
low-level correctness issues using a concise notation.

Regards,
Roger
--
Roger Browne - ro...@eiffel.tm - Everything Eiffel
19 Eden Park Lancaster LA1 4SJ UK - Phone +44 1524 32428

Patrick Doyle

Sep 19, 2001, 10:41:45 PM
In article <2001091918371...@localhost.local>,

Roger Browne <ro...@eiffel.tm> wrote:
>
>These issues are quite thoroughly explored in Microsoft's "Vault"
>language, where a synchronization lock is just one kind of "resource
>key":
>
> http://research.microsoft.com/projects/Vault/
>
>Vault is a C-like language with a C-like DBC, i.e. a DBC that addresses
>low-level correctness issues using a concise notation.

This is an interesting link. My take on it is that this is a step
toward what we've always wanted: compile-time DBC. The "keys" are
nothing but named propositions, and the contracts specify which
propositions are true before and after a routine runs. The semantics
of the propositions are defined by the context; the system doesn't
know what any key "means". Is there more to it than this?

Now, if it supported predicates rather than propositions, and it had a
theorem prover so it could prove "x > 1" if given "x > 2", then
we'd really have something.

Roger Browne

Sep 21, 2001, 10:15:58 AM
I wrote:
> > http://research.microsoft.com/projects/Vault/

Patrick Doyle wrote:
> This is an interesting link. My take on it is that this is a step
> toward what we've always wanted: compile-time DBC.

It seems to be a _big_ step towards compile-time DBC. In the example
given on the website, we need to ensure that a file is "open" before it
is "read". The compiler can ensure this at compile-time. All that needs
to be taken "on faith" is that the lowest-level kernel routine will
either open a file successfully or set a status flag. The correctness of
the rest of the system follows statically from this.

> Is there more to it than this?

You want more than that?

(I don't know if there is more to it than that; some of the web pages
won't display with Netscape 4.77.)

> Now, if it supported predicates rather than propositions, and it had a
> theorem prover so it could prove "x > 1" if given "x > 2", then
> we'd really have something.

That would extend its application to numerical computation, where there
are too many states to enumerate. But for other applications, just
associate one proposition with each interesting predicate.

I thought the downside was that Vault is targeted to lower-level
computation (device drivers, memory managers, etc.) and does not
integrate with exception handling.

Darren New

Sep 21, 2001, 1:19:10 PM
Patrick Doyle wrote:
> This is an interesting link. My take on it is that this is a step
> toward what we've always wanted: compile-time DBC. The "keys" are
> nothing but named propositions, and the contracts specify which
> propositions are true before and after a routine runs.

This is something that Robert Strom at IBM T.J. Watson developed for a
language called Hermes. He called it "typestate". At each point in the
code, you can deduce statically what's going on (hence the "type"), but
that changes for a variable as the program runs (hence the "state"). It
was used for efficiency as well: when you passed a large parameter and
said "this is initialized when I call the routine and uninitialized
when the routine returns", the compiler knew that the value could be
passed by address without aliasing problems. I.e., Hermes was designed
so that all args were passed by value (since there were no pointers),
but the compiler knew when it could pass by reference.

Compiling a Hermes program consisted of simply saying "OK, after this,
I'll assume it's compiled", basically. (Err, you could invoke the
compiler from within a program, that is.) If the compiler knew a
program was compiled before, it didn't have to do anything, so you
could store pre-compiled programs in a library, for example.

--
Darren New
San Diego, CA, USA (PST). Cryptokeys on demand.
Who is this Dr. Ibid anyway,
and how does he know so much?

Patrick Doyle

Sep 21, 2001, 1:11:08 PM
In article <2001092114155...@localhost.local>,

Roger Browne <ro...@eiffel.tm> wrote:
>
>Patrick Doyle wrote:
>> This is an interesting link. My take on it is that this is a step
>> toward what we've always wanted: compile-time DBC.
>
[...]

>
>> Is there more to it than this?
>
>You want more than that?

Hey, you chopped what I was referring to as "this"! :-)

What I was thinking of was some playing around I did a few years ago
where I tried to convert contracts into Prolog statements, and then
run the resulting Prolog program to prove or disprove the assertions.
(I got tired of it before I got anywhere with it. :-) Using only
propositions, rather than predicates, means there is a very large class
of assertions that can't be modelled.

Having said that, I'm not sure Vault's keys really aren't predicated
on the object they're attached to. It's a little hard to tell exactly
what they are.

>> Now, if it supported predicates rather than propositions, and it had a
>> theorem prover so it could prove "x > 1" if given "x > 2", then
>> we'd really have something.
>
>That would extend its application to numerical computation, where there
>are too many states to enumerate.

Actually, that's what I meant: not to view it as an enumeration of
states, but to be able to express logical implications, like this
in Prolog:

greater_than(A, C) :- greater_than(A, B), greater_than(B, C).

>But for other applications, just
>associate one proposition with each interesting predicate.

Talk about enumerating too many states! :-)

>I thought the downside was that Vault is targeted to lower-level
>computation (device drivers, memory managers etc)

Well, to me, supporting low-level computation as well as abstraction is
a very good thing.

Roger Browne

Sep 21, 2001, 4:04:28 PM
> >Patrick Doyle wrote:
> >> This is an interesting link. My take on it is that this is a step
> >> toward what we've always wanted: compile-time DBC.
> [...]

> >> Is there more to it than this?

I replied:


> >You want more than that?

Patrick:


> Hey, you chopped what I was referring to as "this"! :-)

Sorry! I chopped it for brevity; I figured the chopped text was just
elaborating on the notion of "a step towards compile-time DBC". But I
agree the edited version conveys a different feeling.

> >> Now, if it supported predicates rather than propositions, and it had a
> >> theorem prover so it could prove "x > 1" if given "x > 2", then
> >> we'd really have something.
> >
> >That would extend its application to numerical computation, where there
> >are too many states to enumerate.
>

> Actually, that's what I meant: not to view it as an enumeration of
> states, but to be able to express logical implications, like this
> in Prolog:
>
> greater_than(A, C) :- greater_than(A, B), greater_than(B, C).

I take your point. This is not just needed for numeric computation;
it's also needed for design-time DBC of operations like
STRING.substring, which manipulate INTEGER indices and counts.

> >I thought the downside was that Vault is targeted to lower-level
> >computation (device drivers, memory managers etc)
>

> Well, to me, supporting low-level computation as well as abstraction is
> a very good thing.

I agree of course. But if exceptions are not integrated, and you have to
use status codes instead, then a level of abstraction is lost.

Patrick Doyle

Sep 21, 2001, 10:14:26 PM
In article <2001092120042...@localhost.local>,

Roger Browne <ro...@eiffel.tm> wrote:
> Patrick Doyle wrote:
>> Roger Browne wrote:
>> >
>> >I thought the downside was that Vault is targeted to lower-level
>> >computation (device drivers, memory managers etc)
>>
>> Well, to me, supporting low-level computation as well as abstraction is
>> a very good thing.
>
>I agree of course. But if exceptions are not integrated, and you have to
>use status codes instead, then a level of abstraction is lost.

Oops, now it is I who have chopped the relevant quote. I thought
you were making two separate points.

Roger Browne

Sep 21, 2001, 4:37:09 PM
Patrick Doyle wrote:
> > ... contracts specify which
> > propositions are true before and after a routine runs.

Darren New wrote:
> This is something that Robert Strom at IBM T.J. Watson developed for a
> language called Hermes...

Hermes sounds like it embodies quite a few interesting ideas:

http://www.funet.fi/pub/languages/hermes/hermes-announcement

But that announcement is from 1991. I can't find anything recent.

Franck Arnaud

Sep 24, 2001, 11:57:52 AM
Roger Browne:

> In the example given on the website, we need to ensure that a file
> is "open" before it is "read". The compiler can ensure this at
> compile-time.

But you can also do that with an ordinary type system:

deferred class FILE_OPENER ...
feature
   open (a_name ...) is ...
         -- open calls one of:
   when_opened (a_stream: STREAM_OF_SOME_SORT) is deferred ...
   when_failed (a_why: ERROR_STATUS) is deferred ...

Of course, presented like this it looks a bit heavyweight, but
I'm sure you could find a moderate amount of syntactic sugar
to make it quite natural.
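
Fleshed out, substituting EiffelBase's PLAIN_TEXT_FILE and a plain
STRING for the elided stream and error types (a guess at the intent,
not a definitive rendering):

deferred class FILE_OPENER

feature

   open (a_name: STRING) is
         -- Attempt to open `a_name' for reading, then call back
         -- exactly one of `when_opened' or `when_failed'.
      local
         file: PLAIN_TEXT_FILE
      do
         !!file.make (a_name)
         if file.exists and then file.is_readable then
            file.open_read
            when_opened (file)
         else
            when_failed ("cannot open: " + a_name)
         end
      end

   when_opened (a_stream: PLAIN_TEXT_FILE) is
         -- React to a successful open; `a_stream' is ready for reading.
      deferred
      end

   when_failed (a_why: STRING) is
         -- React to a failed open.
      deferred
      end

end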

Are there cases where Vault's facilities could be used and
where the type system really cannot help?

Roger Browne

Sep 24, 2001, 3:54:56 PM
I wrote:
> > In the example given on the website, we need to ensure that a file
> > is "open" before it is "read". The compiler can ensure this at
> > compile-time.

Franck Arnaud wrote:
> But you can also do that with an ordinary type system:

Well ... I guess the Vault type system is just an "ordinary" type system,
with a "moderate amount of syntactic sugar" and a bit of extra compiler
support.

It's the same as regular (runtime) Design By Contract. It's just
"ordinary" executable instructions, with a "moderate amount of syntactic
sugar" and a bit of extra compiler support.

DbC without compiler support is tedious, error-prone and often more
trouble than it's worth. But DbC with compiler support is elegant,
powerful, and one of the strongest aids towards software correctness.

The syntactic sugar of DbC adds structure to the application (by
isolating the contracts into separate syntactic elements), and
systematically leverages the power of the declared contracts (e.g. by
combining inherited assertions with newly-introduced assertions).

Similarly, Vault provides systematic support to verify at compile-time
that operations are only applied to objects having the appropriate state.
Vault adds structure to the application (by isolating the correctness
requirements into separate syntactic elements), and systematically
leverages the power of the correctness requirements (by checking at
compile time that the application software respects them).

> Are there cases where Vault's facilities could be used and
> where the type system really cannot help?

Are there cases where Eiffel's DbC can be used and where the "if
instruction" really cannot help? I doubt it. I guess it's the same with
Vault's facilities.

Let's see how Vault's ideas could be applied to Eiffel. One
straightforward thing that can be done using Vault is to determine at
compile-time that no call can be made on a void target. We can easily
achieve this in today's Eiffel by wrapping every call in an
If_instruction, like this:

if foo /= Void then
   foo.some_call
end

but that would be so tedious and inefficient that we don't do it (if we
are "sure" that the target will be non-void). Instead, we write
"require foo /= Void" as our precondition.

Suppose instead that Eiffel allowed a compile-time attribute to be
associated with every entity. This would be a BOOLEAN-valued attribute to
indicate the voidness or otherwise of the entity, as follows:

voidness = false:
the entity is known (by the compiler) to be non-void

voidness = true:
the entity is not known (by the compiler) to be non-void

Immediately after a creation instruction, the compiler knows that
'voidness' is false. After an assignment of 'Void', the compiler knows
that 'voidness' is true. After any other assignment, the compiler
knows that the 'voidness' of the target takes on the value of the
'voidness' of the source. Within an If_instruction (e.g. "if entity =
Void then ... else ... end") the compiler knows the 'voidness' in each
branch of the If_instruction. Etc.

It's trickier in the case of a feature call, where attachment of an
actual argument to a formal argument takes place. A smart compiler that
performs whole-program analysis may be able to determine whether the
'voidness' of the formal argument is 'true' or 'false'. Otherwise, it
must "play safe" and set 'voidness' to 'true'.

Any time there is a qualified call, the compiler will check that the
target 'voidness' is false, and therefore can ensure at compile time that
there is no chance of a "feature call on void target" error at runtime.

But it's not quite as easy as that. Consider the following Eiffel code:

bar (a: A) is
   require
      a /= Void
   local
      b: B
   do
      a.some_call
      !!b
      b.some_call
   end

In Vault-for-Eiffel, we would dispense with the run-time precondition:

bar (a: A) is
   local
      b: B
   do
      a.some_call
      !!b
      b.some_call
   end

Suppose the compiler cannot determine statically that the 'voidness' of
'a' is 'false'. The compiler must then (conservatively) set the
'voidness' of 'a' to 'true'. Therefore, it reports a compile-time error
at the instruction "a.some_call". To get rid of this compile-time error,
we must wrap the call like this:

bar (a: A) is
   local
      b: B
   do
      if a /= Void then
         a.some_call
      end
      !!b
      b.some_call
   end

Now the compiler can determine that the 'voidness' of the target prior to
every call is 'false'.

But wait a minute - are we back to the situation where the programmer is
writing run-time tests without the assistance of DbC? Not really, because
the programmer must _only_ write the test in those situations where the
compiler isn't convinced that the 'voidness' of 'a' is 'false'. This is
likely to only be the case in a small percentage of the calls throughout
the system, yet our program is guaranteed never to core dump due to a
feature call on a void target.

With DbC, we lose our protection as soon as we switch off run-time
assertion monitoring. With Vault-for-Eiffel, we are always protected, and
yet we only need to write explicit tests where they are really needed.

The simple example above just uses a BOOLEAN-valued compile-time
attribute. But in Vault, the compile-time attributes can be what Vault
calls "Variants", which seem to me to be equivalent to (say) Haskell's
Algebraic Data Types. So it's really rather powerful.

Franck Arnaud

Sep 25, 2001, 6:10:41 AM
Roger Browne:

> It's the same as regular (runtime) Design By Contract. It's just
> "ordinary" executable instructions, with a "moderate amount of syntactic
> sugar" and a bit of extra compiler support.

That's convincing. I guess my argument boils down to the question:
is making the type system more expressive an interesting avenue?

The type system is already a tool for DbC, as it really boils
down to assertions about what methods are available on each object.
So in a way the current situation is that DbC that can be
compile-time checked is in the type system, and DbC that is
run-time checked is in executable assertions. I think it's
an attractive division of tasks which might be worth maintaining
if we try to do more compile-time assertions. It may be easier
to reason about programs if that division is well enforced.

Your void example illustrates the point. Presented as you
did, you have to start thinking about what a compiler can
deduce statically from ordinary executable code. The
void problem is, I guess, one which could be dealt with
relatively easily by the type system (remove NONE,
add a library class MAYBE [X], add 'constant' attributes
and local variables or generalised default values).
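
Such a MAYBE class could be as small as this (a sketch; feature names
are mine):

class MAYBE [G]

creation
   make_void, make_set

feature -- Initialization

   make_void is
         -- Hold no value.
      do
         is_set := False
      end

   make_set (a_item: G) is
         -- Hold `a_item'.
      do
         item := a_item
         is_set := True
      end

feature -- Access

   is_set: BOOLEAN
         -- Is a value present?

   item: G
         -- The value; meaningful only when `is_set'

   forced_item: G is
         -- The value, with the absence case ruled out by contract
      require
         value_present: is_set
      do
         Result := item
      end

end

Clients are then pushed to go through `is_set', or the contracted
`forced_item', rather than dereferencing a possibly-void entity.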

I'm concerned that we take the type system for granted as
something unchangeable, and may miss an opportunity to
distribute the work in a nice and orthogonal way.

Conversely, if Vault-style flow analysis proved
good, maybe the opposite argument applies, and traditional
type systems should be ditched, so that, for instance, from:

r (x) is
   do
      x.a
   end

-- a static contract is inferred:
x not void
x has feature a

if a had parameters, they would have inferred contracts too, which
is equivalent (?) to what typechecking does currently. There are some
immediate benefits (all supertypes are implied and optimal).

Ambiguous static contracts (a "has feature" that cannot be inferred
well) could of course be made explicit and validated by the compiler,
but quite a lot of the "method not known" errors can be inferred, I
guess.
