On 05/12/2022 21:39, Dmitry A. Kazakov wrote:
> On 2022-12-05 17:56, David Brown wrote:
>> On 05/12/2022 16:14, Dmitry A. Kazakov wrote:
>>> On 2022-12-05 15:38, David Brown wrote:
>>>> On 05/12/2022 12:06, Dmitry A. Kazakov wrote:
>>>>> On 2022-12-05 11:40, David Brown wrote:
>>>
>>>>> The task of a specification designer is to mandate weakest
>>>>> precondition possible. Any precondition is a heavy burden on the
>>>>> code user and a source of bugs.
>>>>>
>>>>
>>>> No.
>>>>
>>>> A good strong precondition makes the use of the function clear, the
>>>> implementation simpler and more efficient, and the whole thing
>>>> easier to test.
>>>
>>> Strong precondition is a booby trap.
>>
>> Only if it is not clear and documented.
>
> I see it in the boldest font available: "garbage ahead, proceed at your
> own risk."
>
>>>> A weak precondition makes the function more flexible to call, but
>>>> harder to implement.
>>>
>>> Weak precondition guaranties defined behavior.
>>
>> Which is pointless if the defined behaviour is useless or wrong.
>
> Any defined behavior is better than any undefined one.
>
Wrong. I've given clear arguments why that is not the case - if they
don't convince you, there's little point in repeating them.
>>>>> People frequently confuse preconditions with erroneous inputs.
>>>>
>>>> If the inputs satisfy the preconditions, they are not erroneous.
>>>
>>> They are. Violated precondition is a bug. Erroneous input is just
>>> anticipated event, e.g. a user typed 68 minutes.
>>
>> That all depends on your definitions of "error" and "bug".
>
> Error is an undesired, exceptional state. Bug is an illegal state.
>
That's one choice of definitions - far from a universal one, but we'll
go with them for now.
>> If you specify your square root function with a precondition of
>> "true", then no input is erroneous.
>
> No. It is so that input is a caller's bug = contract violation. Negative
> input is an erroneous, exceptional input. No difference to inputs
> causing + to overflow. Care to write a strong precondition for +, *, /,
> gamma etc?
int add_int(int a, int b);
precondition: (a + b >= INT_MIN) && (a + b <= INT_MAX)
(reading "a + b" as the mathematical sum, of course, not as a C
expression that could itself overflow)
It's not rocket science.
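To show the caller's side of that contract, here is a minimal sketch (my own illustration; the helper name `try_add_int` is invented) of how a caller can establish the precondition before the call, using checks that never overflow themselves:

```c
#include <assert.h>
#include <limits.h>
#include <stdbool.h>

/* Hypothetical helper: returns true and writes *sum when the
 * mathematical value of a + b fits in an int, i.e. exactly when
 * add_int's precondition holds.  The tests rearrange the comparison
 * so that no intermediate expression can overflow. */
static bool try_add_int(int a, int b, int *sum)
{
    if (b > 0 && a > INT_MAX - b) return false;  /* would exceed INT_MAX */
    if (b < 0 && a < INT_MIN - b) return false;  /* would go below INT_MIN */
    *sum = a + b;   /* precondition established, addition is safe */
    return true;
}
```

The point is that the check belongs at the call site, where the caller knows what the operands mean and what a failed check should do.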
And yet you want to specify what a square root function should do when
given this mathematically unworkable input.
>>>> Then the user reads the postcondition and sees that for negative x
>>>> it raises an exception. Now my code needs to handle the possibility
>>>> of an exception - that's not what I want!
>>>
>>> If you do not want it, don't pass negative number as an argument.
>>
>> I don't want the possibility to exist in my code.
>
> Use a different type. E.g. define sqrt on a subtype:
>
> subtype Non_Negative_Float is Float range 0.0..Float'Last;
> function Sqrt (X : Non_Negative_Float) return Float;
>
You could do that, yes.
> Note that this does not save you. Because it simply moves the
> postcondition to the constraint check.
>
> sqrt(-1.0)
>
> will still propagate exception: Constraint_Error instead of, maybe,
> Numeric_Error.
That's all up to the typing system you use. Presumably that would be a
compile-time error, not an exception.
>
>> I don't want to use a function that might throw an exception - that
>> would just mean more effort on my part to deal with the possibility
>> and test that handler code.
>
> You don't use arithmetic +,-,*,/? That is very courageous...
>
These don't throw exceptions in the languages I use. In some languages,
they are specified with constraints on their inputs. In other
languages, arbitrary precision arithmetic is used to get correct answers.
(In high-level programming, such as in Python, I do make use of
exceptions for the appropriate purposes - exceptional run-time events,
rather than program errors.)
>>>> I know I never give the sqrt function a negative number,
>>>
>>> And how do know that?
>>
>> That's my side of the bargain.
>
> Not if your code is used by somebody else in turn.
They do their job too.
I don't comprehend how you assume the person calling "sqrt" is so
incompetent, and has such an incompetent development process and team,
that they might call it with a negative number and not find the problem
unless the implementation throws an exception. And yet you think that
same programmer is competent to write essentially untestable exception
handling code without flaw, dealing with the vastly more complex
situation of code that runs partially and stops unexpectedly.
>
>> Software development is all about contracts. A function has a
>> precondition and a postcondition - it promises that if the
>> precondition is fulfilled before the call, the postcondition will be
>> fulfilled after the call. It says /nothing/ about what will happen if
>> the precondition is not fulfilled.
>
> Yep, and we don't want such contracts unless absolutely necessary.
>
They are necessary. We want them. They make life simple and clear, and
everyone knows what they are doing and gets it right.
>> If you can't rely on the calling code to do its job, what makes you
>> think you can rely on the called function to do its job?
>
> Each party must fulfill its side of contract.
>
Yes.
> The advise about strengths addresses the way contracts are designed to
> be useful, safer and easier to fulfill.
>
But it is wrong advice - it makes everything /harder/.
>> The whole concept of a "precondition" is nonsensical if the function
>> implementation cannot rely on it being true!
>
> Which is why the precondition must be weakest possible. Thanks.
>
From the function implementer's viewpoint, it should be as /strong/ as
possible. And regardless of how weak or strong it is, the function
implementer can rely upon it - assuming you are actually doing software
engineering and not just faffing around getting paid by the number of
lines of code you throw out.
>>>> so the exception handling path will never be taken and cannot be
>>>> tested without writing extra test code to simulate an impossible
>>>> situation. That's more extra code, more testing, more risk of
>>>> errors, more inefficiencies, for exactly /zero/ benefit.
>>>
>>> Pre- and postconditions are for correctness proofs, not for testing.
>>> It is stupid to test program proven correct.
>>
>> Well, that's /very/ questionable - you still have to test that the
>> proofs have been made correctly.
>
> That is a test of the prover, not a test of the application program. It
> is as stupid as write compiler, linker, OS tests for the application.
> That prover, compiler, linker, CPU, Maxwell's equations work is a
> *precondition*!
>
>> Correctness proofs and tests work together, not as alternatives.
>
> Nope. Applied to the *same* object, they are complementary.
>
No, they work together. Some things are best verified by proofs, some
things by testing, some things by both. A /vastly/ better programmer
than you or I (Donald Knuth) said "Beware of bugs in the above code; I
have only proved it correct, not tried it."
>> However, I was not talking about testing the pre- or postconditions -
>> I was talking about testing the exception handling code I, as the
>> caller, have to write since the function may throw an exception.
>
> No, you have to write a test since the caller is not proven not to cause
> the function to propagate an exception. Again, what is the object here?
> What do you want to test? The caller, the callee, the compiler, the CPU?
>
>>>> Or I could leave the exception unhandled,
>>>
>>> You cannot do that. Again, it is about proofs. If your subprogram has
>>> no exception in its postcondition you could not prove it correct
>>> unless x is non-negative.
>>>
>>> If you do not prove correctness, then exception propagation is just
>>> defined behavior as opposed to undefined one.
>>
>> This kind of misuse of exceptions to hide bugs in code is not about
>> proofs - it is about arse-covering.
>
> No, it is about keeping contracts clean.
What could be cleaner than the contract for a square root function that
returns the square root of valid inputs? Certainly not some drivel that
pulls exception handling and all its types, hierarchies, handlers and
mechanisms into the picture.
> It is impossible to define
> model types that would produce mathematically correct results. Start
> with + and produce types of X and Y such that the result is always
> mathematically correct X + Y. Exceptions is a method to fill the gaps
> between the model and the problem space without breaking the abstraction
> altogether.
>
>> Oh wait, you claimed that the person writing the calling code couldn't
>> possibly have forgotten to handle the exception, because then they
>> wouldn't have been able to prove their code is correct. So you trust
>> them to have the skills, training, and resources to do formal
>> correctness proofs on their code - but you don't trust them to be able
>> to see if a value is negative?
>
> Yep, and this is trivially a halting problem:
>
> X := 2.0;
> if HALT(p) then
> X := -1.0;
> end if;
> X := sqrt (X);
>
> And, BTW, prover does proofs, not the developer.
>
So you are not sure if the programmer might try to embed a
halting-problem decider inside the function that calls square root?
Okay, I suppose, if you are /that/ paranoid - but why should the person
specifying or implementing the square root function have to deal with
that?
> In the above the prover would say: "Sorry, I cannot prove the
> postcondition saying no exception on negative argument. Try better."
>
> There are basically two cases #1 with prover, #2 without.
>
> #1 Precondition plays no role. It is strictly same to prove that X>=0
> and that sqrt does raise exception. One directly implies another.
>
> #2 Exception was massive advantage. Since it cannot be proven that X>=0,
> then it is expected to be X<0 and exception propagation allows most
> eager detection. While precondition hides the problem. sqrt(-1.0) might
> return 13.2 and the application will continue normally generating garbage.
>
>>>> Exceptions are a valid mechanism for handling unusual and rare
>>>> run-time circumstances, such as a database transaction failure, loss
>>>> of contact in a network connection, and so on. They are /not/ a
>>>> crutch for poor programming and coding errors!
>>>
>>> Sure undefined behavior is much better than that! (:-))
>>>
>>
>> The point about undefined behaviour is that it does not happen.
>
> sqrt(-1.0)
>
>> It is /better/ to leave situations that must not be allowed to occur,
>> as undefined.
>
> sqrt(-1.0)
The problem lies with the programmer that uses the function, or their
development team or process that failed to catch the bug. The function
specifier is not responsible for that, and cannot fix the problem
anyway. So why make everything more complicated and higher risk?
Do you really think it is realistic that in a real program, a mistake
leading to an attempt at finding a negative square root is because
someone wrote "sqrt(-1.0)", or perhaps "sqrt(-x)" instead of "sqrt(x)" ?
No, the realistic source of the problem would be far back in the code
- throwing an exception at this point will help very little. Do you
really think the code would be able to recover sensibly and handle the
situation gracefully when you throw an exception? No, that has no
basis in real life - at the point where the negative square root is
called, the code's calculations and data are so screwed up that you
won't be able to help.
>
>> With your specification, roles are mixed - should the caller check for
>> negative x?
>
> Nothing in the contract of sqrt requires it. You are confusing contract
> parties. Without seeing the caller's contract nothing can be said.
>
The specifications for the function are the contract.
>> Is that the function's job?
>
> The function's job is to implement the contract. How it does this is not
> the caller's business.
>
Equally, the caller's job is to ensure that it fulfils /its/ side of the
contract (the precondition). How it does this is not the callee's business.
>> If the caller has checked this, does it really have to handle an
>> exception?
>
> It is not sqrt business what the caller does with the result. It is
> between the caller and the caller's caller.
>
Indeed that is true. And equally, it is not sqrt's business how the
caller ensures that it has satisfied the precondition before calling it.
The specifier of sqrt has no interest in making a pointlessly weak
precondition that forces a pointlessly complex postcondition. Just let
the caller ensure that it only passes non-negative numbers.
>>>> The second version of the specification is clear and simple.
>>>
>>> It leaves behavior undefined without any good reason. Strong
>>> precondition if not an utter necessity is a grave design fault.
>>
>> /Please/ read a book about how to write specifications for functions,
>> and how to program using them. There's a good one called "Programming
>> from Specifications" that is freely available - though it is quite
>> theoretical and mathematical.
>>
>> In short - you define behaviour for what you want to happen for valid
>> inputs.
>
> -1.0 is a valid input, it is a value of the type. -1.0 is a real value.
>
Not to a square root function that is worthy of the name.
> [...]
>
>> How about debugging? After all, real programmers make mistakes. With
>> undefined behaviour, debugging tools (such as "sanitizer" options in
>> modern compilers) can spot many kinds of undefined behaviour and give
>> the developer a nice report about the problem.
>
> You fundamentally cannot detect undefined behavior, because any behavior
> can realize when undefined. It is allowed to fire a gamma jet that would
> roast the silly programmer...
>
Yes. But you can spot many types of undefined behaviour, both through
static analysis and run-time checking in debug modes.
>> But your version doesn't have undefined behaviour - throwing an
>> exception is part of the specification.
>
> Yep, and it can be trivially verified by tests or in any debugger.
>
But it is not a bug as far as the debugger and tools are concerned.
They cannot know that this defined behaviour is unintentional -
precisely because it is fully defined behaviour that the function
caller is entitled to rely on.
>>>> The implementer knows the function is only ever called with
>>>> non-negative values and can efficiently calculate the square root.
>>>
>>> He does not know that without correctness proofs, which are rarely
>>> possible and rarely used.
>>
>> He knows because he wrote correct code that calls the function
>> appropriately.
>
> He thinks he knows...
>
>>>> The responsibility for using appropriate inputs is where it should
>>>> be - with the caller.
>>>
>>> Exactly, shoveling your problems to the caller, causing distributed
>>> overhead and making sure of having untraceable bugs in the production
>>> code under to motto:
>>>
>>> "no qualified programmer would ever do X"
>>
>> No, it is not passing on /your/ problems to anyone. It is letting the
>> caller take responsibility for /their/ code.
>
> "no qualified programmer would forget to check his code"
>
Programmers can make mistakes - including the people calling sqrt, the
people implementing it, and the people specifying it. It makes sense to
make life as clear and simple as practically possible for them.
When you give someone a recipe for meringues, you say "Beat the egg
whites then slowly add the sugar while beating all the time in order to
make the meringue mixture". You don't add "alternatively, add the sugar
then try to beat it in order to make a mess". In pretty much every walk
of life, we specify the expected input to get the desired output, and do
not concern ourselves about what happens when someone puts in garbage to
the set of instructions. Why would you want to do something different
for programming?