Google Groups no longer supports new Usenet posts or subscriptions. Historical content remains viewable.

pre- and post-conditions


mathewji

Dec 22, 2014, 7:52:53 PM
working on question - "Find a pair of values so that the pre-condition of this version of area holds, but the post-condition doesn't."

Code:

int area(int length, int width)
{
    // calculate area of a rectangle;
    // pre-conditions: length and width are positive
    // post-condition: returns a positive value that is the area
    if (length <= 0 || width <= 0)
        error("area() pre-condition");
    int a = length*width;
    if (a <= 0)
        error("area() post-condition");
    return a;
}

Richard Damon

Dec 22, 2014, 9:12:35 PM
Hint: by "ideal" math this can't happen (the product of two positive
numbers is always positive), so what has probably happened?

Note that the answer will be implementation dependent (and depend on
behavior not promised by the standard), but some answers can be
constructed using information provided by the standard.

mathewji

Dec 22, 2014, 9:15:46 PM
Richard, have a less cryptic answer? :-)

Richard Damon

Dec 22, 2014, 11:47:34 PM
On 12/22/14 9:15 PM, mathewji wrote:
> Richard, have a less cryptic answer? :-)
>

As I said, the mathematically defined behavior of multiplying two
positive numbers always yields another positive number. The C standard
provides conditions under which the program doesn't need to follow that
defined behavior. There are limits to the mathematics in C.

Since your exercise is asking you this, you must have studied the
relevant material already.

Once you figure the basic conditions that allow this, figuring out a
particular case will be much simpler.

Louis Krupp

Dec 23, 2014, 1:27:09 AM
Hint: What numbers can be represented by a variable declared "int"?

Louis

mathewji

Dec 23, 2014, 7:24:16 AM
I do understand that in ordinary arithmetic the product of two positive ints (the pre-condition) will never be a negative number. So the post-condition check in this code could never trigger? But the question implies that some pair of values could get past the pre-condition and still fail the post-condition.

David Brown

Dec 23, 2014, 7:59:24 AM
On 23/12/14 13:23, mathewji wrote:
> I do understand that the product of two positive ints (pre-condition)
> will never be a negative number. So the post-condition of this code
> could never happen?

That is true in ordinary maths - but maths in C++ (and most programming
languages) does not entirely follow the rules of "real" mathematics. An
"int" in C++ has a specific size or range. Think about what that range
is, and what happens as your input values get closer to the limits.

(As others have done, I am trying to give you hints rather than a
finished answer.)

Richard Damon

Dec 23, 2014, 8:51:57 AM
The product of two mathematical positive integers is always positive.
C++ does not promise the same for "int".

See <climits>/<limits.h>

mathewji

Dec 23, 2014, 9:46:20 AM
I was able to locate:

/*
* Maximum and minimum values for ints.
*/
#define INT_MAX 2147483647
#define INT_MIN (-INT_MAX-1)

Inside of limits.h

ison...@gmail.com

Dec 23, 2014, 10:08:53 AM
In C++ it is impossible to find such values, because signed integer overflow is undefined behavior:

Paragraph 5/4 of the C++11 Standard:
"If during the evaluation of an expression, the result is not mathematically defined or not in the range of representable values for its type, the behavior is undefined."

Integer overflow does not guarantee that the result will be negative, so there is no defined case where the post-condition doesn't hold. (The only case is where undefined behavior occurs, but since it's undefined, basically anything can happen.)

mathewji

Dec 23, 2014, 10:21:49 AM
Got past pre to post with 2147483647 and 2. Thx!
Message has been deleted

Scott Lurndal

Dec 23, 2014, 12:08:40 PM
r...@zedat.fu-berlin.de (Stefan Ram) writes:
>mathewji <itoney...@gmail.com> writes:
>>Got past pre to post with, 2147483647 and 2. Thx!
>
> Good. Now write (this addresses all readers of this post,
> not just mathewji) a function
>
>long long average( long long j, long long k );
>
> precondition: LLONG_MIN <= j, k <= LLONG_MAX
>
> postcondition: average( j, k ) is the smallest (nearest to
> minus infinity) of all the long long values that are closest
> to the mathematical value of (j+k)/2.

return (j>>1)+(k>>1);

Message has been deleted
Message has been deleted

Victor Bazarov

Dec 23, 2014, 12:58:03 PM
On 12/23/2014 12:40 PM, Stefan Ram wrote:
> r...@zedat.fu-berlin.de (Stefan Ram) writes:
>> sc...@slp53.sl.home (Scott Lurndal) writes:
>>> r...@zedat.fu-berlin.de (Stefan Ram) writes:
>>>> long long average( long long j, long long k );
>>> return (j>>1)+(k>>1);
>> That's quite good, but possibly it does not always
>> give the expected result:
>
> Or, even simpler:
>
> #include <stdio.h>
>
> long long average( long long const j, long long const k )
> { return (j>>1)+(k>>1); }
>
> int main( void ){ printf( "%lld\n", average( 3, 3 )); }
>
> prints:
>
> 2

What if we do something like

{ return (j>>1)+(k>>1)+(((j&1)+(k&1))>>1); }

?

V
--
I do not respond to top-posted replies, please don't ask

Paavo Helde

Dec 23, 2014, 1:16:59 PM
r...@zedat.fu-berlin.de (Stefan Ram) wrote in
news:conditions-2...@ram.dialup.fu-berlin.de:

> I don't know whether there is a portable guarantee that all
> long long values can be represented in a long double, but I
> doubt that there is such a guarantee.

The guarantees given by the C standard are all *minimum* required
values, so there is not even a guarantee that all possible values of char
can be represented by distinct values in a long double, not to speak of
long long, on any given platform. This would be impossible, for example,
if char and long double were both 64-bit.

OTOH, I think all *guaranteed* values of (signed and unsigned) char, short,
int and long are guaranteed to be representable by distinct values in
double and long double - the guarantee for ULONG_MAX is 4294967295, which
has 10 decimal digits, and DBL_DIG and LDBL_DIG are guaranteed to be at
least 10 (are these the correct macros to use?)

The guarantee for unsigned long long is much larger (2^64-1), which
clearly does not fit into 10 digits.

Cheers
Paavo

mathewji

Dec 23, 2014, 7:47:08 PM
I also come up with:

cpp_int area(cpp_int length, cpp_int width)
{
    // calculate area of a rectangle;
    // pre-conditions: length and width are positive
    // post-condition: returns a positive value that is the area
    cpp_int a = 0;
    if (length <= 0 || width <= 0)
    {
        cout << "pre-condition true" << endl;
        error("area() pre-condition");
    }

    a = length*width;
    cout << a << endl;
    if (a <= 0)
    {
        cout << "post-condition true" << endl;
        error("area() post-condition");
    }
    return a;
}

With Boost's cpp_int (an arbitrary-precision integer), the product of 2147483647 and 2 passes both the pre- and post-condition checks.

Richard Damon

Dec 23, 2014, 9:05:07 PM
On 12/23/14 10:08 AM, ison...@gmail.com wrote:
> W dniu wtorek, 23 grudnia 2014 01:52:53 UTC+1 użytkownik mathewji napisał:
> > working on question - "Find a pair of values so that the
> pre-condition of this version of area holds, but the post-condition
> doesn't."
> >
> > Code:
> >
> > int area(int length, int width)
> > {
> > // calculate area of a rectangle;
> > // pre-conditions: length and width are positive
> > // post-condition: returns a positive value that is the area
> > if (length <= 0 || width <= 0)
> > error("area() pre-condition");
> > int a = length*width;
> > if (a <= 0)
> > error("area() post-condition");
> > return a;
> > }
>
> In C++ it is impossible to find such values. Signed integer overflow
> is undefined behavior:
You cannot find values for which the standard promises the result.
Because the behavior is undefined, it is still possible that you will get
the "desired" result, which on many platforms is actually quite predictable.
>
> Paragraph 5/4 of the C++11 Standard
> "If during the evaluation of an expression, the result is not
> mathematically defined or not in the range of representable values for
> its type, the behavior is undefined."
>
> Integer overflow does not guarantee that the result will be negative,
> so there is no defined case where post-condition doesn't hold. (The
> only case is where undefined behavior occurs, but it's undefined - so
> basically anything can happen).
While anything *can* happen, for a given system, the results may well be
predictable (if not "defined").

This doesn't mean you should depend on how undefined behavior will
behave, unless you have something from the compiler that makes promises
about it.

As was pointed out in c.l.c, it is even possible that the compiler will
be able to detect that the only way that a will be less than zero is for
undefined behavior to have occurred, and then elide the post-condition
test.

Ruben Safir

Feb 6, 2015, 7:58:00 PM
are you talking about an integer overflow?

shouldn't that trigger an error?

Paavo Helde

Feb 6, 2015, 10:13:56 PM
Ruben Safir <mrbr...@panix.com> wrote in
news:mb3nud$fgc$1...@reader1.panix.com:
>
> are you talking about an integer overflow?
>
> shouldn't that trigger an error?

With unsigned types there is no error; the wrap-around behavior is simply
defined as the correct behavior.

With signed types integer overflow is an error and leads to UB.
Unfortunately, most of the time this UB means you silently get the same
wrap-around behavior, but at least in principle it could be detected and
reported properly.

David Brown

Feb 7, 2015, 8:00:14 AM
And if the compiler can predict that integer overflow will always occur,
then it can use this guaranteed UB to optimise the code. For example,
given the following code:

int a = INT_MAX;
int b = a + 2;
if (b == INT_MIN + 1) foo();

the compiler can choose whether to do foo() or not. (A full-width int is
needed here: with a narrow type like int16_t the addition is done in int
and the out-of-range conversion back is merely implementation-defined,
not undefined.)

Conversely, since the compiler knows that the user doesn't care about
UB, it can treat code as though the UB cannot happen. So you might
think that you can detect signed overflow with code like this:

int checkedOverflow(int a, int b) {
    bool sa = (a < 0);
    bool sb = (b < 0);
    int c = a + b;
    bool sc = (c < 0);
    if (sa && sb && !sc) {
        printf("Added two negatives and got a positive!\n");
    }
    if (!sa && !sb && sc) {
        printf("Added two positives and got a negative!\n");
    }
    return c;
}

Since the compiler knows that signed overflow is UB, and you don't care
about UB, it can remove the checks and printfs.


Osmium

Feb 7, 2015, 8:50:43 AM
"Ruben Safir"
Yes it should, in a perfect world. But most programming languages simply let
the programmer blunder on.


Christian Gollwitzer

Feb 7, 2015, 9:31:18 AM
Am 07.02.15 um 14:00 schrieb David Brown:
> On 07/02/15 04:13, Paavo Helde wrote:
>> Ruben Safir <mrbr...@panix.com> wrote in
>> news:mb3nud$fgc$1...@reader1.panix.com:
>>>
>>> are you talking about an integer overflow?
>>>
>>> shouldn't that trigger an error?
>>
>> With unsigned types there is no error, the wrap-around behavior is just
>> defined as the correct behavior.
>>
>> With signed types integer overflow is an error and leads to UB.
>> Unfortunately most of the times this UB means you silently get the same
>> wrap-around beahvior, but at least in principle it could be detected and
>> reported properly.
>>
>
> And if the compiler can predict that integer overflow will always occur,
> then it can use this guaranteed UB to optimise the code.

This is really one of the most stupid problems with C and C++. Almost
all modern CPUs generate an integer overflow flag in hardware (and a
carry flag for unsigned wrap-around). Saturated arithmetic is also
available on some machines. Why is there no clean way to access this
from the compiled language? Inserting checks around the operation in
question is not only slower, it is also less clear than the equivalent
in assembly code (jump on overflow flag).

Sure, there is, e.g.,
https://gcc.gnu.org/onlinedocs/gcc/Integer-Overflow-Builtins.html
but a simple, clean, standard way is still lacking.

Christian

David Brown

Feb 8, 2015, 9:02:56 AM
In most cases, I think checking for overflow after integer operations
would be unnecessary (since you should work with types and values that
won't overflow), or it will be too late (the damage is done by the time
you have detected the error).

But I agree that it would sometimes be nice to have a good way of
checking, with some sort of integer types that appear as normal integers
but have overflow detection. For C++, the obvious choice would be to
throw an exception on overflow - but many of the systems for which this
would be useful are embedded systems which often have exceptions
disabled on the compiler.

For saturated arithmetic, it would be easier to make a clean solution
that fits well in normal code, as there are no error conditions or
exceptions. There are some complications for optimisation - the
compiler can no longer reduce "a + 1 - 1" to "a". There is a move
towards standardising saturated arithmetic (along with fixed-point
arithmetic) in C:

<http://www.open-std.org/jtc1/sc22/wg14/www/docs/n1169.pdf>
<https://gcc.gnu.org/onlinedocs/gcc/Fixed-Point.html>

However, there are two major problems so far - in a move designed to
annoy embedded programmers (the ones that need this stuff most) and to
repeat the mistakes of history as closely as possible, the new types are
given names like "_Sat long long" without any specifications of the
exact sizes. Idiocy of the highest level, "justified" by excuses about
different hardware supporting different sizes better.

Secondly, you can guarantee that if and when these new types make it
into the C standards, the C++ committee will declare that they are
better implemented as a library of templates that are incompatible with
the C versions, as a method of ensuring that the spread of C++ in
embedded systems is hindered.


Öö Tiib

Feb 8, 2015, 1:22:45 PM
In a lot of cases closing the application with an error signal is good
enough, because there is (at least) one of these programming errors:
* the collection of input data for calculating the value is defective;
* the algorithm that calculates the value is defective;
* a data type was chosen that cannot contain the value.

> But I agree that it would sometimes be nice to have a good way of
> checking, with some sort of integer types that appear as normal integers
> but have overflow detection. For C++, the obvious choice would be to
> throw an exception on overflow - but many of the systems for which this
> would be useful are embedded systems which often have exceptions
> disabled on the compiler.
>
> For saturated arithmetic, it would be easier to make a clean solution
> that fits well in normal code, as there are no error conditions or
> exceptions. There are some complications for optimisation - the
> compiler can no longer reduce "a + 1 - 1" to "a". There is a move
> towards standardising saturated arithmetic (along with fixed-point
> arithmetic) in C:
>
> <http://www.open-std.org/jtc1/sc22/wg14/www/docs/n1169.pdf>
> <https://gcc.gnu.org/onlinedocs/gcc/Fixed-Point.html>

Saturated arithmetic is useful in situations where "unavailable",
"out of lower limit" or "out of upper limit" are valid and expected
values - for example when noise, or possible destruction of sensor
hardware or communication channels, is expected and must be handled.
Such values are usually already available with floating-point types ...
with integers we have to use things like Boost.Optional.

> However, there are two major problems so far - in a move designed to
> annoy embedded programmers (the ones that need this stuff most) and to
> repeat the mistakes of history as closely as possible, the new types are
> given names like "_Sat long long" without any specifications of the
> exact sizes. Idiocy of the highest level, "justified" by excuses about
> different hardware supporting different sizes better.

Most often the programmer needs a value that can only be within certain
limits. For example, something can (by the laws of nature) take from
150 000 nanoseconds to 30 000 000 nanoseconds, with around 2 000 000
being most common. Everything outside that range indicates a defect in
the executing or measuring hardware or software.

> Secondly, you can guarantee that if and when these new types make it
> into the C standards, the C++ committee will declare that they are
> better implemented as a library of templates that are incompatible with
> the C versions, as a method of ensuring that the spread of C++ in
> embedded systems is hindered.

The major problem is that those solutions solve next to nothing. For
example, the information that an automatic passenger counting system
overflows 'long' with counted passengers is uninteresting, because sane
limits on that value are far smaller than 'LONG_MAX'. It is indeed
possible that 2 147 483 647 is a useful limit for something ... but I
find it rather hard to imagine what.

Message has been deleted

David Brown

Feb 8, 2015, 2:35:19 PM
Many of the systems where you are dealing with saturated arithmetic or
overflow detection are embedded systems where you have limited resources
or are aiming for maximal speed. A DSP application might be carefully
tuned to do 4 multiply-accumulate operations per clock cycle - you can't
just add some extra code for range checking as you might on a desktop
application. Similarly, you might need to stick to 16-bit integers -
you can't simply step up to 32-bit or floating point arithmetic to avoid
overflows.

In such systems, even though you may be correct about likely causes of
overflow, closing the application is seldom a good plan.

And in high-performance computing, you will often want to use a "sticky"
overflow flag - you do all the calculations with the assumption that
there are no problems (at maximum speed), then after finishing a batch
you check the overflow flag and, if necessary, throw away the whole batch.

>
>> But I agree that it would sometimes be nice to have a good way of
>> checking, with some sort of integer types that appear as normal integers
>> but have overflow detection. For C++, the obvious choice would be to
>> throw an exception on overflow - but many of the systems for which this
>> would be useful are embedded systems which often have exceptions
>> disabled on the compiler.
>>
>> For saturated arithmetic, it would be easier to make a clean solution
>> that fits well in normal code, as there are no error conditions or
>> exceptions. There are some complications for optimisation - the
>> compiler can no longer reduce "a + 1 - 1" to "a". There is a move
>> towards standardising saturated arithmetic (along with fixed-point
>> arithmetic) in C:
>>
>> <http://www.open-std.org/jtc1/sc22/wg14/www/docs/n1169.pdf>
>> <https://gcc.gnu.org/onlinedocs/gcc/Fixed-Point.html>
>
> Saturated arithmetic is useful in situations where "unavailable",
> "out of lower limit" or "out of higher limit" are valid and expected
> values. For example because of noise or possible destruction of
> sensory hardware or communication channels are expected and handled.
> These values are already usually present with floating point types ...
> with integers we have to use things like Boost.Optional.
>

Saturated arithmetic is very common in a lot of physical modelling and
control applications - that's why it is often supported in hardware in
DSPs and microcontrollers. Of course, it would often be useful to have
the limit as a programmable value rather than the hard limit of the type.

Öö Tiib

Feb 8, 2015, 4:49:44 PM
On Sunday, 8 February 2015 20:56:17 UTC+2, Stefan Ram wrote:
> Öö Tiib <oot...@hot.ee> writes:
> >Major problem is that those solutions solve next to nothing. For example
> >information that automatic passenger counting system does overflow 'long'
> >with counted passengers is uninteresting because sane limits to that value
> >are way smaller than 'LONG_MAX'. It is indeed possible that for
> >something 2 147 483 647 is useful limit ... but for me it is rather hard
> >to imagine what it is.
>
> When you write a counting routine, you just need to specify its limits
> in the contract, whatever those limits might be: Version 1:
>
> /* This routine must not be called more often than 10 times */
> increment_passenger_count();
>
> Version 2:
>
> /* This routine must not be called more often than 10000000000000000000000 times */
> increment_passenger_count();
>
> Both versions are perfectly fine as long as the limit is clearly
> documented.

In what sense of perfection? Neither of the limits feels correct;
both feel far too arbitrary. Either I do not understand the purpose, or
the documentation is defective, or the requirements were gathered by an
idiot. It would be irresponsible to code it away blindly without
consulting the rest of the team.

> The Ariane 4 had a subroutine with a velocity limitation. It was
> documented, too, but kind of »hidden« in the middle of a multi-page
> document. The oversight of this limitation then led to the
> explosion of the Ariane 5 when that routine was re-used.
>
> This was not even UB. Instead the routine even /did/ detect
> the overflow and throw an exception IIRC. Ie., it was all
> »super clean«. Only problem was that there was no proper
> handler for that exception, so the exception reached the
> general top-level exception handler which terminated
> the process. In this case, however, ironically, everything
> would have been fine, if the exception would just have been
> ignored or not been created in the first place. In a sense,
> it was the well-intentioned overflow-detection that made
> the rocket explode.
>
> So, when the client does not react properly, even detection of
> overflow by the routine does not help.

If the software does not handle the situation then it does not matter
whether it is saturated overflow, a thrown exception or the usual
undefined behavior - it will not work. You can have three or more
alternative systems that vote for a mutual decision, and you can keep a
warm backup system for each that takes over in case of failure, but if
none of them handles the situation then nothing helps.

> Lesson learned was that limitations of routines need to
> be documented in a prominent place. And there, Java has an
> edge with JavaDoc, because it makes clear where to write
> the documentation.

What edge? The C++ community has had Doxygen, and AFAIK both it and
JavaDoc were released in 1997. The first documentation generator was
perhaps Donald Knuth's, in 1981 ... nothing to do with Java.

Documentation is helpful but it does not write the software.
Java's edge is that Java's exception specifications work.
In C++ it is too easy to forget about exceptions, since the language
is unhelpful about them.