
Clarification requested on INTEGER range and the OUT_OF_RANGE() function


John

Sep 9, 2022, 11:43:45 AM

On most machines -1-huge(0) is representable, but should
OUT_OF_RANGE(-1-huge(0)) return .FALSE.?

Since how a negative integer is represented at the bit level is processor
dependent, should the range (for example) of a 1-byte integer be treated
as -127 to 127 or -128 to 127?

We assumed a quick look through the standard would settle it,
but we are still arguing about it.

There is a belief that there is a statement saying the allowable range
should be "symmetrical" but no one has found it.

There was a bug in a program that checked a value with OUT_OF_RANGE()
and then took the ABS() of the value: instead of testing
OUT_OF_RANGE(ABS(I)) it was testing OUT_OF_RANGE(I) and then applying
ABS(I) afterwards.

But ABS(-huge(0)-1) is an out of range value, so it caused an issue.
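
The pattern, reduced to a sketch (not the actual code), was roughly:

program bug_sketch
use, intrinsic :: iso_fortran_env, only : int8
implicit none
integer :: i
integer(kind=int8) :: i8
   i = -huge(0_int8) - 1               ! -128: representable in int8, but outside the symmetric model
   write(*,*) out_of_range(i, i8)      ! F on two's-complement machines, so the guard passed
   write(*,*) out_of_range(abs(i), i8) ! T; the guard the code actually needed before taking ABS()
end program bug_sketch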

Discussions ensued. Can anyone point to a definitive definition of the
lowest negative value, as HUGE() always returns a positive value? Can
anyone point to a statement saying ranges are symmetrical around zero?

program demo_out_of_range
use, intrinsic :: iso_fortran_env, only : int8
implicit none
integer :: i
integer(kind=int8) :: i8   ! used only for its kind, as the MOLD argument

   do i = -130, 130
      write(*,*) i, out_of_range(i, i8)
   enddo

end program demo_out_of_range

One compiler's results:

-130 T
-129 T
-128 F <<< the value in debate
-127 F
-126 F
.
.
.
125 F
126 F
127 F
128 T
129 T
130 T

Robin Vowels

Sep 9, 2022, 3:27:03 PM
.
It depends on the machine.
On two's-complement hardware, this is correct.
On one's-complement hardware, this would yield T.
.

gah4

Sep 10, 2022, 1:28:32 AM
On Friday, September 9, 2022 at 8:43:45 AM UTC-7, John wrote:
> On most machines -1-huge(0) is representable, but should
> OUT_OF_RANGE(-1-huge(0)) return .FALSE.?
>
> Since how a negative integer is represented at the bit level is processor
> dependent, should the range (for example) of a 1-byte integer be treated
> as -127 to 127 or -128 to 127?

The standard allows for symmetric representations.
It allows for any integer base greater than one.

Since sign-magnitude and digit-complement representations
have a symmetric range, the standard allows those.

As far as I know, it could also allow some other non-symmetric
range, if one happened to want to build such a computer.

That said, I don't know what it means for OUT_OF_RANGE.
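
For an 8-bit example (just arithmetic, not something from the standard),
the representable ranges are:

two's complement: -128 to 127
one's complement: -127 to 127
sign-magnitude:   -127 to 127

so only two's complement has the extra, asymmetric value.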

gah4

Sep 10, 2022, 1:46:28 AM
On Friday, September 9, 2022 at 8:43:45 AM UTC-7, John wrote:
> On most machines -1-huge(0) is representable, but should
> OUT_OF_RANGE(-1-huge(0)) return .FALSE.?

> Since how a negative integer is represented at the bit level is processor
> dependent, should the range (for example) of a 1-byte integer be treated
> as -127 to 127 or -128 to 127?

First, it has two arguments, so that should be an error.

But maybe you mean OUT_OF_RANGE(-1-huge(0),0)

OK, actually reading it, it says:

"largest magnitude that lies between zero and X inclusive"

That suggests that the sign is significant in the test, and so
it should be false.

Note, though, that will fail on machines with symmetric representation,
where it will underflow, and wrap on most machines. It will then
be in-range again.

Thomas Koenig

Sep 10, 2022, 4:22:10 AM
John <urba...@comcast.net> wrote:

> On most machines -1-huge(0) is representable, but should
> OUT_OF_RANGE(-1-huge(0)) return .FALSE.?

Examples in the standard are non-normative and can be at odds with
what the standard actually prescribes, but they usually are informative
as to what was actually intended.

The description of OUT_OF_RANGE has the example

# If INT8 is the kind value for an 8-bit binary integer type,
# OUT_OF_RANGE (−128.5, 0_INT8) will have the value false and
# OUT_OF_RANGE (−128.5, 0_INT8, .TRUE.) will have the value true.

which indicates that at least the author of this example thought
that -128 was valid.

This appears to be at odds with Fortran's symmetric number model,
so this is a question that should probably best be addressed to
the J3 mailing list.

gah4

Sep 10, 2022, 8:38:27 AM
On Saturday, September 10, 2022 at 1:22:10 AM UTC-7, Thomas Koenig wrote:

(snip)
> which indicates that at least the author of this example thought
> that -128 was valid.

> This appears to be at odds with Fortran's symmetric number model,
> so this is a question that should probably best be addressed to
> the J3 mailing list.

The standard guarantees that the symmetric model works.

It doesn't guarantee that values outside that model won't.

It is especially interesting in DATA statements, which allow
for signed constants. In other statements, constants are
unsigned, with a possible unary - operator applied.

John

Sep 10, 2022, 10:02:30 AM

John

Sep 10, 2022, 10:13:49 AM
If OUT_OF_RANGE is therefore indicating whether a value is representable, not whether it is allowable under a strict application of the symmetric number model, then there are reasonable work-arounds, since only a logical is returned; but perhaps an option like SYMMETRIC_MODEL=.true. would be nice. For a one-byte value, -128 becomes problematic, as has been pointed out in local discussions, for functions like ABS, SIGN with a positive second argument, and so on. Given the example in the standard, and given that the standard does not say the symmetric model is the full set of allowed values, only a required subset, it appears OUT_OF_RANGE does not quite do what the original author wanted.

Ron Shepard

Sep 10, 2022, 11:04:29 AM
On 9/10/22 9:13 AM, John wrote:
> In the case that OUT_OF_RANGE is therefore indicating whether a value is representable, not if it is allowable strictly applying the symmetric number model [...]

I think if the programmer is concerned about working within the integer
model numbers, then just adding ABS() to the argument should work. With
j = -128,

OUT_OF_RANGE(ABS(j),0_int8)

should return .true.. Without the ABS(), it should return .false..

$.02 -Ron Shepard

Ron Shepard

Sep 10, 2022, 11:07:13 AM


On 9/9/22 10:43 AM, John wrote:
>
> On most machines -1-huge(0) is representable, but should
> OUT_OF_RANGE(-1-huge(0)) return .FALSE.?
>
> Since how a negative integer is represented at the bit level is processor
> dependent, should the range (for example) of a 1-byte integer be treated
> as -127 to 127 or -128 to 127?
[...]

You may already know this, but this is the example in the f2018 standard:

Examples. If INT8 is the kind value for an 8-bit binary integer type,
OUT_OF_RANGE (-128.5, 0_INT8) will have the value false and
OUT_OF_RANGE (-128.5, 0_INT8, .TRUE.) will have the value true.

The first case is rounded toward zero, -128, while the second is rounded
the other way to -129.

The description of the function only refers to whether the value is
representable, so I think -128 is intended to satisfy that. This is not
the same as the fortran integer model, which does not include the value
-128. The integer model is defined in Section 16.4.
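
Paraphrasing 16.4 (my paraphrase, not a quote), the model is

i = s * sum( w_k * r**(k-1), k = 1..q )

with s = +1 or -1, r > 1, and 0 <= w_k < r, so the largest model number
is r**q - 1 = HUGE and the most negative is -(r**q - 1) = -HUGE; the
model range is symmetric about zero and excludes -128 for an 8-bit kind.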

$.02 -Ron Shepard

John

Sep 10, 2022, 9:04:59 PM
In the case of the example, if the first argument were a one-byte integer with the value -128, ABS would be problematic because it would try to return 128, which is out of bounds; but for most other types, or if you have a larger type to promote it to with INT, it works. For the time being we made a function called OUT_OF_MODEL_RANGE that does what was wanted. After going through the standard again, with the references cited here in hand, it does look like OUT_OF_RANGE is determining whether a value is representable, not whether it fits the model, as you note -- so no bug reports.
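
A minimal sketch of the idea (not the actual routine; the name and the
INT64/INT8 kinds here are just for illustration):

program demo_model_range
use, intrinsic :: iso_fortran_env, only : int8, int64
implicit none
integer(kind=int64) :: i
   do i = -130_int64, 130_int64
      write(*,*) i, out_of_model_range(i, 0_int8)
   enddo
contains
   ! .true. if IVAL lies outside the symmetric model range -HUGE(MOLD)..HUGE(MOLD),
   ! so -128 is flagged even though it is representable in two's complement
   elemental function out_of_model_range(ival, mold) result(lout)
      integer(kind=int64), intent(in) :: ival
      integer(kind=int8), intent(in)  :: mold
      logical :: lout
      lout = abs(ival) > int(huge(mold), int64)
   end function out_of_model_range
end program demo_model_range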

gah4

Sep 12, 2022, 7:16:23 PM
On Saturday, September 10, 2022 at 8:04:29 AM UTC-7, Ron Shepard wrote:

(snip)

> I think if the programmer is concerned about working within the integer
> model numbers, then just adding ABS() to the argument should work. With
> j = -128,

> OUT_OF_RANGE(ABS(j),0_int8)

> should return .true.. Without the ABS(), it should return .false..

It seems that much of the reason behind OUT_OF_RANGE is related
to floating-point conversion. You can test whether the value of a REAL
expression fits into an INTEGER variable.

In the case of INTEGER values, it can be useful if the KINDs are different.

One thing I am used to in C, and forget in Fortran: in C, integer
values in expressions are never smaller than int. They are promoted
to int before the operation.

In the original case above:

OUT_OF_RANGE(-1-huge(0))

which I presume should have been:

OUT_OF_RANGE(-1-huge(0),0)

both operands are the same KIND.

Most hardware that is not two's complement will wrap to
some in-range value, and the result will be .false..

To test it, -HUGE(0) has to be converted to a larger KIND,
before subtracting one.

I am not so sure how to find a sufficiently large KIND to do it. Maybe

SELECTED_INT_KIND(RANGE(0)+1), so:

OUT_OF_RANGE(INT(-HUGE(0), SELECTED_INT_KIND(RANGE(0)+1)) - 1, 0)
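
Put together as a complete example (a sketch; it assumes the processor
actually has an integer kind with a larger decimal range than the default):

program test_larger_kind
implicit none
! SELECTED_INT_KIND returns -1 if no such kind exists
integer, parameter :: big = selected_int_kind(range(0) + 1)
integer(kind=big) :: v
   v = int(-huge(0), big) - 1_big   ! -huge(0)-1 evaluated without wrapping
   write(*,*) out_of_range(v, 0)    ! does it fit a default integer?
end program test_larger_kind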





John

Sep 12, 2022, 10:16:26 PM
In our routine we take a class(*) variable, promote it to the largest REAL type available, and then compare it to the symmetric range of the type and kind of the second argument, assuming that the model requires -HUGE to HUGE to be the valid values, and checking for Inf and NaN input values, basically. The promotion is done using something that started from the M_anything module https://github.com/urbanjost/M_anything

> pure elemental function anyscalar_to_real128(valuein) result(d_out)
> implicit none
> ! $@(#) M_anything::anyscalar_to_real128(3f): convert integer or real parameter of any kind to real128
> class(*),intent(in) :: valuein
> real(kind=real128) :: d_out
> character(len=3),save :: readable='NaN'
> select type(valuein)
> type is (integer(kind=int8)); d_out=real(valuein,kind=real128)
> type is (integer(kind=int16)); d_out=real(valuein,kind=real128)
> type is (integer(kind=int32)); d_out=real(valuein,kind=real128)
> type is (integer(kind=int64)); d_out=real(valuein,kind=real128)
> type is (real(kind=real32)); d_out=real(valuein,kind=real128)
> type is (real(kind=real64)); d_out=real(valuein,kind=real128)
> Type is (real(kind=real128)); d_out=valuein
> type is (logical); d_out=merge(0.0_real128,1.0_real128,valuein)
> type is (character(len=*)); read(valuein,*) d_out
> class default
> !!d_out=huge(0.0_real128)
> read(readable,*)d_out
> !!stop '*M_anything::anyscalar_to_real128: unknown type'
> end select
> end function anyscalar_to_real128

It has since been refined, but basically the C idea of promoting to the largest available type is being used so far. I am seeing if I can get permission to share the result, as it has some interesting "lessons learned" aspects: whether to keep the function pure or not, whether to break the promotion up between integer and real kinds, and whether to use class(*) or templating. Generalizing it got to be a bit more complex than the original issue warranted, but interesting.

In the original code that started it, the ABS() solution would have been fine for that particular usage, but it sparked an interest in having a general function that ensures a similar problem does not arise. So for the time being OUT_OF_RANGE is banned! There are a couple of versions still being bandied about; all seem to resolve the issue, so performance is expected to be the tie-breaker.

gah4

Sep 13, 2022, 1:55:29 AM
On Monday, September 12, 2022 at 7:16:26 PM UTC-7, John wrote:

(snip)

> but has since been refined; but basically the C idea of promoting to the largest available is being used so far.

C promotes smaller types to int, but not to long.

Single character constants, such as 'x', are int. (They are char in C++.)
enum constants are also int.

K&R C did all floating point in double precision, all constants were
double precision, and all arithmetic functions only came in
double precision. ANSI allowed some float (single precision),
but constants without f are double. And many people declare
all variables double unless there is a reason to do otherwise.

