
short, long, float, and double portability


John Oliver

May 6, 1997

I am writing a C program that needs to be portable across unix, mac, and
pc machines. The problem I am having is ensuring that the valid range,
decimal precision, etc. remain constant for variables across
platforms.

For example:
On my SUN, double has 15 digit precision with exponents from -308 to 308.
On a MAC, double has 18 digit precision with exponents from -4931 to
4932.
This creates platform compatibility problems.

My C reference book gives me garbage like "the largest short int is no
larger than the largest int and may be smaller. Typically, long will be
bigger than short... etc" which is of no help.

--> SO: What is the trick to guaranteeing ranges and precision for
variables?
I believe I can guarantee integer ranges; floats and doubles
worry me.

Here are the variable types I would like to guarantee a range/precision for:

typedef signed char int8;
typedef signed short int int16;
typedef signed long int int32;
typedef signed long long int int64;
typedef unsigned char uint8;
typedef unsigned short int uint16;
typedef unsigned long int uint32;
typedef unsigned long long int uint64;
typedef float real32;
typedef double real64;
typedef long double real128;

Thanks for any help on this---

ps. You can email a response to: di...@ti.com

Kurt Watzka

May 6, 1997

John Oliver <jol...@dsbmail.itg.ti.com> writes:

>I am writing a C program that needs to be portable across unix, mac, and
>pc machines. The problem I am having is ensuring that the valid range,
>decimal precision, etc. remain constant for variables across
>platforms.

This need not even be possible. A C implementation is bound only
by minimum requirements for maxima of range, mantissa width,
exponent width, etc. You have no guarantee that a given implementation
provides the lower precision of the "best" type of another
implementation even in its "worst" type. Therefore, there need not
be equivalent floating point types at all. The same applies
to integral types. You may meet machines where the smallest
integral type (i.e. a char) is a 64-bit quantity, with the
range that goes with that, and machines where the biggest
integral type (i.e. a long int) is a 32-bit quantity.

>For example:
>On my SUN, double has 15 digit precision with exponents from -308 to 308.
>On a MAC, double has 18 digit precision with exponents from -4931 to
>4932.
>This creates platform compatibility problems.

>My C reference book gives me garbage like "the largest short int is no
>larger than the largest int and may be smaller. Typically, long will be
>bigger than short... etc" which is of no help.

Well, but how would you express

sizeof(char) <= sizeof(short int) <= sizeof(int) <= sizeof(long int)

All types can have the same size, so what your reference book gives
you is not garbage, but a statement that is as precise as possible,
given the language definition and some practical experience.
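
As a quick illustration (a minimal sketch; the numbers printed will
differ from one implementation to the next), a short program makes the
point visible:

#include <stdio.h>

int main(void)
{
    /* All of these sizes are implementation-defined; only the
       ordering constraints and minimum ranges are guaranteed. */
    printf("char:  %lu\n", (unsigned long) sizeof(char));
    printf("short: %lu\n", (unsigned long) sizeof(short int));
    printf("int:   %lu\n", (unsigned long) sizeof(int));
    printf("long:  %lu\n", (unsigned long) sizeof(long int));
    return 0;
}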

>--> SO: What is the trick to guaranteeing ranges and precision for
>variables?

What is the problem with _minimum_ ranges and _minimum_ precision?

> I believe I can guarantee integer ranges; floats and doubles
>worry me.

>Here is variable types I would like to guarantee a range/precision for:

>typedef signed char int8;
^^^^ This name may be misleading on some
implementations.


>typedef signed short int int16;

A 'short int' is always signed, but int16 is a misleading name for some
implementations.


>typedef signed long int int32;

Same as above.

[Further "portability typedefs" edited]

>typedef float real32;
>typedef double real64;
>typedef long double real128;

Since the precision requirements for "long double" do not yet exceed
those for "double", you will find a lot of implementations where those
assumptions do not hold true. OTOH, 80-bit quantities as representations
of "double" and "long double" are not uncommon.

So, the question is, what are you trying to gain by giving
"portability typedef" names to builtin C types? Your program
does not become more portable, but the implied sizes may
lead to wrong assumptions.
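
If you want such names anyway, a more defensible route is to derive
them from the guaranteed <limits.h> macros instead of hard-coding type
names. A minimal sketch, assuming only ANSI C (the typedef names are
the original poster's):

#include <limits.h>

/* Pick a type with at least 32 bits of range by testing the
   guaranteed macros, rather than assuming "long is 32 bits".
   The fallbacks are always valid: LONG_MAX is guaranteed to be
   at least 2147483647, ULONG_MAX at least 4294967295. */
#if INT_MAX >= 2147483647
typedef int int32;
#else
typedef long int int32;
#endif

#if UINT_MAX >= 0xFFFFFFFF
typedef unsigned int uint32;
#else
typedef unsigned long int uint32;
#endif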

Kurt

--
| Kurt Watzka Phone : +49-89-2180-6254
| wat...@stat.uni-muenchen.de

Stephan Wilms

May 7, 1997

John Oliver wrote:
>
> I am writing a C program that needs to be portable across unix, mac, and
> pc machines. The problem I am having is ensuring that the valid range,
> decimal precision, etc. remain constant for variables across
> platforms.
>
> For example:
> On my SUN, double has 15 digit precision with exponents from -308 to 308.
> On a MAC, double has 18 digit precision with exponents from -4931 to
> 4932.
> This creates platform compatibility problems.
>
> My C reference book gives me garbage like "the largest short int is no
> larger than the largest int and may be smaller. Typically, long will be
> bigger than short... etc" which is of no help.

That's not garbage, it's just that the ANSI-C standard defines it this
way. And it is a lot of help, if you know how to use it correctly. To
fully understand the whole problem, consider the following points:
- a char does not necessarily consist of 8 bits
- long, int, short and char might all have the same size (in bits)
  and value range

> --> SO: What is the trick to guaranteeing ranges and precision for
> variables?

The general solution would be to write a function which tests the
value ranges of the compiler and is called at the beginning of your
program. Test whether the precisions you require are available with
what the compiler has to offer.
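
A minimal sketch of such a startup check, assuming only ANSI C (the
particular requirements tested here are invented examples; substitute
your own):

#include <limits.h>
#include <stdio.h>
#include <stdlib.h>

/* Called at program start: abort if this implementation does not
   meet the (example) requirements the rest of the code relies on. */
void check_ranges(void)
{
    if (CHAR_BIT != 8) {
        fprintf(stderr, "this program needs 8-bit chars\n");
        exit(EXIT_FAILURE);
    }
    if (INT_MAX < 2147483647L) {
        fprintf(stderr, "this program needs 32-bit (or wider) int\n");
        exit(EXIT_FAILURE);
    }
}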

> I believe I can guarantee integer ranges; floats and doubles
> worry me.

There are a lot of useful constants in "float.h" which will help you
with checking your floating point precision.
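
For example, a floating point check along the same lines as the
integer one (a sketch; the thresholds are again only examples):

#include <float.h>
#include <stdio.h>
#include <stdlib.h>

/* Abort if double does not provide the decimal precision and
   exponent range this (hypothetical) program was written for. */
void check_doubles(void)
{
    if (DBL_DIG < 15 || DBL_MAX_10_EXP < 308) {
        fprintf(stderr, "double too imprecise on this platform\n");
        exit(EXIT_FAILURE);
    }
}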

> Here is variable types I would like to guarantee a range/precision for:
>
> typedef signed char int8;

> typedef signed short int int16;

> typedef signed long int int32;

> typedef signed long long int int64;
> typedef unsigned char uint8;
> typedef unsigned short int uint16;
> typedef unsigned long int uint32;
> typedef unsigned long long int uint64;

> typedef float real32;
> typedef double real64;
> typedef long double real128;

You *CAN'T* do it that way! Consider what I said above about ANSI-C
and type sizes: there is *no* guarantee that a certain type has a
certain number of bits.

One way to do it anyway would be to define the types you want by
conditional compilation depending on the compiler type. You will have
to write the correct "typedef"s for all the compilers you want to use
and *explicitly* exclude *all* other compilers, as sketched below.
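
A sketch of that scheme (__GNUC__ and _MSC_VER are common predefined
macros; consult your own compiler's documentation for the right test,
and note that the sizes assumed here are assumptions about those
particular compilers, not guarantees):

/* Select exact-size typedefs per known compiler; refuse to build
   anywhere else rather than silently guess. Assumes the common
   16-bit short / 32-bit long layout on these compilers. */
#if defined(__GNUC__)
typedef short int int16;
typedef long int int32;
#elif defined(_MSC_VER)
typedef short int int16;
typedef long int int32;
#else
#error "Unknown compiler: please add typedefs for this platform."
#endif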

My advice would be to define your minimum requirements for char, int,
long, float, double and write a checking function (like I mentioned
above) which tests whether the compiler meets these requirements.

Stephan
(self appointed member of the campaign against grumpiness in c.l.c)

Lawrence Kirby

May 7, 1997

In article <5ko9vg$nmq$1...@sparcserver.lrz-muenchen.de>
wat...@stat.uni-muenchen.de "Kurt Watzka" writes:

>John Oliver <jol...@dsbmail.itg.ti.com> writes:

...

>>My C reference book gives me garbage like "the largest short int is no
>>larger than the largest int and may be smaller. Typically, long will be
>>bigger than short... etc" which is of no help.

However you are also guaranteed minimum ranges for each type, e.g. short
and int are guaranteed to be able to represent all integers in the range
-32767 to 32767, and long is guaranteed to be able to represent all
integers in the range -2147483647 to 2147483647. For nearly all
practical purposes that is all you need to know (along with the ranges for
the other integral types).
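
In practice that means choosing a type from the value range you need,
not from an assumed bit width. For instance, this (hypothetical)
helper relies only on those guaranteed minima:

/* Summing shorts: each value can reach 32767, so the running
   total can exceed int's guaranteed range. long is guaranteed
   to hold at least +/-2147483647, which suffices here for
   n up to 32767. */
long sum_shorts(const short *a, int n)
{
    long sum = 0;
    int i;
    for (i = 0; i < n; i++)
        sum += a[i];
    return sum;
}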

>Well, but how would you express
>
> sizeof(char) <= sizeof(short int) <= sizeof(int) <= sizeof(long int)

The standard doesn't guarantee that although it would be a very odd
implementation where it was not the case. It does guarantee that
sizeof(char)==1 and nothing can be smaller than that. It also guarantees
that:

SCHAR_MAX <= SHRT_MAX <= INT_MAX <= LONG_MAX

and

SCHAR_MIN >= SHRT_MIN >= INT_MIN >= LONG_MIN

and

UCHAR_MAX >= SCHAR_MAX
USHRT_MAX >= SHRT_MAX
UINT_MAX >= INT_MAX
ULONG_MAX >= LONG_MAX

and that CHAR_MIN,CHAR_MAX corresponds to either SCHAR_MIN,SCHAR_MAX or
0,UCHAR_MAX.
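
A minimal program to see what a given implementation actually provides
(the orderings above are guaranteed; the concrete values are not):

#include <limits.h>
#include <stdio.h>

int main(void)
{
    /* Print this implementation's limits; only their ordering
       and minimum magnitudes are fixed by the standard. */
    printf("SCHAR_MAX = %d\n", SCHAR_MAX);
    printf("SHRT_MAX  = %d\n", SHRT_MAX);
    printf("INT_MAX   = %d\n", INT_MAX);
    printf("LONG_MAX  = %ld\n", LONG_MAX);
    printf("UCHAR_MAX = %u\n", (unsigned) UCHAR_MAX);
    return 0;
}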

--
-----------------------------------------
Lawrence Kirby | fr...@genesis.demon.co.uk
Wilts, England | 7073...@compuserve.com
-----------------------------------------


David Gimeno Gost

May 8, 1997

John Oliver <jol...@dsbmail.itg.ti.com> wrote:

> For example:
> On my SUN, double has 15 digit precision with exponents from -308 to 308.
> On a MAC, double has 18 digit precision with exponents from -4931 to
> 4932.
> This creates platform compatibility problems.

Not really. You are assuming 80-bit doubles on the Mac, which is not
necessarily true; it depends on the target processor (i.e. 68K or PowerPC)
and compiler settings. I suggest you look into your compiler's
documentation.

For 68K processors, you can use IEEE 32-bit single, IEEE 64-bit double or
IEEE 80-bit extended precision floating types. Even if you choose 80-bit
doubles (the default) you can still use "short double" (not standard C) for
64-bit floating point numbers. There is no 128-bit floating point type
for 68K processors; long double is 80-bit (stored as 96 bits if you compile
for a 68881 FPU).

For PowerPC processors, you can use IEEE 32-bit single or IEEE 64-bit
double precision floating types. There is no 80-bit floating type and I'm
not sure about 128-bit. You may want to ask in
comp.sys.mac.programmer.codewarrior for 128-bit floating point support.

> My C reference book gives me garbage like "the largest short int is no
> larger than the largest int and may be smaller. Typically, long will be
> bigger than short... etc" which is of no help.

Well, I wouldn't call this garbage. It's just that the C language does
not make assumptions about any particular implementation. You're looking
in the wrong place. If you are going to write code that is platform-
and/or compiler-dependent you should look at your compiler's documentation
rather than at the definition of the language. Here is what I do on the
Mac; it may work on other platforms as well (if your compiler does not
support short double for 68K processors, tell it to use 8-byte doubles and
use double instead):

/* Floating types of known size. */
typedef float Float32;
#if defined( __MC68K__ )
typedef short double Float64;
#else
typedef double Float64;
#endif

/* Fastest floating types having at least the required precision. */
#if defined( __MC68K__ )
typedef long double FastSingle;
typedef long double FastDouble;
#else
typedef float FastSingle;
typedef double FastDouble;
#endif

Hope that helps. There are other issues related to the portability of
floating point data types I would like to know about, although I'm not
sure whether this is the right newsgroup to ask them in. For example, do
the Sun, Mac and PC implementations of 32-bit single precision floating
point numbers use the same bit pattern?
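
One empirical way to check is to dump the bytes of a known value, as in
this sketch (on IEEE 754 machines 1.0f has the pattern 3F 80 00 00,
though byte order still differs between big- and little-endian systems):

#include <stdio.h>
#include <string.h>

int main(void)
{
    float f = 1.0f;
    unsigned char bytes[sizeof(float)];
    size_t i;

    /* Copy the object representation out and print it in hex. */
    memcpy(bytes, &f, sizeof f);
    for (i = 0; i < sizeof f; i++)
        printf("%02X ", (unsigned) bytes[i]);
    putchar('\n');
    return 0;
}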

--
David Gimeno Gost. DGG Software.
E-Mail: d...@grn.es
Your Macintosh can do whatever you want... with our help, of course!
Copyright (c) 1996-1997 DGG Software. All rights reserved.
