
long long -- part of what ANSI standard?


Jay Zipnick

Mar 31, 1997

I have seen a number of references, and FAQs, recommending avoidance of
the type "long long", as it is a non-standard extension. Today, I was
corrected and told that "long long is ANSI", and the person proceeded to cite
lengthy, official-looking modifications to the ANSI C standard. (This
reply was posted today in comp.sys.mac.programmer.codewarrior, in the
article "Re: MSL Fun", the second response with that title from MW Ron.)

Can someone please clarify if long long is a part of an *official* ANSI
standard (C/C9X/other?) or draft standard, and as of when. Or was I just
quoted a proposal that was never accepted?

Please also reply to jzip...@home.com.

Thanks,
Jay Zipnick

Peter Seebach

Mar 31, 1997

In article <jzipnick-310...@c420522-a.snvl1.sfba.home.com>,

It is in C9X.

I would say, at this point,

DO NOT USE IT UNDER ANY CIRCUMSTANCES.

1. It is very unpopular, and, however distant it may be, it is *possible*
that it will be taken out.
2. The rules for it changed, significantly, twice in the last meeting alone.

DO NOT USE THIS TYPE.

We are not sure how the promotion rules work on it, it is *NOT* a part of any
official ANSI or ISO standard, and it is essentially impossible for it to be
part of the ANSI or ISO language before late 1998.

>Please also reply to jzip...@home.com.

Sorry, I'm too lazy to cc replies.

-s
--
Copyright 1997 Peter Seebach - seebs at solon.com - C/Unix Wizard
I am not actually gla...@nancynet.com but junk mail is accepted there.
The *other* C FAQ, the hacker FAQ, et al. http://www.solon.com/~seebs
Unsolicited email (junk mail and ads) is unwelcome, and will be billed for.

Clive D.W. Feather

Apr 1, 1997

In article <jzipnick-310...@c420522-a.snvl1.sfba.home.com>,
Jay Zipnick <jzip...@best.com> writes

>Can someone please clarify if long long is a part of an *official* ANSI
>standard (C/C9X/other?) or draft standard, and as of when.

It is not part of the current C Standard (sometimes informally called
C89 or C90). It is part of the current draft for the next revision of
the Standard (sometimes informally called C9X) and was added last June.

--
Clive D.W. Feather | Director of Software Development | Home email:
Tel: +44 181 371 1138 | Demon Internet Ltd. | <cl...@davros.org>
Fax: +44 181 371 1037 | <cl...@demon.net> |
Written on my laptop - please reply to the Reply-To address <cl...@demon.net>

Peter Curran

Apr 1, 1997

On 31 Mar 1997 22:50:52 -0600 in article <5hq47c$7...@solutions.solon.com>
se...@solutions.solon.com (Peter Seebach) wrote:
(regarding "long long"):

<snip>

>2. The rules for it changed, significantly, twice in the last meeting alone.

Which reminds me - I have been watching the archive sites carefully for notes
from the last meeting (Hawaii?), but nothing has appeared so far. Are there any
reports available about what happened?

--
Peter Curran Xpcu...@acm.org
(remove X for actual address)

Mark Brader

Apr 1, 1997

Jay Zipnick:

> > I have seen a number of references, and FAQs, recommending avoidance of
> > the type "long long", as it is a non-standard extension. Today, I was
> > corrected and told that "long long is ANSI", and the person proceeded to cite
> > lengthy, official-looking modifications to the ANSI C standard. (This
> > reply was posted today in comp.sys.mac.programmer.codewarrior, in the
> > article "Re: MSL Fun", the second response with that title from MW Ron.)

That would be <MWRon-31039...@aumi1-a04.ccm.tds.net>. I wondered
if it could be an April Fools joke, but if the Date line reached us here
reasonably intact, then it seems to have been posted before
April 1 began in the time zone where it was posted.



> > Can someone please clarify if long long is a part of an *official* ANSI

> > standard (C/C9X/other?) or draft standard, and as of when. Or was I just
> > quoted a proposal that was never accepted?

C9X is *not* an official ANSI (or ISO) standard at this point; it is the
interim informal name for the next revised version of the standard. The
idea is that if it is adopted in, say, 1999, then we will begin to speak
of C99. Presumably if it is not possible to have it in place by 1999,
the interim informal name will change to C0X. *Hmmm.*

The original ANSI standard has been replaced by a substantially identical
ISO version and then amended three times, but the changes posted by "MW Ron"
were not part of any of the amendments. For unofficial but accurate
information on the actual amendments, see <http://www.lysator.liu.se/c>.

Peter Seebach:
> It is in C9X.

Right -- so it's not part of the standard at present, and may not be
adopted. In other words, you were quoted a proposal that has not *yet*
been accepted.



> I would say, at this point,
>
> DO NOT USE IT UNDER ANY CIRCUMSTANCES.

Sounds like a good suggestion to me, but then I'm opposed to the change.
Actually, I'm opposed to C9X altogether; I think the existing standard
should have been reaffirmed, but that didn't happen.

Of course, the situation is different on an implementation that provides
long long as an extension, with rules like:

| Note that, in order to use long long support, __MSL_LONGLONG_SUPPORT__
| must be defined when the MSL C library is built as well as when the
| application is built.

That technique *does* conform to the standard, so you're free to use that
extension on that implementation by making that definition. But anyone
shouting that "long long is ANSI" is, at least, premature.
--
Mark Brader, m...@sq.com "What waters? We're in the desert."
SoftQuad Inc., Toronto "I was misinformed." -- Casablanca

My text in this article is in the public domain.

MW Ron

Apr 1, 1997

In article <1997Apr1.1...@sq.com>, m...@sq.com (Mark Brader) wrote:

>C9X is *not* an official ANSI (or ISO) standard at this point; it is the
>interim informal name for the next revised version of the standard.

>Of course, the situation is different on an implementation that provides
>long long as an extension, with rules like:
>
>| Note that, in order to use long long support, __MSL_LONGLONG_SUPPORT__
>| must be defined when the MSL C library is built as well as when the
>| application is built.
>
>That technique *does* conform to the standard, so you're free to use that
>extension on that implementation by making that definition. But anyone
>shouting that "long long is ANSI" is, at least, premature.

That is how it is implemented in Metrowerks.

Mark, Peter, and others were correct.
I misunderstood and therefore I mis-spoke (wrote).

Sorry, I was premature ANSI :).

Ron

--
METROWERKS Ron Liechty
"Software at Work" MW...@metrowerks.com

John R. Mashey

Apr 9, 1997

In article <5hq47c$7...@solutions.solon.com>, se...@solutions.solon.com (Peter Seebach) writes:
|>
|> It is in C9X.

|>
|> I would say, at this point,
|>
|> DO NOT USE IT UNDER ANY CIRCUMSTANCES.
|>
|> 1. It is very unpopular, and, however distant it may be, it is *possible*
|> that it will be taken out.
|> 2. The rules for it changed, significantly, twice in the last meeting alone.
|>
|> DO NOT USE THIS TYPE.
|>
|> We are not sure how the promotion rules work on it, it is *NOT* a part of any
|> official ANSI or ISO standard, and it is essentially impossible for it to be
|> part of the ANSI or ISO language before late 1998.

Sometimes, standards lag practice: that's life.

Many people have strong opinions about the goodness/badness of
1) Having a 64-bit type at all.
2) If so, whether or not it should be called long long, or
something else.

Many people with strong opinions may (or may not) have much experience with:
a) 64-bit micros
b) C for 64-bit programming models (where ptr is 64-bits)
c) C that is clean for either 32- or 64-bit code, running
on a 64-bit CPU.
d) C that works when you mix 32- and 64-bit code on same platform
e) All of this in the presence of C/FORTRAN mixtures.

Many of the people who *do* have that experience seem to think long long
or some equivalent is needed...

Regardless of people's opinions of the goodness or badness thereof:

1. Two major computer companies have been shipping, since 1994, systems with:
- 64-bit micros
- 64-bit OSs
- enough memory that this actually comes up in practice.
Cray Research, and other supercomputer vendors, were doing this earlier.

2. DEC's UNIX went straight to a pure 64-bit model, with the so-called "LP64":
int 32
long 64
ptr 64
long long 64

3. SGI's IRIX (6.x versions, starting in 1994) supports both:

64-bit 32-bit
int 32 32
long 64 32
ptr 64 32
long long 64 64

Many of these types, of course, are often hidden under typedefs.

4. In late 1995, the companies involved in the ASPEN group
(including Intel, HP, Sun, IBM, and others)
had numerous discussions about this; most people wanted to be able to
mix 32-bit and 64-bit programs on the same machine. People ended up
going with the same model as above, among other things, because of
the existence-proof of DEC & SGI systems, and large numbers of applications,
which tended to overpower any theoretical complaints that LP64 was bad.

5. Having been involved in these arguments since the 1980s, I wouldn't
want to try to repeat the motivations once again. Every approach has
its pluses and minuses.

6. However, the approach above *is* the approach being taken by a very large
chunk of the computer industry, in extant practice, and in (potential)
future standards (like C9X, or whatever it ends up getting called).

7. While I might avoid "long long" in favor of typedefs, in the same way
as I might avoid other types, I do observe that *some* 64-bit type is
needed, for **32-bit** C:
a) To write straightforward C code that lets *both* 32-bit and 64-bit
programs describe a data structure that must be exchanged
externally. This is relevant to both 32- and 64-bit CPUs,
and sometimes shows up in file formats.
b) To write straightforward 32-bit C that works on 32-bit
and 64-bit CPUs, and gets the performance gains available for
some codes on the latter. This need not be cryptography or
bit-pushing, but could be as simple as the wish to have good code
for 32-bit C programs that want files bigger than 2GB, i.e.,
like UNIX Large File Summit efforts.


And in general, in a mixed-32/64-bit environment, where many OSs are
going to end up, it is a *fact* that programmer sanity is helped by
having both 32-bit and 64-bit models be able to say, when necessary,
that a data object is definitely 32-bit or definitely 64-bit.

Finally, you might be surprised to find that long long is enthusiastically
used by various non-UNIX applications of 64-bit CPUs, which use 32-bit models
but get important speedups via long long. A few examples, I'm told, are
Cisco routers and, of course, Nintendo N64s...

8. As an exercise, if you have access to an SGI IRIX 6.x system,
go to /usr/include and grep for "long long".

9. People may object on various grounds ... but some people *had*
to figure out what to do, no later than 1991 (and actually, earlier),
to be able to provide what their customers wanted, to use the chips well.

It is the inherent nature of standards efforts that it takes a while
to catch up.

--
-john mashey DISCLAIMER: <generic disclaimer: I speak for me only...>
EMAIL: ma...@sgi.com DDD: 415-933-3090 FAX: 415-967-8496
USPS: Silicon Graphics/Cray Research 6L-005,
2011 N. Shoreline Blvd, Mountain View, CA 94043-1389

Peter Seebach

Apr 10, 1997

In article <5ih98r$a...@murrow.corp.sgi.com>,

John R. Mashey <ma...@mash.engr.sgi.com> wrote:
>Many of the people who *do* have that experience seem to think long long
>or some equivalent is needed...

I'm mostly going on what happens on my home computers, and what happens to the
language.

I run NetBSD; this means we have 64 bit types and 8/16/32 bit types. On some
of our platforms, the 64 bit type is "long long", and long is 32; on others,
the 64 bit type is "long" and there is no long long. Those platforms have a
different set of porting problems, and, it turns out, are the ones on which
correct code is the most likely to work.

>2. DEC's UNIX went straight to a pure 64-bit model, with the so-called "LP64":
> int 32
> long 64
> ptr 64
> long long 64

This is basically a correct implementation; without "long long", it's even a
conforming implementation of C which provides full support for the hardware's
capabilities.

>7. While I might avoid "long long" in favor of typedefs, in the same way
>as I might avoid other types, I do observe that *some* 64-bit type is
>needed, for **32-bit** C:
> a) To write straightforward C code that lets *both* 32-bit and 64-bit
> programs describe a data structure that must be exchanged
> externally. This is relevant to both 32- and 64-bit CPUs,
> and sometimes shows up in file formats.
> b) To write straightforward 32-bit C that works on 32-bit
> and 64-bit CPUs, and gets the performance gains available for
> some codes on the latter. This need not be cryptography or
> bit-pushing, but could be as simple as the wish to have good code
> for 32-bit C programs that want files bigger than 2GB, i.e.,
> like UNIX Large File Summit efforts.

Exactly... For this reason, the correct thing to do is use 64-bit longs if you
need a 64-bit integer type. Then, all existing correct code remains
correct. "long long" *breaks existing code*. (Because existing code has been
given an ironclad guarantee by the standard that long is the largest type,
and yes, real code breaks mysteriously when this is not true.)

>And in general, in a mixed-32/64-bit environment, where many OSs are
>going to end up, it is a *fact* that programmer sanity is helped by
>having both 32-bit and 64-bit models be able to say, when necessary
>that a data object is definitely 32-bit or definitely 64-bit.

If you want a 32-bit object, and you're concerned about 64-bit values, you can
probably safely assume that int is 32 bits.

>Finally, you might be surprised to find that long long is enthusiastically
>used by various non-UNIX applications of 64-bit CPUs, who use 32-bit models,
>but get important speedups via long long. A few examples, I'm told, are:
>CISCO routers and, of coruse, Nintendo N64s...

I do not object to the 64 bit type; I object to the syntax error and abuse of
the type system.

>9. People may object on various grounds ... but some people *had*
>to figure out what to do, no later than 1991 (and actually, earlier),
>to be able to provide what their customers wanted, to use the chips well.

And some of them got it right; if everyone had done that, C9X would have been
able to do a lot more productive work, and spend less time trying to fix "long
long".

>It is the inherent nature of standards efforts that it takes a while
>to catch up.

It is the inherent nature of hasty patches to not really solve the problem at
all, and to merely postpone it. In addition, the canonical "long long" breaks
many, many rules... The integral promotions are wrong, the type of decimal
constants is wrong, and the integral promotions end up doubling in complexity
because of the additional type.

The solution that should have happened didn't get enough time or effort to be
completed, because the debates over long long, and the struggles to get the
entire library and standard cleaned up to allow for the new type, took much,
much, longer than they were worth.

<inttypes.h> is also in the standard, and provides (on machines capable of the
support) 8/16/32/64 bit integral types without breaking the type system.

We may still be able to salvage this; some existing code *will* be broken by
the long long botch, because the type rules have to change in undesirable
ways. If nothing else, it appears that either the type system rules will be
radically inconsistent, or we will no longer have any unsigned decimal
constants.

A real solution, one which lets the user specify integral sizes, would have
been preferable. If you doubt this, wait a couple of years and see what
monstrosities are invented as all of the vendors scramble to provide the
128-bit type, which will probably get called "long long long", except that
some vendors will make it "long long", and some will spell it int128_t.

I think the main thing that offends me is that "long long" is absolutely and
completely worse than a new keyword "int64_t"; it's a *syntax error*, for
crying out loud!

Dan Pop

Apr 10, 1997

In <5ih98r$a...@murrow.corp.sgi.com> ma...@mash.engr.sgi.com (John R. Mashey) writes:

>7. While I might avoid "long long" in favor of typedefs, in the same way
>as I might avoid other types, I do observe that *some* 64-bit type is
>needed, for **32-bit** C:
> a) To write straightforward C code that lets *both* 32-bit and 64-bit
> programs describe a data structure that must be exchanged
> externally. This is relevant to both 32- and 64-bit CPUs,
> and sometimes shows up in file formats.
> b) To write straightforward 32-bit C that works on 32-bit
> and 64-bit CPUs, and gets the performance gains available for
> some codes on the latter. This need not be cryptography or
> bit-pushing, but could be as simple as the wish to have good code
> for 32-bit C programs that want files bigger than 2GB, i.e.,
> like UNIX Large File Summit efforts.
>
>

>And in general, in a mixed-32/64-bit environment, where many OSs are
>going to end up, it is a *fact* that programmer sanity is helped by
>having both 32-bit and 64-bit models be able to say, when necessary
>that a data object is definitely 32-bit or definitely 64-bit.

The model with 32-bit int's and 64-bit long's satisfies all these
requirements. No need for a long long type. There is no valid
justification for having long as a 32-bit type on a 32-bit platform.

Dan
--
Dan Pop
CERN, IT Division
Email: Dan...@cern.ch
Mail: CERN - PPE, Bat. 31 1-014, CH-1211 Geneve 23, Switzerland

Tanmoy Bhattacharya

Apr 10, 1997

ma...@mash.engr.sgi.com (John R. Mashey) writes:
<snip>

> Regardless of people's opinions of the goodness or badness thereof:
<snip>

> 3. SGI's IRIX (6.x versions, starting in 1994) supports both:
>
> 64-bit 32-bit
> int 32 32
> long 64 32
> ptr 64 32
> long long 64 64
>
> Many of these types, of course, are often hidden under typedefs.
>
> 4. In late 1995, the companies involved in the ASPEN group
> (including Intel, HP, Sun, IBM, and others)
> had numerous discussions about this; most people wanted to be able to
> mix 32-bit and 64-bit programs on the same machine. People ended up
> going with the same model as above, among other things, because of
> the existence-proof of DEC & SGI systems, and large numbers of applications,
> which tended to overpower any theoretical complaints that LP64 was bad.

What exactly is this 32-bit/64-bit business? Are you referring to the
size of the ptr in the above table? If so, what exactly does that have
to do with long long? What is wrong with the following table?

64-bit 32-bit
int 32 32
long 64 64
ptr 64 32

Exactly what advantage did it give any one to have two distinct types
(either int and long, or long and long long) with exactly the same
number of bits?

>
> 5. Having been involved in these arguments since the 1980s, I wouldn't
> want to try to repeat the motivations once again. Every approach has
> its pluses and minuses.
>

I am sorry I wasn't using any of those machines at that time, and I
wasn't around to hear the arguments in the 1980s. I, however, do believe
that if there is a real need for a feature, at least the need can be
explained. An absence of explanation on the grounds that they exist
does not sound very convincing.

If he himself is unwilling to `repeat the motivations' again, could
anyone who is familiar please explain the `plusses' of what was called
the `32-bit' scheme by the previous poster.

The answer I believe is important. When we get a 128-bit type (who
knows why: but supposing we did), will we once again change the
standard to fit that in? I think someone should seriously suggest an
addendum to the standard:

``the following portions of the standard are almost certain to
change in the future revisions making programs that depend on it not
strictly conformant ten years later. Programmers are advised to use
them only if they, like the COBOL programmers who never thought their
codes containing 2 digit years will be in use at the end of the
century, think that the code is really of short duration''.

Otherwise, one should make absolutely certain that nothing in the
standard will require a quiet change when a long long long type is
introduced.

> 6. However, the approach above *is* the approach being taken by a very large
> chunk of the computer industry, in extant practice, and in (potential)
> future standards (like C9X, or whatever it ends up getting called).

Which seems like a mistake right now. But then, I do not know enough
about the arguments (or maybe, not enough C) to understand *why* one
needs 5 integral types when we are only talking of supporting 4 sizes
at most!

>
> 7. While I might avoid "long long" in favor of typedefs, in the same way
> as I might avoid other types, I do observe that *some* 64-bit type is
> needed, for **32-bit** C:
> a) To write straightforward C code that lets *both* 32-bit and 64-bit
> programs describe a data structure that must be exchanged
> externally. This is relevant to both 32- and 64-bit CPUs,
> and sometimes shows up in file formats.
> b) To write straightforward 32-bit C that works on 32-bit
> and 64-bit CPUs, and gets the performance gains available for
> some codes on the latter. This need not be cryptography or
> bit-pushing, but could be as simple as the wish to have good code
> for 32-bit C programs that want files bigger than 2GB, i.e.,
> like UNIX Large File Summit efforts.
>

All this would have been *much* clearer if you had defined
32-bit C. Does 32-bit C mean 32-bit ints, *and* longs the same size as
ints, and yet you need 64-bit types in user code? Why?

>
> And in general, in a mixed-32/64-bit environment, where many OSs are
> going to end up, it is a *fact* that programmer sanity is helped by
> having both 32-bit and 64-bit models be able to say, when necessary
> that a data object is definitely 32-bit or definitely 64-bit.

You can't do that in portable code anyway: many machines still do not
have a 64 bit type. But, it is easy for the programmer to assume that
long is the longest integer type available on the machine (i.e. 64 bit
on the machines you are talking about), and that int is the natural
type on that machine (which, from your example seems to be indeed 32
bits on these machines).

I see nothing wrong in a proposal that tries to add an `exact n bit'
type (or constraint violation if unavailable) in addition to the
current `at least 8/16/32 bit' types. I have no objection to `smallest
at least n bits' or `fastest at least n bits' either (with constraint
violation if not available). Nor do I have any objection to the
fortran-like `implementation defined' `kind' parameters. All of these
are fine: but tell me exactly *which* of these questions does `long
long' address?

>
> Finally, you might be surprised to find that long long is enthusiastically
> used by various non-UNIX applications of 64-bit CPUs, which use 32-bit models
> but get important speedups via long long. A few examples, I'm told, are
> Cisco routers and, of course, Nintendo N64s...
>

And exactly *what* does that prove? That a wrong solution can be
adopted by a wide variety of people! Great!

> 8. As an exercise, if you have access to an SGI IRIX 6.x system,
> go to /usr/include and grep for "long long".

What does that show? I have no idea what this argument is trying to
prove. Yes, SGI IRIX 6.x is one of the systems which went ahead with
lack of forethought: does that fault justify a change in the standard?

When exactly are we going to see `int far *x;' becoming ANSI C?
Because of lack of compiler technology (or whatever reasons) that was
probably far more useful when introduced. Why wasn't it adopted into
the standard? Prior art was there, widespread use was there, as was
ease of implementation (`#define far' would do for most
implementations).

>
> 9. People may object on various grounds ... but some people *had*
> to figure out what to do, no later than 1991 (and actually, earlier),
> to be able to provide what their customers wanted, to use the chips well.
>

And in 1991, was there any reason they could not think of __long_long
instead? In any case, the inclusion of any nonstandard header can
activate `long long', as can the presence or absence of any flag that
makes the compiler non-standard compliant. (Isn't that how SGI
implements it anyway?)

The objection is very specifically to the adoption of a half-baked fix
to a nonexistent problem: the corresponding real problem remaining
unfixed. Such `fixes' should have no place in any standard.

> It is the inherent nature of standards efforts that it takes a while
> to catch up.

And it ought to be the inherent nature of standards not to heed
implementations which wilfully extend the language with no heed to
namespace issues. I have no idea whether it was wilful or careless
in this case, but I do not think this mistake should be continued.

long long was not standard then, and it is not standard now. Why
should one make it standard in the future: by what argument?

Cheers
Tanmoy
--
tan...@qcd.lanl.gov(128.165.23.46) DECNET: BETA::"tan...@lanl.gov"(1.218=1242)
Tanmoy Bhattacharya O:T-8(MS B285)LANL,NM87545 H:#9,3000,Trinity Drive,NM87544
Others see <gopher://yaleinfo.yale.edu:7700/00/Internet-People/internet-mail>,
<http://alpha.acast.nova.edu/cgi-bin/inmgq.pl>or<ftp://csd4.csd.uwm.edu/pub/
internetwork-mail-guide>. -- <http://nqcd.lanl.gov/people/tanmoy/tanmoy.html>
fax: 1 (505) 665 3003 voice: 1 (505) 665 4733 [ Home: 1 (505) 662 5596 ]

John R. Mashey

Apr 11, 1997

In article <y8zpv7j...@qcd.lanl.gov>, Tanmoy Bhattacharya <tan...@qcd.lanl.gov> writes:

|> What exactly is this 32-bit/64-bit business? Are you referring to the
|> size of the ptr in the above table? If so, what exactly does that have
|> to do with long long? What is wrong with the following table?
|>
|> 64-bit 32-bit
|> int 32 32
|> long 64 64
|> ptr 64 32

What's *wrong* is that a vendor who did it might go out of business...
or spend a lot of money, or perhaps be assassinated by angry 3rd-party
software vendors (at least), and probably any other customers who
did their own programming. Most of these people have existing
code that runs on 32-bit UNIXs, which generally thought longs were 32 bits,
and *changing* the size of a datatype would break that code.

This might have been a good choice to have made *before* UNIX was ported to
the VAX (~1978); put another way, life would have been cleaner all-around
if typedefs had been in C from day 1, and the code that we all learned from
had used them, and people had looked ahead 20 years with regard to portability,
and there had been commonly-used typedefs for "integer of convenient
size" and "integers of specified sizes, and they are that size regardless of
implementation", and if there had been only power-of-2-bytes machines
(i.e., no 36-bit GECOS and Univac 1100s to confuse the issue, which
certainly inhibited having __int8, __int16, etc.).
But this didn't happen, and it's good to reflect on how things might have
been better, but they weren't, and so there's some funny baggage.


|> Exactly what advantage did it give any one to have two distinct types
|> (either int and long, or long and long long) with exactly the same
|> number of bits?

Recall how we got here, with * as "multiword implementation, i.e., slower",
and using MIPS examples:

A: 32-bit chips, compilers generate multiword expansions/function calls ...

B: 64-bit chips, 32-bit model, can run A, but of course,
if recompiled, can take advantage of 64-bit registers and operations
long long a, b, c; a = b * c;
is ~4X faster than the equivalent function call from A.
(This is the model that some videogames, routers, other embedded control
code likes since they want the performance, but could care less about
big addressability).

C: 64-bit chips, 64-bit programming model.

            PDP-11  VAX  MIPS A  MIPS B  MIPS C
int         16      32   32      32      32
long        32*     32   32      32      64
ptr         16      32   32      32      64
long long   -       -    64*     64      64

Many people may *not* be familiar with compiler/environment bootstrapping:
just as it was *extremely* useful, when doing the early 32-bit UNIX ports,
that there already existed PDP-11 compilers that supported 32-bit
integers, and that much of the source code base that already existed
used long when it needed 32-bit-sized data (rather than int[2] things).
In the late 1970s, there had to be code portability, and sometimes
reasonable ways to describe data structures transferred between machines.

|> I am sorry I wasn't using any of those machines at that time, and I
|> wasn't around to hear the arguments in the 1980s. I, however, do believe
|> that if there is a real need for a feature, at least the need can be
|> explained. An absence of explanation on the grounds that they exist
|> does not sound very convincing.

Hopefully the above description helps.

It wasn't the motivation I was unwilling to repeat, it's that I've
got megabytes of email & postings on this topic...


|> If he himself is unwilling to `repeat the motivations' again, could
|> anyone who is familiar please explain the `plusses' of what was called
|> the `32-bit' scheme by the previous poster.

I thought I'd explained these above. Is it not enough to say that:

a) Commercial vendors who *changed* the meaning of long in 32-bit
C from 32-bit to 64-bit would be destroyed by their ISVs...

b) In making a transition from mostly 32-bit to mixed 32- and 64-bit,
which is very much like the 16-bit => mixed 16/32 bit change in the 1970s,
it is very useful to have a straightforward way to describe the
next bigger size on the older systems, even if the
code compiles to long sequences.

c) And if (unlike the PDP-11/VAX days) you expect to support mixes of
32- and 64-bit software running in the same environment, on the same
system, and much more often, sharing data, you are going to find that
you want a first-class 64-bit integer type.

|> The answer I believe is important. When we get a 128-bit type (who
|> knows why: but supposing we did), will we once again change the

Probably, although this is most likely decades away. As I once
emailed somebody: "I've done my duty for 64-bit; I'll have retired before
128-bit, somebody else can worry about that one."

If we're lucky, maybe the standards process will be able to anticipate
that one with some more generic solution. Again, for *this* one,
those of us worried about it early just weren't able to get enough people
worried about it early enough...


|> > 6. However, the approach above *is* the approach being taken by a very large
|> > chunk of the computer industry, in extant practice, and in (potential)
|> > future standards (like C9X, or whatever it ends up getting called).
|>
|> Which seems like a mistake right now. But then, I do not know enough
|> about the arguments (or maybe, not enough C) to understand *why* one
|> needs 5 integral types when we are only talking of supporting 4 sizes
|> at most!

It's not what you'd do if you were starting now.


|> > 7. While I might avoid "long long" in favor of typedefs, in the same way
|> > as I might avoid other types, I do observe that *some* 64-bit type is
|> > needed, for **32-bit** C:
|> > a) To write straightforward C code that lets *both* 32-bit and 64-bit
|> > programs describe a data structure that must be exchanged
|> > externally. This is relevant to both 32- and 64-bit CPUs,
|> > and sometimes shows up in file formats.
|> > b) To write straightforward 32-bit C that works on 32-bit
|> > and 64-bit CPUs, and gets the performance gains available for
|> > some codes on the latter. This need not be cryptography or
|> > bit-pushing, but could be as simple as the wish to have good code
|> > for 32-bit C programs that want files bigger than 2GB, i.e.,
|> > like UNIX Large File Summit efforts.
|> >
|>
|> All this would have been *much* clearer if you would have defined
|> 32-bit C. Does 32-bit C mean 32-bit ints, *and* longs the same size as
|> ints, and yet you need 64-bit types in user code? Why?

Sorry, I meant what is sometimes called ILP32 (integer, long, ptr == 32).

|> >
|> > And in general, in a mixed-32/64-bit environment, where many OSs are
|> > going to end up, it is a *fact* that programmer sanity is helped by
|> > having both 32-bit and 64-bit models be able to say, when necessary
|> > that a data object is definitely 32-bit or definitely 64-bit.
|>
|> You can't do that in portable code anyway: many machines still do not
|> have a 64 bit type. But, it is easy for the programmer to assume that
|> long is the longest integer type available on the machine (i.e. 64 bit
|> on the machines you are talking about), and that int is the natural
|> type on that machine (which, from your example seems to be indeed 32
|> bits on these machines).

A PDP-11 didn't actually have 32-bit integer hardware, but C had a 32-bit
long on it, and it was a good thing that it did.

Again, I personally would have preferred to have a different syntax
than long long, but at the time people *had* to make decisions,
Convex already had it, there was at least one more vendor, and
probably most important, GNU C had it, and there was starting to be
code written that way.

John R. Mashey

Apr 11, 1997, 3:00:00 AM

In article <5ij0f6$b...@solutions.solon.com>, se...@solutions.solon.com (Peter Seebach) writes:

|> Exactly... For this reason, the correct thing to do is use 64 bit longs if you
|> need a 64-bit integer type. Then, all existing correct code remains
|> correct. "long long" *breaks existing code*. (Because existing code has been
|> given an iron clad guarantee by the standard that long is the largest type,
|> and yes, real code breaks mysteriously when this is not true.)

As noted elsewhere, every choice breaks some existing code.
I posted more on this, but it is *not* a correct choice, on a system
that has forever used ILP32 (integer, long, pointer = 32 bits), to change
long to 64-bits; software vendors will definitely kill you.


|> <inttypes.h> is also in the standard, and provides (on machines capable of the
|> support) 8/16/32/64 bit integral types without breaking the type system.

<inttypes.h> came from the 1991 work mentioned in an earlier posting.


|> A real solution, one which lets the user specify integral sizes, would have
|> been preferable. If you doubt this, wait a couple of years and see what
|> monstrosities are invented as all of the vendors scramble to provide the
|> 128-bit type, which will probably get called "long long long", except that
|> some vendors will make it "long long", and some will spell it int128_t.

Given that DRAM expands at 4X (2 bits)/3 years, and that virtual memory
more-or-less expands ~ memory sizes, and that we're in middle of 32-64-bit
transition now (call that 1992 start), and we just added ~ 32-bits,
32 bits / (2/3 bits/year) = 48 years + 1992 = 2040, *assuming*
DRAM keeps growing at same rate. *Assuming* heavier use of memory-mapping/
sparse-file techniques, maybe it gets relevant by 2020.

Of course, 4X/3 (or 2X / 1.5 years = Moore's Law) is guaranteed not to
run forever, or even until 2040, so it may be that we do not see
128-bit (integer) processors, in any way like we saw 32, and now 64-bit
CPUs. I wouldn't be surprised to see 128-bit floating-point sometime.

======
16->32, 32->64: we've done this 2X thing twice; the first one was relatively
easy: Dennis just added long, well before the 16->32 move, and that was that.
32->64 has been more painful, for various reasons:
- There aren't enough people around who went through the previous time.
- It has more constraints that didn't exist 20 years ago, such as
CPUs that run both sizes of code together.

So: *maybe*, if we're lucky, it will go like this:

- By 2000, every microprocessor family used in general-purpose systems
will have at least 1 64-bit member delivered in systems.
- By 2002, 32/64-bit portability will be as well-understood as
16/32-bit portability got to be ~1980 inside Bell Labs.
- If not already in C, surely the scars will be fresh enough that people
may adopt extensions that will cover 128-bit, and the
difference between types-sizes that want to float and ones
that do not (or at least, people will settle into well-accepted
#ifdefs that achieve this result). Hopefully, this could actually
be in the standard by 2010. If it's not, then the problem will be
forgotten; everyone will assume that chips are 64-bit,
and somebody (else) will get to do this again.
In any case: the *right* solution, regardless of syntax, is that
first-class-support for 128-bit ints will be in place in compilers
for 64- and 32-bit CPUs 2-3 years before the first 128-bitter
appears, and hopefully earlier.

Peter Curran

Apr 11, 1997, 3:00:00 AM

On 11 Apr 1997 00:12:09 GMT in article <5ijvkp$j...@murrow.corp.sgi.com>

ma...@mash.engr.sgi.com (John R. Mashey) wrote:

>In article <y8zpv7j...@qcd.lanl.gov>, Tanmoy Bhattacharya <tan...@qcd.lanl.gov> writes:
>
>> What exactly is this 32-bit/64-bit business? Are you referring to the
>> size of the ptr in the above table? If so, what exactly does that have
>> to do with long long? What is wrong with the following table?
>>
>> 64-bit 32-bit
>> int 32 32
>> long 64 64
>> ptr 64 32
>
>What's *wrong* is that a vendor who did it might go out of business...
>or spend a lot of money, or perhaps, be assassinated by angry 3rd-party
>software vendors (at least), and probably any other customers who
>did their own programming. Most of these people have existing
>code that runs in 32-bit UNIXs, which generally thought longs were 32-bits,
>and *changing* the size of a datatype.

<snip>

While I understand what you are saying, I really can't believe the problem is as
bad as you are suggesting. I cannot believe that anyone who wrote any
significant amount of code using "long long" did not realize that this was a
non-standard structure, and make provisions to change the declarations in a
simple way when the standard caught up with the real world (e.g. using
typedefs). (Anyone who didn't is a fool, and the world is probably better off
without their code :-)

As to the problem of running code that assumes "long" is 32 bits, a lot of that
can be handled by implementations providing a "32-bit" and a "64-bit" mode - in
32-bit mode, a "long" is 32 bits, in 64-bit mode, it is 64 bits.

This approach doesn't solve all the problems of expanding C to 64-bit models in
the "natural" way (i.e. simply deciding that a "long" is 64 bits) but I think
most problems could be addressed if it is taken for granted that the (relatively
few) people who have written 64-bit code have done so using some common sense,
and that existing implementations can continue to support any required
non-standard extensions when appropriate compiler options are selected.

This still leaves the projected problem of 128-bit integers, but I agree with
you that that is a long time in the future, and that more general solutions to
the integer identification technique should be developed before then.

--
Peter Curran pcu...@acm.Xorg

Norman Diamond

Apr 11, 1997, 3:00:00 AM

In article <5ih98r$a...@murrow.corp.sgi.com>, ma...@mash.engr.sgi.com (John R. Mashey) writes:
>Many people have strong opinions about the goodness/badness of
>1) Having a 64-bit type at all.

I haven't seen any dispute over that in this newsgroup. Consider that C
doesn't even have a 32-bit type. Around page 9 of K&R-1, one of the
implementations they list has a 36-bit int.

>2) If so, whether or not it should be called long long, or
>something else.

Bingo.

>Many of the people who *do* have that experience seem to think long long
>or some equivalent is needed...

Bignums are important, yes. If you want lisp, you know where to find it.
Or you even know how to use C to write a library that simulates lisp.
(Or if you don't, lisp implementors will show you how :-)

>7. While I might avoid "long long" in favor of typedefs, in the same way
>as I might avoid other types, I do observe that *some* 64-bit type is
>needed, for **32-bit** C:

Yup. So is a 256-bit type. This is why "long long" is not the way to do it.

>9. People may object on various grounds ... but some people *had*
>to figure out what to do, no later than 1991 (and actually, earlier),

One of your digits is upside down :-)

>It is the inherent nature of standards efforts that it takes a while
>to catch up.

You mean the way it took the standard a long time to catch up with the
way the real world was using trigraphs?

Look, just because some important work needs to be done, that doesn't mean
it has to be done as inconsistently and incompletely as possible.
--
<< If this were the company's opinion, I would not be allowed to post it. >>
"I paid money for this car, I pay taxes for vehicle registration and a driver's
license, so I can drive in any lane I want, and no innocent victim gets to call
the cops just 'cause the lane's not goin' the same direction as me" - J Spammer

John R. Mashey

Apr 11, 1997, 3:00:00 AM

OK, lots of other people *must* know more than I do about this,
and *they* are very sure of what they know, so this discussion
certainly needs nothing from me any more.

James Kuyper

Apr 11, 1997, 3:00:00 AM

John R. Mashey wrote:
>
> In article <5ij0f6$b...@solutions.solon.com>, se...@solutions.solon.com (Peter Seebach) writes:
>
> |> Exactly... For this reason, the correct thing to do is use 64 bit longs if you
> |> need a 64-bit integer type. Then, all existing correct code remains
> |> correct. "long long" *breaks existing code*. (Because existing code has been
> |> given an iron clad guarantee by the standard that long is the largest type,
> |> and yes, real code breaks mysteriously when this is not true.)
> As noted elsewhere, every choice breaks some existing code.
> I posted more on this, but it is *not* a correct choice, on a system
> that has forever used ILP32 (integer, long, pointer = 32 bits), to change
> long to 64-bits; software vendors will definitely kill you.

Making long 64 bits will kill some code which assumes that long is 32
bits; creating a larger type named 'long long' will break some code that
assumes that 'long' is the largest standard type. All the way back to
K&R C, anyone who assumed that a particular type was a particular size
was writing un-portable code, and should have realized it. On the other
hand, the assumption that 'long' is the largest standard type has always
been true and portable. If somebody's code must break to support 64 bit
integers, I would prefer that it be the code that was never portable
anyway.

There is a real and important need for the ability to specify exact or
minimum integer sizes; the right solution is to add int32, int64 types,
etc. Until that gets added to the standard, we'll have to make do with
typedefs surrounded by complicated #if's.

mbr...@austin.ibm.com

Apr 11, 1997, 3:00:00 AM

In article <danpop.8...@news.cern.ch>, Dan Pop <Dan...@cern.ch> wrote:
>
>The model with 32-bit int's and 64-bit long's satisfies all these
>requirements. No need for a long long type. There is no valid justifi-
>cation for having long as a 32-bit type on a 32-bit platform.

...excepting that no-one in the real world does it that way.

Dan, I was a participant in the ASPEN talks. With few exceptions, in the
32bit UNIX world the model is ILP32 [int==long==ptr==32bits]. We never
seriously discussed having it any other way. The companies we
represented would have been hacked to bits by our applications ISVs, not
to mention customers with their own applications.

We discussed making a proposed standard, and sending the proposal to
X/Open, that 64bit systems should have one model. Then the argument
started over ILP64 or LP64.

While we finally agreed that this shouldn't go into the ASPEN proposal
as a standard -- it was an ABI statement for a group that produces API
specifications -- we agreed upon LP64 along the way to that decision.

Why? The compiler and research teams found no intrinsic advantage to
either ILP64 or LP64 from a performance or compiler implementation
standpoint. It came down to the need for a 32-bit type in this 64bit
world, to make porting binary data easier. In ILP64, we would have had
to invent a new type. We realized that LP64 hurt the people who mixed up
ints and longs -- but as I think you would agree, we felt those people
were broken anyway.

We never seriously discussed long long, as at the time it wasn't even
proposed for a future C standard, and it certainly wasn't in the present
one.

ASPEN never did come out with an ABI spec. But if it had, it would have
been LP64 for the reasons above.

cheers,
Mark
--
"As the most participatory form of mass speech yet developed, the Internet
deserves the highest protection from governmental intrusion." -Judge Dalzell
Mark Brown RS/6000 AIX System Architecture (512)838-3926 T/L 678-3926
mbr...@austin.ibm.com IBM Corporation, Austin, Texas

trul...@student.docs.uu.se

Apr 11, 1997, 3:00:00 AM

John R. Mashey <ma...@mash.engr.sgi.com> wrote:
> In article <5ij0f6$b...@solutions.solon.com>, se...@solutions.solon.com (Peter Seebach) writes:

> |> Exactly... For this reason, the correct thing to do is use 64 bit longs if you
> |> need a 64-bit integer type. Then, all existing correct code remains
> |> correct. "long long" *breaks existing code*. (Because existing code has been
> |> given an iron clad guarantee by the standard that long is the largest type,
> |> and yes, real code breaks mysteriously when this is not true.)

> As noted elsewhere, every choice breaks some existing code.
> I posted more on this, but it is *not* a correct choice, on a system
> that has forever used ILP32 (integer, long, pointer = 32 bits), to change
> long to 64-bits; software vendors will definitely kill you.

Anybody that assumes that int (or long) is exactly 32 (or 64) bits wide
deserves whatever they get.
Note that the C Standard only defines *minimum* widths for the different
types. If you need a variable of exactly n bits the correct way is to use
some typedef so that you can easily port it to other implementations.

There does seem to be a need for types with exactly n bits and support for that
seems to be included in C9X.
If an implementation needs a 64-bit wide type the obvious way to do this is to
have char/short/int/long be 8/16/32/64 bits wide respectively. This will break
some code but programs that break by this are already broken IMAO.

--
Erik Trulsson
trul...@student.docs.uu.se


Norman Diamond

Apr 12, 1997, 3:00:00 AM

In article <5ik26l$l...@murrow.corp.sgi.com>, ma...@mash.engr.sgi.com (John R. Mashey) writes:
>In article <5ij0f6$b...@solutions.solon.com>, se...@solutions.solon.com (Peter Seebach) writes:

>>Exactly... For this reason, the correct thing to do is use 64 bit longs if you
>>need a 64-bit integer type. Then, all existing correct code remains
>>correct. "long long" *breaks existing code*. (Because existing code has been
>>given an iron clad guarantee by the standard that long is the largest type,
>>and yes, real code breaks mysteriously when this is not true.)

>As noted elsewhere, every choice breaks some existing code.

Sure, every choice will break some existing already-broken code.
Just as if someone drives recklessly and totals his car, a drunken
truck driver can come along and total the car even worse.

Now let's talk about existing code that isn't already broken. Good
code does not break when long is 83 bits. However, good code breaks
when long is no longer the longest integer type.

Are you so jealous of programmers who wrote good code, that you will
force the committee to smash them to bits instead of letting bad code
go its own way?

One of the fortunate areas of the standard that wasn't already broken is
that it let implementors provide 64-bit arithmetic without breaking good
code. Are you so jealous that you must break that part of the standard too?

Weren't you famous once upon a time for succeeding as an engineer in an
industry that is showing increasing intolerance for engineers? Do you
really have to join the battle against solid engineering principles?

>I posted more on this, but it is *not* a correct choice, on a system
>that has forever used ILP32 (integer, long, pointer = 32 bits), to change
>long to 64-bits; software vendors will definitely kill you.

No problem. On a system that has forever used ILP32, where
software vendors will definitely kill you if you depart from ILP32,
you keep providing ILP32 forever. You and your software vendors will
be happy. Unbroken code will also still run on your system.

When others need to implement 64-bit systems and don't want to break
good code, and their software vendors wrote good code, please leave
them alone instead of destroying them.

Eric Gindrup

Apr 14, 1997, 3:00:00 AM

Norman Diamond wrote:
>> In article <5ij0f6$b...@solutions.solon.com>,
>> se...@solutions.solon.com (Peter Seebach) writes:
>>> Exactly... For this reason, the correct thing to do is use 64 bit
>>> longs if you need a 64-bit integer type. Then, all existing
>>> correct code remains correct. "long long" *breaks existing code*.
>>> (Because existing code has been given an iron clad guarantee by
>>> the standard that long is the largest type, and yes, real code
>>> breaks mysteriously when this is not true.)
> Now let's talk about existing code that isn't already broken. Good
> code does not break when long is 83 bits. However, good code breaks
> when long is no longer the longest integer type.

Example? Justification? Perhaps I think of "break" as a bit stronger
than "doesn't give the best answer this implementation might provide."

> Are you so jealous of programmers who wrote good code, that you will
> force the committee to smash them to bits instead of letting bad
> code go its own way?

[troll, troll, troll]
[...]


> No problem. On a system that has forever used ILP32 forever, where
> software vendors will definitely kill you if you depart from ILP32,
> you keep providing ILP32 forever. You and your software vendors will
> be happy. Unbroken code will also still run on your system.

And ultimately both you and your software vendors will be unhappy when:
1) someone develops non-ILP32 code that does more,
2) clients stop buying the vendors' code because it is archaic,
3) you stop selling to your vendors because they don't exist.

You are trying to fight uphill against one of the least surmountable
forces in computing: more, faster, farther. ILP32 will die. So will
everything else in-place now.

> When others need to implement 64-bit systems and don't want to break
> good code, and their software vendors wrote good code, please leave
> them alone instead of destroying them.

> -- J Spammer

They will destroy themselves with their inflexibility.
-- Eric Gindrup ! gin...@okway.okstate.edu

Norman Diamond

Apr 15, 1997, 3:00:00 AM

In article <3352AD...@okway.okstate.edu>, Eric Gindrup <gin...@okway.okstate.edu> writes:
>Norman Diamond wrote:
>>> In article <5ij0f6$b...@solutions.solon.com>,
>>> se...@solutions.solon.com (Peter Seebach) writes:
>>>> Exactly... For this reason, the correct thing to do is use 64 bit
>>>> longs if you need a 64-bit integer type. Then, all existing
>>>> correct code remains correct. "long long" *breaks existing code*.
>>>> (Because existing code has been given an iron clad guarantee by
>>>> the standard that long is the largest type, and yes, real code
>>>> breaks mysteriously when this is not true.)
>> Now let's talk about existing code that isn't already broken. Good
>> code does not break when long is 83 bits. However, good code breaks
>> when long is no longer the longest integer type.

>Example?

void *fp();
printf("%lu", (long) sizeof fp);

>Justification?

There is none. Your ally was trying to find some.

>Perhaps I think of "break" as a bit stronger
>than "doesn't give the best answer this implementation might provide."

Hey, you and I agree on something! Are you sure that's what you meant?

>> Are you so jealous of programmers who wrote good code, that you will
>> force the committee to smash them to bits instead of letting bad
>> code go its own way?

>[troll, troll, troll]

Nah, your trolls are in your next paragraph, clown.

>> No problem. On a system that has forever used ILP32, where
>> software vendors will definitely kill you if you depart from ILP32,
>> you keep providing ILP32 forever. You and your software vendors will
>> be happy. Unbroken code will also still run on your system.

>And ultimately both you and your software vendors will be unhappy when:
>1) someone develops non-ILP32 code that does more,

You mean like I35 L83 P42? Good code does not break there. Vendors of
good software will not be unhappy there. You already quoted me, above,
saying that good code does not break when long is 83 bits. Did you read
what you were quoting?

>2) clients stop buying the vendors' code because it is archaic,

Well, yeah 83-bit longs are archaic. Powers of 2 are more common now,
like I32 L64 P64. Good code doesn't break there either.

Of course P128 already exists, and that makes L128 kind of important,
too bad even more logical vendors haven't noticed that yet.

>You are trying to fight uphill against one of the least surmountable
>forces in computing: more, faster, farther.

Nah, I'm trying to fight uphill against *the* absolute least surmountable
force, stupidity.

>ILP32 will die.

Already did. L64 and P64 are pretty common now.

>> When others need to implement 64-bit systems and don't want to break
>> good code, and their software vendors wrote good code, please leave
>> them alone instead of destroying them.
>> -- J Spammer

>They will destroy themselves with their inflexibility.

L128 is not the least inflexible system. Of course, clowns like you can
destroy them with your inflexibility.

John R MacMillan

Apr 15, 1997, 3:00:00 AM

|> ... However, good code breaks
|> when long is no longer the longest integer type.
|
|Example? Justification? Perhaps I think of "break" as a bit stronger
|than "doesn't give the best answer this implementation might provide."

I've seen (and written) plenty of code that assumes long is the longest
type (seeing as that was guaranteed). The usual scenario involves
handling typedef-ed integral types, often in conjunction with library
routines. For example, how do you print out a size_t with printf()?

I define ``break'' to include ``causes programs to invoke implementation-
defined behaviour where they did not before''. I agree that in many
programs, the break may cause little more than ``doesn't give the best
answer the implementation can provide'', but in the code controlling,
say, my city's traffic lights, or the nearest nuclear reactor...

|And ultimately both you and your software vendors will be unhappy when:
|1) someone develops non-ILP32 code that does more,
|2) clients stop buying the vendors' code because it is archaic,
|3) you stop selling to your vendors because they don't exist.

One could provide both an ILP32 flavour, and a new one with larger
types. The ``wide'' version may cause badly written code to break; if
it does, use the ILP32 mode until you fix your code. The wide version
could even offer an implementation-defined 32 bit type (eg. __int32),
and the ILP32 could offer an implementation-defined wider type (eg.
__int64), or both could offer the proposed inttypes.h header file.

That gives the software vendors opportunity to:
1) develop non-ILP32 code that does more,
2) update and correct their archaic code,
3) keep building their current code base.

|You are trying to fight uphill against one of the least surmountable
|forces in computing: more, faster, farther. ILP32 will die. So will
|everything else in-place now.

I think you misunderstand. I doubt anyone would argue that people want
more than ILP32 and that it will eventually go away (or at least become
relatively insignificant). What is at issue is whether or not we should
use a transition that can render currently correct code incorrect (long
long), or by forcing bad code to be (eventually) rewritten.

Both sides have their points. For myself, I prefer the latter, but I
know how much the users of the compiler I used to work on would have
screamed...


Peter Shenkin

Apr 15, 1997, 3:00:00 AM

Eric Gindrup wrote:

> Norman Diamond wrote:
> > good code breaks
> > when long is no longer the longest integer type.

> Example? Justification? Perhaps I think of "break" as a bit stronger
> than "doesn't give the best answer this implementation might provide."

How about indexing into the longest possible array using an unsigned
long as the index.

--
************* "The past ain't what it used to be" (M. Reboul) *************
* Peter S. Shenkin; Chemistry, Columbia U.; 3000 Broadway, Mail Code 3153 *
** NY, NY 10027; she...@columbia.edu; (212)854-5143; FAX: 678-9039 ***
MacroModel WWW page: http://www.cc.columbia.edu/cu/chemistry/mmod/mmod.html

Peter Seebach

Apr 15, 1997, 3:00:00 AM

In article <3352AD...@okway.okstate.edu>,

Eric Gindrup <gin...@okway.okstate.edu> wrote:
>> Now let's talk about existing code that isn't already broken. Good
>> code does not break when long is 83 bits. However, good code breaks
>> when long is no longer the longest integer type.

>Example?

Okay. Let's take that example.

Tcl's file size routine used to do something like this:
printf("%ld", buf.st_size);
(Technically, incorrect if st_size could be smaller than long, but a
good beginning.)

It breaks if long is shorter than buf.st_size, and we are bigendian.

The cast to (long) *breaks* the code on a system with 64-bit file sizes; it
is broken, IMHO, to report a 2 GB file as having -2GB of bytes. The
cast to (unsigned long) and %lu format lasts longer, but it is at *least* as
broken to report a 4 GB file as being empty, or a 4.1GB file as being 100MB.

However, if there is a long long on some systems, but not on others, there is
*NO* way to do this correctly.

If long is 64 bits on some systems, 32 on others, and perhaps 256 on next
year's BiggieComp, the code survives if written with the conversion to
unsigned long, and works *everywhere* - as long as long is the largest type.

>Justification? Perhaps I think of "break" as a bit stronger
>than "doesn't give the best answer this implementation might provide."

Gives a wrong answer strikes me as bad.

There are a fair number of obvious examples where code which depends on all
integral values fitting in long or unsigned long gets broken by having a
larger integral type; with long, this excess can lead to undefined behavior.

>> Are you so jealous of programmers who wrote good code, that you will
>> force the committee to smash them to bits instead of letting bad
>> code go its own way?
>[troll, troll, troll]

No, he's right. I feel basically the same way about long long.

If we were in a bargaining situation, I wouldn't give up function pointers to
get rid of long long, but I'd probably be willing to lose bitfields to get rid
of it.

>You are trying to fight uphill against one of the least surmountable
>forces in computing: more, faster, farther. ILP32 will die. So will
>everything else in-place now.

Then we go to IP32L64, or I32LP64, and continue. Only *badly* broken code is
hurt by this. If you feel the need, provide a warning whenever someone
converts something that won't fit.

>They will destroy themselves with their inflexibility.

Don't be silly; the correct approach is not inflexible, it's just not
redundant. Having two integer types with the same number of bits is not a
feature which gives any kind of additional power to the user.

John David Galt

Apr 15, 1997, 3:00:00 AM

Wouldn't it be much cleaner just to use the types that OSF DCE uses?

int16, unsigned16, int32, unsigned32, int64, unsigned64

Each is defined as _at least_ the specified size. This avoids problems on systems
with oddball word sizes. Of course, the 64-bit types may have to be C++ classes on
older hardware.

Put these in the standard, and code will be more portable than it is now.

John David Galt

Peter Seebach

Apr 15, 1997, 3:00:00 AM

In article <5j1crd$a...@mulga.cs.mu.OZ.AU>,
Fergus Henderson <f...@mundook.cs.mu.OZ.AU> wrote:
>John David Galt <j...@but-i-dont-like-spam.boxmail.com> writes:
>
>>Wouldn't it be much cleaner just to use the types that OSF DCE uses?
>>
>>int16, unsigned16, int32, unsigned32, int64, unsigned64
>>
>>Each is defined as _at least_ the specified size.
>
>Why would that be any better than using
>
> short, unsigned short, int, unsigned, long long, unsigned long long
>
>respectively (assuming that `long long' is guaranteed to be at least 64 bits)?

Because "int64" is not a syntax error for which the C language requires a
diagnostic.

Because the way in which it would extend to "int128" or "unsigned128" when
IPv6 implementations become popular is obvious. (Do we go to "long long
long"?)

Because the guarantees of what the sizes are are unambiguous.
For instance, in your example, currently, "int" is inferior to "int32" in that
it is not required to be as large; perhaps you want "long" as the "at least 32
type".

C has a weakness here; I am not convinced at all that "long long" addresses
the weakness enough to be worth an additional 5 characters in a declaration,
let alone additional support in the syntax and grammar.

Even in C9X, when qualifiers become idempotent, "long long" will remain an
inconsistency at best; a wart in a language that doesn't need any more.

I doubt anyone will try to claim that C is a perfect, elegant language, free
of flaws in form or spirit. However, "long long" is at least as bad as
trigraphs, and solves its problem even less completely.

Norman Diamond

Apr 16, 1997, 3:00:00 AM

In article <5iusgn$3u1$1...@nntpd.lkg.dec.com>, dia...@tbj.dec.com (Norman Diamond) writes:
> void *fp();
> printf("%lu", (long) sizeof fp);

Here is the obvious, pedantic correction:
printf("%lu", (unsigned long) sizeof fp);

Nitpickers, whether or not qualified in any other respect, have the right
to pick nits with my error, if they wish. Nonetheless, nits do not affect
the underlying reasoning, and my error does not justify the unjustifiable.

Fergus Henderson

Apr 16, 1997, 3:00:00 AM

John David Galt <j...@but-i-dont-like-spam.boxmail.com> writes:

>Wouldn't it be much cleaner just to use the types that OSF DCE uses?
>
>int16, unsigned16, int32, unsigned32, int64, unsigned64
>
>Each is defined as _at least_ the specified size.

Why would that be any better than using

short, unsigned short, int, unsigned, long long, unsigned long long

respectively (assuming that `long long' is guaranteed to be at least 64 bits)?

--
Fergus Henderson <f...@cs.mu.oz.au> | "I have always known that the pursuit
WWW: <http://www.cs.mu.oz.au/~fjh> | of excellence is a lethal habit"
PGP: finger f...@128.250.37.3 | -- the last words of T. S. Garp.

Norman Diamond

Apr 17, 1997, 3:00:00 AM

In article <5j1crd$a...@mulga.cs.mu.OZ.AU>, f...@mundook.cs.mu.OZ.AU (Fergus Henderson) writes:
>John David Galt <j...@but-i-dont-like-spam.boxmail.com> writes:
>>Wouldn't it be much cleaner just to use the types that OSF DCE uses?
>> int16, unsigned16, int32, unsigned32, int64, unsigned64
>>Each is defined as _at least_ the specified size.

>Why would that be any better than using
> short, unsigned short, int, unsigned, long long, unsigned long long
>respectively (assuming that `long long' is guaranteed to be at least 64 bits)?

Since you missed these threads the last two times around, here's why:
If "long long" is imposed, then sure it will be at least 64 bits long.
But why should a program be forced to use a 4096-bit long long when it
only needs a 64-bit medium_longish short short short long long?

At least I'm glad to see you don't break good code:


> short, unsigned short, int, unsigned, long long, unsigned long long

So the complete order is:
signed char, unsigned char, short, unsigned short, int, unsigned,
long long, unsigned long long, long, unsigned long
so that long is still the longest integral type and programs that used
the longest integral type with its guaranteed meaning won't be broken
by your suggestion. Confusing but not breaking good programs. Hmmm.

Fergus Henderson
Apr 17, 1997

Fergus Henderson <f...@mundook.cs.mu.OZ.AU> wrote:
>John David Galt <j...@but-i-dont-like-spam.boxmail.com> writes:
>
>>Wouldn't it be much cleaner just to use the types that OSF DCE uses?
>
>>int16, unsigned16, int32, unsigned32, int64, unsigned64
>
>>Each is defined as _at least_ the specified size.
>
>Why would that be any better than using
>
> short, unsigned short, int, unsigned, long long, unsigned long long
>
>respectively (assuming that `long long' is guaranteed to be at least 64 bits)?

Oops, I meant `long, unsigned long' rather than `int, unsigned'.

I guess that partly answers my question ;-)

Actually I think the typedefs are a good idea, but I still want `long long'.
Yes, I know it breaks existing code, but requiring implementations to change
the size of `long' if they want to introduce a larger integral type would
cause even worse problems.

(I also want `long char' as an alternative to `wchar_t' too. That's
just an aesthetic thing: IMHO the fundamental types should have names
that are keywords. I find it ugly that you have to include a header
file to get access to the name for a fundamental type.)

Bradd W. Szonye
Apr 17, 1997

Fergus Henderson <f...@mundook.cs.mu.OZ.AU> wrote in article
<5j4nl8$7...@mulga.cs.mu.OZ.AU>...

>
> (I also want `long char' as an alternative to `wchar_t' too. That's
> just an aesthetic thing: IMHO the fundamental types should have names
> that are keywords. I find it ugly that you have to include a header
> file to get access to the name for a fundamental type.)

Amen! Somebody expressing in public the absolute ugliness of the name
wchar_t. In C++, you get your wish at least: wchar_t is a basic type and a
keyword. Unfortunately, it's a keyword with the same naming convention as
library typedefs. A much much better solution would have been:

new type: long char
in library: typedef long char wchar_t;

... thus maintaining backward compatibility (it would not break existing
programs) and giving us a "reasonably" named type. I'm not one to advocate
aesthetic issues, normally, but this is one that really upsets me (in the
way that a nagging itch does).
--
Bradd W. Szonye
bra...@concentric.net

Eric Gindrup
Apr 17, 1997

Norman Diamond wrote:
>
> In article <3352AD...@okway.okstate.edu>, Eric Gindrup <gin...@okway.okstate.edu> writes:
>> Norman Diamond wrote:
>>>> In article <5ij0f6$b...@solutions.solon.com>,
>>>> se...@solutions.solon.com (Peter Seebach) writes:

>>>>> Exactly... For this reason, the correct thing to do is use 64 bit
>>>>> longs if you need a 64 bit integer type. Then, all existing
>>>>> correct code remains correct. "long long" *breaks existing code*.
>>>>> (Because existing code has been given an iron clad guarantee by
>>>>> the standard that long is the largest type, and yes, real code
>>>>> breaks mysteriously when this is not true.)

>>> Now let's talk about existing code that isn't already broken. Good
>>> code does not break when long is 83 bits. However, good code breaks
>>> when long is no longer the longest integer type.
>
>> Example?
>

> void *fp();
> printf("%lu", (long) sizeof fp);
>

1. When did function pointers become integer types?
2. And while admittedly printing a signed value via an unsigned specifier
is not likely to give you what you want in all instances, I don't
see that this problem is caused by the representation sizes.

Your example fails to provide...
>> Justification?
[...]

>>> No problem. On a system that has used ILP32 forever,
>>> where software vendors will definitely kill you if you depart

>>> from ILP32, you keep providing ILP32 forever. You and your
>>> software vendors will be happy. Unbroken code will also still
>>> run on your system.
>

>> And ultimately both you and your software vendors will be unhappy
>> when:
> >1) someone develops non-ILP32 code that does more,
>

> You mean like I35 L83 P42? Good code does not break there.
> Vendors of good software will not be unhappy there. You already
> quoted me, above, saying that good code does not break when long
> is 83 bits. Did you read what you were quoting?

And what, *exactly* is it about I35 L83 P42 that breaks conforming
code or does not itself conform to the Standard? The Standard makes
no claim that pointers will have any particular size and the sizes
for int and long definitely satisfy the range requirements.

> >2) clients stop buying the vendors' code because it is archaic,
>

> Well, yeah 83-bit longs are archaic. Powers of 2 are more common
> now, like I32 L64 P64. Good code doesn't break there either.
>
> Of course P128 already exists, and that makes L128 kind of
> important, too bad even more logical vendors haven't noticed that
> yet.

This is only logical if some coder has decided to start putting
pointers into longs. To pick an example out of the air, most of
what Windows 3.x/95 wants you to stick in integers is so
non-standard and therefore so poorly engineered that it is no
surprise that compiler vendors would prefer to "help" Microsoft
see the light of portable programming.
Whoever thought that passing function pointers through longs was
a good idea was an idiot.

>> You are trying to fight uphill against one of the least
>> surmountable forces in computing: more, faster, farther.
>

> Nah, I'm trying to fight uphill against *the* absolute least
> surmountable force, stupidity.

And based upon your replies, you are losing.

>> ILP32 will die.
>
> Already did. L64 and P64 are pretty common now.
>
>>> When others need to implement 64-bit systems and don't want to break
>>> good code, and their software vendors wrote good code, please leave
>>> them alone instead of destroying them.
>>> -- J Spammer
>

>> They will destroy themselves with their inflexibility.
>

> L128 is not the least inflexible system. Of course, clowns like you
> can destroy them with your inflexibility.

> -- J Spammer

My offer still stands. What's an example of conforming code that
breaks when the implemented widths of intrinsic types are widened?
Your example was a non-example. You have provided no justification.

Eric Gindrup
Apr 17, 1997

John R MacMillan wrote:
>
> |> ... However, good code breaks
> |> when long is no longer the longest integer type.
> |
> |Example? Justification? Perhaps I think of "break" as a bit stronger
> |than "doesn't give the best answer this implementation might provide."
>

> I've seen (and written) plenty of code that assumes long is the longest
> type (seeing as that was guaranteed). The usual scenario involves
handling typedef-ed integral types, often in conjunction with library
> routines. For example, how do you print out a size_t with printf()?

Since size_t is implementation defined, you don't, except on one
implementation. You get to look in the documentation for each
implementation to see what specifier to use to print a size_t.
This is the implementor's problem to report and the programmer's
problem to never do.
You are guaranteed that you can declare whatever type size_t is.
I don't see any promise that you can print one out without jumping
through implementation defined hoops.

[...]


> One could provide both an ILP32 flavour, and a new one with larger
> types. The ``wide'' version may cause badly written code to break; if
> it does, use the ILP32 mode until you fix your code. The wide version
> could even offer an implementation-defined 32 bit type (eg. __int32),
> and the ILP32 could offer an implementation-defined wider type (eg.
> __int64), or both could offer the proposed inttypes.h header file.

A much better-sounding solution would be to require that wider
compilers have a switch that forces implementation defined behaviour
to completely forget about types longer than long. This should
retain the usability of archaic code that uses non-portable behaviour.

Remember portability is also about portability to subsequent versions
of a compiler. Failure to write code portably is not the fault of
the compiler implementor or language standardization committees.

> That gives the software vendors opportunity to:
> 1) develop non-ILP32 code that does more,
> 2) update and correct their archaic code,
> 3) keep building their current code base.
>

> |You are trying to fight uphill against one of the least surmountable
> |forces in computing: more, faster, farther. ILP32 will die. So will
> |everything else in-place now.
>

> I think you misunderstand. I doubt anyone would argue that people
> want more than ILP32 and that it will eventually go away (or at
> least become relatively insignificant). What is at issue is whether
> or not we should use a transition that can render currently correct
> code incorrect (long long), or by forcing bad code to be
> (eventually) rewritten.
>
> Both sides have their points. For myself, I prefer the latter, but I
> know how much the users of the compiler I used to work on would have
> screamed...

Non-conforming code does not have any guarantees. If we create
crutches for non-conforming code now, they will never go away. We'll
have these transitional supports in place for the next change in
the Standard which will also have more crutches because the writers
of non-conforming code will complain that their existing practice
must be protected. This will eventually result in stagnation for
the language, a language that I would much rather see live and grow.

I have not yet been presented with conforming code that does not
already depend on implementation-defined behaviour (or undefined
behaviour, per another poster) and that would break under the
proposition for long long. Until I do, I don't see that my opinions
will change.

The underlying error in Standard C is requiring that integral types
have extra semantics about what they're good for. A workable
solution would be to put a parametric integral type under all of the
scaffolding for the intrinsic types and then implement the intrinsic
types in terms of that parametric type. This would instantly destroy
the idea that long was the longest possible type on the
implementation, because something one bit longer could be
instantiated. The implementors would still be required to specify
to what int and long mapped in the parametric type, and would probably
try to support the current extra semantics (fits in a register, fast,
largest register, et c.) that practitioners have accreted.

Peter Seebach
Apr 17, 1997

In article <3356BF...@okway.okstate.edu>,

Eric Gindrup <gin...@okway.okstate.edu> wrote:
>> routines. For example, how do you print out a size_t with printf()?

>Since size_t is implementation defined, you don't, except on one
>implementation.

Nonsense!
printf("%lu\n", (unsigned long) sizeof(foo));
is *ABSOLUTELY GUARANTEED* to work; there can be no unsigned integral type in
a C implementation which can represent a value that cannot be represented
exactly in unsigned long.

>You are guaranteed that you can declare whatever type size_t is.
>I don't see any promise that you can print one out without jumping
>through implementation defined hoops.

You can convert any smaller type up to a larger type reliably, and the
standard clearly gives a *complete* list of the integral types; since it also
specifies that the range of unsigned long is a superset (not necessarily a
proper superset) of the ranges of all other unsigned types, the above
conversion is guaranteed to be lossless.

>Failure to write code portably is not the fault of
>the compiler implementor or language standardization committees.

Right, which is why people who assumed that long was no bigger than int, and
was exactly 32 bits, deserve to rewrite their code.

>Non-conforming code does not have any guarantees. If we create
>crutches for non-conforming code now, they will never go away. We'll
>have these transitional supports in place for the next change in
>the Standard which will also have more crutches because the writers
>of non-conforming code will complain that their existing practice
>must be protected. This will eventually result in stagnation for
>the language, a language that I would much rather see live and grow.

Right... This is *EXACTLY* the argument against "long long", that it is a
crutch to support broken code (code which is assuming either that long and int
are the same size, or that long is no bigger than 32 bits), and that it
breaks the *correct* code.

>I have not yet been presented with conforming code that does not
>already depend on implementation defined (or undefined in a
>message by another poster) that would break under the proposition
>for long long. Until I do, I don't see that my opinions will change.

You haven't comprehended the nature of casting, printf, and sizeof.

Converting a size_t (which is an unsigned integral type, remember) to unsigned
long is *guaranteed* to work in C; it is no longer guaranteed if size_t
becomes unsigned long long.

>The underlying error in Standard C is requiring that integral types
>have extra semantics about what they're good for. A workable
>solution would be to put a parametric integral type under all of the
>scaffolding for the intrinsic types and then implement the intrinsic
>types in terms of that parametric type. This would instantly destroy
>the idea that long was the longest possible type on the
>implementation, because something one bit longer could be
>instantiated. The implementors would still be required to specify
>to what int and long mapped in the parametric type, and would probably
>try to support the current extra semantics (fits in a register, fast,
>largest register, et c.) that practitioners have accreted.

Destroying the idea that long is the longest possible type breaks existing
code. It may be necessary to break this, but if we do, we should do it in
such a way as to *FIX* the problem; I would buy into a parametric type, but
"long long" is simply too bloody stupid.

"long long" has all the sensibility of "char far *".

James Kuyper
Apr 18, 1997

Eric Gindrup wrote:
>
> John R MacMillan wrote:
> >
> > |> ... However, good code breaks
> > |> when long is no longer the longest integer type.
> > |
> > |Example? Justification? Perhaps I think of "break" as a bit stronger
> > |than "doesn't give the best answer this implementation might provide."
> >
> > I've seen (and written) plenty of code that assumes long is the longest
> > type (seeing as that was guaranteed). The usual scenario involves
> > handling typedef-ed integral types, often in conjuction with library
> > routines. For example, how do you print out a size_t with printf()?
>
> Since size_t is implementation defined, you don't, except on one
> implementation. You get to look in the documentation for each
> implementation to see what specifier to use to print a size_t.
> This is the implementor's problem to report and the programmer's
> problem to never do.
> You are guaranteed that you can declare whatever type size_t is.
> I don't see any promise that you can print one out without jumping
> through implementation defined hoops.

Well, there is one: you are guaranteed to be able to safely cast size_t
to unsigned long because size_t must be an unsigned integral type, and
unsigned long is the largest unsigned integral type. Furthermore, you
are guaranteed to be able to print unsigned long using

    printf("sizeof(something)=%lu\n",
           (unsigned long) sizeof(something));

That first guarantee disappears if long long becomes the new longest
type. Code such as this which was based upon that guarantee will break
under any implementation which chooses to make size_t be unsigned long
long (assuming sizeof(unsigned long long) > sizeof(unsigned long),
which would almost certainly be the case).

Eric Gindrup
Apr 18, 1997

James Kuyper wrote:
> Well, there is one: you are guaranteed to be able to safely cast size_t
> to unsigned long because size_t must be an unsigned integral type, and
> unsigned long is the largest unsigned integral type. Furthermore, you
> are guaranteed to be able to print unsigned long using
>
> printf("sizeof(something)=%lu\n",
> (unsigned long)sizeof(something));
>
> That first guarantee disappears if long long becomes the new longest
> type. Code such as this which was based upon that guarantee will break
> under any implementation which chooses to make size_t be unsigned long
> long (assuming sizeof(unsigned long long) > sizeof(unsigned long),
> which would almost certainly be the case).

Don't allow unsigned long long.

James Kuyper
Apr 18, 1997

Eric Gindrup wrote:
>
> Norman Diamond wrote:
> >
> > In article <3352AD...@okway.okstate.edu>, Eric Gindrup <gin...@okway.okstate.edu> writes:
> >> Norman Diamond wrote:
> >>>> In article <5ij0f6$b...@solutions.solon.com>,
> >>>> se...@solutions.solon.com (Peter Seebach) writes:
> >>>>> Exactly... For this reason, the correct thing to do is use 64 bit
> >>>>> longs if you need a 64 bit integer type. Then, all existing
> >>>>> correct code remains correct. "long long" *breaks existing code*.
> >>>>> (Because existing code has been given an iron clad guarantee by
> >>>>> the standard that long is the largest type, and yes, real code
^^^^^^^^^^^^^^^^^^^^^^^^

> >>>>> breaks mysteriously when this is not true.)
> >>> Now let's talk about existing code that isn't already broken. Good
> >>> code does not break when long is 83 bits. However, good code breaks

> >>> when long is no longer the longest integer type.
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

> >
> >> Example?
> >
> > void *fp();
> > printf("%lu", (long) sizeof fp);
> >
>
> 1. When did function pointers become integer types?

They aren't, but you are guaranteed to be able to convert a pointer to
an integer and back again, and the resulting pointer must compare equal
to the original, IF the integer is large enough. I don't remember whether
the standard guarantees that there are any integral types that are large
enough to hold a pointer, but if there are, then 'long' is guaranteed to
be one of them. This guarantee, and any code based upon it, is broken if
'long long' is added to the language.

> 2. And admittedly printing a signed value via an unsigned specifier
> is not likely to give you what you want in all instances, I don't
> see that this problem is caused by the representation sizes.

I agree with you here, but I think that was a typo, and irrelevant to the
main thrust of his argument.

...


> > You mean like I35 L83 P42? Good code does not break there.

^^^^^^^^^^^^^^
...


> And what, *exactly* is it about I35 L83 P42 that breaks conforming
> code or does not itself conform to the Standard? The Standard makes

Nothing - which is what he said. It is 'long long' which could break
previously conforming code.

...


> My offer still stands. What's an example of conforming code that
> breaks when the implemented widths of intrinsic types is widened?

Check backward through the thread; the claim that you are responding to
is that it is 'long long' which breaks previously conforming code - NOT
increased widths of intrinsic type. See the first two sections above
that I marked with '^^^^^^^^^^^^'.

Bret Indrelee
Apr 18, 1997

In article <3356BA...@okway.okstate.edu>,

Eric Gindrup <gin...@okway.okstate.edu> wrote:
>
>Norman Diamond wrote:
>>
>> In article <3352AD...@okway.okstate.edu>, Eric Gindrup <gin...@okway.okstate.edu> writes:
>>> Norman Diamond wrote:
>>>>> In article <5ij0f6$b...@solutions.solon.com>,
>>>>> se...@solutions.solon.com (Peter Seebach) writes:
>>>>>> Exactly... For this reason, the correct thing to do is use 64 bit
>>>>>> longs if you need a 64 bit integer type. Then, all existing
>>>>>> correct code remains correct. "long long" *breaks existing code*.
>>>>>> (Because existing code has been given an iron clad guarantee by
>>>>>> the standard that long is the largest type, and yes, real code
>>>>>> breaks mysteriously when this is not true.)
>>>> Now let's talk about existing code that isn't already broken. Good
>>>> code does not break when long is 83 bits. However, good code breaks
>>>> when long is no longer the longest integer type.
>>
>>> Example?
>>
>> void *fp();
>> printf("%lu", (long) sizeof fp);
>>
>
>1. When did function pointers become integer types?

Gee, I thought that the sizeof operator returned an integer type.

>2. And admittedly printing a signed value via an unsigned specifier
> is not likely to give you what you want in all instances, I don't
> see that this problem is caused by the representation sizes.

I agree it would have been better if he had chosen to write "%ld" for
the format string.

>Your example fails to provide...
>>> Justification?
>[...]

Actually, it doesn't. Look at the example again.

---

Actually, I also dislike the 'long long' type. I would prefer that
implementations be required to have types that follow a particular
naming convention for each size of integer operand supported. For
example, __uint8, __uint16, __uint32, __uint64, etc could be
implemented on 8-bit char oriented machines; __uint6, __uint12,
__uint60 may be appropriate for other systems. We would also have
to define a *printf format string to cover these types.

The basic types (short, int, long) would continue to be of varying
size. When you use one of the underlying types, it will only work
on other conforming systems that use those same types.

This would give a nice clear way to add new types as machines progress,
yet allow the size of short/int/long to change with time. It would
also allow system headers to quit using the basic integer types, so
that a compiler could choose different types for short/int/long and
not break all the system interfaces.

-Bret
--
Bret Indrelee
br...@bit3.com #include <std_disclaimer.h>

Peter Seebach
Apr 18, 1997

In article <33581...@news3.paonline.com>, <mfi...@lynchburg.net> wrote:

>In <3357A3...@okway.okstate.edu>, Eric Gindrup <gin...@okway.okstate.edu> writes:
>>Don't allow unsigned long long.
>> -- Eric Gindrup ! gin...@okway.okstate.edu

>NOT ACCEPTABLE!!!!

Well, not acceptable if there's a long long. C doesn't need either, but
having one of them really requires having the others.

>Sorry for shouting, but I use (and need) unsigned long long all
>the time. Unsigned types are at least as important as the
>signed types.

How is the language even more broken with long long, but without unsigned long
long? Let us count the ways.

There would be positive values of integer types which could be represented in
signed, but not unsigned, types. There would be a signed type with no
corresponding unsigned type. There ... It's too horrible to even contemplate
...

"long long", in and of itself, is deeply and fundamentally a failure; it will
lead to bad code, it will break good code, and it will never, ever, solve a
problem that isn't better fixed by <inttypes.h>.

Michael Morrell
Apr 18, 1997

Peter Seebach (se...@solutions.solon.com) wrote:
> Converting a size_t (which is an unsigned integral type, remember) to unsigned
> long is *guaranteed* to work in C; it is no longer guaranteed if size_t
> becomes unsigned long long.

Is there a reason why the new standard can't allow "long long", but still
require that size_t is no longer than "unsigned long". Won't that allow
the existing code to still work?

Michael

mfi...@lynchburg.net
Apr 19, 1997

In <3357A3...@okway.okstate.edu>, Eric Gindrup <gin...@okway.okstate.edu> writes:
>Don't allow unsigned long long.
> -- Eric Gindrup ! gin...@okway.okstate.edu

NOT ACCEPTABLE!!!!

Sorry for shouting, but I use (and need) unsigned long long all
the time. Unsigned types are at least as important as the
signed types.

Bradd W. Szonye
Apr 19, 1997

Michael Morrell <mor...@cup.hp.com> wrote in article
<5j8hpo$3u5$1...@hpax.cup.hp.com>...

There is a practical problem with this. It's very likely that an
architecture with 64-bit integers will also have a 64-bit addressing model.
In that case, size_t should be sufficient to represent pointer ranges, that
is, size_t should also be 64-bit. Requiring size_t to fit in an unsigned
long then means that long is 64-bit, removing the necessity for a long long
type.

Peter Curran
Apr 19, 1997

On 18 Apr 1997 19:19:53 -0500 in article <5j9339$1...@solutions.solon.com>
se...@solutions.solon.com (Peter Seebach) wrote:

>"long long", in and of itself, is deeply and fundamentally a failure; it will
>lead to bad code, it will break good code, and it will never, ever, solve a
>problem that isn't better fixed by <inttypes.h>.

This seems to be virtually the unanimous opinion of the entire C community. The
only difference of opinion seems to be that, despite its horrors, omitting it
from C9x would inconvenience people currently using compilers that support it.
However, this seems to be more than balanced by the inconvenience to people
relying on the current standard. Further, according to informal reports, the
committee has gone through several versions of the "long long" concept. That
implies that the existing compilers support it in different ways, and so even if
it is added to the standard, it will be different from the way it is supported
in most or all existing compilers that have it - so almost all existing users
will be almost as inconvenienced as if it was not there.

Are there ANY arguments in favour of this thing???

--
Peter Curran pcu...@xacm.org
(remove x for actual address)

Fergus Henderson
Apr 21, 1997

pcu...@xacm.org (Peter Curran) writes:


>On 18 Apr 1997 19:19:53 -0500 in article <5j9339$1...@solutions.solon.com>
> se...@solutions.solon.com (Peter Seebach) (Peter Seebach) wrote:
>
>>"long long", in and of itself, is deeply and fundamentally a failure; it will
>>lead to bad code, it will break good code, and it will never, ever, solve a
>>problem that isn't better fixed by <inttypes.h>.
>
>This seems to be virtually the unanimous opinion of the entire C community.

Nope, it is just that those who think that "long long" is fundamentally
a failure are (understandably) quite vocal about it on usenet.

Those who are aware of the pros and cons and who think that (despite the
well-publicized problems) the pros outweigh the cons tend not to be so
concerned; since this is apparently not the first such discussion of this
topic, I wouldn't be surprised if they don't respond quite so fervently.

>The
>only difference of opinion seems to be that, despite its horrors, omitting it
>from C9x would inconvenience people currently using compilers that support it.

I think omitting it from C9X would also inconvenience people using compilers
that don't have any 64 bit type yet (e.g. on 32 bit systems). Some of them
would be very significantly inconvenienced. In such a situation, a compiler
vendor has several possible alternatives:

(1) Break binary backwards compatibility by changing the size of
`long'. This would be very very costly -- prohibitively so.

(2) Don't introduce any 64-bit type. But customers will demand one,
so compiler vendors will introduce one, even if it is non-standard.

(3) Introduce a 64 bit type.

(3) is what will happen (or has already happened) in practice, regardless
of what the standard says.

>However, this seems to be more than balanced by the inconvenience to people
>relying on the current standard.

I don't think so. Do you think that binary backwards compatibility is not
important? Do you think that the existence of a 64-bit type on 32-bit
systems is not important?

>Further, according to informal reports, the
>committee has gone through several versions of the "long long" concept. That
>implies that the existing compilers support it in different ways, and so even if
>it is added to the standard, it will be different from the way it is supported
>in most or all existing compilers that have it - so almost all existing users
>will be almost as inconvenienced as if it was not there.

Nope, the differences are far outweighed by the similarities, IMHO.

Paul Eggert
Apr 21, 1997

f...@mundook.cs.mu.OZ.AU (Fergus Henderson) writes:

>I think omitting it from C9X would also inconvenience people using compilers
>that don't have any 64 bit type yet (e.g. on 32 bit systems). Some of them
>would be very significantly inconvenienced. In such a situation, a compiler
>vendor has several possible alternatives:

> (1) Break binary backwards compatibility by changing the size of
> `long'. This would be very very costly -- prohibitively so.

Changing the size of `long' would not necessarily break binary
backwards compatibility. Back when `int' was growing from 16 to 32
bits, some systems supported both 16-bit `int' and 32-bit `int' without
breaking binary backwards compatibility. It is more of a hassle than
just bolting on a `long long' type, but it's not prohibitively costly.

A reasonable compromise would be to support both a ``traditional''
32-bit long model, and a more ``modern'' 64-bit long model; and to let
the users decide which model they want. Users who prefer 32-bit longs
could even have a flag to enable `long long' if that's what they prefer.
Then you could let users vote with their feet. I know which way _my_
feet would vote.

Richard A. O'Keefe
Apr 21, 1997

mor...@cup.hp.com (Michael Morrell) writes:
>Is there a reason why the new standard can't allow "long long", but still
>require that size_t is no longer than "unsigned long". Won't that allow
>the existing code to still work?

And what about all the other xxxx_t types in the standard?
Don't forget the POSIX.1 standard either; why should

off_t x = ....;

printf("%ld", (long)x);

break when off_t is made a 64-bit type (as doubtless it will be)?

--
Will maintain COBOL for money.
Richard A. O'Keefe; http://www.cs.rmit.edu.au/%7Eok; RMIT Comp.Sci.

James Kuyper
Apr 21, 1997

> Don't allow unsigned long long.
> -- Eric Gindrup ! gin...@okway.okstate.edu

Since you can get into the same problem with ptrdiff_t, which is
guaranteed only to be a signed integer type, then the same logic implies
we must also not allow long long. I'll be happy with that!

Thad Smith
Apr 21, 1997

Unfortunately for this argument, the results of subtracting pointers
to different elements within an array are allowed to overflow a
ptrdiff_t, in which case the result is undefined. The implementor is
free to make ptrdiff_t any signed integral type, regardless of the
maximum size of objects.

Thad

Peter Curran
Apr 21, 1997

On 21 Apr 1997 03:48:48 GMT in article <5jeo30$6...@mulga.cs.mu.OZ.AU>
f...@mundook.cs.mu.OZ.AU (Fergus Henderson) wrote:

>pcu...@xacm.org (Peter Curran) writes:

>>This seems to be virtually the unanimous opinion of the entire C community.
>
>Nope, it is just that those who think that "long long" is fundamentally
>a failure are (understandably) quite vocal about it on usenet.

Yes, that's my assumption - I know the original idea wasn't produced by fools.

>I think omitting it from C9X would also inconvenience people using compilers
>that don't have any 64 bit type yet (e.g. on 32 bit systems). Some of them
>would be very significantly inconvenienced. In such a situation, a compiler
>vendor has several possible alternatives:
>
> (1) Break binary backwards compatibility by changing the size of
> `long'. This would be very very costly -- prohibitively so.

I don't think this follows - it is entirely possible, for example, to maintain a
model using 32-bit longs and one using 64-bit longs, selectable via compiler
options or #pragmas. I can also think of ways of combining these two models
usefully, although they are, of course, not beautiful.

> (2) Don't introduce any 64-bit type. But customers will demand one,
> so compiler vendors will introduce one, even if it is non-standard.
>
> (3) Introduce a 64 bit type.
>
>(3) is what will happen (or has already happened) in practice, regardless
>of what the standard says.

I don't think there is any disagreement that a 64-bit type is needed.

>>However, this seems to be more than balanced by the inconvenience to people
>>relying on the current standard.

>>Further, according to informal reports, the
>>committee has gone through several versions of the "long long" concept. That
>>implies that the existing compilers support it in different ways, and so even if
>>it is added to the standard, it will be different from the way it is supported
>>in most or all existing compilers that have it - so almost all existing users
>>will be almost as inconvenienced as if was not there.
>
>Nope, the differences are far outweighed by the similarities, IMHO.

I'm certain this true, although I haven't seen anything about the discussions.
However, in many ways subtle differences are a bigger concern than big ones -
if, say, "long long" is not added, the compiler will flag it anywhere it is used,
so it can be fixed; if it is added but with subtle differences in semantics from
what the original programmer assumed, the problems will be much harder to find
and fix.

Further, I am quite certain that any compiler that currently supports "long
long" will continue to do so, regardless of C9x, for the foreseeable future,
through compiler options or whatever. Such code was clearly not written to be
portable, so this approach would seem to resolve the problems of anyone needing
to support such heritage code after the promulgation of C9x.

John R MacMillan

Apr 21, 1997

|Is there a reason why the new standard can't allow "long long", but still
|require that size_t is no longer than "unsigned long". Won't that allow
|the existing code to still work?

Because it's a bad idea. :-)

Seriously, that puts an unnecessary (and new type of) restriction on not
only the standard types (size_t, ptrdiff_t, etc.) but also user-defined
types defined either by other standards (eg. POSIX) or just application
coding conventions.

For example, an application header file might now have something like:

typedef unsigned long acct_t; /* Account numbers are 9 digits */

Now long long goes in and is guaranteed to be 64 bits, and some account
numbers may now contain 12 digits, so the new account numbers would
fit in a long long.

BUT you can't change the typedef, since when the code was written,
(unsigned) long was the longest type, and you don't know if the code
that uses acct_t ever relies on this (it's a big application). Most
likely, of course, somebody makes the change without thinking, and
eventually something breaks.

In effect, you're forcing every existing type to remain frozen as being
no larger than long, and only allowing ``new'' types to be long long.
And you're expecting people to remember which is which, which will be a
maintenance headache.

Robert Corbett

Apr 22, 1997

In article <5jam6s$r...@news.inforamp.net>,
Peter Curran <pcu...@xacm.org> wrote:
>
>Are there ANY arguments in favour of this thing???

Of course there are. I do not know of a C compiler for sale today
that does not support the data type long long. Too many programs
use long long to make its deletion possible. Any compiler vendor
who does not support long long is going to go out of business.

The issue is not whether C compilers will support long long; the
issue is whether the C standard should define it. There are minor
variations among C implementations. Adding long long to the C
standard offers a chance to eliminate gratuitous variations. Once
upon a time, that was the purpose of standards.

Sincerely,
Bob Corbett

Dan Pop

Apr 22, 1997

In <5jhfj1$2...@engnews1.Eng.Sun.COM> cor...@lupa.eng.sun.com (Robert Corbett) writes:

>The issue is not whether C compilers will support long long;

Indeed. No vendor supporting it today will ever drop it from future
compilers, whether it will be included in C9X or not.

>the
>issue is whether the C standard should define it. There are minor
>variations among C implementations. Adding long long to the C
>standard offers a chance to eliminate gratuitous variations.

In theory. In practice it will be a complete mess, because NO vendor will
afford to change the way long long was implemented in the pre-standardization
days, in order to avoid upsetting its customers. So, all implementations
who did it differently than the future standard will have to provide a
way to switch between C9X long long and vendor-specific long long. The
customers will, more often than not, select the vendor-specific long long,
just to be on the safe side and be sure that their codes won't break
because of the "quiet change".

The advantage of a clean and proper solution to the problem is that no
legacy code will exist, so there will be no (sane) reason to avoid it in
new projects. Since everybody will continue to support long long as
before, nobody will cry that the existing code base will break, either.

Problem solved.

Dan
--
Dan Pop
CERN, IT Division
Email: Dan...@cern.ch
Mail: CERN - PPE, Bat. 31 1-014, CH-1211 Geneve 23, Switzerland

Thad Smith

Apr 23, 1997

In article <danpop.8...@news.cern.ch>, Dan...@cern.ch (Dan Pop) wrote:
>In <5jhfj1$2...@engnews1.Eng.Sun.COM> cor...@lupa.eng.sun.com (Robert Corbett) writes:
>
>>The issue is not whether C compilers will support long long; the
>>issue is whether the C standard should define it. There are minor
>>variations among C implementations. Adding long long to the C
>>standard offers a chance to eliminate gratuitous variations.
>
>In theory. In practice it will be a complete mess, because NO vendor will
>afford to change the way long long was implemented in the pre-standardization
>days, in order to avoid upsetting its customers. So, all implementations
>who did it differently than the future standard will have to provide a
>way to switch between C9X long long and vendor-specific long long. The
>customers will, more often than not, select the vendor-specific long long,
>just to be on the safe side and be sure that their codes won't break
>because of the "quiet change".

I haven't been around C as long as many here, but I recall that prior
to ANSI C, there were such variations in operations of the
preprocessor, such as how to perform concatenations, and variations in
the semantics of enums (I recall reading a version of H&S that
described three models of enums). Also, some compilers allowed "$" as
an alpha character. The vendors seemed to have successfully converted
to the ANSI/ISO model, although perhaps with some switches for
backwards compatibility. I would think that most code would employ
the standard version of the compiler now, except where it is using
extensions simply not found in the standard.

I would expect a similar migration to the standard support for 64-bit
integers, whatever that is.

>The advantage of a clean and proper solution to the problem is that no
>legacy code will exist, so there will be no (sane) reason to avoid it in
>new projects. Since everybody will continue to support long long as
>before, nobody will cry that the existing code base will break, either.

I don't understand Dan's comment here. Why would a clean and proper
solution void all old code? Surely this is not desirable. In my
mind a clean and proper solution would continue to support currently
strictly conforming code and hopefully allow most of the conforming
code to remain unscathed.

Thad

Bret Indrelee

Apr 23, 1997

In article <5jeo30$6...@mulga.cs.mu.OZ.AU>,
Fergus Henderson <f...@mundook.cs.mu.OZ.AU> wrote:
>
>pcu...@xacm.org (Peter Curran) writes:
>
>
>>On 18 Apr 1997 19:19:53 -0500 in article <5j9339$1...@solutions.solon.com>
>> se...@solutions.solon.com (Peter Seebach) wrote:
>>
>>>"long long", in and of itself, is deeply and fundamentally a failure; it will
>>>lead to bad code, it will break good code, and it will never, ever, solve a
>>>problem that isn't better fixed by <inttypes.h>.
>>
>>This seems to be virtually the unanimous opinion of the entire C community.
>
>Nope, it is just that those who think that "long long" is fundamentally
>a failure are (understandably) quite vocal about it on usenet.
>
>Those who are aware of the pros and cons and who think that (despite the
>well-publicized problems) the pros outweigh the cons tend not to be so
>concerned; since this is apparently not the first such discussion of this
>topic, I wouldn't be suprised if they don't respond quite so fervently.

Then there are those who think that something along the lines of an
__int64 and __uint64 (something like <inttypes.h>) would solve most of
the problems where people want a 64-bit type, but the system doesn't
support 64-bit types yet.

>>The
>>only difference of opinion seems to be that, despite its horrors, omitting it
>>from C9x would inconvenience people currently using compilers that support it.
>

>I think omitting it from C9X would also inconvenience people using compilers
>that don't have any 64 bit type yet (e.g. on 32 bit systems). Some of them
>would be very significantly inconvenienced. In such a situation, a compiler
>vendor has several possibile alternatives:
>
> (1) Break binary backwards compatibility by changing the size of
> `long'. This would be very very costly -- prohibitively so.

For a true 64-bit processor, you need long to be 64-bit for ptrdiff_t and
size_t. As for the cost, SGI and others have already given us a 64-bit
environment that can still run old 32-bit applications. Costly, yes. Prohibitively so,
no.

> (2) Don't introduce any 64-bit type. But customers will demand one,
> so compiler vendors will introduce one, even if it is non-standard.

I agree this isn't acceptable. That doesn't indicate that the type has to be
one that is a base type in C, nor that it is even portable to all Standard C
implementations.

> (3) Introduce a 64 bit type.

Or use long as a 64-bit type on those systems that directly support 64-bit types,
and define something along the lines of <inttypes.h> where there is a defined
naming convention for the various sizes of integers that an implementation
supports.

>(3) is what will happen (or has already happened) in practice, regardless
>of what the standard says.
>

>>However, this seems to be more than balanced by the inconvenience to people
>>relying on the current standard.
>

>I don't think so. Do you think that binary backwards compatibility is not
>important? Do you think that the existence of a 64-bit type on 32-bit
>systems is not important?

This is a solved problem, and solved without something as ugly as 'long long' for
a name.

The biggest complaints I have against long long are:
1) The name breaks the parsing.
2) It breaks code that is currently conformant. In a system that supports 64-bit
types, long should be 64-bit.

Antoine Leca

Apr 23, 1997

Norman Diamond wrote:
>
> In article <5j1crd$a...@mulga.cs.mu.OZ.AU>, f...@mundook.cs.mu.OZ.AU (Fergus Henderson) writes:
> > short, unsigned short, int, unsigned, long long, unsigned long long
> So the complete order is:
> signed char, unsigned char, short, unsigned short, int, unsigned,
> long long, unsigned long long, long, unsigned long

(Just because I love picking nits:)
I thought unsigned char (values) _can_ be larger than signed short,
unsigned short can be larger than signed, etc.

(Whether it is true or not is implementation-dependent)

Or did I miss something?


> so that long is still the longest integral type and programs that used
> the longest integral type with its guaranteed meaning won't be broken
> by your suggestion. Confusing but not breaking good programs. Hmmm.

:-)


Antoine

Dan Pop

Apr 23, 1997

Not ALL old code, only code that was already void, by using a non-standard
type: long long.

>Surely this is not desirable. In my
>mind a clean and proper solution would continue to support currently
>strictly conforming code

We already know that long long would BREAK currently strictly conforming
code, therefore this is not the optimal solution.

>and hopefully allow most of the conforming code to remain unscathed.

Unfortunately, "conforming code", as defined by the current standard, can
be ANYTHING that one compiler or another accepts, including programs
written in other languages.

Geoff Clare

Apr 24, 1997

o...@goanna.cs.rmit.EDU.AU (Richard A. O'Keefe) writes:

>Don't forget the POSIX.1 standard either; why should

> offset_t x = ....;

> printf("%ld", (long)x);

>break when offset_t is made a 64-bit type (as doubtless it will be)?

Assuming you mean off_t, the above code is already broken. POSIX.1
does not require off_t to be an integral type, only an arithmetic one.
(I.e. it could be float or double).

The same is true of the other type names in POSIX.1 (except those
inherited from the C standard): pid_t, uid_t, etc. only have to be
arithmetic, not integral. Even ssize_t, intended as a signed equivalent
of size_t, is not required to be integral (although I think this was
probably an oversight when it was added in the 1990 revision).

I should also mention (at the risk of going too far off topic for this
group) that the X/Open specifications are somewhat more restrictive
than POSIX.1, in that some of these types *are* required to be integral.
However, off_t has changed in the latest spec (XSH5) from being integral
to being "extended integral", which allows it to be longer than "long".
--
Geoff Clare <g...@root.co.uk>
UniSoft Limited, London, England.

Norman Diamond

Apr 25, 1997

In article <335E3B...@Renault.FR>, Antoine Leca <Antoin...@Renault.FR> writes:
>Norman Diamond wrote:
>>In article <5j1crd$a...@mulga.cs.mu.OZ.AU>, f...@mundook.cs.mu.OZ.AU (Fergus Henderson) writes:
>>> short, unsigned short, int, unsigned, long long, unsigned long long
>>So the complete order is:
>> signed char, unsigned char, short, unsigned short, int, unsigned,
>> long long, unsigned long long, long, unsigned long

>I thought unsigned char (values) _can_ be larger than signed short,
>unsigned short can be larger than signed, etc.
>(Whether it is true or not is implementation-dependent)

Nope -- the fact that it _can_ be is true regardless of implementations :-)

>Or did I miss something?

Nope -- your "etc." includes the case that unsigned char _can_ be larger
than long :-)

>>so that long is still the longest integral type and programs that used
>>the longest integral type with its guaranteed meaning won't be broken

In fact this is wrong too, because char (unsigned and signed and plain)
can be _longer_ than long (and unsigned long) in the present standard.
There are restrictions on the ranges of values, so unsigned int can't
hold a larger value than unsigned long can hold, but there are no
restrictions on the lengths. Unsigned int might have 112 unused bits
(holes) and be twice the size of unsigned long as measured by sizeof.
I think a DR asked about this but don't recall seeing a TC.

--
<< If this were the company's opinion, I would not be allowed to post it. >>
"I paid money for this car, I pay taxes for vehicle registration and a driver's
license, so I can drive in any lane I want, and no innocent victim gets to call
the cops just 'cause the lane's not goin' the same direction as me" - J Spammer

Clive D.W. Feather

Apr 28, 1997

In article <5k2ev4$rbc$1...@halon.vggas.com>, James Youngman
<JYou...@vggas.com> writes
>I'm also not clear (after a fairly directed browse) where the C89 standard
>stipulates that {,un}signed long is the widest integral type available.
>
>The closest I can see is 6.1.2.5 "Types":
>
>[ ...]
>There are four signed integer types, designated as signed char, short int, int,
>and long int. (The signed integer and other types may be designated in several
>additional ways, as described in section 6.5.2.)
>
>This indicates to me that there are _exactly_ four integral types, and no more.
> Is this a correct interpretation?

This is the accepted interpretation (though I hear there are proposals
to change it).

--
Clive D.W. Feather | Director of Software Development | Home email:
Tel: +44 181 371 1138 | Demon Internet Ltd. | <cl...@davros.org>
Fax: +44 181 371 1037 | <cl...@demon.net> | Abuse:
Written on my laptop; please observe the Reply-To address | <cl...@bofh.org>

James Youngman

Apr 28, 1997

In article <5j9339$1...@solutions.solon.com>, se...@solutions.solon.com says...

>"long long", in and of itself, is deeply and fundamentally a failure; it will
>lead to bad code, it will break good code, and it will never, ever, solve a
>problem that isn't better fixed by <inttypes.h>.

I don't disagree with this statement, itself, but _outside_ the standard I
noticed

G.5.6 Other Arithmetic Types
Other arithmetic types, such as long long int, and
their appropriate conversions are defined (6.2.2.1)

I'm also not clear (after a fairly directed browse) where the C89 standard
stipulates that {,un}signed long is the widest integral type available.

The closest I can see is 6.1.2.5 "Types":

[ ...]
There are four signed integer types, designated as signed char, short int, int,
and long int. (The signed integer and other types may be designated in several
additional ways, as described in section 6.5.2.)

This indicates to me that there are _exactly_ four integral types, and no more.
Is this a correct interpretation?

--
James Youngman VG Gas Analysis Systems The trouble with the rat-race
Before sending advertising material, read is, even if you win, you're
http://www.law.cornell.edu/uscode/47/227.html still a rat.

