
is "typedef int int;" illegal????


jacob navia

Mar 24, 2006, 8:58:12 AM
Hi

Suppose you have somewhere

#define BOOL int

and somewhere else

typedef BOOL int;

This gives

typedef int int;

To me, this looks like a null assignment:

a = a;

Would it break something if lcc-win32 accepted that,
maybe with a warning?

Is the compiler *required* to reject that?

Microsoft MSVC: rejects it.
lcc-win32 now rejects it.
gcc (with no flags) accepts it with some warnings.

Thanks

jacob
---
A free compiler system for Windows:
http://www.cs.virginia.edu/~lcc-win32

Robert Gamble

Mar 24, 2006, 10:02:19 AM
jacob navia wrote:
> Hi
>
> Suppose you have somewhere
>
> #define BOOL int
>
> and somewhere else
>
> typedef BOOL int;
>
> This gives
>
> typedef int int;

The typedef name must be an identifier; here you have a keyword.

> To me, this looks like a null assignment:
>
> a = a;

Maybe from an implementor's point of view, but the Standard does not
allow it.

> Would it break something if lcc-win32 accepted that,
> maybe with a warning?

A diagnostic would be required; after that you can do anything you
want, including continuing to compile the program as you normally
would.

> Is the compiler *required* to reject that?

Define "reject". You are required to produce a diagnostic; that's it.

> Microsoft MSVC: rejects it.

Good.

> lcc-win32 now rejects it.

Good.

> gcc (with no flags) accepts it with some warnings.

It fails to compile for me (gcc 4.0.2):
error: two or more data types in declaration specifiers

Robert Gamble

jacob navia

Mar 24, 2006, 10:18:20 AM
Robert Gamble wrote:

> jacob navia wrote:
>
>>Hi
>>
>>Suppose you have somewhere
>>
>>#define BOOL int
>>
>>and somewhere else
>>
>>typedef BOOL int;
>>
>>This gives
>>
>>typedef int int;
>
>
> The typedef name must be an identifier; here you have a keyword.
>
>
>>To me, this looks like a null assignment:
>>
>> a = a;
>
>
> Maybe from an implementor's point of view, but the Standard does not
> allow it.
>
>
>>Would it break something if lcc-win32 accepted that,
>>maybe with a warning?
>
>
> A diagnostic would be required; after that you can do anything you
> want, including continuing to compile the program as you normally
> would.
>

OK, that is what I had in mind.

>
>>Is the compiler *required* to reject that?
>
>
> Define "reject". You are required to produce a diagnostic; that's it.
>

"Reject" means the program fails to compile. A warning is not a
rejection, since the program compiles.

>
>>Microsoft MSVC: rejects it.
>
>
> Good.
>
>
>>lcc-win32 now rejects it.
>
>
> Good.
>
>
>>gcc (with no flags) accepts it with some warnings.
>
>
> It fails to compile for me (gcc 4.0.2):
> error: two or more data types in declaration specifiers
>

Strange, I get the following:

[root@gateway root]# gcc -v
Reading specs from /usr/lib/gcc-lib/i586-mandrake-linux-gnu/2.96/specs
gcc version 2.96 20000731 (Mandrake Linux 8.2 2.96-0.76mdk)
[root@gateway root]# cat tint.c
typedef int int;
[root@gateway root]# gcc -c tint.c
tint.c:1: warning: useless keyword or type name in empty declaration
tint.c:1: warning: empty declaration
[root@gateway root]# ls -l tint.o
-rw-r--r-- 1 root root 703 Mar 24 16:17 tint.o
[root@gateway root]#

Program is not rejected.

> Robert Gamble
>

Robert Gamble

Mar 24, 2006, 10:30:33 AM

With that definition, no, you are not required to reject such a program,
but you are certainly free to do so.

> >
> >>Microsoft MSVC: rejects it.
> >
> >
> > Good.
> >
> >
> >>lcc-win32 now rejects it.
> >
> >
> > Good.
> >
> >
> >>gcc (with no flags) accepts it with some warnings.
> >
> >
> > It fails to compile for me (gcc 4.0.2):
> > error: two or more data types in declaration specifiers
> >
>
> Strange, I get the following:
>
> [root@gateway root]# gcc -v
> Reading specs from /usr/lib/gcc-lib/i586-mandrake-linux-gnu/2.96/specs
> gcc version 2.96 20000731 (Mandrake Linux 8.2 2.96-0.76mdk)
> [root@gateway root]# cat tint.c
> typedef int int;
> [root@gateway root]# gcc -c tint.c
> tint.c:1: warning: useless keyword or type name in empty declaration
> tint.c:1: warning: empty declaration
> [root@gateway root]# ls -l tint.o
> -rw-r--r-- 1 root root 703 Mar 24 16:17 tint.o
> [root@gateway root]#
>
> Program is not rejected.

Quite a bit has changed since gcc 2.96, including the strictness of the
syntax and type checking. Note, though, that a diagnostic was still
produced.

Robert Gamble

James Dennett

Mar 24, 2006, 10:36:16 AM

The standard doesn't ever require a compiler to reject
code; it's quite legal for a C compiler to accept Fortran
code, so long as it prints out a diagnostic (maybe
"This looks like Fortran, not C... compiling it anyway...").

There is a common notion among compilers that a "warning" is
a non-fatal diagnostic and an "error" is a fatal diagnostic
(i.e., one which causes no object code to be produced), but
that is not standardised.

>>> Microsoft MSVC: rejects it.
>>
>>
>> Good.
>>
>>
>>> lcc-win32 now rejects it.
>>
>>
>> Good.
>>
>>
>>> gcc (with no flags) accepts it with some warnings.
>>
>>
>> It fails to compile for me (gcc 4.0.2):
>> error: two or more data types in declaration specifiers
>>
>
> Strange, I get the following:
>
> [root@gateway root]# gcc -v
> Reading specs from /usr/lib/gcc-lib/i586-mandrake-linux-gnu/2.96/specs
> gcc version 2.96 20000731 (Mandrake Linux 8.2 2.96-0.76mdk)
> [root@gateway root]# cat tint.c
> typedef int int;
> [root@gateway root]# gcc -c tint.c
> tint.c:1: warning: useless keyword or type name in empty declaration
> tint.c:1: warning: empty declaration
> [root@gateway root]# ls -l tint.o
> -rw-r--r-- 1 root root 703 Mar 24 16:17 tint.o
> [root@gateway root]#
>
> Program is not rejected.

That's a nearly 6-year old compiler, and not an official GCC
release at that. Not to say that 2.96 didn't have its uses,
and maybe it still does, but it's far from the state of the
art.

-- James

David R Tribble

Mar 24, 2006, 11:01:40 AM
jacob navia wrote:
>> Suppose you have somewhere
>> #define BOOL int
>> and somewhere else
>> typedef BOOL int;
>>
>> This gives
>> typedef int int;
>>
>> To me, this looks like a null assignment:
>> a = a;
>

Robert Gamble wrote:
> Maybe from an implementor's point of view, but the Standard does not
> allow it.

> The typedef name must be an identifier; here you have a keyword.

Exactly.

On a related note, I've suggested in the past that duplicate
(redundant) typedefs be allowed as long as they are semantically
equivalent, e.g.:

typedef long mytype; // A
typedef long mytype; // B, error in C99
typedef long int mytype; // C, error in C99

It would introduce no problems if the redundant typedefs at B and C
were allowed. C99 rules, however, disallow this, so we're forced to
do things like the following in all our header files:

// foo.h
#ifndef MYTYPE_DEF
typedef long mytype;
#define MYTYPE_DEF
#endif

// bar.h
#ifndef MYTYPE_DEF
typedef long int mytype;
#define MYTYPE_DEF
#endif


Allowing redundant typedefs parallels the rule allowing redundant
preprocessor macro definitions:

#define SIZE 100 // D
#define SIZE 100 // E, okay, duplicate allowed

This also parallels C++ semantics, which allow duplicate typedefs.
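
For illustration, the same redefinitions compiled as C++ (a sketch; C++
lets a typedef be redefined to name the type it already names):

typedef long mytype; // A
typedef long mytype; // OK in C++, error in C99
typedef long int mytype; // OK in C++ too: same type, different spelling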

-drt

jacob navia

Mar 24, 2006, 11:03:01 AM
James Dennett wrote:

>> Strange, I get the following:
>>
>> [root@gateway root]# gcc -v
>> Reading specs from /usr/lib/gcc-lib/i586-mandrake-linux-gnu/2.96/specs
>> gcc version 2.96 20000731 (Mandrake Linux 8.2 2.96-0.76mdk)
>> [root@gateway root]# cat tint.c
>> typedef int int;
>> [root@gateway root]# gcc -c tint.c
>> tint.c:1: warning: useless keyword or type name in empty declaration
>> tint.c:1: warning: empty declaration
>> [root@gateway root]# ls -l tint.o
>> -rw-r--r-- 1 root root 703 Mar 24 16:17 tint.o
>> [root@gateway root]#
>>
>> Program is not rejected.
>
>
> That's a nearly 6-year old compiler, and not an official GCC
> release at that. Not to say that 2.96 didn't have its uses,
> and maybe it still does, but it's far from the state of the
> art.
>
> -- James

Wow, this complicates things quite a bit.
If Microsoft AND gcc reject the code... I think it's better to leave it as it is.
I thought that gcc let it pass with some warnings and intended to do the
same, but it is true that I have not upgraded gcc in quite a while.

thanks

Eric Sosman

Mar 24, 2006, 12:11:45 PM

David R Tribble wrote on 03/24/06 11:01:


>
> On a related note, I've suggested in the past that duplicate
> (redundant) typedefs be allowed as long as they are semantically
> equivalent, e.g.:
>
> typedef long mytype; // A
> typedef long mytype; // B, error in C99
> typedef long int mytype; // C, error in C99

It seems to me "semantically equivalent" might
open an unpleasant can of worms. For example, are

typedef unsigned int mytype;
typedef size_t mytype;

"semantically equivalent" on an implementation that
uses `typedef unsigned int size_t;'? What's really
wanted is "equivalence of intent," which seems a
harder notion to pin down.

If the suggestion were modified to require "lexical
equivalence," such questions would disappear and I don't
think the language would be any the worse without them.
Writing header files would perhaps not become quite as much
easier as with "semantic equivalence," but I think it would
be a good deal easier than it is now.
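
Under such a rule, only token-for-token repeats would pass; a sketch
of the distinction:

typedef long mytype;
typedef long mytype; /* lexically identical: would be allowed */
typedef long int mytype; /* same type, different tokens: still an error */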

--
Eric....@sun.com

Douglas A. Gwyn

Mar 24, 2006, 12:03:40 PM
jacob navia wrote:
> #define BOOL int
> typedef BOOL int;

> Would it break something if lcc-win32 accepted that,
> maybe with a warning?

Please don't try to change the language; it doesn't do your
users any favor. Indeed, it merely encourages even more
out-of-control coding.

> Is the compiler *required* to reject that?

A diagnostic is required.

Wojtek Lerch

Mar 24, 2006, 12:25:36 PM
"Eric Sosman" <Eric....@sun.com> wrote in message
news:e0198h$rh$1...@news1brm.Central.Sun.COM...

> David R Tribble wrote on 03/24/06 11:01:
>>
>> On a related note, I've suggested in the past that duplicate
>> (redundant) typedefs be allowed as long as they are semantically
>> equivalent, e.g.:
>>
>> typedef long mytype; // A
>> typedef long mytype; // B, error in C99
>> typedef long int mytype; // C, error in C99
>
> It seems to me "semantically equivalent" might
> open an unpleasant can of worms. For example, are
>
> typedef unsigned int mytype;
> typedef size_t mytype;
>
> "semantically equivalent" on an implementation that
> uses `typedef unsigned int size_t;'? What's really
> wanted is "equivalence of intent," which seems a
> harder notion to pin down.

Couldn't it simply use the same rules as declarations do -- i.e. require
compatible types?

extern unsigned int myvar;
extern size_t myvar;


jacob navia

Mar 24, 2006, 12:43:32 PM
Douglas A. Gwyn wrote:

Well, "changing the language" is quite an overkill. Gcc 2.xx accepted
that (with warnings). Did they "change the language" ???

But basically why

typedef int int;

should be forbidden? Like

a = a;

it does nothing, and the language is not changed; at most it is made
more consistent.

But since gcc has changed its behavior in later versions, as I learned
in this forum, and microsoft rejects it, I think I will not do this.

Thanks for your reply

loufoque

Mar 24, 2006, 1:05:41 PM
jacob navia wrote:

> But basically why
>
> typedef int int;
>
> should be forbidden?

Defining a type that already exists makes no sense.

Wojtek Lerch

Mar 24, 2006, 1:22:37 PM

But *declaring* something that's already been declared is generally OK
in C. Why does a typedef have to behave like a definition rather than a
declaration? It doesn't reserve any storage; it just binds a type to an
identifier.
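
A small illustration of that point:

extern int n;
extern int n; /* OK: repeated declaration of the same object */
int f(int);
int f(int); /* OK: repeated declaration of the same function */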

(The idea of allowing a keyword for a typedef name is a different story;
personally, I don't like it.)

Eric Sosman

Mar 24, 2006, 1:47:56 PM

Wojtek Lerch wrote on 03/24/06 12:25:

Yes it could, and since `typedef' doesn't actually
define types (it just defines aliases) the issue could
be resolved this way. But is that the way we *want* it
resolved, from a portability perspective? It would open
the door (or fail to close the door) to bletcherous
abuses like `typedef time_t ptrdiff_t', things that would
work on some platforms but go horribly wrong on others.

We've got time_t and size_t and int16_t and all the
rest specifically so the programmer has a chance to stay
above the implementation-specific fray. It would seem a
step in the wrong direction to make typedef weaker than
it already is.

(Isn't there a "what if" rule somewhere? ;-)

--
Eric....@sun.com

Mark McIntyre

Mar 24, 2006, 2:46:43 PM
On Fri, 24 Mar 2006 18:43:32 +0100, in comp.lang.c, jacob navia
<ja...@jacob.remcomp.fr> wrote:

>Well, "changing the language" is quite an overkill. Gcc 2.xx accepted
>that (with warnings).

A warning is a diagnostic.

>Did they "change the language" ???

They emitted a diagnostic.

>But basically why
>
>typedef int int;

It's meaningless.

>should be forbidden? Like
>
> a = a;

It has a meaning, albeit a pointless one.

You might as well say
"why is it incorrect to say 'runned bluer which apple' but ok to say
'prunes are orange' ?"

Mark McIntyre
--
"Debugging is twice as hard as writing the code in the first place.
Therefore, if you write the code as cleverly as possible, you are,
by definition, not smart enough to debug it."
--Brian Kernighan

Wojtek Lerch

Mar 24, 2006, 3:00:44 PM
"Eric Sosman" <Eric....@sun.com> wrote in message
news:e01est$3rp$1...@news1brm.Central.Sun.COM...

> Wojtek Lerch wrote on 03/24/06 12:25:
>> "Eric Sosman" <Eric....@sun.com> wrote in message
>> news:e0198h$rh$1...@news1brm.Central.Sun.COM...
...

>>>typedef unsigned int mytype;
>>>typedef size_t mytype;
...

>> Couldn't it simply use the same rules as declarations do -- i.e. require
>> compatible types?
>>
>> extern unsigned int myvar;
>> extern size_t myvar;
>
> Yes it could, and since `typedef' doesn't actually
> define types (it just defines aliases) the issue could
> be resolved this way. But is that the way we *want* it
> resolved, from a portability perspective? It would open
> the door (or fail to close the door) to bletcherous
> abuses like `typedef time_t ptrdiff_t', things that would
> work on some platforms but go horribly wrong on others.

They wouldn't go horribly wrong, just cause a compile error, like they now
do on all platforms. They would behave the same way as this:

time_t fun();
ptrdiff_t fun();

or this:

time_t var;
ptrdiff_t *ptr = &var;

I'd expect these to be more likely to appear in a program by mistake than
your "typedef time_t ptrdiff_t" -- are you more worried about abuses that
are so obviously bletcherous that no sane person would put them in their
code?

> We've got time_t and size_t and int16_t and all the
> rest specifically so the programmer has a chance to stay
> above the implementation-specific fray. It would seem a
> step in the wrong direction to make typedef weaker than
> it already is.

In the existing C (and, presumably, in the proposed "C with redundant
typedefs"), trying to redefine a *standard* typedef in a program is
undefined behaviour anyway (7.1.3p2). And so is trying to declare a
standard function after including the corresponding header. Nevertheless, C
allows programs to declare the same function twice; do you think that should
be forbidden, too?
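
To make the 7.1.3p2 point concrete (a sketch; the actual definition of
size_t varies by implementation):

#include <stddef.h>
/* Undefined behavior: once <stddef.h> is included, size_t is reserved,
   even if this happens to match the implementation's own definition. */
typedef unsigned int size_t;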

jacob navia

Mar 24, 2006, 3:04:59 PM
to Wojtek Lerch
Wojtek Lerch wrote:

> In the existing C (and, presumably, in the proposed "C with redundant
> typedefs"), trying to redefine a *standard* typedef in a program is
> undefined behaviour anyway (7.1.3p2). And so is trying to declare a
> standard function after including the corresponding header. Nevertheless, C
> allows programs to declare the same function twice; do you think that should
> be forbidden, too?
>

VERY GOOD POINT!

kuy...@wizard.net

Mar 24, 2006, 6:00:22 PM
James Dennett wrote:
...

> The standard doesn't ever require a compiler to reject
> code;

Not quite: section 4p4 says:

"The implementation shall not successfully translate a preprocessing
translation unit containing a #error preprocessing directive unless it
is part of a group skipped by conditional inclusion."

However, that is the only exception to your statement.

Jordan Abel

Mar 24, 2006, 6:26:41 PM

There is absolutely nothing that the standard requires to fail to
compile, with the SOLE exception of a program containing the #error
directive.

Jordan Abel

Mar 24, 2006, 6:31:51 PM
On 2006-03-24, jacob navia <ja...@jacob.remcomp.fr> wrote:
> Douglas A. Gwyn wrote:
>> jacob navia wrote:
>>
>>>#define BOOL int
>>>typedef BOOL int;
>>
>>
>>>Would it break something if lcc-win32 accepted that,
>>>maybe with a warning?
>>
>>
>> Please don't try to change the language, it doesn't do your
>> users any favor. Indeed, it merely encourages even more
>> out-of-control coding
>>
>>
>>>Is the compiler *required* to reject that?
>>
>>
>> A diagnostic is required.
>
> Well, "changing the language" is quite an overkill. Gcc 2.xx accepted
> that (with warnings). Did they "change the language" ???

GCC also didn't do what you think it did with it. It interpreted it as
"define (nothing) to be 'int int'", _NOT_ "define int to be int".

And a warning _does_ satisfy the requirement for a diagnostic.

David R Tribble

Mar 24, 2006, 7:47:18 PM
David R Tribble wrote:
>> On a related note, I've suggested in the past that duplicate
>> (redundant) typedefs be allowed as long as they are semantically
>> equivalent, e.g.:
>>
>> typedef long mytype; // A
>> typedef long mytype; // B, error in C99
>> typedef long int mytype; // C, error in C99
>

Eric Sosman wrote:
> It seems to me "semantically equivalent" might
> open an unpleasant can of worms. For example, are
> typedef unsigned int mytype;
> typedef size_t mytype;
> "semantically equivalent" on an implementation that
> uses `typedef unsigned int size_t;'? What's really
> wanted is "equivalence of intent," which seems a
> harder notion to pin down.

It should mean "semantically equivalent", as in "equivalent types",
to allow C to be compatible with C++.


> If the suggestion were modified to require "lexical
> equivalence," such questions would disappear and I don't
> think the language would be any the worse without them.
> Writing header files would perhaps not be quite as much
> easier as with "semantic equivalence," but I think would
> be a good deal easier than it is now.

Lexical equivalence is harder for compilers to check than
semantic type equivalence, which is already present in compilers.

-drt

Wojtek Lerch

Mar 24, 2006, 7:57:50 PM
"jacob navia" <ja...@jacob.remcomp.fr> wrote in message
news:44242fc3$0$21267$8fcf...@news.wanadoo.fr...

> But basically why
>
> typedef int int;
>
> should be forbidden?

BTW Think about

typedef long long long long;

;-)

Douglas A. Gwyn

Mar 24, 2006, 8:16:44 PM
loufoque wrote:
> Defining a type that already exists makes no sense.

There are numerous issues involved that led to the current
spec for typedef. One of them is that after the first
typedef of a given identifier, that identifier plays a
different role (type synonym) and it would be logical for
it to do so in the second "redundant" typedef (which
happens to result in a syntactic error).
typedef int foo;
typedef foo bar;
typedef bar foo; // would this be allowed?
typedef bar int; // but not this?
Basically this is too fundamental and established in the
language to be messing with. If you design some *new*
language you might want to do it differently.

Stephen Sprunk

Mar 25, 2006, 12:22:41 AM
"Wojtek Lerch" <Wojt...@yahoo.ca> wrote in message
news:48jisaF...@individual.net...

That "long long" even exists is a travesty.

What are we going to do when 128-bit ints become common in another couple
decades? Call them "long long long"? Or if we redefine "long long" to be
128-bit ints and "long" to be 64-bit ints, will a 32-bit int be a "short
long" or a "long short"? Maybe 32-bit ints will become "short" and 16-bit
ints will be a "long char" or "short short"? Or is a "short short" already
equal to a "char"?

All we need are "int float" and "double int" and the entire C type system
will be perfect! </sarcasm>

S

--
Stephen Sprunk "Stupid people surround themselves with smart
CCIE #3723 people. Smart people surround themselves with
K5SSS smart people who disagree with them." --Aaron Sorkin


Jordan Abel

Mar 25, 2006, 12:35:48 AM
On 2006-03-25, Stephen Sprunk <ste...@sprunk.org> wrote:
> "Wojtek Lerch" <Wojt...@yahoo.ca> wrote in message
> news:48jisaF...@individual.net...
>> "jacob navia" <ja...@jacob.remcomp.fr> wrote in message
>> news:44242fc3$0$21267$8fcf...@news.wanadoo.fr...
>>> But basically why
>>>
>>> typedef int int;
>>>
>>> should be forbidden?
>>
>> BTW Think about
>>
>> typedef long long long long;
>>
>> ;-)
>
> That "long long" even exists is a travesty.
>
> What are we going to do when 128-bit ints become common in another couple
> decades? Call them "long long long"? Or if we redefine "long long" to be
> 128-bit ints and "long" to be 64-bit ints, will a 32-bit int be a "short
> long" or a "long short"? Maybe 32-bit ints will become "short" and 16-bit
> ints will be a "long char" or "short short"? Or is a "short short" already
> equal to a "char"?
>
> All we need are "int float" and "double int" and the entire C type system
> will be perfect! </sarcasm>

Don't forget short double and long float.

and are _Complex integers legal?

Keith Thompson

Mar 25, 2006, 3:12:15 AM
"Stephen Sprunk" <ste...@sprunk.org> writes:
[...]

> That "long long" even exists is a travesty.
>
> What are we going to do when 128-bit ints become common in another
> couple decades? Call them "long long long"? Or if we redefine "long
> long" to be 128-bit ints and "long" to be 64-bit ints, will a 32-bit
> int be a "short long" or a "long short"? Maybe 32-bit ints will
> become "short" and 16-bit ints will be a "long char" or "short short"?
> Or is a "short short" already equal to a "char"?
>
> All we need are "int float" and "double int" and the entire C type
> system will be perfect! </sarcasm>

So how would you improve it?

--
Keith Thompson (The_Other_Keith) ks...@mib.org <http://www.ghoti.net/~kst>
San Diego Supercomputer Center <*> <http://users.sdsc.edu/~kst>
We must do something. This is something. Therefore, we must do this.

Vladimir S. Oka

Mar 25, 2006, 3:16:02 AM
Keith Thompson opined:

> "Stephen Sprunk" <ste...@sprunk.org> writes:
>>
>> All we need are "int float" and "double int" and the entire C type
>> system will be perfect! </sarcasm>
>
> So how would you improve it?

Well, obviously by adding `int float` and `double int`. ;-)

--
BR, Vladimir

Everyone can be taught to sculpt: Michelangelo would have had
to be taught how not to. So it is with the great programmers.

santosh

Mar 25, 2006, 3:18:26 AM
James Dennett wrote:
... snip ...

> The standard doesn't ever require a compiler to reject code;

Except, I suppose, if an #error directive is encountered.

> it's quite legal for a C compiler to accept Fortran
> code, so long as it prints out a diagnostic (maybe
> "This looks like Fortran, not C... compiling it anyway...").

In which case it would no longer be a C compiler and would not come
under the restrictions of the C standard.

santosh

Mar 25, 2006, 3:28:33 AM
Keith Thompson wrote:
> "Stephen Sprunk" <ste...@sprunk.org> writes:
> [...]
> > That "long long" even exists is a travesty.
> >
> > What are we going to do when 128-bit ints become common in another
> > couple decades? Call them "long long long"? Or if we redefine "long
> > long" to be 128-bit ints and "long" to be 64-bit ints, will a 32-bit
> > int be a "short long" or a "long short"? Maybe 32-bit ints will
> > become "short" and 16-bit ints will be a "long char" or "short short"?
> > Or is a "short short" already equal to a "char"?
> >
> > All we need are "int float" and "double int" and the entire C type
> > system will be perfect! </sarcasm>
>
> So how would you improve it?

Perhaps by adding Long or llong or Llong for 128-bit integers? Ugly,
but there's nothing that can be done. Calling a 128-bit integer "long
long long" would be ridiculous.

Jordan Abel

Mar 25, 2006, 4:09:24 AM

If it also compiles C, it has to print a diagnostic on being given non-C
code and being told that it's C. (e.g. gcc -x c)

pemo

Mar 25, 2006, 5:14:15 AM

Perhaps you could try it with a couple of more modern compilers - just as a
'belt and braces' kind of thing? Aren't the Intel compilers at least free to
try
(http://www.intel.com/cd/software/products/asmo-na/eng/compilers/219690.htm)?
And then there's Sun's (http://developers.sun.com/prodtech/cc/index.jsp),
which *are* free and might prove useful.


--
==============
*Not a pedant*
==============


kuy...@wizard.net

Mar 25, 2006, 9:36:00 AM
Stephen Sprunk wrote:
...

> That "long long" even exists is a travesty.
>
> What are we going to do when 128-bit ints become common in another couple
> decades? Call them "long long long"? Or if we redefine "long long" to be
> 128-bit ints and "long" to be 64-bit ints, will a 32-bit int be a "short
> long" or a "long short"? Maybe 32-bit ints will become "short" and 16-bit
> ints will be a "long char" or "short short"? Or is a "short short" already
> equal to a "char"?

We now have size-named types. That should reduce (but not,
unfortunately, eliminate) the likelihood of similar travesties in the
future.

>
> All we need are "int float" and "double int" and the entire C type system
> will be perfect! </sarcasm>

Hey! That's a half-way plausible syntax for declaring a fixed-point
type. ;-) All it needs is some way of specifying the number of digits
after the decimal point.

kuy...@wizard.net

Mar 25, 2006, 9:44:56 AM
Jordan Abel wrote:
...

> and are _Complex integers legal?

They aren't (6.7.2p2), but they would be a meaningful concept, and I
suspect there are certain obscure situations where they'd be useful.

Ben Pfaff

Mar 25, 2006, 12:31:22 PM
"santosh" <santo...@gmail.com> writes:

> Keith Thompson wrote:
>> "Stephen Sprunk" <ste...@sprunk.org> writes:
>> [...]
>> > That "long long" even exists is a travesty.
>>

>> So how would you improve it?
>
> Perhaps by adding Long or llong or Llong for 128 bit integers? Ugly but

> there's nothing that can be done. [...]

Not a good "solution" in my opinion. I'm sure there are lots of
programs that use each of these identifiers. "long long" doesn't
reserve any previously unreserved identifiers.
--
int main(void){char p[]="ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz.\
\n",*q="kl BIcNBFr.NKEzjwCIxNJC";int i=sizeof p/2;char *strchr();int putchar(\
);while(*q){i+=strchr(p,*q++)-p;if(i>=(int)sizeof p)i-=sizeof p-1;putchar(p[i]\
);}return 0;}

Keith Thompson

Mar 25, 2006, 1:08:26 PM

Mathematically, they're called "Gaussian integers".
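
As a library type they're simple enough to sketch (an illustration
only; the names here are made up):

struct gint { long re, im; };

/* (a+bi)(c+di) = (ac-bd) + (ad+bc)i */
struct gint gint_mul(struct gint a, struct gint b)
{
    struct gint r;
    r.re = a.re * b.re - a.im * b.im;
    r.im = a.re * b.im + a.im * b.re;
    return r;
}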

jacob navia

Mar 25, 2006, 1:12:29 PM
santosh wrote:
lcc-win32 supports 128-bit integers. The type is named:

int128

Support for 128-bit constants is planned, with

int128 m = 85566677766545455544455543344i128;

and

printf("%i128d",m);

Eric Sosman

Mar 25, 2006, 1:27:50 PM
Ben Pfaff wrote:
> "santosh" <santo...@gmail.com> writes:
>
>
>>Keith Thompson wrote:
>>
>>>"Stephen Sprunk" <ste...@sprunk.org> writes:
>>>[...]
>>>
>>>>That "long long" even exists is a travesty.
>>>
>>>So how would you improve it?
>>
>>Perhaps by adding Long or llong or Llong for 128 bit integers? Ugly but
>>there's nothing that can be done. [...]
>
>
> Not a good "solution" in my opinion. I'm sure there are lots of
> programs that use each of these identifiers. "long long" doesn't
> reserve any previously unreserved identifiers.

atoll()?

--
Eric Sosman
eso...@acm-dot-org.invalid

Jack Klein

Mar 25, 2006, 1:56:19 PM
On Fri, 24 Mar 2006 23:22:41 -0600, "Stephen Sprunk"
<ste...@sprunk.org> wrote in comp.lang.c:

> "Wojtek Lerch" <Wojt...@yahoo.ca> wrote in message
> news:48jisaF...@individual.net...
> > "jacob navia" <ja...@jacob.remcomp.fr> wrote in message
> > news:44242fc3$0$21267$8fcf...@news.wanadoo.fr...
> >> But basically why
> >>
> >> typedef int int;
> >>
> >> should be forbidden?
> >
> > BTW Think about
> >
> > typedef long long long long;
> >
> > ;-)
>
> That "long long" even exists is a travesty.
>
> What are we going to do when 128-bit ints become common in another couple
> decades? Call them "long long long"? Or if we redefine "long long" to be
> 128-bit ints and "long" to be 64-bit ints, will a 32-bit int be a "short
> long" or a "long short"? Maybe 32-bit ints will become "short" and 16-bit
> ints will be a "long char" or "short short"? Or is a "short short" already
> equal to a "char"?
>
> All we need are "int float" and "double int" and the entire C type system
> will be perfect! </sarcasm>

The 256 bit integer type has already been designated "long long long
long spam and long".

'nuff said.

--
Jack Klein
Home: http://JK-Technology.Com
FAQs for
comp.lang.c http://c-faq.com/
comp.lang.c++ http://www.parashift.com/c++-faq-lite/
alt.comp.lang.learn.c-c++
http://www.contrib.andrew.cmu.edu/~ajo/docs/FAQ-acllc.html

Michael Mair

Mar 25, 2006, 3:10:33 PM
kuy...@wizard.net wrote:

> Stephen Sprunk wrote:
> ...
>>That "long long" even exists is a travesty.
>>
>>What are we going to do when 128-bit ints become common in another couple
>>decades? Call them "long long long"? Or if we redefine "long long" to be
>>128-bit ints and "long" to be 64-bit ints, will a 32-bit int be a "short
>>long" or a "long short"? Maybe 32-bit ints will become "short" and 16-bit
>>ints will be a "long char" or "short short"? Or is a "short short" already
>>equal to a "char"?
>
> We now have size-named types. That should reduce (but not,
> unfortunately, eliminate) the likelihood of similar travesties in the
> future.

Heh. We can hope.

>>All we need are "int float" and "double int" and the entire C type system
>>will be perfect! </sarcasm>
>
> Hey! That's a half-way plausible syntax for declaring a fixed-point
> type. ;-) All it needs is some way of specifying the number of digits
> after the decimal point.

There is something called Embedded C for that,
http://www.embedded-c.org

Cheers
Michael
--
E-Mail: Mine is an /at/ gmx /dot/ de address.

Jordan Abel

Mar 25, 2006, 4:18:51 PM

Case-insensitive, I hope.

> and
>
> printf("%i128d",m);

How do you differentiate this from the valid standard format string
consisting of %i followed by the string "128d"? Maybe you should use
%I128d instead, like how Microsoft does I64.
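
To see the ambiguity with a conforming printf:

#include <stdio.h>
int main(void)
{
    printf("%i128d\n", 42); /* %i converts the int, then "128d" is
                               literal text: this prints "42128d" */
    return 0;
}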

Mark McIntyre

Mar 25, 2006, 4:39:45 PM
On 25 Mar 2006 00:28:33 -0800, in comp.lang.c, "santosh"
<santo...@gmail.com> wrote:

>Keith Thompson wrote:
>>
>> So how would you improve it?
>
>Perhaps by adding Long or llong or Llong for 128 bit integers? Ugly

And delightful to pronounce for our Welsh colleagues.

Personally, I suspect they'll have to redefine the language
drastically.
Mark McIntyre
--
"Debugging is twice as hard as writing the code in the first place.
Therefore, if you write the code as cleverly as possible, you are,
by definition, not smart enough to debug it."
--Brian Kernighan

ena8...@yahoo.com

Mar 25, 2006, 5:21:55 PM

Wojtek Lerch wrote:
> "Eric Sosman" <Eric....@sun.com> wrote in message
> news:e0198h$rh$1...@news1brm.Central.Sun.COM...
> > David R Tribble wrote On 03/24/06 11:01,:

> >>
> >> On a related note, I've suggested in the past that duplicate
> >> (redundant) typedefs be allowed as long as they are semantically
> >> equivalent, e.g.:
> >>
> >> typedef long mytype; // A
> >> typedef long mytype; // B, error in C99
> >> typedef long int mytype; // C, error in C99
> >
> > It seems to me "semantically equivalent" might
> > open an unpleasant can of worms. For example, are
> >
> > typedef unsigned int mytype;
> > typedef size_t mytype;
> >
> > "semantically equivalent" on an implementation that
> > uses `typedef unsigned int size_t;'? What's really
> > wanted is "equivalence of intent," which seems a
> > harder notion to pin down.
>
> Couldn't it simply use the same rules as declarations do -- i.e. require
> compatible types?
>
> extern unsigned int myvar;
> extern size_t myvar;

If a type is given two typedefs with compatible
but different types, which one do you use? If
we see

typedef int F(int g(), int h(int));

F *f1;

typedef int F(int g(void), int h());

F *f2;

what are the types of f1 and f2? Which type is used
can affect what values may be assigned, etc.

Wojtek Lerch

Mar 25, 2006, 5:32:25 PM
<ena8...@yahoo.com> wrote in message
news:1143325315....@i40g2000cwc.googlegroups.com...

> If a type is given two typedefs with compatible
> but different types, which one do you use? If
> we see
>
> typedef int F(int g(), int h(int));
>
> F *f1;
>
> typedef int F(int g(void), int h());
>
> F *f2;
>
> what are the types of f1 and f2? Which type is used
> can affect what values may be assigned, etc.

The composite type, just like for any other declaration.


ena8...@yahoo.com

Mar 25, 2006, 6:12:24 PM

The problem is the composite type isn't known when f1 is
declared. Any uses between f1's declaration and the second
definition of F would either have to use the first type or
require a second pass. Using the composite type for both
isn't how other declarations work either.

jacob navia

Mar 25, 2006, 6:13:05 PM
to Jordan Abel
Jordan Abel wrote:

> On 2006-03-25, jacob navia <ja...@jacob.remcomp.fr> wrote:
>>lcc-win32 supports 128 bit integers. The type is named:
>>
>>int128
>>
>>Planned is support for 128 bit constants with
[snip]

>>
>>printf("%i128d",m);
>
>
> How do you differentiate this from the valid standard format string
> consisting of %i followed by the string "128d"? Maybe you should use
> %I128d instead, like how microsoft does I64

good point!!!!

Thanks for this remark.

jacob

Wojtek Lerch

Mar 25, 2006, 7:04:20 PM
<ena8...@yahoo.com> wrote in message
news:1143328344.6...@j33g2000cwa.googlegroups.com...

> Wojtek Lerch wrote:
>> <ena8...@yahoo.com> wrote in message
>> news:1143325315....@i40g2000cwc.googlegroups.com...
>> > If a type is given two typedefs with compatible
>> > but different types, which one do you use?
...

>> The composite type, just like for any other declaration.
>
> The problem is the composite type isn't known when f1 is
> declared. Any uses between f1's declaration and the second
> definition of F would either have to use the first type or
> require a second pass. Using the composite type for both
> isn't how other declarations work either.

Well, how do they work? If a regular identifier is declared twice, its type
between the declarations may be different from the type after the second
declaration:

int arr1[], arr2[];
// arr1 and arr2 have the same, incomplete type here
int arr1[6], arr2[8];
// arr1 and arr2 have different, complete types here

Why couldn't the same rule apply to typedefs?


Keith Thompson

Mar 25, 2006, 8:49:02 PM
jacob navia <ja...@jacob.remcomp.fr> writes:
[...]

> lcc-win32 supports 128 bit integers. The type is named:
>
> int128

Which infringes on the user namespace. Is it defined in a
system-specific header?

Jordan Abel

Mar 25, 2006, 11:06:56 PM
On 2006-03-26, Keith Thompson <ks...@mib.org> wrote:
> jacob navia <ja...@jacob.remcomp.fr> writes:
> [...]
>> lcc-win32 supports 128 bit integers. The type is named:
>>
>> int128
>
> Which infringes on the user namespace. Is it defined in a
> system-specific header?

Probably it should be something like __int128, typedef'd to int128_t
in <stdint.h>.

Arthur J. O'Dwyer

Mar 26, 2006, 12:25:56 PM

On Sat, 25 Mar 2006, santosh wrote:
> Keith Thompson wrote:
>> "Stephen Sprunk" <ste...@sprunk.org> writes:
>> [...]
>>> That "long long" even exists is a travesty.
>>>
>>> What are we going to do when 128-bit ints become common in another
>>> couple decades?
[...]

>>> All we need are "int float" and "double int" and the entire C type
>>> system will be perfect! </sarcasm>
>>
>> So how would you improve it?
>
> Perhaps by adding Long or llong or Llong for 128 bit integers? Ugly
> but there's nothing that can be done. Calling a 128 bit integer as
> long long long would be ridiculous.

Obviously, 128 bits should be "long longer", and 256 bits should be
"long longest". Then, of course, 512 bits would be "longer longest" and
1024 bits would be "longest longest." That would cover us for another
few decades, at least. :)

-Arthur,
who sees problems with this proposal, unfortunately

Old Wolf

Mar 26, 2006, 9:31:20 PM
jacob navia wrote:
> Like
>
> a = a;
>
> it does nothing

That code does do something, if "a" is volatile.

pete

Mar 26, 2006, 9:42:37 PM

It's undefined if (a) is indeterminate.
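
A minimal sketch of both observations:

void demo(void)
{
    volatile int v = 0;
    int u; /* automatic, never initialized */

    v = v; /* a genuine read and write, because v is volatile */
    u = u; /* reads an indeterminate value: undefined */
}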

--
pete

David R Tribble

Mar 27, 2006, 11:44:16 AM
Wojtek Lerch wrote:
>> BTW Think about
>>
>> typedef long long long long;
>>
>> ;-)
>

Stephen Sprunk wrote:
> That "long long" even exists is a travesty.
>
> What are we going to do when 128-bit ints become common in another couple

> decades? Call them "long long long"? Or if we redefine "long long" to be
> 128-bit ints and "long" to be 64-bit ints, will a 32-bit int be a "short
> long" or a "long short"? Maybe 32-bit ints will become "short" and 16-bit
> ints will be a "long char" or "short short"? Or is a "short short" already
> equal to a "char"?
>

> All we need are "int float" and "double int" and the entire C type system
> will be perfect! </sarcasm>

It's interesting to note that most implementations (all of them I've
ever seen, in fact) only provide three of the four standard int type
sizes, with two of the four being the same size. For example,
consider the following typical choices of type sizes for various
CPU word sizes:

word | char | short| int | long | long long
-----+------+------+------+------+----------
8 | 8 | 16* | 16* | 32 | 64
9 | 9 | 18* | 18* | 36 | 72
9 | 9 | 18 | 36* | 36* | 72
12 | 8 | 24* | 24* | 48 | 96
16 | 8 | 16* | 16* | 32 | 64
16 | 8 | 16 | 32* | 32* | 64
18 | 9 | 18* | 18* | 36 | 72
18 | 9 | 18 | 36* | 36* | 72
20 | 10 | 20* | 20* | 40 | 80
24 | 8 | 24* | 24* | 48 | 96
32 | 8 | 16 | 32* | 32* | 64
36 | 9 | 18 | 36* | 36* | 72
40 | 8 | 20 | 40* | 40* | 80
60 | 10 | 30 | 60* | 60* | 120
64 | 8 | 16 | 32 | 64* | 64*
64 | 8 | 16 | 64* | 64* | 128

I've marked the duplicate type sizes in each row with asterisks. Notice
that every row has two types of the same size.

So it's tempting to conclude that adding another int type
size to C would simply force compiler writers to provide
four actually different int sizes instead of only three.


Personally, I don't think we'll ever see 128-bit ints
as a standard C datatype, or to put it another way,
I don't think we'll ever see four standard int sizes in C.

But *if* that ever does happen, we'll simply call them
int128_t, etc., since C99 already has those types.

-drt

Eric Sosman

Mar 27, 2006, 12:15:05 PM

David R Tribble wrote on 03/27/06 11:44:


>
> It's interesting to note that most implementations (all of them I've
> ever seen, in fact) only provide three of the four standard int type

> sizes, with two of the four being the same size. [...]

In a 64-bit program for SPARC there are four different
integer widths: 8-bit char, 16-bit short, 32-bit int, and
64-bit long and long long.

My (possibly faulty) recollection has it that the DEC
Alpha used the same arrangement (without "long long") in
its compilers for OSF/1.

--
Eric....@sun.com

David R Tribble

Mar 27, 2006, 12:35:47 PM
David R Tribble wrote:
>> It's interesting to note that most implementations (all of them I've
>> ever seen, in fact) only provide three of the four standard int type
>> sizes, with two of the four being the same size. [...]
>

Eric Sosman wrote:
> In a 64-bit program for SPARC there are four different
> integer widths: 8-bit char, 16-bit short, 32-bit int, and
> 64-bit long and long long.

Again, that's only three int sizes (four if you count 'char' as an int
type, which I'm not).

A 64-bit CPU could come the closest to having all four int
sizes: 16/32/64/128. But I don't know of any 64-bit C compilers
that do.


> My (possibly faulty) recollection has it that the DEC
> Alpha used the same arrangement (without "long long") in
> its compilers for OSF/1.

Yes, the DEC Alpha 64-bit CPU for OSF/1 used 16/32/64 ints
with 64-bit pointers (it did not have 'long long'). If it had had
128-bit 'long long', it would have been the first in my experience
with four different int sizes, but it didn't.

-drt

Eric Sosman

Mar 27, 2006, 1:21:14 PM

David R Tribble wrote on 03/27/06 12:35:


> David R Tribble wrote:
>
>>>It's interesting to note that most implementations (all of them I've
>>>ever seen, in fact) only provide three of the four standard int type
>>>sizes, with two of the four being the same size. [...]
>>
>
> Eric Sosman wrote:
>
>>In a 64-bit program for SPARC there are four different
>>integer widths: 8-bit char, 16-bit short, 32-bit int, and
>>64-bit long and long long.
>
>
> Again, that's only three int sizes (four if you count 'char' as an int
> type, which I'm not).

I was misled by the table in your post, whose
column headers listed five integer types (plus "word").

(Also: Why in the world do you exclude `char' from
the repertoire of "standard int types?" Are you put off
by the uncertainty over its signedness, perhaps? When I
spotted the mismatch between your "four standard int types"
and the six columns in the table, I quickly excluded "word"
but then guessed you'd forgotten to count `long long'. It
never occurred to me that you'd, er, recharacterize `char'
as a non-integer -- and it seems a bizarre stance for a C
programmer to take.)

--
Eric....@sun.com

Jordan Abel

Mar 27, 2006, 2:48:52 PM
On 2006-03-27, Eric Sosman <Eric....@sun.com> wrote:
>
>
> David R Tribble wrote on 03/27/06 11:44:
>>
>> It's interesting to note that most implementations (all of them I've
>> ever seen, in fact) only provide three of the four standard int type
>> sizes, with two of the four being the same size. [...]
>
> In a 64-bit program for SPARC there are four different
> integer widths: 8-bit char, 16-bit short, 32-bit int, and
> 64-bit long and long long.

I don't think he was counting char, when he talked about "three" of
"four".

Keith Thompson

Mar 27, 2006, 2:50:49 PM
"David R Tribble" <da...@tribble.com> writes:
> David R Tribble wrote:
>>> It's interesting to note that most implementations (all of them I've
>>> ever seen, in fact) only provide three of the four standard int type
>>> sizes, with two of the four being the same size. [...]
>>
>
> Eric Sosman wrote:
>> In a 64-bit program for SPARC there are four different
>> integer widths: 8-bit char, 16-bit short, 32-bit int, and
>> 64-bit long and long long.
>
> Again, that's only three int sizes (four if you count 'char' as an int
> type, which I'm not).

Well, you should, because it is.

8/16/32/64 is fairly common these days. Making full use of all 5
integer type sizes, assuming 8-bit char, would of course require
8/16/32/64/128 -- and I've never seen a system with 128-bit integers.

When 32-bit integers and pointers were common, it wasn't difficult to
foresee that they would become inadequate, and that we'd move to 64
bits. Now that 64-bit integers and pointers are becoming widespread,
I suspect we've reached a plateau; I don't think we'll move on to 128
bits for several decades. A 16-exabyte address space will keep me
happy for quite a while; even where I work, we're barely dealing with
petabytes, and that's not directly addressable.

Jordan Abel

Mar 27, 2006, 2:50:52 PM
On 2006-03-27, Eric Sosman <Eric....@sun.com> wrote:
> When I spotted the mismatch between your "four standard int types" and
> the six columns in the table, I quickly excluded "word" but then
> guessed you'd forgotten to count `long long'. It never occurred to me
> that you'd, er, recharacterize `char' as a non-integer -- and it seems
> a bizarre stance for a C programmer to take.)

The keyword "int" is not allowed as part of its type name, therefore it
is arguable that it is not an "int type" despite being an "integer
type".

Jordan Abel

unread,
Mar 27, 2006, 3:07:04 PM3/27/06
to
On 2006-03-27, Keith Thompson <ks...@mib.org> wrote:
> "David R Tribble" <da...@tribble.com> writes:
>> David R Tribble wrote:
>>>> It's interesting to note that most implementations (all of them I've
>>>> ever seen, in fact) only provide three of the four standard int type
>>>> sizes, with two of the four being the same size. [...]
>>>
>>
>> Eric Sosman wrote:
>>> In a 64-bit program for SPARC there are four different
>>> integer widths: 8-bit char, 16-bit short, 32-bit int, and
>>> 64-bit long and long long.
>>
>> Again, that's only three int sizes (four if you count 'char' as an int
>> type, which I'm not).
>
> Well, you should, because it is.
>
> 8/16/32/64 is fairly common these days. Making full use of all 5
> integer type sizes, assuming 8-bit char, would of course require
> 8/16/32/64/128 -- and I've never seen a system with 128-bit integers.
>
> When 32-bit integers and pointers were common, it wasn't difficult to
> foresee that they would become inadequate, and that we'd move to 64
> bits. Now that 64-bit integers and pointers are becoming widespread,
> I suspect we've reached a plateau; I don't think we'll move on to 128
> bits for several decades. A 16-exabyte address space will keep me
> happy for quite a while; even where I work, we're barely dealing with
> petabytes, and that's not directly addressable.

a 128-bit word size might make sense, though, for a specialized system
that is intended mainly to work with high-precision floating point. But
I'll agree that LP64 and LLP64 are probably going to be the most common
models for hosted systems from here on out. [Those are 8/16/32/64 with
64- and 32-bit long, respectively, and 64-bit pointers.]
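
One way to probe a given implementation's model (assuming a C99
compiler for %zu):

#include <stdio.h>
int main(void)
{
    printf("int=%zu long=%zu long long=%zu void*=%zu\n",
           sizeof(int), sizeof(long), sizeof(long long),
           sizeof(void *)); /* LP64 prints 4 8 8 8; LLP64 prints 4 4 8 8 */
    return 0;
}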

Keith Thompson

Mar 27, 2006, 3:09:20 PM

The language makes no such distinction. Type char is an integer type;
the only thing that's really special about it is that plain char may
be either signed or unsigned.

David R Tribble

Mar 27, 2006, 3:23:26 PM
Eric Sosman wrote:
>> When I spotted the mismatch between your "four standard int types" and
>> the six columns in the table, I quickly excluded "word" but then
>> guessed you'd forgotten to count `long long'. It never occurred to me
>> that you'd, er, recharacterize `char' as a non-integer -- and it seems
>> a bizarre stance for a C programmer to take.)
>

Jordan Abel wrote:
> The keyword "int" is not allowed as part of its type name, therefore it
> is arguable that it is not an "int type" despite being an "integer
> type".

Exactly. Yes, 'char' is an integer type of C, but it's not an 'int'
type (because 'int' is not allowed as part of its type name).

Doesn't matter anyway; my point is still true that all C compilers
to date (at least those I'm aware of) support two standard integer
types of identical size. Three out of four or four out of five, either
way, there appears to always be a redundant type.

-drt

kuy...@wizard.net

Mar 27, 2006, 3:44:50 PM

Perhaps; but why bother talking about 'int' types in the first place?
Why not discuss "integer" types instead?

Eric Sosman

Mar 27, 2006, 3:41:19 PM

David R Tribble wrote on 03/27/06 15:23:

Isn't the Alpha under OSF/1 (already mentioned) a
counterexample? It's "four out of four" (or "three out
of three" if you count un-char-itably). If you want to
look from the other angle, it has no "redundant" type.

--
Eric....@sun.com

Eric Sosman

Mar 27, 2006, 3:47:19 PM

Jordan Abel wrote on 03/27/06 15:07:


>
> a 128-bit word size might make sense, though, for a specialized system
> that is intended to mainly work with high-precision floating point,

> though. [...]

Not a mere theoretical possibility: DEC VAX supported
four floating-point formats, one of which (H-format) used
128 bits. The small-VAX models I used implemented H-format
with trap-and-emulate, but it was part of the instruction
architecture nonetheless and in that sense a "native" form.

--
Eric....@sun.com

Keith Thompson

Mar 27, 2006, 4:10:57 PM
Eric Sosman <Eric....@sun.com> writes:
[...]

>> Doesn't matter anyway; my point is still true that all C compilers
>> to date (at least those I'm aware of) support two standard integer
>> types of identical size. Three out of four or four out of five, either
>> way, there appears to always be a redundant type.
>
> Isn't the Alpha under OSF/1 (already mentioned) a
> counterexample? It's "four out of four" (or "three out
> of three" if you count un-char-itably). If you want to
> look from the other angle, it has no "redundant" type.

Alpha OSF/1 has the following:

char 8
short 16
int 32
long 64
long long 64

It has no redundant type only if you ignore C99.

In any case, redundant types aren't necessarily a bad thing. The
standard guarantees a minimum range for each type, and requires a
reasonably large set of types to be mapped onto the native types of
the underlying system. Having some types overlap is better than
leaving gaps.

Jordan Abel

Mar 27, 2006, 5:54:11 PM
On 2006-03-27, Eric Sosman <Eric....@sun.com> wrote:
>
>

I'm talking about a hypothetical machine that used 128 bits for
everything, as some allegedly now use 32 bits for everything.

Jordan Abel

Mar 27, 2006, 5:55:21 PM
On 2006-03-27, Keith Thompson <ks...@mib.org> wrote:
> Jordan Abel <rand...@gmail.com> writes:
>> On 2006-03-27, Eric Sosman <Eric....@sun.com> wrote:
>>> When I spotted the mismatch between your "four standard int types" and
>>> the six columns in the table, I quickly excluded "word" but then
>>> guessed you'd forgotten to count `long long'. It never occurred to me
>>> that you'd, er, recharacterize `char' as a non-integer -- and it seems
>>> a bizarre stance for a C programmer to take.)
>>
>> The keyword "int" is not allowed as part of its type name, therefore it
>> is arguable that it is not an "int type" despite being an "integer
>> type".
>
> The language makes no such distinction.

We have short ints, long ints, and no char ints. That's a language
distinction if there ever was one. "int type" isn't really a term
defined by the language anyway, and arguably one plausible definition is
"types declared using the keyword 'int'".

Keith Thompson

Mar 27, 2006, 6:30:16 PM
Jordan Abel <rand...@gmail.com> writes:
> On 2006-03-27, Keith Thompson <ks...@mib.org> wrote:
>> Jordan Abel <rand...@gmail.com> writes:
>>> On 2006-03-27, Eric Sosman <Eric....@sun.com> wrote:
>>>> When I spotted the mismatch between your "four standard int types" and
>>>> the six columns in the table, I quickly excluded "word" but then
>>>> guessed you'd forgotten to count `long long'. It never occurred to me
>>>> that you'd, er, recharacterize `char' as a non-integer -- and it seems
>>>> a bizarre stance for a C programmer to take.)
>>>
>>> The keyword "int" is not allowed as part of its type name, therefore it
>>> is arguable that it is not an "int type" despite being an "integer
>>> type".
>>
>> The language makes no such distinction.
>
> We have short ints, long ints, and no char ints. That's a language
> distinction if there ever was one. "int type" isn't really a term
> defined by the language anyway, and arguably one plausible definition is
> "types declared using the keyword 'int'".

We also have "short", "unsigned short", "unsigned", "long", "unsigned
long", etc.

If I wanted to define the term "int type", I suppose "any type that
*can* be declared using the keyword 'int'" might be a plausible
definition. However, the standard doesn't define such a term (any
more than it groups long, unsigned long, long long, unsigned long
long, and long double as "long types").

I see absolutely no point either in defining such a term or in
continuing this discussion.

RSoIsCaIrLiIoA

Mar 28, 2006, 1:39:37 AM
On 27 Mar 2006 08:44 -0800, "David R Tribble" wrote:
>Wojtek Lerch wrote:
>>> BTW Think about
>>> typedef long long long long;
>>> ;-)
>Stephen Sprunk wrote:
>> That "long long" even exists is a travesty.
>>
>> What are we going to do when 128-bit ints become common in another couple
>> decades? Call them "long long long"? Or if we redefine "long long" to be
>> 128-bit ints and "long" to be 64-bit ints, will a 32-bit int be a "short
>> long" or a "long short"? Maybe 32-bit ints will become "short" and 16-bit
>> ints will be a "long char" or "short short"? Or is a "short short" already
>> equal to a "char"?
>>
>> All we need are "int float" and "double int" and the entire C type system
>> will be perfect! </sarcasm>
>
>It's interesting to note that most implementations (all of them I've
>ever seen, in fact) only provide three of the four standard int type
>sizes, with two of the four being the same size. For example,
>consider the following typical choices of type sizes for various
>CPU word sizes:
>
> word | char | short| int | long | long long
> -----+------+------+------+------+----------

Data structures and their sizes matter heavily for portability (when
the data have the same sizes everywhere, operations like & ^ | on them
are all well defined the same way), so all portability problems
disappear.

So using char, int, short, long, etc. is an error if someone cares
about the portability of a program. They should have been int8, int16,
int32, etc. (with char as int8) from day 1, plus uns8, uns16, uns32,
etc. The problem could be that different CPUs have different 'main'
word sizes, and this affects efficiency.
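
C99's <stdint.h> already provides much of this; a small illustration:

#include <stdint.h>
uint32_t mask = UINT32_C(0xFF00FF00); /* exactly 32 bits, where the type exists */
int_fast16_t counter; /* at least 16 bits, chosen for speed */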

Douglas A. Gwyn

Mar 28, 2006, 11:11:01 AM
Stephen Sprunk wrote:
> That "long long" even exists is a travesty.

Hardly. The need for something along those lines was so pressing
that different compiler vendors had invented a variety of solutions
already, including some using "long long".

> What are we going to do when 128-bit ints become common in another couple
> decades?

Use int_least128_t if you need a standard name for a signed int
with width at least 128 bits. If you don't know what that is,
here's an opportunity to learn.
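
The same pattern already works today at 64 bits (a sketch; the 128-bit
typedef is optional and only present where the implementation provides
such a type):

#include <stdint.h>
int_least64_t big = INT64_C(1) << 40; /* guaranteed to exist in C99 */
/* int_least128_t and INT128_C would follow the same naming scheme. */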

Douglas A. Gwyn

Mar 28, 2006, 11:12:40 AM
Keith Thompson wrote:
> Mathematically, they're called "Gaussian integers".

And like most specialized types, there isn't a strong reason
to build them into the language (as opposed to letting the
programmer use a library for them). Probably floating-
complex should have been in that category, were it not for
established Fortran practice.

Douglas A. Gwyn

Mar 28, 2006, 11:15:39 AM
jacob navia wrote:
> lcc-win32 supports 128 bit integers. The type is named:
> int128

We hope you defined the appropriate stuff in <stdint.h>
and <inttypes.h>, since that is what portable programs
will have to use instead of implementation-specific names.

Note also that you have made lcc-win32 non-conforming to the
standard. You should have used an identifier reserved
for use by the C implementation, not one that is
guaranteed to be available for the application.

kuy...@wizard.net

Mar 28, 2006, 12:17:48 PM
Douglas A. Gwyn wrote:
> Stephen Sprunk wrote:
> > That "long long" even exists is a travesty.
>
> Hardly. The need for something along those lines was so pressing
> that different compiler vendors had invented a variety of solutions
> already, including some using "long long".

It's not "something along those lines" which was a travesty. A
size-named type like the ones that were introduced in C99 would have
been much better. It's specifically the choice of "long long" for the
type name that made it so objectionable.

David R Tribble

Mar 28, 2006, 3:06:13 PM
Eric Sosman wrote:
>> When I spotted the mismatch between your "four standard int types" and
>> the six columns in the table, I quickly excluded "word" but then
>> guessed you'd forgotten to count `long long'. It never occurred to me
>> that you'd, er, recharacterize `char' as a non-integer -- and it seems
>> a bizarre stance for a C programmer to take.)
>

Jordan Abel writes:
>> The keyword "int" is not allowed as part of its type name, therefore it
>> is arguable that it is not an "int type" despite being an "integer
>> type".
>

Keith Thompson wrote:
>> The language makes no such distinction.
>

Jordan Abel writes:
>> We have short ints, long ints, and no char ints. That's a language
>> distinction if there ever was one. "int type" isn't really a term
>> defined by the language anyway, and arguably one plausible definition is
>> "types declared using the keyword 'int'".
>

Keith Thompson wrote:
> We also have "short", "unsigned short", "unsigned", "long", "unsigned
> long", etc.
>
> If I wanted to define the term "int type", I suppose "any type that
> *can* be declared using the keyword 'int'" might be a plausible
> definition. However, the standard doesn't define such a term (any
> more than it groups long, unsigned long, long long, unsigned long
> long, and long double as "long types").
>
> I see absolutely no point either in defining such a term or in
> continuing this discussion.

Sorry for the confusion.

But like I said, it doesn't change my point, that all C compilers I've
ever seen have a redundant integer type size.

By itself, this is not necessarily a bad thing, but it does make
writing portable code a headache sometimes. I'm still waiting for
a standard macro that tells me about endianness (but that's
a topic for another thread).

-drt

David R Tribble

unread,
Mar 28, 2006, 3:13:59 PM3/28/06
to
Stephen Sprunk wrote:
>> That "long long" even exists is a travesty.
>

Douglas A. Gwyn wrote:
>> Hardly. The need for something along those lines was so pressing
>> that different compiler vendors had invented a variety of solutions
>> already, including some using "long long".
>

Kuyper wrote:
> It's not "something along those lines" which was a travesty. A
> size-named type like the ones that were introduced in C99 would have
> been much better. It's specifically the choice of "long long" for the
> type name that made it so objectionable.

Type names like 'long long' have the advantage of being decoupled
from the exact word size of the underlying CPU. That's why you
can write reasonably portable code for machines that don't have
nice multiple-of-8 word sizes.

Some programmers may prefer using 'int_least64_t' over 'long long'.
But I don't.

-drt

Keith Thompson

unread,
Mar 28, 2006, 3:14:34 PM3/28/06
to

None of the predefined integer types (char, short, int, long, long
long) have names that specify their actual sizes, allowing the sizes
to vary across platforms. Only minimum sizes are specified. This
encourages code that doesn't assume specific sizes (though there's
still plenty of code that assumes "all the world's a VAX", or these
days, "all the world's an x86". Introducing a new fundamental type
with a size-specific name would break that pattern, and could break
systems that don't have power-of-two sizes (vanishingly rare these
days, but the standard still allows for them).
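
One consequence is that portable code states its size requirements
against the guaranteed minimum ranges instead of assuming exact widths.
A small sketch (the 2**40 threshold is just an example, and C99
preprocessor arithmetic in intmax_t is assumed):

#include <limits.h>

/* The standard only guarantees LONG_MAX >= 2147483647; a program that
   needs more must check rather than assume. */
#if LONG_MAX < 1099511627775 /* 2**40 - 1 */
#error "this code needs 'long' to hold 40-bit values"
#endif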

Wojtek Lerch

unread,
Mar 28, 2006, 4:06:37 PM3/28/06
to
"David R Tribble" <da...@tribble.com> wrote in message
news:1143576373.3...@t31g2000cwb.googlegroups.com...

> I'm still waiting for a standard macro that tells me about endianness
> (but that's a topic for another thread).

One macro, or one per integer type? C doesn't disallow systems where some
types are big endian and some little endian.

C doesn't even disallow "mixed endian" -- any permutation of bits is OK.
Would you just classify those as "other", or do you have something more
complicated in mind? Or would you just ban them?

And what about padding bits -- how useful is it to know the endianness of a
type if you don't know where its padding bits are?


Douglas A. Gwyn

unread,
Mar 28, 2006, 4:34:05 PM3/28/06
to
kuy...@wizard.net wrote:
> ... It's specifically the choice of "long long" for the
> type name that made it so objectionable.

Why is that objectionable? It avoided using up another
identifier for a new keyword, did not embed some assumed
size in its name (unlike several extensions), and
matched the choice of some of the existing extensions.

David R Tribble

unread,
Mar 28, 2006, 8:28:56 PM3/28/06
to
David R Tribble wrote:
>> I'm still waiting for a standard macro that tells me about endianness
>> (but that's a topic for another thread).
>

Wojtek Lerch wrote:
> One macro, or one per integer type? C doesn't disallow systems where some
> types are big endian and some little endian.
>
> C doesn't even disallow "mixed endian" -- any permutation of bits is OK.
> Would you just classify those as "other", or do you have something more
> complicated in mind? Or would you just ban them?
>
> And what about padding bits -- how useful is it to know the endianness of a
> type if you don't know where its padding bits are?

Something along the lines of:
http://david.tribble.com/text/c9xmach.txt

This was written in 1995, before 'long long' existed, so I'd have
to add a few more macros, including:

#define _ORD_LONG_HL n

My suggestion is just one of hundreds of ways to describe
endianness, bit sizes, alignment, padding, etc., that have been
invented over time, none of which ever made it into ISO C.

-drt

kuy...@wizard.net

unread,
Mar 28, 2006, 11:30:53 PM3/28/06
to

Keith Thompson wrote:
> kuy...@wizard.net writes:
> > Douglas A. Gwyn wrote:
> >> Stephen Sprunk wrote:
> >> > That "long long" even exists is a travesty.
> >>
> >> Hardly. The need for something along those lines was so pressing
> >> that different compiler vendors had invented a variety of solutions
> >> already, including some using "long long".
> >
> > It's not "something along those lines" which was a travesty. A
> > size-named type like the ones that were introduced in C99 would have
> > been much better. It's specifically the choice of "long long" for the
> > type name that made it so objectionable.
>
> None of the predefined integer types (char, short, int, long, long
> long) have names that specify their actual sizes, allowing the sizes
> to vary across platforms. Only minimum sizes are specified.

In other words, the built-in types were roughly equivalent to
int_leastN_t or int_fastN_t. I definitely approve of types that are
allowed to have different sizes on different platforms. I think that
they are, by far, the most appropriate types to use in most contexts.

However, while using English adjectives as keywords to specify the
minimum size seemed reasonable when the number of different sizes was
small, it has become steadily less reasonable as the number of
different sizes has increased. The new size-named types provide a more
scalable solution to identifying the minimum size. Were backward
compatibility not an issue, I'd recommend abolishing the original type
names in favor of size-named types. I wouldn't recommend the current
naming scheme for the new types, however - intN_t should have been used
for the fast types, with int_exactN_t being reserved for the
exact-sized types.

> This
> encourages code that doesn't assume specific sizes (though there's

The same benefit accrues to the non-exact-sized size-named types.

> days, "all the world's an x86". Introducing a new fundamental type
> with a size-specific name would break that pattern, and could break
> systems that don't have power-of-two sizes (vanishingly rare these
> days, but the standard still allows for them).

You're assuming that the size-specific name would identify an
exact-sized type rather than a minimum-sized type. I would not approve
of that solution any more than you would, for precisely the reasons you
give.

kuy...@wizard.net

unread,
Mar 28, 2006, 11:34:18 PM3/28/06
to
David R Tribble wrote:
...

> Type names like 'long long' have the advantage of being decoupled
> from the exact word size of the underlying CPU. That's why you
> can write reasonably portable code for machines that don't have
> nice multiple-of-8 word sizes.

int_least64_t shares that same characteristic in a more scalable
fashion.

> Some programmers may prefer using 'int_least64_t' over 'long long'.
> But I don't.

I would prefer using int64 over either of those alternatives, with
int64 being given the same meaning currently attached to int_least64_t.
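
On a C99 implementation that preference can be approximated with a
one-line typedef (a sketch; note that 'int64' is an ordinary
identifier, not a name reserved for the implementation):

#include <stdint.h>

typedef int_least64_t int64; /* 'int64' with int_least64_t semantics */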

kuy...@wizard.net

unread,
Mar 28, 2006, 11:58:54 PM3/28/06
to
Douglas A. Gwyn wrote:
> kuy...@wizard.net wrote:
> > ... It's specifically the choice of "long long" for the
> > type name that made it so objectionable.
>
> Why is that objectionable?

Because it's the wrong solution, and adopting it into the standard
creates justification for anticipating (hopefully incorrectly) that
this wrong solution is the way that future versions of the standard
will handle new type sizes.

> ... It avoided using up another
> identifier for a new keyword, did not embed some assumed
> size in its name (unlike several extensions),

You consider that an advantage. I think it's a disadvantage to have a
type whose minimum required size corresponds to 64 bits but to give
it a name that does not make that fact explicit.

Also, I've heard it criticised because its form makes it something
unique in the standard: a doubled keyword that is neither a syntax
error nor equivalent to the corresponding un-doubled keyword. I
don't know much about the internals of compiler design, but I've seen
comments from someone who thought he did, who claimed that this
distinction unnecessarily imposed an (admittedly small) additional
level of complexity on the parser.

> and
> matched the choice of some of the existing extensions.

I recognise the practical necessity of taking into consideration
existing practice. My criticism of 'long long' was aimed primarily at
those who created it in the first place as an extension to existing
implementations.

Richard Bos

unread,
Mar 29, 2006, 1:41:20 AM3/29/06
to
"Douglas A. Gwyn" <DAG...@null.net> wrote:

> jacob navia wrote:
> > lcc-win32 supports 128 bit integers. The type is named:
> > int128
>
> We hope you defined the appropriate stuff in <stdint.h>
> and <inttypes.h>, since that is what portable programs
> will have to use instead of implementation-specific names.
>
> Note also that you have made lcc-win32 non standards conformant.

^
even more
HTH; HAND.

Richard

Richard Bos

unread,
Mar 29, 2006, 1:44:06 AM3/29/06
to
kuy...@wizard.net wrote:

[ about "long long": ]

> Also, I've heard it criticised because its form makes it something
> unique in the standard: a doubled keyword that is neither a syntax
> error nor equivalent to the corresponding un-doubled keyword. I
> don't know much about the internals of compiler design, but I've seen
> comments from someone who thought he did, who claimed that this
> distinction unnecessarily imposed an (admittedly small) additional
> level of complexity on the parser.

More so than "long int", "signed int", and "unsigned short int" already
did? If so, I can't help thinking that the difference must have been
truly slight.

Richard

jacob navia

unread,
Mar 29, 2006, 1:35:34 AM3/29/06
to
Douglas A. Gwyn wrote:

The use of int128 is only there IF you

#include <int128.h>

Otherwise you can use the identifier int128 as you want.

jacob

Jordan Abel

unread,
Mar 29, 2006, 10:34:41 AM3/29/06
to

In that case, why not use stdint.h and int128_t, int_least128_t, and
int_fast128_t?

Jordan Abel

unread,
Mar 29, 2006, 10:36:00 AM3/29/06
to

Those still have only one of each keyword present. It's not like "long
long" acts as a 'pseudo-keyword' - "long int signed long" is a valid
name for the type.
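
That is, the declaration specifiers may appear in any order, so all of
the following declare objects of the same type (a quick illustration):

long long int a;        /* the usual spelling */
int long long b;        /* same type */
long int signed long c; /* also the same type: signed long long int */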

jacob navia

unread,
Mar 29, 2006, 10:08:35 AM3/29/06
to

Because that would force ALL users of stdint.h to accept int128_t and
all the associated machinery, which is probably not what all of them want.

But the name int128 is not "cast in stone" and since I suppose the names
intXXX_t are reserved I could use those.

Basically this type is implemented using lcc-win32-specific extensions
like operator overloading, which makes it easy to define new types.
These extensions are disabled when you invoke the compiler in "no
extensions" mode. If I put the 128-bit integers in the stdint header,
the operator overloading they require would not work under the "ansi
c" environment, and problems would appear. That is why I use a special
header that will be used only by people who want those integer types.

Of course there is a strict ANSI C interface for 128 bit integers, but
if you use it, you would have to write

int128 a,b,c;
...
c = i128add(a,b);

instead of

c = a+b;
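
A guess at what the declarations behind such an interface might look
like (purely illustrative: only the name i128add comes from the text
above, and the struct layout is invented):

typedef struct {
    unsigned long long lo; /* low 64 bits */
    long long hi;          /* high 64 bits, carries the sign */
} int128;

int128 i128add(int128 a, int128 b);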

Michael Mair

unread,
Mar 29, 2006, 12:15:12 PM3/29/06
to
jacob navia wrote:

This is IMO not a good solution; if someone defined
typedef struct {
....
} int128;
in a library and one of your users wants to use this
library and another one which uses the
implementation-provided 128 bit exact width signed
integer type, he or she runs into a rather unnecessary
problem.
IMO, providing appropriate definitions in the
appropriate headers is better.

FWIW: I have seen enough "int64" and "Int64" structure
typedefs to assume that the same may exist for 128 bits.

Cheers
Michael
--
E-Mail: Mine is an /at/ gmx /dot/ de address.

Wojtek Lerch

unread,
Mar 29, 2006, 4:31:31 PM3/29/06
to
"David R Tribble" <da...@tribble.com> wrote in message
news:1143595736....@i39g2000cwa.googlegroups.com...

> David R Tribble wrote:
>>> I'm still waiting for a standard macro that tells me about endianness
>>> (but that's a topic for another thread).
>
> Wojtek Lerch wrote:
>> One macro, or one per integer type? C doesn't disallow systems where
>> some types are big endian and some little endian.
>>
>> C doesn't even disallow "mixed endian" -- any permutation of bits is OK.
>> Would you just classify those as "other", or do you have something more
>> complicated in mind? Or would you just ban them?
>>
>> And what about padding bits -- how useful is it to know the endianness
>> of a type if you don't know where its padding bits are?
>
> Something along the lines of:
> http://david.tribble.com/text/c9xmach.txt

I have to say that I find it rather vague and simplistic, and can't find
where it answers my questions. I have absolutely no clue how you wanted to
handle implementations that are neither clearly little-endian nor clearly
big-endian. You didn't propose to ban them, did you?

/* Bit/byte/word order */

#define _ORD_BIG 0 /* Big-endian */
#define _ORD_LITTLE 1 /* Little-endian */

#define _ORD_BITF_HL 0 /* Bitfield fill order */
#define _ORD_BYTE_HL 0 /* Byte order within shorts */
#define _ORD_WORD_HL 0 /* Word order within longs */

What about implementations with one-byte shorts? What if the bit order
within a short doesn't match the bit order in a char? What if the byte
order within a two-byte short doesn't match the byte order within a half of
a four-byte long? What about the halves of an int? What about
implementations with three-byte longs? What if the most significant bits
sit in the middle byte? Or if the three most significant bits are mapped to
the least significant bit of the three bytes?

> This was written in 1995, before 'long long' existed, so I'd have
> to add a few more macros, including:
>
> #define _ORD_LONG_HL n
>
> My suggestion is just one of hundreds of ways to describe
> endianness, bits sizes, alignment, padding, etc., that have been
> invented over time. None of which ever made it into ISO C.

Perhaps because they all made the incorrect assumption that in every
conforming implementation, every integer type must necessarily be either
little endian or big endian?

Personally, I think it would be both easier and more useful not to try to
classify all types on all implementations, but instead to define names for
big- and little-endian types and make them all optional. For instance:

uint_be32_t -- a 32-bit unsigned type with no padding bits and a
big-endian representation, if such a type exists.

The representation is big-endian if:

* for any two value bits located in different bytes, the bit whose byte
has a lower address represents a higher value
* for any two value bits located in the same byte, the order of their
represented values matches the order of the values they represent in
unsigned char
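
A runtime check in the spirit of that definition might look like this
sketch, which assumes 8-bit bytes and that uint32_t exists with no
padding bits:

#include <stdint.h>
#include <string.h>

/* Returns 1 if uint32_t is stored big-endian, 0 otherwise. */
static int uint32_is_big_endian(void)
{
    uint32_t v = 0x01020304u;
    unsigned char b[sizeof v];

    memcpy(b, &v, sizeof v);
    return b[0] == 1 && b[1] == 2 && b[2] == 3 && b[3] == 4;
}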


Jordan Abel

unread,
Mar 29, 2006, 6:24:11 PM3/29/06
to
On 2006-03-29, jacob navia <ja...@jacob.remcomp.fr> wrote:
> Jordan Abel wrote:
>> On 2006-03-29, jacob navia <ja...@jacob.remcomp.fr> wrote:
>>
>>>Douglas A. Gwyn a écrit :
>>>
>>>>jacob navia wrote:
>>>>
>>>>
>>>>>lcc-win32 supports 128 bit integers. The type is named:
>>>>>int128
>>>>
>>>>
>>>>We hope you defined the appropriate stuff in <stdint.h>
>>>>and <inttypes.h>, since that is what portable programs
>>>>will have to use instead of implementation-specific names.
>>>>
>>>>Note also that you have made lcc-win32 non standards
>>>>conformant. You should have used an identifier reserved
>>>>for use by the C implementation, not one that is
>>>>guaranteed to be available for the application.
>>>
>>>The use of int128 is only there IF you
>>>
>>>#include <int128.h>
>>>
>>>Otherwise you can use the identifier int128 as you want.
>>>
>>>jacob
>>
>>
>> In that case, why not use stdint.h and int128_t, int_least128_t, and
>> int_fast128_t?
>
> Because that would force ALL users of stdint.h to accept int128_t and
> all the associated machinery, which is probably not what all of them want.

Why? What machinery is associated with int128_t that C99 doesn't
_already_ say is permitted in stdint.h?

You'd have
[u]int128_t, [u]int_least128_t, [u]int_fast128_t, etc typedefs,
INT128_MIN, INT128_MAX, UINT128_MAX, and the associated LEAST and FAST
ones as well, INT128_C(x) and UINT128_C(x) in stdint.h

{PRI,SCN}[diouxX]{FAST,LEAST,}128 in inttypes.h

what else do you need?

> But the name int128 is not "cast in stone" and since I suppose the names
> intXXX_t are reserved I could use those.
>
> Basically this type is implemented using lcc-win32-specific extensions
> like operator overloading, which makes it easy to define new types.
> These extensions are disabled when you invoke the compiler in "no
> extensions" mode. If I put the 128-bit integers in the stdint header,
> the operator overloading they require would not work under the "ansi
> c" environment, and problems would appear.

Why not implement it as a standard type so that it can _always_ be used,
with nothing but an #ifdef INT128_MAX to check if it's present?

> That is why I use a special header that will be used only by people
> the want those integer types.
>
> Of course there is a strict ANSI C interface for 128 bit integers, but
> if you use it, you would have to write
>
> int128 a,b,c;
> ...
> c = i128add(a,b);
>
> instead of
>
> c = a+b;

Why? Why not implement it as a standard type, with the compiler knowing
about it?

#ifdef INT_LEAST128_MAX
int_least128_t a,b,c;
c = a+b;
#else
#error No 128-bit integer type available
#endif

Douglas A. Gwyn

unread,
Mar 30, 2006, 1:32:31 PM3/30/06
to
jacob navia wrote:

> Jordan Abel wrote:
> > In that case, why not use stdint.h and int128_t, int_least128_t, and
> > int_fast128_t?
> Because that would force ALL users of stdint.h to accept int128_t and
> all the associated machinery, which is probably not what all of them want.

If the programs don't try to use the type then the extra definitions
are of no consequence.

> ... If I put the 128-bit integers in the stdint
> header, the operator overloading they require would not work under the
> "ansi c" environment, and problems would appear. That is why I use a
> special header that will be used only by people who want those integer
> types.

You ought to rethink your design. If your compiler knows the
type as __int128 (for example) then <stdint.h> need only refer
to that name. You may have to define a testable macro for
your extended environment in order for the standard header to
know whether that type is supported or not, but that kind of
thing is quite common in implementations already.
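
A sketch of that arrangement, written as a hypothetical excerpt from
the implementation's own <stdint.h> (the macro __LCC_INT128 and the
built-in names below are invented for the example):

#ifdef __LCC_INT128 /* predefined by the compiler when the
                       extension is enabled */
typedef __int128 int_least128_t;
typedef unsigned __int128 uint_least128_t;
#define INT_LEAST128_MAX __INT128_MAX__ /* compiler-predefined constant,
                                           usable in #if directives */
#endif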

Douglas A. Gwyn

unread,
Mar 30, 2006, 1:40:53 PM3/30/06
to
kuy...@wizard.net wrote [re "long long"]:

> You consider that an advantage. I think it's a disadvantage to have a
> type whose minimum required size corresponds to 64 bits but to give
> it a name that does not make that fact explicit.

Then you should use <stdint.h>, which was introduced at the
same time. None of the "keyword" types has ever had a
specific size embedded in its name.

> Also, I've heard it criticised because its form makes it something
> unique in the standard: a doubled keyword that is neither a syntax
> error nor equivalent to the corresponding un-doubled keyword. I
> don't know much about the internals of compiler design, but I've seen
> comments from someone who thought he did, who claimed that this
> distinction unnecessarily imposed an (admittedly small) additional
> level of complexity on the parser.

If a parser generator is used (e.g. yacc) there is no significant
problem. If a hand-coded parser is used, it's nearly trivial to
handle. (Look ahead one token, for example. In Ritchie's PDP-11
C compiler a "long" counter was incremented, and there was no
diagnostic for multiple "longs". It is trivial to test for a
count of 1, 2, or many and do the right thing for each case.)
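
A sketch of that counting approach in a hand-coded parser; the token
name, the lexer helpers, and the type codes are all invented for the
illustration:

enum type_code { ty_int, ty_long, ty_llong };

static enum type_code parse_long_specifiers(void)
{
    int longs = 0;

    while (peek_token() == TOK_LONG) { /* hypothetical lexer hooks */
        consume_token();
        longs++;
    }
    if (longs > 2)
        syntax_error("too many 'long' specifiers");
    return longs == 0 ? ty_int : longs == 1 ? ty_long : ty_llong;
}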

kuy...@wizard.net

unread,
Mar 30, 2006, 3:32:36 PM3/30/06
to
Douglas A. Gwyn wrote:
> kuy...@wizard.net wrote [re "long long"]:
> > You consider that an advantage. I think it's a disadvantage to have a
> > type whose minimum required size corresponds to 64 bits but to give
> > it a name that does not make that fact explicit.
>
> Then you should use <stdint.h>, which was introduced at the
> same time.

I plan to, should our client ever give us permission to use anything
more advanced than C94. However, I wasn't complaining about the absence
of those types - I know they exist. I was objecting to the presence of
"long long", and in particular to its presence in some pre-C99
implementations. It's that presence which forced the C committee to
accept "long long" in the same revision as the preferred alternatives.

> ... None of the "keyword" types has ever had a
> specific size embedded in its name.

And, in retrospect, I don't approve of that fact.

David R Tribble

unread,
Mar 30, 2006, 4:55:23 PM3/30/06
to
David R Tribble wrote:
>> I'm still waiting for a standard macro that tells me about endianness
>> (but that's a topic for another thread).
>

Wojtek Lerch wrote:
>> One macro, or one per integer type? C doesn't disallow systems where
>> some types are big endian and some little endian.
>>
>> C doesn't even disallow "mixed endian" -- any permutation of bits is OK.
>> Would you just classify those as "other", or do you have something more
>> complicated in mind? Or would you just ban them?
>

David R Tribble wrote:
>> Something along the lines of:
>> http://david.tribble.com/text/c9xmach.txt
>

Wojtek Lerch wrote:
> I have to say that I find it rather vague and simplistic, and can't find
> where it answers my questions. I have absolutely no clue how you wanted to
> handle implementations that are neither clearly little-endian nor clearly
> big-endian. You didn't propose to ban them, did you?

No, that's why there are three endianness macros. This allows for,
say, the PDP-11 mixed-endian 'long int' type:

#define _ORD_BIG 0 /* Big-endian */
#define _ORD_LITTLE 0 /* Little-endian */

#define _ORD_BITF_HL 0 /* Bitfield fill order */
#define _ORD_BYTE_HL 0 /* Byte order within shorts */
#define _ORD_WORD_HL 1 /* Word order within longs */


> What about implementations with one-byte shorts?

Obviously the macro names could be better.


> What if the bit order within a short doesn't match the bit order in a char?
> What if the byte order within a two-byte short doesn't match the byte order
> within a half of a four-byte long? What about the halves of an int? What about
> implementations with three-byte longs? What if the most significant bits
> sit in the middle byte? Or if the three most significant bits are mapped to
> the least significant bit of the three bytes?

Then we need more macros with better names.
You're not saying that this is an unsolvable problem, are you?


> Perhaps because they all made the incorrect assumption that in every
> conforming implementation, every integer type must necessarily be either
> little endian or big endian?

I didn't make that assumption.


> Personally, I think it would be both easier and more useful not to try to
> classify all types on all implementations, but instead to define names for
> big- and little-endian types and make them all optional. For instance:
> uint_be32_t -- a 32-bit unsigned type with no padding bits and a
> big-endian representation, if such a type exists.

How do you tell if those types are not implemented?

More to the point, how do you tell portably what byte order plain
'int' is implemented with?

-drt

David R Tribble

unread,
Mar 30, 2006, 5:02:09 PM3/30/06
to
Douglas A. Gwyn wrote:
>> ... None of the "keyword" types has ever had a
>> specific size embedded in its name.
>

Kuyper wrote:
> And, in retrospect, I don't approve of that fact.

Then you probably don't approve of Java, Perl, awk, ksh, FORTRAN,
BASIC, etc., or most other programming languages, either.

-drt

Wojtek Lerch

unread,
Mar 30, 2006, 6:16:55 PM3/30/06
to
"David R Tribble" <da...@tribble.com> wrote in message
news:1143755723.9...@g10g2000cwb.googlegroups.com...

> David R Tribble wrote:
>>> Something along the lines of:
>>> http://david.tribble.com/text/c9xmach.txt
>>
>
> Wojtek Lerch wrote:
>> What if the bit order within a short doesn't match the bit order in a
>> char? What if the byte order within a two-byte short doesn't match the
>> byte order within a half of a four-byte long? What about the halves of
>> an int? What about implementations with three-byte longs? What if the
>> most significant bits sit in the middle byte? Or if the three most
>> significant bits are mapped to the least significant bit of the three
>> bytes?
>
> Then we need more macros with better names.
> You're not saying that this is an unsolvable problem, are you?

Pretty much, depending on what exactly you call the problem and what kind of
a solution you find acceptable.

Let's concentrate on implementations that have 16-bit short types with no
padding bits. There are 20922789888000 possible permutations of 16 bits,
and the C standard doesn't disallow any of them. Even though it's
theoretically possible to come up with a system of macros allowing programs
to distinguish all the permutations, I don't think it would be very useful
or practical. For all practical purposes, a distinction between big endian,
little endian, and "other" is sufficient. There are no existing "other"
implementations anyway.

In practice, a simple one-bit solution like yours is perfectly fine.
Unfortunately, it only covers practical implementations; therefore, it
wouldn't be acceptable as a part of the standard.

>> Perhaps because they all made the incorrect assumption that in every
>> conforming implementation, every integer type must necessarily be either
>> little endian or big endian?
>
> I didn't make that assumption.

Correct me if I'm wrong, but you did seem to make the assumption that there
are only two possible byte orders within a short, and that there are only
two possible "word orders" within a long, and that knowing those two bits of
information (along with the common stuff from <limits.h>) gives you complete
or at least useful knowledge about the bit order of all integer types (in
C89).

If I indeed misunderstood something, could you explain how you would use
your macros in a program to distinguish between implementations where an
unsigned short occupies two 9-bit bytes, has two padding bits, and
represents the value 0x1234 as

(a) 0x12, 0x34 ("big endian", with a padding bit at the top of each byte)
(b) 0x24, 0x68 ("big endian", with a padding bit at the bottom of each
byte)
(c) 0x22, 0x64 ("big endian", with a padding bit in the middle of each
byte)
(d) 0x34, 0x12 ("little endian", padding at the top)
(e) 0x68, 0x24 ("little endian", padding at the bottom)
(f) 0x23, 0x14 ("middle endian", with the middle bits in the first byte, a
padding bit at the top of each byte)

>> Personally, I think it would be both easier and more useful not to try
>> to classify all types on all implementations, but instead to define
>> names for big- and little-endian types and make them all optional. For
>> instance:
>> uint_be32_t -- a 32-bit unsigned type with no padding bits and a
>> big-endian representation, if such a type exists.
>
> How do you tell if those types are not implemented?

The same way as any other type from <stdint.h> -- #if
defined(UINT_BE32_MAX).

> More to the point, how do you tell portably what byte order plain
> 'int' is implemented with?

You don't. It doesn't make sense to talk about the "byte order" without
assuming that the value bits are grouped into bytes according to their
value; and that assumption is not portable. At least not in theory.

Using your method, how do you tell where the padding bits are located? If
you can't, how useful is it to know the "byte order"?

Eric Sosman

unread,
Mar 30, 2006, 6:18:44 PM3/30/06
to

David R Tribble wrote on 03/30/06 16:55:
> [...]


>
> Wojtek Lerch wrote:
>
>>I have to say that I find it rather vague and simplistic, and can't find
>>where it answers my questions. I have absolutely no clue how you wanted to
>>handle implementations that are neither clearly little-endian nor clearly
>>big-endian. You didn't propose to ban them, did you?
>
>
> No, that's why there are three endianness macros. This allows for,
> say, the PDP-11 mixed-endian 'long int' type:
>
> #define _ORD_BIG 0 /* Big-endian */
> #define _ORD_LITTLE 0 /* Little-endian */

Does the Standard require that the 1's bit and the
2's bit of an `int' reside in the same byte? Or is the
implementation free to scatter the bits of the "pure
binary" representation among the different bytes as it
pleases? (It must, of course, scatter the corresponding
bits of signed and unsigned versions in the same way.)

If the latter, I think there's the possibility (a
perverse possibility) of a very large number of permitted
"endiannesses," something like

(sizeof(type) * CHAR_BIT) !
-----------------------------
(CHAR_BIT !) ** sizeof(type)

Argument: There are `sizeof(type) * CHAR_BIT' bits (value,
sign, and padding) in the object, so the number of ways to
permute the bits is the factorial of that quantity. But C
cannot detect the arrangement of individual bits within a
byte, so each byte of the object divides the number of
detectably different arrangements by `CHAR_BIT!'.

For an `int' made up of four eight-bit bytes, this
gives 32! / (8! ** 4) ~= 1e17 "endiannesses," or one tenth
of a billion billion.

--
Eric....@sun.com
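
Sosman's estimate is easy to check numerically: since lgamma(n+1)
equals ln(n!), a few lines of C99 confirm the ~1e17 figure (link
with -lm on Unix systems):

#include <math.h>
#include <stdio.h>

int main(void)
{
    /* ln(32!) - 4*ln(8!), then exponentiate */
    double ln_count = lgamma(33.0) - 4.0 * lgamma(9.0);
    printf("%.3g\n", exp(ln_count)); /* prints about 9.96e+16 */
    return 0;
}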

Jordan Abel

unread,
Mar 30, 2006, 7:34:11 PM3/30/06
to
On 2006-03-30, Wojtek Lerch <Wojt...@yahoo.ca> wrote:
> (a) 0x12, 0x34 ("big endian", with a padding bit at the top of each
> byte)
> (b) 0x24, 0x68 ("big endian", with a padding bit at the bottom of each
> byte)
> (c) 0x22, 0x64 ("big endian", with a padding bit in the middle of each
> byte)
> (d) 0x34, 0x12 ("little endian", padding at the top)
> (e) 0x68, 0x24 ("little endian", padding at the bottom)
> (f) 0x23, 0x14 ("middle endian", with the middle bits in the first byte,
> a padding bit at the top of each byte)

You forgot 0x09, 0x34, big-endian with the padding bits at the top of
the word, which is, to me, the most obvious of all.

Wojtek Lerch

unread,
Mar 30, 2006, 8:06:27 PM3/30/06
to
"Jordan Abel" <rand...@gmail.com> wrote in message
news:slrne2oue2.2...@random.yi.org...

> You forgot 0x09, 0x34, big-endian with the padding bits at the top of
> the word, which is, to me, the most obvious of all.

The truth is I didn't think of it because my example originally had 8-bit
bytes and no padding. But as far as my point is concerned, it doesn't
matter which combination is the most obvious one, only that there are
zillions of valid combinations.

