int_least8_t
int_least16_t
int_least32_t
int_least64_t
*but no others*, the new C standard is guilty of a new form of
discrimination. If, for my application, I want a type with at least 20
bits, why can I not write
int_least20_t j;
and get whatever's provided that fits my needs?
It seems to me that *every* type of the form
int_leastN_t
uint_leastN_t
for all values of N up till the largest (64, 128, whatever) ought to be
mandated.
In general, it seems to me that C9x does a very half-hearted job of
supporting extended integer types, as though the committee didn't
really think anyone would use them. Their use is certainly not
"standardised" as I understand the term.
For example, suppose I am writing for an implementation that
supplies an int48_t and I write
int48_t i;
It seems I can print this value by using
printf ("i = %" PRId48 "\n", i);
which is fair enough; but how do I take its absolute value? How do I
divide it by another such, to get a quotient and remainder? No
Standard words help me or my implementor.
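(For comparison, the same pattern does work today for a mandated width; a minimal sketch using int32_t and PRId32 from <inttypes.h>, noting that the % must come from the format string, with the macro supplying only the length and conversion letter:)

```c
#include <stdio.h>
#include <stdlib.h>
#include <inttypes.h>

/* Print an int32_t using the PRI macro pattern and return its
   absolute value; int32_t is always representable in long. */
static long show_abs(int32_t i)
{
    printf("i = %" PRId32 "\n", i);  /* PRId32 expands to the length
                                        modifier and conversion letter */
    return labs((long)i);
}
```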
Shouldn't the whole library be "genericised", so that it
extends in a consistent way across *all* extended integer types?
Thereby solving the problem for int128_t, int80_t, etc, as well?
Once that was done, my bet is that the spelling "long long" for a
64-bit type would rapidly lose ground to the much clearer "int64_t" (where
available). Then, if support for *all* extended types was made
implementation-defined (so smaller systems don't need to support
64-bit types at all), "long" could be once more defined as the longest
type, "long long" deleted, and
*everyone would be happy*!!
and C9x would be upwards-compatible with C89. (All at the cost of a
little firm leadership from the committee, no worse than the
introduction of prototypes, and a few people having to change
non-standard "long long" to standard "int64_t" in their code.)
I hope that the UK position at Santa Cruz will continue to be that a
"no" vote is required for a draft that, while professing to be in some
sense compatible with the previous one, breaks so much code.
/|
(_|/
/|
(_/
> int_least20_t j;
>and get whatever's provided that fits my needs?
Because people wouldn't go for a requirements-based specification which made
things remotely tricky.
You'll note that people are *not* required to provide int16_t.
I think the argument for suggesting the _powerof2_t's is that they're very
common, and anyone with a larger byte size can always just fudge by providing
larger "least" types.
>for all values of N up till the largest (64, 128, whatever) ought to be
>mandated.
I wouldn't necessarily mind.
> In general, it seems to me that C9x does a very half-hearted job of
>supporting extended integer types, as though the committee didn't
>really think anyone would use them. Their use is certainly not
>"standardised" as I understand the term.
The committee couldn't find reasonable existing practice for real
specifications-based integer types, and didn't want to go wild inventing
something. We remember trigraphs.
> I hope that the UK position at Santa Cruz will continue to be that a
>"no" vote is required for a draft that, while professing to be in some
>sense compatible with the previous one, breaks so much code.
While I basically agree, I think this one may be lost. At least, thanks
to some heroic efforts, the type promotions rules no longer have an
entire paragraph that is to the word "long" what the spam sketch was to the
word "spam".
(... "and unsigned long long and long shall be promoted to unsigned long long,
short, long, and long.")
-s
--
Copyright 1998, All rights reserved. Peter Seebach / se...@plethora.net
C/Unix wizard, Pro-commerce radical, Spam fighter. Boycott Spamazon!
Seeking interesting programming projects. Not interested in commuting.
Visit my new ISP <URL:http://www.plethora.net/> --- More Net, Less Spam!
No.
> ... it seems to me
> that in requiring the types
> int_least8_t
> int_least16_t
> int_least32_t
> int_least64_t
> *but no others*, the new C standard is guilty of a new form of
> discrimination.
No more than C89's requirements that:
char is at least 8 bits wide
short and int are at least 16 bits wide
long is at least 32 bits wide
> ... why can I not write
> int_least20_t j;
> and get whatever's provided that fits my needs?
Basically, because it would make an already bulky header
vastly bulkier to support more than a handful of values "n".
Actually, the exact-width types of various "n" could be handy too.
(You could test for them, remember.)
We actually discussed this during the <inttypes.h> revision process,
and the consensus was that there was too little benefit;
in the vast majority of implementations,
using int_least32_t instead would get you the same type.
There are few architectures with non-power-of-two native integer widths.
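(Concretely, the test uses the limit macros: C9x requires INTN_MAX to be defined exactly when intN_t is provided, so optional widths can be probed at compile time. A sketch, with int48_t hypothetical:)

```c
#include <stdint.h>
#include <limits.h>

/* An implementation defines INT48_MAX exactly when it provides
   int48_t, so the optional type can be tested for and a mandated
   "least" type used as the fallback. */
#ifdef INT48_MAX
typedef int48_t wide48;          /* native 48-bit type */
#else
typedef int_least64_t wide48;    /* guaranteed fallback */
#endif
```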
> In general, it seems to me that C9x does a very half-hearted job of
> supporting extended integer types, as though the committee didn't
> really think anyone would use them. Their use is certainly not
> "standardised" as I understand the term.
To the contrary, extended integers and ways to specify them
received a *lot* of committee attention,
as has been previously explained
(you seem to have missed that discussion).
<inttypes.h> (which now has a cleaned-up clone <stdint.h>)
was invented by various large-machine vendors and programmers
as a way to get a handle on the various forms of extended
integer type names that those environments introduced.
It was either standardize that existing practice (i.e.,
draw up the best, broadest specs we could for a compatible
facility) or else create a new, incompatible mechanism for
addressing at least the same application requirements.
We pursued both paths for several meetings, finally agreeing
on the former with improvements inspired by the latter.
There are several of us interested in a language-based
mechanism for requirements-based integer type specification,
but it was deemed too radical a change to the language with
too little existing practice and experience to draw upon for
this revision of the standard. I've been encouraging
experimentation along these lines, with the hope that a
good mechanism can be incorporated into a future revision
of C, or other languages.
> For example, suppose I am writing for an implementation that
> supplies an int48_t and I write
> int48_t i;
> It seems I can print this value by using
> printf ("i = %" PRId48 "\n", i);
> which is fair enough; but how do I take its absolute value?
Convert to long long int, do the operation, and convert back.
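(That idiom, sketched with int32_t standing in for the hypothetical int48_t; llabs is new in C9x:)

```c
#include <stdlib.h>
#include <stdint.h>

/* "Widen, operate, narrow": do the operation in long long, then
   convert back to the narrower type. */
static int32_t abs_narrow(int32_t v)
{
    return (int32_t)llabs((long long)v);
}
```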
> How do I divide it by another such, to get a quotient and remainder?
Just divide using the / operator; it is specified for *all* integer
types
including the implementation-defined extended integer types.
The remainder can be computed from the operands and the quotient
quite simply in C9x, which specifies the direction of truncation
for integer division. (It's not as simple as it could have been;
I argued for a pure step function, but the do-it-like-Fortran
contingent prevailed.)
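(A sketch of that recovery; nothing here is implementation-specific, since C9x pins integer division to truncation toward zero:)

```c
/* Because C9x truncates integer division toward zero, the remainder
   is always recoverable from the operands and the quotient, for any
   integer type. */
static long rem_from_quotient(long a, long b)
{
    long q = a / b;      /* truncated toward zero */
    return a - q * b;    /* equals a % b */
}
```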
> Shouldn't the whole library be "genericised", so that it
> extends in a consistent way across *all* extended integer types?
If we didn't have to be consistent with existing practice,
that could be done; Fortran was somewhat like that.
But then the application program really ought to be able to
define its own generic-type functions, too, nicht wahr?
> *everyone would be happy*!!
I think it is more likely that everyone would complain at the
extent of changes from C89. C is not meant to be a fat, lazy
language. I would rather see it tautened up.
> I hope that the UK position at Santa Cruz will continue to be that a
> "no" vote is required for a draft that, while professing to be in some
> sense compatible with the previous one, breaks so much code.
And *I* hope that everyone there will understand that what
you call "breaking code" doesn't break any code, but opens up
new possibilities that the previous code could never have exploited.
Thank you for the answers. It's interesting that these are all
expressed in terms of politics and prior art, rather than in terms of
defining a direction for the evolution of C for the next 5 or so
years. I would be fascinated to learn exactly how the social dynamics
of committee meetings work. The C9x committee seems to be a very
"meek" body, happy only to standardise existing practice, even at the
cost of its own previous rulings on interpretations of the standard.
By contrast, the C89 committee was self-confident enough to invent its
way out of the various impasses that it came upon: prototypes,
trigraphs, wide-character handling (in NA1).
And of course then there's the C++ committee: a veritable giant
among committees, to tread the jewelled thrones of the Earth under its
sandalled feet.[1]
> Jonathan Coxhead wrote:
>
> > In general, it seems to me that C9x does a very half-hearted job of
> >supporting extended integer types, as though the committee didn't
> >really think anyone would use them. Their use is certainly not
> >"standardised" as I understand the term.
>
> The committee couldn't find reasonable existing practice for real
> specifications-based integer types, and didn't want to go wild inventing
> something. We remember trigraphs.
But you have forgotten prototypes!
I'm convinced that a (relatively small) leap of faith could restore
everyone's confidence in the compatibility of C9x, and therefore the
basic design of C itself; in this case the standardisation of the
"long long" type under the name "int64_t" only.
> (... "and unsigned long long and long shall be promoted to unsigned
> long long, short, long, and long.")
"Chortle"!
/|
(_|/
/|
(_/
[1] Robert E Howard (sorry!).
> Jonathan Coxhead wrote:
>
> > ... why can I not write
> > int_least20_t j;
> > and get whatever's provided that fits my needs?
>
> Basically, because it would make an already bulky header
> vastly bulkier to support more than a handful of values "n".
You are arguing that a facility is not needed because it cannot be
supported easily by the framework that has been proposed for it; but
if the facility is needed, something can be invented to provide it; if
the facility is not needed, why support it at all, even for the
arbitrary values 8, 16, 32, 64?
> There are few architectures with non-power-of-two native integer
> widths.
To restate exactly this in the positive: "some architectures have
non-power-of-two native integer widths". These statements are
logically identical: the *emotional colouring* behind one way of
phrasing this point is being used to influence the design of one of
the most important tools we use to write software.
> > In general, it seems to me that C9x does a very half-hearted job of
> > supporting extended integer types, as though the committee didn't
> > really think anyone would use them. Their use is certainly not
> > "standardised" as I understand the term.
>
> To the contrary, extended integers and ways to specify them
> received a *lot* of committee attention,
> as has been previously explained
> (you seem to have missed that discussion).
It's possible I might, though of course I have no first-hand
knowledge of what has had committee attention and what has not. If you
think I haven't been following all the threads here about |long long|
though, you are mistaken, as they have been receiving my closest
attention.
It seems not so hard to satisfy both sides of the debate
(*) we need a 64-bit type
(*) |long| should be the longest type
by spelling the 64-bit type |int64_t| rather than |long long|.
This would require a better "genericisation" of the standard
library, which I think is needed anyway. I don't seem to have made
myself very clear on this point though, so I shall elaborate further.
I wrote,
> > For example, suppose I am writing for an implementation that
> > supplies an int48_t and I write
> > int48_t i;
> > It seems I can print this value by using
> > printf ("i = %" PRId48 "\n", i);
> > which is fair enough; but how do I take its absolute value?
> > How do I divide it by another such, to get a quotient and remainder?
By writing this, I meant to establish in the mind of the reader a
set of parallel code constructions, as follows (for a typical 16-bit
machine, in this case):
Absolute value          Quotient and remainder
--------------          ----------------------
int16_t x, y;           int16_t i, j;
x = abs (y);            div_t d;
                        d = div (i, j);

int32_t x, y;           int32_t i, j;
x = labs (y);           ldiv_t d;
                        d = ldiv (i, j);

int64_t x, y;           int64_t i, j;
x = llabs (y);          lldiv_t d;
                        d = lldiv (i, j);
At present, to use these functions in this way you have to know that
|int16_t| is |int|, |int32_t| is |long|, |int64_t| is |long long|. If
we imagine that this implementation also provides a 48-bit type
|int48_t|, because this is an important type in the underlying
hardware, then we have to ask how to construct the code fragments
above for 48-bit x, y, i, j, d.
The solutions
> Convert to |long long int|, do the operation, and convert back.
or
> Just divide using the |/| operator; it is specified for *all* integer
> types
are not right, because the hypothesis is that we need access to
the 48-bit instruction set for efficient use of the hardware.
There ought to be standard guidance for implementors wishing to do
these things, if extended integer types are intended to be
"industrial-strength" facilities of the language.
Other questions include: what promotes to |int48_t|? What does
|uint16_t + int48_t| promote to?
(I suppose ideally I want to write
int_t:48 x, y;          int_t:48 i, j;
x = abs:48 (y);         div_t:48 d;
                        d = div:48 (i, j);
or something equivalent. Do you reckon this will get through for the
Final Draft International Standard? [irony])
> But then the application program really ought to be able to
> define its own generic-type functions, too, nicht wahr?
Well, of course---why not? :-)
> > *everyone would be happy*!!
>
> I think it is more likely that everyone would complain at the
> extent of changes from C89. C is not meant to be a fat, lazy
> language. I would rather see it tautened up.
Quite right too. It should provide access to whatever word-lengths
the hardware provides---*all* of them, in an equally convenient way;
and it should be compatible with C89. A machine that has a 36-bit word
should be allowed to provide |int36_t|, |uint36_t| (with modulo 2^36
arithmetic) *only*, and not be required to emulate 32-bit unsigned
arithmetic or cobble up some 64-bit type just because it's fashionable
in 1998.
> > I hope that the UK position at Santa Cruz will continue to be that a
> > "no" vote is required for a draft that, while professing to be in some
> > sense compatible with the previous one, breaks so much code.
>
> And *I* hope that everyone there will understand that what
> you call "breaking code" doesn't break any code, but opens up
> new possibilities that the previous code could never have exploited.
I really don't see how this often-repeated view can be supported.
However, this seems to be some kind of holy war, where the two camps are
each impervious to the logic of the other.
C'mon, committee, don't just follow any longer! Lead the way!!
Please!
Because those values are far from arbitrary. The facility to provide
power-of-2 sizes is very widely needed. The facility to provide
non-power-of-2 sizes is less widely needed. More to the point,
int_least32_t doesn't have to be a 32 bit type; it can and should be 36
bits on some machines, and it should be implemented using native 36-bit
instructions on those machines.
...
> It seems not so hard to satisfy both sides of the debate
>
> (*) we need a 64-bit type
>
> (*) |long| should be the longest type
>
> by spelling the 64-bit type |int64_t| rather than |long long|.
Those are not the two sides of the 'long long' debate. Take a look at
previous messages using DejaView. Both sides were willing to allow 64
bit types. The two sides are:
1. 'long' should remain the longest type; therefore it must be at least
64 bits for any implementation that supports a 64 bit type.
2. 'long' should be allowed to remain 32 bits, for backward
compatibility, and there should be a standardized way to use a
64 bit type, preferably using the de-facto standard name
'long long'.
Those two sides are fundamentally incompatible, even if you ignore the
clause starting with 'preferably'. Your proposed solution can't satisfy
both of them.
> [...] The facility to provide
> power-of-2 sizes is very widely needed. The facility to provide
> non-power-of-2 sizes is less widely needed.
My point was that this should be left to the implementor, not the
language standard. An implementation for hardware that has power-of-2
word sizes should clearly provide power-of-2 integer types; but why
should others?
> > It seems not so hard to satisfy both sides of the debate
> >
> > (*) we need a 64-bit type
> >
> > (*) |long| should be the longest type
> >
> > by spelling the 64-bit type |int64_t| rather than |long long|.
>
> Those are not the two sides of the 'long long' debate. Take a look at
> previous messages using DejaView. Both sides were willing to allow 64
> bit types. The two sides are:
>
> 1. 'long' should remain the longest type; therefore it must be at least
> 64 bits for any implementation that supports a 64 bit type.
>
> 2. 'long' should be allowed to remain 32 bits, for backward
> compatibility, and there should be a standardized way to use a
> 64 bit type, preferably using the de-facto standard name
> 'long long'.
>
> Those two sides are fundamentally incompatible, even if you ignore the
> clause starting with 'preferably'. Your proposed solution can't satisfy
> both of them.
You're right, and of course I was oversimplifying, in my optimistic
way.
But I would claim that the incompatibility isn't "fundamental".
Would it be *so* bad to ask people who have been using |long| to mean
"a 32-bit type" in programmes where a 64-bit type is provided to
change |long| to |int32_t|?
Maybe it would. But it seems much easier than changing all those
interfaces that assume "if |int| isn't long enough, |long| will do" by
adding extra functions to the interface definition for |long long| as
well. |lldiv|, |llabs|, etc, are not the prettiest names in the world,
and surely it does not take a genius to recognise that by going down
this road the committee is committing itself to adding |llldiv|,
|lllabs| in 5 years time, |lllldiv|, |llllabs| 5 years after that, and
so on, on a never-ending treadmill.
Better to break the cycle now, before it starts, and solve all
these problems immediately with a generic library specification that
can handle any number of extended integer types.
They shouldn't. That's why the exact-length types are optional.
> > ... Both sides were willing to allow 64
> > bit types. The two sides are:
> >
> > 1. 'long' should remain the longest type; therefore it must be at least
> > 64 bits for any implementation that supports a 64 bit type.
> >
> > 2. 'long' should be allowed to remain 32 bits, for backward
> > compatibility, and there should be a standardized way to use a
> > 64 bit type, preferably using the de-facto standard name
> > 'long long'.
> >
> > Those two sides are fundamentally incompatible, even if you ignore the
> > clause starting with 'preferably'. Your proposed solution can't satisfy
> > both of them.
>
> You're right, and of course I was oversimplifying, in my optimistic
> way.
>
> But I would claim that the incompatibility isn't "fundamental".
> Would it be *so* bad to ask people who have been using |long| to mean
> "a 32-bit type" in programmes where a 64-bit type is provided to
> change |long| to |int32_t|?
According to them, yes. There is tons of legacy software with this
non-portable assumption built in, that would need to be converted. I've
little sympathy for writers of such code, but what they lack in
justification, they make up for with sheer numbers.
For what it is worth, I think that the problem started way back when we
first allowed built-in types to need two (or more) keywords to specify
them. I know all the rationale, but it results in confusion.
For example, unsigned long long (with optional int, or is it optional
when we have thrown away implicit int?) is a perfectly acceptable
implementation defined type, but we really should be able to provide it
as a typedef or #define of a real type. The syntax of the language does
not allow us to do that. I would have little problem with:
#define |long long| int64_t
Well, I would not want _t, but I cannot even start a discussion about such
an option. (No, I do not want to waste time and mental effort discussing
some hypothetical syntax we might have had, just to point out that we
have painted ourselves into a corner)
I do not think that there is any way to resolve this issue with a
compromise. The general view does not seem unreasonable but none the
less we have a perfectly reasonable (I know him) UK expert who maintains
that the introduction of long long as in C9X will require him to check
many hundreds of thousands of lines of code. Those that assert that he
need not do so have to convince him that mission critical code for which
his employer holds him responsible really will not do anything
unexpected in the context of C9X.
Unfortunately, trying to meet some self imposed deadline for delivery
does not lead to patient explanation based on listening to an
individual's concerns. We nearly had a stand-off in C++ (re auto_ptr)
until, at the last moment, the two sides realised that there was a
technical issue and that it might be solvable. I can see no way to get
a similar resolution of the impasse re long long.
Francis Glassborow Chair of Association of C & C++ Users
64 Southfield Rd
Oxford OX4 1PA +44(0)1865 246490
All opinions are mine and do not represent those of any organisation
No, and please don't put words in my mouth.
I actually support having a general integer width specification
mechanism,
but don't want it to be constrained to follow the <inttypes.h> model.
I also don't want it to be a half-baked kludge whose
deficiencies may soon surface.
> if the facility is not needed, why support it at all, even for the
> arbitrary values 8, 16, 32, 64?
Since I never implied the facility wasn't needed, your question is moot.
The correct reason for supporting a certain handful of values is:
> > There are few architectures with non-power-of-two native integer
> > widths.
> To restate exactly this in the positive: "some architectures have
> non-power-of-two native integer widths". These statements are
> logically identical: ...
No, they are not. English is more expressive than whatever logical
language you think it can be mapped into. I am sure that well over
99% of all C application code will never need to compile for a non
power-of-two wordsize architecture. Therefore, we are able to
*well* serve the vast majority of applications with a *modest*,
existing, proven mechanism. That makes it worth doing. Yes, value
judgments are required, and they were not hard to make in this case.
> It's possible I might, though of course I have no first-hand
> knowledge of what has had committee attention and what has not.
I was referring to previous newsgroup discussion that you missed.
> It seems not so hard to satisfy both sides of the debate ...
> by spelling the 64-bit type |int64_t| rather than |long long|.
That has been previously suggested, debated, etc.
You conveniently ignored two issues:
"long long" *already exists* (and lacks standardization);
a mandatory at-least 64-bit integer is needed (and needs a standard
name so that sufficient support can be specified).
> The solutions
> > Convert to |long long int|, do the operation, and convert back.
> or
> > Just divide using the |/| operator; it is specified for *all* integer
> > types
> are not right, because the hypothesis is that we need access to
> the 48-bit instruction set for efficient use of the hardware.
I took that into account, and those solutions are of a form
that a compiler can map into efficient native code.
I will say that you're right that the [u]int*_t types are treated
as second-class citizens with respect to the library functions.
A really good solution to that would involve either true type-generic
functions (which apparently was *not* what you meant) or else
a radical change to the type system. Both have been suggested
and discussed. Reasons have been given in this and other
newsgroups as to why the C9x choices were made.
> (I suppose ideally I want to write
> int_t:48 x, y;        int_t:48 i, j;
> x = abs:48 (y);       div_t:48 d;
>                       d = div:48 (i, j);
> or something equivalent. Do you reckon this will get through for the
> Final Draft International Standard? [irony])
As I said before, this kind of approach has been proposed and
thoroughly debated. As you seem to indicate, it would have been
most improbable that such massive changes to a fundamental area
of the language could be gotten right within the time constraints
we face for the current revision of the standard. However, if
you want to explore such mechanisms and their ramifications in
order to contribute a better proposal for the *next* revision,
several committee members are interested in considering it.
> Quite right too. It should provide access to whatever word-lengths
> the hardware provides---*all* of them, in an equally convenient way;
> and it should be compatible with C89. A machine that has a 36-bit word
> should be allowed to provide |int36_t|, |uint36_t| (with modulo 2^36
> arithmetic) *only*, and not be required to emulate 32-bit unsigned
> arithmetic or cobble up some 64-bit type just because it's fashionable
> in 1998.
The C9x draft does not require that a 36-bit word based implementation
emulate 32-bit operations.
> > And *I* hope that everyone there will understand that what
> > you call "breaking code" doesn't break any code, but opens up
> > new possibilities that the previous code could never have exploited.
> I really don't see how this, often-repeated, view can be supported.
Well, that's the problem. I certainly understand what the *other*
camp sees as the problem. But their code will continue to work on
all the platforms it currently does when the C9x compiler upgrades
come out. It will even work okay on platforms on which they never
could have portably encountered the troublesome object sizes,
so long as they don't come across such huge objects.
And C9x provides a mechanism that will work for all object sizes on all
platforms; a simple edit and they will never again have this problem.
Actually, it's externally imposed by the requirements for periodic
review and revision or reaffirmation of the standard.
To do this anywhere near the required frequency,
we need to spend about 5 years in maintenance of the current
standard then 5 years in preparation of the next revision.
> code [that assumes "long" is the longest integer type] will continue to work
> on all the platforms it currently does when the C9x compiler upgrades
> come out. It will even work okay on platforms on which they never
> could have portably encountered the troublesome object sizes,
> so long as they don't come across such huge objects.
This is incorrect, since large size_t values are used in portable code
even when huge objects aren't encountered. For example, many functions
-- some even in the C standard, if I recall correctly -- use
((size_t) -1) to denote special cases and error values. It's not
uncommon for conforming C89 code to print and read sizes using unsigned
long formats. Such code will break in C9x if size_t is longer than
long, even if no huge objects are encountered, because it will mishandle
((size_t) -1).
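(The failure can be reproduced with fixed-width stand-ins: uint64_t here plays a 64-bit size_t and uint32_t a 32-bit unsigned long; the names are illustrative only:)

```c
#include <stdint.h>

/* Stand-ins: wide_size_t models a 64-bit size_t, narrow_ulong models
   a 32-bit unsigned long. */
typedef uint64_t wide_size_t;
typedef uint32_t narrow_ulong;

/* C89-style code that funnels a size through "unsigned long": */
static int is_error(wide_size_t n)
{
    narrow_ulong ul = (narrow_ulong)n;          /* lossy when size_t
                                                   is wider */
    return (wide_size_t)ul == (wide_size_t)-1;  /* sentinel test */
}
```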
This problem could have been avoided had C9x required that all C89
types (including size_t) be no longer than long. I don't know why this
compromise was not adopted by the committee. It allows long long
(which is what the long long camp wants), while not breaking existing
strictly conforming code (which is what the conservative camp wants).
The problem with that solution is that there is a large body of code out
there which assumes (unportably) that 'long' is 32 bits, and that 'long
long' is 64 bits. People who don't want to go to the enormous effort
of re-writing all that code successfully argued in favor of a
specification that allows a conforming implementation to support it.
I personally don't like catering to badly written code, no matter how
pervasive it is, at the expense of a strictly conforming construct, no
matter how rare it is. But then I'm not exactly the most pragmatic
person in the world.
And the converse problem is that there is an equally large body of
conforming and portable code that is broken by C9X. However, let
us assume that the hackers have won the day.
What really concerns me is that any attempt to raise that issue is
met with flat denial, evasions and even abuse. I am going to make
some proposals for migration facilities, but am not optimistic that
they will even be considered.
If they aren't, I hope that C89 will be re-ratified by ISO, whether
or not C9X gets accepted. And I really DON'T want two C languages,
but the currently proposed alternative is worse :-(
Regards,
Nick Maclaren,
University of Cambridge Computing Service,
New Museums Site, Pembroke Street, Cambridge CB2 3QG, England.
Email: nm...@cam.ac.uk
Tel.: +44 1223 334761 Fax: +44 1223 334679
No, and wrong. Both make the requirement that a char is at *least* 8 bits
wide.
> but it seems to me
>that in requiring the types
> int_least8_t
> int_least16_t
> int_least32_t
> int_least64_t
>*but no others*, the new C standard is guilty of a new form of
>discrimination.
Yes, but only a minor one. The same as it's discriminating to make the
minimum sizes of various types 8, 16, 32, and 64 bits.
> In general, it seems to me that C9x does a very half-hearted job of
>supporting extended integer types,
Agreed. They were clearly a second-best solution because we couldn't get
the best into a state everyone was happy with.
> For example, suppose I am writing for an implementation that
>supplies an int48_t and I write
> int48_t i;
>
>It seems I can print this value by using
> printf ("i = %" PRId48 "\n", i);
You should be able to, though in fact there's no such requirement in
CD2. I've submitted a paper to fix this.
>which is fair enough; but how do I take its absolute value? How do I
>divide it by another such, to get a quotient and remainder? No
>Standard words help me or my implementor.
> Shouldn't the whole library should be "genericised",
Perhaps, but this is by no means trivial, and no one got round to
proposing it.
>"long" could be once more defined as the longest
>type, "long long" deleted, and
> *everyone would be happy*!!
*Wrong*. Won't satisfy the "I want long = int32_t *and* a 64 bit type"
crowd.
--
Clive D.W. Feather | Regulation Officer, LINX | Work: <cl...@linx.org>
Tel: +44 1733 705000 | (on secondment from | Home: <cd...@i.am>
Fax: +44 1733 353929 | Demon Internet) | <http://i.am/davros>
Written on my laptop; please observe the Reply-To address
That depends on what types you have around, and what the conversion rank
of int48_t is.
>What does
>|uint16_t + int48_t| promote to?
Again depends. But for any one implementation it is easy to determine,
using the rules for conversion rank and promotion in the draft.
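(The same rules can be watched with the mandated types; a sketch assuming the common case where int is wider than 16 bits, so uint16_t promotes to signed int:)

```c
#include <stdint.h>

/* On implementations where int is wider than 16 bits (the common
   case), uint16_t promotes to signed int, so this subtraction is
   performed in signed arithmetic and goes negative. */
static int promotes_to_signed(void)
{
    uint16_t u = 0;
    return (u - 1) < 0;   /* 1 on such implementations */
}
```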
> (I suppose ideally I want to write
>
> int_t:48 x, y;        int_t:48 i, j;
> x = abs:48 (y);       div_t:48 d;
>                       d = div:48 (i, j);
>
>or something equivalent. Do you reckon this will get through for the
>Final Draft International Standard? [irony])
We've been there and run away screaming.
>A machine that has a 36-bit word
>should be allowed to provide |int36_t|, |uint36_t| (with modulo 2^36
>arithmetic) *only*, and not be required to emulate 32-bit unsigned
>arithmetic or cobble up some 64-bit type just because it's fashionable
>in 1998.
It is, except that it needs to produce a 72-bit type.
I don't see how that suddenly starts not working under C9x.
> This problem could have been avoided had C9x required that all C89
> types (including size_t) be no longer than long. I don't know why this
> compromise was not adopted by the committee.
That has been explained so many times that I am not going to do so
again.
> The problem with that solution is that there is a large body of code out
> there which assumes (unportably) that 'long' is 32 bits, and that 'long
> long' is 64 bits. People who don't want to go to the enormous effort
> of re-writing all that code, successfully argued in favor of a
> specification that allows a conforming implementation to support it.
Of course, there's probably a much larger body of code that expects
that a pointer will fit in a 'long'...
I think someone else has already proposed two language specifications,
one that requires that 'long == 32 bits' and one that requires that
'long is the longest type'. (Then the first specification could be
rejected as too ugly, and we could keep the second one :-).
--
Geoff Keating <Geoff....@anu.edu.au>
>Paul Eggert wrote:
>> This problem could have been avoided had C9x required that all C89
>> types (including size_t) be no longer than long. I don't know why this
>> compromise was not adopted by the committee. It allows long long
>> (which is what the long long camp wants), while not breaking existing
>> strictly conforming code (which is what the conservative camp wants).
>The problem with that solution is that there is a large body of code out
>there which assumes (unportably) that 'long' is 32 bits, and that 'long
>long' is 64 bits. People who don't want to go to the enormous effort
>of re-writing all that code, successfully argued in favor of a
>specification that allows a conforming implementation to support it.
But you haven't contradicted the compromise that I mentioned.
An implementation with 32-bit long and 64-bit long long would conform
to this compromise standard, so long as it didn't define size_t to be
unsigned long long.
Do you know of any implementations where longs are 32 bits but size_t's
are 64 bits? I don't. A lot of widely used code would break on them.
Shouldn't C9x disallow longer-than-long size_t in support of such code,
just as it allows `long long' in support of the large body of code that
you mention?
>Paul Eggert wrote:
>> ... Such code will break in C9x if size_t is longer than
>> long, even if no huge objects are encountered, because it will mishandle
>> ((size_t) -1).
>I don't see how that suddenly starts not working under C9x.
Here's an example. Suppose a program prints values to a text file and
later reads them back in, and that it's essential for correctness that
the program reads back exactly the same values that it printed.
The values to be printed include some size_t values. Some of the
size_t values equal (size_t)-1; they are used as placeholders.
To print a size_t value, the program casts it to unsigned long and then
prints it with the "%lu" format. To read a size_t value, the program
applies strtoul to the character representation.
This program works correctly on a C89 implementation. It will
break on a C9x implementation where size_t is longer than long,
because when it prints (size_t)-1 and then reads it back in again, it
gets (size_t)(unsigned long)(size_t)-1, and this isn't the same value
as the original. For example, if size_t is 64 bits but unsigned
long is 32 bits, then the program starts with (size_t)-1 == 2**64 - 1 ==
18446744073709551615, prints it out and reads it back in again, and ends
up with 2**32 - 1 == 4294967295. So the placeholder values will become
corrupted.
>> This problem could have been avoided had C9x required that all C89
>> types (including size_t) be no longer than long. I don't know why this
>> compromise was not adopted by the committee.
>That has been explained so many times that I am not going to do so
>again.
Really? I don't recall the explanation of this particular point.
I went back and looked at these discussions via Dejanews, and didn't
see an explanation.
I did find plausible explanations for why some non-C-standard types
(e.g. off_t) might be longer than long. But I never found an
explanation of why the standard should allow C standard types like
size_t to be longer than long.
I don't know of any implementations where size_t is longer than long,
though I do know of many implementations with longer-than-long integers.
A lot of code that I'm familiar with will break (in sometimes subtle
ways) if size_t is longer than long. Converting it to conform to C9x,
just in case there happens to be an implementation which actually has a
longer-than-long size_t, will be a lot of stupid makework, as far as I
can see. In our organization, I'll recommend that we simply avoid
such implementations, if anyone is silly enough to build them.
And that is the crux of the problem.
I have no doubt that there is a lot of code out there that relies on
using long long. Of course none of it is currently conforming but that
does not worry me.
We are also assured that there is a (large) body of code out there that
is written assuming that (u)long is the largest type. That assumption
was put to the test with a DR. The owners (and users) of this code
assure us that if size_t can refer to a type bigger than unsigned long
their code may have logical errors. Much of this code is mission
critical and so must be manually checked if the change is made.
The people responsible for this code (in the nature of things a
minority, because most do not have to maintain mission critical code)
are asking for some help.
It seems to me that the very least they could expect by way of a
migration path is a macro that they could use to determine if an
implementation of C9X uses a type larger than (u)long for any of the C89
standard typedefs (size_t, ptrdiff_t, time_t etc.)
It would be ironical if, at the time that the World was suffering from
the consequences of the Y2K problem, we slung in another requirement for
checking all the code in a language that had previously been relatively
free of this precise problem.
It is no good asserting that conforming code will continue to work; we
have clear examples of not-unreasonable techniques that will fail.
>In article <360ea454...@news.pathcom.com>, Peter Curran
><pcu...@acm.gov> writes
>>The finality was further verified when committee members reported that
>>they would like the decision, apparently passed at a poorly attended
>>meeting, to be reconsidered, but for some reason it could not be,
>>although the minutes show many other decisions were reconsidered.
>I believe you may have misunderstood. It *has* been reconsidered on more
>than one occasion, but in every case the majority was to retain it.
I may well have misunderstood, and frankly this is something I would
like to understand better. In the above quotation, I summarised my
understanding of what Peter Seebach reported, and which I understood
you substantially agreed with. (This was, of course, well before CD1,
when everything becomes open for discussion again.) This situation
made very little sense to me, but it certainly was what I understood
was the case. I recall Peter saying he would strongly support
reopening the discussion, but he could not get it done.
--
Peter Curran pcu...@acm.gov (chg gov=>org)
>By contrast, the C89 committee was self-confident enough to invent its
>way out of the various impasses that it came upon: prototypes,
>trigraphs, wide-character handling (in N A 1).
I think prototypes were adopted from C++, rather than being invented
by the C89 committee.
--
Fergus Henderson <f...@cs.mu.oz.au> | "I have always known that the pursuit
WWW: <http://www.cs.mu.oz.au/~fjh> | of excellence is a lethal habit"
PGP: finger f...@128.250.37.3 | -- the last words of T. S. Garp.
> "Douglas A. Gwyn" <DAG...@null.net> writes, inter alia:
>
>> Jonathan Coxhead wrote:
>>
>> > ... why can I not write
>> > int_least20_t j;
>> > and get whatever's provided that fits my needs?
>>
>> Basically, because it would make an already bulky header
>> vastly bulkier to support more than a handful of values "n".
>
> You are arguing that a facility is not needed because it cannot be
>supported easily by the framework that has been proposed for it; but
>if the facility is needed, something can be invented to provide it; if
>the facility is not needed, why not support it at all, even for the
>arbitrary values 8, 16, 32, 64?
>> There are few architectures with non-power-of-two native integer
>> widths.
>
> To restate exactly this in the positive: "some architectures have
>non-power-of-two native integer widths".
No, that's a different statement. A statement whose meaning is much
closer to the original is the following: "Not many architectures have
non-power-of-two native integer widths". Note that "not many" has a
quite different meaning to "some": the former imposes an upper bound,
albeit a fuzzy one, whereas the latter imposes no such upper bound.
>If you
>think I haven't been following all the threads here about |long long|
>though, you are mistaken, as they have been receiving my closest
>attention.
>
> It seems not so hard to satisfy both sides of the debate
>
> (*) we need a 64-bit type
>
> (*) |long| should be the longest type
>
>by spelling the 64-bit type |int64_t| rather than |long long|.
You missed the "we should allow long to remain 32 bits" side of the debate;
your suggestion doesn't satisfy that requirement.
Perhaps you should try re-reading all those threads ;-)
>A machine that has a 36-bit word
>should be allowed to provide |int36_t|, |uint36_t| (with modulo 2^36
>arithmetic) *only*,
>and not be required to emulate 32-bit unsigned arithmetic
Well, there's no requirement on implementations to provide `int32_t' (etc.)
they only have to provide `int_least32_t' (etc.), and this can easily
be done on a 36-bit word machine, like so:
typedef int36_t int_least32_t;
>or cobble up some 64-bit type just because it's fashionable in 1998.
Implementations with 36 bit words don't need to cobble up an `int64_t';
they only need to cobble up an `int_least64_t', and this can easily be
done, like so:
typedef int72_t int_least64_t;
I hope this makes things clearer.
>Paul Eggert wrote:
>...
>> This problem could have been avoided had C9x required that all C89
>> types (including size_t) be no longer than long. I don't know why this
>> compromise was not adopted by the committee. It allows long long
>> (which is what the long long camp wants), while not breaking existing
>> strictly conforming code (which is what the conservative camp wants).
>
>The problem with that solution is that there is a large body of code out
>there which assumes (unportably) that 'long' is 32 bits, and that 'long
>long' is 64 bits. People who don't want to go to the enormous effort
>of re-writing all that code, successfully argued in favor of a
>specification that allows a conforming implementation to support it.
The main issue is *not* badly written code that assumes that `long' is 32
bits; the issue is binary compatibility.
It may well be that you wouldn't need to rewrite a single line of code,
assuming you recompile everything; but nevertheless the task of recompiling
everything all at the same time may be impossibly difficult. You
might not have source code, there may be many different vendors
involved, etc.
Furthermore, there's lots of code that uses non-portable binary file
formats; such code typically assumes only that `sizeof(long)' remains
constant, not that `sizeof(long) == 4'. So even if you could recompile
everything at once, that wouldn't be enough; you may also need to convert
all the existing binary files to new formats.
>I personally don't like catering to badly written code, no matter how
>pervasive it is, at the expense of a strictly conforming construct, no
>matter how rare it is.
The main point is preserving backwards binary compatibility, not catering
to badly written code.
>Would it be *so* bad to ask people who have been using |long| to mean
>"a 32-bit type" in programmes where a 64-bit type is provided to
>change |long| to |int32_t|?
That's not the main issue. The main issue is binary backwards compatibility.
Many programs don't assume that `long' is exactly 32 bits but would
nevertheless break if the size of `long' were changed, either
(a) simply because they must interface with other programs and libraries
that use `long' and cannot easily be recompiled, or
(b) because they must interface with binary data files and they assumed
that the size of `long' wasn't going to change. (Note that assuming
that the size of `long' isn't going to change is not the same as
assuming that that `long' is exactly 32 bits.)
> I did find plausible explanations for why some non-C-standard types
> (e.g. off_t) might be longer than long. But I never found an
> explanation of why the standard should allow C standard types like
> size_t to be longer than long.
>
> I don't know of any implementations where size_t is longer than long,
> though I do know of many implementations with longer-than-long integers.
>
> A lot of code that I'm familiar with will break (in sometimes subtle
> ways) if size_t is longer than long. Converting it to conform to C9x,
> just in case there happens to be an implementation which actually has a
> longer-than-long size_t, will be a lot of stupid makework, as far as I
> can see. In our organization, I'll recommend that we simply avoid
> such implementations, if anyone is silly enough to build them.
I think it should be quite easy for a compiler manufacturer to make the
type of size_t depend on a compiler option. The only situation where it
matters in the compiler is during evaluation of a "sizeof" expression. If
the size of an int is for example three bytes, then sizeof(int) must
produce EXACTLY the same as one of 3u, 3ul, or ((unsigned long long) 3),
depending on which type is used for sizeof; that should be easy to handle
in the compiler. And the standard library headers must get the type of
size_t right, that is all.
I cannot see any reason why one would want size_t > unsigned long. I would
bet that any real code that breaks because unsigned long > 32 bit will
break if size_t > 32 bit, so if the compiler manufacturer keeps unsigned
long = 32 bit for compatibility, they MUST keep size_t at 32 bit.
On the other hand, I cannot imagine an architecture where size_t needs to
be so large that it becomes inefficient, like an architecture that can
handle object sizes up to 2^64 byte but cannot handle 64 bit integers
efficiently. But if size_t is efficient, there is no reason not to make
unsigned long as large as size_t.
No, I don't know of such implementations, but the people who argued
against me in favor of 'long long' said that extended implementations of
C89 with those features already exist. They have 32 bit 'long', and 64
bit 'long long', pointers, size_t, and ptrdiff_t. They also have the
ability to create objects whose total size is sufficient to require more
than 32 bits to store all the different possible addresses. I see no
reason to doubt those people's honesty. Your proposal would prevent a
conforming implementation from supporting the code they've already
written, without a major re-write that they'd be very unhappy about.
I am sorry, but the only correct response to this is "nonsense".
Dammit, I have personal experience of several such changes, as a
user, as an advisor/debugger and as a consultant to the implementors.
It isn't hard to maintain binary compatibility and change the 'word'
size, if you plan it and do it competently.
However, I admit that you must have control over the linker etc.
to do so, and there are certain (relatively rare) problems that are
extremely nasty - whether you decide to tackle those problems
depends on your priorities.
Ideally you want to have to name the sizes in the function calls
even though the compiler should be able to tell? That's a pretty
grim ideal, IMHO: change the type of i or j and you have to change
all the functions you call...
--
To reply by mail, please remove "mail." from my address -- but please
send e-mail or post, not both
And it would be quite easy for WG14 to require a macro that indicated
whether the standard typedefs might use types larger than long.
Unfortunately sizeof(long)>=sizeof(size_t) is not guaranteed to work.
What those maintaining mission critical code need is a device that
guarantees that no idiot can successfully compile the code with a
compiler that is using 'large' versions of size_t, ptrdiff_t etc.
Beyond that, I believe even int32_t can be directly supported.
Since padding bits are allowed in integer types, and the `width'
is just the value bits, and since signed overflow is undefined,
I think you could just declare the native 36 bit signed type
as int32_t and be conforming. And since I fully expect lots of
programmers to use int32_t where they really mean int_least32_t
there will be pressure on implementors to declare these not-so-
exact types to allow more source to compile with their compiler.
I think one could even argue that the implementors are _obliged_
to declare these types because of 7.18(#4).
Obviously, this would not work with uint32_t, and direct compiler
support would be required.
This is a bullshit claim.
If it were true, there would never have been a C compiler for the
(8-bit) Z-80, because the processor couldn't handle 16-bit "short"
efficiently, much less "int" and "long".
And in the Real World, we *need* larger-than-40-bit size_t and off_t
*right*now*. It is a Pain In The Ass (tm) to have to deal with dinky
little systems that don't support multi-gigabyte objects. (Note
that even WindowsNT has 64-bit file sizes, although the functionality
to access them is broken because of lame excuses like the above.)
Douglas Gwyn writes:
> Jonathan Coxhead wrote:
>
> The correct reason for supporting a certain handful of values is:
> > > There are few architectures with non-power-of-two native integer
> > > widths.
>
> > To restate exactly this in the positive: "some architectures have
> > non-power-of-two native integer widths". These statements are
> > logically identical: ...
>
> No, they are not. English is more expressive than whatever logical
> language you think it can be mapped into.
Logic was invented as a tool to support reasoning. If you reject
logic, you reject reason itself, and also the whole of mathematics and
science. Debate becomes impossible.
I suspect the first logicians, c2000 B C or whenever, would have
recognised the equivalence of these statements, which is a result of
the scientific-reductionist interpretation of the world, to which I
fully subscribe.
Douglas's subsequent line (his attempt to prove that the statements
are not equivalent) was
> I am sure that well over
> 99% of all C application code will never need to compile for a non
> power-of-two wordsize architecture.
The figure is irrelevant to the point I was making. Even **1**
architecture with a non-power-of-2 integer width proves both
statements (the positive and the contrapositive).
> Therefore, we are able to
> *well* serve the vast majority of applications with a *modest*,
> existing, proven mechanism. That makes it worth doing. Yes, value
> judgments are required, and they were not hard to make in this case.
This makes more sense.
A more relevant way of rephrasing
There are few architectures with non-power-of-two native integer
widths.
would therefore be
Those architectures with non-power-of-two native integer
widths need less support from the Standard.
I feel better now :-)
> I will say that you're right that the [u]int*_t types are treated
> as second-class citizens with respect to the library functions.
> A really good solution to that would involve either true type-generic
> functions (which apparently was *not* what you meant) or else
> a radical change to the type system.
I wasn't ruling it out. Once there was a generic library, providing
all of e g, |abs16()|, |abs32()|, |abs48()|, |abs64()| (as
appropriate), an obvious next step would be to make a polymorphic
version (as has already been done for floating-point values in
|<tgmath.h>|) with names like |tgabs()| or something.
> Both have been suggested
> and discussed. Reasons have been given in this and other
> newgroups as to why the C9x choices were made.
These are the reasons I referred to: mostly based on pragmatism
and politics.
> > (I suppose ideally I want to write
> > int_t:48 x, y; int_t:48 i, j;
> > x = abs:48 (y); div_t:48 d;
> > d = div:48 (i, j);
> > or something equivalent. Do you reckon this will get through for the
> > Final Draft International Standard? [irony])
>
> As I said before, this kind of approach has been proposed and
> thoroughly debated. As you seem to indicate, it would have been
> most improbable that such massive changes to a fundamental area
> of the language could be gotten right within the time constraints
> we face for the current revision of the standard.
Exactly what I was trying to say.
Maybe the question is simply, do we want it now, or do we want it
right?
> > > And *I* hope that everyone there will understand that what
> > > you call "breaking code" doesn't break any code, but opens up
> > > new possibilities that the previous code could never have exploited.
> > I really don't see how this, often-repeated, view can be supported.
>
> Well, that's the problem. I certainly understand what the *other*
> camp sees as the problem. But their code will continue to work on
> all the platforms it currently does when the C9x compiler upgrades
> come out. It will even work okay on platforms on which they never
> could have portably encountered the troublesome object sizes,
> so long as they don't come across such huge objects.
> And C9x provides a mechanism that will work for all object sizes on all
> platforms; a simple edit and they will never again have this
> problem.
The difference here is that this is a *run-time* issue. When you
wrote the programme, you wanted it to work for all objects it could be
handed, and you coded to that assumption. With C9x, there may now be
objects that you can't handle, and you have no way of knowing that.
Doesn't your line of reasoning apply equally well to |gets()|? As
long as the user doesn't supply a line that's too long (a "troublesome
object size"), |gets()| is a perfectly useful function.
Who believes that?
/|
(_|/
/|
(_/
> | (I suppose ideally I want to write
> |
> | int_t:48 x, y; int_t:48 i, j;
> | x = abs:48 (y); div_t:48 d;
> | d = div:48 (i, j);
>
> Ideally you want to have to name the sizes in the function calls
> even though the compiler should be able to tell? That's a pretty
> grim ideal, IMHO: change the type of i or j and you have to change
> all the functions you call...
In FORTRAN terms, these would be the "specific intrinsic
functions". Compile-time support for selecting between them ("generic
intrinsic functions") would, of course, also be useful, and could even
be hacked together within the above framework with no other *language*
changes by writing them as
x = abs:(CHAR_BIT*sizeof y) (y);
d = div:(CHAR_BIT*(sizeof i > sizeof j? sizeof i: sizeof j)) (i, j);
Since this is a bit tedious, standard type-generic facilities (e g,
hidden in |<stdlib.h>|) could be provided too:
#define tgabs(x) (abs:(CHAR_BIT*sizeof (x)) (x))
#define tgdiv(i, j) \
(div:(CHAR_BIT*(sizeof (i) > sizeof (j)? sizeof (i): sizeof (j))) \
(i, j))
I thought this possibility was implicit in what I wrote, but now
I've spelled it out. However, this is not a serious proposal in any
sense, just my attempt to retain some form of academic respectability ...
The first statement says there are 'few' implementations, which implies
a fuzzy indication of numbers. Your 'equivalent' statement implies no
numerical estimates other than n>0. It would be just as accurate if
exactly 100% of all implementations were for non-power-of-two
architectures. Therefore the two statements are not equivalent. Your
statement can be derived from his, but not vice-versa.
...
> A more relevant way of rephrasing
>
> There are few architectures with non-power-of-two native integer
> widths.
>
> would therefore be
>
> Those architectures with non-power-of-two native integer
> widths need less support from the Standard.
"few" is an objective (if fuzzy) statement about numbers. "need less
support" is a subjective judgement which, for some people (not including
Hans Aberg), is a logical conclusion from "few".
The reverse derivation wouldn't work. "need less support" is something
that can be true for many reasons other than "few". Therefore, the
second is more than just a re-phrasing of the first.
>the people who argued
>against me in favor of 'long long' said that extended implementations of
>C89 with those features already exist. They have 32 bit 'long', and 64
>bit 'long long', pointers, size_t, and ptrdiff_t.
Interesting. I'd like to know what implementations these are, and how
much (non-C89) code depends on these implementations.
>Your proposal would prevent a
>conforming implementation from supporting the code they've already
>written, without a major re-write that they'd be very unhappy about.
It wouldn't require a major rewrite. All they'd need is a switch to
turn on the nonconforming extension that they desire. This switch
could be as simple as a #define before including any file -- that's
what Sun does in Solaris 2.6 with its C89 compiler, if you want the
(non-C89) type off_t to be longer than long.
I'd venture a guess that the amount of their code affected is much
smaller than -- and much less widely used than -- the amount of C89
code that depends on size_t being no longer than long.
On Mon, 28 Sep 1998 10:31:55 -0400, James Kuyper <kuy...@wizard.net>
wrote:
>Paul Eggert wrote:
>>
>> "James Russell Kuyper Jr." <kuy...@wizard.net> writes:
>>
>> >Paul Eggert wrote:
>> >> This problem could have been avoided had C9x required that all C89
>> >> types (including size_t) be no longer than long. I don't know why this
>> >> compromise was not adopted by the committee. It allows long long
>> >> (which is what the long long camp wants), while not breaking existing
>> >> strictly conforming code (which is what the conservative camp wants).
>>
[James: "There is a lot of unportable code that assumes long means 32
bits, long long is 64, and want the standard to bless existing
practice"]
>>
[Paul: "So? There is no problem so long as size_t et. al. is no longer
than long. I don't know of any implementations with 32-bit long and
64-bit size_t, do you?"]
>
>No, I don't know of such implementations, but the people who argued
>against me in favor of 'long long' said that extended implementations of
>C89 with those features already exist. They have 32 bit 'long', and 64
>bit 'long long', pointers, size_t, and ptrdiff_t. They also have the
>ability to create objects whose total size is sufficient to require more
>than 32 bits to store all the different possible addresses. I see no
>reason to doubt those people's honesty.
Nor I, or anyone else, I suspect..
> Your proposal would prevent a
>conforming implementation from supporting the code they've already
>written, without a major re-write that they'd be very unhappy about.
Well, yes. But no conforming implementation _currently_ supports the
code they've already written. After all, such code is written in a
language closely resembling but quite different from standard C. They
could continue to support that language (and the programs written in
it) in the manner they do now: outside the blessing of the standard.
Personally, I find Paul's compromise appealing. It breaks no
conforming code, yet provides the means to implement a standard 64-bit
type.
If an implementation needs a 64-bit integer type to represent size, it
should either use a 64-bit unsigned long size_t (i.e., unsigned long
should be 64 bits wide) or use an implementation-defined symbol for
the type and associated operators and library functions (e.g.,
_e_size_t, _e_sizeof, whatever).
All IMHO. Regards,
-=Dave
Just my (10-010) cents
I can barely speak for myself, so I certainly can't speak for B-Tree.
Change is inevitable. Progress is not.
If they were willing to continue to run using a non-conforming
extension, they wouldn't care what the standard says. They want their
code to be compilable by a conforming implementation using strict
ANSI/ISO mode.
>In article <360ea454...@news.pathcom.com>, pcu...@acm.gov (Peter Curran) writes:
>|> Nick Maclaren wrote...
>|>
>|> >And the converse problem is that there is an equally large body of
>|> >conforming and portable code that is broken by C9X. However, let
>|> >us assume that the hackers have won the day.
>|>
>|> I don't really think you can claim this ("equally large"). I haven't
>|> seen any evidence that anyone has a clue how much code would be
>|> affected by either choice in this area, in absolute or relative terms.
>Why do you think that? I do have some idea, because I have looked
>at quite a lot of public-domain code, and am or have been involved
>in the development of a large amount of portable commercial code.
>But I quite agree that "equally large" has to be interpreted in the
>context of a large amount of uncertainty, and major differences in
>perspective.
By "evidence" I am referring to a credible, repeatable study of a
large body of C code - something along the lines of Knuth's study of
Fortran code. Such studies are very difficult to do at all, and
extremely difficult to do well. I've asked a few times whether anyone
had done any studies along these lines w.r.t. long long, etc., with no
response.
I certainly have opinions in this area, similar to yours, based on
plenty of experience. But I know that there are people who don't
entirely agree with these opinions :-) and I don't know of any factual
studies to determine where the truth lies.
The referent of "these" is not clear, but I can tell you what the
current C9x specs say: the enumerated "at-least" types must all
exist, but the "exact" types might or might not exist (in a
conforming implementation). The "fastest" and "widest" types must
also exist.
Well, my understanding is that the first set is bigger than the second.
Yes, the first set is not conforming, while the second is. But
vendors are not really tighted whether theirs users write strictly
conforming code or not; if their users complain because they already
have *too*much* code that assume that "long being at least 32 bits"
instead of "long being the largest integer type", they will have to
handle their complaints.
We can cry all the time we want about this naughty programmers,
but we must issue a solution, and the less costly for the widest
community of programmers, and not only to the (much smallest)
community that always read the DRs and follow comp.std.c regularly.
Also, a *major* task is to issue a new Standard that impeede the
problem to occur again. A crucial point is to *not* allowing
people to believe that long long, or even intmax_t, or even long
using your reasonment, be the 64 bits type in C.
I am not sure this is achieved these days (of course, again, I do
not speak about comp.std.c regulars, but about the widest
community of programmers).
Having said this, I am sympathic to your concern. But I do not think
it is an working solution.
[about code that assumes that long is the biggest]
> Much of this code is mission
> critical and so must be manually checked if the change is made.
I am certain that the code that assumes long is 32 bits is
by far the larger body...
> It seems to me that thevery least they could expect by way of a
> migration path is a macro that they could use to determine if an
> implementation of C9X uses a type larger than (u)long for any of the C89
> standard typedefs (size_t, ptrdiff_t, time_t etc.)
(not tested)
#include <stdint.h>
#include <limits.h>
#if SIZE_MAX > ULONG_MAX
#error This code does not work with the new assertion that long\
is not the largest integer type.
#endif
and similar with ptrdiff_t (or wchar_t) if the need ever arises.
I agree this does not work with time_t, but it did not work with
C90 either (time_t may be long double since C89).
Antoine
Actually, there is if they do provide a type that meets the
requirements, and as I have argued elsewhere, if they have a
type that meets the requirements for int_leastN_t it can also
be used for intN_t (not so for the unsigned versions).
Agreed, but I still do not see that makes a good enough argument
for changing the _source_ language such that it can break existing,
conforming programs without warning.
Clearly, implementations that are already using "long long"
to solve their binary compatibility problems do not mind being
non-conforming (since they are), so why not simply leave them
non-conforming, and make the Standard's solution one that is not
hampered by concerns for binary compatibility? Surely you are
not going to argue that they are looking for a portable way to
maintain binary compatibility?
And yes, I understand the desire to standardize existing practice,
but that does _not_ mean the committee has to standardize _all_
existing practice.
And yes, I realize it's a battle that's almost certainly already
over. :-(
It addresses part of the problem, but not enough for everyone.
--
Clive D.W. Feather | Regulation Officer, LINX | Work: <cl...@linx.org>
Tel: +44 1733 705000 | (on secondment from | Home: <cd...@i.am>
Fax: +44 1733 353929 | Demon Internet) | <http://i.am/davros>
Written on my laptop; please observe the Reply-To address
I think you mean 3ull.
>depending on which type is used for sizeof;
No, it can also be ((uint27_t) 3), or any other unsigned integer type
provided by the implementation.
But what about the non-standard typedefs, such as off_t?
>Unfortunately sizeof(long)>=sizeof(size_t) is not guaranteed to work.
>What those maintaining mission critical code need is a device that
>guarantees that no idiot can successfully compile the code with a
>compiler that is using 'large' versions of Size_t, ptrdiff_t etc.
#include <stdint.h>
#include <limits.h>
#if ULONG_MAX < SIZE_MAX || LONG_MAX < PTRDIFF_MAX
#error This compiler doesn't believe in making large objects easy.
#endif
That looks more like a word than anything else.
-s
--
Copyright 1998, All rights reserved. Peter Seebach / se...@plethora.net
C/Unix wizard, Pro-commerce radical, Spam fighter. Boycott Spamazon!
Seeking interesting programming projects. Not interested in commuting.
Visit my new ISP <URL:http://www.plethora.net/> --- More Net, Less Spam!
By `these types' I meant the intN_t types. My claim is that one (in
this case, I :-) ) can argue that the intN_t (but not uintN_t) types
must be defined for N = 8,16,32,64. This is because any int_leastN_t
type can be used as an intN_t, where any `extra' bits are declared
padding bits. Since the behaviour on signed overflow is undefined, one
can't detect this in a strictly conforming program, and since the
implementation is required to define the intN_t types if it can, and
it always can, it always must.
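As a concrete (and purely hypothetical) sketch of that argument, an implementation whose native int is 36 bits wide could satisfy both typedefs with the same underlying type; every name here is invented for illustration:

```c
/* Hypothetical fragment of a 36-bit implementation's <stdint.h>,
   sketching the argument above (all names invented for illustration).
   The same native type serves both roles: as my_int_least32_t it simply
   has 36 value+sign bits; as my_int32_t its top 4 bits are declared
   padding, giving a width of exactly 32. */
typedef int my_int_least32_t;  /* native 36-bit int: at least 32 bits */
typedef int my_int32_t;        /* same type: 32 value+sign bits,
                                  4 bits declared padding             */
```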
The draft I have, which bills itself in the page footers as
"WG14/N843 Committee Draft -- August 3, 1998", says in 7.18(#4):
For each type described herein that can be declared as a type
existing in the implementation, <stdint.h> shall declare that
type [...]
In 7.18.1.1(#1) Exact-width integer types:
Each of the following types designates an integer type
that has exactly the specified width.
Padding bits in signed integer types are allowed, as per 6.2.6.2(#2):
For signed integer types, the bits of the object
representation shall be divided into three groups: value
bits, padding bits, and the sign bit.
And the width of such an integer is the number of value and sign
bits, as per 6.2.6.2(#4):
The precision of an integer type is the number of bits
it uses to represent values, excluding any sign and padding
bits. The width of an integer type is the same but
including any sign bit; [...]
So any int_leastN_t type that is implemented with a type that
really has width M can be said to be an intN_t type with M-N
padding bits, and since a type then exists in the implementation,
it must be declared.
The only `out' I see for implementors _not_ to provide them is if
one argues that this N-bit wide type with M-N padding bits is not a
`type existing in the implementation'. I honestly can't make a
really strong argument on either side of that one.
Even if implementors _can_ avoid declaring them, I don't see why they
would, since not having the intN_t types is sure to make source more
difficult to port to their implementation, and earn them lots of
grumbling from their users.
Incidentally, this is why I've never seen the point of the exact-
width signed types (since they don't appear to buy anything
over at-least signed types of the same size), and why I would
have preferred a different name scheme for the <stdint.h> types
that didn't encourage the notion that exact-width sized types
were useful.
That said, I can certainly live with what the committee came up with,
since the effect is the same, and it's much better than nothing.
>It seems to me that the very least they could expect by way of a
>migration path is a macro that they could use to determine if an
>implementation of C9X uses a type larger than (u)long for any of the C89
>standard typedefs (size_t, ptrdiff_t, time_t etc.)
That seems to me to be a reasonable suggestion and I would support
such a proposal.
>f...@cs.mu.oz.au (Fergus Henderson) writes:
>|> Jonathan Coxhead <jona...@doves.demon.co.uk> writes:
>|>
>|> >Would it be *so* bad to ask people who have been using |long| to mean
>|> >"a 32-bit type" in programmes where a 64-bit type is provided to
>|> >change |long| to |int32_t|?
>|>
>|> That's not the main issue. The main issue is binary backwards compatibility.
>|> Many programs don't assume that `long' is exactly 32 bits but would
>|> nevertheless break if the size of `long' were changed, either
>|> (a) simply because they must interface with other programs and libraries
>|> that use `long' and cannot easily be recompiled, or
>|> (b) because they must interface with binary data files and they assumed
>|> that the size of `long' wasn't going to change. (Note that assuming
>|> that the size of `long' isn't going to change is not the same as
>|> assuming that that `long' is exactly 32 bits.)
>
>I am sorry, but the only correct response to this is "nonsense".
Is that "nonsense" to (a), (b), or both?
Certainly those programs falling into category (b) are not
the best written programs in the world. However, programs
in this category are common-place, and even strictly conforming
programs can fall into this category.
>Dammit, I have personal experience of several such changes, as a
>user, as an advisor/debugger and as a consultant to the implementors.
>It isn't hard to maintain binary compatibility and change the 'word'
>size, if you plan it and do it competently.
There are many practical difficulties, as you admit below, such as the
need for control over the "linker etc.".
Furthermore, many may simply not wish to do it, for efficiency reasons;
they may need access to a 64 bit type, but they may not want to change
the size of `long' because a larger `long' might increase the memory
usage of their program and/or slow it down.
>However, I admit that you must have control over the linker etc.
>to do so, and there are certain (relatively rare) problems that are
>extremely nasty - whether you decide to tackle those problems
>depends on your priorities.
--
> And it would be quite easy for WG14 to require a macro that indicated
> whether the standard typedefs might use types larger than long.
> Unfortunately sizeof(long)>=sizeof(size_t) is not guaranteed to work.
> What those maintaining mission critical code need is a device that
> guarantees that no idiot can successfully compile the code with a
> compiler that is using 'large' versions of size_t, ptrdiff_t etc.
I think this should do the job, if you write it as the first line in main ():
char useless_array [((unsigned long) -1) >= ((size_t) -1)];
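A file-scope variant of the same trick (a sketch, not tested on any exotic target) uses a typedef whose array size turns negative, a constraint violation every compiler must diagnose, whenever unsigned long cannot hold every size_t value:

```c
#include <stddef.h>

/* Compiles only when unsigned long can represent every size_t value;
   otherwise the array size is -1 and translation must fail. */
typedef char ulong_holds_size_t
    [((unsigned long)-1 >= (size_t)-1) ? 1 : -1];

/* The same condition, available at run time. */
static int ulong_covers_size_t(void)
{
    return (unsigned long)-1 >= (size_t)-1;
}
```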
> In article <christian.bau-2...@christian-mac.isltd.insignia.com>,
> Christian Bau <christ...@isltd.insignia.com> wrote:
> ...
> > On the other hand, I cannot imagine an architecture where size_t needs to
> > be so large that it becomes inefficient, like an architecture that can
> > handle object sizes up to 2^64 byte but cannot handle 64 bit integers
> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> > efficiently. But if size_t is efficient, there is no reason not to make
> ^^^^^^^^^^^
> > unsigned long as large as size_t.
>
> This is a bullshit claim.
>
> If it were true, there would never have been a C compiler for the
> (8-bit) Z-80, because the processor couldn't handle 16-bit "short"
> efficiently, much less "int" and "long".
The Z-80 IMHO is not an architecture that can handle object sizes of 2^64
byte but no 64 bit integers. So much for bullshit.
> And in the Real World, we *need* larger-than-40-bit size_t and off_t
> *right*now*. It is a Pain In The Ass (tm) to have to deal with dinky
> little systems that don't support multi-gigabyte objects. (Note
> that even WindowsNT has 64-bit file sizes (although the functionality
> to access them is broken because of lame excuses like the above.)
Did you bother to read what I wrote? Do you need a larger-than-40-bit
size_t ***on a machine that cannot do 64-bit operations efficiently***? Like
size_t i;
size_t huge_number = ...;
char* p = malloc (huge_number);
for (i = 0; i < huge_number; ++i)
p [i] = 0;
and the code crawls because size_t is bigger than the compiler can handle???
That's not the stated requirement:
> For each type described herein that can be declared as a type
> existing in the implementation, <stdint.h> shall declare that
> type [...]
The existence of the type comes first, then the declaration of
the typedef'ed type.
If one pushed your line of argument to a further extreme,
all the types would have to be declared, because they are
all implementable *some*how. In which case, what would be
the point of the above-cited clause? That illustrates
that that cannot be a proper interpretation of the clause.
Types aren't automatically created just by somebody viewing
other types in a certain way.
> That said, I can certainly live with what the committee came up with,
> since the effect is the same, and it's much better than nothing.
I hope so.
To neither - to the whole concept. Firstly, it doesn't help with
maintaining binary compatibility and, secondly, it isn't needed.
It doesn't work, because the only way that it COULD work is if all
standard types were kept as they were in C89 - and this includes the
system's standard types, like off_t. And, if you are doing that,
it doesn't help with more than an infinitesimal number of programs!
The key requirement is for a linker to be able to detect whether a
call or function was built for a 32- or 64-bit semantic mode. Or
any other differing properties that happen to be relevant. That is
all, and it is quite easy to do with 100% upwards compatibility in
the case of every linker that I have seen. Many vendors have done
it, so successfully that users have not noticed.
You can even handle 90% of mixed-mode programs (i.e. where some code
is compiled for each model) without much hassle, and 99% with some
fair amount of hassle for the implementor. But it STILL remains
hidden from the user.
I agree that this is not obvious to people not familiar with linker
technology.
Regards,
Nick Maclaren,
University of Cambridge Computing Service,
New Museums Site, Pembroke Street, Cambridge CB2 3QG, England.
Email: nm...@cam.ac.uk
Tel.: +44 1223 334761 Fax: +44 1223 334679
However, it fails when ported from a C89 implementation where
size_t is 32 bits long to another where size_t is 64 bits long,
assuming data have to be passed from one implementation to the
other, which is not as foolish as it may appear; in fact, this is
an instance of one of the most common pitfalls encountered when
porting an application to 64 bits...
> It will break on a C9x implementation where size_t is longer
> than long, [...]
Agreed.
> In our organization, I'll recommend that we simply avoid
> such implementations, if anyone is silly enough to build them.
Fair enough. However, I cannot tell whether it is an argument
for or against the inclusion of long long in the Standard...
Antoine
Not if done competently. One standard method of doing this is to
have a header in the data file that specifies the exceptional values
and/or limits, and then the whole thing comes out in the wash. For
example, just start each file by:
fprintf(file,"SIZE_T_MAX = %lu\n",(unsigned long)(size_t)-1);
No references to bit widths, and easily portable from any number of
bits to any larger number. And diagnosable the other way round.
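A minimal sketch of that header technique, with invented function names; note the cast to unsigned long, which the %lu conversion requires:

```c
#include <stdio.h>
#include <stddef.h>

/* Writer records its own SIZE_T_MAX at the start of the data file. */
static void write_size_header(FILE *f)
{
    fprintf(f, "SIZE_T_MAX = %lu\n", (unsigned long)(size_t)-1);
}

/* Reader compares the recorded limit with its own; returns 1 when the
   file's sizes are representable here, 0 on mismatch or parse failure. */
static int check_size_header(FILE *f)
{
    unsigned long theirs;
    if (fscanf(f, "SIZE_T_MAX = %lu", &theirs) != 1)
        return 0;
    return theirs <= (unsigned long)(size_t)-1;
}
```

The reader thus diagnoses the large-to-small direction instead of silently misreading the data.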
|> > It will break on a C9x implementation where size_t is longer
|> > than long, [...]
|>
|> Agreed.
And THAT is the issue.
OTOH a standard is extremely weak if it's not prepared to stand by its
own definitions (C89). By that reasoning there's no need to have a standards
organization at all, as all it would really be doing is telling everyone
what they're already doing. What's the point? What standards organizations
should be for is to protect the form they are standardizing: to work on
generic solutions to problems, so that anything abiding by the rules of the
standard is guaranteed to work on a conforming implementation. In fact,
the ultimate achievement for a standards group is to make itself useless.
I think C9X should be primarily concerned with supporting anything C89 has
defined and only then look at what non-conforming implementations define.
If these are in conflict then conforming implementations should always be
first priority. This is simply so the standards organization can claim
its relevance.
--
- ---------- = = ---------//--+
| / Kristoffer Lawson | www.fishpool.com
+-> | se...@fishpool.com | - - --+
|-- Fishpool Creations Ltd - / |
+-------- = - - - = --------- /~setok/
Actually, it's worse than that. There are several changes in C9X
that have deliberately penalised the authors of conforming programs
(and the developers of conforming implementations) in favour of the
rule breakers.
"long long" is the worst, but the I/O format specifiers are another.
C89 explicitly said that the standard reserved lower-case letters,
and left other characters for extensions. So what does C9X do?
Why, to avoid a lower-case letter, it used one of the characters
that it said could be used for extensions. And, yes, I have written
code that that change will break.
|> I think C9X should be primarily concerned with supporting anything C89 has
|> defined and only then look at what non-conforming implementations define.
|> If these are in conflict then conforming implementations should always be
|> first priority. This is simply so the standards organization can claim
|> its relevance.
I agree, but we have lost that one.
The message that is clearly coming from C9X is that it is far
better to establish a vociferous support camp for your particular
non-conforming construction than it is to conform to the existing
standard. This is good precedent for C0X - so get in there, mad
hackers :-(
>Paul Eggert wrote:
>> To print a size_t value, the program casts it to unsigned long and then
>> prints it with the "%lu" format. To read a size_t value, the program
>> applies strtoul to the character representation.
>>
>> This program works correctly on a C89 implementation.
>However, it fails when ported from a C89 implementation where
>size_t is 32 bits long to another where size_t is 64 bits long,
Not if the file is a temporary file, or is meant to be used only for
that particular platform. This is fairly common in practice. And
there are other examples where it's important to be able to convert
size_t to string and back reliably -- e.g. when one is communicating a
size_t value to an extension language like Tcl that requires a string
representation.
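The round trip being described can be sketched as follows (function name invented); it is exact as long as size_t is no wider than unsigned long, which is precisely the C89 guarantee under discussion:

```c
#include <stdio.h>
#include <stdlib.h>
#include <stddef.h>

/* Convert a size_t to a decimal string and back, the C89-portable way:
   cast to unsigned long for %lu, parse with strtoul. */
static size_t roundtrip_size(size_t n)
{
    char buf[sizeof(unsigned long) * 3 + 2]; /* room for the digits */
    sprintf(buf, "%lu", (unsigned long)n);
    return (size_t)strtoul(buf, NULL, 10);
}
```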
>> In our organization, I'll recommend that we simply avoid
>> [size_t longer-than-long] implementations, if anyone is
>> silly enough to build them.
>Fair enough. However, I cannot tell whether it is an argument
>for or against the inclusion of long long in the Standard...
It's not an argument against `long long'; it is an argument against
allowing implementations to have size_t longer than long.
I understand that some members of the committee feel the need to cater
to such implementations, but so far I haven't seen one cited by name,
and my impression is that they are not economically important --
certainly not important enough to make us go back and check our C code
line by line. If this is true for most people (as I think it is), then
it's misleading and counterproductive for the de jure standard to allow
such implementations, since the de facto standard will disallow them.
My point is that the type _does_ exist first. Whatever type
was used to declare int_leastN_t is an existing type that meets all
the requirements of intN_t.
Note, I don't really care if implementors are or are not required to
do what I've described; I expect many will because it's `free' and it
buys their customers the ability to more easily port code that uses
intN_t.
|If one pushed your line of argument to a further extreme,
|all the types would have to be declared, because they are
|all implementable *some*how.
That's not my argument at all; you're trying to stretch it into
something that it never was. For the uintN_t types, for example, the
implementors would have to make changes to their implementation, so I do
not see how the types can be said in any way to be `existing'. For
the signed versions, however, no change is necessary; all they have
to do is make the typedef. As a user, I could claim that if all that
is necessary is the typedef, then they should have been bound by
7.18(#4) to have included it.
|[...] In which case, what would be
|the point of the above-cited clause? That illustrates
|that that cannot be a proper interpretation of the clause.
Well, to be honest, I think part of the `point' is that it reflects
the on-going misconception that exact-width signed integers are
meaningful as currently defined in the draft (where `exact' can't
guarantee anything). Regardless, and more importantly, it means that
implementors don't have to do extra work in the unsigned case (though
I suspect many eventually will on unusual architectures, to allow
customers to more readily compile source that uses uintN_t).
|Types aren't automatically created just by somebody viewing
|other types in a certain way.
Says who? :-)
Seriously, what in the draft standard prevents it? For example,
on our 36-bit machine, suppose int is a 36-bit type. By the
definitions in the draft concerning width, etc. can you cite
why int is not _also_ a signed integer type with width 32 and 4
padding bits? The only thing I can think of is if you claim the
overflow behaviour is not documented in the implementation-defined
section, but depending on how that description is already worded,
it may cover it.
Indeed. When I read the posting as it arrived back, I was about to say
something sarcastic when I realized it was mine!
That's not his point.
What requirement on int32_t does an existing signed integer type with
width 36 *not* obey ? Given that, there *is* a type existing in the
implementation that has the right properties, so int32_t must be
declared.
> We can cry all we want about these naughty programmers, but
> we must offer a solution, and the least costly one for the widest
> community of programmers, [...]
Yes, but a standardisation committee has to think about the effect of
its decisions on tomorrow's implementations and users, not just
today's.
--
Actually reachable as @free-lunch.demon.(whitehouse)co.uk:james+usenet
Part of the problem is that the great silent majority does not care.
Actually, for most of them, our specific decision on whether a typedef
provided by the standard can be a name for long long is irrelevant
to their work. However, there is a body that does care, and for whom the
choice does matter. There are subtle reasons why the macros that we
intend to provide in C9X are not enough to determine that their code is
not liable to silent damage. These people deserve, at the very least,
consideration and a mechanism that will protect their code. That is why
I propose a specific macro whose definition will declare that an
implementation breaches previous understandings.
Is this too much to ask for? It won't meet all the objections but we
could do with some compromise. Having followed this debate for some
time (in places other than this) I am certain that despite assertions to
the contrary, some programmers will have a problem.
Francis Glassborow Chair of Association of C & C++ Users
64 Southfield Rd
Oxford OX4 1PA +44(0)1865 246490
All opinions are mine and do not represent those of any organisation
You're talking about C9x, right?
This is going to be the reason for rejecting C9x?
But wasn't it already the reason for rejecting C89?
OK, let's restore the context to your answer...
>In article <6uimm7$mpv$1...@shade.twinsun.com>, Paul Eggert
><egg...@twinsun.com> writes
>>This problem could have been avoided had C9x required that all C89
>>types (including size_t) be no longer than long. I don't know why this
>>compromise was not adopted by the committee. It allows long long
>>(which is what the long long camp wants), while not breaking existing
>>strictly conforming code (which is what the conservative camp wants).
So your answer was really the reason for rejecting a partial solution
which would remove some of the breakage that a broken change imposes
on valid programs. But is it a good reason? Wouldn't your answer be
a better reason for rejecting C9x than for rejecting this partial fix?
--
<< If this were the company's opinion, I would not be allowed to post it. >>
"I paid money for this car, I pay taxes for vehicle registration and a driver's
license, so I can drive in any lane I want, and no innocent victim gets to call
the cops just 'cause the lane's not goin' the same direction as me" - J Spammer
Please remind us which format-specifier character that is.
I don't recall such a change.
For each signed integer type there is a corresponding
unsigned integer type having the same width. Thus it is
not possible to pretend that a wider type is actually a
narrower type, because that pretense doesn't work for the
unsigned variety.
'L'.
I don't think that the use of 'L' is a major problem, which is why
I haven't bothered to comment (officially) on the matter, but bending
over backwards to avoid 'm' is quite serious.
I apologise to all concerned with respect to using a letter allocated
for extensions!
The layout of C9X has been improved over C89, and I misread the
latter when checking up whether 'L' was defined in the standard
or was a widespread extension. Yes, it was in C89.
For each signed type there must be an unsigned type that uses up the
same amount of storage. As has already been conceded, the trick that
makes implementing int32_t trivial doesn't work for uint32_t.
Section 6.2.5, paragraph 6 says only that it must take up the same
amount of storage space. Is there another section which requires that it
have the same width? If not, then the signed type is allowed to have
more padding bits than the unsigned one.
I thought we did something about this, or did we just add some extra
rules to ensure, for example, that unsigned int was capable of storing
all the values of unsigned short? (For those who do not know, the
strict rules in C89 do not provide a guarantee of this.)
Not as far as I know.
> ... or did we just add some extra
> rules to ensure, for example, that unsigned int was capable of storing
> all the values of unsigned short. (for those who do not know, the
> strict rules in C89 do not provide a guarantee of this)
C9X does explicitly require that the non-negative values of a signed int
must be a sub-range of the corresponding unsigned int, and must have the
same representation. In particular, it says explicitly that this is
intended to allow interchangeability of the two types, for values in
that range.
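A small sketch of what that interchangeability permits (the function name is invented): reading a non-negative signed int through an unsigned int lvalue is well defined, because the two types share a representation over that range.

```c
/* Valid while *p is non-negative: signed int and unsigned int have the
   same representation for values in their common range, and C permits
   accessing an object through the corresponding signed/unsigned type. */
static unsigned int read_as_unsigned(const int *p)
{
    return *(const unsigned int *)p;
}
```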
Take ISO 646.
The first version (:1968 if memory serves), which suited me best
(as a Frenchman), specified that ~ should be an overline, and $ should
be a kind of universal currency symbol. It also allowed each country
to customize it to suit its national needs, leading to a famous
nightmare when dealing with interoperability.
Then, in 1983 and again in 1991, this Standard was *revised*, and
it became what more or less everybody thought it was (i.e. the
ASCII character set).
Your argument, as I read it, says it should not have been changed,
because it was done one way initially, even though current practice
showed the original design was partly wrong.
> What's the point?
The point is that C89, and K&R before it, overlooked the need
for a (separate) 64-bit integer type in C.
And then implied, wrongly in my view, that long is
both the widest integer type *and* the only one that
guarantees 32 bits.
> I think C9X should be primarily concerned with supporting anything C89 has
> defined and only then look at what non-conforming implementations define.
> If these are in conflict then conforming implementations should always be
> first priority.
Extract from the C89 charter, which has been carried over for C9X:
"1. Existing code is important, existing implementations are not. [...]"
So you appear to contradict the first principle that guides
the Committee. I have no objection to your position, but it should
then be no surprise if you disagree with its outputs.
This explains why arguments are more against the breaking of strictly
conforming programs, than against the breaking of any implementation.
(And yes, I know that quite a big number of members of the Committee
are compilers' vendors).
Antoine
I do not know.
I think that some members are certain there is no real problem
(I repeat, I agree your example might break programs);
they might think such implementations are (very) unlikely.
And some members may be reluctant to destroy the theoretical
construction, which is a thing I see as quite different in principle
from your point of view above (but very similar in its consequences).
Antoine
No, that was already required in C89. Similar requirements were placed
between the signed and unsigned versions of short and long. There is
also the well known requirement that int shall include the entire range
for short and that long includes the entire range for int.
Unfortunately, as we realised when we tried to provide generalised rules
for extended integer types, these C89 rules impose no
requirement that unsigned int include all the values of unsigned
short, nor that unsigned long include the whole range of unsigned int.
This is definitely language law territory but it goes like this:
Suppose you have a machine whose smallest memory allocation is 64 bits.
The designer decides to pack 8 chars per unit (not a problem).
However, for efficiency of access s/he decides that all other integer
types shall use 64 bits, and then elects the following (idiotic) scheme:
        signed   bits used   |   unsigned   bits used
        short        16      |   short          64
        int          16      |   int            48
        long         32      |   long           32
Note that this scheme meets all requirements specified in the C89
standard, yet 'bigger' unsigned types have a 'smaller' range.
Stupid? Of course it is. But we all know what happens when you fail to
dot the 'i's and cross the 't's <G>
Now I am sure we fixed that. (Along with some other bizarre
possibilities spotted by the language lawyers, such as the potential for
undefined behaviour in the following, apparently innocuous call to
printf:
int i;
printf("%s %n %s %n","Undefined", &i, "Behaviour", &i);
It would take malice by the implementer of printf() to cause a problem.
C9X will require that the *printf() functions be implemented so that
there are sequence points between successive writes through %n arguments
and that they take place in the order listed.
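Under that C9X rule, even the doubled %n example above becomes well defined; a sketch (using sprintf so the example produces no output, with an invented function name):

```c
#include <stdio.h>

/* With sequence points between successive %n conversions, storing twice
   through the same int is defined, and the stores occur in order. */
static int final_percent_n_count(void)
{
    int i = 0;
    char out[64];
    sprintf(out, "%s %n%s %n", "Undefined", &i, "Behaviour", &i);
    return i; /* the character count at the second %n */
}
```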
My company's copy of the C89 spec vanished two years ago (I think it was
Schildt's book anyway, so it wasn't a big loss). I just described the
draft requirements; I've no idea whether they were changes.
[Re: an example with unsigned long having fewer bits than unsigned
short]
> Now I am sure we fixed that.
What was the fix? In particular, getting back to the earlier topic, is
there a requirement that the width (as opposed to the storage space)
of an unsigned type be the same as that of the corresponding signed type?
I haven't got my copy of the draft available (and even then searching it
for things that are spread through several clauses is tough) But I am
pretty sure we would never have been so restrictive. Whatever we did
would have been to instate what everyone thought was the case already.
I cannot think that we would have insisted that signed and unsigned
should have the same width. At least one implementation of C used the
same storage for both signed and unsigned int but reduced the width by
one for the latter (i.e. in unsigned, the sign bit was ignored, I think
they also used the same storage for double - ints just ignored the
exponent bits)
> int i;
> printf("%s %n %s %n","Undefined", &i, "Behaviour", &i);
>
>
> It would take malice by the implementer of printf() to cause a problem.
> C9X will require that you implement *printf() functions so that there
> will be sequence points between successive writes to a %n field and that
> they will take place in the order listed.
Are you sure you intended to eliminate printf implementations that
read the string (and, more importantly, the arguments) backward? Or
did the standard already prohibit this?
--
Geoff Keating <Geoff....@anu.edu.au>
I agree with your point above, but I do not understand the "but".
How does it contradict my point above?
Certainly the evaluation of the costs has to cover both today's
*and* tomorrow's effects (at least, this is the way it is viewed in
France; I am sorry if it does not carry the same meaning in English).
Antoine
I am having a problem seeing how such an implementation would generate a
correct sequence of output. If it wasn't already prohibited explicitly,
a close and careful reading -- between the lines -- ...<g>
I'm not saying mistakes shouldn't be fixed - but I don't see defining
long as the largest type as a problem. In fact, some type like this
should exist. It's silly, not to mention ugly, to keep adding new types
every time someone invents a new architecture which can handle larger
data. It becomes a coder's nightmare. The standard must define rules for
types that are abstract enough to be scalable and usable on any platform.
Otherwise there will be no end to new types and definitions -
continuously making the language more complex. Currently it seems that
the standard is only concerned with leaving the problem open for some later
date. I know C has never attempted to be as generic as say Lisp, but I do
think that the language has a problem that is not being solved but
postponed to a later date.
: The point is that C89, and K&R before it, overlooked the need
: for a (separate) 64-bit integer type in C.
I don't think so. C89 never specified that long would be exactly 32 bits
(which would've been a mistake if it did). That allowed long to scale to
larger sizes without having to change the format of the specification.
:> I think C9X should be primarily concerned with supporting anything C89 has
:> defined and only then look at what non-conforming implementations define.
:> If these are in conflict then conforming implementations should always be
:> first priority.
: Extract from C89 charter, which have been extended for C9X:
: "1. Existing code is important, existing implementations are not. [...]"
: So you appear to be contradictory with the first principe that guides
: the Committee. I have no objection to your position, but it should
: be no surprise therefore if you disagree with the outputs.
Not really, as there is existing code for both cases (I have yet to actually
deal with code that uses "long long", but I have seen a bit that assumes
long is the widest type). Some existing implementations use "long long",
and I'm sure those implementations can use "long long" in the future as well,
to allow their existing code to work. Still, it makes sense for them to
allow fully compliant code to use 64-bit longs with a compiler option. I
don't even believe this would be terribly problematic for them. In fact, I
think several implementations already allow 64-bit longs (at least SGI's
compiler uses 64-bit longs in the 64-bit model).
Besides, I would make that particular passage more accurate by saying:
"1. Existing compliant code is most important, existing code is important,
and existing implementations are not" or something ;-)
> : The point is that C89, and K&R before it, overlooked the need
> : for a (separate) 64-bit integer type in C.
>
> I don't think so. C89 never specified that long would be exactly 32 bits
> (which would've been a mistake if it did). That allowed long to scale to
> larger sizes without having to change the format of the
> specification.
In my dreams, Dennis Ritchie makes a posting on this newsgroup that
says, 'Long long must go. This is non-negotiable. It must not be
reworded, reformulated or reinvented.'
But then I wake up ...
/|
(_|/ :-)
/|
(_/
The arguments are all *evaluated* before the function is called.
We want to eliminate the possibility of the specified formatting
actions occurring out of sequence.
We don't care how the format string is scanned so long as the
actions occur in the proper order.
Probably more likely would be to dig up some of John Mashey's
observations on the subject.
Dennis
No, that says it `uses the same amount of storage', not that it has the
same width, and as near as I can tell it is not required to be unique.
|[...] Thus it is
|not possible to pretend that a wider type is actually a
|narrower type, because that pretense doesn't work for the
|unsigned variety.
I'm not yet convinced. The hypothetical implementation can still be
said to have the following types:
Type Size:W:Pad Unsigned Version Size:W:Pad
---- ---------- ---------------- ----------
int 36:36:0 unsigned int 36:36:0
int 32:32:4 unsigned int 36:36:0
Why does that not meet the requirements?
If we skip over the fiddly details about whether or not it is
required, you seem to be saying it's explicitly _not allowed_, that
is, our 36-bit implementor _cannot_ typedef int32_t without doing the
additional work required to support uint32_t, because whatever signed
type he/she uses must have a corresponding unsigned one. Why on
earth would you prohibit that?
--
To reply by mail, please remove "mail." from my address -- but please
send e-mail or post, not both
As I posted elsewhere in the thread, I believe (for example) the
unsigned version of the same 36-bit integer that is being viewed as a
32-bit integer meets that requirement, though it still does not help
with uint32_t (which is not a requirement, as near as I can tell).
Your argument seems to _forbid_ the trivial int32_t implementation,
which seems pointless to me. Is that what you want?
Which types were you nominating for int32_t and uint32_t?
> If we skip over the fiddly details about whether or not it is
> required, you seem to be saying it's explicitly _not allowed_, that
> is, our 36-bit implementor _cannot_ typedef int32_t without doing the
> additional work required to support uint32_t, because whatever signed
> type he/she uses must have a corresponding unsigned one. Why on
> earth would you prohibit that?
Well, for one thing, the exact length types were meant to be exactly the
size specified. No one wants to make it easy to implement them the way
you're suggesting.
They will, of course, be misused, but they have no proper use that
anyone would want to port to a 36 bit machine. The 'least' and 'fast'
types have such uses, but not the exact ones.
> What requirement on int32_t does an existing signed integer type with
> width 36 *not* obey ? Given that, there *is* a type existing in the
> implementation that has the right properties, so int32_t must be
> declared.
It seems then that C9X has bitten itself. Assume we have a machine
where int and unsigned int are 36 bits.
int and unsigned int are 36 bits, therefore types int36_t and uint36_t must
be defined (the implementation MUST define them because it CAN).
The only difference between int32_t and int36_t is that sometimes
int32_t operations would overflow where int36_t doesn't, but you cannot
reliably detect this; therefore int36_t is a valid implementation of
int32_t, and therefore the implementation MUST define int32_t.
For every signed type, there is a corresponding unsigned type. Therefore
the implementation MUST define uint32_t. Of course, uint32_t can be
implemented relatively easily, by replacing for example (x + y), where x
and y are uint32_t, with (uint32_t) ((uint36_t) x + (uint36_t) y).
So if nothing in this argument is wrong, an implementation MUST
provide intNN_t and uintNN_t types of all sizes up to the largest
possible size?
Absolutely. It's not a type (in your example).
: In my dreams, Dennis Ritchie makes a posting on this newsgroup that
: says, 'Long long must go. This is non-negotiable. It must not be
: reworded, reformulated or reinvented.'
Too bad they already have a "long long" supporting compiler for Plan9.
Doh!
And of course, considering the huge popularity of that platform, I'm sure
it would be difficult to remove that ;-)
Indeed, we had to change the C standard with respect to
allowing an extra , at the end of a list of enumerated constants,
since Plan 9 insisted on not conforming to the previous standard on that
score.
:-)
Then you do not generally program in an environment where int is
16 *or* 32 bits, and the only way to specify that a quantity can be
greater than 65535 is to use long.
Oh happy man you are ;-)
> In fact, some type like this should exist.
Correct, it is called intmax_t in C9X.
And as I highlighted, it was absent from C89 and K&R.
> The standard must define rules for types that are abstract enough
> to be scalable and usable on any platform.
That is the very point of <inttypes.h> (and of the competing
proposals as well).
> Currently it seems that the standard is only concerned with
> leaving the problem open for some later date.
If this is your impression nowadays, then the committee did a bad
job explicating.
I agree <inttypes.h> won't be used *everywhere* beginning on
Jan 1st, 2000, when C9X will be available from the ISO; but just
saying the problem is left open by C9X is a bit strong, I think.
> Antoine Leca <Antoin...@renault.fr> wrote:
> : The point is that C89, and K&R before it, overlooked the need
> : for a (separate) 64-bit integer type in C.
>
> I don't think so. C89 never specified that long would be exactly 32 bits
> (which would've been a mistake if it did). That allowed long to scale to
> larger sizes without having to change the format of the specification.
Then you missed my point; sorry for not being that clear.
long in C89 (and K&R) carries two meanings: "at least 32 bits", and this
is by far the most widely used meaning; and "the largest integer
type available to portable programs".
Your position is to request everybody to align on the second meaning.
My point is to say that it is disconnected from reality.
[The committee's charter:]
> : "1. Existing code is important, existing implementations are not. [...]"
>
> : So you appear to contradict the first principle that guides
> : the Committee.
>
> Not really, as there is existing code for both cases (I have yet to actually
> deal with code that uses "long long", but have seen a bit that assumes
> long is the widest type).
And how much code have you encountered that assumes that long means
int_least32_t?
If you answer "not much", please ask the people in charge of programming
for Microsoft platforms to show you some... ;-)
However, my point was only focused on your remark about implementations.
The committee affirmed (and affirms) that only programs are important.
I am sure you got the point.
> Still, it makes sense for them to allow fully compliant code to use
> 64-bit longs with a compiler option.
Sorry, but "[something] with a compiler option" is very different from
"the standard requires [something]".
And we are speaking of the latter (we all agree the former is always
possible, and even more, that it is most often preferable).
OTOH, if your code requires what you are requesting, you can always write
#if ULONG_MAX != UINTMAX_MAX
#error This code assumes long is the longest integer type
#endif
If you have a bit more time, consider doing a search&replace of
/long/ by /intmax_t/
(just after having replaced /unsigned[:space:]*long/ by /uintmax_t/ ;-).
And then compile with gcc with -Wformat to correct the printf calls.
Note: I never claimed this is an easy job.
I just claim this is what C9X implies (well, the current FCD).
> Besides, I would make that particular passage more accurate by saying:
>
> "1. Existing compliant code is most important, existing code is important,
> and existing implementations are not" or something ;-)
But that is not what is written in the charter.
The main objection I ever received was: "but long long is standardizing
existing practice". Since I do not work in the 32/64bits-both-platform-
compatible business, I have nothing to answer to this point, and I left
it open to others. But if you intend standardizing long to be the longest
type, I will strongly disapprove, even if I agree that on a number of
platforms, this is "standardizing existing practice"; because it goes
against the soft inclusion of a new 64-bit type. Written at length, that
is the point I wanted to make.
Not necessarily. I have programmed for environments where int can
be 16, 24, 32, 36 or 64 bits, and I favour long remaining the longest
type. And, let us make no mistake, C89 states quite clearly that
long is the longest integer type available to conforming programs.
|> > Currently it seems that the standard is only concerned with
|> > leaving the problem open for some later date.
|>
|> If this is your impression nowadays, then the committee did a bad
|> job explicating.
|>
|> I agree <inttypes.h> won't be used *everywhere* beginning on
|> Jan 1st, 2000, when C9X will be available from the ISO; but just
|> saying the problem is left open by C9X is a bit strong, I think.
Perhaps. Unfortunately, I and others have pointed out several,
very important, areas where C9X has made the same mistakes as C89
(and added new ones), but our concerns have not been addressed or
even seem to have been considered.
|> Then you missed my point; sorry for not being that clear.
|> long in C89 (and K&R) carries two meanings: "at least 32 bits", and this
|> is by far the most widely used meaning; and "the largest integer
|> type available to portable programs".
Not in C89. It is the longest integer type available to CONFORMING
programs.
|> Your position is to request everybody to align on the second meaning.
|> My point is to say that it is disconnected from reality.
And my point is that it isn't - see my survey on the reflector.
I have seen NO evidence that any portable programs need long long,
but I have produced evidence that many will be broken by it.
|> And how much code have you encountered that assumes that long means
|> int_least32_t?
|>
|> If you answer "not much", please ask the people in charge of programming
|> for Microsoft platforms to show you some... ;-)
But that is specified by C89 anyway! Or do you mean int32_t?
|> However, my point was only focused on your remark about implementations.
|> The committee affirmed (and affirms) that only programs are important.
|> I am sure you got the point.
And I am afraid that the responses that I have received indicate
that C9X is concerned more about preserving implementations than
programs.
Sorry, there was a typo in my table, but I'm nominating int 36:32:4
(named `int', with its corresponding unsigned type, unsigned int
36:36:0, named `unsigned int') for int32_t, and no implementation for
uint32_t. I do not believe that having int32_t requires providing
uint32_t, since int32_t is not one of the integer types directly, it
is merely a typedef for an existing integer type (in this case int)
that does already have a corresponding unsigned type. I see no
language in the draft that requires uintN_t typedef if intN_t exists,
any more than it requires a uptrdiff_t or a uwchar_t. It only
requires the underlying signed integer type have a corresponding
unsigned type, and I claim it does.
If you really want to forbid trivial intN_t implementations (which I
think would be a bad thing; see below) then 7.18 should have language
that explicitly says if you have an intN_t you need a uintN_t.
|Well, for one thing, the exact length types were meant to be exactly the
|size specified. No one wants to make it easy to implement them the way
|you're suggesting.
If exact _sizes_ were what was desired, why were they not specified
as such? Now, `exactly' is a meaningless claim in the signed case.
No portable code can use them in any fashion that assumes the 36-bit
version is not the underlying type, because it can be (unarguably I
think if the implementor does implement a uint32_t, no?)
|They will, of course, be misused, but they have no proper use that
|anyone would want to port to a 36 bit machine. The 'least' and 'fast'
|types have such uses, but not the exact ones.
Please remind me of the proper use, given the current definition, of
a signed, exact-width type. Every time the discussion of extended
integers has come up I have asked for this, and have yet to see any
such use that does not really want an exact-sized type, which is not
what the draft provides. Even the unsigned examples are few and far
between, but I will grant that they do exist.
Given that there is no portable use for intN_t as an exact-width
type, it seems to me that intN_t can _only_ be `misused' and that
implementors on odd machines will want to do anything they can to
help their customers compile this bad code. To prevent them from
doing so would be doing them a disservice.
I disagree.
> ... then 7.18 should have language
> that explicitly says if you have an intN_t you need a uintN_t.
I agree; it should also say that about all the bit-sized types.
> |Well, for one thing, the exact length types were meant to be exactly the
> |size specified. No one wants to make it easy to implement them the way
> |you're suggesting.
>
> If exact _sizes_ were what was desired, why were they not specified
> as such? Now, `exactly' is a meaningless claim in the signed case.
Because such specification is difficult in the context of the standard,
without making it excessively restrictive. However, if an implementation
were allowed/required to implement int32_t only if it can also easily
implement uint32_t, then that would come closer to forcing an
exact-sized type.
...
> |They will, of course, be misused, but they have no proper use that
> |anyone would want to port to a 36 bit machine. The 'least' and 'fast'
> |types have such uses, but not the exact ones.
>
> Please remind me of the proper use, given the current definition, of
> a signed, exact-width type. Every time the discussion of extended
> integers has come up I have asked for this, and have yet to see any
> such use that does not really want an exact-sized type, which is not
> what the draft provides. Even the unsigned examples are few and far
> between, but I will grant that they do exist.
I know of no use for exact-width types that are not also exact-size
types. In the absence of a standard specification that defines exact
size types, the exact-width types are the next best thing available.
There are plenty of legitimate uses for exact-size types, such as for
header files to be used with many different implementations of C, all
designed to use a compatible interface, such as calling OS functions, or
reading files with a language-independent filespec. On a 36 bit system,
it is extremely unlikely that any such functions, or any such files,
would use types whose sizes were multiples of 8 rather than of 9.
> Given that there is no portable use for intN_t as an exact-width
> type, it seems to me that intN_t can _only_ be `misused' and that
"No portable use" is not the same as "no use which is not a mis-use". Why
should it matter whether interfaces that only work on certain platforms
can be described in a way that is portable to incompatible platforms?
What does matter is whether the standard supports methods that would
allow such interfaces to be defined in an implementation-independent
manner. It is very common for an interface to need to be shared across
many different compatible implementations, while not necessarily being
useable or even meaningful on all implementations.
That was not my point.
My point was that there are *many* programmers producing code that should
work on Win16 *and* Win32.
On such a platform (beyond the typedefs provided by Microsoft that mimic
<inttypes.h>), long means 32 bits, even if you do not want it, because
int is reserved for the "natural" size (that is, 16 or 32, depending on
the target).
This behaviour is firmly engraved in their heads, and it is not likely to
change any time soon.
Also, long changing from 32 to 64 bits is a nightmare for performance
on 16-bit boxes. But that is a minor point.
> Unfortunately, I and others have pointed out several,
> very important, areas where C9X has made the same mistakes as C89
> (and added new ones), but our concerns have not been addressed or
> even seem to have been considered.
I am not aware <inttypes.h> is one of those areas; is it?
> |> Then you missed my point; sorry to not being that clear.
> |> long in C89 (and K&R) carries two meanings: "at least 32 bits", and this
> |> is by far the most widely used meaning; and "the largest integer
> |> type available to portable programs".
>
> Not in C89. It is the longest integer type available to CONFORMING
> programs.
First, I can't let this one pass: it is wrong, because every program
compiled by gcc is conforming from the very day that gcc was or will be
a conforming implementation (I do not want to discuss whether it should be
perfect, or future...)
But I am sure you intended "strictly conforming", didn't you?
So, I repeat: even in C89, even to strictly conforming programs,
long int carries both meanings. You cannot deny it is the only type
that guarantees 32 bits to strictly conforming programs, can you?
> |> Your position is to request everybody to align on the second meaning.
> |> My point is to say that it is disconnected from reality.
>
> And my point is that
Since I have the opportunity, I thank you for this very good piece
of work, which will provide data to an otherwise sterile debate.
> I have seen NO evidence that any portable programs need long long,
> but I have produced evidence that many will be broken by it.
I do not deny this one (our points are different).
> |> And how much code have you encountered that assumes that long means
> |> int_least32_t?
> |>
> |> If you answer "not much", please ask the people in charge of programming
> |> for Microsoft platforms to show you some... ;-)
>
> But that is specified by C89 anyway! Or do you mean int32_t?
This is the key point.
We both agree long means int_least32_t in C90, and will stay this way
for strictly conforming programs in C9X.
My point is the following: since long means int_least32_t, some
programmers assume (incorrectly) that long means effectively int32_t.
Usually, this does not yield problems, so the transition path
to 64 bits is usually smooth.
But sometimes, the assertion breaks; and then, these naughty
programmers say: "I meant int32_t; perhaps it is wrong, but I do not
care what the Standard says, I want my program to work, so please fix
your new compiler and let long stay int32_t. Thanks."
And I say these programmers, and their programs:
- are faulty
- are numerous
Therefore, the compilers are forced to have at least a mode where
long is 32 bits, even if they fully support 64 bits. C9X tried to
keep this mode in the realm of the Standard.
Please note long long does not enter into my discussion, because this is
not my point.
That may well be true. I don't see it as grounds for a major quiet
change, because building a standard round the "most stupid implementor
and corresponding programmers" is a sure way to get something that is
a long-term disaster.
However, I am well-known for not being a politician :-)
>> Unfortunately, I and others have pointed out several,
>> very important, areas where C9X has made the same mistakes as C89
>> (and added new ones), but our concerns have not been addressed or
>> even seem to have been considered.
>
>I am not aware <inttypes.h> is one of those areas; is it?
Yes and no. <inttypes.h> as such adds useful features, and I can't
see anything that it breaks. However, it is claimed to be a solution
to the problems that I and others have raised, and it isn't. Worse,
converting from C89 long to C9X <inttypes.h> reduces the long-term
portability in many cases.
>> |> Then you missed my point; sorry for not being that clear.
>> |> long in C89 (and K&R) carries two meanings: "at least 32 bits", and this
>> |> is by far the most widely used meaning; and "the largest integer
>> |> type available to portable programs".
>>
>> Not in C89. It is the longest integer type available to CONFORMING
>> programs.
>
>First, I can't let this one pass: it is wrong, because every program
>compiled by gcc is conforming from the very day that gcc was or will be
>a conforming implementation (I do not want to discuss whether it should be
>perfect, or future...)
>But I am sure you intended "strictly conforming", didn't you?
No. I meant what I said. In C89, a compiler is permitted to introduce
a "long long" type, but it is NOT an integer type - I lost this one
on bit operators, and so am using other people's arguments :-)
I don't have a suitable standard here, so I can't give numbers. But,
in the section headed "Types", it says:
There are four {\it signed integer types}, designated as signed
char, short int, int and long int.
< Similarly about unsigned integer types >
The type char, the signed and unsigned integer types and the
enumerated types are collectively called {\it integral types}.
Because italic indicates a definition, "long long" cannot be either
a signed integer type or an integral type - actually, it can't be
arithmetic, either, because of a later paragraph. And, because
size_t and ptrdiff_t are defined to be integer types, they cannot
be any variant of long long.
Q.E.D.
>So, I repeat: even in C89, even to strictly conforming programs,
>long int carries both meanings. You cannot deny it is the only type
>that guarantees 32 bits to strictly conforming programs, can you?
Yes, I agree that it carries both meanings, and that is the root
cause of the problems.
>And I say these programmers, and their programs:
> - are faulty
> - are numerous
>
>Therefore, the compilers are forced to have at least a mode where
>long is 32 bits, even if they fully support 64 bits. C9X tried to
>keep this mode in the realm of the Standard.
Yes, and I have no disagreement with this. What I object to is the
following:
1) Existing, portable and conforming programs are broken by this
change, with (currently) no aid to migration.
2) Any attempt to bring this up is met with a barrage of
erroneous denial and misdirection. You are the ONLY proponent of the
changes who has actually responded to my points.
3) Statements are made that the CORRECT way to achieve long-term
portability is to screw a particular bit width requirement into code.
That is completely wrong.
: Correct, it is called intmax_t in C9X.
: And as I highlighted, it was absent from C89 and K&R.
Yes, I'm aware of that, although by earlier specification it was not
really necessary. We can only hope that people don't start assuming
it's 64-bit. What I see as the biggest problem here is inventing new
types each time we get a wider type. We might assume that this will
not continue past 64 bits, but I'd say that was quite a horrific
assumption. With the current methods a new type will have to be
invented in a few more years. This isn't a scalable language specification.
This is the problem I think should be remedied immediately.
:> The standard must define rules for types that are abstract enough
:> to be scalable and usable on any platform.
: That is the very point of <inttypes.h> (and of the competing
: proposals as well).
I wouldn't call those very abstract types. The most abstract type
possible would be something like 'num', which of course could cause
problems generating optimal code.
: Then you missed my point; sorry for not being that clear.
: long in C89 (and K&R) carries two meanings: "at least 32 bits", and this
: is by far the most widely used meaning; and "the largest integer
: type available to portable programs".
: Your position is to request everybody to align on the second meaning.
: My point is to say that it is disconnected from reality.
I have no problem with aligning to the first meaning either. It does say
'at least'.
: And how much code have you encountered that assumes that long means
: int_least32_t?
: If you answer "not much", please ask the people in charge of programming
: for Microsoft platforms to show you some... ;-)
Well yes, of course I have to admit to seeing quite a bit of that.
From my perspective that is non-compliant and basically bad coding. I
don't expect a language specification to try and fix bad coding.
: The main objection I ever received was: "but long long is standardizing
: existing practice". Since I do not work in the 32/64bits-both-platform-
: compatible business, I have nothing to answer to this point, and I left
: it open to others. But if you intend standardizing long to be the longest
: type, I will strongly disapprove, even if I agree that on a number of
: platforms, this is "standardizing existing practice"; because it goes
: against the soft inclusion of a new 64-bit type. Written at length, that
: is the point I wanted to make.
And put shortly, including new types each time wider data needs to
be processed is IMO a mistake.