
C programming in 2011


Duncan Bayne

May 25, 2011, 11:27:38 PM
Hi All,

Many moons ago I cut C code for a living, primarily while maintaining
a POP3 server that supported a wide range of OSs (Linux, *BSD, HPUX,
VMS ...).

I'm planning to polish the rust off my C skills and learn a bit about
language implementation by coding a simple FORTH in C.

But I'm wondering how (or whether?) things have changed in the C world
since 2000. When I think C, I think ...

1. comp.lang.c
2. ANSI C wherever possible (but C89 as C99 isn't that widely
supported)
3. `gcc -Wall -ansi -pedantic` in lieu of static analysis tools
4. Emacs
5. Ctags
6. Autoconf + make (and see point 2 for VMS, HP-UX etc. goodness)

Can anyone who's been writing in C for the past eleven years let me
know what (if anything ;-) ) has changed over the years?

(In other news, holy crap, I've been doing this for more than a
decade).

--
Duncan Bayne
ph: +61 420 817 082 | web: http://www.fluidscape.com/ | skype:
duncan_bayne

Shao Miller

May 25, 2011, 11:33:29 PM
On 5/25/2011 10:27 PM, Duncan Bayne wrote:
> ...When I think C, I think ...
>
> ...

> 2. ANSI C wherever possible (but C89 as C99 isn't that widely
> supported)
> 3. `gcc -Wall -ansi -pedantic` in lieu of static analysis tools
> ...

Agreed. :)

Message has been deleted

Duncan Bayne

May 25, 2011, 11:58:56 PM
>   Can this be used on, say, Microsoft Windows? (I know that
>   make is available for Windows. I referred to Autoconf.)

Only via Cygwin (and then, you'd better bundle shtool). But it works
most everywhere else ... and where it doesn't (e.g. Windows) there's
always hand-crafted Makefiles. At least that's the way I used to do
it back in the day.

Duncan Bayne

May 26, 2011, 12:06:36 AM
On May 26, 1:58 pm, Duncan Bayne <dhgba...@gmail.com> wrote:
> >   Can this be used on, say, Microsoft Windows? (I know that
> >   make is available for Windows. I referred to Autoconf.)
>
> Only via Cygwin (and then, you'd better bundle shtool).  But it works
> most everywhere else ... and where it doesn't (e.g. Windows) there's
> always hand-crafted Makefiles.  At least that's the way I used to do
> it back in the day.

In fact, thinking back, I think the way we did it for the
aforementioned POP3 server was to have a Visual C++ 6.0 solution for
building on Windows, Autoconf for most other OSs, plus maybe custom
Makefiles? I think that most of us used Windows as a dev platform,
some (including me) used Linux, and one chap some sort of BSD.

As you can probably tell, my memory of that project is a little
fuzzy :-)

Shao Miller

May 26, 2011, 12:24:42 AM
On 5/25/2011 11:06 PM, Duncan Bayne wrote:
> In fact, thinking back, I think the way we did it for the
> aforementioned POP3 server was to have a Visual C++ 6.0 solution for
> building on Windows, Autoconf for most other OSs, plus maybe custom
> Makefiles? I think that most of us used Windows as a dev platform,
> some (including me) used Linux, and one chap some sort of BSD.
>
> As you can probably tell, my memory of that project is a little
> fuzzy :-)

I seem to recall TinyMUX having a similar strategy. It might bring back
some memories for you, should you choose to indulge.

Angel

May 26, 2011, 3:51:36 AM
On 2011-05-26, Duncan Bayne <dhgb...@gmail.com> wrote:
> Hi All,
>
> Many moons ago I cut C code for a living, primarily while maintaining
> a POP3 server that supported a wide range of OSs (Linux, *BSD, HPUX,
> VMS ...).

Oh, what was the name of that server? I'd like to know...

> I'm planning to polish the rust off my C skills and learn a bit about
> language implementation by coding a simple FORTH in C.
>
> But I'm wondering how (or whether?) have things changed in the C world
> since 2000. When I think C, I think ...

I've been trying to shake the rust off of me for some time, last time I
did any real serious programming was back in college about 10 years ago.

> 1. comp.lang.c

I'm new here myself, joined a few months ago. Seems like a friendly and
helpful group most of the time.

Which reminds me, I should thank everyone who answered my query here.
Making my code more portable didn't only make my program work on Sparc
and PPC, it also made the coding as a whole a heck of a lot easier.

> 2. ANSI C wherever possible (but C89 as C99 isn't that widely
> supported)

I tend to stick to C99 as it has many good features that I missed in old
C. It really is an improvement IMHO.

> 3. `gcc -Wall -ansi -pedantic` in lieu of static analysis tools

I tend to use 'gcc -std=gnu99 -Wall -Werror' as I mostly develop for
Linux/GNU, but I avoid gcc-specific constructs where I can.

> 4. Emacs

Vi IMproved for me, but whatever works. The Linux kernel folks seem not
too fond of emacs' default layout for C though, just saying...

> 5. Ctags

Good stuff. ^^

> 6. Autoconf + make (and see point 2 for VMS, HP-UX etc. goodness)

I have to figure out autoconf someday...


--
"C provides a programmer with more than enough rope to hang himself.
C++ provides a firing squad, blindfold and last cigarette."
- seen in comp.lang.c

Chris H

May 26, 2011, 6:15:13 AM
In message <slrnits1k7.2...@pearlgates.net>, Angel
<angel...@spamcop.net> writes

>> 2. ANSI C wherever possible (but C89 as C99 isn't that widely
>> supported)
>
>I tend to stick to C99 as it has many good features that I missed in old
>C. It really is an improvement IMHO.

Good luck.... most compilers do NOT fully support C99, and those
that *claim* to typically support only parts of it.

Most areas of the industry tend to avoid C99 altogether.

--
\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
\/\/\/\/\ Chris Hills Staffs England /\/\/\/\/
\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/

Chris H

May 26, 2011, 6:13:06 AM
In message <C-201105...@ram.dialup.fu-berlin.de>, Stefan Ram
<r...@zedat.fu-berlin.de> writes
>Duncan Bayne <dhgb...@gmail.com> writes:
>>1. comp.lang.c
>
> Usenet is dying. (Some people say.)

Sadly true.

>Try Twitter?

Orrible :-(

>
> (Oh, I see you are not using a real newsgroup client,
> but Google Groups, so you already support the Web!)


>
>>2. ANSI C wherever possible (but C89 as C99 isn't that widely
>>supported)
>

> ISO C.

ISO C95, NOT C99.
However, ISO C 12 may be an improvement.

>>3. `gcc -Wall -ansi -pedantic` in lieu of static analysis tools

No. Use a proper static analyser.

>>4. Emacs

Editor etc is personal preference.

Duncan Bayne

May 26, 2011, 6:41:08 AM
> Oh, what was the name of that server? I'd like to know...

DPOP.

> Vi IMproved for me, but whatever works. The Linux kernel folks seem not
> too fond of emacs' default layout for C though, just saying...

The only oddity I've noticed is indenting in loops, like:

while (foo)
  {
    bar();
  }

> I have to figure out autoconf someday...

Let me know when you do ;-) I've figured out just enough to do what I
need, & nothing more :-)

--
Duncan Bayne
ph: +61 420 817 082 | web: http://www.fluidscape.com/ | skype:
duncan_bayne


James Kuyper

May 26, 2011, 6:53:07 AM
On 05/26/2011 06:15 AM, Chris H wrote:
> In message <slrnits1k7.2...@pearlgates.net>, Angel
> <angel...@spamcop.net> writes
>>> 2. ANSI C wherever possible (but C89 as C99 isn't that widely
>>> supported)
>>
>> I tend to stick to C99 as it has many good features that I missed in old
>> C. It really is an improvement IMHO.
>
> Good luck.... most compilers do NOT support C99. Those that *claim* to
> only support parts of it.

It's true that very few compilers fully support C99, but it's also
true that compilers which support most of the new features of C99
are quite commonplace nowadays.

--
James Kuyper

Francois Grieu

May 26, 2011, 9:34:25 AM
On 26/05/2011 05:27, Duncan Bayne wrote:
> Can anyone who's been writing in C for the past eleven years
> let me know what (if anything;-) ) has changed over the years?

In my experience, both the robustness and code quality of
mainstream compilers (including but not limited to GCC 4,
MS Visual C, Keil C for ARM) has vastly improved.
You'll still often find ugly bugs [1] with these, but very
rarely in the worst category: "generate wrong code".

You can now expect the most obvious optimizations to be
automatically applied [2][3], and will on occasions be surprised
(e.g. by removal of code that is dead only at second glance).

For many of these compilers, inline, __inline, __forceinline
or whatever it is called works, and when ISO C 90
conformance is not necessary, it is better than macros.

On the other hand, compilers for less mainstream embedded
platforms must still be expected to generate code that is
badly suboptimal and often wrong.

Francois Grieu


[1] for example compiling these two lines
#if 1?1?1:1:1
#endif
with MS Visual C gives:
fatal error C1017: invalid integer constant expression

[2] many compilers optimize
for(j=0;j<10;++j)
foo();
into the obvious code for
j=0;
do
foo();
while(++j<10);

[3] some compilers know that, when j is an int,
if (j<1 || j>8)
foo();
can be rewritten as
if ((unsigned)j-1u>8u)
foo();

Francois Grieu

May 26, 2011, 10:04:54 AM

if ((unsigned)j-1u>7u)
foo();
[fix typo; compilers should do optimization]

Branimir Maksimovic

May 26, 2011, 10:21:25 AM
On 05/26/2011 04:04 PM, Francois Grieu wrote:
> [3] some compilers know that, when j is an int,
> if (j<1 || j>8)
> foo();
> can be rewritten as
> if ((unsigned)j-1u>7u)
> foo();

No, it can't.


Francois Grieu

May 26, 2011, 11:23:01 AM

Would you please illustrate by an example when that does not stand?
Stick to C99, please.

Francois Grieu


Note: my version of GCC makes VERY good code for

#include <limits.h>
#include <stdio.h>
int main(void)
{
    int j;
    j = INT_MIN;
    for (;;)
    {
        if ((j < 1 || j > 8) != ((unsigned)j - 1u > 7u))
        {
            printf("%d\n", j);
            return 1;
        }
        if (j == INT_MAX)
            break;
        ++j;
    }
    return 0;
}

Kenneth Brody

May 26, 2011, 12:27:43 PM
On 5/25/2011 11:50 PM, Stefan Ram wrote:
> Duncan Bayne<dhgb...@gmail.com> writes:
[...]
>> 4. Emacs
>
> Gesundheit! - Real C programmers use vi.
> Just kidding.
[...]

Butterflies.

uggc://kxpq.pbz/378/

--
Kenneth Brody

Nobody

May 26, 2011, 1:02:04 PM
On Thu, 26 May 2011 03:50:50 +0000, Stefan Ram wrote:

>>6. Autoconf + make (and see point 2 for VMS, HP-UX etc. goodness)
>

> Can this be used on, say, Microsoft® Windows? (I know that
> make is available for Windows. I referred to Autoconf.)

MSys provides bash and standard Unix utilities for Windows.
Regardless of autoconf, you'll probably need MSys if you want to
use Unix Makefiles on Windows. Writing Makefiles which work with either
/bin/sh or cmd.exe is too much work.

Cygwin is overkill if you just need to make a Unix build system work on
Windows. It's only really necessary if you need to make complex Unix
software work on Windows with minimal changes.

Branimir Maksimovic

May 26, 2011, 1:15:26 PM
On Thu, 26 May 2011 17:23:01 +0200
Francois Grieu <fgr...@gmail.com> wrote:

> On 26/05/2011 16:21, Branimir Maksimovic wrote:
> > On 05/26/2011 04:04 PM, Francois Grieu wrote:
> >> [3] some compilers know that, when j is an int,
> >> if (j<1 || j>8)
> >> foo();
> >> can be rewritten as
> >> if ((unsigned)j-1u>7u)
> >> foo();
> >
> > No, it can't.
>
> Would you please illustrate by an example when that does not stand?

When j is in range [-8 , 0]

Actually: if(j < -8 || j > 8)
can be rewritten as: if((unsigned)j>8)


--
drwx------ 2 bmaxa bmaxa 4096 2011-05-26 19:00 .

Willem

May 26, 2011, 1:29:38 PM
Branimir Maksimovic wrote:
) On Thu, 26 May 2011 17:23:01 +0200
) Francois Grieu <fgr...@gmail.com> wrote:
)
)> On 26/05/2011 16:21, Branimir Maksimovic wrote:
)> > On 05/26/2011 04:04 PM, Francois Grieu wrote:
)> >> [3] some compilers know that, when j is an int,
)> >> if (j<1 || j>8)
)> >> foo();
)> >> can be rewritten as
)> >> if ((unsigned)j-1u>7u)
)> >> foo();
)> >
)> > No, it can't.
)>
)> Would you please illustrate by an example when that does not stand?
)
) When j is in range [-8 , 0]
)
) Actually: if(j < -8 || j > 8)
) can be rewritten as: if((unsigned)j>8)

Is this meant to be some kind of joke ?


SaSW, Willem
--
Disclaimer: I am in no way responsible for any of the statements
made in the above text. For all I know I might be
drugged or something..
No I'm not paranoid. You all think I'm paranoid, don't you !
#EOT

Pierre Habouzit

May 26, 2011, 1:35:20 PM
On 2011-05-26, Branimir Maksimovic <bm...@hotmail.com> wrote:
> On Thu, 26 May 2011 17:23:01 +0200
> Francois Grieu <fgr...@gmail.com> wrote:
>
>> On 26/05/2011 16:21, Branimir Maksimovic wrote:
>> > On 05/26/2011 04:04 PM, Francois Grieu wrote:
>> >> [3] some compilers know that, when j is an int,
>> >> if (j<1 || j>8)
>> >> foo();
>> >> can be rewritten as
>> >> if ((unsigned)j-1u>7u)
>> >> foo();
>> >
>> > No, it can't.
>>
>> Would you please illustrate by an example when that does not stand?
>
> When j is in range [-8 , 0]

C99 doesn't assure you both are equivalent. I think what Francois tries
to say is that gcc can produce assembly that will do the same as

if ((unsigned)j - 1u > 7u)

and that's indeed what GCC does, since on the architectures GCC targets
with the implementation choices it makes, both are equivalent.
--
·O· Pierre Habouzit
··O madc...@debian.org
OOO http://www.madism.org

Willem

May 26, 2011, 1:53:41 PM
Pierre Habouzit wrote:
) On 2011-05-26, Branimir Maksimovic <bm...@hotmail.com> wrote:
)> On Thu, 26 May 2011 17:23:01 +0200

)> Francois Grieu <fgr...@gmail.com> wrote:
)>
)>> On 26/05/2011 16:21, Branimir Maksimovic wrote:
)>> > On 05/26/2011 04:04 PM, Francois Grieu wrote:
)>> >> [3] some compilers know that, when j is an int,
)>> >> if (j<1 || j>8)
)>> >> foo();
)>> >> can be rewritten as
)>> >> if ((unsigned)j-1u>7u)
)>> >> foo();
)>> >
)>> > No, it can't.
)>>
)>> Would you please illustrate by an example when that does not stand?
)>
)> When j is in range [-8 , 0]
)
) C99 doesn't assure you both are equivalent.

Are you sure ? As far as I know, casting a signed integer to unsigned
is defined for negative numbers as being INT_MAX minus that number.

Ralf Damaschke

May 26, 2011, 1:57:38 PM
Branimir Maksimovic <bm...@hotmail.com> wrote:

Actually not! Consider, say, j = -1. Your first if statement
takes the else-part, your second the then-part - always.

You would have had a quarter of a point had you argued that the
precision of unsigned might not be larger than that of int.
But:

1: the compiler would know that and not internally rewrite the
expression the way Francis mentioned (for *some* compilers).

2: the vast majority of compilers (if not all, at least all I
know of) have a greater precision for unsigned than for int.

-- Ralf

Branimir Maksimovic

May 26, 2011, 2:06:54 PM
On 26 May 2011 17:35:20 GMT
Pierre Habouzit <madc...@debian.org> wrote:

Yes, indeed that is what gcc does ;).
I simply didn't imagine that casting -1 to unsigned *is not* 1 ;)


--
drwx------ 2 bmaxa bmaxa 4096 2011-05-26 20:02 .

Branimir Maksimovic

May 26, 2011, 2:10:14 PM
On Thu, 26 May 2011 17:29:38 +0000 (UTC)
Willem <wil...@toad.stack.nl> wrote:

> Branimir Maksimovic wrote:
> ) On Thu, 26 May 2011 17:23:01 +0200
> ) Francois Grieu <fgr...@gmail.com> wrote:
> )
> )> On 26/05/2011 16:21, Branimir Maksimovic wrote:
> )> > On 05/26/2011 04:04 PM, Francois Grieu wrote:
> )> >> [3] some compilers know that, when j is an int,
> )> >> if (j<1 || j>8)
> )> >> foo();
> )> >> can be rewritten as
> )> >> if ((unsigned)j-1u>7u)
> )> >> foo();
> )> >
> )> > No, it can't.
> )>
> )> Would you please illustrate by an example when that does not stand?
> )
> ) When j is in range [-8 , 0]
> )
> ) Actually: if(j < -8 || j > 8)
> ) can be rewritten as: if((unsigned)j>8)
>
> Is this meant to be some kind of joke ?
>

No, I didn't know how casting from signed to unsigned works.

--
drwx------ 2 bmaxa bmaxa 4096 2011-05-26 20:06 .

Malcolm McLean

May 26, 2011, 2:13:15 PM
On May 26, 8:53 pm, Willem <wil...@toad.stack.nl> wrote:
> Pierre Habouzit wrote:
>
> ) On 2011-05-26, Branimir Maksimovic <bm...@hotmail.com> wrote:
> )> On Thu, 26 May 2011 17:23:01 +0200
> )> Francois Grieu <fgr...@gmail.com> wrote:
> )>
> )>> On 26/05/2011 16:21, Branimir Maksimovic wrote:
> )>> > On 05/26/2011 04:04 PM, Francois Grieu wrote:
> )>> >> [3] some compilers know that, when j is an int,
> )>> >> if (j<1 || j>8)
> )>> >>     foo();
> )>> >> can be rewritten as
> )>> >>     if ((unsigned)j-1u>7u)
> )>> >>         foo();
> )>> >
> )>> > No, it can't.
> )>>
> )>> Would you please illustrate by an example when that does not stand?
> )>
> )> When j is in range [-8 , 0]
> )
> ) C99 doesn't assure you both are equivalent.
>
> Are you sure ?  As far as I know, casting a signed integer to unsigned
> is defined for negative numbers as being INT_MAX minus that number.
>
Plus one.
Invert and increment to negate, in two's complement.
--
MiniBasic - how to write a script interpreter
http://www.lulu.com/bgy1mm


Keith Thompson

May 26, 2011, 2:31:05 PM
Ralf Damaschke <rws...@gmx.de> writes:
[...]

> 2: the vast majority of compilers (if not all, at least all I
> know of) have a greater precision for unsigned than for int.

Depends on what you mean by "precision". For most compilers, signed int
and unsigned int can represent exactly the same number of values.

--
Keith Thompson (The_Other_Keith) ks...@mib.org <http://www.ghoti.net/~kst>
Nokia
"We must do something. This is something. Therefore, we must do this."
-- Antony Jay and Jonathan Lynn, "Yes Minister"

Ralf Damaschke

May 26, 2011, 4:43:11 PM
Keith Thompson <ks...@mib.org> wrote:

> Ralf Damaschke <rws...@gmx.de> writes:
> [...]
>> 2: the vast majority of compilers (if not all, at least all I
>> know of) have a greater precision for unsigned than for int.
>
> Depends on what you mean by "precision".

Of course I use the term as defined:

C99, 6.2.6.2p6:
| The /precision/ of an integer type is the number of bits it
| uses to represent values, excluding any sign and padding bits.

> For most compilers,
> signed int and unsigned int can represent exactly the same
> number of values.

D'accord, and that implies that unsigned has a larger precision
than signed.

-- Ralf

Chad

May 26, 2011, 5:09:47 PM
On May 26, 12:51 am, Angel <angel+n...@spamcop.net> wrote:

> On 2011-05-26, Duncan Bayne <dhgba...@gmail.com> wrote:
>
> > Hi All,
>
> > Many moons ago I cut C code for a living, primarily while maintaining
> > a POP3 server that supported a wide range of OSs (Linux, *BSD, HPUX,
> > VMS ...).
>
> Oh, what was the name of that server? I'd like to know...
>
> > I'm planning to polish the rust off my C skills and learn a bit about
> > language implementation by coding a simple FORTH in C.
>
> > But I'm wondering how (or whether?) have things changed in the C world
> > since 2000.  When I think C, I think ...
>
> I've been trying to shake the rust off of me for some time, last time I
> did any real serious programming was back in college about 10 years ago.
>
> > 1. comp.lang.c
>
> I'm new here myself, joined a few months ago. Seems like a friendly and
> helpful group most of the time.
>
> Which reminds me, I should thank everyone who answered my query here.
> Making my code more portable didn't only make my program work on Sparc
> and PPC, it also made the coding as a whole a heck of a lot easier.
>

The right wing religious nuts that roam this forum apparently haven't
gotten to you yet.

Chad

Ian Collins

May 26, 2011, 5:12:29 PM

Not to mention those that claim to support it, do.

--
Ian Collins

Keith Thompson

May 26, 2011, 5:53:47 PM

Ah, I forgot that the standard defines the term "precision" for
integer types. (It's possible for signed and unsigned to have the
same precision, but only if unsigned has a padding bit in place
of signed's sign bit, which would be unusual.)

Note that the standard also defines the "width" as the "precision"
plus any sign bit.

Keith Thompson

May 26, 2011, 6:07:09 PM
Chad <cda...@gmail.com> writes:
> On May 26, 12:51 am, Angel <angel+n...@spamcop.net> wrote:
[...]

>> Which reminds me, I should thank everyone who answered my query here.
>> Making my code more portable didn't only make my program work on Sparc
>> and PPC, it also made the coding as a whole a heck of a lot easier.
>>
>
> The right wing religious nuts that roam this forum apparently haven't
> gotten to you yet.

What??

James Kuyper

May 26, 2011, 9:57:38 PM
On 05/26/2011 02:13 PM, Malcolm McLean wrote:
> On May 26, 8:53 pm, Willem <wil...@toad.stack.nl> wrote:
...

>> Are you sure ? As far as I know, casting a signed integer to unsigned
>> is defined for negative numbers as being INT_MAX minus that number.
>>
> Plus one.
> Invert and increment to negate, in twos complememnt

That's UINT_MAX, not INT_MAX. And it's true whether or not 'int' is a
2's complement type; the rule is simply easier to implement in 2's
complement.
--
James Kuyper

Angel

May 27, 2011, 3:42:08 AM
On 2011-05-26, Chad <cda...@gmail.com> wrote:

> On May 26, 12:51 am, Angel <angel+n...@spamcop.net> wrote:
>>
>> I'm new here myself, joined a few months ago. Seems like a friendly and
>> helpful group most of the time.
>>
>> Which reminds me, I should thank everyone who answered my query here.
>> Making my code more portable didn't only make my program work on Sparc
>> and PPC, it also made the coding as a whole a heck of a lot easier.
>
> The right wing religious nuts that roam this forum apparently haven't
> gotten to you yet.

I've been on news.admin.net-abuse.email and alt.religion.scientology.
Trust me, this group is paradise compared to them. :-)


--
"C provides a programmer with more than enough rope to hang himself.
C++ provides a firing squad, blindfold and last cigarette."
- seen in comp.lang.c

James Kuyper

May 27, 2011, 7:30:48 AM
On 05/26/2011 05:09 PM, Chad wrote:
...

> The right wing religious nuts that roam this forum apparently haven't
> gotten to you yet.

Who in the world are you talking about?
--
James Kuyper

Jorgen Grahn

May 27, 2011, 8:46:37 PM
On Thu, 2011-05-26, Duncan Bayne wrote:
...

>> Vi IMproved for me, but whatever works. The Linux kernel folks seem not
>> too fond of emacs' default layout for C though, just saying...
>
> The only oddity I've noticed is indenting in loops, like:
>
> while (foo)
> {
> bar();
> }

That is indeed a GNU oddity, and probably the default in Emacs.
You should ask it to use one of its other builtin styles or
(in my case) a variation of one of them:

(require 'cc-mode)
(c-add-style "bsd4" '("bsd" (c-basic-offset . 4)))
(setq c-default-style "bsd4")

>> I have to figure out autoconf someday...
>
> Let me know when you do ;-) I've figured out just enough to do what I
> need, & nothing more :-)

The need to support odd Unixes is smaller today than in 2000.

/Jorgen

--
// Jorgen Grahn <grahn@ Oo o. . .
\X/ snipabacken.se> O o .

Phil Carmody

May 28, 2011, 6:15:35 PM
Angel <angel...@spamcop.net> writes:
> On 2011-05-26, Duncan Bayne <dhgb...@gmail.com> wrote:
> > 4. Emacs

>
> Vi IMproved for me, but whatever works. The Linux kernel folks seem not
> too fond of emacs' default layout for C though, just saying...

I don't think I've ever seen emacs' "linux" style do anything not
approved by the coding standard. Some things aren't specified too
accurately, so emacs may produce different indentation for multi-line
if conditions, for example, but there's no right and wrong in those
few cases.

I guess a straw poll of the half of the kernel team where I worked
would have yielded about 50% emacs-alike, 40% vi-alike, 10% other.
From their patches, I was never able to tell which was which.

Phil
--
"At least you know where you are with Microsoft."
"True. I just wish I'd brought a paddle." -- Matthew Vernon

Angel

May 28, 2011, 6:24:21 PM
On 2011-05-28, Phil Carmody <thefatphi...@yahoo.co.uk> wrote:
> Angel <angel...@spamcop.net> writes:
>> On 2011-05-26, Duncan Bayne <dhgb...@gmail.com> wrote:
>> > 4. Emacs
>>
>> Vi IMproved for me, but whatever works. The Linux kernel folks seem not
>> too fond of emacs' default layout for C though, just saying...
>
> I don't think I've ever see emacs' "linux" style do anything not
> approved by the coding standard. Some things aren't specified too
> accurately, so emacs may produce different indentation for multi-line
> if conditions, for example, but there's no right and wrong in those
> few cases.
>
> I guess a straw poll of the half of the kernel team where I worked
> would have yielded about 50% emacs-alike, 40% vi-alike, 10% other.
> From their patches, I was never able to tell which was which.

Coding style is quite a personal thing, and I don't think there is a
real right or wrong in that. It was just something I came across while
browsing through the kernel documentation.

I like putting my accolades on separate lines so I can clearly see
where blocks begin and end, but others like to put their opening
accolade at the end of the if/while/for/function. Like I said earlier,
whatever works for you. :-)

Keith Thompson

May 28, 2011, 7:08:09 PM
Angel <angel...@spamcop.net> writes:
[...]

> Coding style is quite a personal thing, and I don't think there is a
> real right or wrong in that. It was just something I came across while
> browsing through the kernel documentation.
>
> I like putting my accolades on separate lines so I can clearly see
> where blocks begin and end, but others like to put their opening
> accolade at the end of the if/while/for/function. Like I said earlier,
> whatever works for you. :-)

Are you sure that "accolade" is the word you're looking for?

Nizumzen

May 28, 2011, 7:39:40 PM
On 2011-05-26 11:15:13 +0100, Chris H said:

> In message <slrnits1k7.2...@pearlgates.net>, Angel
> <angel...@spamcop.net> writes
>>> 2. ANSI C wherever possible (but C89 as C99 isn't that widely
>>> supported)
>>
>> I tend to stick to C99 as it has many good features that I missed in old
>> C. It really is an improvement IMHO.
>
> Good luck.... most compilers do NOT support C99. Those that *claim* to
> only support parts of it.
>

> Most areas of the industry tend to avoid C99 all together.

Just ditch GCC for Clang and your life will improve markedly. Plus
Clang's static analyser is awesome.

Angel

May 28, 2011, 7:47:01 PM
On 2011-05-28, Keith Thompson <ks...@mib.org> wrote:
> Angel <angel...@spamcop.net> writes:
> [...]
>> Coding style is quite a personal thing, and I don't think there is a
>> real right or wrong in that. It was just something I came across while
>> browsing through the kernel documentation.
>>
>> I like putting my accolades on separate lines so I can clearly see
>> where blocks begin and end, but others like to put their opening
>> accolade at the end of the if/while/for/function. Like I said earlier,
>> whatever works for you. :-)
>
> Are you sure that "accolade" is the word you're looking for?

Like I pointed out in another thread, it is the correct word in Dutch.
I'm not a native English speaker.

It does have that meaning in English too, but only in the context of
sheet music, it seems. I think "curly brace" is the correct term, no?

Anyway, I mean these things: {}

Geoff

May 28, 2011, 7:58:06 PM
On 28 May 2011 23:47:01 GMT, Angel <angel...@spamcop.net> wrote:

>On 2011-05-28, Keith Thompson <ks...@mib.org> wrote:
>> Angel <angel...@spamcop.net> writes:
>> [...]
>>> Coding style is quite a personal thing, and I don't think there is a
>>> real right or wrong in that. It was just something I came across while
>>> browsing through the kernel documentation.
>>>
>>> I like putting my accolades on separate lines so I can clearly see
>>> where blocks begin and end, but others like to put their opening
>>> accolade at the end of the if/while/for/function. Like I said earlier,
>>> whatever works for you. :-)
>>
>> Are you sure that "accolade" is the word you're looking for?
>
>Like I pointed out in another thread, it is the correct word in Dutch.
>I'm not a native English speaker.
>
>It does have that meaning in English too, but only in the context of
>sheet music, it seems. I think "curly brace" is the correct term, no?
>
>Anyway, I mean these things: {}

brackets: []
braces: {}

I thoroughly detest the words curly braces to describe braces.

Keith Thompson

May 28, 2011, 9:47:19 PM
Angel <angel...@spamcop.net> writes:
> On 2011-05-28, Keith Thompson <ks...@mib.org> wrote:
>> Angel <angel...@spamcop.net> writes:
>> [...]
>>> Coding style is quite a personal thing, and I don't think there is a
>>> real right or wrong in that. It was just something I came across while
>>> browsing through the kernel documentation.
>>>
>>> I like putting my accolades on separate lines so I can clearly see
>>> where blocks begin and end, but others like to put their opening
>>> accolade at the end of the if/while/for/function. Like I said earlier,
>>> whatever works for you. :-)
>>
>> Are you sure that "accolade" is the word you're looking for?
>
> Like I pointed out in another thread, it is the correct word in Dutch.
> I'm not a native English speaker.
>
> It does have that meaning in English too, but only in the context of
> sheet music, it seems. I think "curly brace" is the correct term, no?
>
> Anyway, I mean these things: {}

Understood.

I just checked on dictionary.com, and one of its sources shows "a rare
word for brace" as one of the lesser meanings. I had never heard of
that usage. (And I'm reasonably familiar with sheet music.)

Interesting stuff (though not quite topical). And your English is
certainly much better than my anything-other-than-English!

Keith Thompson

May 28, 2011, 9:49:23 PM

The problem is that I've seen () referred to as brackets, and []
as braces. I think there are differences between US and UK usage.

The word "parentheses" is reasonably unambiguous as far as I know,
but I like to refer to "square brackets" and "curly braces" when
there's any chance of confusion. (Or, in writing, I just use the
characters themselves.)

Message has been deleted

Pierre Habouzit

May 29, 2011, 7:13:40 AM

Let's use French!

() is parenthèses
[] is crochets
{} is accolades
<> is chevrons

Nizumzen

May 29, 2011, 9:35:38 AM
On 2011-05-29 02:49:23 +0100, Keith Thompson said:

> Geoff <ge...@invalid.invalid> writes:
>> On 28 May 2011 23:47:01 GMT, Angel <angel...@spamcop.net> wrote:
>>> On 2011-05-28, Keith Thompson <ks...@mib.org> wrote:
>>>> Angel <angel...@spamcop.net> writes:
>>>> [...]
>>>>> Coding style is quite a personal thing, and I don't think there is a
>>>>> real right or wrong in that. It was just something I came across while
>>>>> browsing through the kernel documentation.
>>>>>
>>>>> I like putting my accolades on separate lines so I can clearly see
>>>>> where blocks begin and end, but others like to put their opening
>>>>> accolade at the end of the if/while/for/function. Like I said earlier,
>>>>> whatever works for you. :-)
>>>>
>>>> Are you sure that "accolade" is the word you're looking for?
>>>
>>> Like I pointed out in another thread, it is the correct word in Dutch.
>>> I'm not a native English speaker.
>>>
>>> It does have that meaning in English too, but only in the context of
>>> sheet music, it seems. I think "curly brace" is the correct term, no?
>>>
>>> Anyway, I mean these things: {}
>>
>> brackets: []
>> braces: {}
>>
>> I thoroughly detest the words curly braces to describe braces.
>
> The problem is that I've seen () referred to as brackets, and []
> as braces. I think there are differences between US and UK usage.

Correct. In England we refer to () as brackets and [] as square brackets.

Malcolm McLean

unread,
May 29, 2011, 9:55:34 AM5/29/11
to
On May 29, 2:58 am, Geoff <ge...@invalid.invalid> wrote:
>
> I thoroughly detest the words curly braces to describe braces.
>
Think of it as a belt and braces description.

Keith Thompson

unread,
May 29, 2011, 3:49:54 PM5/29/11
to
Nizumzen <chi...@mcnuggets.com> writes:
> On 2011-05-29 02:49:23 +0100, Keith Thompson said:
[...]

>> The problem is that I've seen () referred to as brackets, and []
>> as braces. I think there are differences between US and UK usage.
>
> Correct. In England we refer to () as brackets and [] as square brackets.

Do you not use the word parentheses? If someone else refers to
parentheses, do they unambiguously refer to ()?

And what do you call {}?

Shao Miller

unread,
May 29, 2011, 5:13:35 PM5/29/11
to
On 5/29/2011 2:49 PM, Keith Thompson wrote:
> Nizumzen<chi...@mcnuggets.com> writes:
>> On 2011-05-29 02:49:23 +0100, Keith Thompson said:
> [...]
>>> The problem is that I've seen () referred to as brackets, and []
>>> as braces. I think there are differences between US and UK usage.
>>
>> Correct. In England we refer to () as brackets and [] as square brackets.
>
> Do you not use the word parentheses? If someone else refers to
> parentheses, do they unambiguously refer to ()?

When I was a young student of various public education institutes, we
learned "BEDMAS" or sometimes "BEMDAS". See here[1].

As a native speaker of English, I understand both terms can be used to
mean '(' and ')'. Almost everyone I know uses "brackets" for these, but
these same people would understand "parentheses" just as well. I use
"brackets" more in speech and "parentheses" more in online discussion.

FWIW. :)

[1] http://en.wikipedia.org/wiki/Order_of_operations

Nizumzen

unread,
May 29, 2011, 6:58:19 PM5/29/11
to
On 2011-05-29 20:49:54 +0100, Keith Thompson said:

> Nizumzen <chi...@mcnuggets.com> writes:
>> On 2011-05-29 02:49:23 +0100, Keith Thompson said:
> [...]
>>> The problem is that I've seen () referred to as brackets, and []
>>> as braces. I think there are differences between US and UK usage.
>>
>> Correct. In England we refer to () as brackets and [] as square brackets.
>
> Do you not use the word parentheses? If someone else refers to
> parentheses, do they unambiguously refer to ()?
>
> And what do you call {}?

I very rarely hear anyone outside of the programming community refer to
them as parentheses. Some do I am sure but round where I am everyone
calls them brackets in general day to day conversation.

Yes, parentheses means () to me. But maybe that is just because I am
used to the programming lingo? I have no idea what someone with no
programming experience would think.

As for {}, we call them curly brackets, unsurprisingly :).

J. J. Farrell

unread,
May 29, 2011, 7:38:46 PM5/29/11
to
Keith Thompson wrote:
> Nizumzen <chi...@mcnuggets.com> writes:
>> On 2011-05-29 02:49:23 +0100, Keith Thompson said:
> [...]
>>> The problem is that I've seen () referred to as brackets, and []
>>> as braces. I think there are differences between US and UK usage.
>> Correct. In England we refer to () as brackets and [] as square brackets.
>
> Do you not use the word parentheses?

Not commonly; it's widely understood and thought of as "a fancy word for
brackets".

> If someone else refers to
> parentheses, do they unambiguously refer to ()?

Yes. Except perhaps in a literary context where "in parentheses" could
be used to refer to "a parenthetical phrase" or "parenthesis" which
could actually be delimited by commas.

> And what do you call {}?

Personally, "curly brackets" or "braces". I've been programming too long
in multinational contexts to remember for sure what they're normally
called in Britain; if indeed they're "normally" called anything, since
they're rarely seen outside programming.

Phil Carmody

unread,
May 30, 2011, 4:01:44 AM5/30/11
to
Angel <angel...@spamcop.net> writes:
> On 2011-05-28, Phil Carmody <thefatphi...@yahoo.co.uk> wrote:
> > Angel <angel...@spamcop.net> writes:
> >> On 2011-05-26, Duncan Bayne <dhgb...@gmail.com> wrote:
> >> > 4. Emacs
> >>
> >> Vi IMproved for me, but whatever works. The Linux kernel folks seem not
> >> too fond of emacs' default layout for C though, just saying...
> >
> > I don't think I've ever see emacs' "linux" style do anything not
> > approved by the coding standard. Some things aren't specified too
> > accurately, so emacs may produce different indentation for multi-line
> > if conditions, for example, but there's no right and wrong in those
> > few cases.
> >
> > I guess a straw poll of the half of the kernel team where I worked
> > would have yielded about 50% emacs-alike, 40% vi-alike, 10% other.
> > From their patches, I was never able to tell which was which.
>
> Coding style is quite a personal thing, and I don't think there is a
> real right or wrong in that. It was just something I came across while
> browsing through the kernel documentation.

Then you should have encountered Documentation/CodingStyle and realised
that it's not a personal thing, it's a Linus thing, and anyone who does
different is wrong. And quite probably stupid too, knowing Linus.

> I like putting my accolades on separate lines so I can clearly see
> where blocks begin and end, but others like to put their opening
> accolade at the end of the if/while/for/function. Like I said earlier,
> whatever works for you. :-)

Were I to code using my preferred coding style, as one of the gatekeepers
of our company kernel, I'd then have to reject all my patches and ask
myself to do them again. So no, when it comes to Linux kernel programming,
it's whatever works for upstream that matters.

Phil Carmody

unread,
May 30, 2011, 4:14:59 AM5/30/11
to
Keith Thompson <ks...@mib.org> writes:
> Nizumzen <chi...@mcnuggets.com> writes:
> > On 2011-05-29 02:49:23 +0100, Keith Thompson said:
> [...]
> >> The problem is that I've seen () referred to as brackets, and []
> >> as braces. I think there are differences between US and UK usage.
> >
> > Correct. In England we refer to () as brackets and [] as square brackets.
>
> Do you not use the word parentheses? If someone else refers to

They all brace, and therefore they're all brackets.

> parentheses, do they unambiguously refer to ()?

Parentheses shouldn't be unambiguous. Parenthesis is a description of
the *meaning* of the pair of symbols, namely marking something as
parenthetical. In C, the parenthetical markup is traditionally /* this
one */. Editorial parenthetical markup is usually [this one]. It's
generally only prose that uses (this form).

> And what do you call {}?

Curly {brackets,braces}, but mostly the former (see my first paragraph
above). The latter only by those who discuss code with Americans a lot.

Angel

unread,
May 30, 2011, 4:51:09 AM5/30/11
to
On 2011-05-30, Phil Carmody <thefatphi...@yahoo.co.uk> wrote:
> Angel <angel...@spamcop.net> writes:
>>
>> Coding style is quite a personal thing, and I don't think there is a
>> real right or wrong in that. It was just something I came across while
>> browsing through the kernel documentation.
>
> Then you should have encountered Documentation/CodingStyle and realised
> that it's not a personal thing, it's a Linus thing, and anyone who does
> different is wrong. And quite probably stupid too, knowing Linus.

I've never personally dealt with Linus, but I've read the stories, and
seen the flamewars on LKML. The only "famous" Linux people I've had
personal encounters with (Andrew Morton and David Miller, while analyzing
and solving a bug I had reported) were much more relaxed, IMHO anyway.

>> I like putting my accolades on separate lines so I can clearly see
>> where blocks begin and end, but others like to put their opening
>> accolade at the end of the if/while/for/function. Like I said earlier,
>> whatever works for you. :-)
>
> Were I to code using my prefered coding style, as one of the gatekeepers
> of our company kernel, I'd then have to reject all my patches and ask
> myself to do them again. So no, when it comes to linux kernel programming,
> it's whatever works for upstream that matters.

As long as it's readable, not too confusing and does what it should do,
it doesn't really matter where exactly that curly brace is, right?

Phil Carmody

unread,
May 30, 2011, 8:02:10 AM5/30/11
to
Keith Thompson <ks...@mib.org> writes:
> Angel <angel...@spamcop.net> writes:
> > On 2011-05-28, Keith Thompson <ks...@mib.org> wrote:
> >> Angel <angel...@spamcop.net> writes:
> >> [...]
> >>> Coding style is quite a personal thing, and I don't think there is a
> >>> real right or wrong in that. It was just something I came across while
> >>> browsing through the kernel documentation.
> >>>
> >>> I like putting my accolades on separate lines so I can clearly see
> >>> where blocks begin and end, but others like to put their opening
> >>> accolade at the end of the if/while/for/function. Like I said earlier,
> >>> whatever works for you. :-)
> >>
> >> Are you sure that "accolade" is the word you're looking for?
> >
> > Like I pointed out in another thread, it is the correct word in Dutch.
> > I'm not a native English speaker.
> >
> > It does have that meaning in English too, but only in the context of
> > sheet music, it seems. I think "curly brace" is the correct term, no?
> >
> > Anyway, I mean these things: {}
>
> Understood.
>
> I just checked on dictionary.com, and one of its sources shows "a rare
> word for brace" as one of the lesser meanings.

http://dictionary.reference.com/browse/accolade is enlightening (if you like
etymologies and historical meanings):

1. any award, honor, or laudatory notice: The play received accolades from the press.
2. a light touch on the shoulder with the flat side of the sword or formerly by an embrace, done in the ceremony of conferring knighthood.
3. the ceremony itself.
4. Music. a brace joining several staves.
5. Architecture.
a. an archivolt or hood molding having more or less the form of an ogee arch.
b. a decoration having more or less the form of an ogee arch, cut into a lintel or flat arch.

Origin:
1615-25; < French, derivative of a(c)colée embrace (with -ade [-ade#1]), noun use of feminine past participle of a(c)coler, Old French verbal derivative of col neck (see collar) with a- [a-#5]

So it all makes perfect sense if you can join a few dots.

> I had never heard of
> that usage. (And I'm reasonably familiar with sheet music.)
>
> Interesting stuff (though not quite topical). And your English is
> certainly much better than my anything-other-than-English!

Ditto.

Chris H

unread,
May 30, 2011, 9:42:26 AM5/30/11
to
In message <iruj2b$49t$1...@speranza.aioe.org>, Nizumzen

<chi...@mcnuggets.com> writes
>On 2011-05-29 20:49:54 +0100, Keith Thompson said:
>
>> Nizumzen <chi...@mcnuggets.com> writes:
>>> On 2011-05-29 02:49:23 +0100, Keith Thompson said:
>> [...]
>>>> The problem is that I've seen () referred to as brackets, and []
>>>> as braces. I think there are differences between US and UK usage.
>>> Correct. In England we refer to () as brackets and [] as square
>>>brackets.
>> Do you not use the word parentheses? If someone else refers to
>> parentheses, do they unambiguously refer to ()?
>> And what do you call {}?
>
>I very rarely hear anyone outside of the programming community refer to
>them as parentheses. Some do I am sure but round where I am everyone
>calls them brackets in general day to day conversation.

A lot of people (though fewer now than in the past) call () parentheses
in normal speech.

>Yes, parentheses means () to me. But maybe that is just because I am
>used to the programming lingo? I have no idea what someone with no
>programming experience would think.

The same.

--
Support Sarah Palin for the next US President
Go Palin! Go Palin! Go Palin!
In God We Trust! Rapture Ready!!!
http://www.sarahpac.com/


Morris Keesan

unread,
May 30, 2011, 12:17:08 PM5/30/11
to
On Sun, 29 May 2011 17:13:35 -0400, Shao Miller <sha0....@gmail.com>
wrote:

A few years ago, I was using an Artificial Intelligence textbook which
was written, and originally published, in England, and had been
republished in the US. In the course of preparing the book for US
publication, someone saw, in the chapter on LISP, the word "brackets",
and changed all of the related instances of () into [].


--
Morris Keesan -- mke...@post.harvard.edu

Nizumzen

unread,
May 30, 2011, 12:40:11 PM5/30/11
to
On 2011-05-30 14:42:26 +0100, Chris H said:

> In message <iruj2b$49t$1...@speranza.aioe.org>, Nizumzen
> <chi...@mcnuggets.com> writes
>> On 2011-05-29 20:49:54 +0100, Keith Thompson said:
>>
>>> Nizumzen <chi...@mcnuggets.com> writes:
>>>> On 2011-05-29 02:49:23 +0100, Keith Thompson said:
>>> [...]
>>>>> The problem is that I've seen () referred to as brackets, and []
>>>>> as braces. I think there are differences between US and UK usage.
>>>> Correct. In England we refer to () as brackets and [] as square
>>>> brackets.
>>> Do you not use the word parentheses? If someone else refers to
>>> parentheses, do they unambiguously refer to ()?
>>> And what do you call {}?
>>
>> I very rarely hear anyone outside of the programming community refer to
>> them as parentheses. Some do I am sure but round where I am everyone
>> calls them brackets in general day to day conversation.
>
> A lot of people (but few now than in the past) call () Parenthesis in
> normal speech.
>
>> Yes, parentheses means () to me. But maybe that is just because I am
>> used to the programming lingo? I have no idea what someone with no
>> programming experience would think.
>
> The same.

Well, your experience may vary, but round here it's brackets for everything.

Shao Miller

unread,
May 30, 2011, 1:52:34 PM5/30/11
to
D'oh; yikes. Perhaps it was an AI that noted this and performed the
change? :)

Sjouke Burry

unread,
May 30, 2011, 4:38:10 PM5/30/11
to
Chris H wrote:
> In message <iruj2b$49t$1...@speranza.aioe.org>, Nizumzen
> <chi...@mcnuggets.com> writes
>> On 2011-05-29 20:49:54 +0100, Keith Thompson said:
>>
>>> Nizumzen <chi...@mcnuggets.com> writes:
>>>> On 2011-05-29 02:49:23 +0100, Keith Thompson said:
>>> [...]
>>>>> The problem is that I've seen () referred to as brackets, and []
>>>>> as braces. I think there are differences between US and UK usage.
>>>> Correct. In England we refer to () as brackets and [] as square
>>>> brackets.
>>> Do you not use the word parentheses? If someone else refers to
>>> parentheses, do they unambiguously refer to ()?
>>> And what do you call {}?
>> I very rarely hear anyone outside of the programming community refer to
>> them as parentheses. Some do I am sure but round where I am everyone
>> calls them brackets in general day to day conversation.
>
> A lot of people (but few now than in the past) call () Parenthesis in
> normal speech.
>
>> Yes, parentheses means () to me. But maybe that is just because I am
>> used to the programming lingo? I have no idea what someone with no
>> programming experience would think.
>
> The same.
>
>
>
() haakjes
{} accoladen
[] blokhaken

Nobody

unread,
May 30, 2011, 8:54:35 PM5/30/11
to
On Mon, 30 May 2011 08:51:09 +0000, Angel wrote:

> As long as it's readable, not too confusing and does what it should do,
> it doesn't really matter where exactly that curly brace is, right?

Wrong.

On any large project, it's important that there's a single formatting
style used throughout the project. Otherwise, the next time that
someone edits the code using the correct style, you end up with noise in
the diffs which can make it hard to discern the substantive changes from
the formatting corrections.

Also, rigid adherence to a coding convention helps weed out people who
aren't team players at an early stage.

Tim Rentsch

unread,
May 31, 2011, 4:03:48 AM5/31/11
to
Francois Grieu <fgr...@gmail.com> writes:

> On 26/05/2011 16:21, Branimir Maksimovic wrote:
>> On 05/26/2011 04:04 PM, Francois Grieu wrote:
>>> [3] some compilers know that, when j is an int,
>>> if (j<1 || j>8)
>>> foo();
>>> can be rewritten as
>>> if ((unsigned)j-1u>7u)
>>> foo();
>>
>> No, it can't.
>
> Would you please illustrate by an example when that does not stand?
> Stick to C99, please.

On most implementations it will work, but it can fail if
UINT_MAX == INT_MAX. Of course an implementation is
allowed to know that and take it into account.

Tim Rentsch

unread,
May 31, 2011, 4:13:24 AM5/31/11
to
James Kuyper <james...@verizon.net> writes:

> On 05/26/2011 02:13 PM, Malcolm McLean wrote:
>> On May 26, 8:53 pm, Willem <wil...@toad.stack.nl> wrote:
> ...
>>> Are you sure ? As far as I know, casting a signed integer to unsigned
>>> is defined for negative numbers as being INT_MAX minus that number.
>>>
>> Plus one.
>> Invert and increment to negate, in twos complememnt
>
> That's UINT_MAX, not INT_MAX. And it's true whether or not 'int' is a
> 2's complement type; the rule is simply easier to implement in 2's
> complement.

Amusing that there were two levels of correction and
the result still isn't right.

Kleuskes & Moos

unread,
May 31, 2011, 4:59:55 AM5/31/11
to

I've been scratching my head for a bit, reading the above. In any
number system I can come up with, it seems that UINT_MAX > INT_MAX, if
only because of the sign bit required in signed integers.

So please, which situation did you have in mind where UINT_MAX ==
INT_MAX? The only one I can think of is a very silly definition in one
of the standard headers, making me want to strongly advise against
using such a compiler. God knows what other silliness is hidden in the
headers.

Tim Rentsch

unread,
May 31, 2011, 5:30:31 AM5/31/11
to

http://www.open-std.org/jtc1/sc22/wg14/www/docs/n1256.pdf

Please refer to section 6.2.6.2, paragraph 2.

Kleuskes & Moos

unread,
May 31, 2011, 5:55:09 AM5/31/11
to
On May 31, 11:30 am, Tim Rentsch <t...@alumni.caltech.edu> wrote:

Nice to know that you know where integers are defined in the standard,
but that does not really answer my question. On the contrary, from
what I read, it seems to validate my view, so please expound.

The section you mention explicitly endorses ones' and two's complement
representations, both of which obey the relation defined above, for
the simple reason that they need a sign bit (explicitly defined and
required in that same section).

James Kuyper

unread,
May 31, 2011, 6:36:04 AM5/31/11
to
On 05/31/2011 05:55 AM, Kleuskes & Moos wrote:
> On May 31, 11:30 am, Tim Rentsch <t...@alumni.caltech.edu> wrote:
>> "Kleuskes & Moos" <kleu...@xs4all.nl> writes:
...

>>> I've been scratching my head for a bit, reading the above. In any
>>> numbersystem i can come up with it seems that UINT_MAX > INT_MAX, if
>>> only because of the sign bit required in integers.

The closest the standard comes to directly constraining UINT_MAX
relative to INT_MAX is in 6.2.5p9: "The range of nonnegative values of a
signed integer type is a subrange of the corresponding unsigned integer
type, ...". That corresponds to the constraint UINT_MAX >= INT_MAX; it
is not violated by UINT_MAX==INT_MAX.

>>> So please, which situation did you have in mind where UINT_MAX ==
>>> INT_MAX? The only one i can think of is a very silly definition in one
>>> of the standard headers, making me want to give strong advice not to
>>> use that a compiler. God knows what other silliness is hidden in the
>>> headers.
>>
>> http://www.open-std.org/jtc1/sc22/wg14/www/docs/n1256.pdf
>>
>> Please refer to section 6.2.6.2, paragraph 2.
>
> Nice to know that you know where integers are defined in the standard,
> but that does not really answer my question. Much in contrast, from
> what i read, it seems to validate my view, so please expound.
>
> The chapter you mention endorses (explicitly) ones- and twos

> complement, ...

You've forgotten sign-and-magnitude, which is also explicitly permitted.

... both of which obey the relation defined above, both for


> the simple reason the need a (explicitly defined and demanded in the
> chapter you mention) sign-bit.

Actually, the relevant issue is 6.2.6.2p1, which specifies that unsigned
types can have padding bits. In particular, the bit which serves as a
sign bit for the corresponding signed type could be a padding bit for
the unsigned type.
This does not apply to those unsigned types which are prohibited from
having padding bits: unsigned char and the exact-sized types from
<stdint.h>, but unsigned int is allowed to have padding bits.
--
James Kuyper

Angel

unread,
May 31, 2011, 8:00:49 AM5/31/11
to

Mm, I suppose you have some valid points there. Thanks for pointing that
out to me, I never looked at it that way.

Kleuskes & Moos

unread,
May 31, 2011, 8:57:11 AM5/31/11
to
On May 31, 12:36 pm, James Kuyper <jameskuy...@verizon.net> wrote:
> On 05/31/2011 05:55 AM, Kleuskes & Moos wrote:
>
> > On May 31, 11:30 am, Tim Rentsch <t...@alumni.caltech.edu> wrote:
> >> "Kleuskes & Moos" <kleu...@xs4all.nl> writes:
> ...
> >>> I've been scratching my head for a bit, reading the above. In any
> >>> numbersystem i can come up with it seems that UINT_MAX > INT_MAX, if
> >>> only because of the sign bit required in integers.
>
> The closest the standard comes to directly constraining UINT_MAX
> relative to INT_MAX is in 6.2.5p9: "The range of nonnegative values of a
> signed integer type is a subrange of the corresponding unsigned integer
> type, ...". That corresponds to the constraint UINT_MAX >= INT_MAX; it
> is not violated by UINT_MAX==INT_MAX.

True. But then there's "not violating a constraint" and there's the
sheer, unadulterated silliness of actually creating an implementation
in which UINT_MAX==INT_MAX, since you would be throwing away half the
range of the unsigned integer (and that goes for all three mandated
number systems).

So while it may not violate the standard, it _is_ silly, and I would
very much like to know which compiler meets UINT_MAX==INT_MAX. Just so
that I can avoid it.

> >>> So please, which situation did you have in mind where UINT_MAX ==
> >>> INT_MAX? The only one i can think of is a very silly definition in one
> >>> of the standard headers, making me want to give strong advice not to
> >>> use that a compiler. God knows what other silliness is hidden in the
> >>> headers.
>
> >>http://www.open-std.org/jtc1/sc22/wg14/www/docs/n1256.pdf
>
> >> Please refer to section 6.2.6.2, paragraph 2.
>
> > Nice to know that you know where integers are defined in the standard,
> > but that does not really answer my question. Much in contrast, from
> > what i read, it seems to validate my view, so please expound.
>
> > The chapter you mention endorses (explicitly) ones- and twos
> > complement, ...
>
> You've forgotten sign-and-magnitude, which is also explicitly permitted.

Yes. I omitted that one. But that changes nothing.

> ... both of which obey the relation defined above, both for
>
> > the simple reason the need a (explicitly defined and demanded in the
> > chapter you mention) sign-bit.
>
> Actually, the relevant issue is 6.2.6.2p1, which specifies that unsigned
> types can have padding bits. In particular, the bit which serves as a
> sign bit for the corresponding signed type could be a padding bit for
> the unsigned type.

Hmmm... For the reason mentioned above, that would be silly, and I
suspect the padding has more to do with varying register sizes, the
occasional need to stuff a short int into a 32-bit register, and
aligning arrays on word boundaries.

> This does not apply to those unsigned types which are prohibited from
> having padding bits: unsigned char and the exact-sized types from
> <stdint.h>, but unsigned int is allowed to have padding bits.

True. And having these padding bits serves what purpose, in your
opinion? AFAIK the standards guys do nothing without a proper
rationale.

James Kuyper

unread,
May 31, 2011, 9:23:46 AM5/31/11
to
On 05/31/2011 08:57 AM, Kleuskes & Moos wrote:
> On May 31, 12:36 pm, James Kuyper <jameskuy...@verizon.net> wrote:
...

>> The closest the standard comes to directly constraining UINT_MAX
>> relative to INT_MAX is in 6.2.5p9: "The range of nonnegative values of a
>> signed integer type is a subrange of the corresponding unsigned integer
>> type, ...". That corresponds to the constraint UINT_MAX >= INT_MAX; it
>> is not violated by UINT_MAX==INT_MAX.
>
> True. But then there's "not violating a constraint" and there's sheer,
> unadulterated sillyness of actually creating an implementation in
> which UINT_MAX==INT_MAX, since you would be throwing away half the
> range of the unsigned integer (and that goes in all three mandated
> numbersystems).
>
> So while it may not violate the standard, it _is_ silly, and i would
> very much like to know which compiler meets UINT_MAX==INT_MAX. Just so
> that i can avoid it.

There have been implementations which used the floating point processor
to implement an integer type (probably 'long int'), essentially by using
only mantissa bits, leaving the exponent bits completely unused.
Compared with that, throwing away only half of the potentially
representable range seems pretty minor.

...


>> This does not apply to those unsigned types which are prohibited from
>> having padding bits: unsigned char and the exact-sized types from
>> <stdint.h>, but unsigned int is allowed to have padding bits.
>
> True. And having these padding bits serves what purpose in your
> opinion? IFAIK the standards guys do nothing without a proper
> rationale.

An obvious possibility is a platform with no hardware support for
unsigned integers - depending upon the precise details of its
instruction set, it could be easiest to implement unsigned ints as
signed ints with a range restricted to non-negative values. However, I
claim no familiarity with any such system.

I gather that padding bits were allowed because not allowing them would
make creation of a conforming implementation of C more difficult on some
platforms. The wording that describes padding bits was probably more
general than was needed to address the specific platforms the committee
was aware of, but I'm sure they were also concerned about possible
future platforms, and did not desire to unnecessarily constrain the
implementability of C. While they may be obscure platforms, the general
attitude of the C committee seems to be that C should be implementable
almost everywhere, even obscure platforms.
--
James Kuyper

Kleuskes & Moos

unread,
May 31, 2011, 10:50:06 AM5/31/11
to
On May 31, 3:23 pm, James Kuyper <jameskuy...@verizon.net> wrote:
> On 05/31/2011 08:57 AM, Kleuskes & Moos wrote:
>
> > On May 31, 12:36 pm, James Kuyper <jameskuy...@verizon.net> wrote:
> ...
> >> The closest the standard comes to directly constraining UINT_MAX
> >> relative to INT_MAX is in 6.2.5p9: "The range of nonnegative values of a
> >> signed integer type is a subrange of the corresponding unsigned integer
> >> type, ...". That corresponds to the constraint UINT_MAX >= INT_MAX; it
> >> is not violated by UINT_MAX==INT_MAX.
>
> > True. But then there's "not violating a constraint" and there's sheer,
> > unadulterated sillyness of actually creating an implementation in
> > which UINT_MAX==INT_MAX, since you would be throwing away half the
> > range of the unsigned integer (and that goes in all three mandated
> > numbersystems).
>
> > So while it may not violate the standard, it _is_ silly, and i would
> > very much like to know which compiler meets UINT_MAX==INT_MAX. Just so
> > that i can avoid it.
>
> There have been implementations which used the floating point processor
> to implement an integer type (probably 'long int'), essentially by using
> only mantissa bits, leaving the exponent bits completely unused.
> Compared with that, throwing away only half of the potentially
> representable range seems pretty minor.

Providing an example of even greater silliness does not prove that
throwing away half the range is any less silly than it was before.


> >> This does not apply to those unsigned types which are prohibited from
> >> having padding bits: unsigned char and the exact-sized types from
> >> <stdint.h>, but unsigned int is allowed to have padding bits.
>
> > True. And having these padding bits serves what purpose in your
> > opinion? IFAIK the standards guys do nothing without a proper
> > rationale.
>
> An obvious possibility is a platform with no hardware support for
> unsigned integers - depending upon the precise details of it's
> instruction set, it could be easiest to implement unsigned ints as
> signed ints with a range restricted to non-negative values. However, I
> claim no familiarity with any such system.

Such a system would not be able to operate, since every relative jump
instruction involves an addition (or a subtraction, which is basically
the same thing). So name that fabled platform.

> I gather that padding bits were allowed because not allowing them would
> make creation of a conforming implementation of C more difficult on some
> platforms. The wording that describes padding bits was probably more
> general than was needed to address the specific platforms the committee
> was aware of, but I'm sure they were also concerned about possible
> future platforms, and did not desire to unnecessarily constrain the
> implementability of C. While they may be obscure platforms, the general
> attitude of the C committee seems to be that C should be implementable
> almost everywhere, even obscure platforms.

Not so obscure that unsigned integers go unsupported; after all, every
physical and logical memory address is, basically, an unsigned
integer.

So, noting the absence of any argument otherwise, UINT_MAX==INT_MAX is
only possible in theory, and the chances of actually running into it
are virtually nonexistent, somewhere between "highly unlikely" and
"absolute zero". In practice UINT_MAX > INT_MAX, and I dare you to
show me an example to the contrary.


Seebs

unread,
May 31, 2011, 12:44:07 PM5/31/11
to
On 2011-05-31, Angel <angel...@spamcop.net> wrote:
> Mm, I suppose you have some valid points there. Thanks for pointing that
> out to me, I never looked at it that way.

The interesting part is that it still doesn't actually matter where the
braces are, just that they're consistent.

It's like which side of the road you drive on. It's not actually known
to be significant, but it helps immensely if everyone on a given road agrees.

-s
--
Copyright 2011, all wrongs reversed. Peter Seebach / usenet...@seebs.net
http://www.seebs.net/log/ <-- lawsuits, religion, and funny pictures
http://en.wikipedia.org/wiki/Fair_Game_(Scientology) <-- get educated!
I am not speaking for my employer, although they do rent some of my opinions.

Keith Thompson

unread,
May 31, 2011, 2:41:54 PM5/31/11
to
"Kleuskes & Moos" <kle...@xs4all.nl> writes:
> On May 31, 3:23 pm, James Kuyper <jameskuy...@verizon.net> wrote:
>> On 05/31/2011 08:57 AM, Kleuskes & Moos wrote:
>> > On May 31, 12:36 pm, James Kuyper <jameskuy...@verizon.net> wrote:
>> ...
>> >> The closest the standard comes to directly constraining UINT_MAX
>> >> relative to INT_MAX is in 6.2.5p9: "The range of nonnegative values of a
>> >> signed integer type is a subrange of the corresponding unsigned integer
>> >> type, ...". That corresponds to the constraint UINT_MAX >= INT_MAX; it
>> >> is not violated by UINT_MAX==INT_MAX.
>>
>> > True. But then there's "not violating a constraint" and there's sheer,
>> > unadulterated sillyness of actually creating an implementation in
>> > which UINT_MAX==INT_MAX, since you would be throwing away half the
>> > range of the unsigned integer (and that goes in all three mandated
>> > numbersystems).
>>
>> > So while it may not violate the standard, it _is_ silly, and i would
>> > very much like to know which compiler meets UINT_MAX==INT_MAX. Just so
>> > that i can avoid it.
[...]

>> An obvious possibility is a platform with no hardware support for
>> unsigned integers - depending upon the precise details of it's
>> instruction set, it could be easiest to implement unsigned ints as
>> signed ints with a range restricted to non-negative values. However, I
>> claim no familiarity with any such system.
>
> Such a system would not be able to operate, since every relative jmp-
> instruction involves the addition (or subtraction, which is basically
> the same). So name that fabled platform.

How do you know what's involved in relative jump instructions on a
hypothetical system that might not even exist?

>> I gather that padding bits were allowed because not allowing them would
>> make creation of a conforming implementation of C more difficult on some
>> platforms. The wording that describes padding bits was probably more
>> general than was needed to address the specific platforms the committee
>> was aware of, but I'm sure they were also concerned about possible
>> future platforms, and did not desire to unnecessarily constrain the
>> implementability of C. While they may be obscure platforms, the general
>> attitude of the C committee seems to be that C should be implementable
>> almost everywhere, even obscure platforms.
>
> Not so obscure that unsigned integers are not supported, after all,
> every physical and logical memory address is, basically, an unsigned
> integer.

On some systems, yes. On the hypothetical system in question, it's
likely that physical and logical memory addresses are represented as
signed integers. For example, if it's a 32-bit system, addresses might
range from -2147483648 to +2147483647.

> So, noting the absence of any argument otherwise, UINT_MAX==INT_MAX is
> only possible theoretically, but the chances of actually running into
> it are virtually nonexistent, that is, somewhere between "highly
> unlikely" and "absolute zero". In practice UINT_MAX > INT_MAX, and i
> dare you to show me an example to the contrary.

I do not claim that such an example exists, but it *could*. Perhaps it
could be a specialized system, not primarily designed with C in mind,
but intended to support an environment which has no need for unsigned
arithmetic (and no, address calculations don't *necessarily* require
unsigned integer arithmetic).

On such a system, it would make sense to have:
INT_MIN = -2147483648
INT_MAX = +2147483647
UINT_MAX = INT_MAX
and for unsigned int to have a single padding bit in place of
signed int's sign bit. And that would probably result in much more
efficient code than forcing unsigned int to have 32 value bits and
perform most unsigned operations in software. (Perhaps unsigned long
would be 32 bits wide, and be slower than unsigned int.)

The point is that C is designed to be implementable with reasonable
efficiency on a wide variety of systems.

There are more things in heaven and earth, Horatio, ...

Keith Thompson

May 31, 2011, 2:43:40 PM
Seebs <usenet...@seebs.net> writes:
> On 2011-05-31, Angel <angel...@spamcop.net> wrote:
>> Mm, I suppose you have some valid points there. Thanks for pointing that
>> out to me, I never looked at it that way.
>
> The interesting part is that it still doesn't actually matter where the
> braces are, just that they're consistent.
>
> It's like which side of the road you drive on. It's not actually known
> to be significant, but it helps immensely if everyone on a given road agrees.

Indeed. I've worked under some code layout standards that I find
extremely ugly, but having parts of the code in my own preferred style
would be far worse.

J. J. Farrell

May 31, 2011, 2:56:26 PM
Kleuskes & Moos wrote:
> ...

> So, noting the absence of any argument otherwise, UINT_MAX==INT_MAX is
> only possible theoretically, but the chances of actually running into
> it are virtually nonexistent, that is, somewhere between "highly
> unlikely" and "absolute zero". In practice UINT_MAX > INT_MAX, and i
> dare you to show me an example to the contrary.

Why on earth should he? Who ever claimed such an environment exists, or
isn't silly? It's legal in C for such an environment to exist, that's
all. If someone is choosing to write code which is portable to all
possible legal C implementations, they have to allow for it. If they
only need to make their code portable to every environment they could
ever predictably come across, they can
ignore it.

Francois Grieu

May 31, 2011, 3:57:36 PM
On 31/05/2011 10:03, Tim Rentsch wrote:
> Francois Grieu<fgr...@gmail.com> writes:
>
>> On 26/05/2011 16:21, Branimir Maksimovic wrote:
>>> On 05/26/2011 04:04 PM, Francois Grieu wrote:
>>>> [3] some compilers know that, when j is an int,
>>>> if (j<1 || j>8)
>>>> foo();
>>>> can be rewritten as
>>>> if ((unsigned)j-1u>7u)
>>>> foo();
>>>
>>> No, it can't.
>>
>> Would you please illustrate by an example when that does not stand?
>> Stick to C99, please.
>
> On most implementations it will work, but it can fail if
> UINT_MAX == INT_MAX.(..)

I agree that an implementation could have UINT_MAX == INT_MAX
(the example of using a floating point engine is fine).

But, assuming that, for what value of j would the above
rewriting become invalid? I do not find any.

Francois Grieu

Francois Grieu

May 31, 2011, 4:03:39 PM
I just wrote:

Got it: j = -INT_MAX; I stand corrected.

Francois Grieu

Kleuskes & Moos

Jun 1, 2011, 4:46:00 AM
On May 31, 8:41 pm, Keith Thompson <ks...@mib.org> wrote:

What other methods of doing relative jumps did you have in mind? Since
you think there's an alternative, it's up to you to name it.


<snip>

> > Not so obscure that unsigned integers are not supported, after all,
> > every physical and logical memory address is, basically, an unsigned
> > integer.
>
> On some systems, yes.  On the hypothetical system in question, it's
> likely that physical and logical memory addresses are represented as
> signed integers.  For example, if it's a 32-bit system, addresses might
> range from -2147483648 to +2147483647.

Ah. Another hypothetical system that defies the laws of common
sense... Great, what advantages does your hypothetical system with
signed memory addresses have? More importantly, how does that jive
with the electronics of the address-bus of that hypothetical system?

Objections concerning some hypothetical system which uses signed
integers as memory addresses and does not support unsigned integers are
not taken quite seriously on my side of the NNTP server. Why not
invent a hypothetical system which uses pink bunnies to address
memory?

> > So, noting the absence of any argument otherwise, UINT_MAX==INT_MAX is
> > only possible theoretically, but the chances of actually running into
> > it are virtually nonexistent, that is, somewhere between "highly
> > unlikely" and "absolute zero". In practice UINT_MAX > INT_MAX, and i
> > dare you to show me an example to the contrary.
>
> I do not claim that such an example exists, but it *could*.

Yes. And the chip-select signal might be delivered by invisible pink
unicorns, the Carry Flag might be hoisted by Daffy Duck and interrupts
might be implemented by Yosemite Sam, firing his guns at the hootenest-
tootenest-shootenest programmable interrupt controller north, east,
south AAAAND west of the Pecos.

To me, the flurry of hypotheticals merely indicates that i was right,
but you don't like admitting it, so you're grasping at hypothetical
straws.

> Perhaps it could be a specialized system, not primarily designed with C
> in mind, but intended to support an environment which has no need for
> unsigned arithmetic (and no, address calculations don't *necessarily*
> require unsigned integer arithmetic).

Ok. So name a (non-hypothetical) example of addresses not being
unsigned integers. I wager a case of Grolsch you won't be able to. Not
because it's not possible, just because it's impractical.

Besides, a pattern of bits on the address bus, and this would be the
best objection you can make, can be interpreted as signed _or_
unsigned at the whim of whomsoever is interpreting it. It really makes
no difference, it's just a question of how you interpret them.

But, given the practicalities of hardware design, they are usually
interpreted as being unsigned, simply because it's more convenient.

> On such a system, it would make sense to have:
>     INT_MIN = -2147483648
>     INT_MAX = +2147483647
>     UINT_MAX = INT_MAX
> and for unsigned int to have a single padding bit in place of
> signed int's sign bit.  And that would probably result in much more
> efficient code than forcing unsigned int to have 32 value bits and
> perform most unsigned operations in software.  (Perhaps unsigned long
> would be 32 bits wide, and be slower than unsigned int.)

Ok.

Let me pose two simple questions...

How does a CPU add 1 and 1 and arrive at the (correct) answer: 2?
How do signed and unsigned versions of this operation differ?

The answer to the question "why doesn't anyone design a computer that
does not support unsigned arithmetic" should be obvious, if you got the
above two questions right, and hypothetical, highly impractical systems
should then be laid to rest.

> The point is that C is designed to be implementable with reasonable
> efficiency on a wide variety of systems.

True. But i doubt you'll find ANY that match the exotic, nay,
eccentric hardware you describe. I still dare you to name a single
CPU that does not support unsigned integers, and i'm very confident
you won't find any.

> There are more things in heaven and earth, Horatio, ...

True. But that's no argument.

Willem

Jun 1, 2011, 5:07:45 AM
Kleuskes & Moos wrote:
) On May 31, 3:23 pm, James Kuyper <jameskuy...@verizon.net> wrote:
)> An obvious possibility is a platform with no hardware support for
)> unsigned integers - depending upon the precise details of it's
)> instruction set, it could be easiest to implement unsigned ints as
)> signed ints with a range restricted to non-negative values. However, I
)> claim no familiarity with any such system.
)
) Such a system would not be able to operate, since every relative jmp-
) instruction involves the addition (or subtraction, which is basically
) the same). So name that fabled platform.

What do relative jump instructions have to do with unsigned integers?


SaSW, Willem
--
Disclaimer: I am in no way responsible for any of the statements
made in the above text. For all I know I might be
drugged or something..
No I'm not paranoid. You all think I'm paranoid, don't you !
#EOT

Tim Rentsch

Jun 1, 2011, 5:35:59 AM
"Kleuskes & Moos" <kle...@xs4all.nl> writes:

> but that does not really answer my question. [snip elaboration]

I think it will if you take some time to re-read it
carefully. To help, all text after "If the sign bit is
zero," does not bear on the INT_MAX == UINT_MAX question.

Ike Naar

Jun 1, 2011, 6:07:26 AM
On 2011-06-01, Kleuskes & Moos <kle...@xs4all.nl> wrote:
> Ah. Another hypothetical system that defies the laws of common
> sense... Great, what advantages does your hypothetical system with
> signed memory addresses have? More importantly, how does that jive
> with the electronics of the address-bus of that hypothetical system?
>
> Objections concerning some hypothetical system which uses signed
> integers as memory addresses and does not support unsigned integers are
> not taken quite seriously on my side of the NNTP server. Why not
> invent a hypothetical system which uses pink bunnies to address
> memory?

http://en.wikipedia.org/wiki/Burroughs_large_systems describes
a family of non-hypothetical systems that used sign-and-magnitude
representation for integers (integers were simply floating-point
numbers with a zero exponent). These systems existed before the
C language became popular (Algol was their main high level language),
but UINT_MAX==INT_MAX would be a logical choice for a C implementation
on such a system.

Kleuskes & Moos

Jun 1, 2011, 7:12:45 AM
On Jun 1, 12:07 pm, Ike Naar <i...@iceland.freeshell.org> wrote:

Dating from the '60s. Ok. Welcome back to the Stone Age.

In short: UINT_MAX==INT_MAX is only found in hypothetical and
prehistoric machines. That still does not invalidate the point that
the situation in question is (anno Domini 2011) hypothetical at best.

There is a reason, and a very good one, too, that sign-and-magnitude
isn't used anymore. If you have a different opinion, please post the
hardware in question. I'd be curious.

Angel

Jun 1, 2011, 7:56:22 AM
On 2011-06-01, Kleuskes & Moos <kle...@xs4all.nl> wrote:
>
> In short: UINT_MAX==INT_MAX is only found in hypothetical and
> prehistoric machines. Which still does not invalidate the point that
> the situation in question is (anno domini 2011) hypothetical at best.
>
> There is a reason, and a very good one, too, that sign-and-magnitude
> isn't used anymore. If you have a different opinion, please post the
> hardware in question. I'd be curious.

Regardless of what is available now, you don't know what the future
might bring. In order to ensure maximum portability, it's best not to
make any assumptions about the implementation unless you really have to.

There is already a lot of broken software out there because programmers
assumed 32-bit integers (even though 64-bit systems have been around
longer than most people think), we really don't need more broken
software because programmers make assumptions about UINT_MAX that may
or may not be true in the future.

Noob

Jun 1, 2011, 8:54:55 AM

Would this work? (I'm uncertain about preprocessor arithmetic type promotion.)

#if UINT_MAX <= INT_MAX
#error UNSUPPORTED PLATFORM !!!
#endif

Angel

Jun 1, 2011, 9:07:00 AM
On 2011-06-01, Noob <ro...@127.0.0.1> wrote:
>
> Would this work? (I'm uncertain about preprocessor arithmetic type promotion.)
>
> #if UINT_MAX <= INT_MAX
> #error UNSUPPORTED PLATFORM !!!
> #endif

Assuming that you include limits.h before this test, I don't see any
reason why this shouldn't work.

James Kuyper

Jun 1, 2011, 9:29:27 AM
On 05/31/2011 10:50 AM, Kleuskes & Moos wrote:
> On May 31, 3:23 pm, James Kuyper <jameskuy...@verizon.net> wrote:
...

>> An obvious possibility is a platform with no hardware support for
>> unsigned integers - depending upon the precise details of it's
>> instruction set, it could be easiest to implement unsigned ints as
>> signed ints with a range restricted to non-negative values. However, I
>> claim no familiarity with any such system.
>
> Such a system would not be able to operate, since every relative jmp-
> instruction involves the addition (or subtraction, which is basically
> the same). ...

You can't do addition or subtraction with signed integers? That's news
to me.

> ... So name that fabled platform.

Can you not read? I said "I claim no familiarity with any such system".
I suppose I could assign an arbitrary name to a purely hypothetical
system, but to what end?

--
James Kuyper

James Kuyper

Jun 1, 2011, 10:04:10 AM
On 06/01/2011 04:46 AM, Kleuskes & Moos wrote:
> On May 31, 8:41 pm, Keith Thompson <ks...@mib.org> wrote:
>> "Kleuskes & Moos" <kleu...@xs4all.nl> writes:
>>> On May 31, 3:23 pm, James Kuyper <jameskuy...@verizon.net> wrote:
...
>>>> An obvious possibility is a platform with no hardware support for
>>>> unsigned integers - depending upon the precise details of it's
>>>> instruction set, it could be easiest to implement unsigned ints as
>>>> signed ints with a range restricted to non-negative values. However, I
>>>> claim no familiarity with any such system.
>>
>>> Such a system would not be able to operate, since every relative jmp-
>>> instruction involves the addition (or subtraction, which is basically
>>> the same). So name that fabled platform.
>>
>> How do you know what's involved in relative jump instructions on a
>> hypothetical system that might not even exist?
>
> What other methods of doing relative jumps did you have in mind? Since
> you think there's an alternative, it's up to you to name it.

How about adding a signed integer value to the current address, which is
also a signed integer, giving a result which is also a signed integer,
and therefore a valid address on that platform?

>>> Not so obscure that unsigned integers are not supported, after all,
>>> every physical and logical memory address is, basically, an unsigned
>>> integer.
>>
>> On some systems, yes. On the hypothetical system in question, it's
>> likely that physical and logical memory addresses are represented as
>> signed integers. For example, if it's a 32-bit system, addresses might
>> range from -2147483648 to +2147483647.
>
> Ah. Another hypothetical system that defies the laws of common
> sense...

No, it's not another hypothetical system, it's a more specific example
of the same hypothetical. Keith said as much: "On the hypothetical
system in question ...". You might try reading more closely.

...


> To me, the flurry of hypotheticals merely indicates that i was right,
> but you don't like admitting it, so you're grasping at hypothetical
> straws.

No, it simply means that we prefer to avoid writing code that relies
upon guarantees not provided by the standard. You don't care about that,
and you have every right not to care about it. With that right comes the
responsibility of accepting the consequences in the event (which you
consider very unlikely to occur) that your code needs to be ported to an
implementation which violates them.

> Besides, a pattern of bits on the address bus, and this would be the
> best objection you can make, can be interpreted as signed _or_
> unsigned at the whim of whomsoever is interpreting it. It really makes
> no difference, it's just a question of how you interpret them.

It makes no difference in 2's complement; it can make a big difference
in the other two permitted representations of signed integers. But you
are, of course, free to assume that 2's complement is the only
representation in use now or ever again in the future, if that's what
you want to assume.

...


>> On such a system, it would make sense to have:
>>     INT_MIN = -2147483648
>>     INT_MAX = +2147483647
>>     UINT_MAX = INT_MAX

Actually, Keith's example is somewhat poorly chosen. Because of the
equivalence between 2's complement signed operations and corresponding
unsigned operations, a platform which could only natively support signed
math would have to be using one of the other two representations.

>> and for unsigned int to have a single padding bit in place of
>> signed int's sign bit. And that would probably result in much more
>> efficient code than forcing unsigned int to have 32 value bits and
>> perform most unsigned operations in software. (Perhaps unsigned long
>> would be 32 bits wide, and be slower than unsigned int.)
>
> Ok.
>
> Let me pose two simple questions...
>
> How does a CPU add 1 and 1 and arrive at the (correct) answer: 2?

We postulated that this hypothetical platform has built-in hardware
support for signed arithmetic. Do you need details in the form of
hardware layout and a list of the supported machine instructions? I'm
not a hardware designer, I'm sure I'd make lots of mistakes in any
attempt to provide such a specification. Are you really in doubt about
the feasibility of implementing signed arithmetic?

> How do signed and unsigned versions of this operation differ?

The signed instructions interpret the memory as signed integers in,
let's say, 1's complement representation. Unsigned versions of the
operation are non-existent, and unsigned arithmetic must therefore be
emulated, since the signed operations would not produce the correct
result if the unsigned result would be greater than INT_MAX, or if the
result of applying those operations would be negative.

> The question "why doesn't anyone design a computer that does not
> support unsigned arithmatic" should be obvious, if you got the above
> two questions right, and hypothetical, highly unpractical systems
> should then be laid to rest.

You'll need to explain the point you're trying to make. It's not at all
clear to me what problem you're thinking of, so I can't evaluate whether
it makes any sense to worry about that problem.

--
James Kuyper

Kleuskes & Moos

Jun 1, 2011, 10:46:53 AM
On Jun 1, 1:56 pm, Angel <angel+n...@spamcop.net> wrote:

> On 2011-06-01, Kleuskes & Moos <kleu...@xs4all.nl> wrote:
>
> > In short: UINT_MAX==INT_MAX is only found in hypothetical and
> > prehistoric machines. Which still does not invalidate the point that
> > the situation in question is (anno domini 2011) hypothetical at best.
>
> > There is a reason, and a very good one, too, that sign-and-magnitude
> > isn't used anymore. If you have a different opinion, please post the
> > hardware in question. I'd be curious.
>
> Regardless of what is available now, you don't know what the future
> might bring. In order to ensure maximum portability, it's best not to
> make any assumptions about the implementation unless you really have to.
>
> There is already a lot of broken software out there because programmers
> assumed 32-bit integers (even though 64-bit systems have been around
> longer than most people think), we really don't need more broken
> software because programmers make assumptions about UINT_MAX that may
> or may not be true in the future.

Blahdiblahdiblah...

First off, i'm not assuming anything, just making the point that
UINT_MAX == INT_MAX is valid in hypothetical cases only, and when an
(alleged) counterexample is finally found, it turns out to be
straight out of the Jurassic age and never supported C in the first
place. Instead it ran ALGOL and (with some prodding) COBOL.

There's good programming practice, such as not assuming any type to be
of any particular size, unless dictated by the standard, and there's
the sheer silliness portrayed in this subthread.

Of course the standards guys allow for all kinds of stuff nobody would
normally use anymore, just to suit the fringe cases, but then there are
fringe cases and there are "CPUs that don't support unsigned integers".

Noob

Jun 1, 2011, 11:12:37 AM
Kleuskes & Moos wrote:

> How does a CPU add 1 and 1 and arrive at the (correct) answer: 2?
> How do signed and unsigned versions of this operation differ?

Tangential nit :-)

Not all operations can ignore the "signed-ness" of their operands.
Consider a platform (such as x86) with 32-bit registers, and a
32x32->64 multiply operation.

Regards.

Kleuskes & Moos

Jun 1, 2011, 11:21:00 AM
On Jun 1, 4:04 pm, James Kuyper <jameskuy...@verizon.net> wrote:
> On 06/01/2011 04:46 AM, Kleuskes & Moos wrote:
>
> > On May 31, 8:41 pm, Keith Thompson <ks...@mib.org> wrote:
> >> "Kleuskes & Moos" <kleu...@xs4all.nl> writes:
> >>> On May 31, 3:23 pm, James Kuyper <jameskuy...@verizon.net> wrote:
> ...
> >>>> An obvious possibility is a platform with no hardware support for
> >>>> unsigned integers - depending upon the precise details of it's
> >>>> instruction set, it could be easiest to implement unsigned ints as
> >>>> signed ints with a range restricted to non-negative values. However, I
> >>>> claim no familiarity with any such system.
>
> >>> Such a system would not be able to operate, since every relative jmp-
> >>> instruction involves the addition (or subtraction, which is basically
> >>> the same). So name that fabled platform.
>
> >> How do you know what's involved in relative jump instructions on a
> >> hypothetical system that might not even exist?
>
> > What other methods of doing relative jumps did you have in mind? Since
> > you think there's an alternative, it's up to you to name it.
>
> How about adding a signed integer value to the current address, which is
> also a signed integer, giving a result which is also a signed integer,
> and therefore a valid address on that platform

As i already pointed out elsethread, it doesn't make any difference.
Most relative jumps rely on signed integers for the offset anyway, but
ultimately any idea of 'signed integers' is something _we_ assign to a
specified set of bits; it's not inherent in the contents of the
address bus, and memory chips only look to see whether an address line is
high or low. So you can interpret addresses any way you want; treating
them as unsigned integers is simply more convenient.

But hey... If you prefer to view them as signed integers...

> >>> Not so obscure that unsigned integers are not supported, after all,
> >>> every physical and logical memory address is, basically, an unsigned
> >>> integer.
>
> >> On some systems, yes. On the hypothetical system in question, it's
> >> likely that physical and logical memory addresses are represented as
> >> signed integers. For example, if it's a 32-bit system, addresses might
> >> range from -2147483648 to +2147483647.
>
> > Ah. Another hypothetical system that defies the laws of common
> > sense...
>
> No, it's not another hypothetical system, it's a more specific example
> of the same hypothetical. Keith said as much: "On the hypothetical
> system in question ...". You might try reading more closely.

The 'hypothetical' was enough. 'Hypothetical' means 'I have no
counterexample so i'll just start playing games'. At least Ike dug up
a nice fossil of a brontosaur.

> > To me, the flurry of hypotheticals merely indicates that i was right,
> > but you don't like admitting it, so you're grasping at hypothetical
> > straws.
>
> No, it simply means that we prefer to avoid writing code that relies
> upon guarantees not provided by the standard.

How nice.

> You don't care about that,

Nope. I just program _real_ computers instead of hypothetical ones.
Besides, your claim to know what i care about or not is not only
presumptuous, but also quite pigheaded.

> and you have every right not to care about it.

How nice.

> With that right comes the
> responsibility of accepting the consequences in the event (which you
> consider very unlikely to occur) that your code needs to be ported to an
> implementation which violates them.

So far, we've got one example out of the dark ages which _might_
satisfy the condition that started this subthread, and a host of
hypothetical and highly implausible systems which might, if the
designers of the CPU were _very_ silly.

Your rather pompous assertions about my sense of responsibility do
nothing to convince me otherwise, but rather achieve the opposite and
convince me you have no arguments but ad-hominem ones.

> > Besides, a pattern of bits on the address bus, and this would be the
> > best objection you can make, can be interpreted as signed _or_
> > unsigned at the whim of whomsoever is interpreting it. It really makes
> > no difference, it's just a question of how you interpret them.
>
> It makes no difference in 2's complement; it can make a big difference
> in the other two permitted representations of signed integers. But you
> are, of course, free to assumes that 2's complement is the only
> representation in use now or ever again in the future, if that's what
> you want to assume.

If i ever encounter a UNIVAC 7094 again, i'll be on my toes. And
thanks for pointing out with such clarity why the other two mandated
number systems are obsolete by now.
<snip boring stuff>

> > Let me pose two simple questions...
>
> > How does a CPU add 1 and 1 and arrive at the (correct) answer: 2?
>
> We postulated that this hypothetical platform has built-in hardware
> support for signed arithmetic.

Anyone can postulate anything, but no one can run any software on
postulated machines, so the answer is already wrong.

> Do you need details in the form of
> hardware layout and a list of the supported machine instructions?

Just the basic circuit will do. Even a list of components for a 2-bit
adder would suffice.

> I'm
> not a hardware designer, I'm sure I'd make lots of mistakes in any
> attempt to provide such a specification. Are you really in doubt about
> the feasibility of implementing signed arithmetic?

Nope. I just wanted to know whether or not you have any idea what
you're talking about. It appears you have no idea.

See http://en.wikipedia.org/wiki/Full_adder

> > How do signed and unsigned versions of this operation differ?
>
> The signed instructions interpret the memory as signed integers in,
> let's say, 1's complement representation. Unsigned versions of the
> operation are non-existent, and unsigned arithmetic must therefore be
> emulated, since the signed operations would not produce the correct
> result if the unsigned result would be greater than INT_MAX, or if the
> result of applying those operations would be negative.
>
> > The question "why doesn't anyone design a computer that does not
> > support unsigned arithmetic" should be obvious, if you got the above
> > two questions right, and hypothetical, highly unpractical systems
> > should then be laid to rest.
>
> You'll need to explain the point you're trying to make. It's not at all
> clear to me what problem you're thinking of, so I can't evaluate whether
> it makes any sense to worry about that problem.

In terms of hardware, the actual gates being used, both 1's complement
(mainly because of the double 0) and sign-magnitude are expensive.
You need extra hardware to take care of a lot of things, and extra
hardware costs time and money and dissipates power.
2's complement can use the same circuits to do addition and subtraction
(and hence comparisons), signed and unsigned alike.
That, basically, is why every processor uses 2's complement nowadays,
instead of one of the others.

And that's why i feel quite comfortable knowing that UINT_MAX==INT_MAX
occurs only in hypothetical cases.

Noob

Jun 1, 2011, 11:38:17 AM
Nizumzen wrote:

> Just ditch GCC for Clang and your life will improve markedly.
> Plus Clang's static analyser is awesome.

Steve? Is that you?!

Keith Thompson

Jun 1, 2011, 11:38:49 AM
"Kleuskes & Moos" <kle...@xs4all.nl> writes:

I have no idea, and no, it's not up to me to name it. You've made an
assertion about how relative jump instructions work, presumably on *all*
systems.

>
> <snip>
>
>> > Not so obscure that unsigned integers are not supported, after all,
>> > every physical and logical memory address is, basically, an unsigned
>> > integer.
>>
>> On some systems, yes.  On the hypothetical system in question, it's
>> likely that physical and logical memory addresses are represented as
>> signed integers.  For example, if it's a 32-bit system, addresses might
>> range from -2147483648 to +2147483647.
>
> Ah. Another hypothetical system that defies the laws of common
> sense... Great, what advantages does your hypothetical system with
> signed memory addresses have? More importantly, how does that jive
> with the electronics of the address-bus of that hypothetical system?

("jibe")

I wasn't aware that signed addresses defied the laws of common
sense. Perhaps they do. I do not claim that such systems have
any particular advantages, or even that they necessarily exist (I
honestly don't know whether they do or not), merely that they're
possible.

Now that I think about it, perhaps a virtual machine would be more
likely to have such characteristics, especially if it's implemented
in a language that doesn't support unsigned arithmetic (Pascal,
for example). I have some documentation on the old UCSD Pascal system;
I'll look into it later.

> Objections concerning some hypothetical system which uses signed
> integers as memory addresses and does not support unsigned integers are
> not taken quite seriously on my side of the NNTP server. Why not
> invent a hypothetical system which uses pink bunnies to address
> memory?

Why not? There's no particular reason such a system couldn't
support a conforming C implementation.

>> > So, noting the absence of any argument otherwise, UINT_MAX==INT_MAX is
>> > only possible theoretically, but the chances of actually running into
>> > it are virtually nonexistent, that is, somewhere between "highly
>> > unlikely" and "absolute zero". In practice UINT_MAX > INT_MAX, and i
>> > dare you to show me an example to the contrary.
>>
>> I do not claim that such an example exists, but it *could*.
>
> Yes. And the chip-select signal might be delivered by invisible pink
> unicorns, the Carry Flag might be hoisted by Daffy Duck and interrupts
> might be implemented by Yosemite Sam, firing his guns at the hootenest-
> tootenest-shootenest programmable interrupt controller north, east,
> south AAAAND west of the Pecos.

And you find these as plausible as UINT_MAX==INT_MAX?

> To me, the flurry of hypotheticals merely indicates that i was right,
> but you don't like admitting it, so you're grasping at hypothetical
> straws.

You were right about what, exactly?

This is not a zero-sum game where one of us wins and one of us loses.
It's a technical discussion.

Do any systems with UINT_MAX==INT_MAX actually exist? I don't know; I
make no claim one way or the other. You claim that there are no such
systems; as far as I know, you may be right.

But this is comp.lang.c, not comp.arch, and the main point I've been
making is that such systems are permitted by the C standard. (I'm
reasonably sure I'm right about that; if you agree, that doesn't mean
you're admitting defeat.)

>> Perhaps it could be a specialized system, not primarily designed with C
>> in mind, but intended to support an environment which has no need for
>> unsigned arithmetic (and no, address calculations don't *necessarily*
>> require unsigned integer arithmetic).
>
> Ok. So name a (non-hypothetical) example of addresses not being
> unsigned integers. I wager a case of Grolsch you won't be able to. Not
> because it's not possible, just because it's unpractical.

You're asking me to support a claim I never made.

> Besides, a pattern of bits on the address bus, and this would be the
> best objection you can make, can be interpreted as signed _or_
> unsigned at the whim of whomsoever is interpreting it. It really makes
> no difference, it's just a question of how you interpret them.

But ok, let's say we have a system with a 16-bit address space,
with addresses ranging from 0 to 65535, and with each byte in that
range being addressable. What happens if you start with a pointer
to memory location 65535, and then increment it? Unless there's
hardware checking, the result will probably point to address 0.
Or, equivalently, we started with address -1 and incremented it to
address 0.

> But, given the practicalities of hardware design, they are usually
> interpreted as being unsigned, simply because it's more convenient.

If address arithmetic doesn't check for overflow or wraparound, it may
be *conceptually* more convenient to think of addresses as unsigned, but
the hardware doesn't necessarily care.

>> On such a system, it would make sense to have:
>>     INT_MIN = -2147483648
>>     INT_MAX = +2147483647
>>     UINT_MAX = INT_MAX
>> and for unsigned int to have a single padding bit in place of
>> signed int's sign bit.  And that would probably result in much more
>> efficient code than forcing unsigned int to have 32 value bits and
>> perform most unsigned operations in software.  (Perhaps unsigned long
>> would be 32 bits wide, and be slower than unsigned int.)
>
> Ok.
>
> Let me pose two simple questions...
>
> How does a CPU add 1 and 1 and arrive at the (correct) answer: 2?

Magic. 8-)} Seriously, I'm not much of a hardware guy, but we can
agree that CPUs can add.

> How do signed and unsigned versions of this operation differ?

Depends on the system. On some systems, there might be a hardware trap
on overflow; the conditions in which that trap will be triggered might
differ for signed and unsigned add instructions. Maybe. And no, I
don't have concrete examples.

> The question "why doesn't anyone design a computer that does not
> support unsigned arithmetic" should be obvious, if you got the above
> two questions right, and hypothetical, highly unpractical systems
> should then be laid to rest.
>
>> The point is that C is designed to be implementable with reasonable
>> efficiency on a wide variety of systems.
>
> True. But i doubt you'll find ANY that match the exotic, nay,
> eccentric hardware you describe. I still dare you to name a single
> CPU that does not support unsigned integers, and i'm very confident
> you won't find any.

Again, I've never claimed that such a CPU exists.

>> There are more things in heaven and earth, Horatio, ...
>
> True. But that's no argument.

--

Keith Thompson

unread,
Jun 1, 2011, 11:41:44 AM6/1/11
to
James Kuyper <james...@verizon.net> writes:
> On 06/01/2011 04:46 AM, Kleuskes & Moos wrote:
>> On May 31, 8:41 pm, Keith Thompson <ks...@mib.org> wrote:
[...]

>>> On such a system, it would make sense to have:
>>> INT_MIN = -2147483648
>>> INT_MAX = +2147483647
>>> UINT_MAX = INT_MAX
>
> Actually, Keith's example is somewhat poorly chosen. Because of the
> equivalence between 2's complement signed operations and corresponding
> unsigned operations, a platform which could only natively support signed
> math would have to be using one of the other two representations.

I was thinking of a (hypothetical) system that traps on signed overflow.
This might be more plausible for a virtual machine than for real hardware.

[...]

Kleuskes & Moos

unread,
Jun 1, 2011, 12:28:08 PM6/1/11
to

Absolutely right.

Kleuskes & Moos

unread,
Jun 1, 2011, 12:37:40 PM6/1/11
to
> See http://en.wikipedia.org/wiki/Full_adder

>
> > > How do signed and unsigned versions of this operation differ?
>
> > The signed instructions interpret the memory as signed integers in,
> > let's say, 1's complement representation. Unsigned versions of the
> > operation are non-existent, and unsigned arithmetic must therefore be
> > emulated, since the signed operations would not produce the correct
> > result if the unsigned result would be greater than INT_MAX, or if the
> > result of applying those operations would be negative.
>
> > > The question "why doesn't anyone design a computer that does not
> > > support unsigned arithmetic" should be obvious, if you got the above
> > > two questions right, and hypothetical, highly unpractical systems
> > > should then be laid to rest.
>
> > You'll need to explain the point you're trying to make. It's not at all
> > clear to me what problem you're thinking of, so I can't evaluate whether
> > it makes any sense to worry about that problem.
>
> In terms of hardware, the actual gates being used, both 1's complement
> (mainly because of the double 0) and signed magnitude are expensive.
> You need extra hardware to take care of a lot of things and extra
> hardware costs time, money and dissipates power.
> 2's complement can use the same circuits to do addition, subtraction
> (and hence comparisons), signed and unsigned, using the same circuits.
> That, basically, is why every processor uses 2's complement nowadays,
> instead of one of the others.
>
> And that's why i feel quite comfortable knowing UINT_MAX==INT_MAX only
> in hypothetical cases.

Adding to that, i've never in my career encountered a situation where
i had to assume anything beyond what's in the standard about either
UINT_MAX or INT_MAX.

James Kuyper

unread,
Jun 1, 2011, 12:53:31 PM6/1/11
to
On 06/01/2011 11:21 AM, Kleuskes & Moos wrote:
> On Jun 1, 4:04 pm, James Kuyper <jameskuy...@verizon.net> wrote:
...

> The 'hypothetical' was enough. 'Hypothetical' means 'I have no
> counterexample so i'll just start playing games'.

It doesn't bother me that I have no counter-example, because the point
I'm making concerns only what the standard mandates and allows.
Keeping track of what's actually true on all current implementations of
C is something that would require more work than I have time for.

If that constitutes "playing games" in your book, then feel free to
think of it that way; I'll continue to think of what you're doing as
"making unjustified assumptions".

...


>> No, it simply means that we prefer to avoid writing code that relies
>> upon guarantees not provided by the standard.
>
> How nice.
>
>> You don't care about that,
>
> Nope. I just program _real_ computers instead of hypothetical ones.

I program real computers with code that will work even on hypothetical
ones, so long as there is a compiler on those machines which is at least
backwardly compatible with C99; which means my code will continue to
work even if your assumptions about what "real machines" do become
inaccurate.

...


> Besides, your claim to know what i care about or not is not only
> presumptuous, but also quite pigheaded.

You're correct, I can only infer what you care about from your actual
words, you could secretly care about these issues far more than your
words imply. But I can only pay attention to your words, I don't have
the power to read your mind. I'll have to form my guesses as to what you
care about based upon your actual words. If those guesses are
inaccurate, it's due to a failure of your words to correctly reflect
your true beliefs.

...


>> With that right comes the
>> responsibility of accepting the consequences in the event (which you
>> consider very unlikely to occur) that your code needs to be ported to an
>> implementation which violates them.

...


> Your rather pompous assertions about my sense of responsibility

I said nothing about your sense of responsibility, only your
responsibilities themselves. That you might be insensible of those
responsibilities seems entirely plausible, and even likely.

...


>> I'm
>> not a hardware designer, I'm sure I'd make lots of mistakes in any
>> attempt to provide such a specification. Are you really in doubt about
>> the feasibility of implementing signed arithmetic?
>
> Nope. I just wanted to know whether or not you have any idea what
> you're talking about. It appears you have no idea.

I have an idea, it's just not my specialty. I know enough to know that
all three of the methods permitted by the C standard for representing
signed integers have actually been implemented, on the hardware level,
on real machines, and that only one of those methods (admittedly, the
most popular) provides any support for your arguments. That is, it would
provide such support for your arguments, if it were the only possible
way of doing signed integer arithmetic; but it isn't.

...


>> You'll need to explain the point you're trying to make. It's not at all
>> clear to me what problem you're thinking of, so I can't evaluate whether
>> it makes any sense to worry about that problem.
>
> In terms of hardware, the actual gates being used, both 1's complement
> (mainly because of the double 0) and signed magnitude are expensive.

Which is very different from being impossible, which is what would need
to be the case to support your argument. The other ways of representing
signed integers have their own peculiar advantages, or they would never
have been tried. When our current technologies for making computer chips
have become obsolete, and something entirely different comes along to
replace them, with its own unique cost structure, the costs could
easily swing the other way.

I don't know whether this particular assumption will ever fail. But if
you make a habit of relying on such assumptions, as seems likely, I can
virtually guarantee that one of your assumptions will fail. Code which
avoids making unnecessary assumptions about such things will survive
such a transition; the programmers who have to port the other code will
spend a lot of time swearing at you.
--
James Kuyper

James Kuyper

unread,
Jun 1, 2011, 12:55:11 PM6/1/11
to
On 06/01/2011 12:37 PM, Kleuskes & Moos wrote:
...
> Adding to that, i've never in my career encountered a situation where
> i had to assume anything beyond what's in the standard about either
> UINT_MAX or INT_MAX.

Then why in the world are you arguing against restricting one's
assumptions to those guaranteed by the standard?


--
James Kuyper

Seebs

unread,
Jun 1, 2011, 1:53:49 PM6/1/11
to
On 2011-06-01, Kleuskes & Moos <kle...@xs4all.nl> wrote:
> Dating back from the 60's. Ok. Welcome back to the stone age.

Okay, key insight:

What makes sense in hardware has been known to change RADICALLY over the
course of a couple of decades.

> In short: UINT_MAX==INT_MAX is only found in hypothetical and
> prehistoric machines. Which still does not invalidate the point that
> the situation in question is (anno domini 2011) hypothetical at best.

But it does invalidate the presumed argument that it will *remain*
hypothetical and that it is reasonable to assume that it's irrelevant.

-s
--
Copyright 2011, all wrongs reversed. Peter Seebach / usenet...@seebs.net
http://www.seebs.net/log/ <-- lawsuits, religion, and funny pictures
http://en.wikipedia.org/wiki/Fair_Game_(Scientology) <-- get educated!
I am not speaking for my employer, although they do rent some of my opinions.

pete

unread,
Jun 1, 2011, 2:10:32 PM6/1/11
to
Noob wrote:
>
> J. J. Farrell wrote:

> > If someone is choosing to write code which is portable to all
> > possible legal C implementations, they have to allow for it.

> Would this work


> (I'm uncertain about pre-processor arithmetic type promotion)
>
> #if UINT_MAX <= INT_MAX
> #error UNSUPPORTED PLATFORM !!!
> #endif

You could change that to

#if UINT_MAX == INT_MAX

If (UINT_MAX < INT_MAX) is true, then it's not C.

--
pete

Kleuskes & Moos

unread,
Jun 1, 2011, 2:52:00 PM6/1/11
to

I wasn't. I was just making the point.
