
Is it good to use char instead of int to save memory?


dspfun

Mar 18, 2010, 11:40:23 AM
Hi,

Is it recommended to use char instead of int for variables that you
know never will contain a value larger than 255? A small example
follows:

#include <stdio.h>
int main(void)
{
    //int i = 1;
    char i = 1; //is this better than previous line?
    printf("i=%d\n", i);
    return 0;
}

What are the pros and cons of using char instead of int when values
will not exceed 255?

Brs,
Markus

Marc Boyer

Mar 18, 2010, 11:50:34 AM

First, you have to use signed char or unsigned char to get portable
code, because plain char may be either signed or unsigned.

Second, well, it depends... On modern PC CPUs, alignment constraints
lead common compilers to use the same amount of memory for a single
char variable as for a single int variable.

But it can be different in other contexts. For example, a colleague of
mine has worked on a DSP where int and char are both 32-bit values.

This is different with arrays.
int vali[1024];
char valc[1024];

In this case, valc will use less memory than vali (the ratio being
sizeof(char) ).

Marc Boyer

pete

Mar 18, 2010, 12:51:42 PM

I wouldn't use types lower ranking than int,
unless they were in an array.

It is possible, though not guaranteed,
that operations on char types are slower than on int.

It is also possible, though not guaranteed,
that char type objects are aligned on int boundaries,
meaning that the few bytes after a char type object
might not be usable.

--
pete

Dr Malcolm McLean

Mar 18, 2010, 12:28:14 PM
On 18 Mar, 11:40, dspfun <dsp...@hotmail.com> wrote:
>
> What are the pros and cons of using char instead of int when values
> will not exceed 255?
>
As others have said, you need to specify signed char or unsigned char.
Plain chars are for actual characters.
It's rare that memory is so tight that declaring a single variable as
"char" is a saving worth having, and operations on a char might
well be slower. int is meant to be the natural integer representation
for the machine.
However, if you have a large array, char might save significant space.
The obvious example is a raster buffer for image data. Usually we
allow 8 bits per RGB channel, so an array of unsigned chars is the
obvious representation.

spinoza1111

Mar 18, 2010, 12:34:38 PM

I'm sure one of the regulars will be all agog to remind you that chars
don't necessarily range between 0 and 255.

Pro:

* Memory saved
* Speed IF your "digital signal processor" has hardwired character
arithmetic
* Everyone will think you're smart

Con:

* Memory lost when your "digital signal processor" does char arith by
transforming the char into an integer
* Speed in the same scenario
* Negative numbers?
* Everyone will think you're an idiot

Decision:

When all else fails, program on purpose
When the wind is out of your sails, code what you mean
A char actor is not a number, and one would suppose
That arithmetic doesn't happen between consenting chars, jellybean.

Don't get cute, don't show off:
The urge to grandstand should be blown off.
If it's a natural number be a regular guy
And use int, say I with a sigh.

Eric Sosman

Mar 18, 2010, 12:41:02 PM
On 3/18/2010 7:40 AM, dspfun wrote:
> Hi,
>
> Is it recommended to use char instead of int for variables that you
> know never will contain a value larger than 255? A small example
> follows:
>
> #include<stdio.h>
> int main(void)
> {
> //int i = 1;
> char i = 1; //is this better than previous line?

No.

> printf("i=%d\n",i);
> return 0;
> }
>
> What are the pros and cons of using char instead of int when values
> will not exceed 255?

If you need [0..255] you should use unsigned char, not plain
char. (If you need [-127..127] make it signed char.)

Using a char (or short) variant instead of an int will reduce
data size. This is worth doing only if the data is "too big,"
which usually means one of two things:

- You are storing "billyuns and billyuns" of data instances.

- You are packing data into an externally-specified and size-
limited format ("Make it all fit in a 32-byte record").

On some processors, using sub-int data types may increase
code size (sometimes significantly) and execution time (usually
not by much).

--
Eric Sosman
eso...@ieee-dot-org.invalid

Julienne Walker

Mar 18, 2010, 12:58:50 PM
On Mar 18, 8:34 am, spinoza1111 <spinoza1...@yahoo.com> wrote:
> On Mar 18, 7:40 pm, dspfun <dsp...@hotmail.com> wrote:
>
> > Hi,
>
> > Is it recommended to use char instead of int for variables that you
> > know never will contain a value larger than 255?
>
> I'm sure one of the regulars will be all agog to remind you that chars
> don't necessarily range between 0 and 255.

Unfortunately, the OP didn't give us an absolute range of allowed
values to work with, so it would be unwise to assume the lower limit
of his allowed range given only the upper limit. In this case I would
ask for more information before answering with an unwarranted
assumption, or give multiple answers depending on the most reasonable
assumptions:

If we're talking about values that also won't be negative, an unsigned
char is fine and there's no need to talk about ranges, because the
8-bit range is the required minimum. On the other hand, if negative
values *are* allowed (i.e. signed char), the upper limit of 255 is
beyond the guaranteed minimum, and short or int would be better suited.

> Pro:


> *  Everyone will think you're smart
> Con:

> *  Everyone will think you're an idiot

I'm confident that one's mental abilities aren't judged by choice of
numeric data type in C. ^_^ Some people might, but for those people I
would recommend chilling out.

Ersek, Laszlo

Mar 18, 2010, 1:01:13 PM
In article <4BA221...@mindspring.com>, pete <pfi...@mindspring.com> writes:

> dspfun wrote:
>>
>> Is it recommended to use char instead of int for variables that you
>> know never will contain a value larger than 255?
>
> I wouldn't use types lower ranking than int,
> unless they were in an array.

This. The variable will be promoted, most of the time, to int or
unsigned int. Such enclosing expressions will have to prepare for both
signednesses. I'd say save those explicit casts and use unsigned int for
individual variables.

lacos

bartc

Mar 18, 2010, 1:25:58 PM
Julienne Walker wrote:
> On Mar 18, 8:34 am, spinoza1111 <spinoza1...@yahoo.com> wrote:
>> On Mar 18, 7:40 pm, dspfun <dsp...@hotmail.com> wrote:

>>> Is it recommended to use char instead of int for variables that you
>>> know never will contain a value larger than 255?
>>
>> I'm sure one of the regulars will be all agog to remind you that
>> chars don't necessarily range between 0 and 255.
>
> Unfortunately, the OP didn't give us an absolute range of allowed
> values to work with, so it would be unwise to assume the lower limit
> of his allowed range given only the upper limit. In this case I would
> ask for more information before answering with an unwarranted
> assumption, or give multiple answers depending on the most reasonable
> assumptions:

This is an even more pedantic answer than spinoza was anticipating.

I would guess the OP had a lower bound of 0 in mind, wouldn't you? Unless
there's a one in a thousand chance he's working with 9-bit chars. And one in
a million chance of something even weirder.

--
Bartc

Thad Smith

Mar 18, 2010, 2:43:59 PM
Eric Sosman wrote:
> On 3/18/2010 7:40 AM, dspfun wrote:

> Using a char (or short) variant instead of an int will reduce
> data size. This is worth doing only if the data is "too big,"
> which usually means one of two things:
>
> - You are storing "billyuns and billyuns" of data instances.
>
> - You are packing data into an externally-specified and size-
> limited format ("Make it all fit in a 32-byte record").

- You are nearing the limits of the processor's resources.


>
> On some processors, using sub-int data types may increase
> code size (sometimes significantly) and execution time (usually
> not by much).

On some processors using int data types increases code size and execution time
over char data types. You are normally aware if that is so.

--
Thad

bartc

Mar 18, 2010, 1:47:04 PM
pete wrote:
> dspfun wrote:
>>
>> Hi,
>>
>> Is it recommended to use char instead of int for variables that you
>> know never will contain a value larger than 255? A small example
>> follows:
>>
>> #include <stdio.h>
>> int main(void)
>> {
>> //int i = 1;
>> char i = 1; //is this better than previous line?
>> printf("i=%d\n",i);
>> return 0;
>> }
>>
>> What are the pros and cons of using char instead of int when values
>> will not exceed 255?
>
> I wouldn't use types lower ranking than int,
> unless they were in an array.


This code prints 2147303489 on my machine, instead of 65; changing to char
ch fixes that:

#include <stdio.h>

int main(void){
int ch;
char* p=&ch;

*p=65;

printf("CH = %d\n",ch);
}

It happens that char pointers might sometimes point inside structs and
arrays, and sometimes at standalone variables.

--
Bartc

dspfun

Mar 18, 2010, 2:18:03 PM

The range will be between 0 and 8.

dspfun

Mar 18, 2010, 2:20:38 PM
> esos...@ieee-dot-org.invalid

Could you give some more examples of why it is bad (or not worth) to
store numbers (that we know will have a range for example between 0
and 8) in unsigned chars?

dspfun

Mar 18, 2010, 2:26:02 PM
On Mar 18, 3:20 pm, dspfun <dsp...@hotmail.com> wrote:

Is the same reasoning as you all have described above valid for the
following two versions of my_struct1 and my_struct2, i.e. is
my_struct2 better than my_struct1 ?

typedef struct
{
    unsigned int u;
    unsigned char v; //will never be larger than 255
    unsigned char w; //will never be larger than 255
} my_struct1;


typedef struct
{
    unsigned int u;
    unsigned int v; //will never be larger than 255
    unsigned int w; //will never be larger than 255
} my_struct2;

?

Eric Sosman

Mar 18, 2010, 2:28:09 PM
On 3/18/2010 10:20 AM, dspfun wrote:
> On Mar 18, 1:41 pm, Eric Sosman<esos...@ieee-dot-org.invalid> wrote:
>> On 3/18/2010 7:40 AM, dspfun wrote:
>>
>>> Is it recommended to use char instead of int for variables that you
>>> know never will contain a value larger than 255?
>> [...]

>> On some processors, using sub-int data types may increase
>> code size (sometimes significantly) and execution time (usually
>> not by much).
>>
>> --
>> Eric Sosman
>> esos...@ieee-dot-org.invalid
>
> Could you give some more examples of why it is bad (or not worth) to
> store numbers (that we know will have a range for example between 0
> and 8) in unsigned chars?

Which part of "using sub-int data types may increase code
size and execution time" are you having trouble with?

(By the way, it's considered silly or even bad form to
quote signatures.)

--
Eric Sosman
eso...@ieee-dot-org.invalid

Eric Sosman

Mar 18, 2010, 2:30:45 PM
On 3/18/2010 9:47 AM, bartc wrote:
>
> This code prints 2147303489 on my machine, instead of 65; changing to
> char ch fixes that:
>
> #include <stdio.h>
>
> int main(void){
> int ch;
> char* p=&ch;
>
> *p=65;
>
> printf("CH = %d\n",ch);
> }

The undefined behavior of broken code is not a good basis
on which to choose variable types.

--
Eric Sosman
eso...@ieee-dot-org.invalid

dspfun

Mar 18, 2010, 2:33:14 PM
On Mar 18, 3:28 pm, Eric Sosman <esos...@ieee-dot-org.invalid> wrote:
> On 3/18/2010 10:20 AM, dspfun wrote:
>
>
>
> > On Mar 18, 1:41 pm, Eric Sosman<esos...@ieee-dot-org.invalid>  wrote:
> >> On 3/18/2010 7:40 AM, dspfun wrote:
>
> >>> Is it recommended to use char instead of int for variables that you
> >>> know never will contain a value larger than 255?
> >> [...]
> >>       On some processors, using sub-int data types may increase
> >> code size (sometimes significantly) and execution time (usually
> >> not by much).
>
>
> > Could you give some more examples of why it is bad (or not worth) to
> > store numbers (that we know will have a range for example between 0
> > and 8) in unsigned chars?
>
>      Which part of "using sub-int data types may increase code
> size and execution time" are you having trouble with?
>
>      (By the way, it's considered silly or even bad form to
> quote signatures.)
>

Sorry about including the signature!

Consider if we/you know that "using sub-int data types" does not
increase code size and execution time, what are the drawbacks of using
unsigned char for numbers in the range 0 to 255?

bartc

Mar 18, 2010, 2:54:47 PM
dspfun wrote:
> On Mar 18, 3:28 pm, Eric Sosman <esos...@ieee-dot-org.invalid> wrote:


>> Which part of "using sub-int data types may increase code
>> size and execution time" are you having trouble with?

> Consider if we/you know that "using sub-int data types" does not


> increase code size and execution time, what are the drawbacks of using
> unsigned char for numbers in the range 0 to 255?

I really wouldn't worry about it too much. Just use char if the data is
small.

And inside structs, definitely use char: the char fields pack tighter,
so if you have many structs you save memory (and the data is less
spread out), and sometimes you can use tricks such as handling several
char values as a single int.

If performance is really an issue, then you can experiment with using int
instead of char for individual variables (as mixing int and char in
expressions, or passing char values to a function, may involve widening).

>> Which part of "using sub-int data types may increase code
>> size and execution time" are you having trouble with?

Using int data types instead of char can increase not only code size
and execution time, but also data size.

--
bartc


Eric Sosman

Mar 18, 2010, 2:49:45 PM
On 3/18/2010 10:33 AM, dspfun wrote:
> On Mar 18, 3:28 pm, Eric Sosman<esos...@ieee-dot-org.invalid> wrote:
>> On 3/18/2010 10:20 AM, dspfun wrote:
>>
>>
>>
>>> On Mar 18, 1:41 pm, Eric Sosman<esos...@ieee-dot-org.invalid> wrote:
>>>> On 3/18/2010 7:40 AM, dspfun wrote:
>>
>>>>> Is it recommended to use char instead of int for variables that you
>>>>> know never will contain a value larger than 255?
>>>> [...]
>>>> On some processors, using sub-int data types may increase
>>>> code size (sometimes significantly) and execution time (usually
>>>> not by much).
>>
>>
>>> Could you give some more examples of why it is bad (or not worth) to
>>> store numbers (that we know will have a range for example between 0
>>> and 8) in unsigned chars?
>>
>> Which part of "using sub-int data types may increase code
>> size and execution time" are you having trouble with?
>>
> Consider if we/you know that "using sub-int data types" does not
> increase code size and execution time, what are the drawbacks of using
> unsigned char for numbers in the range 0 to 255?

None that I can think of, given the prior knowledge. One
problem with such knowledge, though, is that code quite often
outlives its original environment, and that something you know
about the machine on which you developed it turns out to be
untrue about the next machine on which it runs. To put it
another way, hardware changes more rapidly than software (even
though "soft" suggests the opposite).

--
Eric Sosman
eso...@ieee-dot-org.invalid

bartc

Mar 18, 2010, 2:56:30 PM
Eric Sosman wrote:
> On 3/18/2010 9:47 AM, bartc wrote:
>>
>> This code prints 2147303489 on my machine, instead of 65; changing to
>> char ch fixes that:
>>
>> #include <stdio.h>
>>
>> int main(void){
>> int ch;
>> char* p=&ch;
>>
>> *p=65;
>>
>> printf("CH = %d\n",ch);
>> }
>
> The undefined behavior of broken code is not a good basis
> on which to choose variable types.

But using a single char variable (which everyone has been arguing against)
fixes that.

--
Bartc


Dr Malcolm McLean

Mar 18, 2010, 2:59:23 PM
On 18 Mar, 14:33, dspfun <dsp...@hotmail.com> wrote:
>
> Consider if we/you know that "using sub-int data types" does not
> increase code size and execution time, what are the drawbacks of using
> unsigned char for numbers in the range 0 to 255?- Hide quoted text -
>

consider this

void flagduplicates(char **str, bool *flags, unsigned char N)
{
    unsigned char i;
    /* an unsigned char count silently caps N at 255 entries, and
       i is promoted to int in the comparison anyway */
    for (i = 0; i < N - 1; i++)
        if (!strcmp(str[i], str[i+1]))
            flags[i] = true;
        else
            flags[i] = false;
    flags[i] = false;
}

or this

unsigned char ch;

/* infinite loop: after promotion ch is 0..255 and never compares
   equal to EOF, which is a negative int */
while( (ch = fgetc(fp)) != EOF )
    /* do something */;

or this

unsigned char exponent;
double mantissa;

/* constraint violation: frexp expects int *, and the exponent
   can be negative anyway */
mantissa = frexp(x, &exponent);

Eric Sosman

Mar 18, 2010, 3:01:01 PM

... as would using an int* instead of a char*. So?

--
Eric Sosman
eso...@ieee-dot-org.invalid

Dr Malcolm McLean

Mar 18, 2010, 3:07:15 PM
On 18 Mar, 13:25, "bartc" <ba...@freeuk.com> wrote:
>
> I would guess the OP had a lower bound of 0 in mind, wouldn't you? Unless
> there's a one in a thousand chance he's working with 9-bit chars.
>
Zero? Dangerous Saracen magic. The lower bound should be one.

bartc

Mar 18, 2010, 3:29:32 PM
Eric Sosman wrote:
> On 3/18/2010 10:56 AM, bartc wrote:
>> Eric Sosman wrote:
>>> On 3/18/2010 9:47 AM, bartc wrote:
>>>>
>>>> This code prints 2147303489 on my machine, instead of 65; changing
>>>> to char ch fixes that:
>>>>
>>>> #include <stdio.h>
>>>>
>>>> int main(void){
>>>> int ch;
>>>> char* p=&ch;
>>>>
>>>> *p=65;
>>>>
>>>> printf("CH = %d\n",ch);
>>>> }
>>>
>>> The undefined behavior of broken code is not a good basis
>>> on which to choose variable types.
>>
>> But using a single char variable (which everyone has been arguing
>> against) fixes that.
>
> ... as would using an int* instead of a char*. So?

The context is that something wants to store a value via a char* (perhaps a
function call). You can't use int*.

In this case it's one example of where an individual char variable can be
useful.

I have others..

--
Bartc

Eric Sosman

Mar 18, 2010, 3:38:25 PM
On 3/18/2010 11:29 AM, bartc wrote:
>
> The context is that something wants to store a value via a char*
> (perhaps a function call). You can't use int*.

"The context" is

//int i = 1;
char i = 1; //is this better than previous line?

There is no char* anywhere in sight. There is not even a store
operation anywhere in sight, certainly not a store through a char*
via the intermediary of a function call. You're imagining things.

--
Eric Sosman
eso...@ieee-dot-org.invalid

Seebs

Mar 18, 2010, 3:58:57 PM
On 2010-03-18, dspfun <dsp...@hotmail.com> wrote:
> Is it recommended to use char instead of int for variables that you
> know never will contain a value larger than 255? A small example
> follows:

"Is it recommended"? By someone, somewhere? Certainly.

I would not recommend it.

> What are the pros and cons of using char instead of int when values
> will not exceed 255?

Never use char unless you mean it. If you just want a value, and don't
really care too much about its range, always use int. In many cases,
declaring the variable "char" will cost several times more space than it
saves.

-s
--
Copyright 2010, all wrongs reversed. Peter Seebach / usenet...@seebs.net
http://www.seebs.net/log/ <-- lawsuits, religion, and funny pictures
http://en.wikipedia.org/wiki/Fair_Game_(Scientology) <-- get educated!

bartc

Mar 18, 2010, 4:23:38 PM

"Eric Sosman" <eso...@ieee-dot-org.invalid> wrote in message
news:hnthdh$qnd$1...@news.eternal-september.org...

I was replying to:

"pete" <pfi...@mindspring.com> wrote in message
news:4BA221...@mindspring.com...

> I wouldn't use types lower ranking than int,
> unless they were in an array.

I gave an example where a char variable would be handy. That example stored
something via a char* type. That was the context, expressed as:

*p=65;

for illustration; a function call is another case where a char* may be
imposed:

char c;

while (readnextvalue(f,&c)) printf("%d",c);

--
Bartc

dspfun

Mar 18, 2010, 4:28:54 PM
On Mar 18, 4:58 pm, Seebs <usenet-nos...@seebs.net> wrote:
> On 2010-03-18, dspfun <dsp...@hotmail.com> wrote:
>
> > Is it recommended to use char instead of int for variables that you
> > know never will contain a value larger than 255? A small example
> > follows:
>
> "Is it recommended"?  By someone, somewhere?  Certainly.
>
> I would not recommend it.
>
> > What are the pros and cons of using char instead of int when values
> > will not exceed 255?
>
> Never use char unless you mean it.  If you just want a value, and don't
> really care too much about its range, always use int.  In many cases,
> declaring the variable "char" will cost several times more space than it
> saves.

Can you give some specific examples of why unsigned int is better than
unsigned char when values are in the range 0-255? I know with respect
to performance and memory there can be a difference.

Seebs

Mar 18, 2010, 4:50:49 PM

Uh, yeah. Those are the specific examples: Performance and memory.

I don't understand what part of the answers you already have you don't
understand.

Julienne Walker

Mar 18, 2010, 5:11:33 PM
On Mar 18, 9:25 am, "bartc" <ba...@freeuk.com> wrote:
> Julienne Walker wrote:
> > On Mar 18, 8:34 am, spinoza1111 <spinoza1...@yahoo.com> wrote:
> >> On Mar 18, 7:40 pm, dspfun <dsp...@hotmail.com> wrote:
> >>> Is it recommended to use char instead of int for variables that you
> >>> know never will contain a value larger than 255?
>
> >> I'm sure one of the regulars will be all agog to remind you that
> >> chars don't necessarily range between 0 and 255.
>
> > Unfortunately, the OP didn't give us an absolute range of allowed
> > values to work with, so it would be unwise to assume the lower limit
> > of his allowed range given only the upper limit. In this case I would
> > ask for more information before answering with an unwarranted
> > assumption, or give multiple answers depending on the most reasonable
> > assumptions:
>
> This is an even more pedantic answer than spinoza was anticipating.

Not by my reading. It seems to me he was expecting the "a char doesn't
have to be 8 bits" nitpick, which is the default unhelpful observation
by pedants when someone asks about a specific limit.

> I would guess the OP had a lower bound of 0 in mind, wouldn't you?

Yes, I would. But my point was that by guessing you could easily reach
the wrong conclusion. Maybe he wants (-256,256) and doesn't understand
the distinction between signed and unsigned. The correct approach in
my opinion is to confirm the range his program needs and *then* answer
the question.

spinoza1111

Mar 18, 2010, 5:32:46 PM
On Mar 18, 11:07 pm, Dr Malcolm McLean

Malcolm!! Are congratulations in order, old fellow? Did you complete
your PhD? If so: my sincere compliments. If not, my best wishes.

spinoza1111

Mar 18, 2010, 5:35:46 PM
On Mar 18, 8:58 pm, Julienne Walker <happyfro...@hotmail.com> wrote:
> On Mar 18, 8:34 am, spinoza1111 <spinoza1...@yahoo.com> wrote:
>
> > On Mar 18, 7:40 pm, dspfun <dsp...@hotmail.com> wrote:
>
> > > Hi,

>
> > > Is it recommended to use char instead of int for variables that you
> > > know never will contain a value larger than 255?
>
> > I'm sure one of the regulars will be all agog to remind you that chars
> > don't necessarily range between 0 and 255.
>
> Unfortunately, the OP didn't give us an absolute range of allowed
> values to work with, so it would be unwise to assume the lower limit
> of his allowed range given only the upper limit. In this case I would
> ask for more information before answering with an unwarranted
> assumption, or give multiple answers depending on the most reasonable
> assumptions:
>
> If we're talking about values that also won't be negative, an unsigned
> char is fine and there's no need to talk about ranges because the 8-
> bit range is the required minimum. On the other hand, if negative
> values *are* allowed (ie. signed char), the upper limit of 255 is
> beyond the guaranteed minimum and short or int would be better suited.
>
> > Pro:
> > *  Everyone will think you're smart
> > Con:
> > *  Everyone will think you're an idiot
>
> I'm confident that one's mental abilities aren't judged by choice of
> numeric data type in C. ^_^ Some people might, but for those people I
> would recommend chilling out.

Hey, this is clc. And instead of chilling out, in my view, Julienne,
people need to get rowdy about the behavior of some of the regulars.
But you know my views.

spinoza1111

Mar 18, 2010, 5:36:39 PM

Strange things happen in the embedded world, so Julienne's question
made sense, IMO.
>
> --
> Bartc

spinoza1111

Mar 18, 2010, 5:40:07 PM
On Mar 18, 11:07 pm, Dr Malcolm McLean
<malcolm.mcle...@btinternet.com> wrote:
> On 18 Mar, 13:25, "bartc" <ba...@freeuk.com> wrote:
>
> > I would guess the OP had a lower bound of 0 in mind, wouldn't you? Unless
> > there's a one in a thousand chance he's working with 9-bit chars.
>
> Zero? Danagerous Saracen magic. The lower bound should be one.

Malcolm seems to be referring to the lack of zero in Roman arithmetic.
The Romans were practical men, and they figured that if you had
nothing, you should commit suicide like Brutus in the old play. They
are the reason the nineteenth century starts with 1800 and not 1900, I
believe. They remind me of some of the more narrow-minded and
unimaginative types here.

Anand Hariharan

Mar 18, 2010, 6:09:24 PM
On Mar 18, 6:50 am, Marc Boyer <Marc.Bo...@cert.onera.fr.invalid>
wrote:
(...)
>    int  vali[1024];
>    char valc[1024];
>
> In this case, valc will use less memory than vali (the ratio beeing
> sizeof(char) ).
>

ITYM sizeof(int).

Anand Hariharan

unread,
Mar 18, 2010, 6:14:07 PM3/18/10
to
On Mar 18, 7:41 am, Eric Sosman <esos...@ieee-dot-org.invalid> wrote:
> On 3/18/2010 7:40 AM, dspfun wrote:
(...)

> > What are the pros and cons of using char instead of int when values
> > will not exceed 255?
>
>      If you need [0..255] you should use unsigned char, not plain
> char.  (If you need [-127..127] make it signed char.)
>

Are there any 1's complement machines out there which have C
implementations that are actively used?

- Anand

Eric Sosman

Mar 18, 2010, 6:36:18 PM

No, of course not. And no signed-magnitude machines,
either. What's more, there *never* will be, no, not ever,
not while your children's children's children still use C.

That guy Heraclitus was a nitwit.

--
Eric Sosman
eso...@ieee-dot-org.invalid

Sjouke Burry

Mar 18, 2010, 7:25:08 PM
dspfun wrote:
> Hi,

>
> Is it recommended to use char instead of int for variables that you
> know never will contain a value larger than 255? A small example
> follows:
>
> #include <stdio.h>
> int main(void)
> {

> //int i = 1;
> char i = 1; //is this better than previous line?
> printf("i=%d\n",i);
> return 0;

> }
>
> What are the pros and cons of using char instead of int when values
> will not exceed 255?
>
> Brs,
> Markus
I only once did that, in an astronomy program that held data
about several million stars, where each byte saved made a
huge difference.
In all other cases, don't do it.

jamm

Mar 18, 2010, 7:52:44 PM
dspfun wrote:

> Hi,
>
> Is it recommended to use char instead of int for variables that you
> know never will contain a value larger than 255? A small example
> follows:

Hmm people here seem to forget that C isn't just used on desktop Intel
PCs...

On an 8 bit uC with 128 bytes of SRAM, where char is 8 bit and int is 16
bit, yes char is what you're going to want to use. You would usually avoid
larger types for performance/memory reasons unless you really need an int.
Then use an int.

bartc

Mar 18, 2010, 8:28:49 PM

"Sjouke Burry" <burrynu...@ppllaanneett.nnll> wrote in message
news:4ba27e14$0$14133$703f...@textnews.kpn.nl...

I ran some tests using this struct:

typedef struct {char x,y; short z;} D;

On my x86-32 machine, using int instead of char and short made them take up
to 6 times as long to execute.

(One test passed the struct to a function by value, and returned a new
value. Another just copied one to another. Nothing involving large arrays.)

The OP may not have such a PC, but should do his own evaluations.

--
Bartc

Dr Malcolm McLean

Mar 18, 2010, 8:53:08 PM
Veteran readers of comp.lang.c will have noticed a change in my
moniker. I'm Dr Malcolm McLean now.

Dr Malcolm McLean

Mar 18, 2010, 8:58:05 PM
On 18 Mar, 16:28, dspfun <dsp...@hotmail.com> wrote:
>
> Can you give some specific examples of why unsigned int is better than
> unsigned char when values are in the range 0-255? I know with respect
> to performance and memory there can be a difference.
>
The main reason is that anyone reading your code will do a double
take, and say "unsigned char? What's going on here? Normally unsigned
chars are used for arbitrary bytes of data, but this looks like an
index/count/counter. Why has he chosen that data type?"
Eventually of course he'll figure it out, but every little bit of
difficulty adds to the cost of maintaining your code, in a
supercumulative manner. Two little bits of difficulty add more than
twice the cost of one little bit, because humans can tolerate a
certain amount of distraction and get overwhelmed by too much.

Dr Malcolm McLean

Mar 18, 2010, 9:01:37 PM
On 18 Mar, 19:52, jamm <nos...@nomail.net> wrote:
>
> On an 8 bit uC with 128 bytes of SRAM, where char is 8 bit and int is 16
> bit, yes char is what you're going to want to use. You would usually avoid
> larger types for performance/memory reasons unless you really need an int.
> Then use an int.
>
On such a machine a program that runs in 100 bytes of memory is
usually as useful and cost-effective as one that runs in 50.
However, yes, this would be a circumstance in which char for small
integers is appropriate. In fact, on such compilers int is often,
non-conformingly, 8 bits and long is 16 bits.

Ersek, Laszlo

Mar 18, 2010, 9:13:25 PM
In article
<28790bea-491a-41ef...@u33g2000yqe.googlegroups.com>,

Dr Malcolm McLean <malcolm...@btinternet.com> writes:

> Veteran readers of comp.lang.c will have noticed a change in my
> moniker. I'm Dr Malcolm McLean now.

I'm not a veteran reader but I did notice it. Congratulations! Do you
have a public, low-res research page?

(Sorry for being OT.)

lacos

Thad Smith

Mar 19, 2010, 3:19:15 AM
Dr Malcolm McLean wrote:
> On 18 Mar, 19:52, jamm <nos...@nomail.net> wrote:
>> On an 8 bit uC with 128 bytes of SRAM, where char is 8 bit and int is 16
>> bit, yes char is what you're going to want to use. You would usually avoid
>> larger types for performance/memory reasons unless you really need an int.
>> Then use an int.
>>
> On such a machine a program that runs in 100 bytes of memory is
> usually as useful and cost-effective as one that runs in 50.

I have no idea what that statement means. Having 128 bytes of RAM and running
in 100 bytes are two different things -- one is normally data memory, the other
code memory. Processors with a few hundred bytes of RAM typically have several
kilobytes of non-volatile code memory.

The question for those processors isn't how small you can make the program, but
rather whether it fits at all and how much data it can handle. Or the question
is how expensive a processor do we need for a given job?

> However, yes, this would be a circumstance in which char for small
> integers is appropriate. In fact on such compilers int is often, non-
> conformingly, 8 bits and long is 16 bits.

This is at least one such non-conforming compiler, but most compilers for 8-bit
processors support ints and longs that are conforming in size.

--
Thad

spinoza1111

unread,
Mar 19, 2010, 2:40:36 AM3/19/10
to
On Mar 19, 4:53 am, Dr Malcolm McLean <malcolm.mcle...@btinternet.com>

wrote:
> On 18 Mar, 17:32, spinoza1111 <spinoza1...@yahoo.com> wrote:> On Mar 18, 11:07 pm, Dr Malcolm McLean
>
> > <malcolm.mcle...@btinternet.com> wrote:
> > > On 18 Mar, 13:25, "bartc" <ba...@freeuk.com> wrote:
>
> > > > I would guess the OP had a lower bound of 0 in mind, wouldn't you? Unless
> > > > there's a one in a thousand chance he's working with 9-bit chars.
>
> > > Zero? Dangerous Saracen magic. The lower bound should be one.
>
> > Malcolm!! Are congratulations in order, old fellow? Did you complete
> > your PhD? If so: my sincere compliments. If not, my best wishes.
>
> Veteran readers of comp.lang.c will have noticed a change in my
> moniker. I'm Dr Malcolm McLean now.

Congratulations. Some people here think that academic qualifications
aren't as important as subservience to management and backstabbing
coworkers. I don't.

Ian Collins

unread,
Mar 19, 2010, 2:41:15 AM3/19/10
to

Agreed, I don't think I've ever used a compiler for an 8 bit micro where
int wasn't a conforming size, which is why 8 bit types are often used
for small integers or counters.

--
Ian Collins

spinoza1111

unread,
Mar 19, 2010, 2:43:49 AM3/19/10
to
On Mar 19, 3:25 am, Sjouke Burry <burrynulnulf...@ppllaanneett.nnll>
wrote:

By the end of the IBM 1401's useful life, I was "squozing" more and
more data into its 8K memory using an extra bit which was intended to
demarcate operands, to get a 7 bit rather than 6 bit word.

A bit later, I acquired a Sinclair by post from Britain, plugged it
in, and it started to smoke (American power, British machine). After
the fire department left and the smoke had cleared, it still worked
and used storage most elegantly. It ran Basic in 2K.

Dr Malcolm McLean

unread,
Mar 19, 2010, 11:29:30 AM3/19/10
to
On 19 Mar, 03:19, Thad Smith <ThadSm...@acm.org> wrote:
> Dr Malcolm McLean wrote:
>
> > On such a machine a program that runs in 100 bytes of memory is
> > usually as useful and cost-effective as one that runs in 50.
>
> I have no idea what that statement means.
>
You've got 128 bytes of memory in your chosen processor. The cost of a
program that requires 1 byte is the exactly the same as the cost of a
program that requires 128 bytes. The cost of a program that requires
129 bytes is significantly higher, because it demands a hardware
redesign.
Now if you need 50 variables you are comfortably within the limits of
your processor. There is no advantage to making them chars (50 bytes)
over and above making them ints (100 bytes).

dspfun

unread,
Mar 19, 2010, 12:22:01 PM3/19/10
to
The question is related to embedded systems where most of our types are
typedef'd as

typedef unsigned long int U32;
typedef unsigned short int U16;
typedef unsigned char U8;

and so on.

In this scenario, for variables that we know will always have values
in the range 0-255, would you prefer using U8 or U32 ?

Which is preferable with respect to:

1) Code readability.
2) Keeping it simple, i.e. minimizing the risk of bugs.
3) Reducing the likelihood of casts.
4) Code maintainability.

Ersek, Laszlo

unread,
Mar 19, 2010, 1:24:15 PM3/19/10
to
In article <cd08b939-ed1f-49e4...@k17g2000yqb.googlegroups.com>, dspfun <dsp...@hotmail.com> writes:
> The question is related to embedded systems where most of our types are
> typedef'd as
>
> typedef unsigned long int U32;
> typedef unsigned short int U16;
> typedef unsigned char U8;
>
> and so on.
>
> In this scenario, for variables that we know will always have values
> in the range 0-255, would you prefer using U8 or U32 ?

Neither. I would prefer "unsigned". The above typedefs are there so you
can choose exact-width integers. That is, the selection criterion is
width, for whatever reason.

In this case however, I assume, you want something that's fast,
unsigned, and covers [0..255]. On a C99 platform I would recommend
uint_fast8_t, a type required by C99 7.18.1.3 "Fastest minimum-width
integer types".

Anywhere else I would recommend "unsigned". In my opinion, the default
integer promotions reflect the fact that there is a "most natural" word
size for any given machine, and that the most useful default behavior
for anything narrower is to get promoted to this "most natural" word
size as soon as possible -- obviously for speed. Therefore, for
individual variables *in general*, I think it's wrong to fight this
"rule of thumb".

I hope people with actual knowledge on the origins of the default
integer promotions will enlighten us.

Returning to the typedefs you quoted, "unsigned int" may coincide with
U16 or U32. The point is, you don't care, you simply want the fastest
one (which might be even "U24", theoretically).

lacos

Prof Craver

unread,
Mar 19, 2010, 4:29:57 PM3/19/10
to
On Mar 18, 8:58 am, Julienne Walker <happyfro...@hotmail.com> wrote:
> On Mar 18, 8:34 am, spinoza1111 <spinoza1...@yahoo.com> wrote:

> > Pro:
> > *  Everyone will think you're smart
> > Con:
> > *  Everyone will think you're an idiot
>
> I'm confident that one's mental abilities aren't judged by choice of
> numeric data type in C. ^_^ Some people might, but for those people I
> would recommend chilling out.

And yet, it matters enough that we routinely see bugs and even
gaping security holes because some coder has to look all l33t and
clever.

One baffling example is the format specifier bug: while these sometimes
happen for more complex reasons, mostly it's some kid showing off that
he knows he can substitute printf( string ) for printf( "%s", string ).

I think it's astonishing that one of our most common security holes
occurs for no other reason than vanity. Buffer overflows result from
genuine mistakes or hurried coders facing a ship date; SQL injection
vulnerabilities are caused by the same. Cross-site scripting
vulnerabilities result from the genuine difficulty of sanitizing
user-supplied HTML code. They all happen for a good bad reason.
Meanwhile, with some exceptions, format specifier bugs are just some
kid being smart/stupid.

Another example is the XOR swap trick to save one extra variable/
register.
It makes you look all cool and K-rad until you use your swap macro to
swap array[i] with array[i].

I run the underhanded C contest (underhanded.xcott.com), and the XOR
swap trick was used by a couple submissions to trigger an error under
exactly those circumstances.

--X

Ben Pfaff

unread,
Mar 19, 2010, 4:39:58 PM3/19/10
to
Prof Craver <xcott...@gmail.com> writes:

> And yet, it matters enough that we routinely see bugs and even gaping
> security holes because some coder has to look all l33t and clever.
>
> One baffling example is the format specifier bug: while these
> sometimes happen for more complex reasons, mostly it's some kid
> showing off that he knows he can substitute printf( string ) for
> printf( "%s", string ).

Really? This is not my experience. Please give an example.
--
Ben Pfaff
http://benpfaff.org

Lew Pitcher

unread,
Mar 19, 2010, 5:24:42 PM3/19/10
to

I think that Prof Craver is suggesting something like

#include <stdio.h>
#include <stdlib.h>

int main(int argc, char *argv[])
{
    if (argc > 2)
        printf(argv[1]); /* maybe Bang! you're dead! */

    return EXIT_SUCCESS;
}

where the program code uses an unedited string as the format string for a
printf() call. This will work successfully (the printf() will generate
proper printable results, and will return a valid return value) most of the
time, *but* will fail (won't generate proper printable results, may cause a
program abend or other unrecoverable error, may cause printf() not to
return a valid return value) if the string contains a valid print specifier
escape sequence.

--
Lew Pitcher
Master Codewright & JOAT-in-training | Registered Linux User #112576
Me: http://pitcher.digitalfreehold.ca/ | Just Linux: http://justlinux.ca/
---------- Slackware - Because I know what I'm doing. ------


Ben Pfaff

unread,
Mar 19, 2010, 5:39:33 PM3/19/10
to
Lew Pitcher <lpit...@teksavvy.com> writes:

> On March 19, 2010 12:39, in comp.lang.c, b...@cs.stanford.edu wrote:
>
>> Prof Craver <xcott...@gmail.com> writes:
>>
>>> And yet, it matters enough that we routinely see bugs and even gaping
>>> security holes because some coder has to look all l33t and clever.
>>>
>>> One baffling example is the format specifier bug: while these
>>> sometimes happen for more complex reasons, mostly it's some kid
>>> showing off that he knows he can substitute printf( string ) for
>>> printf( "%s", string ).
>>
>> Really? This is not my experience. Please give an example.
>
> I think that Prof Craver is suggesting something like

[...]


> if (argc > 2)
> printf(argv[1]); /* maybe Bang! you're dead! */

[...]

That's clearly the code that he's suggesting. What I want as an
example is a case where this was done because "some coder has to
look all l33t". In my experience it's just a mistake, not some
kind of weird misplaced pride.
--
char a[]="\n .CJacehknorstu";int putchar(int);int main(void){unsigned long b[]
={0x67dffdff,0x9aa9aa6a,0xa77ffda9,0x7da6aa6a,0xa67f6aaa,0xaa9aa9f6,0x11f6},*p
=b,i=24;for(;p+=!*p;*p/=4)switch(0[p]&3)case 0:{return 0;for(p--;i--;i--)case+
2:{i++;if(i)break;else default:continue;if(0)case 1:putchar(a[i&15]);break;}}}

jamm

unread,
Mar 19, 2010, 7:56:48 PM3/19/10
to
Dr Malcolm McLean wrote:

hmmm? I don't quite follow. Having written code for 8bit micros, one
frequently desires to do some logging or buffering. Thus every byte of RAM
counts. Also int/16 bit operations can require two 8 bit operations and so
take more clock cycles.

Ian Collins

unread,
Mar 19, 2010, 8:00:46 PM3/19/10
to
On 03/20/10 01:22 AM, dspfun wrote:
> The question is related to embedded system where most of our types are
> typdefed as
>
> typedef unsigned long int U32;
> typedef unsigned short int U16;
> typedef unsigned char U8;
>
> and so on.
>
> In this scenario, for variables that we know will always have values
> in the range 0-255, would you prefer using U8 or U32 ?

The answer is still "it depends".

It may save space, it may not. That largely depends on the CPU's
alignment requirements.

It may be faster, it may not. That largely depends on the CPU's
registers (whether shorter types have to be sign extended) and bus width.

--
Ian Collins

Flash Gordon

unread,
Mar 19, 2010, 8:34:00 PM3/19/10
to

Then you have a new requirement which needs another 29 bytes and you're
shafted. This *does* happen, and I've been the one to tell the
management that what they thought was a simple and profitable mod would
actually mean major work. As far as I'm aware it meant the job had to be
turned down, so it cost the company money. It was not my code!

Also, on military projects it certainly used to be the case that there
was a written requirement for 50% free capacity specifically to allow
for later enhancements, because it is very common for things to be upgraded.

So if you are working on a very limited resource project, you do *not*
go wasting the resources!
--
Flash Gordon

Dr Malcolm McLean

unread,
Mar 20, 2010, 7:21:49 AM3/20/10
to
On 19 Mar, 20:34, Flash Gordon <s...@spam.causeway.com> wrote:
>
> Then you have a new requirement which needs another 29 bytes and your
> shafted. This *does* happen, and I've been the one to tell the
> management that what they thought was a simple and profitable mod would
> actually mean major work. As far as I'm aware it meant the job had to be
> turned down, so it cost the company money. It was not my code!
>
> Also, on military projects it certainly used to be the case that there
> was a written requirement for 50% free capacity specifically to allow
> for later enhancements, because it is very common for things to be upgraded.
>
> So if you are working on a very limited resource project, you do *not*
> go wasting the resources!
>
Alternatively the new requirements could make some of those 8-bit ints
go over 255.
The rules of (micro) optimisation
a) Don't do it.
b) Don't do it yet.

Nick

unread,
Mar 20, 2010, 9:38:15 AM3/20/10
to
Lew Pitcher <lpit...@teksavvy.com> writes:

> I think that Prof Craver is suggesting something like
>
> #include <stdio.h>
> #include <stdlib.h>
>
> int main(int argc, char *argv[])
> {
> if (argc > 2)
> printf(argv[1]); /* maybe Bang! you're dead! */
>
> return EXIT_SUCCESS;
> }
>
> where the program code uses an unedited string as the format string for a
> printf() call. This will work successfully (the printf() will generate
> proper printable results, and will return a valid return value) most of the
> time, *but* will fail (won't generate proper printable results, may cause a
> program abend or other unrecoverable error, may cause printf() not to
> return a valid return value) if the string contains a valid print specifier
> escape sequence.

I saw just the opposite yesterday - a Windows pop-up box which contained
a literal "%s" in it where you'd expect some more useful information.
--
Online waterways route planner | http://canalplan.eu
Plan trips, see photos, check facilities | http://canalplan.org.uk

pete

unread,
Mar 20, 2010, 12:30:51 PM3/20/10
to
bartc wrote:
>
> "Eric Sosman" <eso...@ieee-dot-org.invalid> wrote in message
> news:hnthdh$qnd$1...@news.eternal-september.org...
>> On 3/18/2010 11:29 AM, bartc wrote:
>>>
>>> The context is that something wants to store a value via a char*
>>> (perhaps a function call). You can't use int*.
>>
>> "The context" is

>>
>> //int i = 1;
>> char i = 1; //is this better than previous line?
>>
>> There is no char* anywhere in sight. There is not even a store
>> operation anywhere in sight, certainly not a store through a char*
>> via the intermediary of a function call. You're imagining things.
>
> I was replying to:
>
> "pete" <pfi...@mindspring.com> wrote in message
> news:4BA221...@mindspring.com...
>
>> I wouldn't use types lower ranking than int,
>> unless they were in an array.
>
> I gave an example where a char variable would be handy. That example
> stored something via a char* type. That was the context, expressed as:
>
> *p=65;
>
> for illustration, but a function call might be an example where a char*
> might be imposed:
>
> char c;
>
> while (readnextvalue(f,&c)) printf("%d",c);
>

That's not an example of
"Is it good to use char instead of int to save memory?"

That's an example of using char because you have to.

--
pete

Mark Hobley

unread,
Mar 20, 2010, 1:08:03 PM3/20/10
to
Seebs <usenet...@seebs.net> wrote:
> Never use char unless you mean it. If you just want a value, and don't
> really care too much about its range, always use int.

I would create a macro USMALL in a header file, and then define your small
numbers using:

USMALL c;

On computer types where using a char datatype is more efficient (such as
on IBM compatible computers) the header file would define USMALL as follows:

#define USMALL unsigned char

(ie an unsigned char would be used)

On a platform where this produces overheads, the header file would define
USMALL as follows:

#define USMALL unsigned int

This would enable your code to be used portably on both platform types.

Mark.

--
Mark Hobley
Linux User: #370818 http://markhobley.yi.org/

Dr Malcolm McLean

unread,
Mar 20, 2010, 2:07:02 PM3/20/10
to
On 20 Mar, 13:08, markhob...@hotpop.donottypethisbit.com (Mark Hobley)
wrote:

> Seebs <usenet-nos...@seebs.net> wrote:
> > Never use char unless you mean it.  If you just want a value, and don't
> > really care too much about its range, always use int.
>
> I would create a macro USMALL in a header file, and then define your small
> numbers using:
>
> USMALL c;
>
>
> #define USMALL unsigned int
>
The problem with that is that the code rapidly becomes almost
unreadable.


eg
void zeroduplicates(UBIG *x, USMALL N)
{
USMALL i;
for(i=0;i<N-1;i++)
if(x[i] == x[i+1])
x[i] = 0;
}

it's hard to spot the bug.

Keith Thompson

unread,
Mar 20, 2010, 4:51:11 PM3/20/10
to

I'd use a typedef rather than a #define.

That's exactly what C99's uint_fast8_t is for (and uint_fast16_t and
so forth).

--
Keith Thompson (The_Other_Keith) ks...@mib.org <http://www.ghoti.net/~kst>
Nokia
"We must do something. This is something. Therefore, we must do this."
-- Antony Jay and Jonathan Lynn, "Yes Minister"

Richard Bos

unread,
Mar 21, 2010, 3:54:02 PM3/21/10
to
Dr Malcolm McLean <malcolm...@btinternet.com> wrote:

> On 20 Mar, 13:08, markhob...@hotpop.donottypethisbit.com (Mark Hobley)

> > #define USMALL unsigned int


> >
> The problem with that is that the code rapidly becomes almost
> unreadable.
>

> void zeroduplicates(UBIG *x, USMALL N)
> {
> USMALL i;
> for(i=0;i<N-1;i++)
> if(x[i] == x[i+1])
> x[i] = 0;
> }
>
> it's hard to spot the bug.

Pray tell. It's so hard to spot that I can't spot it.

(Not that I'd do that myself. I'd use a more descriptive name, _if_ I
found a typedef necessary at all, and by preference get (or under C89,
copy) one from <stdint.h>.)

Richard

Eric Sosman

unread,
Mar 21, 2010, 7:10:40 PM3/21/10
to
On 3/21/2010 11:54 AM, Richard Bos wrote:
> Dr Malcolm McLean<malcolm...@btinternet.com> wrote:
>
>> On 20 Mar, 13:08, markhob...@hotpop.donottypethisbit.com (Mark Hobley)
>
>>> #define USMALL unsigned int
>>>
>> The problem with that is that the code rapidly becomes almost
>> unreadable.
>>
>> void zeroduplicates(UBIG *x, USMALL N)
>> {
>> USMALL i;
>> for(i=0;i<N-1;i++)
>> if(x[i] == x[i+1])
>> x[i] = 0;
>> }
>>
>> it's hard to spot the bug.
>
> Pray tell. It's so hard to spot that I can't spot it.

zeroduplicates(pointer, 0);

--
Eric Sosman
eso...@ieee-dot-org.invalid

Dr Malcolm McLean

unread,
Mar 22, 2010, 12:27:50 PM3/22/10
to
On 21 Mar, 19:10, Eric Sosman <esos...@ieee-dot-org.invalid> wrote:
> On 3/21/2010 11:54 AM, Richard Bos wrote:
>
> > Dr Malcolm McLean<malcolm.mcle...@btinternet.com>  wrote:

>
> >> On 20 Mar, 13:08, markhob...@hotpop.donottypethisbit.com (Mark Hobley)
>
> >>> #define USMALL unsigned int
>
> >> The problem with that is that the code rapidly becomes almost
> >> unreadable.
>
> >> void zeroduplicates(UBIG *x, USMALL N)
> >> {
> >>    USMALL i;
> >>    for(i=0;i<N-1;i++)
> >>      if(x[i] == x[i+1])
> >>        x[i] = 0;
> >> }
>
> >> it's hard to spot the bug.
>
> > Pray tell. It's so hard to spot that I can't spot it.
>
>         zeroduplicates(pointer, 0);
>
>
That's always a difficult one. It's harder to spot when a user-defined
type is used. Although the "U" in USMALL should give us a reminder
that the type is unsigned, it's easy to overlook that.

Richard Bos

unread,
Mar 23, 2010, 10:38:46 AM3/23/10
to
Eric Sosman <eso...@ieee-dot-org.invalid> wrote:

That's not a bug that was introduced, or IMO made harder to spot, by the
typedefs/defines. In fact, it may not be a bug at all. It could easily
be part of the preconditions for this function that N be at least 2.
It's no more nor less a bug _in this function_ than

zeroduplicates(null_pointer, 42);

or

zeroduplicates(array_of_10_UBIGs, 14);

Richard

Dr Malcolm McLean

unread,
Mar 23, 2010, 1:44:29 PM3/23/10
to
On 23 Mar, 10:38, ralt...@xs4all.nl (Richard Bos) wrote:

> Eric Sosman <esos...@ieee-dot-org.invalid> wrote:
> > On 3/21/2010 11:54 AM, Richard Bos wrote:
> > > Dr Malcolm McLean<malcolm.mcle...@btinternet.com>  wrote:

>
> > >> On 20 Mar, 13:08, markhob...@hotpop.donottypethisbit.com (Mark Hobley)
>
> > >>> #define USMALL unsigned int
>
> > >> The problem with that is that the code rapidly becomes almost
> > >> unreadable.
>
> > >> void zeroduplicates(UBIG *x, USMALL N)
> > >> {
> > >>    USMALL i;
> > >>    for(i=0;i<N-1;i++)
> > >>      if(x[i] == x[i+1])
> > >>        x[i] = 0;
> > >> }
>
> > >> it's hard to spot the bug.
>
> > > Pray tell. It's so hard to spot that I can't spot it.
>
> >    zeroduplicates(pointer, 0);
>
> That's not a bug that was introduced, or IMO made harder to spot, by the
> typedefs/defines. In fact, it may not be a bug at all. It could easily
> be part of the preconditions for this function that N be at least 2.
> It's no more nor less a bug _in this function_ than
>
>   zeroduplicates(null_pointer, 42);
>
> or
>
>   zeroduplicates(array_of_10_UBIGs, 14);
>
Let's say that, for some reason, the function crashed when passed 15
as the number of duplicates. Of course we could document this and
claim that the function was bug free. But in the absence of any
documentation, you assume that if the input is valid, the function will
work. A list of no members is usually considered valid.
The bug could easily have been introduced by a typedef.
typedef char USMALL;

"Oh oh", someone realises, on this platform char is signed (but in
fact none of the integers go above 100).
Better make that
typedef unsigned char USMALL;

Bang.

Richard Bos

unread,
Mar 23, 2010, 7:15:32 PM3/23/10
to
Dr Malcolm McLean <malcolm...@btinternet.com> wrote:

> The bug could easily have been introduced by a typedef.
> typedef char USMALL;

So, basically you're complaining that people can give typedefs imbecilic
names? Wooohooo, what a massive hole you've discovered! No wonder you're
now a doctor! Better get working on identifier names as well; before you
know it someone writes

unsigned int plus_or_minus;

and then you'll have an even worse bug.

Richard

Squeamizh

unread,
Mar 23, 2010, 7:35:31 PM3/23/10
to
On Mar 23, 12:15 pm, ralt...@xs4all.nl (Richard Bos) wrote:

> Dr Malcolm McLean <malcolm.mcle...@btinternet.com> wrote:
>
> > The bug could easily have been introduced by a typedef.
> > typedef char USMALL;
>
> So, basically you're complaining that people can give typedefs imbecilic
> names? Wooohooo, what a massive hole you've discovered! No wonder you're

Discovering massive holes must really excite you.

> now a doctor! Better get working on identifier names as well; before you
> know it someone writes
>
>   unsigned int plus_or_minus;
>
> and then you'll have an even worse bug.

You're a damned idiot.

Dr Malcolm McLean

unread,
Mar 24, 2010, 1:37:43 PM3/24/10
to
On 23 Mar, 19:15, ralt...@xs4all.nl (Richard Bos) wrote:

> Dr Malcolm McLean <malcolm.mcle...@btinternet.com> wrote:
>
> > The bug could easily have been introduced by a typedef.
> > typedef char USMALL;
>
> So, basically you're complaining that people can give typedefs imbecilic
> names?
>
That's one possibility. Another is that plain char is guaranteed to be
faster or as fast as unsigned char on the particular set of platforms
for which we target the code. Since USMALLs never go above 100, it is
reasonable to make them chars, whilst giving them the U prefix to
document that they are unsigned. Target platform now changes to one on
which unsigned char is faster than signed char, so it's a simple case
(they think) of redoing the typedef and the code doesn't need to be
rewritten.
Not all people who make poor decisions do so for reasons of
imbecility.

Tim Rentsch

unread,
Mar 25, 2010, 6:27:23 AM3/25/10
to
dspfun <dsp...@hotmail.com> writes:

> Hi,
>
> Is it recommended to use char instead of int for variables that you
> know never will contain a value larger than 255? A small example
> follows:
>
> #include <stdio.h>
> int main(void)
> {


> //int i = 1;
> char i = 1; //is this better than previous line?

> printf("i=%d\n",i);
> return 0;
> }
>
> What are the pros and cons of using char instead of int when values
> will not exceed 255?

As a general rule it's better to use 'int' or 'unsigned int' (or
wider) than any narrower type such as 'short' or 'char' (or the
unsigned variants of those). There are two reasons for this
general rule. One, the behavior upon use in computation is often
unexpected and sometimes surprising. Two, usually the savings in
data and/or code size is either small, non-existent, or negative
(and ditto for speed). Also, it depends on what kind of variable
you're talking about -- it's almost always a bad idea to use a
type like 'short' or 'char' for a simple parameter or automatic
variable (here simple means scalar, ie, not arrays or members of
structs/unions).

Understanding that the general rule is right a lot more often
than it's wrong, here are some more-or-less standard
counterexamples --

arrays when the arrays are large

members of struct or union types, when
a) there will be lots of the outer struct/union type
b) the exact layout is constrained for some other reason

specific resource limitations or constraints, eg,
a) extremely limited global data space
b) extremely limited stack space
c) trading code space for data space or vice versa
d) trading speed for space or vice versa

using a narrow type can enforce a desired range limit, eg
a) an index to a 256 element array (and CHAR_BIT == 8)
b) similarly for arrays with 65,536 elements
c) want to do some computations modulo 256 or 65536
(which happens "for free" by storing into 'unsigned
char' or 'unsigned short' variables)

Notice that the kinds of considerations that go into the
different counterexamples are often quite different from
one to another.

Hopefully this outline gives you a good sense of the
"shape" of the pros/cons you're looking for, and will
let you make better decisions when new cases come up.

Also, one other consideration, which is code evolution. In the
particular case you give above, it may be short sighted to use
'char' because "you know it will never contain a value larger
than 255". Perhaps that's true for the code as it is today, but
who knows how the code might change tomorrow? Unless there are
definite reasons for believing a particular value will _never_ be
exceeded, it's usually better to expect that it will.

Tim Rentsch

unread,
Mar 25, 2010, 6:54:30 AM3/25/10
to

The problem here is with the function body, not the choice of
type name. Right off the bat, it's a mistake to use a type of
unknown signedness/promotion characteristics in arithmetic
computation; the parameter 'N' may be forced to be of type
USMALL, but not 'i'. And the comparand 'N-1' is obviously
fraught with peril.

Besides type conversion problems, writing the comparison 'i<N-1'
is suspect because it expresses the desired relationship
less directly than it could. We're indexing 'x' by 'i+1';
hence what we want is 'i+1<N'. Even if the types chosen
are left as is, this change eliminates the corner case.

Hard to spot the bug? Not hard at all -- the bug is
that the code should never have made it out of code
review.

Dr Malcolm McLean

unread,
Mar 26, 2010, 7:36:50 AM3/26/10
to
On 25 Mar, 06:54, Tim Rentsch <t...@x-alumni2.alumni.caltech.edu>

wrote:
> Dr Malcolm McLean <malcolm.mcle...@btinternet.com>
> > void zeroduplicates(UBIG *x, USMALL N)
> > {
> >   USMALL i;
> >   for(i=0;i<N-1;i++)
> >     if(x[i] == x[i+1])
> >       x[i] = 0;
> > }
>
> > it's hard to spot the bug.
>
> Hard to spot the bug?  Not hard at all -- the bug is
> that the code should never have made it out of code
> review.
>
I've nothing against code reviews. As you say, review ought to pick up
mistakes like this. But not everyone does reviews, and the reviews are
only as good as the people doing the reviewing. (In practice they
don't eliminate all bugs, or the software crisis would be solved).

If you tell someone that a four-line fragment of code has a bug, and
they fail to spot it, I think we can reasonably conclude that the bug
is hard to spot.

Tim Rentsch

unread,
Mar 26, 2010, 1:49:26 PM3/26/10
to
Dr Malcolm McLean <malcolm...@btinternet.com> writes:

The reason I find this argument unconvincing is that the code
was constructed deliberately to make the bug hard to spot.
Personally I don't need to see the 'i<N-1' at all to find
reasons to raise objections (or at least questions), but
that's beside the point. The argument was based on a
strawman example, and generally such arguments aren't
convincing. You see my point?

blm...@myrealbox.com

unread,
Mar 27, 2010, 6:02:58 PM3/27/10
to
In article <28790bea-491a-41ef...@u33g2000yqe.googlegroups.com>,

Dr Malcolm McLean <malcolm...@btinternet.com> wrote:
> On 18 Mar, 17:32, spinoza1111 <spinoza1...@yahoo.com> wrote:
> > On Mar 18, 11:07 pm, Dr Malcolm McLean
> >
> > <malcolm.mcle...@btinternet.com> wrote:
> > > On 18 Mar, 13:25, "bartc" <ba...@freeuk.com> wrote:

[ snip ]

> Veteran readers of comp.lang.c will have noticed a change in my
> moniker. I'm Dr Malcolm McLean now.

Belated but hearty congratulations!!!! [*] I noticed the change
but assumed it was because you had decided for some reason to draw
attention to your title. It didn't occur to me that perhaps you
had just earned the right to use the title. (I think this may
be a compliment of sorts.)

[*] Normally four exclamation points would be excessive and bad
style. Here, though, I think they're just right.

--
B. L. Massingill
ObDisclaimer: I don't speak for my employers; they return the favor.

Ben Pfaff

unread,
Mar 27, 2010, 6:15:19 PM3/27/10
to
Dr Malcolm McLean <malcolm...@btinternet.com> writes:

> Veteran readers of comp.lang.c will have noticed a change in my
> moniker. I'm Dr Malcolm McLean now.

Congratulations!
--
Ben Pfaff
http://benpfaff.org

Herbert Rosenau

unread,
Mar 27, 2010, 11:03:15 PM3/27/10
to
On Thu, 18 Mar 2010 14:59:23 UTC, Dr Malcolm McLean
<malcolm...@btinternet.com> wrote:


>
> or this
>
> unsigned char ch;
>
> while( (ch = fgetc(fp)) != EOF )

This is designed to create trouble!

fgetc returns int, not char.
EOF is an int, not a char.

So you will always fail to see that the file has been read completely,
because (unsigned char) EOF never compares equal to EOF: significant
bits are lost by assigning the result of fgetc() to a char instead of
an int.

> /* do something */;
>
> or this
>
> unsigned char exponent;
> double mantissa;
>
> mantissa = frexp(x, &exponent);


Nonsense, because frexp() expects a pointer to int: the exponent is
signed, not unsigned, and there is no guarantee that it is not wider
than the bits a single char contains.

--
Tschau/Bye
Herbert

Visit http://www.ecomstation.de the home of german eComStation
eComStation 1.2R Deutsch ist da!

Herbert Rosenau

unread,
Mar 27, 2010, 11:03:15 PM3/27/10
to
On Thu, 18 Mar 2010 14:43:59 UTC, Thad Smith <Thad...@acm.org>
wrote:

> Eric Sosman wrote:
> > On 3/18/2010 7:40 AM, dspfun wrote:
>
> > Using a char (or short) variant instead of an int will reduce
> > data size. This is worth doing only if the data is "too big,"
> > which usually means one of two things:
> >
> > - You are storing "billyuns and billyuns" of data instances.
> >
> > - You are packing data into an externally-specified and size-
> > limited format ("Make it all fit in a 32-byte record").
>
> - You are near the resources of the processor.
> >
> > On some processors, using sub-int data types may increase
> > code size (sometimes significantly) and execution time (usually
> > not by much).
>
> On some processors using int data types increases code size and execution time
> over char data types. You are normally aware if that is so.
>
No, because int is the natural width of the processor. Handling int
will always produce the most efficient code, because C is designed to
make int the fastest type.

char, on the other hand, is implementation-defined as either unsigned
or signed, so if you define a char without an explicit signed or
unsigned specifier, you will get unwanted behaviour when you change
the implementation, or simply its version.

There are processors alive where access to a char costs significantly
more runtime and/or extra instructions, because the processor is unable
to access a single char in memory and must truncate and expand single
chars to/from int in memory.

Herbert Rosenau

unread,
Mar 27, 2010, 11:03:15 PM3/27/10
to
On Thu, 18 Mar 2010 18:36:18 UTC, Eric Sosman
<eso...@ieee-dot-org.invalid> wrote:

> On 3/18/2010 2:14 PM, Anand Hariharan wrote:


> > On Mar 18, 7:41 am, Eric Sosman<esos...@ieee-dot-org.invalid> wrote:
> >> On 3/18/2010 7:40 AM, dspfun wrote:

> > (...)


> >>> What are the pros and cons of using char instead of int when values
> >>> will not exceed 255?
> >>

> >> If you need [0..255] you should use unsigned char, not plain
> >> char. (If you need [-127..127] make it signed char.)
> >>
> >
> > Are there any 1's complement machines out there which have C
> > implementations that are actively used?
>
> No, of course not. And no signed-magnitude machines,
> either. What's more, there *never* will be, no, not ever,
> not while your children's children's children still use C.

I'm 1000% sure that you are unable to enumerate each and every CPU
ever built that has at least one valid C compiler and lots of programs
written for it. i86 is widely used, yes - but it is only one CPU among
many, and an ill-designed one at that.

Ian Collins

unread,
Mar 27, 2010, 11:36:09 PM3/27/10
to
On 03/28/10 12:03 PM, Herbert Rosenau wrote:
> On Thu, 18 Mar 2010 14:43:59 UTC, Thad Smith<Thad...@acm.org>
> wrote:
>>
>> On some processors using int data types increases code size and execution time
>> over char data types. You are normally aware if that is so.
>>
> No, because int is the natural width of the processor. Handling int
> will always produce the most efficient code, because C is designed to
> make int the fastest type.

Not always: int has to be at least 16 bits, which can mean extra work
for 8-bit processors.

> char, on the other hand, is implementation-defined as either unsigned
> or signed, so if you define a char without an explicit signed or
> unsigned specifier, you will get unwanted behaviour when you change
> the implementation, or simply its version.
>
> There are processors alive where access to a char costs significantly
> more runtime and/or extra instructions, because the processor is unable
> to access a single char in memory and must truncate and expand single
> chars to/from int in memory.

The converse is also true.

--
Ian Collins

James Harris

unread,
Mar 27, 2010, 11:43:38 PM3/27/10
to
On 18 Mar, 17:40, spinoza1111 <spinoza1...@yahoo.com> wrote:

...

> Malcolm seems to be referring to the lack of zero in Roman arithmetic.
> The Romans
...
> are the reason the nineteenth century starts with 1800 and not 1900, I
> believe.

A nitpick but since you brought up the absence of zero it is ironic
that you started the nineteenth century in 1800. Strictly speaking it
probably started in 1801. (Zero was as absent from the span of years
as it was from Roman numerals!)

James

Eric Sosman

unread,
Mar 28, 2010, 12:55:05 AM3/28/10
to
On 3/27/2010 7:03 PM, Herbert Rosenau wrote:
> On Thu, 18 Mar 2010 18:36:18 UTC, Eric Sosman
> <eso...@ieee-dot-org.invalid> wrote:
>
>> On 3/18/2010 2:14 PM, Anand Hariharan wrote:
>>>
>>> Are there any 1's complement machines out there which have C
>>> implementations that are actively used?
>>
>> No, of course not. And no signed-magnitude machines,
>> either. What's more, there *never* will be, no, not ever,
>> not while your children's children's children still use C.
>
> I'm 1000% sure that you are unable to enumerate each and every CPU
> ever builded, having at least one valid C compiler and written lots of
> programs for it. i86 is widely used, yes - but at least it is only a
> rare CPU with an ill design.

Herbert, my post was sarcasm. In a written medium like
Usenet, the spoken inflections, pauses, emphases, and so on
that signal a speaker's "attitude" are absent. Sometimes the
reader, deprived of those signals, may not recognize the writer's
actual meaning (which may be at variance with a straightforward
literal reading). This seems to be especially true when the
writer and reader have different native languages.

In other words: I meant the opposite of what I wrote, and
what I wrote was intended as a sarcastic parody of short-sighted
attitudes that I feel are all too common around here.

(By the way, your use of "1000%" is itself ironic for USAnian
readers. Two words: Thomas Eagleton.)

--
Eric Sosman
eso...@ieee-dot-org.invalid

Dr Malcolm McLean

unread,
Mar 28, 2010, 8:10:14 AM3/28/10
to
Absolutely.
My cousin refused to have anything to do with the 2000 millennium
celebrations, and waited until 2001.

Apparently the Greeks also refused to recognise 1 as "a number". All
mathematical proofs had to be done separately for the case of unity and
for the case of numbers. So we have an interesting trend here.
Extrapolating, we'll soon have languages with -1-based arrays.
