
ISO C standard - which features need to be removed?


Marco

Nov 28, 2009, 8:18:41 AM
On a previous thread many folks lamented that the C99 standard is not
fully implemented and that compilers still reference the C95 version.
Maybe the ISO needs to reconsider some of the C99 features that are not
being universally implemented in C compilers and make them an optional
feature or deprecate them. The C99 standard has a lot of good changes,
like the portable fixed-width integers of <stdint.h>, but these tend
to be overshadowed by the unimplemented features.

Any suggestions?


[ example snippet from other thread

>
> > »This second edition cancels and replaces the first
> > edition, ISO/IEC 9899:1990, as amended and corrected by
> > ISO/IEC 9899/COR1:1994, ISO/IEC 9899/AMD1:1995, and
> > ISO/IEC 9899/COR2:1996.«
>
> > ISO/IEC 9899:1999 (E)
>
> > This means that C is the language specified by ISO/IEC
> > 9899:1999 (E). »ISO/IEC 9899:1990« is canceled.
>
> In theory, yes. In practice, conforming C90 compilers are still
> much more common than conforming C99 compilers. Many compilers
> implement large parts of C99, but others, still in common use,
> implement almost none of it (more precisely, almost none of the
> differences between C90 and C99).
>
> --
> Keith Thompson (The_Other_Keith) ks...@mib.org
end snippet]

Malcolm McLean

Nov 29, 2009, 1:51:04 PM
"Marco" <prenom...@yahoo.com> wrote in message news:

>On a previous thread many folks lamented that the C99 standard is not
>fully implemented and that compilers still reference the C95 version.
>Maybe the ISO needs to reconsider some of the C99 features that are not
>being universally implemented in C compilers and make them an optional
>feature or deprecate them. The C99 standard has a lot of good changes,
>like the portable fixed-width integers of <stdint.h>, but these tend
>to be overshadowed by the unimplemented features.
>
>Any suggestions?
>
Variable length arrays.
Also, the different integer types have a huge drawback, which is that the
exact type has to be passed by indirection. The more types you have, the
less likely it is that the type you are using in the caller matches the
type demanded by the callee.


jacob navia

Nov 29, 2009, 2:35:44 PM
Marco wrote:


Complex arithmetic should be replaced with operator overloading, which
gives a better and more general solution to the problem of new numeric
types than changing the language each time to add one.

Take for instance the situation now, where decimal-based floating point,
fixed point, and some other formats have all been proposed as Technical
Reports to the standard committee. We can't accommodate them all in the
language.

The solution is to allow the user to develop his/her own types, as many
other languages do, from venerable FORTRAN to C# and many others.

This solution allows the user to have that extension to numeric types if
he/she wants it, without forcing him/her to swallow a predefined
solution that will be wrong in most cases.

Even in relatively simple things like complex division, many users will
have different needs than the proposed built-in solution: some will
favor accuracy, others will need more speed, etc.!

Malcolm McLean

Nov 29, 2009, 3:40:30 PM

"jacob navia" <ja...@spamsink.net> wrote in message

> This solution allows the user to have that extension to numeric types if
> he/she wants it, without forcing him/her to swallow a predefined solution
> that will be wrong in most cases.
>
I can't agree with you here. There's a relatively short list of common
things you need to do (huge integers, arbitrary-precision floating point,
complex numbers, fixed point). A standard solution will be right for most
people, most of the time.

The problem with roll your own is that you recreate the bool problem.
Everyone defines "complex" in a slightly different way, maybe using r and j
if an engineer, r and i if a mathematician, real and imag if a programmer,
x and y if slightly eccentric. Then pieces of code can't talk to each other
without huge wodges of conversion routines, all to convert between types
which are essentially the same.
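To make the incompatibility concrete, here is a sketch (illustrative
declarations only, not taken from any real library) of two such
home-rolled complex types and the glue they force on you:

  /* Both represent the same numbers, yet neither library can accept
     the other's type without conversion code. */
  struct eng_complex  { double r, j; };        /* the engineer's */
  struct prog_complex { double real, imag; };  /* the programmer's */

  /* A wodge of glue like this appears at every library boundary. */
  struct prog_complex eng_to_prog(struct eng_complex z)
  {
      struct prog_complex w;
      w.real = z.r;
      w.imag = z.j;
      return w;
  }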


jacob navia

Nov 29, 2009, 4:21:21 PM
Malcolm McLean wrote:

> "jacob navia" <ja...@spamsink.net> wrote in message
>> This solution allows the user to have that extension to numeric types if
>> he/she wants it, without forcing him/her to swallow a predefined solution
>> that will be wrong in most cases.
>>
> I can't agree with you here. There's a relatively short list of common
> things you need to do (huge integers, arbitrary-precision floating point,
> complex numbers, fixed point). A standard solution will be right for most
> people, most of the time.
>

Sure.

How huge must those "huge integers" be so that everyone is satisfied?

Must complex numbers favor speed or accuracy?

Must arbitrary-precision floating point provide 352 bits of precision?
Or rather 512? Or maybe 1024?

You can't have them ALL.

> The problem with roll your own is that you recreate the bool problem.
> Everyone defines "complex" in a slightly different way, maybe using r and j

Yes, that notation is not going to die anytime soon because it is needed
in many contexts! C99 settled on Cartesian coordinates, which means that
polar coordinates must be handled outside the standard notation anyway.

> if an engineer, r and i if a mathematician, real and imag if a programmer,
> x and y if slightly eccentric. Then pieces of code can't talk to each other
> without huge wodges of conversion routines, all to convert between types
> which are essentially the same.

They are "the same" in the way that 1.0L is the same as 1ULL or 0x1, but
you will agree with me that those representations of the number one in C
are all different, even if they represent the same number. This is an old
problem, and surely it doesn't help to favor one over the other!


Ian Collins

Nov 29, 2009, 11:37:29 PM

The solution is simple - do what C++ did and put them in the standard
library.

--
Ian Collins

Nick Keighley

Nov 30, 2009, 4:02:19 AM
On 29 Nov, 20:40, "Malcolm McLean" <regniz...@btinternet.com> wrote:
> "jacob navia" <ja...@spamsink.net> wrote in message

> > This solution allows the user to have that extension to numeric types if
> > he/she wants it, without forcing him/her to swallow a predefined solution
> > that will be wrong in most cases.
>
> I can't agree with you here. There's a relatively short list of common
> things you need to do (huge integers, arbitrary-precision floating point,
> complex numbers, fixed point).

rationals, quaternions


<snip>

Nobody

Nov 30, 2009, 4:46:44 AM
On Sun, 29 Nov 2009 22:21:21 +0100, jacob navia wrote:

> How huge must those "huge integers" be so that everyone is satisfied?

> Must arbitrary-precision floating point provide 352 bits of precision?


> Or rather 512? Or maybe 1024?

In general, "big" (i.e. arbitrary-size) numbers are limited only by
available resources; i.e. they must fit into memory, their size must fit
into a machine word, and if they're so large that primitive arithmetic
operations exceed available memory then you lose.

Most modern high-level languages include big integers as a primitive type
(typically via either the BSD MP or GNU MP libraries). Some also include
arbitrary-precision rational and/or floating-point numbers (typically via
GNU MP), either as primitive types or as standard libraries.

jacob navia

Nov 30, 2009, 5:22:19 AM
Nobody wrote:

> On Sun, 29 Nov 2009 22:21:21 +0100, jacob navia wrote:
>
>> How huge must those "huge integers" be so that everyone is satisfied?
>
>> Must arbitrary-precision floating point provide 352 bits of precision?
>> Or rather 512? Or maybe 1024?
>
> In general, "big" (i.e. arbitrary-size) numbers are limited only by
> available resources; i.e. they must fit into memory, their size must fit
> into a machine word, and if they're so large that primitive arithmetic
> operations exceed available memory then you lose.
>

If you do a simple loop of (say) 50 iterations, each time multiplying
a bignum by another, does the result bit count grow exponentially?

Or does it stay fixed at some value?

If you choose solution (1) you can't do multiplications in a loop because
the result would have around 2^50 bits of precision.

If you choose solution (2) you have to ask the user at what precision
the bignums should stop growing.

If we have operator overloading, each user can choose the solution he/she
needs. Note that the code using this solution is MUCH more portable than
what is possible now, since code like:
c = (b+c)/n;
will stay THE SAME regardless of which solution is preferred. Now you
would have to write:
c = divide(add(b,c),n);
where "divide" and "add" have to be replaced by specific library calls.

Lcc-win implements operator overloading in the context of C, and the
bignum library furnished by lcc-win can be replaced by another library
like GNU MP without making any modifications to the user code.

> Most modern high-level languages include big integers as a primitive type
> (typically via either the BSD MP or GNU MP libraries).

True. Lcc-win provides those too.

> Some also include
> arbitrary-precision rational and/or floating-point numbers (typically via
> GNU MP), either as primitive types or as standard libraries.
>

With operator overloading it is easy to implement a rational or a quaternion package.

By avoiding complex numbers as a primitive type we make the language smaller
and easier to implement. The operator overloading solution makes it possible
to implement all kinds of numbers where needed, without imposing constraints
on compilers for embedded systems, where complex numbers are (in general) not
used a lot.

Nobody

Dec 1, 2009, 1:49:27 AM
On Mon, 30 Nov 2009 11:22:19 +0100, jacob navia wrote:

>>> How huge must those "huge integers" be so that everyone is satisfied?
>>
>>> Must arbitrary-precision floating point provide 352 bits of precision?
>>> Or rather 512? Or maybe 1024?
>>
>> In general, "big" (i.e. arbitrary-size) numbers are limited only by
>> available resources; i.e. they must fit into memory, their size must fit
>> into a machine word, and if they're so large that primitive arithmetic
>> operations exceed available memory then you lose.
>>
>
> If you do a simple loop of (say) 50 iterations, each time multiplying
> a bignum by another, does the result bit count grow exponentially?
>
> Or does it stay fixed at some value?

Are you talking about integers/rationals or floats?

Integers and rationals grow, using as much space as is necessary to
represent the result (or until you run out of memory, in which case the
result simply isn't representable; while it's often convenient to treat
computers as if they were Turing machines, they're really only finite
automata).

Arbitrary-precision floating-point usually requires the precision to be
specified, rather than being dictated by the operands (otherwise, how many
bits should be used for e.g. 1/3?)
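For instance, with the MPFR library the precision is a property of the
variable, chosen before any arithmetic happens (a sketch; MPFR_RNDN is
spelled GMP_RNDN in older MPFR versions):

  #include <stdio.h>
  #include <mpfr.h>

  int main(void)
  {
      mpfr_t x;
      mpfr_init2(x, 128);               /* choose 128 bits up front */
      mpfr_set_ui(x, 1, MPFR_RNDN);
      mpfr_div_ui(x, x, 3, MPFR_RNDN);  /* x = 1/3, rounded to 128 bits */
      mpfr_out_str(stdout, 10, 0, x, MPFR_RNDN);
      putchar('\n');
      mpfr_clear(x);
      return 0;
  }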

jacob navia

Dec 1, 2009, 6:54:52 AM
Nobody wrote:

> On Mon, 30 Nov 2009 11:22:19 +0100, jacob navia wrote:
>
>>>> How huge must those "huge integers" be so that everyone is satisfied?
>>>> Must arbitrary-precision floating point provide 352 bits of precision?
>>>> Or rather 512? Or maybe 1024?
>>> In general, "big" (i.e. arbitrary-size) numbers are limited only by
>>> available resources; i.e. they must fit into memory, their size must fit
>>> into a machine word, and if they're so large that primitive arithmetic
>>> operations exceed available memory then you lose.
>>>
>> If you do a simple loop of (say) 50 iterations, each time multiplying
>> a bignum by another, does the result bit count grow exponentially?
>>
>> Or does it stay fixed at some value?
>
> Are you talking about integers/rationals or floats?
>
> Integers and rationals grow, using as much space as is necessary to
> represent the result (or until you run out of memory, in which case the
> result simply isn't representable; while it's often convenient to treat
> computers as if they were Turing machines, they're really only finite
> automata).
>

This is completely impractical since with just 40-50 multiplications
you find yourself with gigabyte big numbers that are unusable.

lccwin lets you specify the maximum size of bignums, and then they are
fixed.

But your way is better in other applications, of course. And this shows
that each application should use the bignums it needs, via operator
overloading.


> Arbitrary-precision floating-point usually requires the precision to be
> specified, rather than being dictated by the operands (otherwise, how many
> bits should be used for e.g. 1/3?)
>

The same problems will appear here. There is no "one size fits all"
solution.

The true solution is to let the user specify the number type he/she
needs. You get some basic types, then you can add your own.

Ben Bacarisse

Dec 1, 2009, 7:26:15 AM
jacob navia <ja...@spamsink.net> writes:

> Nobody wrote:
>> On Mon, 30 Nov 2009 11:22:19 +0100, jacob navia wrote:
>>
>>>>> How huge must those "huge integers" be so that everyone is satisfied?
>>>>> Must arbitrary-precision floating point provide 352 bits of precision?
>>>>> Or rather 512? Or maybe 1024?
>>>> In general, "big" (i.e. arbitrary-size) numbers are limited only by
>>>> available resources; i.e. they must fit into memory, their size must fit
>>>> into a machine word, and if they're so large that primitive arithmetic
>>>> operations exceed available memory then you lose.
>>>>
>>> If you do a simple loop of (say) 50 iterations, each time multiplying
>>> a bignum by another, does the result bit count grow exponentially?
>>>
>>> Or does it stay fixed at some value?
>>
>> Are you talking about integers/rationals or floats?
>>
>> Integers and rationals grow, using as much space as is necessary to
>> represent the result (or until you run out of memory, in which case the
>> result simply isn't representable; while it's often convenient to treat
>> computers as if they were Turing machines, they're really only finite
>> automata).
>
> This is completely impractical since with just 40-50 multiplications
> you find yourself with gigabyte big numbers that are unusable.

You've said this twice now, but I can't see what you mean. At first I
thought you'd simply mistyped what you intended to say but it seems
not. Multiplying a bignum by (for example) 1024 adds 10 bits to the
required length. Doing that 50 times adds 500 bits. 500+ bit numbers
are common in many applications of bignums. Even multiplying a
1000-bit number by another 1000-bit number 50 times makes a 51,000-bit
number. Not at all unmanageable.
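The arithmetic is easy to check with GNU MP (a quick sketch; the numbers
match the example above):

  #include <stdio.h>
  #include <gmp.h>

  int main(void)
  {
      mpz_t x, m;
      int i;
      mpz_init(x);
      mpz_init(m);
      mpz_ui_pow_ui(x, 2, 999);   /* a 1000-bit number */
      mpz_ui_pow_ui(m, 2, 999);   /* another 1000-bit number */
      for (i = 0; i < 50; i++)
          mpz_mul(x, x, m);       /* each step adds ~1000 bits */
      /* prints roughly 51,000 bits -- nowhere near gigabytes */
      printf("%lu bits\n", (unsigned long)mpz_sizeinbase(x, 2));
      mpz_clear(x);
      mpz_clear(m);
      return 0;
  }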

<snip>
--
Ben.

bartc

Dec 1, 2009, 8:25:39 AM

"Ben Bacarisse" <ben.u...@bsb.me.uk> wrote in message
news:0.93219753881b05cb6811.2009...@bsb.me.uk...

Multiplying a number by itself will approximately double the number of bits.
Repeat that process, and the number of bits increases exponentially.
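(Worked through: squaring a b-bit number gives about 2b bits, so k
successive squarings give roughly (2^k)*b bits. Fifty squarings would
need on the order of 2^50 bits, which is presumably the blow-up Jacob
had in mind.)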

But I agree, a lot of useful work can still be done with variable width
bignums without overflowing memory.

Applying a fixed width (if that is the alternative) is harder to work with
(how do you know how many bits will be needed?), and wastes resources when
the numbers are smaller.

--
Bartc

Ian Collins

Dec 1, 2009, 1:22:36 PM

How do you know if an expression overflows?

--
Ian Collins

bartc

Dec 1, 2009, 5:08:10 PM

"Ian Collins" <ian-...@hotmail.com> wrote in message
news:7nl57bF...@mid.individual.net...

> bartc wrote:
>>
>> "Ben Bacarisse" <ben.u...@bsb.me.uk> wrote in message
>> news:0.93219753881b05cb6811.2009...@bsb.me.uk...

>>> You've said this twice now, but I can't see what you mean. At first I


>>> thought you'd simply mistyped what you intended to say but it seems
>>> not. Multiplying a bignum by (for example) 1024 adds 10 bits to the
>>> required length. Doing that 50 times adds 500 bits. 500+ bit numbers
>>> are common in many applications of bignums. Even multiplying a
>>> 1000-bit number by another 1000 bit number 50 times makes a 51,000-bit
>>> number. Not at all unmanageable.
>>
>> Multiplying a number by itself will approximately double the number of
>> bits. Repeat that process, and the number of bits increases
>> exponentially.
>>
>> But I agree, a lot of useful work can still be done with variable width
>> bignums without overflowing memory.
>>
>> Applying a fixed width (if that is the alternative) is harder to work
>> with (how do you know how many bits will be needed?), and wastes resources
>> when the numbers are smaller.
>
> How do you know if an expression overflows?

Of variable width bigints? Expressions don't overflow, although memory can
get tight.

With fixed width bigints: I've never used these, I assume there's some
mechanism to find out. But it's another mark against them.

--
Bartc

Ben Bacarisse

Dec 1, 2009, 8:14:50 PM
"bartc" <ba...@freeuk.com> writes:

Of course. I was responding to what Jacob wrote originally.
Multiplying one number by another does not sound like he meant
multiplying one number by itself. I did not comment at first, but he
then repeated the remark with even more general language ("just 40-50
multiplications"), so I thought it best to clear the matter up.

<snip>
--
Ben.

Ian Collins

Dec 1, 2009, 11:41:48 PM

That highlights one of the problems with adding operator overloading to
C: how to report errors?

--
Ian Collins

Nobody

Dec 2, 2009, 12:12:10 AM
On Tue, 01 Dec 2009 12:54:52 +0100, jacob navia wrote:

>>> If you do a simple loop of (say) 50 iterations, each time multiplying
>>> a bignum by another, does the result bit count grow exponentially?
>>>
>>> Or does it stay fixed at some value?
>>
>> Are you talking about integers/rationals or floats?
>>
>> Integers and rationals grow, using as much space as is necessary to
>> represent the result (or until you run out of memory, in which case the
>> result simply isn't representable; while it's often convenient to treat
> computers as if they were Turing machines, they're really only finite
>> automata).
>
> This is completely impractical since with just 40-50 multiplications
> you find yourself with gigabyte big numbers that are unusable.

If an integer is so large that it requires a gigabyte of memory to store
it, then it requires a gigabyte of memory to store it. If you don't have
that much memory, then you may as well simply terminate with an
out-of-memory condition. There is no advantage to continuing using an
incorrect (i.e. wrapped) value. Neither approach will give you the correct
answer, but at least the former isn't likely to lead anyone astray.

If you only need an approximate answer, then you use floating point.

>> Arbitrary-precision floating-point usually requires the precision to be
>> specified, rather than being dictated by the operands (otherwise, how many
>> bits should be used for e.g. 1/3?)
>>
>
> The same problems will appear here. There is no "one size fits all"
> solution.
>
> The true solution is to let the user specify the number type he/she
> needs. You get some basic types, then you can add your own.

This approach becomes inconvenient when you move beyond stand-alone
programs and need to use libraries, as the application has to mediate
between the various formats (assuming that this is even possible).

bartc

Dec 2, 2009, 5:58:45 AM

"Ian Collins" <ian-...@hotmail.com> wrote in message
news:7nm9gcF...@mid.individual.net...

> bartc wrote:
>>
>> "Ian Collins" <ian-...@hotmail.com> wrote in message
>> news:7nl57bF...@mid.individual.net...

>>> How do you know if an expression overflows?


>>
>> Of variable width bigints? Expressions don't overflow, although memory
>> can get tight.
>>
>> With fixed width bigints: I've never used these, I assume there's some
>> mechanism to find out. But it's another mark against them.
>
> That highlights one of the problems with adding operator overloading to C:
> how to report errors?

You mean because using functions instead allows more arguments to be used?

The problems are no different to the difficulties of detecting errors with
the current built-in operators.

And how do floating point operations (many of which *are* implemented as
functions) deal with it?

--
Bartc

Keith Thompson

Dec 2, 2009, 11:30:19 AM
"bartc" <ba...@freeuk.com> writes:
> "Ian Collins" <ian-...@hotmail.com> wrote in message
> news:7nm9gcF...@mid.individual.net...
[...]

>> That highlights one of the problems with adding operator overloading
>> to C: how to report errors?
>
> You mean because using functions instead allows more arguments to be used?
>
> The problems are no different to the difficulties of detecting errors
> with the current built-in operators.
>
> And how do floating point operations (many of which *are* implemented
> as functions) deal with it?

Clumsily.

--
Keith Thompson (The_Other_Keith) ks...@mib.org <http://www.ghoti.net/~kst>
Nokia
"We must do something. This is something. Therefore, we must do this."
-- Antony Jay and Jonathan Lynn, "Yes Minister"

Malcolm McLean

Dec 4, 2009, 11:33:01 AM
"Nobody" <nob...@nowhere.com> wrote in message

>> The true solution is to let the user specify the number type he/she
>> needs. You get some basic types, then you can add your own.
>
> This approach becomes inconvenient when you move beyond stand-alone
> programs and need to use libraries, as the application has to mediate
> between the various formats (assuming that this is even possible).
>
That's exactly the problem. Error handling is the same problem. Whilst
almost all programs will want to flag an attempt to divide by zero, the
question is exactly how to convey the message to the user. The C standard
solution, printing a message to stderr, isn't appropriate for many programs.
(There's also the question of whether to terminate or to return infinity).


Keith Thompson

Dec 4, 2009, 1:06:57 PM

Printing a message to stderr isn't "The C standard solution".
Division by zero invokes undefined behavior. (The behavior might be
defined in some cases if you have IEC 60559 support. I'm too lazy
to look up the details, but I'm sure it doesn't involve printing
a message to stderr, unless your program does it explicitly.)

--

Marco

Dec 6, 2009, 7:36:19 AM
On Nov 29, 11:51 am, "Malcolm McLean" <regniz...@btinternet.com>
wrote:
> "Marco" <prenom_no...@yahoo.com> wrote in message news:

> >On a previous thread many folks lamented that the C99 standard is not
> >fully implemented and that compilers still reference the C95 version.
> >Maybe the ISO needs to reconsider some of the C99 features that are not
> >being universally implemented in C compilers and make them an optional
> >feature or deprecate them. The C99 standard has a lot of good changes,
> >like the portable fixed-width integers of <stdint.h>, but these tend
> >to be overshadowed by the unimplemented features.
>
> >Any suggestions?
>
> Variable length arrays.

good choice - not many compilers have implemented it

> Also, the different integer types have a huge drawback, which is that the
> exact type has to be passed by indirection. The more types you have, the
> less likely it is that the type you are using in the caller matches the
> type demanded by the callee.

not sure what you mean here - the fixed-width types should be used
where necessary, such as when interfacing to HW registers.
For most algorithm use I would just use an "int" or "long" type, with
an assert in case the assumption did not hold on the particular
platform the code was compiled on.
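Something along these lines (the register address is made up for
illustration, and the compile-time check is the old negative-array-size
trick, since neither C89 nor C99 has a static assert):

  #include <stdint.h>
  #include <limits.h>

  /* Fixed width only where the hardware demands it. */
  #define TIMER_COUNT (*(volatile uint32_t *)0x40001000u)

  /* Plain int elsewhere, with the platform assumption made explicit:
     compilation fails (negative array size) if int is narrower than
     32 bits. */
  typedef char int_holds_32_bits[(INT_MAX >= 0x7FFFFFFFL) ? 1 : -1];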

Do you think that the bad old days (C89), when every project rolled its
own 32-, 16-, 8-bit, etc. unsigned integers, are the way to go??

I mostly do embedded work

Ian Collins

Dec 6, 2009, 1:23:00 PM
Marco wrote:
> On Nov 29, 11:51 am, "Malcolm McLean" <regniz...@btinternet.com>
> wrote:
>> "Marco" <prenom_no...@yahoo.com> wrote in message news:
>>> On a previous thread many folks lamented that the C99 standard is not
>>> fully implemented and that compilers still reference the C95 version.
>>> Maybe the ISO needs to reconsider some of the C99 features that are not
>>> being universally implemented in C compilers and make them an optional
>>> feature or deprecate them. The C99 standard has a lot of good changes,
>>> like the portable fixed-width integers of <stdint.h>, but these tend
>>> to be overshadowed by the unimplemented features.
>>> Any suggestions?
>> Variable length arrays.
>
> good choice - not many compilers have implemented it

gcc being an obvious exception.

>> Also, the different integer types have a huge drawback, which is that the
>> exact type has to be passed by indirection. The more types you have, the
>> less likely it is that the type you are using in the caller matches the
>> type demanded by the callee.
>
> not sure what you mean here - the fixed width types should be used
> where necessary such as interfacing to HW registers.
> For most algorithm use - I would just use a "int" or "long" type with
> an assert if the caller did not conform on the particular platform
> that the code was compiled on

Malcolm has an obsession with 64-bit ints.

> you think that the bad old days (C89) where every project rolled their
> own 32, 16, 8 bit, etc unsigned integer is the way to go??
>
> I mostly do embedded work

Malcolm doesn't!

--
Ian Collins

Malcolm McLean

Dec 7, 2009, 11:55:30 AM
"Keith Thompson" <ks...@mib.org> wrote in message news:

> "Malcolm McLean" <regn...@btinternet.com> writes:
>> "Nobody" <nob...@nowhere.com> wrote in message
>>>> The true solution is to let the user specify the number type he/she
>>>> needs. You get some basic types, then you can add your own.
>>>
>>> This approach becomes inconvenient when you move beyond stand-alone
>>> programs and need to use libraries, as the application has to mediate
>>> between the various formats (assuming that this is even possible).
>>>
>> That's exactly the problem. Error handling is the same problem. Whilst
>> almost all programs will want to flag an attempt to divide by zero, the
>> question is exactly how to convey the message to the user. The C standard
>> solution, printing a message to stderr, isn't appropriate for many
>> programs.
>> (There's also the question of whether to terminate or to return
>> infinity).
>
> Printing a message to stderr isn't "The C standard solution".
> Division by zero invokes undefined behavior. (The behavior might be
> defined in some cases if you have IEC 60559 support. I'm too lazy
> to look up the details, but I'm sure it doesn't involve printing
> a message to stderr, unless your program does it explicitly.)
>
We're not talking about the built-in types. If you implement a big integer
library, what action should the code take if the division routine is called
with a denominator of zero? One answer is to deliberately invoke undefined
behaviour (probably a division by a plain integer zero), but that's only one
answer, and probably not the best one.


Keith Thompson

Dec 7, 2009, 12:38:10 PM

Ok. I still don't understand why you say that printing a message
to stderr is the "C standard solution" (and that's the only thing
I'm disputing here). There are any number of ways a C library can
support error detection and reporting. Printing a message to stderr
is probably one of the worst.
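For instance, a status-return interface keeps the policy with the caller
(a sketch, not any particular library's API):

  typedef struct bignum bignum;  /* opaque; illustration only */

  typedef enum {
      BIG_OK,
      BIG_DIV_BY_ZERO,
      BIG_OUT_OF_MEMORY
  } big_status;

  /* The division routine reports failure instead of printing anything;
     the caller decides whether to abort, substitute a value, or show
     a message in its own UI. */
  big_status big_div(bignum *quotient,
                     const bignum *num, const bignum *den);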
