
Undefined Behaviour


jerry.j...@gmail.com

Aug 29, 2018, 10:14:03 PM
Hello all,

I am curious whether the code presented below causes UB in C++. I know the original question was about C, but I am specifically asking about C++.

Thanks very much.

Jerry

https://stackoverflow.com/a/34626013/2193968

#include <stdio.h>

int num;

void print( int a[][num], int row, int col )
{
    int i, j;
    for(i = 0; i < row; i++)
    {
        for(j = 0; j < col; j++)
            printf("%3d ", a[i][j]);
        printf("\n");
    }
}


int main()
{
    int a[10][10];
    int i, j;

    for(i = 0; i < 10; i++)
        for(j = 0; j < 10; j++)
            a[i][j] = i*10+j;

    num = 10;
    print(a, 10, 10);

    printf("\n\n");

    print((int(*)[num])(&a[4][3]), 5, 4);

    return 0;
}

Öö Tiib

Aug 29, 2018, 11:07:47 PM
On Thursday, 30 August 2018 05:14:03 UTC+3, jerry.j...@gmail.com wrote:
> Hello all,
>
> I am curious If the code presented below causes UB in C++. I know
> the original question was about C but I am specifically asking about C++.
>
> Thanks very much.
>
> Jerry

It is hard to tell, because the presented code should not compile as C++.

>
> https://stackoverflow.com/a/34626013/2193968
>
> #include <stdio.h>
>
> int num;
>
> void print( int a[][num], int row, int col )

That does not make sense in C++. An array dimension must be a constant
expression, not an uninitialized int like num. Is that really allowed in C?

James Kuyper

Aug 30, 2018, 12:09:41 AM
On 08/29/2018 11:07 PM, Öö Tiib wrote:
> On Thursday, 30 August 2018 05:14:03 UTC+3, jerry.j...@gmail.com wrote:
>> Hello all,
>>
>> I am curious If the code presented below causes UB in C++. I know
>> the original question was about C but I am specifically asking about C++.
>>
>> Thanks very much.
>>
>> Jerry
>
> It is hard to tell because presented code should not compile as C++.
>
>>
>> https://stackoverflow.com/a/34626013/2193968
>>
>> #include <stdio.h>
>>
>> int num;
>>
>> void print( int a[][num], int row, int col )
>
> That does not make sense in C++. Array dimension must be constant
> expression not uninitialized int num. Is that really allowed in C?

Yes, it is. It's called a variable length array. It was added in C99.

Sam

Aug 30, 2018, 7:01:27 AM
Ok, but this is C++, and not C.

No self-respecting C++ compiler would accept this code.

bol...@cylonhq.com

Aug 30, 2018, 7:16:33 AM
On Thu, 30 Aug 2018 07:01:12 -0400
Sam <s...@email-scan.com> wrote:
>> >> #include <stdio.h>
>> >>
>> >> int num;
>> >>
>> >> void print( int a[][num], int row, int col )
>> >
>> > That does not make sense in C++. Array dimension must be constant
>> > expression not uninitialized int num. Is that really allowed in C?
>>
>> Yes, it is. It's called a variable length array. It was added in C99.
>
>Ok, but this is C++, and not C.
>
>No self-respecting C++ compiler would accept this code.

Clang seems happy with it but not g++, which is absurd since gcc compiles
it without a problem in C mode.

It's unfortunate that syntax like this, which would be useful on a daily basis,
isn't included in the C++ standard, whereas the standards committee is quite
happy to constantly chuck in obscure, irrelevant nonsense (eg user literals) or
reinvent the wheel (eg using aliases instead of typedef) that 99% of coders
will never use.


James Kuyper

Aug 30, 2018, 7:24:48 AM
Öö asked explicitly about C, so I answered about C.

> No self-respecting C++ compiler would accept this code.

Perhaps - I suppose that depends upon how they go about respecting
themselves. However, a fully standard conforming implementation of C++
is allowed to accept it, after issuing the required diagnostic. C++ code
could use a container class to achieve the same practical effect, and in
many ways that would be a better solution - but VLAs provide a
conceptually simpler way to do it, and I have seen people express a
desire to add them to C++, even if only as an implementation-specific
extension.

David Brown

Aug 30, 2018, 9:05:20 AM
On 30/08/18 13:16, bol...@cylonHQ.com wrote:
> On Thu, 30 Aug 2018 07:01:12 -0400
> Sam <s...@email-scan.com> wrote:
>>>>> #include <stdio.h>
>>>>>
>>>>> int num;
>>>>>
>>>>> void print( int a[][num], int row, int col )
>>>>
>>>> That does not make sense in C++. Array dimension must be constant
>>>> expression not uninitialized int num. Is that really allowed in C?
>>>
>>> Yes, it is. It's called a variable length array. It was added in C99.
>>
>> Ok, but this is C++, and not C.
>>
>> No self-respecting C++ compiler would accept this code.
>
> Clang seems happy with it but not g++, which is absurd since gcc compiles
> it without a problem in C mode.

gcc compiles C in C mode, and C++ in C++ mode. The code is valid C but
not valid C++, so there is nothing absurd about it.

Now, it's fine for a C++ compiler to support VLA's as an extension. gcc
certainly supports some VLA syntax in (non-pedantic) C++ modes - you can
have a local variable that is a VLA, for example. But it looks like
this particular VLA usage is not allowed by gcc in C++ modes - while
clang supports it.

>
> Its unfortunate syntax like this which would be useful on a daily basis isn't
> included in the C++ standard whereas the standards committee are quite happy
> to constantly chuck in obscure irrelevant nonsense (eg user literals) or
> reinventing the wheel (eg using aliases instead of typedef) that 99% of coders
> will never use.
>

I can't imagine any significant use for this particular syntax -
certainly not "on a daily basis". I've used VLA's sometimes in C, as
local variables - they can be a convenient way of making a buffer. The
same usage could apply to C++ (as supported by gcc, as a C++ extension)
as an alternative to having a vector.

I would not object to VLA's in C++ - but I can't see it as such a big issue.

bol...@cylonhq.com

Aug 30, 2018, 9:59:47 AM
On Thu, 30 Aug 2018 15:05:05 +0200
David Brown <david...@hesbynett.no> wrote:
>On 30/08/18 13:16, bol...@cylonHQ.com wrote:
>> Clang seems happy with it but not g++, which is absurd since gcc compiles
>> it without a problem in C mode.
>
>gcc compiles C in C mode, and C++ in C++ mode. The code is valid C but
>not valid C++, so there is nothing absurd about it.

Compiler writers need to accept the reality that C/C++ coders mix and match
C and C++ code in the same source. If it's valid in C99, then a C++ compiler
should compile it, especially if it does so in another mode already. Plus, as
you say, gcc is perfectly happy to compile some other C99 syntax. C99 has
been out 19 years; it really shouldn't be considered something new or exotic
any more.


Öö Tiib

Aug 30, 2018, 4:06:10 PM
On Thursday, 30 August 2018 14:24:48 UTC+3, James Kuyper wrote:
> On 08/30/2018 07:01 AM, Sam wrote:
> > James Kuyper writes:
> >
> >> On 08/29/2018 11:07 PM, Öö Tiib wrote:
> >>> On Thursday, 30 August 2018 05:14:03 UTC+3, jerry.j...@gmail.com wrote:
> >>>> Hello all,
> >>>>
> >>>> I am curious If the code presented below causes UB in C++. I know
> >>>> the original question was about C but I am specifically asking about C++.
> >>>>
> >>>> Thanks very much.
> >>>>
> >>>> Jerry
> >>>
> >>> It is hard to tell because presented code should not compile as C++.
> >>>
> >>>>
> >>>> https://stackoverflow.com/a/34626013/2193968
> >>>>
> >>>> #include <stdio.h>
> >>>>
> >>>> int num;
> >>>>
> >>>> void print( int a[][num], int row, int col )
> >>>
> >>> That does not make sense in C++. Array dimension must be constant
> >>> expression not uninitialized int num. Is that really allowed in C?
> >>
> >> Yes, it is. It's called a variable length array. It was added in C99.
> >
> > Ok, but this is C++, and not C.
>
> Öö asked explicitly about C, so I answered about C.

Thanks.

>
> > No self-respecting C++ compiler would accept this code.
>
> Perhaps - I suppose that depends upon how they go about respecting
> themselves. However, a fully standard conforming implementation of C++
> is allowed to accept it, after issuing the required diagnostic. C++ code
> could use a container class to achieve the same practical effect, and in
> many ways that would be a better solution - but VLAs provide a
> conceptually simpler way to do it, and I have seen people express a
> desire to add them to C++, even if only as an implementation-specific
> extension.

My limited knowledge is because I avoid using VLAs even in C, and I would
continue to avoid them if they were added to C++. But I would not protest
either if others desire such a feature.

My reason is that exceeding automatic storage limits is undefined behaviour
that the C++ standard avoids addressing. Automatic storage is faster, but
usually a more limited resource than dynamic storage. Also, there is
std::bad_alloc, which makes exceeding dynamic storage limits possible to
handle. A VLA would add another unknown factor to the risk of exceeding
automatic storage limits. That makes std::vector the safer buffer of dynamic
dimensions. When the reason for using a VLA is performance, there are usually
safer ways to improve it.

Maybe I just haven't met a problem for which a VLA is the best tool.
Take the OP's problem, for example. In C++ I would expect something like:

std::cout << a.submatrix(4, 3, 5, 4);

Instead of:

num = 10;

David Brown

Aug 30, 2018, 4:14:13 PM
On 30/08/18 15:59, bol...@cylonHQ.com wrote:
> On Thu, 30 Aug 2018 15:05:05 +0200
> David Brown <david...@hesbynett.no> wrote:
>> On 30/08/18 13:16, bol...@cylonHQ.com wrote:
>>> Clang seems happy with it but not g++, which is absurd since gcc compiles
>>> it without a problem in C mode.
>>
>> gcc compiles C in C mode, and C++ in C++ mode. The code is valid C but
>> not valid C++, so there is nothing absurd about it.
>
> Compiler writers need to accept the reality that C/C++ coders mix and match
> C and C++ code in the same source.

They do, sometimes. But such programmers write in a common subset of
versions of C and C++ (say, C99 and C++11) - and VLAs are not in that
common subset. They might, in some cases, write in the common subset of
gcc-C and gcc-C++ versions (like gnu11 and gnu++14) - that could include
some uses of VLAs. Equally, they could use a common set matching a
different specific compiler.

> If its valid in C99 then a C++ compiler
> should compile it especially if it does so in another mode already.

You could equally well say that if it is valid C++11, then a C11
compiler should handle it in C since the compiler can already handle it
in C++. That would be nonsense just the same.

C is not a subset of C++. It is /close/ to a subset, but not an exact one.

> Plus as
> you say gcc is perfectly happy to compile some other C99 syntax. C99 has
> been out 19 years, it really shouldn't be considered something new or exotic
> any more.
>

If people had really wanted this particular form of VLAs in their C++
code, I am sure the gcc team would have allowed it as an extension. If
you feel strongly that it is a feature you would like to see in C++
modes of gcc, then the way forward is to file a bug in gcc asking for
the missing feature. If other people agree with you, and it is as
simple to support as you think, then maybe it will get added. It is
certainly a more likely way to get the feature than posts here!

bol...@cylonhq.com

Aug 31, 2018, 4:31:32 AM
On Thu, 30 Aug 2018 22:14:02 +0200
David Brown <david...@hesbynett.no> wrote:
>On 30/08/18 15:59, bol...@cylonHQ.com wrote:
>> If its valid in C99 then a C++ compiler
>> should compile it especially if it does so in another mode already.
>
>You could equally well say that if it is valid C++11, then a C11
>compiler should handle it in C since the compiler can already handle it
>in C++. That would be nonsense just the same.

The superset should handle the subset, not vice versa.

>C is not a subset of C++. It is /close/ to a subset, but not an exact one.

It's close enough that it makes no difference, and that is how it's used.
Actual use cases should trump academic opinion.

>> Plus as
>> you say gcc is perfectly happy to compile some other C99 syntax. C99 has
>> been out 19 years, it really shouldn't be considered something new or exotic
>> any more.
>>
>
>If people had really wanted this particular form of VLAs in their C++
>code, I am sure the gcc team would have allowed it as an extension. If

Having used gcc for MANY years, I think it's fair to say the gcc team include
and exclude what they feel like, not necessarily what their users want. Which,
given that the compiler is free and a lot of them work on it for no money, is
entirely reasonable. But one shouldn't assume the users are at the head of
their list when it comes to wish fulfilment.

>you feel strongly that it is a feature you would like to see in C++
>modes of gcc, then the way forward is to file a bug in gcc asking for
>the missing feature. If other people agree with you, and it is as
>simple to support as you think, then maybe it will get added. It is

It's already supported. I very much doubt the C++ and C sections of the
compiler use different parsers and lexical analysers; it's almost certainly
the same ones with various feature flags switched on and off depending on the
language.


Ian Collins

Aug 31, 2018, 5:50:42 AM
On 31/08/18 20:31, bol...@cylonHQ.com wrote:
> On Thu, 30 Aug 2018 22:14:02 +0200
> David Brown <david...@hesbynett.no> wrote:
>> On 30/08/18 15:59, bol...@cylonHQ.com wrote:
>>> If its valid in C99 then a C++ compiler
>>> should compile it especially if it does so in another mode already.
>>
>> You could equally well say that if it is valid C++11, then a C11
>> compiler should handle it in C since the compiler can already handle it
>> in C++. That would be nonsense just the same.
>
> The superset should handle the subset, not vice verca.

Which doesn't include VLAs.

--
Ian

bol...@cylonhq.com

Aug 31, 2018, 5:56:35 AM
It seems to me idiotic to switch off a feature which is already built into the
compiler simply because it's not part of an official standard.

Ian Collins

Aug 31, 2018, 6:03:54 AM
One problem with VLAs and C++ is constructs that are regular arrays in
C++ have to be VLAs in C. For example

const size_t s = 42;

int n[s];

--
Ian.

David Brown

Aug 31, 2018, 6:38:52 AM
On 31/08/18 10:31, bol...@cylonHQ.com wrote:
> On Thu, 30 Aug 2018 22:14:02 +0200
> David Brown <david...@hesbynett.no> wrote:
>> On 30/08/18 15:59, bol...@cylonHQ.com wrote:
>>> If its valid in C99 then a C++ compiler
>>> should compile it especially if it does so in another mode already.
>>
>> You could equally well say that if it is valid C++11, then a C11
>> compiler should handle it in C since the compiler can already handle it
>> in C++. That would be nonsense just the same.
>
> The superset should handle the subset, not vice verca.

Obviously - /if/ C++ were a superset of C. It isn't - it is only approximately
so, and C++ can only handle most C code rather than all of it.

I think what you are trying to say is that you would prefer C++ to be a
full superset of C. That would not work (though it could, reasonably,
be slightly closer than it is today).

>
>> C is not a subset of C++. It is /close/ to a subset, but not an exact one.
>
> Its close enough as makes no difference and that is how its used. Actual use
> cases should trump academic opinion.
>

No, that's wrong - on both accounts.

It is not hard to write code that is compatible with both languages -
and that is the practical solution that people use if they have to mix
the languages. There are a few fiddly bits, but mostly it's fine.

There are, I think, three main cases for such mixes. One is header
files that should be usable by both C and C++. There you see a fine
example of incompatibility - a declaration of an external function is
incompatible between C and C++. It has an easy solution for headers,
but it is still needed.

A second case is for people programming C on Windows with MSVC, because
the compiler is crap for C. So they write code that is C (or at least
is intended to be C), but compiled as C++.

A final case is for mixed development groups where code is originally in
C but is migrated to C++.


However, C written to be compatible with C++ has some restrictions. You
are likely to get some non-idiomatic coding, like casting the result of
malloc (some C programmers have very strong opinions against that).
There are lots of other things where C++ is stricter than C, leading to
code that is fine in C being a failure in C++ (some types of casts,
mixing enums and ints, having multiple tentative declarations of the
same variable in a file). Some of these - such as K&R function
declarations and definitions - most C programmers are happy to see gone.

Then there are features that can be very useful in C that don't exist in
C++. Designated initialisers are, I think, the biggest in this class.
(C++ plans to adopt them in C++20.) VLAs and complex numbers are others
that are somewhat less used. _Generic is a new one in C11.

There are things that exist in both languages, but have different
details - such as the type of 'a' or file-scope "const" data.

And clearly keywords in C++ that don't exist in C can be identifiers in
normal C code.


So no, C++ is not a proper superset of C, and never can be. It could be
closer, of course. VLA's were not supported in C++ because there was a
hope of introducing a better alternative, but that initiative failed
because no one could figure out how to implement it with the desired
syntax and semantics. So some compilers (like gcc) have partial support
for VLAs in C++.


And no, "actual use cases" should not trump the standards. Compilers
and standards committees should work together (as they do) - sometimes
one will be ahead of the other, but we don't want compiler implementers
having a free-for-all, even for copying features from C. That leads to
incompatibilities and restrictions in later versions of the language.


>>> Plus as
>>> you say gcc is perfectly happy to compile some other C99 syntax. C99 has
>>> been out 19 years, it really shouldn't be considered something new or exotic
>>> any more.
>>>
>>
>> If people had really wanted this particular form of VLAs in their C++
>> code, I am sure the gcc team would have allowed it as an extension. If
>
> Having used gcc for MANY years I think its fair to say the gcc team include
> and exclude what they feel like, not necessarily what their users want. Which
> given the compiler is free and a lot of them do it for no money is entirely
> reasonable. But one shouldn't assume the users are at the head of their list
> when it comes to wish fulfillment.

You are mixing up "users in general" with individual users. gcc
developers are interested in hearing the ideas and opinions of
individuals, but will usually not implement something unless it seems
useful to a wide range of users (or the individual writes the
implementation themselves).

So if someone says they want VLAs with sizes from file-scope variables
passed to functions, it is not likely to come high up on their to-do
lists. It would be a lot of work for little gain. If several people
ask, and show how that coding style is used in a lot of C programs and
is better than alternative constructs, and if the gcc C++ experts can
see that it will not conflict with other things in C++ present or
expected future, /then/ they would give it some priority.

>
>> you feel strongly that it is a feature you would like to see in C++
>> modes of gcc, then the way forward is to file a bug in gcc asking for
>> the missing feature. If other people agree with you, and it is as
>> simple to support as you think, then maybe it will get added. It is
>
> Its already supported. I very much doubt the C++ and C sections of the compiler
> use different parsers and lexical analysers, its almost certainly the same ones
> with various feature flags switched on and off depending on the language.
>

They use different parsers and different lexical analysis, AFAIUI. The
C++ front-end was split off from the C front-end long ago as it became
too complex to maintain as a "C with extras" parser. It would be
possible, I suppose, to use the C++ parser to handle C as a sort of "C++
with lots of limits and a few extra bits". But I don't expect that to
happen - the result would be a good deal less efficient for C, and
probably lose many of the useful gcc C extensions.

clang has a different development history, and different models for
their front-end handling. Where gcc started with C and gradually added
C++ as the language developed, clang started when C++ was a
well-established language. AFAIK, it /does/ handle C and C++ in the
same parser, and so is much more likely to support C constructs in C++.



bol...@cylonhq.com

Aug 31, 2018, 7:02:30 AM
On Fri, 31 Aug 2018 22:03:43 +1200
And the problem is...?

Sam

Aug 31, 2018, 7:02:56 AM
bol...@cylonHQ.com writes:

> It seems to me idiotic to switch off a feature which is already built into
> the
> compiler simply because its not part of an official standard.

Feel free to write your own C++ compiler, to show everyone how it should
be done.


bol...@cylonhq.com

Aug 31, 2018, 7:14:33 AM
On Fri, 31 Aug 2018 12:38:40 +0200
David Brown <david...@hesbynett.no> wrote:
>Then there are features that can be very useful in C that don't exist in
>C++. Designated initialisers are, I think, the biggest in this class.

I don't remember seeing them ever used in code whereas I've seen VLAs a number
of times.

>And clearly keywords in C++ that don't exist in C can be identifiers in
>normal C code.

That is pretty much the only real issue preventing C++ from being a proper
superset of C.

>And no, "actual use cases" should not trump the standards. Compilers

I disagree. A programming language and its compiler are tools to get a job
done; it's not a purely academic exercise. You can see this with Objective-C++
compilers - a hideous mash-up of languages which unfortunately is required
because Apple stuck with Objective-C for its APIs long after it should have
been taken out back and shot.

>So if someone says they want VLAs with sizes from file-scope variables
>passed to functions, it is not likely to come high up on their to-do
>lists. It would be a lot of work for little gain. If several people
>ask, and show how that coding style is used in a lot of C programs and
>is better than alternative constructs, and if the gcc C++ experts can
>see that it will not conflict with other things in C++ present or
>expected future, /then/ they would give it some priority.

I imagine most programmers have neither the time nor the inclination to get
into in-depth arguments over compiler features with compiler writers; they're
too busy doing their jobs. The only people who will are those involved in
compilers, or students who have lots of free time - neither of whom will
likely be using the compilers in a "get it done by yesterday" business
environment.

>> Its already supported. I very much doubt the C++ and C sections of the
>compiler
>> use different parsers and lexical analysers, its almost certainly the same
>ones
>> with various feature flags switched on and off depending on the language.
>>
>
>They use different parsers and different lexical analysis, AFAIUI. The
>C++ front-end was split off from the C front-end long ago as it became
>too complex to maintain as a "C with extras" parser. It would be

Well, that's odd, because very often with gcc you get a "use flag -xyz to use
this language feature" type of error. Though perhaps that's only within a
particular language, not between them.

bol...@cylonhq.com

Aug 31, 2018, 7:24:05 AM
On Fri, 31 Aug 2018 07:02:47 -0400
Sam <s...@email-scan.com> wrote:
>This is a MIME GnuPG-signed message. If you see this text, it means that
>your E-mail or Usenet software does not support MIME signed messages.

Indeed it doesn't, I don't need any attachment shite with usenet.

>The Internet standard for MIME PGP messages, RFC 2015, was published in 1996.
>To open this message correctly you will need to install E-mail or Usenet
>software that supports modern Internet standards.
>
I should have known the utterly moronic "If you don't like something do it
yourself" argument would pop up from some muppet eventually. Sorry sonny, I
have a job to do. When you get one you might realise that free time can be
scarce (he says writing a usenet post, but hey ;). But FWIW I wrote a (subset
of) C interpreter about 10 years ago so I do have some skin in the game.


David Brown

Aug 31, 2018, 7:42:22 AM
On 31/08/18 13:14, bol...@cylonHQ.com wrote:
> On Fri, 31 Aug 2018 12:38:40 +0200
> David Brown <david...@hesbynett.no> wrote:
>> Then there are features that can be very useful in C that don't exist in
>> C++. Designated initialisers are, I think, the biggest in this class.
>
> I don't remember seeing them ever used in code whereas I've seen VLAs a number
> of times.

People write code in different ways. I have seen and used designated
initialisers more often than VLAs - but equally I have no reason to
doubt your experience.

In discussions of "what does C have that C++ does not", I find
designated initialisers coming up more often than VLAs. Perhaps it is
because there is no equivalent in C++, while C++ programmers have
various alternatives to use in situations where a VLA might be of
interest in C.

VLA's in C have (unfairly, IMHO) an image of being "dangerous" -
designated initialisers are considered either harmless convenient
syntax, or directly useful for safer more legible coding.

>
>> And clearly keywords in C++ that don't exist in C can be identifiers in
>> normal C code.
>
> That is pretty much the only real issue preventing C++ being a proper superset
> of C.

No, it is not - that's why I gave other issues. But the keyword one
here is certainly a big and insurmountable issue.

>
>> And no, "actual use cases" should not trump the standards. Compilers
>
> I disagree. A programming language and its compiler is a tool to get a job
> done, its not a purely academic exercise. You can see this with objective C++
> compilers - a hideous mash up of languages which unfortunately is required
> because of Apple sticking with Objective-C for its APIs long after it should
> have been taken out back and shot.

I agree about Objective-C++. But that goes against your case - it is an
example of a language created in compilers for actual use, rather than
one that was planned or defined by academics or committees.

>
>> So if someone says they want VLAs with sizes from file-scope variables
>> passed to functions, it is not likely to come high up on their to-do
>> lists. It would be a lot of work for little gain. If several people
>> ask, and show how that coding style is used in a lot of C programs and
>> is better than alternative constructs, and if the gcc C++ experts can
>> see that it will not conflict with other things in C++ present or
>> expected future, /then/ they would give it some priority.
>
> I imagine most programmers have neither the time nor inclination to get into
> in depth argumenst over compiler features with compiler writers, they're too
> busy doing their job. The only people who will are those involved in compilers
> or students who have lots of free time. Neither of which will probably be
> using the compilers in a "get it done by yesterday" business enviroment.
>

I'm lost here. You are blaming the language for being run by
"academics" rather than real compilers (forgetting that the standards
committee consists mainly of major language users and compiler
developers, with only a few academics). You blame the compiler writers
for making things up for their own interest instead of listening to
users. You blame the users for being too busy working to tell compiler
developers what they want.

Everyone, it seems, is to blame for not solving your impossible request
for making C++ a full superset of C.

>>> Its already supported. I very much doubt the C++ and C sections of the
>> compiler
>>> use different parsers and lexical analysers, its almost certainly the same
>> ones
>>> with various feature flags switched on and off depending on the language.
>>>
>>
>> They use different parsers and different lexical analysis, AFAIUI. The
>> C++ front-end was split off from the C front-end long ago as it became
>> too complex to maintain as a "C with extras" parser. It would be
>
> Well thats odd because very often with gcc you get a "use flag -xyz to use
> this language feature" type of error. Though perhaps its only within a
> particular language, not between them.
>

There are few features that are controlled by flags in gcc (other than
the obvious ones that are controlled by the choice of language
standards). But there are certainly a large number of extensions to C
that are not available in C++ in gcc - usually because doing so would
mean duplicating effort, or at least adding significant effort, for a
feature deemed of little worth.


Bo Persson

Aug 31, 2018, 9:46:34 AM
The problem is with C, where this is a VLA - and VLAs are optional in C11.
So C99 only?

In C++ this is a fixed size array (s is a constant), so just works.


Bo Persson

bol...@cylonhq.com

Aug 31, 2018, 9:48:06 AM
On Fri, 31 Aug 2018 13:42:11 +0200
David Brown <david...@hesbynett.no> wrote:
>On 31/08/18 13:14, bol...@cylonHQ.com wrote:
>In discussions of "what does C have that C++ does not", I find
>designated initialisers coming up more often than VLAs. Perhaps it is
>because there is no equivalent in C++, while C++ programmers have
>various alternatives to use in situations where a VLA might be of
>interest in C.

To me, designated initialisers are one of those features that look like they
would be very useful but rarely are. But as you say, each to their own.

>> compilers - a hideous mash up of languages which unfortunately is required
>> because of Apple sticking with Objective-C for its APIs long after it should
>
>> have been taken out back and shot.
>
>I agree about Objective-C++. But that goes against your case - it is an
>example of a language created in compilers for actual use, rather than
>one that was planned or defined by academics or committees.

Well, not really. What it shows is that where there's a will there's a way,
and if the gcc and clang teams can merge the hideous syntax of Obj-C with C++,
then C99 should be a doddle.

>I'm lost here. You are blaming the language for being run by
>"academics" rather than real compilers (forgetting that the standards
>committee consists mainly of major language users and compiler

I'm not sure what you mean by "major language users". Corporations will
inevitably try to skew things their own way to suit their own compiler
implementations and needs, which don't necessarily intersect with those of
coders at large.

>developers, with only a few academics). You blame the compiler writers
>for making things up for their own interest instead of listening to
>users. You blame the users for being too busy working to tell compiler
>developers what they want.

You said it yourself - when solitary users do make a point it generally gets
lost in the noise.

>Everyone, it seems, is to blame for not solving your impossible request
>for making C++ a full superset of C.

I never said a full superset, I simply said it should support the syntax of
NINETEEN year old C99.


Paavo Helde

Aug 31, 2018, 11:46:53 AM
On 31.08.2018 16:47, bol...@cylonHQ.com wrote:
> I never said a full superset, I simply said it should support the syntax of
> NINETEEN year old C99.

I'm curious - how do C programmers protect themselves against stack
overflow with VLAs? A VLA takes an unknown amount of space on the
stack, and the stack size is typically very limited (on the order of a
few MB). Also, running out of stack is UB; there is no error recovery.

In my little test I could only call 5 nested functions, each having a VLA
of a meager 50,000 doubles, before getting a segfault. Max stack depth 5
seems too small even for C, not to speak about C++ where I routinely see
stack backtraces of depth 100 or more.

David Brown

unread,
Aug 31, 2018, 11:57:13 AM8/31/18
to
On 31/08/18 17:46, Paavo Helde wrote:
> On 31.08.2018 16:47, bol...@cylonHQ.com wrote:
>> I never said a full superset, I simply said it should support the
>> syntax of
>> NINETEEN year old C99.
>
> I'm curious - how do C programmers protect themselves against stack
> overflow with VLAs? A VLA takes an unknown amount of space on the
> stack, and the stack size is typically very limited (on the order of a
> few MB). Also, running out of stack is UB; there is no error recovery.

C programmers (good ones, anyway!) make sure they don't allocate overly
large VLAs - they check the sizes, or get the size from a known source. They
don't ask web site users to give them a number and use that as the size
of the array!

It is exactly the same as for any other allocation. You don't
pre-allocate a std::vector with a size that is unreasonable.

Yes, overrunning your stack space is UB. For most real programs, so is
overrunning your heap. On some OS's (like Linux, with typical
configurations) the OS will happily accept memory requests for much more
space than it can give you, and crash your application (or a different
program) when the space gets used. Even when OS's are more
conservative, you can happily allocate so much space that other programs
get moved out to swap and everything slows to a fraction of its normal
speed. The C or C++ code may have defined behaviour in such cases, but
the user certainly doesn't! And if you have an OS that is clever enough to
refuse the allocation, programs are rarely smart enough to deal with the
situation well. Exiting due to an unhandled exception, or simply
failing to do the job expected of the program, is not much better than
crashing from a stack overflow.


>
> In my little test I could only call 5 nested functions each having a VLA
> of meager 50,000 doubles, until getting a segfault. Max stack depth 5
> seems too small even for C, not to speak about C++ where I routinely see
> stack backtraces of depth 100 or more.
>

Use VLAs - or any other stack data - in a way that makes sense for the
target. There is nothing special about VLAs here - the same would
happen if you made a std::array of 50,000 doubles local to the function.


Paavo Helde

unread,
Aug 31, 2018, 12:32:17 PM8/31/18
to
That's right. Relying on the bad_alloc exception is not wise because at
that point the computer has already become unusably slow anyway and
there is also a good chance the OS will kill your program before it gets
bad_alloc.

Also, the memory reported as free is often actually not unused. The OS
is using it for disk file caching, for example. If programs grab it for
themselves, system performance will go down.

Decent memory-demanding software should actually monitor the available
free memory and reduce the number of parallel threads or refuse to start
new tasks if there is too little of it. OTOH, being nice will often just
mean that other programs can grab more. When I get a notice from my
program "avoiding parallelization because of low memory", I already know
this is because Firefox has eaten up 12 GB again.

David Brown

unread,
Aug 31, 2018, 12:44:17 PM8/31/18
to
Use the same attitude - thinking sensibly and realistically - when using
VLAs, and you will be equally successful at using them as with any other
kind of resource.

Take a cavalier attitude of allocating resources without a care, and
you'll get trouble regardless of whether it is stack space, heap space,
threads, processor time, or any other resource.

VLAs are not different or more dangerous than anything else in programming.


Chris Vine

unread,
Aug 31, 2018, 3:29:30 PM8/31/18
to
On Fri, 31 Aug 2018 18:44:05 +0200
David Brown <david...@hesbynett.no> wrote:
[snip]
> Use the same attitude - thinking sensibly and realistically - when using
> VLAs, and you will be equally successful at using them as with any other
> kind of resource.
>
> Take a cavalier attitude of allocating resources without a care, and
> you'll get trouble regardless of whether it is stack space, heap space,
> threads, processor time, or any other resource.
>
> VLAs are not different or more dangerous than anything else in programming.

I know that you know this, and moving back to the original proposition,
it is worth mentioning that VLAs are also by no means the main part of
the things in C99 that do not form part of valid C++. The proposition
of your respondent that "[VLAs] is pretty much the only real issue
preventing C++ being a proper superset of C" is weird.

C++ has numerous incompatibilities with C99, and even more with C11.
For example there are a number of C99 types, macros and operators
beginning with an underscore and followed by a capital letter which are
and always will be represented differently in C++ (_Bool, _Complex,
_Imaginary, _Pragma), and more in C11 (_Alignas, _Alignof, _Atomic,
_Noreturn, _Static_assert, _Thread_local). Aside from that, off the top
of my head, C99 has compound literals, designated initializers and the
restrict qualifier which are not in C++, C's unions behave differently
and its character literals are of different size. I am sure there are
lots of other incompatibilities as well.

I don't know anyone who writes code other than headers which is
deliberately designed to be both valid C and valid C++. Headers are
different. Many C libraries aim to be usable in C++ and their headers
are in the common subset between C89/90 and C++.

Sam

unread,
Aug 31, 2018, 8:28:31 PM8/31/18
to
bol...@cylonHQ.com writes:

> On Fri, 31 Aug 2018 07:02:47 -0400
> Sam <s...@email-scan.com> wrote:
> >This is a MIME GnuPG-signed message. If you see this text, it means that
> >your E-mail or Usenet software does not support MIME signed messages.
>
> Indeed it doesn't, I don't need any attachment shite with usenet.

Perhaps some day you may wish to reconsider your refusal to upgrade to 21st
century technology.

> >The Internet standard for MIME PGP messages, RFC 2015, was published in
> 1996.

That would be more than 20 years ago, according to my math.

> >Feel free to write your own C++ compiler, to show everyone how it should
> >be done.
>
> I should have known the utterly moronic "If you don't like something do it
> yourself" argument would pop up from some muppet eventually. Sorry sonny, I
> have a job to do. When you get one you might realise that free time can be
> scarce (he says writing a usenet post, but hey ;). But FWIW I wrote a (subset
> of) C interpreter about 10 years ago so I do have some skin in the game.

<Slow clap>. And I wrote a full C compiler, mid-to-late '90s, for the
then-current ANSI C, using flex for lexical analysis and bison for an LALR(1)
parser, producing m68k assembly code.

Ok, so now that we've exchanged our mutual credentials, now what?


Jorgen Grahn

unread,
Sep 1, 2018, 3:46:20 AM9/1/18
to
On Sat, 2018-09-01, Sam wrote:
...
> --=_monster.email-scan.com-127001-1535761698-0001
> Content-Type: text/plain; format=flowed; delsp=yes; charset="UTF-8"
> Content-Disposition: inline
> Content-Transfer-Encoding: 7bit
>
> bol...@cylonHQ.com writes:
>
>> On Fri, 31 Aug 2018 07:02:47 -0400
>> Sam <s...@email-scan.com> wrote:
>> >This is a MIME GnuPG-signed message. If you see this text, it means that
>> >your E-mail or Usenet software does not support MIME signed messages.
>>
>> Indeed it doesn't, I don't need any attachment shite with usenet.
>
> Perhaps some day you may wish to reconsider your refusal to upgrade to 21st
> century technology.
>
>> >The Internet standard for MIME PGP messages, RFC 2015, was published in
>> 1996.
>
> That would be more than 20 years ago, according to my math.

That doesn't matter. RFC 5536 (Nov 2009) defines MIME as used in
Usenet postings; *that's* the text which defines a meaning for your
posting.

My newsreader doesn't support that RFC, so I see your raw MIME
structure. But I'm not boltar, so I don't feel a need to be rude
about it.

/Jorgen

--
// Jorgen Grahn <grahn@ Oo o. . .
\X/ snipabacken.se> O o .

David Brown

unread,
Sep 1, 2018, 2:13:24 PM9/1/18
to
On 31/08/18 21:29, Chris Vine wrote:
> On Fri, 31 Aug 2018 18:44:05 +0200
> David Brown <david...@hesbynett.no> wrote:
> [snip]
>> Use the same attitude - thinking sensibly and realistically - when using
>> VLAs, and you will be equally successful at using them as with any other
>> kind of resource.
>>
>> Take a cavalier attitude of allocating resources without a care, and
>> you'll get trouble regardless of whether it is stack space, heap space,
>> threads, processor time, or any other resource.
>>
>> VLAs are not different or more dangerous than anything else in programming.
>
> I know that you know this, and moving back to the original proposition,
> it is worth mentioning that VLAs are also by no means the main part of
> the things in C99 that do not form part of valid C++. The proposition
> of your respondent that "[VLAs] is pretty much the only real issue
> preventing C++ being a proper superset of C" is weird.
>

Agreed.

> C++ has numerous incompatibilities with C99, and even more with C11.
> For example there are a number of C99 types, macros and operators
> beginning with an underscore and followed by a capital letter which are
> and always will be represented differently in C++ (_Bool, _Complex,
> _Imaginary, _Pragma), and more in C11 (_Alignas, _Alignof, _Atomic,
> _Noreturn, _Static_assert, _Thread_local). Aside from that, off the top
> of my head, C99 has compound literals, designated initializers and the
> restrict qualifier which are not in C++, C's unions behave differently
> and its character literals are of different size. I am sure there are
> lots of other incompatibilities as well.

Some of these are less incompatible than they might seem. C has
<stdbool.h> that gives you "bool", "true" and "false" that can be used
in almost the same way as C++'s "bool" type. C11's _Static_assert is
the same as C++'s static_assert in all but details of the name. Many
compilers (including gcc) have a "__restrict" or "__restrict__" in C++
mode, and support compound literals. With the right standard headers in
each language, you can have a subset of atomic uses that work in C11 and
in C++11.

However, there can be small or subtle differences in the details here,
which you might need to be careful about. And as I noted elsewhere,
designated initialisers seems to be the most important difference in
practice for many people. (It is coming to C++20, I believe.)

>
> I don't know anyone who writes code other than headers which is
> deliberately designed to be both valid C and valid C++. Headers are
> different. Many C libraries aim to be usable in C++ and their headers
> are in the common subset between C89/90 and C++.
>

I gave two other reasons in another post. But I agree that headers are
the most important overlap, and it is rarely difficult to make C headers
that are C++ compatible.

James Kuyper

unread,
Sep 2, 2018, 12:28:12 PM9/2/18
to
On 08/31/2018 03:29 PM, Chris Vine wrote:
...
> C++ has numerous incompatibilities with C99, and even more with C11.
> For example there are a number of C99 types, macros and operators
> beginning with an underscore and followed by a capital letter which are
> and always will be represented differently in C++ (_Bool, _Complex,
> _Imaginary, _Pragma), and more in C11 (_Alignas, _Alignof, _Atomic,
> _Noreturn, _Static_assert, _Thread_local). ...

Taking, for instance, _Bool, they had to do that because "bool" was not
a reserved identifier, so strictly conforming C90 code might have used
that identifier for just about anything, including things that might
conflict with a standard-defined "bool" keyword. Such code can compile
with no problems using C2011. However, most C code that uses the "bool",
"true" and "false" identifiers does so in a fashion that's entirely
compatible with _Bool. _Bool has a few characteristics that no
other C type has (conversion to _Bool produces either 0 or 1 and
nothing else; _Bool is the only integer type that a pointer value can be
converted to with a standard-defined result; and _Bool has an integer
conversion rank lower than that of char). If you want to modify existing
code to take advantage of those features, you can replace any existing
code that declares/defines those identifiers with

#include <stdbool.h>

Existing code that's intended to be compilable as either C or C++ code
will, necessarily, not declare/define those identifiers when
compiled as C++ code, since they are keywords in C++, so this problem
can't come up in such code. However, for compilation as C code, it can
also use <stdbool.h>, and with current versions of C++, it doesn't even
need to be conditionally #included - stdbool.h is also supported as a
part of the C++ standard library, explicitly required to NOT define
macros named bool, true, or false, so the conditional compilation will
occur inside of stdbool.h, not your own code.

My copy of n4567.pdf says that _Pragma is spelled the same in C++ as it
is in C. Is that out-of-date?

<complex.h> does the same kind of thing for _Complex, and <stdalign.h>
does it for _Alignas and _Alignof. <stdatomic.h>, <stdnoreturn.h>, and
<threads.h> could do the same thing for _Atomic, _Noreturn, and
_Thread_local, but they are not included in n4567.pdf, which might just
be due to it being out of date. You might be able to take the same
approach in your own code for _Static_assert. I haven't reviewed those
features to confirm whether there would be any problems with, for
instance, using _Static_assert in C and static_assert in C++ with the
same arguments.

Chris Vine

unread,
Sep 2, 2018, 1:38:56 PM9/2/18
to
I think you are perhaps missing the point I was making. I was not
commenting on the extent to which C++ has equivalents to _Bool,
_Complex, _Imaginary, _Alignas, _Alignof, _Atomic, _Noreturn,
_Static_assert and _Thread_local (it does have equivalents, albeit
bool is a macro in C stdbool.h, and a type in C++ stdbool.h and
cstdbool); it was that the fact that C++ doesn't support these names
(and doesn't have compound literals, designated initializers and the
restrict qualifier and is incompatible in other ways) means that VLAs
are not "pretty much the only real issue preventing C++ being a proper
superset of C". Given what you say about _Pragma I will have to look
that one up (I can't do that right now).

If you were suggesting C++ stdbool.h has to provide _Bool, that is
wrong. g++ provides it in C++ as an extension. But perhaps you meant
something else.

bol...@cylonhq.com

unread,
Sep 2, 2018, 2:58:00 PM9/2/18
to
On Fri, 31 Aug 2018 20:28:18 -0400
Sam <s...@email-scan.com> wrote:
>> Indeed it doesn't, I don't need any attachment shite with usenet.
>
>Perhaps some day you may wish to reconsider your refusal to upgrade to 21st
>century technology.

Usenet doesn't require attachments. Anyone who uses them on this service
is an idiot and simply wastes everyone else's bandwidth. If you want to post
binaries to usenet then take a time machine back to 1990 and head to one
of the alt.binaries groups.

>That would be more than 20 years ago, according to my math.

Well done. What's your next trick, telling us what the month is?

>> I should have known the utterly moronic "If you don't like something do it
>> yourself" argument would pop up from some muppet eventually. Sorry sonny, I
>> have a job to do. When you get one you might realise that free time can be
>> scarce (he says writing a usenet post, but hey ;). But FWIW I wrote a (subset
>
>> of) C interpreter about 10 years ago so I do have some skin in the game.
>
><Slow clap>. And I wrote a full C compiler, mid-to-late '90s, for the
>then-current ANSI C, using flex for lexical analysis and bison for an LALR(1)
>parser, producing m68k assembly code.
>
>Ok, so now that we've exchanged our mutual credentials, now what?

I wasn't boasting, simply making a point. You however obviously are.
Though admitting you had to use tools to write your parser and lexical analyser
instead of being able to do it yourself doesn't exactly paint you as a
master coder. I've written at least 15 parsers of various languages both
common and internal company ones and never had to resort to lex, yacc or
bison once.


Öö Tiib

unread,
Sep 2, 2018, 3:53:55 PM9/2/18
to
On Sunday, 2 September 2018 21:58:00 UTC+3, bol...@cylonhq.com wrote:
> Though admitting you had to use tools to write your parser and lexical
> analyser instead of being able to do it yourself doesn't exactly
> paint you as a master coder.

Admitting to use tools like a shovel to dig that hole you are in instead
of being able to do it yourself does not exactly paint you as a master
mole. Keep digging!

James Kuyper

unread,
Sep 2, 2018, 5:28:42 PM9/2/18
to
> _Static_assert and _Thread_local ...

Neither was I. I was making a point about how both language standards
have been carefully written to coordinate with each other, making the
fact that they use different spellings for those features a very minor
problem to solve.

> ... (it does have equivalents, albeit
> bool is a macro in C stdbool.h, and a type in C++ stdbool.h and
> cstdbool); it was that the fact that C++ doesn't support these names

My point was that both languages do support <stdbool.h>, and if it is
used, both languages do support "bool", so the fact that C++ doesn't
support _Bool, and that C doesn't support "bool" unless <stdbool.h> is
#included, isn't very important.

> (and doesn't have compound literals, designated initializers and the
> restrict qualifier and is incompatible in other ways) means that VLAs
> are not "pretty much the only real issue preventing C++ being a proper
> superset of C".

I have no disagreement with that point, I was merely pointing out that
the other issues you raised were substantially less problematic than
those issues.

> If you were suggesting C++ stdbool.h has to provide _Bool, that is

No, I was suggesting that a fully conforming C implementation of
<stdbool.h> has to define a macro named bool which must expand to _Bool
(C2011 7.18p2), while a fully conforming C++ implementation of C++ is
prohibited from #defining that macro (n4567.pdf, 18.10p8), with the net
result that code can be written to compile with either language by
#including <stdbool.h> and using bool. Do you disagree with any part of
that suggestion?

James Kuyper

unread,
Sep 2, 2018, 5:34:02 PM9/2/18
to
On 09/02/2018 12:46 PM, Stefan Ram wrote:
> James Kuyper <james...@alumni.caltech.edu> writes:
>> My copy of n4567.pdf says that _Pragma is spelled the same in C++ as it
>> is in C. Is that out-of-date?
>
> Both David Bowie and Fidel Castro were still alive when
> n4567 was published.

I've been programming computers since 1976. The date on n4567.pdf is
2015-11-09, which counts as recent events by my standards. My twins were
born that same year, an event that sticks in my memory much better than
either of the other two you mentioned.

> The most recent C++ draft I am aware of is n4762 as of
> 2018-07-07. It mentions »_Pragma«:

So the spelling is still the same between the C and C++ standards. Good.

James Kuyper

unread,
Sep 2, 2018, 6:03:58 PM9/2/18
to
On 09/02/2018 05:28 PM, James Kuyper wrote:
,,,
> (C2011 7.18p2), while a fully conforming C++ implementation of C++ is

That should have been "implementation of <stdbool.h>".

Chris Vine

unread,
Sep 2, 2018, 7:27:15 PM9/2/18
to
On Sun, 2 Sep 2018 17:28:31 -0400
James Kuyper <james...@alumni.caltech.edu> wrote:
> On 09/02/2018 01:38 PM, Chris Vine wrote:
[snip]
> > I think you are perhaps missing the point I was making. I was not
> > commenting on the extent to which C++ has equivalents to _Bool,
> > _Complex, _Imaginary, _Alignas, _Alignof, _Atomic, _Noreturn,
> > _Static_assert and _Thread_local ...
>
> Neither was I. I was making a point about how both language standards
> have been carefully written to coordinate with each other, making the
> fact that they use different spellings for those features a very minor
> problem to solve.

OK. Your postings are sometimes in a style which takes some reading.
When replying to a posting, if you are raising a different point than
the one made in the posting to which you are replying it would help me
at least if you were to indicate that more explicitly.

Do I agree with your point? That's a difficult one because as I said I
don't write code intended to be both valid C and valid C++ apart from
headers, and I don't know of anyone else who does so. That does not
mean no one is doing it. But isn't 'bool' something of a special case?
For things like _Alignas and _Alignof you would need to litter your
code with #ifdef's providing a C version and a C++ version of the
alignment specification to make it compilable in both languages. But
how would you make std::complex and _Complex implementations work
together in the same code? I am not convinced that they are intended
for that.

If all you are saying is that these things have been designed (apart
possibly from _Complex) to make it not too difficult to port your C
code to C++, then that is probably true. There has been a definite
effort for the runtimes to be workable together; in consequence the
respective specifications for the memory model and threads and atomics
are supposed to be equivalent.

> > ... (it does have equivalents, albeit
> > bool is a macro in C stdbool.h, and a type in C++ stdbool.h and
> > cstdbool); it was that the fact that C++ doesn't support these names
>
> My point was that both languages do support <stdbool.h>, and if it is
> used, both languages do support "bool", so the fact that C++ doesn't
> support _Bool, and that C doesn't support "bool" unless <stdbool.h> is
> #included, isn't very important.
>
> > (and doesn't have compound literals, designated initializers and the
> > restrict qualifier and is incompatible in other ways) means that VLAs
> > are not "pretty much the only real issue preventing C++ being a proper
> > superset of C".
>
> I have no disagreement with that point, I was merely pointing out that
> the other issues you raised were substantially less problematic that
> those issues.
>
> > If you were suggesting C++ stdbool.h has to provide _Bool, that is
>
> No, I was suggesting that a fully conforming C implementation of
> <stdbool.h> has to define a macro named bool which must expand to _Bool
> (C2011 7.18p2), while a fully conforming C++ implementation of C++ is
> prohibited from #defining that macro (n4567.pdf, 18.10p8), with the net
> result that code can be written to compile with either language by
> #including <stdbool.h> and using bool. Do you disagree with any part of
> that suggestion?

No, not really, although I still would not put 'bool' in a header
intended to be used in both C99 and C++. As far as I am aware there is
no requirement for C _Bool and C++ bool to be the same size. The
requirement for both is that objects of those types should evaluate to
either 1 or 0 for true and false respectively. It would be a poor
implementation that had the types at different sizes, however.

Sam

unread,
Sep 3, 2018, 9:15:55 AM9/3/18
to
bol...@cylonHQ.com writes:

> Usenet doesn't require attachments. Anyone who uses them on this service
> is an idiot and simply wastes everyone elses bandwidth. If you want to post
> binaries to usenet then take a time machine back to 1990 and head to one
> of the alt.binaries groups.

I must've missed the memo that announced your appointment to the post of
Commissar of Usenet Idiots. When did it happen?

> >That would be more than 20 years ago, according to my math.
>
>> Well done. What's your next trick, telling us what the month is?

Sure, it was October. October of 1996. For some reason I thought it was
September though, not sure why.

What is really confusing, actually, is why you were unable to look it up
yourself. The document in question is publicly available, of course. Anyone
can simply pull it up. And, look, it even has the author and a date on it!

You must've been too busy with your exhaustive search for Usenet idiots; but
HTH nevertheless.

> I wasn't boasting, simply making a point. You however obviously are.
> Though admitting you had to use tools to write your parser and lexical
> analyser
> instead of being able to do it yourself doesn't exactly paint you as a
> master coder.

Oh, you must be using your own computer you've constructed yourself, by
mining lead, copper, and other minerals; fabricating your own
semiconductors, and finally assembling the Univac that you're using just to
write this message. I'm impressed.

> I've written at least 15 parsers of various languages both
> common and internal company ones and never had to resort to lex, yacc or
> bison once.

You have my condolensces.

bol...@cylonhq.com

unread,
Sep 3, 2018, 12:19:13 PM9/3/18
to
I'm sure that made sense to someone.

bol...@cylonhq.com

unread,
Sep 3, 2018, 12:24:29 PM9/3/18
to
On Mon, 03 Sep 2018 09:15:41 -0400
Sam <s...@email-scan.com> wrote:
>This is a MIME GnuPG-signed message. If you see this text, it means that
>your E-mail or Usenet software does not support MIME signed messages.
>The Internet standard for MIME PGP messages, RFC 2015, was published in 1996.
>To open this message correctly you will need to install E-mail or Usenet
>software that supports modern Internet standards.
>
>--=_monster.email-scan.com-40844-1535980541-0001
>Content-Type: text/plain; format=flowed; delsp=yes; charset="UTF-8"
>Content-Disposition: inline
>Content-Transfer-Encoding: 7bit
>
>bol...@cylonHQ.com writes:
>
>> Usenet doesn't require attachments. Anyone who uses them on this service
> is an idiot and simply wastes everyone else's bandwidth. If you want to post
>> binaries to usenet then take a time machine back to 1990 and head to one
>> of the alt.binaries groups.
>
>I must've missed the memo that announced your appointment to the post of
>Commissar of Usenet Idiots. When did it happen?

Usenet is a text based service and pretty much always has been. Since you
only post text anyway why exactly do you need to send MIME encoded posts?
You honestly think anyone is going to use your PGP public key to email you
an encrypted reply to some post you made? You think you're that important?
Or maybe you're just an arrogant ass.

>Sure, it was October. October of 1996. For some reason I thought it was
>September though, not sure why.

No, you'd have to go back further than that for you to fit in.

[rest of "I can't think of a response" ad hominem BS snipped]

>> instead of being able to do it yourself doesn't exactly paint you as a
>> master coder.
>
>Oh, you must be using your own computer you've constructed yourself, by
>mining lead, copper, and other minerals; fabricating your own
>semiconductors, and finally assembling the Univac that you're using just to
>write this message. I'm impressed.

That would be hard work. Writing one's own parser isn't, so long as you know
what you're doing. Apparently that rules you out.

>> I've written at least 15 parsers of various languages both
>> common and internal company ones and never had to resort to lex, yacc or
>> bison once.
>
>You have my condolensces.

It's called working for a living; maybe try it one day when you're out of
short trousers.

>
>-----BEGIN PGP SIGNATURE-----
>
>iQIcBAABAgAGBQJbjTP9AAoJEGs6Yr4nnb8ldMMQANgOO1D33a6VSMd/3AJCBO5o
>8mDjcb85naJMr4AHIw6vrNOwCvHRY/ahRkny477pg8Z7NMTTOC60+xsxoAY+wFs9

"Look how important I am! I've got PGP! Please send me an email!"

Alf P. Steinbach

unread,
Sep 3, 2018, 12:54:14 PM9/3/18
to
On 03.09.2018 15:15, Sam wrote:
> You have my condolensces.

Hey, shouldn't that be «condolences»?

Cheers!, 🍻

- Alf


Sam

unread,
Sep 3, 2018, 9:08:32 PM9/3/18
to
It sure did.


Sam

unread,
Sep 3, 2018, 9:28:01 PM9/3/18
to
bol...@cylonHQ.com writes:

> On Mon, 03 Sep 2018 09:15:41 -0400
> Sam <s...@email-scan.com> wrote:
>
> >bol...@cylonHQ.com writes:
> >
> >> Usenet doesn't require attachments. Anyone who uses them on this service
> >> is an idiot and simply wastes everyone else's bandwidth. If you want to
> post
> >> binaries to usenet then take a time machine back to 1990 and head to one
> >> of the alt.binaries groups.
> >
> >I must've missed the memo that announced your appointment to the post of
> >Commissar of Usenet Idiots. When did it happen?
>
> Usenet is a text based service and pretty much always has been. Since you
> only post text anyway why exactly do you need to send MIME encoded posts?

Because MIME is used for text content? For example, if I wished to tell you
that "vous êtes un crétin", without MIME it wouldn't be possible.

And that would be a darn shame.

> You honestly think anyone is going to use your PGP public key to email you
> an encrypted reply to some post you made? You think you're that important?

I think I'm very important, yes.

> Or maybe you're just an arrogant ass.

I'm that too.

> >Sure, it was October. October of 1996. For some reason I thought it was
> >September though, not sure why.
>
> No, you'd have to go back further than that for you to fit in.

Can you translate that into English? You're not making sense. The
publication date is pretty clear. Are you suggesting a grand conspiracy, of
some kind, which attempts for unknown reason to misrepresent the publication
date of the formal specification for PGP-signed MIME content? Why would
someone want to misrepresent the publication date? Enlighten us, please.

> >> instead of being able to do it yourself doesn't exactly paint you as a
> >> master coder.
> >
> >Oh, you must be using your own computer you've constructed yourself, by
> >mining lead, copper, and other minerals; fabricating your own
> >semiconductors, and finally assembling the Univac that you're using just to
> >write this message. I'm impressed.
>
> That would be hard work. Writing ones own parser isn't.

Who said it was hard?

> So long as you know
> what you're doing. Apparently that rules you out.

You are forced to qualify and hedge your bets by using the "apparently"
qualifier. Well, when you figure it out conclusively, be sure to let me
know. I'm always anxious to know what I'm doing, at any particular moment,
and I'm so glad I finally found someone to tell me that. You're like a
personal Google Assistant. I never found it useful, or much of a source of
entertainment. But you seem to do a much better job at it.

And I feel obligated to return the favor, and I tell you that I have no
doubts at all, not any, whatsoever, as to whether you know or don't know
what you're doing. Not much of a mystery, there.

>
> >> I've written at least 15 parsers of various languages both
> >> common and internal company ones and never had to resort to lex, yacc or
> >> bison once.
> >
> >You have my condolensces.
>
> It's called working for a living, maybe try it one day when you're out of
> short trousers.

I'll keep your very profound advice in mind, thank you very much.

bol...@cylonhq.com

unread,
Sep 5, 2018, 2:39:59 PM9/5/18
to
On Mon, 03 Sep 2018 21:27:51 -0400
Sam <s...@email-scan.com> wrote:
>> Usenet is a text based service and pretty much always has been. Since you
>> only post text anyway why exactly do you need to send MIME encoded posts?
>
>Because MIME is used for text content? For example, if I wished to tell you
>that "vous êtes un crétin", without MIME it wouldn't be possible.

Wouldn't it? I wonder how usenet coped with 8-bit data and character sets
before MIME support came along... Oh, I know! But sadly you don't. Why not go
look it up, you mouth breather.

>And that would be a darn shame.

Perhaps being able to speak 4 words of French is impressive in the US, not
so much over here in Europe.

>> No, you'd have to go back further than that for you to fit in.
>
>Can you translate that into English? You're not making sense. The =20

A bit complex for you? Posting binaries to usenet (how did they do it
without MIME... its a mystery) was big back in the late 80s and early 90s.
Figure out the rest yourself.

>date of the formal specification for PGP-signed MIME content? Why would =20
>someone want to misrepresent the publication date? Enlighten us, please.

Fuck knows what you're on about there, off in your own little world.

>> >write this message. I'm impressed.
>>
>> That would be hard work. Writing ones own parser isn't.
>
>Who said it was hard?

Well you needed help.

>And I feel obligated to return the favor, and I tell you that I have no =20
>doubts at all, not any, whatsoever, as to whether you know or don't know =20
>what you're doing. Not much of a mystery, there.

When you can demonstrate you have the basic ability to find out simple facts
about usenet then feel free to criticise others.

>> short trousers.
>
>I'll keep your very profound advice in mind, thank you very much.

I would if I were you.

bol...@cylonhq.com
Sep 5, 2018, 2:46:24 PM

On Mon, 03 Sep 2018 21:08:23 -0400
Sam <s...@email-scan.com> wrote:
>This is a MIME GnuPG-signed message. If you see this text, it means that
>your E-mail or Usenet software does not support MIME signed messages.
>The Internet standard for MIME PGP messages, RFC 2015, was published in 1996.
>To open this message correctly you will need to install E-mail or Usenet
>software that supports modern Internet standards.
>
>--=_monster.email-scan.com-49708-1536023303-0001
>Content-Type: text/plain; format=flowed; delsp=yes; charset="UTF-8"
>Content-Disposition: inline
>Content-Transfer-Encoding: 7bit
>
I can't say I'm surprised you find his piss poor grammar intelligable.

Now, please excuse me, I must use tools like compiler to doing code!

Sam
Sep 6, 2018, 7:15:46 AM

bol...@cylonHQ.com writes:

> On Mon, 03 Sep 2018 21:27:51 -0400
> Sam <s...@email-scan.com> wrote:
> >> Usenet is a text based service and pretty much always has been. Since you
> >> only post text anyway why exactly do you need to send MIME encoded posts?
> >
> >Because MIME is used for text content? For example, if I wished to tell you
> >that "vous êtes un crétin", without MIME it wouldn't be possible.
>
> Wouldn't it? I wonder how usenet coped with 8 bit data and character sets
> before MIME support came along... Oh, I know! But sadly you don't. Why not go
> look it up you mouth breather.

That's "Mr. Mouth Breather" to you.

And it didn't. Everything depended on pot luck that someone's client used
the same character set as you did. Until MIME came along and standardized
everything, welcoming the 21st century with open arms.

But, for some peculiar reasons, some people stubbornly prefer to live in the
past, trying to suffer the modern world using obsolete technology. Why? Beats
me, I have no idea.

> >And that would be a darn shame.
>
> Perhaps being able to speak 4 words of French is impressive in the US, not
> so much over here in Europe.

So, what exactly impresses you folks, over there? You know you're arguing
against self-interest, you know. Without the internationalization brought on
by MIME you wouldn't be able to use all your funny-looking characters in
your messages.

>
> >> No, you'd have to go back further than that for you to fit in.
> >
> >Can you translate that into English? You're not making sense. The =20
>
> A bit complex for you? Posting binaries to usenet (how did they do it
> without MIME... its a mystery) was big back in the late 80s and early 90s.
> Figure out the rest youself.

We're not talking about binaries, but plain text messages. Try to keep up.

> >date of the formal specification for PGP-signed MIME content? Why would =20
> >someone want to misrepresent the publication date? Enlighten us, please.
>
> Fuck knows what you're on about there, off in your own little world.

I'm sorry to have disturbed you with reality.

> >> >write this message. I'm impressed.
> >>
> >> That would be hard work. Writing ones own parser isn't.
> >
> >Who said it was hard?
>
> Well you needed help.

No, I do not believe I ever asked for any help. I was looking for some free
entertainment, which you helpfully provided.


Sam
Sep 6, 2018, 7:21:15 AM

bol...@cylonHQ.com writes:

> >> >Admitting to use tools like shovel to dig that hole you are in instead
> >> >of being able to doing it yourself does not exactly paint you as master
> >> >mole. Keep digging!
> >>
> >> I'm sure that made sense to someone.
> >
> >It sure did.
>
> I can't say I'm surprised you find his piss poor grammar intelligable.

You must've meant to write ".. that you find ..". Just trying to help you on
your quest to improve the usage of King's English on the intertubes.

Before you get your panties in a wad about someone else's grammar and
spelling, you should make sure that yours is up to par. Looks pretty bad,
otherwise.

I wouldn't necessarily complain about an occasional typo, or a missed
preposition, myself. I make mistakes too. But whining about typos or
grammatical errors typically indicates a feeble and immature mind.

> Now, please excuse me, I must use tools like compiler to doing code!

"like .. a .. compiler", or perhaps "like compilers".

"to .. to .. code", or better, "to write code".

May the force be with you.


bol...@cylonhq.com
Sep 7, 2018, 8:02:21 AM

On Thu, 06 Sep 2018 07:15:36 -0400
Sam <s...@email-scan.com> wrote:
>> Wouldn't it? I wonder how usenet coped with 8 bit data and character sets
>> before MIME support came along... Oh, I know! But sadly you don't. Why not go
>> look it up you mouth breather.
>
>That's "Mr. Mouth Breather" to you.
>
>And it didn't. Everything depended on pot luck that someone's client used

Look up uuencode you pillock. If you can't even manage that then nothing
else you say is worth a damn.

>> A bit complex for you? Posting binaries to usenet (how did they do it
>> without MIME... its a mystery) was big back in the late 80s and early 90s.
>> Figure out the rest youself.
>
>We're not talking about binaries, but plain text messages. Try to keep up.

FFS utf8 is 8 bit, thats binary as far as usenet is concerned.

There are some ignorant idiots on here, but you definitely win this month's
prize.

bol...@cylonhq.com
Sep 7, 2018, 8:03:42 AM

On Thu, 06 Sep 2018 07:21:04 -0400
Sam <s...@email-scan.com> wrote:
>> Now, please excuse me, I must use tools like compiler to doing code!
>
>"like .. a .. compiler", or perhaps "like compilers".
>
>"to .. to .. code", or better, "to write code".
>
>May the force be with you.

Congratulations, you can't spot irony either. But then you're a yank so that
will come as a surprise to precisely no one.

Sam
Sep 7, 2018, 6:20:21 PM

bol...@cylonHQ.com writes:

> On Thu, 06 Sep 2018 07:15:36 -0400
> Sam <s...@email-scan.com> wrote:
> >> Wouldn't it? I wonder how usenet coped with 8 bit data and character sets
> >> before MIME support came along... Oh, I know! But sadly you don't. Why not go
> >> look it up you mouth breather.
> >
> >That's "Mr. Mouth Breather" to you.
> >
> >And it didn't. Everything depended on pot luck that someone's client used
>
> Look up uuencode you pillock. If you can't even manage that then nothing
> else you say is worth a damn.

And how exactly did uuencoded content specify its character set, and type,
Einstein?

> >> A bit complex for you? Posting binaries to usenet (how did they do it
> >> without MIME... its a mystery) was big back in the late 80s and early 90s.
> >> Figure out the rest youself.
> >
> >We're not talking about binaries, but plain text messages. Try to keep up.
>
> FFS utf8 is 8 bit, thats binary as far as usenet is concerned.

No, utf8 is not binary. Try again.

> There are some ignorant idiots on here, but you definitely win this month's
> prize.

What prize do I get?

Sam
Sep 7, 2018, 6:23:38 PM

bol...@cylonHQ.com writes:

> On Thu, 06 Sep 2018 07:21:04 -0400
> Sam <s...@email-scan.com> wrote:
> >> Now, please excuse me, I must use tools like compiler to doing code!
> >
> >"like .. a .. compiler", or perhaps "like compilers".
> >
> >"to .. to .. code", or better, "to write code".
> >
> >May the force be with you.
>
> Congratulations, you can't spot irony either.

So, your claim of being an uberhacker, and the 2nd coming of Kevin Mitnick,
is irony?

> But then you're a yank so that
> will come as a surprise to precisely no one.

Indeed. You french are world-famous for your sense of humor. You even think
that Jerry Lewis is funny.


bol...@cylonhq.com
Sep 10, 2018, 4:51:21 AM

On Fri, 07 Sep 2018 18:20:08 -0400
Sam <s...@email-scan.com> wrote:
>> Look up uuencode you pillock. If you can't even manage that then nothing
>> else you say is worth a damn.
>
>And how exactly did uuencoded content specify its character set, and type,
>Einstein?

*facepalm*

Listen, why not go look up how UTF8 works instead of ever increasingly
demonstrating what an utter muppet you are. UTF8 doesn't need to specify a
character set. Thats the WHOLE FECKING POINT OF IT!

Jesus christ...

>> FFS utf8 is 8 bit, thats binary as far as usenet is concerned.
>
>No, utf8 is not binary. Try again.

Yes, it is. Its an 8 bit format, usenet is 7 bit. You work out the rest you
cretin.

bol...@cylonhq.com
Sep 10, 2018, 4:53:16 AM

On Fri, 07 Sep 2018 18:23:29 -0400
Sam <s...@email-scan.com> wrote:
>This is a MIME GnuPG-signed message. If you see this text, it means that
>your E-mail or Usenet software does not support MIME signed messages.
>The Internet standard for MIME PGP messages, RFC 2015, was published in 1996.
>To open this message correctly you will need to install E-mail or Usenet
>software that supports modern Internet standards.
>
>--=_monster.email-scan.com-125108-1536359009-0003
>Content-Type: text/plain; format=flowed; delsp=yes; charset="UTF-8"
>Content-Disposition: inline
>Content-Transfer-Encoding: 7bit
>
>bol...@cylonHQ.com writes:
>
>> On Thu, 06 Sep 2018 07:21:04 -0400
>> Sam <s...@email-scan.com> wrote:
>> >> Now, please excuse me, I must use tools like compiler to doing code!
>> >
>> >"like .. a .. compiler", or perhaps "like compilers".
>> >
>> >"to .. to .. code", or better, "to write code".
>> >
>> >May the force be with you.
>>
>> Congratulations, you can't spot irony either.
>
>So, your claim of being an uberhacker, and the 2nd coming of Kevin Mitnick,
>is irony?

If I didn't read it I wouldn't believe someone who posts to a group like this
could be as gormless as you. Hey ho, takes all sorts I suppose.

>
>> But then you're a yank so that
>> will come as a surprise to precisely no one.
>
>Indeed. You french are world-famous for your sense of humor. You even think
>that Jerry Lewis is funny.

I'm not french. Guess again.

bol...@cylonhq.com
Sep 10, 2018, 5:23:39 AM

On Fri, 07 Sep 2018 18:20:08 -0400
Sam <s...@email-scan.com> wrote:
>bol...@cylonHQ.com writes:
>> Look up uuencode you pillock. If you can't even manage that then nothing
>> else you say is worth a damn.
>
>And how exactly did uuencoded content specify its character set, and type,
>Einstein?

How did any document specify its character set before MIME came along?

>> FFS utf8 is 8 bit, thats binary as far as usenet is concerned.
>
>No, utf8 is not binary. Try again.

Its 8 bit, usenet is 7 bit only. You work out the rest.

Sam
Sep 10, 2018, 6:52:13 AM

bol...@cylonHQ.com writes:

> On Fri, 07 Sep 2018 18:20:08 -0400
> Sam <s...@email-scan.com> wrote:
> >> Look up uuencode you pillock. If you can't even manage that then nothing
> >> else you say is worth a damn.
> >
> >And how exactly did uuencoded content specify its character set, and type,
> >Einstein?
>
> *facepalm*
>
> Listen, why not go look up how UTF8 works instead of ever increasingly

I know how UTF8 works.

> demonstrating what an utter muppet you are. UTF8 doesn't need to specify a
> character set. Thats the WHOLE FECKING POINT OF IT!

The whole fecking point is that a random blob of uuencoded content is
completely context-free. There's nothing about it that says "I contain UTF-8
text". For that, you need MIME. That's what MIME is for. It's not a
complicated concept to understand.

>
> Jesus christ...
>
> >> FFS utf8 is 8 bit, thats binary as far as usenet is concerned.
> >
> >No, utf8 is not binary. Try again.
>
> Yes, it is.

No, it's not.

> Its an 8 bit format, usenet is 7 bit.

Usenet hasn't been 7 bit for a long long time. If you insist on living in
ancient history, it's your choice alone and you don't get it to impose on
everyone else, just to make it easy for your obsolete message reader.

> You work out the rest you cretin.

That's Mr. Cretin to you.

Sam
Sep 10, 2018, 6:53:04 AM

bol...@cylonHQ.com writes:

> If I didn't read it I wouldn't believe someone who posts to a group like this
> could be as gormless as you. Hey ho, takes all sorts I suppose.

That only shows that you don't know everything.

> >> But then you're a yank so that
> >> will come as a surprise to precisely no one.
> >
> >Indeed. You french are world-famous for your sense of humor. You even think
> >that Jerry Lewis is funny.
>
> I'm not french.

I'm pretty sure you are.

Sam
Sep 10, 2018, 6:54:33 AM

bol...@cylonHQ.com writes:

> On Fri, 07 Sep 2018 18:20:08 -0400
> Sam <s...@email-scan.com> wrote:
> >bol...@cylonHQ.com writes:
> >> Look up uuencode you pillock. If you can't even manage that then nothing
> >> else you say is worth a damn.
> >
> >And how exactly did uuencoded content specify its character set, and type,
> >Einstein?
>
> How did any document specify its character set before MIME came along?

It didn't. That's the whole point, Einstein.

> >> FFS utf8 is 8 bit, thats binary as far as usenet is concerned.
> >
> >No, utf8 is not binary. Try again.
>
> Its 8 bit, usenet is 7 bit only. You work out the rest.

Insisting that the sky is purple won't really change its color. You can
stomp your little feet, and demand that Usenet must be 7 bits, but all non-
English writer will just laugh and point their finger at you – look,
another one!

bol...@cylonhq.com
Sep 10, 2018, 8:13:46 AM

On Mon, 10 Sep 2018 06:54:25 -0400
Sam <s...@email-scan.com> wrote:
>> Its 8 bit, usenet is 7 bit only. You work out the rest.
>
>Insisting that the sky is purple won't really change its color. You can
>stomp your little feet, and demand that Usenet must be 7 bits, but all non-
>English writer will just laugh and point their finger at you – look,
>another one!

Perhaps you'd like to explain to us why your newsreader is base64 encoding
your pgp sig then? Oh, whats that, you've never seen it? Here it is:

>--=_monster.email-scan.com-181043-1536576865-0003
>Content-Type: application/pgp-signature
>Content-Transfer-Encoding: 7bit

Oh, whats this? ^^^^

>
>-----BEGIN PGP SIGNATURE-----
>
>iQIcBAABAgAGBQJblk1hAAoJEGs6Yr4nnb8l9DYQAPYAfnA8Mw+cambFb/J6apOn
>7mwK7QIiOTy6FNTyOb8oKVj0/2jDRuOb5iM+7p49zXr+OgrSIZfExWan0+S9Fz2x

etc.

Education is very useful. You should try it sometime.

>> I'm not french.
>
>I'm pretty sure you are.

You seem to think its an insult. If I had to choose between living in france
or the USA it wouldn't be the latter.

bol...@cylonhq.com
Sep 10, 2018, 8:14:33 AM

On Mon, 10 Sep 2018 06:54:25 -0400
Sam <s...@email-scan.com> wrote:
>> Its 8 bit, usenet is 7 bit only. You work out the rest.
>
>Insisting that the sky is purple won't really change its color. You can
>stomp your little feet, and demand that Usenet must be 7 bits, but all non-
>English writer will just laugh and point their finger at you – look,
>another one!

Perhaps you'd like to explain to us why your newsreader is base64 encoding
your pgp sig then? Pardon, whats that, you've never seen it? Here it is:

>--=_monster.email-scan.com-181043-1536576865-0003
>Content-Type: application/pgp-signature
>Content-Transfer-Encoding: 7bit

Oh, and whats this? ^^^^

bol...@cylonhq.com
Sep 10, 2018, 8:14:34 AM

On Mon, 10 Sep 2018 06:52:02 -0400
Sam <s...@email-scan.com> wrote:
>> Listen, why not go look up how UTF8 works instead of ever increasingly
>
>I know how UTF8 works.

No, you really don't.

>> demonstrating what an utter muppet you are. UTF8 doesn't need to specify a
>> character set. Thats the WHOLE FECKING POINT OF IT!
>
>The whole fecking point is that a random blob of uuencoded content is
>completely context-free. There's nothing about it that says "I contain UTF-8
>text". For that, you need MIME. That's what MIME is for. It's not a
>complicated concept to understand.

For the type of content and the code page (or not) there's the usenet header
field "Content-Type" - the clue is in the name. You don't need mime unless you
really need multipart messages. And why would you post those to usenet? Try
learning to use google.

>> >No, utf8 is not binary. Try again.
>>
>> Yes, it is.
>
>No, it's not.

Ok, define binary data for us in the context of NNTP.


Sam
Sep 10, 2018, 8:29:13 PM

bol...@cylonHQ.com writes:

> >The whole fecking point is that a random blob of uuencoded content is
> >completely context-free. There's nothing about it that says "I contain UTF-8
> >text". For that, you need MIME. That's what MIME is for. It's not a
> >complicated concept to understand.
>
> For the type of content and the code page (or not) there's the usenet header
> field "Content-Type" - the clue is in the name. You don't need mime unless

And do you know where the Content-Type: header came from, Einstein?

MIME.

Thank you for playing. We have some lovely consolation prizes for you.

> >> >No, utf8 is not binary. Try again.
> >>
> >> Yes, it is.
> >
> >No, it's not.
>
> Ok, define binary data for us in the context of NNTP.

It means "you don't know what you're talking about".

Sam
Sep 10, 2018, 8:37:43 PM

bol...@cylonHQ.com writes:

> On Mon, 10 Sep 2018 06:54:25 -0400
> Sam <s...@email-scan.com> wrote:
> >> Its 8 bit, usenet is 7 bit only. You work out the rest.
> >
> >Insisting that the sky is purple won't really change its color. You can
> >stomp your little feet, and demand that Usenet must be 7 bits, but all non-
> >English writer will just laugh and point their finger at you – look,
> >another one!
>
> Perhaps you'd like to explain to us why your newsreader is base64 encoding
> your pgp sig then?

It is not, Einstein.

> Oh, whats that, you've never seen it? Here it is:

Of course I've seen it. After all, I wrote it.

> >--=_monster.email-scan.com-181043-1536576865-0003
> >Content-Type: application/pgp-signature
> >Content-Transfer-Encoding: 7bit
>
> Oh, whats this? ^^^^

It's called the "Content-Transfer-Encoding" MIME header. HTH.

> >
> >-----BEGIN PGP SIGNATURE-----
> >
> >iQIcBAABAgAGBQJblk1hAAoJEGs6Yr4nnb8l9DYQAPYAfnA8Mw+cambFb/J6apOn
> >7mwK7QIiOTy6FNTyOb8oKVj0/2jDRuOb5iM+7p49zXr+OgrSIZfExWan0+S9Fz2x

And that's called an ascii-armored PGP signature. It may come as a shock to
you, but it was not created by my newsreader. You really have a lot to
learn, grasshopper.

> etc.
>
> Education is very useful. You should try it sometime.

You should try doing your homework. And listen, and learn from, your mental
superiors.

> >> I'm not french.
> >
> >I'm pretty sure you are.
>
> You seem to think its an insult. If I had to choose between living in france
> or the USA it wouldn't be the latter.

Au revoir.

bol...@cylonhq.com
Sep 11, 2018, 4:43:51 AM

On Mon, 10 Sep 2018 20:29:02 -0400
Sam <s...@email-scan.com> wrote:
>> >text". For that, you need MIME. That's what MIME is for. It's not a
>> >complicated concept to understand.
>>
>> For the type of content and the code page (or not) there's the usenet header
>> field "Content-Type" - the clue is in the name. You don't need mime unless
>
>And do you know where the Content-Type: header came from, Einstein?
>
>MIME.
>
>Thank you for playing. We have some lovely consolation prizes for you.

24 hours and thats the best you can come up with? Where it came from doesn't
matter, the fact is you don't need MIME in a usenet post which brings us back
full circle.

>> Ok, define binary data for us in the context of NNTP.
>
>It means "you don't know what you're talking about".

Is that yankie for "Excuse me while I escape from this corner I've painted
myself in to"?

>> >-----BEGIN PGP SIGNATURE-----
>> >
>> >iQIcBAABAgAGBQJblk1hAAoJEGs6Yr4nnb8l9DYQAPYAfnA8Mw+cambFb/J6apOn
>> >7mwK7QIiOTy6FNTyOb8oKVj0/2jDRuOb5iM+7p49zXr+OgrSIZfExWan0+S9Fz2x
>
>And that's called an ascii-armored PGP signature. It may come as a shock to

Woah, "ascii-armoured", thats a big impressive phrase! Armour! Tough and manly!
No pussy encoding for you eh? LOL :)

I won't ask if you actually know what it means since clearly you don't and have
just learnt it parrot fashion from PGP For Dummies. Also since you can't use
google either I'll just point this out to you:

https://en.wikipedia.org/wiki/Binary-to-text_encoding

"These encodings are necessary for transmission of data when the channel does
not allow binary data (such as email or NNTP) or is not 8-bit clean. PGP
documentation (RFC 4880) uses the term ASCII armor for binary-to-text encoding
when referring to Base64."
^^^^^^

Have another 24 hours to think up a good backpedal for this why don't you.

Scott Lurndal
Sep 11, 2018, 9:44:46 AM

Sam <s...@email-scan.com> writes:

>bol...@cylonHQ.com writes:
>
>> >The whole fecking point is that a random blob of uuencoded content is
>> >completely context-free. There's nothing about it that says "I contain UTF-8
>> >text". For that, you need MIME. That's what MIME is for. It's not a
>> >complicated concept to understand.
>>
>> For the type of content and the code page (or not) there's the usenet header
>> field "Content-Type" - the clue is in the name. You don't need mime unless
>
>And do you know where the Content-Type: header came from, Einstein?
>
>MIME.

Actually, the headers in USENET articles are defined in RFC 1036, not RFC 822.

"the USENET News standard is more restrictive than the Internet standard, placing additional
requirements on each message and forbidding use of certain Internet features"

Given that you're the only person posting MIME it seems you're the outlier.

You may also wish to consider that the second "M" in "MIME" stands for
MAIL.

Ben Bacarisse
Sep 11, 2018, 10:26:33 AM

sc...@slp53.sl.home (Scott Lurndal) writes:

> Sam <s...@email-scan.com> writes:
>
>>bol...@cylonHQ.com writes:
>>
>>> >The whole fecking point is that a random blob of uuencoded content is
>>> >completely context-free. There's nothing about it that says "I contain UTF-8
>>> >text". For that, you need MIME. That's what MIME is for. It's not a
>>> >complicated concept to understand.
>>>
>>> For the type of content and the code page (or not) there's the usenet header
>>> field "Content-Type" - the clue is in the name. You don't need mime unless
>>
>>And do you know where the Content-Type: header came from, Einstein?
>>
>>MIME.
>
> Actually, the headers in USENET articles are defined in RFC 1036, not
> RFC 822.

RFC 1036 is obsolete. The format is defined in RFC 5536:

Appendix B. Differences from RFC 1036 and Its Derivatives

o MIME is recognized as an integral part of Netnews.

<snip>
--
Ben.

Alf P. Steinbach
Sep 11, 2018, 1:24:31 PM

Bah. Microsoft propaganda.

Cheers!,

- Alf

Sam
Sep 11, 2018, 6:14:48 PM

bol...@cylonHQ.com writes:

> On Mon, 10 Sep 2018 20:29:02 -0400
> Sam <s...@email-scan.com> wrote:
> >> >text". For that, you need MIME. That's what MIME is for. It's not a
> >> >complicated concept to understand.
> >>
> >> For the type of content and the code page (or not) there's the usenet header
> >> field "Content-Type" - the clue is in the name. You don't need mime unless
> >
> >And do you know where the Content-Type: header came from, Einstein?
> >
> >MIME.
> >
> >Thank you for playing. We have some lovely consolation prizes for you.
>
> 24 hours and thats the best you can come up with?

Short, precise, and on the point, is the only requirement. No need to be
verbose. And I do not have a specific posting schedule. I choose to grace
you with my presence whenever I feel like it. Sometimes it's very often,
sometimes it's every once in a while. You never know, so it's always a
pleasant surprise, and that's how I like it.

> Where it came from doesn't matter,

Let's ignore facts that are somewhat inconvenient, ok?

> the fact is you don't need MIME in a usenet post which brings us back
> full circle.

You also don't need a "Subject:" header either. Or a valid "From:" header
either; none of which are required by NNTP. It seems that whatever point you
were trying to make by that: it's so profound, so mind-blowing, and so Earth-
shattering, that you are the only person in the world who actually knows
what it is.

> >> Ok, define binary data for us in the context of NNTP.
> >
> >It means "you don't know what you're talking about".
>
> Is that yankie for "Excuse me while I escape from this corner I've painted
> myself in to"?

You misspelled "yankee". I wouldn't normally make a point of it, but you
seem to be quite particular and sensitive to spelling and proper grammar,
according to your prior scribbling.

So it's only fair that you should be held up to the same standards you
expect from everyone else.

Oh, and it's "Mr. Yankee" to you.

>
> >> >-----BEGIN PGP SIGNATURE-----
> >> >
> >> >iQIcBAABAgAGBQJblk1hAAoJEGs6Yr4nnb8l9DYQAPYAfnA8Mw+cambFb/J6apOn
> >> >7mwK7QIiOTy6FNTyOb8oKVj0/2jDRuOb5iM+7p49zXr+OgrSIZfExWan0+S9Fz2x
> >
> >And that's called an ascii-armored PGP signature. It may come as a shock to
>
> Woah, "ascii-armoured", thats a big impressive phrase! Armour! Tough and manly!
> No pussy encoding for you eh? LOL :)

It's not my term. I am just a mere conveyance of accurate and truthful
information. Your allergic reaction to factual statements is not something
that modern science can cure, so I won't try myself.

> I won't ask if you actually know what it means since clearly you don't and have
> just learnt it parrot fashion from PGP For Dummies. Also since you can't use
> google either I'll just point this out to you:
>
> https://en.wikipedia.org/wiki/Binary-to-text_encoding
>
> "These encodings are necessary for transmission of data when the channel does
> not allow binary data (such as email or NNTP) or is not 8-bit clean. PGP
> documentation (RFC 4880) uses the term ASCII armor for binary-to-text encoding
> when referring to Base64."
> ^^^^^^

Congratulations for mastering the art of copying and pasting. I'm sure
you're very proud of your accomplishment, and you're expecting someone to
pat you on the head for it.

Unfortunately, your hyper-active cranial matter failed to note that the
content in question was explicitly identified as carrying a

Content-Type: application/pgp-signature

MIME type which is defined in RFC 3156 and specifies this explicit format.
If you don't like how this standard works, I'm afraid there's nothing that I
can do about it, and you will need to take your complaints to the IETF,
instead of me.

Or, perhaps, you're having your nervous breakdown because you believe that I
implemented this standard incorrectly. You are more than welcome to cite a
specific section, and identify, with specificity, how this particular
implementation is non-compliant with that particular standard. I welcome the
opportunity to correct an implementation error.

But, regretfully, you've yet to demonstrate not only that you're in
possession of a clue at this precise moment in time; but that you even know what
a clue looks like, how big it is, whether it's alive, or dead, or pining for
the fjords; and whether you ever had one, and know how you can tell one
apart from a wet fart. Before you can even begin to argue Internet standards
with me you have to actually know them, and understand what they are and how
they work. That's a skill you've yet to master, young padawan. Copying and
pasting the only sentences from Wikipedia that you managed to understand, in
part, will not be sufficient. And until such time that one of the standards-
setting, acronym organizations come to you, for assistance with relevant
matters, you will continue to flail, fail, and flame out every time, before
your mental superiors.

Finally, I regret to inform you that a wikipedia article is not
authoritative when it comes to Internet standards. That's obviously
something that you know very little, if any, about. Not that any information
sourced from Wikipedia is always inaccurate, of course. Many times it's
quite accurate but before you can actually use it, as a crutch, you have to
understand what it means, and to what exactly it applies. Just because you
recognize a few words of it doesn't mean that it actually applies to whatever
subject matter you're trying to wrap your brain around.


> Have another 24 hours to think up a good backpedal for this why don't you.

Have you picked up your consolation prizes, yet? It includes a copy of my
original helloworld.cpp, that was written even before the obsolete
newsreader that you're using, while you were still in diapers. It comes
autographed and with a certificate of authenticity. It's very rare, and
valuable.


Sam
Sep 11, 2018, 6:15:18 PM

Scott Lurndal writes:

> Actually, the headers in USENET articles are defined in RFC 1036, not RFC
> 822.

https://www.rfc-editor.org/info/rfc1036

"Obsoleted by: RFC 5536, RFC 5537".

You may wish to consider checking into this late-breaking newsflash, that
came out in November of 2009.

bol...@cylonhq.com
Sep 12, 2018, 5:08:44 AM

On Tue, 11 Sep 2018 18:14:32 -0400
Sam <s...@email-scan.com> wrote:
>> Where it came from doesn't matter,
>
>Let's ignore facts that are somewhat inconvenient, ok?

Its origin doesn't matter. It exists, MIME is NOT required.

>> the fact is you don't need MIME in a usenet post which brings us back
>> full circle.
>
>You also don't need a "Subject:" header either. Or a valid "From:" header

Really? My NNTP server thinks otherwise:

441 Missing required From: header
441 Required Subject: header is missing

Anything else you feel like getting wrong while you're at it? You've got quite
a list of bloopers so far.

>either; none of which are required by NNTP. It seems that whatever point you
>were trying to make by that: it's so profound, so mind-blowing, and so Earth-
>shattering, that you are the only person in the world who actually knows
>what it is.

The point was pretty simple. Unfortunately it has proved completely beyond you
to comprehend it.

>> Is that yankie for "Excuse me while I escape from this corner I've painted
>> myself in to"?
>
>You misspelled "yankee". I wouldn't normally make a point of it, but you
>seem to be quite particular and sensitive to spelling and proper grammar,
>according to your prior scribbling.

Its slang. It can be spelt however you want.

>> Woah, "ascii-armoured", thats a big impressive phrase! Armour! Tough and
>> manly!
>> No pussy encoding for you eh? LOL :)
>
>It's not my term. I am just a mere conveyance of accurate and truthful

Its a term you didn't understand.

>> "These encodings are necessary for transmission of data when the channel does
>> not allow binary data (such as email or NNTP) or is not 8-bit clean. PGP
>> documentation (RFC 4880) uses the term ASCII armor for binary-to-text encoding
>> when referring to Base64."
>> ^^^^^^
>
>Congratulations for mastering the art of copying and pasting. I'm sure
>you're very proud of your accomplishment, and you're expecting someone to
>pat you on the head for it.

Well since you're incapable of using google someone has to point you to some
relevant material.

>Unfortunately, your hyper-active cranial matter failed to note that the
>content in question was explicitly identified as carrying a
>
>Content-Type: application/pgp-signature

And? Its still base64 encoded, something you denied. Any other not-up-for-
debate facts you wish to deny or are you done for the week?

[rest of tl;dr drivel snipped]

You can always tell when someone's on the back foot when they post a mini
dissertation to try and put their point across. Give it a rest mate, honestly
its just laughable.


Sam

Sep 12, 2018, 5:59:25 PM
bol...@cylonHQ.com writes:

> On Tue, 11 Sep 2018 18:14:32 -0400
> Sam <s...@email-scan.com> wrote:
> >> Where it came from doesn't matter,
> >
> >Let's ignore facts that are somewhat inconvenient, ok?
>
> Its origin doesn't matter. It exists, MIME is NOT required.

You still failed to wrap your brain around what "required" actually means.
And you still insist that someone around here stated that MIME is required
to post to Usenet… Well, when you find that someone, let me know, so I can
also set him/her/it straight.

> >> the fact is you don't need MIME in a usenet post which brings us back
> >> full circle.
> >
> >You also don't need a "Subject:" header either. Or a valid "From:" header
>
> Really? My NNTP server thinks otherwise:
>
> 441 Missing required From: header
> 441 Required Subject: header is missing

Your NNTP server is not an authoritative source of the NNTP specification.
Sorry to have confused you with facts. The current NNTP specification is RFC
3977. I'll wait until you find which part of it makes “From:” and “Subject:”
headers required for NNTP transport. Please make sure to mention the section
and paragraph number in your response. I'm really looking forward to it.

I'm going to try to teach you something, and we'll see if my efforts will go
for naught: just because there's no requirement for something, unless it's
prohibited an individual implementation is allowed to implement it. NNTP
does not require a mandatory “From:“ or “Subject:“ header. This does not
prohibit individual NNTP servers from requiring it. Each NNTP server is free
to implement its own policies for transporting and distributing messages.
Just because your NNTP server requires them doesn't mean that every NNTP
server in the world does too, or that it's required by NNTP.

Do you always blather off on subject matter you have no clue about, or are
you just making a special exception, just for me? Just curious.

I have a sneaky feeling that we're just about to find out, in mere moments…

> Anything else you feel like getting wrong while you're at it? You've got
> quite a list of bloopers so far.

Are you auditioning for the role of the Black Knight, in the upcoming Monty
Python remake? For some reason I think that you have a pretty good chance of
landing the role.

> >either; none of which are required by NNTP. It seems that whatever point you
> >were trying to make by that: it's so profound, so mind-blowing, and so Earth-
> >shattering, that you are the only person in the world who actually knows
> >what it is.
>
> The point was pretty simple. Unfortunately it has proved completely beyond
> you to comprehend it.

It can't be simple. After all, if it were, you would be able to explain it.
Or cite an authoritative reference for this mysterious NNTP requirement of
whose existence you're absolutely sure of, but just can't find any evidence.

Must be some kind of a Vast Right Wing Conspiracy, to make you look like an
abject fool, right?

> >> Is that yankie for "Excuse me while I escape from this corner I've painted
> >> myself in to"?
> >
> >You misspelled "yankee". I wouldn't normally make a point of it, but you
> >seem to be quite particular and sensitive to spelling and proper grammar,
> >according to your prior scribbling.
>
> Its slang. It can be spelt however you want.

Unfortunately, the historical custom of this, very fine, newsgroup is that
we all must strive to always use King's English. Not slang. You can leave
that for Facebook.

> >> Woah, "ascii-armoured", thats a big impressive phrase! Armour! Tough and
> >> manly!
> >> No pussy encoding for you eh? LOL :)
> >
> >It's not my term. I am just a mere conveyance of accurate and truthful
>
> Its a term you didn't understand.

You're projecting. You seem to be impressed by the term “ASCII armor”. It's
as if you've never heard of it before. You reacted as if this was the first
time you've read someone using this term, and, once again, ass-umed that
it's just a personal term of mine.

It's not, grasshopper. You claim “thats [sic] a big impressive phrase”. You
even used an exclamation mark, to underscore your surprise at this curious
term, that you're so unfamiliar with.

Well, in fact, “--armor” is the literal, actual name of an actual gpg
command line option, Einstein, which requests this fascinating task to be
accomplished by gpg. Would you care for a link to online documentation?

Yes, indeed, you found “ASCII armor” to be a very unusual choice of words
for describing what you get by running the “gpg“ command using the “--armor“
option. That pretty much sums up your latest, self-hoisted petard.

This option has existed for several decades now. I can't make this stuff up.
Just curious: have you ever wondered why so many people point their fingers
in your direction, and laugh? Have you even executed the gpg command, just
once, with any option? Do you know how to manage your public and private
keyrings, and do the usual kind of pgp-ish stuff? You obviously know nothing
about it, you don't know about the “--armor” option, but you think you have
the qualifications to discuss PGP signing of Internet messages. I guess
you'll just have to add one more failure, to your impressive resume of
flameouts.

This turns out to be a fairly common, standard term used in PGP technical
documentation, since the beginning; and the term is now commonly used in
general technical documentation also. Your unfamiliarity with it
accidentally reveals the fact that you were trying, valiantly, to discuss a
highly technical subject matter you don't know anything about. If you did
know anything about it, the term would not've been such a surprise, and you
would not've made an issue out of it. After all, the “--armor“ option is one
of the common ones, and in fact is the precise option that generates the PGP
signature which caused so much confusion on your part. But that's ok, by
making an issue out of it you've just underscored your own level of
dumbassetry.

But there's a bigger issue besides your fascination with this latest shining
ball, called “ASCII armor” (oooh!, big fancy words!); specifically your
general confusion about the fundamental difference between MIME base64
encoding and PGP ASCII Armoring; leading you to believe they're one and the
same just because both use the same alphabet. Unfortunately that's not the
case, and they serve fundamentally distinct purposes. Get back to me when
you've figured it out, and finally read the referenced application/pgp-
signature MIME type specification, that gives, pretty much, a verbatim
equivalent of the original snippet of the message that included the Ascii-
armored signature. Did you know that an option called “--armor” was used to
generate it?

If you still believe that you're looking at MIME base64 encoding, then maybe
you should write their authors and tell them they've got it wrong, and that
you know better.

Really, why haven't you, yet, reviewed the application/pgp-signature spec?
Can't find it? Too many long words, that your brain can't absorb? Or you did
read it, but are too ashamed to admit how full of crap you are? Which is the
case?

> >> "These encodings are necessary for transmission of data when the channel does
> >> not allow binary data (such as email or NNTP) or is not 8-bit clean. PGP
> >> documentation (RFC 4880) uses the term ASCII armor for binary-to-text encoding
> >> when referring to Base64."
> >> ^^^^^^
> >
> >Congratulations for mastering the art of copying and pasting. I'm sure
> >you're very proud of your accomplishment, and you're expecting someone to
> >pat you on the head for it.
>
> Well since you're incapable of using google someone has to point you to some
> relevant material.

Maybe, maybe not. When you find that someone, let me know. Sounds like very
useful service, having someone else Google something for me, instead of
doing it myself. That must be what that "Google Assistant" gizmo, that I
keep hearing about in the news, is all about. Perhaps that Google Assistant
can help you find the application/pgp-signature MIME specification, so that
you can enlighten yourself.

Or, perhaps, your Google Assistant can help you find scores of articles
explaining what ASCII armoring is, since you've never heard of it before.

> >Unfortunately, your hyper-active cranial matter failed to note that the
> >content in question was explicitly identified as carrying a
> >
> >Content-Type: application/pgp-signature
>
> And?

It's something that's called a "fact". You should invest a few minutes in
learning what that word means, and what application/pgp-signature's
specification says.

> Its still base64 encoded, something you denied.

It is not MIME base-64 encoded, you are simply incapable of understanding
the difference between PGP ASCII armoring, MIME content, and MIME content
transfer encoding. It's a very nuanced, but a very important, distinction. A
distinction you are clearly incapable of understanding. Too many big words…
Too many big concepts… I really can't understand why can't someone just
write a large coloring book, to help people like you understand these
complicated things? And even include a box of crayons, to make the whole
thing easier.

> Any other not-up-for-
> debate facts you wish to deny or are you done for the week?

Well, here's one "not-up-for-debate fact": you are a clueless dummy with a
severe case of delusion of self-grandeur, who thinks that he/she/it is hot
stuff because he/she/it can successfully compile helloworld.cpp in Visual
Studio. I am not going to deny that.

Grow up, young man/woman/non-gendered-entity (just doing my part to be
inclusive); you're in the adult world now, and what you think you know,
doesn't really amount to much. Which is why you must pay attention, and make
careful notes, when your mental superiors are schooling you. There is
slight, just a slight, chance that you might understand and learn something.
Which would be a step forward; but even if not, at least you've tried. But
you're not even trying. And that's a shame. All those helpless, innocent
electrons, being sacrificed on the altar of Usenet, just in order for your
shining paragon of ignorance to make it to every corner of the world, and
for everyone to laugh at it. Although we, mere mortals, certainly gain some
enjoyment, we can't ignore the utter waste of useful energy that was needed
to propagate such rubbish. Win some, lose some, I guess.

> [rest of tl;dr drivel snipped]

Oh, please. It's quite clear you've read every word of it. And loved it. Just like
you've read every word of this, and despite being embarrassed at daring to
try to match wits with your mental superior, you found your experience to be
strangely to your liking.

> You can always tell when someones on the back foot when they post a mini
> dissertation to try and put their point across. Give it rest mate, honestly
> its just laughable.

Nope. Why would I give up on free entertainment?

And you call /that/ a dissertation? You really have low standards. Maybe it
would be a dissertation of a lifetime for you, but my record for a Usenet
flame racks up to about a 3000 line post, as I recall*. This, right here –
what you're reading right now – is nothing. Absolutely nothing, compared to
the flames of long ago, in a galaxy far-far away. You've yet to learn much,
young padawan.

*Of course, I had some, shall we say… technical assistance in writing it up.
But let this be our little secret, just between you and me. Nobody else
needs to know, and may the schwartz be with you.

bol...@cylonhq.com

Sep 13, 2018, 4:35:41 AM
On Wed, 12 Sep 2018 17:59:14 -0400
Sam <s...@email-scan.com> wrote:
>bol...@cylonHQ.com writes:
>
>> On Tue, 11 Sep 2018 18:14:32 -0400
>> Sam <s...@email-scan.com> wrote:
>> >> Where it came from doesn't matter,
>> >
>> >Let's ignore facts that are somewhat inconvenient, ok?
>>
>> Its origin doesn't matter. It exists, MIME is NOT required.
>
>You still failed to wrap your brain around what "required" actually means.
>And you still insist that someone around here stated that MIME is required
>to post to Usenet… Well, when you find that someone, let me know, so I can
>also set him/her/it straight.

Oh look, another goalpost move there by mouth breather. Every time he gets
caught out he changes the argument slightly to match what he said. Nice try.

First you claim that MIME is required for posting non ascii data then after
I've shown you 2 ways its done without it (because you're too dim to google
it) you're now claiming I said its required to post to usenet full stop.

Seriously, your 101 debating tactics wouldn't fool a class of pre-teens.

>> 441 Missing required From: header
>> 441 Required Subject: header is missing
>
>Your NNTP server is not an authoritative source of the NNTP specification.

Good luck finding one that will accept a post without them.

[Usual self justifying BS snipped, tl;dr]

You really are too stupid to argue with. You were amusing, now you're just
borderline pathetic. Feel free to have the last fatuous word and goalpost move
yet again, I'm done here.

David Brown

Sep 13, 2018, 5:05:06 AM
On 13/09/18 10:35, bol...@cylonHQ.com wrote:
> I'm done here.
>

Hurray!


Sam

Sep 13, 2018, 7:10:57 AM
bol...@cylonHQ.com writes:

> On Wed, 12 Sep 2018 17:59:14 -0400
> Sam <s...@email-scan.com> wrote:
> >bol...@cylonHQ.com writes:
> >
> >> On Tue, 11 Sep 2018 18:14:32 -0400
> >> Sam <s...@email-scan.com> wrote:
> >> >> Where it came from doesn't matter,
> >> >
> >> >Let's ignore facts that are somewhat inconvenient, ok?
> >>
> >> Its origin doesn't matter. It exists, MIME is NOT required.
> >
> >You still failed to wrap your brain around what "required" actually means.
> >And you still insist that someone around here stated that MIME is required
> >to post to Usenet… Well, when you find that someone, let me know, so I can
> >also set him/her/it straight.
>
> Oh look, another goalpost move there by mouth breather. Every time he gets

Have you yet figured out what ASCII armoring means?

> caught out he changes the argument slightly to match what he said. Nice try.

+---------------------------------------------------------------------------+
|                                  $1 OFF                                   |
|                                                                           |
|                                  COUPON                                   |
|                                                                           |
|                       EXTRA-STRENGTH BUTTHURT CREAM                       |
|                                                                           |
|                                  $1 OFF                                   |
+---------------------------------------------------------------------------+

Here you go, I think you might be able to use it.

> First you claim that MIME is required for posting non ascii data then after

It is. That's absolutely true.

> I've shown you 2 ways its done without it (because you're too dim to google

You've done nothing of that sort. Maybe in your fevered mind, but in no one
else's.

> it) you're now claiming I said its required to post to usenet full stop.
>
> Seriously, your 101 debating tactics wouldn't fool a class of pre-teens.

So says Mr. "What is ascii armoring?" PGP expert.

> >> 441 Missing required From: header
> >> 441 Required Subject: header is missing
> >
> >Your NNTP server is not an authoritative source of the NNTP specification.
>
> Good luck finding one that will accept a post without them.

Sure, I have one right here.

> [Usual self justifying BS snipped, tl;dr]

Translation: please ignore the bootprint on my forehead.

> You really are too stupid to argue with. You were amusing, now you're just
> borderline pathetic. Feel free to have the last fatuous word and goalpost
> move yet again, I'm done here.

Your toys are all packaged, in the cardboard box, over there in a corner.
You may pick them up when you're ready to go home.

Another satisfied customer. You're welcome, everyone.

bol...@cylonhq.com

Sep 13, 2018, 7:31:36 AM
Don't get too excited, I'm sure our little friend sammy will be along soon to
make a fool of himself once again. Just give em the rope...


David Brown

Sep 13, 2018, 7:57:43 AM
Look, the two of you are equally guilty of having a childish
name-calling argument. It is utterly irrelevant which, if either, of
you is right. Any factual or informative discussion is long since over,
and the two of you both lost - you both made fools of yourselves.


bol...@cylonhq.com

Sep 13, 2018, 8:57:33 AM
On Thu, 13 Sep 2018 13:57:32 +0200
David Brown <david...@hesbynett.no> wrote:
>On 13/09/18 13:31, bol...@cylonHQ.com wrote:
>> On Thu, 13 Sep 2018 11:04:56 +0200
>> David Brown <david...@hesbynett.no> wrote:
>>> On 13/09/18 10:35, bol...@cylonHQ.com wrote:
>>>> I'm done here.
>>>>
>>>
>>> Hurray!
>>
>> Don't get too excited, I'm sure our little friend sammy will be along soon to
>> make a fool of himself once again. Just give em the rope...
>>
>
>Look, the two of you are equally guilty of having a childish
>name-calling argument. It is utterly irrelevant which, if either, of
>you is right. Any factual or informative discussion is long since over,
>and the two of you both lost - you both made fools of yourselves.

Sums up usenet for the last 30 years doesn't it? :)

David Brown

Sep 13, 2018, 9:13:21 AM
No. This group is usually much better than that. It is far from
perfect, but this "battle" between you and Sam is well below average. I
have nothing against the odd off-topic thread (although some other
people here do) - a rational discussion about newsreaders or Usenet
standards is fine by me. But no one here is interested in an insult and
name-calling competition. If that is what interests the two of you,
please take it elsewhere - I am sure there are Usenet groups where it is
suitable.

bol...@cylonhq.com

Sep 13, 2018, 9:32:51 AM
On Thu, 13 Sep 2018 15:13:09 +0200
David Brown <david...@hesbynett.no> wrote:
>On 13/09/18 14:57, bol...@cylonHQ.com wrote:
>> On Thu, 13 Sep 2018 13:57:32 +0200
>> David Brown <david...@hesbynett.no> wrote:
>>> On 13/09/18 13:31, bol...@cylonHQ.com wrote:
>>>> On Thu, 13 Sep 2018 11:04:56 +0200
>>>> David Brown <david...@hesbynett.no> wrote:
>>>>> On 13/09/18 10:35, bol...@cylonHQ.com wrote:
>>>>>> I'm done here.
>>>>>>
>>>>>
>>>>> Hurray!
>>>>
>>>> Don't get too excited, I'm sure our little friend sammy will be along soon to
>>>> make a fool of himself once again. Just give em the rope...
>>>>
>>>
>>> Look, the two of you are equally guilty of having a childish
>>> name-calling argument. It is utterly irrelevant which, if either, of
>>> you is right. Any factual or informative discussion is long since over,
>>> and the two of you both lost - you both made fools of yourselves.
>>
>> Sums up usenet for the last 30 years doesn't it? :)
>>
>
>No. This group is usually much better than that. It is far from
>perfect, but this "battle" between you and Sam is well below average. I

At least its about something computer related, unlike the vast tracts of
biblical gibberish Hodgkin keeps inflicting on the group.


Sam

Sep 13, 2018, 11:16:23 AM
No, just your own posts.


Paavo Helde

Sep 13, 2018, 12:35:25 PM
On 13.09.2018 16:32, bol...@cylonHQ.com wrote:
>
> At least its about something computer related, unlike the vast tracts of
> biblical gibberish Hodgkin keeps inflicting on the group.

Hodgin can be safely killfiled as he never posts anything even remotely
interesting. It's a shame some people are responding to him, ruining my
filters.

Tim Rentsch

Sep 20, 2018, 7:49:16 PM
Paavo Helde <myfir...@osa.pri.ee> writes:

> [..scenarios that might overflow the program stack..]
>
> [...] running out of stack is UB,

This assertion is fairly common, but it isn't right. Running out
of stack space is not undefined behavior as the C standard defines
the term. A lot of people find this very counter-intuitive; let
me see if I can explain it.

The Standard defines the behavior of C programs. (In some areas
it also deals with the behavior of compilers, linkers, etc, but
that does not concern us here.) To define a behavior means to
specify how a program should behave. This is done by giving
specifications for individual program constructs, eg, expressions,
statements, and declarations, and separately giving higher-level
rules for how the component constructs are connected and joined
together.

The Standard specifies the behaviors of component elements and
their various interconnections in terms of an abstract machine.
The abstract machine doesn't have a stack, nor does it have any
notion of capacity or capacity limits. To say a program construct
or a program as a whole has defined behavior means there is a
specification for what actions must occur in the abstract machine.
Loosely speaking, behavior is defined "if the Standard tells us
what the abstract machine is supposed to do."

The situation of "running out of stack space" simply doesn't exist
inside the abstract machine. The defining semantic descriptions
cannot depend on such things, because the abstract machine has no
way to represent them. To say that another way, none of the
Standard's semantic descriptions are predicated on, or have
conditions of, things like there being enough memory capacity or
the stack not overflowing: the semantic descriptions must apply
whether those conditions exist (in the physical machine) or not.
Hence we know what the abstract machine is supposed to do, and
therefore the behavior is defined.

A common reaction at this point might ask "if the behavior is
defined, then where is it defined?" That's an understandable
question, but it is also a meaningless question. The Standard
specifies the behavior of /programs/; it does not specify the
behavior of /situations/. Try this exercise:

1. Write a program that runs out of stack but is otherwise
strictly conforming. (It's easy to do this.)

2. Identify the statement or declaration whose behavior is
undefined under the semantic descriptions given in the
Standard. (That claim should be supported by specific
references to relevant passages in the Standard.)

3. If no such statement or declaration can be identified,
then the program has no undefined behavior as the
Standard uses the term, and running out of stack space
doesn't change that.

If someone tries writing a program along these lines, I think
they will find that every expression, declaration, statement,
etc, has an explicit definition, and those definitions do not
depend on stacks, machine capacities, capacity limits, etc.
The Standard gives explicit definitions for behavior of program
elements, and these specifications apply /whether capacity
limits of physical hardware are exceeded or not/. Such things
do not exist in the abstract machine, and so the given semantic
descriptions continue to apply.

> there is no error recovery.

Here I think is the crux of the confusion. It's true, or at
least usually true, that running out of stack space blows up
a program. But that doesn't mean the program has undefined
behavior. Saying a program's behavior is defined doesn't make
any promises or guarantees about what the program /will/ do if
run on physical hardware. It means only that there is a
specification for what the program /should/ do. I think the
problem people have is they try to equate one with the other.


(P.S. The original context was C, and the above explanation is
written assuming that same context. AFAIK the same statements
apply for C++ and its ISO standard, but I am not as familiar
with the C++ standard, so please take that into account. I would
appreciate any pointers for passages in the C++ standard that
might be at odds with what is written above.)

Alf P. Steinbach

Sep 20, 2018, 8:33:46 PM
On 21.09.2018 01:48, Tim Rentsch wrote:
> Paavo Helde <myfir...@osa.pri.ee> writes:
>
>> [..scenarios that might overflow the program stack..]
>>
>> [...] running out of stack is UB,
>
> This assertion is fairly common, but it isn't right. Running out
> of stack space is not undefined behavior as the C standard defines
> the term. A lot of people find this very counter-intuitive;

That's because it's the wrong standard. For C++ it's the C++ standard
that rules. Except where it defers to the C standard.
Nope.

You're confusing the C formal term with the formal term in the C++
standard, which does correspond closely to the common speech meaning.


C++17 §3.27 [defns.undefined]
<quote>
undefined behavior
behavior for which this International Standard imposes no requirements
[Note: Undefined behavior may be expected when this International
Standard omits any explicit definition of behavior or when a program
uses an erroneous construct or erroneous data. Permissible undefined
behavior ranges from ignoring the situation completely with
unpredictable results, to behaving during translation or program
execution in a documented manner characteristic of the environment (with
or without the issuance of a diagnostic message), to terminating a
translation or execution (with the issuance of a diagnostic message).
Many erroneous program constructs do not engender undefined behavior;
they are required to be diagnosed. Evaluation of a constant expression
never exhibits behavior explicitly specified as undefined (8.20). —end
note ]
</quote>

The note in the brackets is non-normative text, but explains things.

In contrast to C++, the C11 definition limits C “undefined behavior” to
“behavior, upon use of a nonportable or erroneous program construct or
of erroneous data, for which this International Standard imposes no
requirements”, which is indeed counter-intuitive and more than a little
bit impractical.


> A common reaction at this point might ask "if the behavior is
> defined, then where is it defined?" That's an understandable
> question, but it is also a meaningless question. The Standard
> specifies the behavior of /programs/; it does not specify the
> behavior of /situations/. Try this exercise:
>
> 1. Write a program that runs out of stack but is otherwise
> strictly conforming. (It's easy to do this.)
>
> 2. Identify the statement or declaration whose behavior is
> undefined under the semantic descriptions given in the
> Standard. (That claim should be supported by specific
> references to relevant passages in the Standard.)
>
> 3. If no such statement or declaration can be identified,
> then the program has no undefined behavior as the
> Standard uses the term, and running out of stack space
> doesn't change that.

Point 3 here blatantly contradicts the above quoted definition of
“undefined behavior” in C++17, which is the same definition as in
earlier C++ standards, back to C++98.


> [snip]
> (P.S. The original context was C, and the above explanation is
> written assuming that same context. AFAIK the same statements
> apply for C++ and its ISO standard, but I am not as familiar
> with the C++ standard, so please take that into account. I would
> appreciate any pointers for passages in the C++ standard that
> might be at odds with what is written above.)
>

Oh. Well. When you try to argue against someone's C++ terminology by
referring to a standard for /another language/, you're just too far out.

Not that it isn't interesting. I think that the silly definitions the
standardization committees sometimes end up with is interesting from a
psychological point of view. I think it's about establishing a kind of
group speech, using special meanings only known to (psychology term)
in-group members, thus helping to identify and exclude others.


Cheers!,

- Alf

Öö Tiib

Sep 20, 2018, 9:13:42 PM
The standard does describe situations. Those are the situations that cause
malloc to return a null pointer or errno to be set to ENOMEM or ENOSPC
and the like. What else are these?

> Try this exercise:
>
> 1. Write a program that runs out of stack but is otherwise
> strictly conforming. (It's easy to do this.)
>
> 2. Identify the statement or declaration whose behavior is
> undefined under the semantic descriptions given in the
> Standard. (That claim should be supported by specific
> references to relevant passages in the Standard.)
>
> 3. If no such statement or declaration can be identified,
> then the program has no undefined behavior as the
> Standard uses the term, and running out of stack space
> doesn't change that.

Anything that is not defined by any passage of standard *is*
undefined behavior in C++. That may be different in C.
Is it so in C that, because the standard does not state that automatic
storage has limits or what happens when the limits are exhausted,
automatic storage is unlimited?

> If someone tries writing a program along these lines, I think
> they will find that every expression, declaration, statement,
> etc, has an explicit definition, and those definitions do not
> depend on stacks, machine capacities, capacity limits, etc.
> The Standard gives explicit definitions for behavior of program
> elements, and these specifications apply /whether capacity
> limits of physical hardware are exceeded or not/. Such things
> do not exist in the abstract machine, and so the given semantic
> descriptions continue to apply.

In C++ it is that since the standard does not define what happens when some
local variable breaks automatic storage limits, what actually happens
is undefined behavior exactly because the standard did not define it.

Paavo Helde

Sep 21, 2018, 11:53:13 AM
On 21.09.2018 2:48, Tim Rentsch wrote:
> Paavo Helde <myfir...@osa.pri.ee> writes:
>
>> [..scenarios that might overflow the program stack..]
>>
>
> A common reaction at this point might ask "if the behavior is
> defined, then where is it defined?"

For stack overflow, the behavior might be defined by the implementation.
For example, with MSVC++ on x86 there is a way to make stack overflow a
recoverable error, though it's tricky to get right and has a good chance
to make the whole program slower. Those not faint in the heart should
start with the _resetstkoflw() documentation.

Note that if the behavior were defined by the C++ standard, MSVC++ could
not implement their own behavior (at least not legally).





Tim Rentsch

Oct 5, 2018, 11:29:37 AM
That isn't what I meant by "situation", but let's take a look at
it. The C++ standard says this (in n4659 23.10.11 p2) about the
<cstdlib> allocation functions (which includes malloc):

Effects: These functions have the semantics specified in the
C standard library.

The C standard, in n1570 7.22.3 p2, says this about the behavior
of memory management functions (which includes malloc) [in part;
the full paragraph is much longer, but this sentence is the only
relevant one]:

If the space cannot be allocated, a null pointer is returned.

Two points are important here. First, the condition "space
cannot be allocated" is stated in the text of the C standard,
which means it is something known to the abstract machine. In
fact we don't know what it means in terms of running on actual
hardware. The condition "being out of stack space" is exactly
the opposite of that: it is defined only in terms of what's
going on in the actual hardware, and not something that is known
to the abstract machine. Isn't it true that C++ programs, like C
programs, are defined in terms of behavior in the abstract
machine?

Second, the dependence (of what malloc(), etc., do) on this
condition is explicit in the text of the standard. Unless there
is some explicit statement to the contrary, the stated semantics
are meant to apply without exception. For example, 7.22.2.1 p2
gives these semantics for rand():

The rand function computes a sequence of pseudo-random
integers in the range 0 to RAND_MAX.

This statement doesn't mean rand() gives a pseudo-random value
unless one "cannot be generated"; it means rand() always gives a
pseudo-random value, or more precisely that the definition of
rand() specifies that it always gives a pseudo-random value.
Or here is another example. Consider the code fragment:

unsigned a = 10, b = -3, c = a+b;

The definition of the + operator specifies the result of
performing the addition, and that definition is unconditional.
If it happens that the chip running the executable has a hot spot
that flips a bit in one of its registers, that doesn't make the
expression 'a+b' have undefined behavior. The behavior is
defined regardless of whether running the program is carried out
correctly.

There is no explicit statement in the C standard, or AFAIAA the
C++ standard, giving a dependence (in the stated semantics) on
some condition like "running out of stack". Hence whether that
condition is true cannot change whether a program has defined
behavior or undefined behavior.

>> Try this exercise:
>>
>> 1. Write a program that runs out of stack but is otherwise
>> strictly conforming. (It's easy to do this.)
>>
>> 2. Identify the statement or declaration whose behavior is
>> undefined under the semantic descriptions given in the
>> Standard. (That claim should be supported by specific
>> references to relevant passages in the Standard.)
>>
>> 3. If no such statement or declaration can be identified,
>> then the program has no undefined behavior as the
>> Standard uses the term, and running out of stack space
>> doesn't change that.
>
> Anything that is not defined by any passage of standard *is*
> undefined behavior in C++. That may be different in C.

The same is true in C, with the understanding that any _behavior_
that is not defined is undefined behavior, which is also the
case for C++. If there is any explicit definition of behavior,
then the behavior is not undefined.

> Is it so in C that, because the standard does not state that
> automatic storage has limits or what happens when the limits are
> exhausted, automatic storage is unlimited in C?

AFAICT neither standard has any statement about automatic storage
having limits, let alone a statement about what happens when such
limits are exhausted.

The C++ standard gives this general statement, in 4.1 p2.1:

If a program contains no violations of the rules in this
International Standard, a conforming implementation shall,
within its resource limits, accept and correctly execute
that program.

There is no requirement that an implementation correctly execute
a program that exceeds any resource limit, including the resource
of automatic storage. But that doesn't change whether the
semantics of said program are defined: if the C/C++ standards
define the semantics, they are still defined whether the program
can be correctly executed or not.

>> If someone tries writing a program along these lines, I think
>> they will find that every expression, declaration, statement,
>> etc, has an explicit definition, and those definitions do not
>> depend on stacks, machine capacities, capacity limits, etc.
>> The Standard gives explicit definitions for behavior of program
>> elements, and these specifications apply /whether capacity
>> limits of physical hardware are exceeded or not/. Such things
>> do not exist in the abstract machine, and so the given semantic
>> descriptions continue to apply.
>
> In C++, since the standard does not define what happens when a
> local variable exceeds the limits of automatic storage, whatever
> actually happens is undefined behavior, precisely because the
> standard did not define it.

That isn't right. In both standards, the rule is that when there
is a definition for the semantics of a particular construct, that
definition applies unconditionally unless there is an explicit
provision to the contrary. It is only when a construct has no
definition that the behavior is undefined.

To put this in concrete terms, consider the following program:

#include <stdio.h>

unsigned ribbet( unsigned );

int
main( int argc, char *argv[] ){
unsigned u = ribbet( 0U - argc );
printf( "%u\n", u );
return 0;
}

unsigned
ribbet( unsigned u ){
if( u == 0 ) return 1234567;
return 997 + ribbet( u + 1000003 );
}

Going through it a piece at a time:

In main():

The expression '0U - argc' has explicitly defined behavior,
and the definition is unconditional.

The function call 'ribbet( 0U - argc )' has explicitly
defined behavior, and the definition is unconditional.

The initializing declaration 'unsigned u = ...' has
explicitly defined behavior, and the definition is
unconditional.

The statement calling printf() has explicitly defined
behavior, and the definition is unconditional (assuming
a hosted implementation in C, which IIUC C++ requires).

The 'return 0;' has explicitly defined behavior, and the
definition is unconditional.

In ribbet():

The expression 'u == 0' has explicitly defined behavior, and
the definition is unconditional.

The controlled statement 'return 1234567;' has explicitly
defined behavior, and the definition is unconditional.

The if() statement has explicitly defined behavior, and the
definition is unconditional.

The expression 'u + 1000003' has explicitly defined behavior,
and the definition is unconditional.

The recursive call to ribbet() has explicitly defined
behavior, and the definition is unconditional.

The expression '997 + ribbet( ... )' has explicitly defined
behavior, and the definition is unconditional.

The final return statement has explicitly defined behavior,
and the definition is unconditional.

Every piece of the program has its behavior explicitly defined,
and in every case there are no exceptions given for the stated
semantics. There is therefore no undefined behavior. If a
particular implementation runs out of some resource trying to
execute the program, it may not execute correctly, but that
doesn't change the definition of the program's semantics (which
is to say, its behavior). The key point is that the behavior is
_defined_: we know what the program is supposed to do, even if
running the program doesn't actually do that. Pulling the plug,
having the processor catch on fire, the OS killing the process
because it ran out of swap space, or running out of automatic
storage, all can affect what program execution actually does;
but none of those things changes what the standard says the
program is meant to do. Undefined behavior means the standard
doesn't say anything about what is meant to happen, and that
is simply not the case here.

Tim Rentsch

Oct 5, 2018, 11:45:37 AM
Paavo Helde <myfir...@osa.pri.ee> writes:

> On 21.09.2018 2:48, Tim Rentsch wrote:
>
>> Paavo Helde <myfir...@osa.pri.ee> writes:
>>
>>> [..scenarios that might overflow the program stack..]
>>
>> A common reaction at this point might ask "if the behavior is
>> defined, then where is it defined?"
>
> For stack overflow, the behavior might be defined by the
> implementation.

In fact the behavior is already defined by the Standard(s).
Please see my longer reply just recently posted.

> For example, with MSVC++ on x86 there is a way to make
> stack overflow a recoverable error, though it's tricky to get right
> and has a good chance to make the whole program slower. Those not
> faint of heart should start with the _resetstkoflw()
> documentation.
>
> Note that if the behavior were defined by the C++ standard, MSVC++
> could not implement their own behavior (at least not legally).

I think you're assuming that behavior being defined implies
correct execution. That isn't the case. This distinction
is touched on in my other posting. Please let me know if
you would like some clarification.

Öö Tiib

Oct 5, 2018, 12:27:11 PM
Your position appears to be that the standard imposes no requirements
on a program that exceeds some (stated or otherwise) limit, yet its
behavior is still defined. My position is that the behavior is
undefined in that situation. Perhaps we have to agree to disagree
about it?

Tim Rentsch

Oct 8, 2018, 10:50:16 AM
Before I agree or disagree I'd like to be sure we each understand
the other's position. Below are six C++ programs. For which of
these programs does the C++ standard impose requirements for what
status is returned? Assume size_t is at least 32 bits and that
all programs are accepted without complaint by the compiler.

int main(){ char a[ 10]; return 0; }
int main(){ char a[ 1000]; return 0; }
int main(){ char a[ 100000]; return 0; }
int main(){ char a[ 10000000]; return 0; }
int main(){ char a[1000000000]; return 0; }
int main(){ int x=1, y = 0; return x/y; }

Öö Tiib

Oct 8, 2018, 1:59:30 PM
Last function contains undefined behavior by [expr.mul] "If the second
operand of / or % is zero the behavior is undefined."

Rest of the functions may compile and run flawlessly.

There are non-normative guidelines in Annex B [implimits] saying that
every implementation should document its resource limitations. The
recommended minimum limit for the size of an object is listed there
as 256 kilobytes.

In your example, two of the main functions exceed that limit.
The standard does not impose any requirements on programs that
exceed the resource limitations of the implementation, so for an
implementation that follows that recommendation those two functions
also have undefined behavior. However, an implementation is free to
set that limit to a million kilobytes (then all the array programs
are defined) or to 64 kilobytes (then only two mains have defined
behavior). That is my opinion, of course; yours seems to be that
they are all defined no matter what.

IIRC, under MS-DOS the whole memory for *.com files was 64 kilobytes;
the stack pointer was set near the end (something like 0xFFFE) and
grew down from there. When it hit static objects or code, the program
blew up.

Scott Lurndal

Oct 8, 2018, 2:07:14 PM
I expect them all to compile and run flawlessly on
most non-embedded processors, they don't
actually _access_ the stack, after all.

For example:

int main(){ char a[1000000000]; return 0; }

00000000004004f0 <main>:
4004f0: 55 push %rbp
4004f1: 48 89 e5 mov %rsp,%rbp
4004f4: 48 81 ec 88 c9 9a 3b sub $0x3b9ac988,%rsp
4004fb: b8 00 00 00 00 mov $0x0,%eax
400500: c9 leaveq
400501: c3 retq

Öö Tiib

Oct 8, 2018, 2:29:19 PM
Good point. Compilers can perhaps even erase that array entirely,
thanks to the as-if rule.

Scott Lurndal

Oct 8, 2018, 4:16:27 PM
Indeed, with -O3 gcc optimizes it away:

0000000000400400 <main>:
400400: 31 c0 xor %eax,%eax
400402: c3 retq
400403: 90 nop

Tim Rentsch

Oct 13, 2018, 12:32:00 AM
Tiib <oot...@hot.ee> writes:

> On Monday, 8 October 2018 17:50:16 UTC+3, Tim Rentsch wrote:
>
>> Tiib <oot...@hot.ee> writes:
>>
>>> On Friday, 5 October 2018 18:29:37 UTC+3, Tim Rentsch wrote:
>>>
>>>> Tiib <oot...@hot.ee> writes:
>>>>
>>>>> On Friday, 21 September 2018 02:49:16 UTC+3, Tim Rentsch wrote:
>>>>>
>>>>>> Paavo Helde <myfir...@osa.pri.ee> writes:
>>>>>>
>>>>>>> [..scenarios that might overflow the program stack..]
>>>>>>>
>>>>>>> [...] running out of stack is UB,
>>>>>>
>>>>>> This assertion is fairly common, but it isn't right. Running out
>>>>>> of stack space is not undefined behavior as the C standard defines
>>>>>> the term. A lot of people find this very counter-intuitive; let
>>>>>> me see if I can explain it.

[...]
Your response doesn't answer the question that I asked. I
deliberately avoided using any of the terms "behavior",
"defined", or "undefined". Please read my question again: "For
which of these programs does the C++ standard impose requirements
for what status is returned?" Let's label the programs A, B, C,
D, E, and F, from top to bottom. The answer should be a set of
those letters (the empty set, all of them, or somewhere in
between). The phrase "impose requirements" could be read as
"impose a requirement", that is, singular or plural makes no
difference. Can you tell me what your answer to this question
is?

Öö Tiib

Oct 13, 2018, 7:22:28 AM
The implied answer was that A and B are required to return 0; whether
and what C, D, and E are required to return is implementation-defined
(most likely also 0); and F is not required to return anything, but
may also return 0.

The motivation behind your deliberate avoidance of "behavior" versus
"imposes requirements" is unclear, because the normative definition of
"undefined behavior" in [defns.undefined] appears to be "behavior for
which this International Standard imposes no requirements".
What a program is required to return if it is required to return
something is (part of) its behavior.

Tim Rentsch

Nov 23, 2018, 11:14:25 AM
Tiib <oot...@hot.ee> writes:

> On Saturday, 13 October 2018 07:32:00 UTC+3, Tim Rentsch wrote:
>> Tiib <oot...@hot.ee> writes:
>>> On Monday, 8 October 2018 17:50:16 UTC+3, Tim Rentsch wrote:
>>
>> [what is the relationship of overflowing stack and definedness?]
>>
>>>> [...] I'd like to be sure we each understand
[Following quoted text reordered slightly]

> The motivation behind your deliberate avoidance of "behavior"
> versus "imposes requirements" is unclear

I asked the question the way I did because I want to understand
what you mean - not just what your conclusions are but how
you got to them. Otherwise I'm not sure what you're trying
to say.

> The implied answer was that A and B are required to return 0,
> if and what the C, D and E are required to return is
> implementation-defined most likely also 0, and F is not
> required to return anything but can also return 0.

Do you have trouble understanding the question I'm asking?
It's a yes/no question - not what is required but only if
the C++ standard imposes a requirement. For programs C, D, and E
you seem to be saying that the standard imposes a requirement
if it imposes a requirement. The question is does the standard
impose a requirement or not?

Is it your position that programs A and B are different from
programs C, D, and E in the matter of whether the standard
imposes a requirement? If so then what causes that difference?
As you pointed out Annex B is informative, not normative. Also
the second paragraph says "[T]hese quantities are only guidelines
and do not determine compliance."

> because normative definition of "undefined behavior" in
> [defns.undefined] appears to be "behavior for which this
> International Standard imposes no requirements". What a
> program is required to return if it is required to return
> something is (part of) its behavior.

AFAICT the C++ standard does not require any program be accepted
(ie, by a particular implementation). An implementation can
document any resource limits it chooses, or even none at all, and
there are no restrictions on the ranges of any resource limits.
If an implementation has a resource limit of three nestings of
parentheses, does that mean a statement like

return ((((1))));

has undefined behavior? Is whether this statement has defined
behavior an implementation-dependent property? Note the footnote
to section 4.1 p2.1

"Correct execution" can include undefined behavior, [...]

This comment suggests exceeding a resource limit and behavior
being undefined are orthogonal conditions, not that one is a
subset of the other. Do you agree with that implication? If
not can you say why?

Öö Tiib

Nov 26, 2018, 7:31:14 AM
I have already given all my arguments upstream, and apparently so
have you, which is why I suggested agreeing to disagree a couple of
posts ago.

>
> > The implied answer was that A and B are required to return 0,
> > if and what the C, D and E are required to return is
> > implementation-defined most likely also 0, and F is not
> > required to return anything but can also return 0.
>
> Do you have trouble understanding the question I'm asking?
> It's a yes/no question - not what is required but only if
> the C++ standard imposes a requirement. For programs C, D, and E
> you seem to be saying that the standard imposes a requirement
> if it imposes a requirement. The question is does the standard
> impose a requirement or not?

I indeed do not understand how what I wrote can be perceived
as some sort of gibberish.

> Is it your position that programs A and B are different from
> programs C, D, and E in the matter of whether the standard
> imposes a requirement? If so then what causes that difference?
> As you pointed out Annex B is informative, not normative. Also
> the second paragrah says "[T]hese quantities are only guidelines
> and do not determine compliance."

The very subject of our conversation was the limitations of the
executing environment, particularly call depth and the size of
automatic storage, and whether exceeding these limitations is
undefined behavior or not. My position is that the standard says
that exceeding these limitations causes undefined behavior.

I deduce it from two things: 1) the standard does not impose
requirements on the behavior of a program that exceeds the
limitations, and 2) behavior on which the standard imposes no
requirements is defined as undefined behavior.

> > because normative definition of "undefined behavior" in
> > [defns.undefined] appears to be "behavior for which this
> > International Standard imposes no requirements". What a
> > program is required to return if it is required to return
> > something is (part of) its behavior.
>
> AFAICT the C++ standard does not require any program be accepted
> (ie, by a particular implementation). An implementation can
> document any resource limits it chooses, or even none at all, and
> there are no restrictions on the ranges of any resource limits.
> If an implementation has a resource limit of three nestings of
> parentheses, does that mean a statement like
>
> return ((((1))));
>
> has undefined behavior?

I would expect such an implementation to handle it as ill-formed,
since the condition is easy to diagnose. But that is just what I
feel is common sense.

> Is whether this statement has defined
> behavior an implementation-dependent property? Note the footnote
> to section 4.1 p2.1
>
> "Correct execution" can include undefined behavior, [...]"

"depending on the data being processed"

>
> This comment suggests exceeding a resource limit and behavior
> being undefined are orthogonal conditions, not that one is a
> subset of the other. Do you agree with that implication? If
> not can you say why?

I interpret it to mean that a program executed with some input data
may have undefined behavior, while with other input data its behavior
may be defined. For example, whether the limits of automatic storage
of the executing environment are exceeded can depend on the data
being processed.

Tim Rentsch

Dec 16, 2018, 11:30:31 AM
I am disappointed by your response. Basically all you have done
is repeat or rephrase things you have said earlier. There are
conclusions but no reasoning leading up to them. Also I asked at
least half a dozen questions, none of which you gave answers to
AFAICT. (At the end you give an answer but it's not an answer to
the question I asked.) I don't want to have an argument, I just
want to understand what you think and WHY you think it. Rather
than explain you just repeat your talking points. It's almost
like you're actively avoiding giving more information. What's
the point of that?

Alf P. Steinbach

Dec 16, 2018, 8:43:03 PM
On 16.12.2018 17:30, Tim Rentsch wrote:
> Öö Tiib <oot...@hot.ee> writes:
>> [snip-a-lot]
>> I interpret it so that executed program with some input data
>> may have undefined behavior but with other input data its
>> behavior may be defined. For example if the limits of automatic
>> storage of executing environment are exceeded or not can
>> depend on data that is processed.
>
> I am disappointed by your response. Basically all you have done
> is repeat or rephrase things you have said earlier. There are
> conclusions but no reasoning leading up to them.

The reasoning seems clear to me.

Consider:

auto main( const int n, char** )
-> int
{ return 42/(n-1); }


Invoked with no arguments it will have n == 1 and Undefined Behavior.

Otherwise it will have n > 1 and in itself well-defined behavior,
although for a high enough number of arguments the behavior is system
dependent (e.g. max 8 bits return value in *nix, 32 bits in Windows).

So, the UB here depends on the program invocation, the input data.

There is no way to say that the program itself, without considering its
invocation, has UB or no UB. It has the potential for UB. For some input.


[snip-even-more]


Cheers!

- Alf (hoping he is not adding irrelevancies here, TLDR ;-) )

Öö Tiib

Dec 17, 2018, 7:38:30 AM
On Sunday, 16 December 2018 18:30:31 UTC+2, Tim Rentsch wrote:
I am in the dark about what reasoning is missing. There simply are
no separate things like, for example, "limits-breaching behavior"
and "undefined behavior". There is behavior, and in some
circumstances it can be undefined.

> Also I asked at
> least half a dozen questions, none of which you gave answers to
> AFAICT. (At the end you give an answer but it's not an answer to
> the question I asked.)

Can you specifically show how my answers to your questions did not
address what was asked? Usually the resource needs during execution
(and the whole flow of execution in general) depend on the input
data.

I basically can't find anything indicating that what happens in the
case of exceeding resource limits is defined (or implied to be) as
something other than behavior. What is it, then? Yes, if the
implementation can discover such a breach during translation, then it
can handle it the way it handles ill-formed code, by diagnosing it
during translation, but that is a special, lucky case and is nowhere
required for the automatic storage limit.

> I don't want to have an argument, I just
> want to understand what you think and WHY you think it. Rather
> than explain you just repeat your talking points. It's almost
> like you're actively avoiding giving more information. What's
> the point of that?

I also do not want to argue and have only tried to explain my
position and how and why it was reached. I don't see where I have
tried to hide or dissemble anything. It is basically that no one can
demonstrate the presence of requirements in the C++ standard on
behavior in the case of exceeding limits; therefore that behavior is
undefined. There is no System.StackOverflowException or anything
like that in C++.

