
Here is why C and C++ are bad...


Ramine

Dec 25, 2015, 4:54:13 PM
Hello,

I was working on a project in C++, but I have encountered
a problem with the language that we call C++; this problem
is inherent to both C++ and C. Here is a small example that
shows the problem:

===

#include <iostream>
using namespace std;

int main()
{
    unsigned int u = 1234;
    int i = -1233;

    unsigned int result1 = u + i;
    std::cout << "Number is = " << result1 << std::endl;
}

==


In this example, C++ and C will convert i from the signed int type to
the unsigned int type, but this is not good, because this type of
conversion can cause bugs and errors of logic, so I think we must
forbid C and C++ on realtime critical systems... and I have read on
the internet that C and C++ are strongly typed languages; how can you
say that, ignoring the simple fact that an implicit conversion
in C++ and C takes place between a signed int type and an unsigned int
type, for example? That makes C and C++ problematic and not suited for
realtime critical systems and such. Ada and Delphi and FreePascal are in
fact strongly typed and do not authorize this type of conversion.



Thank you,
Amine Moulay Ramdane.











Paul

Dec 25, 2015, 6:18:02 PM
But the result here is in fact 1 (1234 + (-1233) = 1), so you haven't illustrated any problem. This code might do a better job of illustrating your point.

Paul

#include <iostream>

int main()
{
    unsigned y = 0;
    if( -1 > y )   // -1 is converted to unsigned, so this is true
        std::cout << "unexpected result which could cause bugs";
}


Ramine

Dec 25, 2015, 6:23:43 PM

Hello...

Read this from my other posts:


"Because in C and C++ an implicit conversion from, for example,
a signed long to an unsigned long is authorized,
and that's bad because it can cause bugs and errors of logic, and
that's not good for high-integrity and realtime safety-critical systems.
In Ada and FreePascal and Delphi this type of implicit conversion is not
possible, and more than that, Ada in general does not permit
assigning one type to another type, as I have shown you above,
so Ada is really suited for high-integrity and realtime safety-critical
systems, and this is why C and C++ are bad, as I have explained to you
before."

JiiPee

Dec 25, 2015, 6:25:11 PM
On 25/12/2015 23:17, Paul wrote:
> But the result here is in fact 1 so 1234 -1233 = 1 and you haven't illustrated any type of problem. This code might do a better job of illustrating your point.
>
> Paul
>
> int main()
> {
> unsigned y = 0;
> if( -1 > y)
> cout << "unexpected result which could cause bugs";
> }
>

I also get 1.

One way to solve this problem would be to create classes for each basic
type. So using

class UInt

instead of unsigned. Then we could overload the > operator to behave
correctly with different types.
I guess this is why Bjarne prefers using int instead of unsigned; then
we do not get this problem. If I remember correctly, he discourages using
unsigned.


JiiPee

Dec 25, 2015, 6:28:38 PM
On 25/12/2015 23:17, Paul wrote:
> int main() { unsigned y = 0; if( -1 > y) cout << "unexpected result
> which could cause bugs"; }

I wonder why the compiler does not give an error here? I think it should
be like this: the unsigned has to be explicitly converted to int with a
static_cast, and otherwise it's a compilation error. That's how it should
be, imo.

Alf P. Steinbach

Dec 25, 2015, 7:13:09 PM
On 12/26/2015 1:54 AM, Ramine wrote:
>
> I was working on a project in C++, but i have encountered
> a problem with the language that we call C++, this problem
> is inherent to C++ and C, here is a small example that
> shows the problem:
>
> ===
>
> #include <iostream>
> using namespace std;
>
> int main()
> {
> unsigned int u = 1234;
> int i = -1233;
>
> unsigned int result1 = u + i;
> std::cout << "Number is = " << result1 << std::endl;
> }
>
> ==
>
>
> In this example, C++ and C will convert i from a signed int type to
> unsigned int type but this is not good , because this types of
> conversion can cause bugs and errors of logic

No, only ignorance about the implicit conversion to unsigned can cause
problems for this specific code. The above result is well-defined
because any conversion of integer to unsigned integer is defined as
yielding the result modulo 2^n where n is the number of value
representation bits in the unsigned type. And it's an expected result.

However, there is a closely related feature, namely implicit PROMOTION
up to unsigned in an expression with mixed type operands, where the
above conversion is invoked, that can cause trouble. Specifically, if
integer types Signed and Unsigned have the same size, or Unsigned is
larger than Signed, and if such values occur as operands to some built-in
operator, then the Signed value is implicitly converted to Unsigned.
This is well-defined but can yield nonsense results such as
`string("blah").length() < -5` evaluating to `true`.

For more and more specific information about the implicit conversions in
expressions, look up the "usual arithmetic conversions".

• • •

Recommendations: I recommend using signed integers for numbers and using
unsigned only for bitlevel things. That avoids the surprising automatic
promotions and related bugs, plus, it avoids the common and annoying
warnings about comparing signed and unsigned. Where you need to indicate
that an integer is non-negative, simply use a type alias such as
"Nonnegative", or better, some name specific to the use.

• • •

To do that, you will in practice need non-member `size` functions that
produce signed integer results, like `static_size`, `n_items` and
`length` below:

using Size = ptrdiff_t;

// Static capacity of a collection.

template< class Type, Size n >
auto constexpr static_size( Ref_<Type[n]> )
-> Size
{ return n; }

template< class Type, Size n >
auto constexpr static_size( Ref_<const std::array<Type, n>> )
-> Size
{ return n; }

template< Size n >
auto constexpr static_size( Ref_<const std::bitset<n>> )
-> Size
{ return n; }


// Right-typed (signed integer) dynamic size of a collection.

template< class Type >
auto n_items( Ref_<const Type> o )
-> Size
{ return o.size(); }

template< Size n >
auto n_items( Ref_<const std::bitset<n>> o )
-> Size
{ return o.count(); } // Corresponds to std::set<int>::size()


// Lengths of strings.

template< class Char, Size n >
auto constexpr length_of_literal( Ref_<const Char[n]> )   // "length" wraps this.
-> Size
{ return n - 1; }

template< class Char, class Traits, class Alloc >
auto length( Object_argument, Ref_<const std::basic_string<Char, Traits, Alloc>> s )
-> Size
{ return s.length(); }

template< class Char >
auto length( Pointer_argument, const Ptr_<const Char> s )
-> Size
{
auto p = s;
while( *p != 0 ) { ++p; }
return p - s;
}

template< class Char, Size n
, class Enabled_ = If_< In_< Character_types, Char > >
>
auto constexpr length( Array_argument, Ref_<const Char[n]> a )
-> Size
{ return length_of_literal( a ); }

template< class Type >
auto length( Ref_<const Type> o )
-> Size
{ return length( Arg_kind_<Type>(), o ); }

The above code is available at GitHub along with other such core
language support, as I recently announced in this group -- but while
that code works with Visual C++ it's not yet even been compiled with
g++, and some of it is only at the 1% to 2% state of completion:

http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2014/n4280

In C++1z (C++17?) some of the above functions can and probably for
clarity should be expressed in terms of the new `std::size` non-member
function. However, that function is generally unsigned and conflates the
`static_size` and `n_items` functions above, which means that the
wrappers will still be needed, unless the proposal is fixed. See

http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2014/n4280

for more detailed info.


> , so i think we must
> forbid C and C++ on realtime critical systems... and i have read on
> internet that C and C++ are strongly typed languages, how can you
> say that !

C is not strongly typed in any sense.

C++ supports, but does not require, strongly typed code.

Bertrand Meyer, in his software development book, went on at length about
the difference between /enabling/ something and fully /supporting/ it.
C++ enables strongly typed code, and C++ provides much support for it,
but does not require it. That has to do with its historical roots in C,
that C++ must support direct use of most C libraries.


> ingnoring the simple fact that an implicit conversion
> in C++ and C takes place between a signed int type and an unsigned int
> type for example, so that makes C and C++ problematic and not suited for
> realtime critical systems and such, ADA and Delphi and FreePascal are in
> fact strongly typed and does not authorize this type of conversion.

The problems for designing and implementing real time systems are much
to do with the dangers of parallel processing and less to do with the
low level details of the language at hand.

Well, there are of course also management problems. For an example you
might google the Ariane disaster... ;-)

Now that C++ supports threads in its standard library it's perhaps even
better suited for real time systems. Still I think very fondly of the
Ada rendezvous mechanism for threads. I don't think that can be easily
emulated (at least not efficiently) in C++!


Cheers & hth.,

- Alf

Alf P. Steinbach

Dec 25, 2015, 7:51:49 PM
On 12/26/2015 1:12 AM, Alf P. Steinbach wrote:
>
> The above code is available at GitHub along with other such core
> language support, as I recently announced in this group -- but while
> that code works with Visual C++ it's not yet even been compiled with
> g++, and some of it is only at the 1% to 2% state of completion:
>
> http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2014/n4280

Sorry, I meant


https://github.com/alf-p-steinbach/cppx/blob/plutonium/core_language_support/sizes_and_lengths.hpp


Cheers,

- Alf

JiiPee

Dec 25, 2015, 9:17:37 PM
On 26/12/2015 00:12, Alf P. Steinbach wrote:
> Recommendations: I recommend using signed integers for numbers and
> using unsigned only for bitlevel things.

yes this is what Bjarne seems to recommend as well

Paul

Dec 26, 2015, 6:22:01 AM
I think C++ allows implicit conversion between unsigned int and int. If an unsigned int and an int are in the same expression, there's a question of whether the int is converted to unsigned or vice versa. The answer is that the int is converted to unsigned.

If you think that a compiler should give an error here, that is up to you. With the right settings, compilers will indeed warn when unsigneds are compared with ints. Gcc additionally has a "treat warnings as errors" setting. Combine the two settings and voila!

Paul

JiiPee

Dec 26, 2015, 6:57:02 AM
yes ok, that might be the solution then, even for the OP. Although
personally I do not see this as much of a problem currently, as I rarely
use unsigned... I just use int almost everywhere.

Paul

Dec 26, 2015, 7:46:42 AM
Ok, except that ints might not work very well for Ferrero Rocher chocolates. Since they taste so good (particularly at holiday times, when you can eat guilt-free), you might want to use unsigned to buy 2^32-1 of them rather than an int-based approach which lets you eat only 2^31-1. Ints for the dieters, but unsigneds for those who want to maximise consumption.

Paul

Mr Flibble

Dec 26, 2015, 12:56:17 PM
On 26/12/2015 00:12, Alf P. Steinbach wrote:
Absurd, time for my usual "Idiots" post sausages.

/Flibble

Jorgen Grahn

Dec 29, 2015, 9:16:10 AM
On Sat, 2015-12-26, JiiPee wrote:
> On 26/12/2015 11:21, Paul wrote:
>> On Friday, December 25, 2015 at 11:28:38 PM UTC, JiiPee wrote:
>>> On 25/12/2015 23:17, Paul wrote:
>>>> int main() { unsigned y = 0; if( -1 > y) cout << "unexpected result
>>>> which could cause bugs"; }

>>> I wonder why the compiler does not give an error here? I think it should
>>> be like that: unsigned must be converted with static_cast to int, and
>>> otherwise its a compilation error. Thats how it should be imo.

But it's not, and it's not possible to change without breaking lots of
code and C compatibility.

>> I think C++ allows implicit conversion between unsigned int and
>> int. If an unsigned int and int are in the same expression,
>> there's a question of whether the int is converted to unsigned or
>> vice versa. The answer is that the int is converted to unsigned.
>>
>> If you think that a compiler should give an error here, that is up
>> to you. On most settings, compilers will indeed warn when
>> unsigneds are compared with ints. Gcc additionally has a "Treat
>> warnings as errors" setting. Combine the two settings and voila!
>>
>> Paul
>
> yes ok, that might the solution then. even for the OP. Although
> personally I do not see this so much a problem currently as I rarely use
> unsigned... I just use int almost everywhere.

That's hard to do in practice, since so many things in the language
(size_t, container<T>::size_type and so on) are unsigned. If you
choose to use int, you'll have a suspicious conversion as soon as you
use those things.

/Jorgen

--
// Jorgen Grahn <grahn@ Oo o. . .
\X/ snipabacken.se> O o .

Alf P. Steinbach

Dec 29, 2015, 10:33:48 AM
It's very easy to use signed types (preferably just "int") for integral
numbers throughout the code, simply by defining a few [1] general size
functions, like `static_size`, `n_items` and `length`.

It didn't use to be that way, e.g. because g++ erroneously & stubbornly
used to refuse to infer an array type for a template where the size was
signed.

Happily, as with the earlier g++ steadfast opposition to the UTF-8 BOM
(directly opposite of the requirements of the Visual C++ compiler, and
with the proponents never once acknowledging that that was the reason,
but instead going on about all kinds of silly ideals), those days of
by-compiler-as-proxy wars seem to be over. Let's hope it's for real.


Cheers & hth.,

- Alf

Notes:
[1] See <url:
https://github.com/alf-p-steinbach/cppx/blob/plutonium/core_language_support/sizes_and_lengths.hpp>
for an example implementation. I'm currently adding an `is_empty`
function to that mix. But I haven't tested it: it's supposed to
automatically use a boolean `is_empty()`, `isEmpty()`, `IsEmpty()` or
`empty()` member function if one exists, where the last one is the
standard library's convention.

JiiPee

Dec 29, 2015, 10:37:39 AM
so you see... it's only a difference of one in the exponent, 32 <-> 31.
I don't know why people make that difference into such a big issue...
they are both big numbers anyway...

Jorgen Grahn

Dec 31, 2015, 4:23:40 PM
On Sat, 2015-12-26, JiiPee wrote:
Where does he recommend that, so I can look it up?

JiiPee

Jan 2, 2016, 2:24:02 PM
On 31/12/2015 21:23, Jorgen Grahn wrote:
> On Sat, 2015-12-26, JiiPee wrote:
>> On 26/12/2015 00:12, Alf P. Steinbach wrote:
>>> Recommendations: I recommend using signed integers for numbers and
>>> using unsigned only for bitlevel things.
>> yes this is what Bjarne seems to recommend as well
> Where does he recommend that, so I can look it up?
>
> /Jorgen
>

it's possibly in the book "The C++ Programming Language", 4th edition,
which I have been reading... maybe google it. But I'm not 100% sure.

JiiPee

Jan 2, 2016, 2:30:54 PM
On 31/12/2015 21:23, Jorgen Grahn wrote:
> On Sat, 2015-12-26, JiiPee wrote:
>> On 26/12/2015 00:12, Alf P. Steinbach wrote:
>>> Recommendations: I recommend using signed integers for numbers and
>>> using unsigned only for bitlevel things.
>> yes this is what Bjarne seems to recommend as well
> Where does he recommend that, so I can look it up?
>
> /Jorgen

he says in this video "Use int until you have a reason not to" at 12:50

https://channel9.msdn.com/Events/GoingNative/2013/Interactive-Panel-Ask-Us-Anything

The other panelist there says the same.

But I think it was also in his book.

Jorgen Grahn

Jan 2, 2016, 4:49:52 PM
On Sat, 2016-01-02, JiiPee wrote:
> On 31/12/2015 21:23, Jorgen Grahn wrote:
>> On Sat, 2015-12-26, JiiPee wrote:
>>> On 26/12/2015 00:12, Alf P. Steinbach wrote:
>>>> Recommendations: I recommend using signed integers for numbers and
>>>> using unsigned only for bitlevel things.
>>> yes this is what Bjarne seems to recommend as well
>> Where does he recommend that, so I can look it up?
>>
>> /Jorgen
>
> he says in this video "Use int until you have a reason not to" at 12:50
>
> https://channel9.msdn.com/Events/GoingNative/2013/Interactive-Panel-Ask-Us-Anything

Thanks. You're right, he does say that, quite clearly. And also the
thing about bit-level above, plus "never mix signed and unsigned".

> Also the other one says there the same.
>
> But I think it was also in his book.

Which makes me wonder what they do to size_t, std::vector<T>::size_type
and so on ...

Bo Persson

Jan 2, 2016, 4:56:29 PM
I have heard that some of those guys now also regret making size_type an
unsigned type.

Having both size_type and difference_type in the containers makes it
very hard to follow the advice "never mix signed and unsigned".



Bo Persson


Jorgen Grahn

Jan 2, 2016, 5:34:53 PM
Which still leaves me confused because the standard library (of C++
and C) cannot be changed, and Stroustrup strikes me as the kind of
guy who gives advice for the /real/ world ...

JiiPee

Jan 2, 2016, 9:13:59 PM
On 02/01/2016 21:56, Bo Persson wrote:
> On 2016-01-02 22:49, Jorgen Grahn wrote:
>> On Sat, 2016-01-02, JiiPee wrote:
>>> On 31/12/2015 21:23, Jorgen Grahn wrote:
>>>> On Sat, 2015-12-26, JiiPee wrote:
>>>>> On 26/12/2015 00:12, Alf P. Steinbach wrote:
>>>>>> Recommendations: I recommend using signed integers for numbers and
>>>>>> using unsigned only for bitlevel things.
>>>>> yes this is what Bjarne seems to recommend as well
>>>> Where does he recommend that, so I can look it up?
>>>>
>>>> /Jorgen
>>>
>>> he says in this video "Use int until you have a reason not to" at 12:50
>>>
>>> https://channel9.msdn.com/Events/GoingNative/2013/Interactive-Panel-Ask-Us-Anything
>>>
>>
>> Thanks. You're right, he does say that, quite clearly. And also the
>> thing about bit-level above, plus "never mix signed and unsigned".
>>
>>> Also the other one says there the same.
>>>
>>> But I think it was also in his book.
>>
>> Which makes me wonder what they do to size_t, std::vector<T>::size_type
>> and so on ...
>>
>
> I have heard that some of those guys now also regret making size_type
> an unsigned type.
>

yes, that's what they say in that video too

JiiPee

Jan 2, 2016, 9:17:01 PM
I actually agree with Bjarne here... what if you put -2 into an unsigned
int? You might not get what you expect. They both can contain very big
numbers, so even that is not a reason.

unsigned int would work only if there were a compile-time error
ALWAYS when you try to put a negative number into it, but I guess that's
not possible. Also, we have this constant problem of converting uint to
int later in the code.

Paavo Helde

Jan 3, 2016, 2:39:57 AM
JiiPee <n...@notvalid.com> wrote in news:ow%hy.317908$Xx.3...@fx45.am4:

> unsigned int would work only if there would be a compile time error
> ALWAYS when you try to put a negative number to it. but i guess its
> not possible.

There is a widely used idiom for initializing an unsigned integer with all
bits set. This is also extremely simple and extremely portable as the
conversion rules are explicitly stated in the C and C++ standards:

unsigned int x = -1;

std::uint64_t y = -1;

Cheers
Paavo

JiiPee

Jan 3, 2016, 11:33:39 AM
you mean

unsigned int x = -1;

int y = x;

would be guaranteed to set y to -1?

Paavo Helde

Jan 3, 2016, 2:22:24 PM
JiiPee <n...@notvalid.com> wrote in news:b3ciy.355205$Xx.2...@fx45.am4:
No, I mean it is guaranteed that all bits of an unsigned integer are set
to 1 when -1 is assigned to it. Assigning this value back to an int is
implementation-defined (because the value cannot be represented in int).
You might get -1 back in some implementations, but this is not
guaranteed.

Cheers
Paavo


Vir Campestris

Jan 3, 2016, 2:53:47 PM
On 03/01/2016 07:39, Paavo Helde wrote:
> There is a widely used idiom for initializing an unsigned integer with all
> bits set. This is also extremely simple and extremely portable as the
> conversion rules are explicitly stated in the C and C++ standards:
>
> unsigned int x = -1;
>
> std::uint64_t y = -1;

I've heard rumours of machines that don't use 2s complement arithmetic
(although I've never met one). Shouldn't we use ~0?

Andy

Alf P. Steinbach

Jan 3, 2016, 3:08:48 PM
Only when the 0 is of the right size.

E.g.,

unsigned x = ~0u; // OK

but

uint64_t x2 = ~0u; // Probably not OK (e.g., not OK in Windows).

Paavo Helde

Jan 3, 2016, 4:28:13 PM
Vir Campestris <vir.cam...@invalid.invalid> wrote in
news:t5KdnfSA7-Kk4RTL...@brightview.co.uk:
Conversion from -1 is well-defined on all machines:

4.7 Integral conversions [conv.integral]
[...]
2 If the destination type is unsigned, the resulting value is the least
unsigned integer congruent to the source integer (modulo 2^n where n is
the number of bits used to represent the unsigned type). [ Note: In a
two's complement representation, this conversion is conceptual and there
is no change in the bit pattern (if there is no truncation). —end note ]

The tilde operator however is more tricky:

5.3.1/10: The operand of ~ shall have integral or unscoped enumeration
type; the result is the one’s complement of its operand. Integral
promotions are performed. The type of the result is the type of the
promoted operand.

In ~0, 0 is of type int, so the promoted type is int as well, and we get
one's complement of a signed int, that would be some implementation-
defined value as the representation of signed integers is implementation-
specific IIRC.

~0u would work better, but would still fail with larger unsigned types. I
still think using -1 is the simplest and most robust way.

Cheers
Paavo

JiiPee

Jan 3, 2016, 7:53:21 PM
On 03/01/2016 21:27, Paavo Helde wrote:
> Vir Campestris <vir.cam...@invalid.invalid> wrote in
> news:t5KdnfSA7-Kk4RTL...@brightview.co.uk:
>
> I still think using -1 is the simplest and most robust way. Cheers Paavo

I guess the question here is, as Vir also said: is it *guaranteed* by
the standard that -1 *always* does this? Can you show the place in the
standard which guarantees this? Because if not, then I think it might be
risky to use it... (it might work on 99% of machines but fail on 1%,
which for me would mean I would not use it).

Alf P. Steinbach

Jan 3, 2016, 8:36:36 PM
On 1/4/2016 1:53 AM, JiiPee wrote:
> On 03/01/2016 21:27, Paavo Helde wrote:
>> Vir Campestris <vir.cam...@invalid.invalid> wrote in
>> news:t5KdnfSA7-Kk4RTL...@brightview.co.uk:
>>
>> I still think using -1 is the simplest and most robust way. Cheers Paavo
>
> I guess the question here is , what Vir also said, that is it
> *guaranteed* by the standard that -1 *always* does this? So can you show
> from the starndard the place which guarantees this?

Unfortunately the standard treats initialization and arithmetic
separately, so we need to be precise about what “this” is.

I'll assume that it's

unsigned x = -1;

Then, first, C++11 §8.5/16 last dash, says that for this case

“Standard conversions (Clause 4) will be used, if necessary, to convert
the initializer expression to the cv-unqualified version of the
destination type; no user-defined conversions are considered”

And over in the referred clause 4 one finds in §4.7/2 that

“If the destination type is unsigned, the resulting value is the least
unsigned integer congruent to the source integer (modulo 2^n where n is
the number of bits used to represent the unsigned type)”

This is a bit UNCLEAR because it doesn't use the established term “value
representation”, i.e. by not using that term it leaves open the
possibility of affecting some trap bits or such. But only the really
pedantic would fret about that. Nobody's posted a DR about it AFAIK.

JiiPee

Jan 4, 2016, 1:50:09 PM
ok, thanks for the effort

Vir Campestris

Jan 4, 2016, 4:19:20 PM
On 04/01/2016 00:53, JiiPee wrote:
> I guess the question here is , what Vir also said, that is it
> *guaranteed* by the standard that -1 *always* does this? So can you show
> from the starndard the place which guarantees this? because if not, then
> I think it might be risky to use it....(it might work in 99% of
> machines, but fail in 1% which for me would mean I would not use it).

I've worked on machines where the difference between subsequent
addresses is 1, 6, 8, 18, 24 and 36 bits; and where the native word size
is 8, 16, 24, 32, 36 and 64.

As I said, I've never met one which is not twos complement (ie -1 is
some number of FFFs).

I have met some where +0.0 is different to -0.0 - but then, equality is
pretty meaningless in floating point.

If they do exist it's a hell of a lot less than 1%.

However - Alf told me not to do it. Paavo quoted me the spec (which I
spent 5 minutes reading again and again) and if those two tell me not to
I won't.

Andy

Chris Vine

unread,
Jan 4, 2016, 5:24:38 PM1/4/16
to
No they didn't. They told you that the initialization:

unsigned x = -1;

is guaranteed to initialize all the bits of the integer to 1, by §4.7/2
of the standard. It is guaranteed to work, subject to the slight
imperfection of the wording to which Alf referred but which does not
affect the clear intention. (However, behaviour in the reverse
direction, namely on overflow on conversion to signed, is not
guaranteed. It is implementation defined - §4.7/3.)

Chris

Chris Vine

Jan 4, 2016, 5:37:43 PM
On Mon, 4 Jan 2016 22:24:16 +0000
Chris Vine <chris@cvine--nospam--.freeserve.co.uk> wrote:
[snip]
> No they didn't. They told you that the initialization:
>
> unsigned x = -1;
>
> is guaranteed to initialize all the bits of the integer to 1, by
> §4.7/2 of the standard. It is guaranteed to work, subject to the
> slight imperfection of the wording to which Alf referred but which
> does not affect the clear intention. (However, behaviour in the
> reverse direction, namely on overflow on conversion to signed, is not
> guaranteed. It is implementation defined - §4.7/3.)

It occurs to me that you may have been confused by the reference to bit
alteration in §4.7/2. The point here is that even where negative
numbers are represented by, say, one's complement

unsigned x = -1;

will still set all the bits of the integer to 1. However in one's
complement this is bit altering, because in the one's complement
representation of -1, the first bit is not set (all bits set in one's
complement is equivalent to -0, which is a value that two's complement
does not have).

Chris

Mr Flibble

Jan 4, 2016, 5:54:44 PM
There is no such thing as negative 0 which makes (if you are correct)
one's complement erroneous. IEEE floating point is also wrong to include
support for negative 0 sausages.

/Flibble

Chris Vine

Jan 4, 2016, 6:09:11 PM
On Mon, 4 Jan 2016 22:54:31 +0000
Mr Flibble <flibbleREM...@i42.co.uk> wrote:
[snip]
> There is no such thing as negative 0 which makes (if you are correct)
> one's complement erroneous. IEEE floating point is also wrong to
> include support for negative 0 sausages.

On the mathematical number line, only 0 exists. But -0 exists in one's
complement. No bits set is +0, all bits set is -0. The two compare
equal so it is an artifact of the representation - in one's complement,
all bits set is equal to no bits set.

This is a necessary consequence of the fact that in one's complement
a negative number of a given value is obtained by inverting all the
bits of its positive equivalent. This in turn means that in one's
complement, the range of an 8 bit signed integer is 127 to -127, not
127 to -128 as in two's complement.

Chris

Öö Tiib

Jan 4, 2016, 6:28:00 PM
That is still an unnecessary consequence. When we negate zero it should
give zero again, not "negative zero". The value with all bits set could
instead be used as a "quiet saturating NaN" or even a "signaling NaN"...
such values can be useful. That -0 feels even less useful a value than
that -128.

Mr Flibble

Jan 4, 2016, 6:29:39 PM
The last time I had to think about 1's complement was in my first year
at university (1990) which just reinforces (in my mind) that 1's
complement is broken sausages.

/Flibble

Scott Lurndal

Jan 5, 2016, 9:05:14 AM
Vir Campestris <vir.cam...@invalid.invalid> writes:
>On 04/01/2016 00:53, JiiPee wrote:
>> I guess the question here is , what Vir also said, that is it
>> *guaranteed* by the standard that -1 *always* does this? So can you show
>> from the starndard the place which guarantees this? because if not, then
>> I think it might be risky to use it....(it might work in 99% of
>> machines, but fail in 1% which for me would mean I would not use it).
>
>I've worked on machines where the difference between subsequent
>addresses is 1, 6, 8, 18, 24 and 36 bits; and where the native word size
>is 8, 16, 24, 32, 36 and 64.
>
>As I said, I've never met one which is not twos complement (ie -1 is
>some number of FFFs).

I have used one, and am aware of another, where a negative value is
not represented in twos (or even ones) complement. Both are now
obsolete (yet one was running production code as late as 2010 for
a California city).

In Burroughs medium systems mainframes, memory is addressable to
the digit (nibble) and arithmetic is done on BCD values. The sign
digit (0xC for +, 0xD for -) occupies the leading digit of the field.

A null pointer is represented by the 8 digit (base 16) value CiEEEEEE
where the 'i' can be any value from 0 to 7 (it's a base indicant to select
which of the 8 active base-limit pairs to use).

http://vseries.lurndal.org/doku.php?id=instructions:add
http://vseries.lurndal.org/doku.php?id=instructions:slt

Similarly the IBM 1401 and other BCD systems.

https://en.wikipedia.org/wiki/IBM_1401

C (and C++) aren't a good match for such architectures.

Scott Lurndal

Jan 5, 2016, 9:06:23 AM
Chris Vine <chris@cvine--nospam--.freeserve.co.uk> writes:
>On Mon, 4 Jan 2016 22:54:31 +0000
>Mr Flibble <flibbleREM...@i42.co.uk> wrote:
>[snip]
>> There is no such thing as negative 0 which makes (if you are correct)
>> one's complement erroneous. IEEE floating point is also wrong to
>> include support for negative 0 sausages.
>
>On the mathematical number line, only 0 exists. But -0 exists in one's
>complement. No bits set is +0, all bits set is -0. The two compare
>equal so it is an artifact of the representation - in one's complement,
>all bits set is equal to no bits set.


-0 also exists in BCD architectures. In general, it's treated by
the hardware as identical to +0.

Chris Vine

Jan 5, 2016, 11:00:35 AM
And in any sign-and-magnitude representation of signed integers
generally.

I saw your interesting post on BCD systems, which I took to mean that
they use 4-bit binary coded decimal for their numbers. I don't think
such systems could validly run a C++11 program, because §3.9.1/4
requires a pure binary (power-of-two per bit) representation of unsigned
integers, something which is also assumed by §4.7/2 dealing with
unsigned overflow. I am not as familiar with the C11 standard but I
suspect that a BCD system may be able to run C. In particular
§6.3.1.3/2 of C11 on unsigned overflow does not assume power-of-2
representation, and although its effect is identical to §4.7/2 of C++11
on such representations, its requirements work OK for others (systems
would have to ensure that all necessary bit transformations are made so
that initialization or assignment of -1 to their unsigned integer type
sets the unsigned integer to its maximum value, which happens to be
all-bits-set for power-of-2 representations). Perhaps something else
in C rules out BCD, I don't know.

I remember many years ago having some hardware based logic in an
electronic device using BCD for its numbers. That is as close as I
have ever come to it. It struck me then as wasteful of storage, albeit
very convenient in that particular usage.

Chris

Scott Lurndal

Jan 5, 2016, 11:44:59 AM
Chris Vine <chris@cvine--nospam--.freeserve.co.uk> writes:
>On Tue, 05 Jan 2016 14:06:13 GMT
>sc...@slp53.sl.home (Scott Lurndal) wrote:
>
>> Chris Vine <chris@cvine--nospam--.freeserve.co.uk> writes:
>> >On Mon, 4 Jan 2016 22:54:31 +0000
>> >Mr Flibble <flibbleREM...@i42.co.uk> wrote:
>> >[snip]
>> >> There is no such thing as negative 0 which makes (if you are
>> >> correct) one's complement erroneous. IEEE floating point is also
>> >> wrong to include support for negative 0 sausages.
>> >
>> >On the mathematical number line, only 0 exists. But -0 exists in
>> >one's complement. No bits set is +0, all bits set is -0. The two
>> >compare equal so it is an artifact of the representation - in one's
>> >complement, all bits set is equal to no bits set.
>>
>> -0 also exists in BCD architectures. In general, it's treated by
>> the hardware as identical to +0.
>
>And in any sign-and-magnitude representation of signed integers
>generally.
>
>I saw your interesting post on BCD systems, which I took to mean that
>they use 4-bit binary coded decimal for their numbers.

I was one of the OS engineers for the Burroughs BCD systems three
decades ago. We did look (circa 1984) at porting V6 C to the
architecture, but it would have been very limiting; the architecture
allows variable-length numeric fields of one to 100 digits. We looked
at mapping 'unsigned int' to an 8-digit (32-bit) field, but the largest
magnitude supported would have been 9,999,999, which wasn't particularly
useful. It was really designed as a target for COBOL compilers.

Making it more difficult, the user-level architecture included
segmentation: up to seven active data segments and one code segment at
any time, with a non-local function call instruction (virtual enter,
VEN) to switch to a different set of eight "segments". Each segment
maxed out at 1 million digits (i.e. a 6-digit address within the
segment).

nullptr was a 6-digit EEEEEE (8 digits in an index register, to allow
selection of one of the 8 active segments).

Segment 0, which contained the stack and the first three index registers
(which were mapped to addresses 8-15, 16-23 and 24-31), was generally
the same in all of an application's environments (sets of segments).
The stack pointer was mapped to addresses 40-45 in segment zero.

Each process (task) had its own segment zero. Code segments (being
immutable) were shared as necessary both within a task and between
tasks.

Application APIs (core-to-core (synchronous) and storage queues
(asynchronous)) allowed data to be shared between cooperating tasks.

Debugging was dead simple given that everything was in BCD.

The architecture did not have bit shifting capabilities[*], but did have
bit test, bit set and bit clear instructions as well as the logical
bit manipulation instructions (and, or, not).

[*] although the processor internally had a barrel shifter to efficiently
store and access nibbles using COTS memory parts (post magnetic core).

Early 70's versions (the architecture dates to 1965) didn't trap on
'undigit arithmetic' and the results were 'undefined'. Later versions
would trap such operations; several customers had funky code that relied
on undigit (e.g. BCD values of 0xa - 0xf) arithmetic, which was one of
the few backward compatibility issues we had over the 30-year life of
the architecture; most applications written in 1966 still ran in 1996 on
the latest version of the hardware (a good thing, since the source for
many of those applications had been lost).

City of Santa Ana replaced their V380 (built 1987) with 20 Windows
boxes in 2010, which was the last system I'm aware of that was still
in production. The system is now operational at the Living Computer
Museum in Seattle, Wash.

scott


Rosario19

Jan 5, 2016, 12:24:04 PM
On Mon, 4 Jan 2016 23:08:47 +0000, Chris Vine wrote:
>On Mon, 4 Jan 2016 22:54:31 +0000
>Mr Flibble <flibbleREM...@i42.co.uk> wrote:
>[snip]
>> There is no such thing as negative 0 which makes (if you are correct)
>> one's complement erroneous. IEEE floating point is also wrong to
>> include support for negative 0 sausages.
>
>On the mathematical number line, only 0 exists. But -0 exists in one's
>complement. No bits set is +0, all bits set is -0. The two compare
>equal so it is an artifact of the representation - in one's complement,
>all bits set is equal to no bits set.

+0 or -0 are == 0;
from what I have seen they do not exist in mathematics...
fixed-point floats would not have them either

Chris Vine

Jan 5, 2016, 2:25:20 PM
On Tue, 05 Jan 2016 16:44:40 GMT
sc...@slp53.sl.home (Scott Lurndal) wrote:
[snip]
That's an interesting historical perspective. You must be about my age.

Having just looked at the museum's website I will see if I can fit in a
visit the next time I am at the west coast. I am a UK resident and
don't go there that often as it is such a long haul - I was last there
at the end of September to watch the Dodgers-Giants series (seeing
Bumgarner, Kershaw and Greinke pitching in the space of 2 days was
persuasive).

Chris

Scott Lurndal

Jan 5, 2016, 2:57:38 PM
Chris Vine <chris@cvine--nospam--.freeserve.co.uk> writes:
>On Tue, 05 Jan 2016 16:44:40 GMT
>sc...@slp53.sl.home (Scott Lurndal) wrote:
>[snip]
[snip burroughs description]
>>
>> City of Santa Ana replaced their V380 (built 1987) with 20 windows
>> boxes in 2010, which was the last system I'm aware of that was still
>> in production. The system is now operational at the Living Computer
>> Museum in Seattle, Wash.
>
>That's an interesting historical perspective. You must be about my age.
>
>Having just looked at the museum's website I will see if I can fit in a
>visit the next time I am at the west coast. I am a UK resident and
>don't go there that often as it is such a long haul - I was last there
>at the end of September to watch the Dodgers-Giants series (seeing
>Bumgarner, Kershaw and Greinke pitching in the space of 2 days was
>persuasive).

Yes, I'm able to frequently attend Giants games during the season
(although they take more parking away every year and I'm no longer
much interested in weekday night games so I've been attending fewer
games lately :-).

Many of my friends from the UK (Burroughs had offices in Citygate,
Uxbridge, Milton Keynes and other locations that I used to visit)
consider american baseball akin to rounders.

Chris Vine

Jan 5, 2016, 3:26:41 PM
On Tue, 05 Jan 2016 19:57:27 GMT
sc...@slp53.sl.home (Scott Lurndal) wrote:
> Yes, I'm able to frequently attend Giants games during the season
> (although they take more parking away every year and I'm no longer
> much interested in weekday night games so I've been attending fewer
> games lately :-).

Stay in SF. Hotel prices are high but the public transport system there
is fantastic (and there are some decent bars to call in at on the way
back to Market if that is where you are picking up your tram/bus/train).
AT&T Park is one of the great venues.

> Many of my friends from the UK (Burroughs had offices in Citygate,
> Uxbridge, Milton Keynes and other locations that I used to visit)
> consider american baseball akin to rounders.

This is getting off topic so I will end it here, but having spent a
part of my formative years in the US I am ambidextrous with respect to
baseball and cricket (and other sports for that matter). I guess I
would have to say that the best sporting competition in the world in my
opinion is the European Champions League.

Chris

Scott Lurndal

Jan 5, 2016, 3:38:59 PM
Chris Vine <chris@cvine--nospam--.freeserve.co.uk> writes:
>On Tue, 05 Jan 2016 19:57:27 GMT
>sc...@slp53.sl.home (Scott Lurndal) wrote:
>> Yes, I'm able to frequently attend Giants games during the season
>> (although they take more parking away every year and I'm no longer
>> much interested in weekday night games so I've been attending fewer
>> games lately :-).
>
>Stay in SF. Hotel prices are high but the public transport system there
>is fantastic (and there are some decent bars to call in at on the way
>back to Market if that is where you are picking up your tram/bus/train).
>AT&T Park is one of the great venues.

I can take CalTrain to the stadium, but it takes well over an hour
each way. I live 50 miles south of PacBell^H^H^H^H^H^H^H AT&T park.

s

Vir Campestris

Jan 5, 2016, 4:39:03 PM
Sorry, I snipped too much.

They told me not to use ~0...

Andy

Gareth Owen

Jan 6, 2016, 3:26:29 PM
Watch the A's

red floyd

Jan 6, 2016, 7:55:03 PM
In that tarp-covered monstrosity full of sewage overflow?

Gareth Owen

Jan 7, 2016, 2:00:24 AM
red floyd <no....@its.invalid> writes:

>>
>> Watch the A's
>>
>
> In that tarp-covered monstrosity full of sewage overflow?

Sure. But on the plus side, there's plenty of parking, and you're less
likely to get too-stoned-to-drive from second hand smoke. :)