
Nov 30, 2021, 3:07:34 AM

From another thread, discussion between David and Bart:

D> But if you have just one starting point, 0 is the sensible one.
D> You might not like the way C handles arrays (and I'm not going to
D> argue about it - it certainly has its cons as well as its pros),
D> but even you would have to agree that defining "A[i]" to be the
D> element at "address of A + i * the size of the elements" is neater
D> and clearer than one-based indexing.

B> That's a crude way of defining arrays. A[i] is simply the i'th
B> element of N slots, you don't need to bring offsets into it.

Why call it 'i'th? I know people do, but wouldn't it be easier to call it 'element n', where n is its index? Then that would work with any basing.

B> With 0-based, there's a disconnect between the ordinal number of
B> the element you want, and the index that needs to be used. So A[2]
B> for the 3rd element.

Why not call A[2] element 2?

BTW, Bart, do you consider the first ten numbers as 1 to 10 rather than 0 to 9? If so, presumably you count the hundreds as starting at 111. That's not the most logical viewpoint.

Similarly, on the day a child is born, do you say that he is one year old?

--

James Harris


Nov 30, 2021, 4:18:32 AM

On 2021-11-30 09:07, James Harris wrote:

> From another thread, discussion between David and Bart:

>

> D> But if you have just one starting point, 0 is the sensible one.

> D> You might not like the way C handles arrays (and I'm not going to

> D> argue about it - it certainly has its cons as well as its pros),

> D> but even you would have to agree that defining "A[i]" to be the

> D> element at "address of A + i * the size of the elements" is neater

> D> and clearer than one-based indexing.

>

> B> That's a crude way of defining arrays. A[i] is simply the i'th

> B> element of N slots, you don't need to bring offsets into it.

>

> Why call it 'i'th? I know people do but wouldn't it be easier to call it

> 'element n' where n is its index? Then that would work with any basing.

You are confusing position with index. Index can be of any ordered type. Position is an ordinal number: first, second, third element from the array beginning.

> B> With 0-based, there's a disconnect between the ordinal number of

> B> the element you want, and the index that needs to be used. So A[2]

> B> for the 3rd element.

>

> Why not call A[2] element 2?

It is the element corresponding to the index 2.

Remember, array is a mapping:

array : index -> element

In well-designed languages it is also spelt as a mapping:

A(2)

> BTW, Bart, do you consider the first ten numbers as 1 to 10 rather than

> 0 to 9? If so, presumably you count the hundreds as starting at 111.

> That's not the most logical viewpoint.

>

> Similarly, on the day a child is born do you say that he is one year old?

A date is absolute, like index. Duration is relative, like position.

--

Regards,

Dmitry A. Kazakov

http://www.dmitry-kazakov.de

Nov 30, 2021, 5:28:04 AM

On 30/11/2021 08:07, James Harris wrote:

> From another thread, discussion between David and Bart:

>

> D> But if you have just one starting point, 0 is the sensible one.

> D> You might not like the way C handles arrays (and I'm not going to

> D> argue about it - it certainly has its cons as well as its pros),

> D> but even you would have to agree that defining "A[i]" to be the

> D> element at "address of A + i * the size of the elements" is neater

> D> and clearer than one-based indexing.

>

> B> That's a crude way of defining arrays. A[i] is simply the i'th

> B> element of N slots, you don't need to bring offsets into it.

>

> Why call it 'i'th? I know people do but wouldn't it be easier to call it

> 'element n' where n is its index? Then that would work with any basing.

The most common base I use is 1 (about 2/3 of the time). You have a 3-element array: the 1st is numbered 1, the last is 3, the 3rd is 3 too. All very intuitive and user-friendly.

But this is that 3-element array as 3 adjoining cells:

    mmmmmmmmmmmmmmmmmmmmmmmmm
    m       m       m       m
    m   1   m   2   m   3   m      Normal indexing
    m  +0   m  +1   m  +2   m      Offsets
    m       m       m       m
    mmmmmmmmmmmmmmmmmmmmmmmmm
    0       1       2       3      Distance from start point

The numbering is 1, 2, 3 as I prefer when /counting/. Or you can choose to use offsets from the first element as C does, shown as +0, +1, +2.

There is also /measuring/, which applies more when each cell has some physical dimension, such as 3 adjoining square pixels. Or maybe these are three fence panels, and the vertical columns are the posts. Here, offsets are again used, but notionally considered to be measured from the first 'post'. In this case, an 'index' of 2.4 is meaningful, being 2.4 units from the left, and 40% along that 3rd cell.

Measurement can also apply when the cells represent other units, like time, as DAK touched on: how many days from Monday to Wednesday? That is not that meaningful when a day is considered an indivisible unit like an array cell. You can say the difference is +2 days. In real life, it depends on what time on Monday and what time on Wednesday, so it can vary from 24 to 72 hours (24:00 Mon to 00:00 Wed, or 00:00 Mon to 24:00 Wed).

>

> B> With 0-based, there's a disconnect between the ordinal number of

> B> the element you want, and the index that needs to be used. So A[2]

> B> for the 3rd element.

>

> Why not call A[2] element 2?

>

> BTW, Bart, do you consider the first ten numbers as 1 to 10 rather than

> 0 to 9? If so, presumably you count the hundreds as starting at 111.

> That's not the most logical viewpoint.

I count them the same as everyone else. It's a big deal when the '19' year prefix, in use for 100 years, suddenly changes to '20'.

> Similarly, on the day a child is born do you say that he is one year old?

That would be rounding his age up to the next whole year; most people round down! So the child would be 0 years old, but in its first year.

However, there is not enough resolution using years to accurately measure the ages of very young children, so people also use days, weeks and months.

So, when do I use 0-based:

(a) When porting zero-based algorithms from elsewhere. This works more reliably than porting one-based code to C.

    [N]int A       # 1-based (also [1:N] or [1..N])
    [0:N]int A     # 0-based (also [0..N-1])

(b) When I have a regular array normally indexed from 1, but where that index can have 0 as an escape value, meaning not set or not valid:

    global tabledata() [0:]ichar opndnames =
        (no_opnd=0,      $),
        (mem_opnd,       $),
        (memaddr_opnd,   $),
        ....

(c) When the value used as index naturally includes zero.

When do I use N-based: this is much less common. An example might be:

    ['A'..'Z']int counts

Here, it becomes less meaningful to use the ordinal position index: the first element has index 65! So this kind of array has more in common with a hash or dict type, where the index is a key that can be anything. But for the special case of the keys being consecutive integers over a small range, a regular, fixed-size array indexed by that range is far more efficient.

However, the slice counts['D'..'F'] will have elements indexed from 1..3, not 'D'..'F'. There are some pros and cons, but overall the way I've done it is simpler (slices have a lower bound known at compile-time, not runtime).

Nov 30, 2021, 7:50:17 PM

On 30/11/2021 08:07, James Harris wrote:

> BTW, Bart, do you consider the first ten numbers as 1 to 10 rather

> than 0 to 9?

Until quite recently, Bart and almost everyone else would certainly have done exactly that. Zero, as a number, was invented in modern times [FSVO "modern"!]. "You have ten sheep and you sell ten of them. How many sheep do you now have?" "??? I don't have /any/ sheep left." Or, worse, "You have ten sheep and you sell eleven of them. How many sheep do you now have?" "??? You can't do that, it would be fraud." Or the Peano axioms for the natural numbers: 1 is a natural number; for every n in the set, there is a successor n' in the set; every n in the set /except/ 1 is the successor of a unique member; .... Or look at any book; only in a handful of rather weird books trying to make a point is there a page 0. When you first learned to count, you almost certainly started with a picture of a ball [or whatever] and the caption "1 ball", then "2 cats", "3 trees", "4 cakes", ... up to "12 candles"; not with an otherwise blank page showing "0 children". [Note that 0 as a number in its own right is different from the symbol 0 as a placeholder in the middle or at the end of a number in Arabic numerals.]

Maths, inc numbers, counting, and science generally, got along quite happily with only positive numbers from antiquity up to around 1700, when the usefulness of the rest of the number line became apparent, at least in maths and science if not to the general public.

/Now/ the influence of computing has made zero-based indexing more relevant. So have constructive arguments more generally; eg, the surreal numbers -- a surreal number is two sets of surreal numbers [with some conditions], so that the natural starting point is where the two sets are empty, giving the "empty" number, naturally identified with zero. So it was only around 1970 that people started taking seriously the idea of counting from zero. Of course, once you do that, then you can contemplate not counting "from" anywhere at all; eg the idea [which I first saw espoused by vd Meulen] that arrays could be thought of as embedded in [-infinity..+infinity], treated therefore always in practice as "sparse" arrays, with almost all elements being "virtual".

> If so, presumably you count the hundreds as starting at

> 111. That's not the most logical viewpoint.

We say 111 as "one hundred /and/ eleven", suggesting that it's eleven into the second hundred. It's not illogical to suggest that the "hundreds" start immediately after 100, nor to suggest that they start /at/ 100. Dates are a special case, as there was [of course] no year zero, so centuries "definitely" end on the "hundred" years, not start on them. But, as Bart pointed out, there is still an interest in the number clicking over from 1999 to 2000, and therefore the chance to get two parties.

> Similarly, on the day a child is born do you say that he is one year

> old?

Presumably you would call your eighth-born child "Septimus"?

Slightly more seriously, there are of course legal questions surrounding this [esp when they concern the age of majority and such-like], and they are resolved by the law and by conventions [which may well differ around the world] rather than by maths and logic.

--

Andy Walker, Nottingham.

Andy's music pages: www.cuboid.me.uk/andy/Music

Composer of the day: www.cuboid.me.uk/andy/Music/Composers/Bendel

Dec 1, 2021, 3:43:15 AM

On 01/12/2021 01:50, Andy Walker wrote:

> On 30/11/2021 08:07, James Harris wrote:

>> BTW, Bart, do you consider the first ten numbers as 1 to 10 rather

>> than 0 to 9?

>

> Until quite recently, Bart and almost everyone else would

> certainly have done exactly that.

Remember Bart, and some others, think it is "natural" to count from 32767 on to -32767 (or larger type equivalents - 16-bit numbers are easier to write) in the context of programming. Clearly they would not think that way when counting sheep. So why apply sheep counting to other aspects of programming? Personally I prefer to think you can't add 1 to 32767 (or larger type equivalents), which is of course almost equally silly in terms of sheep.

> Zero, as a number, was invented

> in modern times [FSVO "modern"!].

(Historical note: It reached Europe around 1200, but had been around in India, amongst other countries, for a good while before that. The Mayans also had a number zero earlier on. It is difficult to be precise about times, however, because "zero" is used for many different purposes and ideas changed and evolved over time.)

> "You have ten sheep and you sell

> ten of them. How many sheep do you now have?" "??? I don't have

> /any/ sheep left." Or, worse, "You have ten sheep and you sell

> eleven of them. How many sheep do you now have?" "??? You can't

> do that, it would be fraud." Or the Peano axioms for the natural

> numbers: 1 is a natural number; for every n in the set, there

> is a successor n' in the set; every n in the set /except/ 1 is

> the successor of a unique member; .... Or look at any book;

> only in a handful of rather weird books trying to make a point

> is there a page 0. When you first learned to count, you almost

> certainly started with a picture of a ball [or whatever] and

> the caption "1 ball", then "2 cats", "3 trees", "4 cakes", ...

> up to "12 candles"; not with an otherwise blank page showing

> "0 children". [Note that 0 as a number in its own right is

> different from the symbol 0 as a placeholder in the middle or at

> the end of a number in Arabic numerals.]

>

The first Peano axiom is "0 is a natural number". They start counting at zero, not at one.

There is no mathematical consensus as to whether the set of natural numbers ℕ starts with 0 or 1. But there is no doubt that the numbers generated by the Peano axioms start at 0.

Other than that, we can simply say that different types of number are useful for different purposes.

> Maths, inc numbers, counting, and science generally, got

> along quite happily with only positive numbers from antiquity up

> to around 1700, when the usefulness of the rest of the number

> line became apparent, at least in maths and science if not to

> the general public.

>

Negative numbers long pre-date the general acceptance of 0 as a "number". They were used in accountancy, as well as by a few mathematicians. But their general use, especially in Europe, came a lot later.

> /Now/ the influence of computing has made zero-based

> indexing more relevant. So have constructive arguments more

> generally; eg, the surreal numbers -- a surreal number is two

> sets of surreal numbers [with some conditions], so that the

> natural starting point is there the two sets are empty, giving

> the "empty" number, naturally identified with zero. So it was

> only around 1970 that people started taking seriously the idea

> of counting from zero. Of course, once you do that, then you

> can contemplate not counting "from" anywhere at all; eg the

> idea [which I first saw espoused by vd Meulen] that arrays

> could be thought of as embedded in [-infinity..+infinity],

> treated therefore always in practice as "sparse" arrays, with

> almost all elements being "virtual".

>

I am quite confident that the idea of starting array indexes from 0 had nothing to do with surreals. Surreal numbers are rather esoteric, and very far from useful in array indexing in programming (which always boils down to some kind of finite integer).

Dec 1, 2021, 6:13:29 AM

On 01/12/2021 08:43, David Brown wrote:

> On 01/12/2021 01:50, Andy Walker wrote:

>> On 30/11/2021 08:07, James Harris wrote:

>>> BTW, Bart, do you consider the first ten numbers as 1 to 10 rather

>>> than 0 to 9?

>>

>> Until quite recently, Bart and almost everyone else would

>> certainly have done exactly that.

>

> Remember Bart, and some others, think it is "natural" to count from

> 32767 on to -32767 (or larger type equivalents - 16-bit numbers are

> easier to write) in the context of programming.

Remember David thinks it's natural to count from 65535 on to 0.

I simply acknowledge that that is how most hardware works. Otherwise how do you explain that the upper limit of some value is (to ordinary people) the arbitrary figure of 32,767 or 65,535 instead of 99,999?

> Clearly they would not

> think that way when counting sheep. So why apply sheep counting to

> other aspects of programming? Personally I prefer to think you can't

> add 1 to 32767 (or larger type equivalents), which is of course almost

> equally silly in terms of sheep.

It might be silly, but you'd still be stuck if you had 33,000 sheep to count; what are you going to do?

> I am quite confident that the idea of starting array indexes from 0 had

> nothing to do with surreals.

> Surreal numbers are rather esoteric, and

> very far from useful in array indexing in programming (which always

> boils down to some kind of finite integer).

>

More to do with conflating them with offsets.

    a:=[:]

    a{infinity} := 100
    a{-infinity} := 200

    println a          # [Infinity:100, -Infinity:200]

Dec 1, 2021, 7:25:04 AM

On 01/12/2021 12:13, Bart wrote:

> On 01/12/2021 08:43, David Brown wrote:

>> On 01/12/2021 01:50, Andy Walker wrote:

>>> On 30/11/2021 08:07, James Harris wrote:

>>>> BTW, Bart, do you consider the first ten numbers as 1 to 10 rather

>>>> than 0 to 9?

>>>

>>> Until quite recently, Bart and almost everyone else would

>>> certainly have done exactly that.

>>

>> Remember Bart, and some others, think it is "natural" to count from

>> 32767 on to -32767 (or larger type equivalents - 16-bit numbers are

>> easier to write) in the context of programming.

>

> Remember David think's it's natural to count from 65535 onto 0.

No, I don't - as you would know if you read my posts.

> I simply acknowledge that that is how most hardware works. Otherwise how
> do you explain that the upper limit of some value is (to ordinary
> people) the arbitrary figure of 32,767 or 65,535 instead of 99,999?

You say the limit is 32767, or whatever - explaining it in terms of the hardware if you like. People can understand that perfectly well. Limits are quite natural in counting and measuring - wrapping is much rarer (though it does occur, such as with times and angles).

>

>

>> Clearly they would not

>> think that way when counting sheep. So why apply sheep counting to

>> other aspects of programming? Personally I prefer to think you can't

>> add 1 to 32767 (or larger type equivalents), which is of course almost

>> equally silly in terms of sheep.

>

> It might be silly, but you'd still be stuck if you had 33,000 sheep to

> count; what are you going to do?

>

It is perfectly reasonable to say that you are counting sheep by putting them in a pen, and if the pen only holds 20 sheep then you can't count beyond 20.

>

>

>> I am quite confident that the idea of starting array indexes from 0 had

>> nothing to do with surreals.

>

> More to do with conflating them with offsets.

For a low-level language, treating indexes as offsets is clear, obvious and efficient. (And again, I like having higher-level array handling where index types can be more flexible - such as integer subranges or enumeration types.)

>

>> Surreal numbers are rather esoteric, and

>> very far from useful in array indexing in programming (which always

>> boils down to some kind of finite integer).

>>

>

> a:=[:]

>

> a{infinity} := 100

> a{-infinity} := 200

>

> println a # [Infinity:100, -Infinity:200]

>

>

Associative arrays are a different thing from plain arrays (though some languages combine them). They are suitable (and very useful) in higher level languages, but should not be part of the core language for low-level languages. Libraries can then offer a range of different variations on the theme, letting programmers pick the version that fits their needs.

(Oh, and there is no such surreal as "infinity" - most surreals are non-finite. But that's really getting off-topic!)

Dec 1, 2021, 7:12:00 PM

On 01/12/2021 08:43, David Brown wrote:

> [I wrote:]
>> Zero, as a number, was invented

>> in modern times [FSVO "modern"!].

> (Historical note:

> It reached Europe around 1200, but had been around in India, amongst

> other countries, for a good while before that.

Yes, but that's nearly always zero as a placeholder, not as a number in its own right. [I'm not convinced by many of the claimed exceptions, which often smack of flag-waving.]

[...]

> The first Peano axiom is "0 is a natural number". They start counting

> at zero, not at one.

> There is no mathematical consensus as to whether the set of natural

> numbers ℕ starts with 0 or 1. But there is no doubt that the numbers

> generated by the Peano axioms start at 0.

When Peano first wrote his axioms, he started at 1. Later he wrote a version starting at 0. The foundational maths books on my shelves, even modern ones, are split; it really matters very little.

[...]

> Negative numbers long pre-date the general acceptance of 0 as a

> "number". They were used in accountancy, as well as by a few

> mathematicians. But there general use, especially in Europe, came a

> lot later.

My impression is that accountants used red ink rather than negative numbers. As late as the 1970s, hand/electric calculators still used red numerals rather than a minus sign.

>> /Now/ the influence of computing has made zero-based

>> indexing more relevant. So have constructive arguments more

> I am quite confident that the idea of starting array indexes from 0 had

> nothing to do with surreals. [...]
Surreal numbers were an example; they are part of the explanation for mathematics also tending to become zero-based.

--
Andy Walker, Nottingham.
Andy's music pages: www.cuboid.me.uk/andy/Music
Composer of the day: www.cuboid.me.uk/andy/Music/Composers/Ketterer

Dec 2, 2021, 2:37:33 AM

On 02/12/2021 01:11, Andy Walker wrote:

> On 01/12/2021 08:43, David Brown wrote:

>> [I wrote:]

>>> Zero, as a number, was invented

>>> in modern times [FSVO "modern"!].

>> (Historical note:

>> It reached Europe around 1200, but had been around in India, amongst

>> other countries, for a good while before that.

>

> Yes, but that's nearly always zero as a placeholder, not

> as a number in its own right. [I'm not convinced by many of the

> claimed exceptions, which often smack of flag-waving.]

>

Certainly zero as a placeholder was much more common. As a number - well, since there was not even a consensus as to what a "number" is until the more rigorous mathematics of the past few centuries, it is very difficult to tell. And of course we don't exactly have complete records of all mathematics in all cultures for the last few millennia. So there is definitely a place for interpretation, opinions and hypotheses in the history here, with no good way to judge the accuracy.

> [...]

>> The first Peano axiom is "0 is a natural number". They start counting

>> at zero, not at one.

>> There is no mathematical consensus as to whether the set of natural

>> numbers ℕ starts with 0 or 1. But there is no doubt that the numbers

>> generated by the Peano axioms start at 0.

>

> When Peano first wrote his axioms, he started at 1. Later

> he wrote a version starting at 0. The foundational maths books on

> my shelves, even modern ones, are split; it really matters very

> little.

Starting at 0 means you have an additive identity. I suppose you /could/ define addition with the starting point "a + 1 = succ(a)" rather than "a + 0 = a", but it is all much easier and neater when you start with 0. That is certainly how I learned it at university, and how I have seen it in a few other places - but while I think I have a couple of books covering them, they are buried in the attic somewhere.
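The identity point can be spelled out in the thread's own succ-notation (a textbook definition, not a quotation from anyone here): with 0 in the set, addition is defined by

    a + 0       = a                  -- 0 is the additive identity
    a + succ(b) = succ(a + b)

whereas a 1-based version must take a + 1 = succ(a) as its base case, and has no identity element among the numbers it defines.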

>

> [...]

>> Negative numbers long pre-date the general acceptance of 0 as a

>> "number". They were used in accountancy, as well as by a few

>> mathematicians. But there general use, especially in Europe, came a

>> lot later.

>

> My impression is that accountants used red ink rather than

> negative numbers. As late as the 1970s, hand/electric calculators

> still used red numerals rather than a minus sign.

>

That varied between times and cultures. "Red ink" is certainly a well-known phrase in modern English-speaking countries. But brackets, minus signs, and other methods are used. Go far enough back and people didn't write with ink at all.

But again, it is difficult to decide when something was considered "a negative number" rather than "a number to be subtracted rather than added".

>>> /Now/ the influence of computing has made zero-based

>>> indexing more relevant. So have constructive arguments more

>>> generally; eg, the surreal numbers [...].

>> I am quite confident that the idea of starting array indexes from 0 had

>> nothing to do with surreals. [...]

>

> Surreal numbers were an example; they are part of the

> explanation for mathematics also tending to become zero-based.

>

I am not convinced they are special for that purpose. Constructions of surreal numbers will normally start with 0 - but so will constructions of other more familiar types, such as integers, reals, ordinals, cardinals, and almost any other numbers. Maybe it is just that with surreals, few people ever have much idea of what they are, or get beyond reading how they are constructed!

(Some day I must get the book on them - it was Conway that developed them, and Knuth that wrote the book, right?)

Dec 2, 2021, 9:56:51 AM

On 01/12/2021 12:25, David Brown wrote:

> On 01/12/2021 12:13, Bart wrote:

>> On 01/12/2021 08:43, David Brown wrote:

>>> Remember Bart, and some others, think it is "natural" to count from

>>> 32767 on to -32767 (or larger type equivalents - 16-bit numbers are

>>> easier to write) in the context of programming.

>>

>> Remember David think's it's natural to count from 65535 onto 0.

>

> No, I don't - as you would know if you read my posts.

>

>>

>> I simply acknowledge that that is how most hardware works. Otherwise how

>> do you explain that the upper limit of some value is (to ordinary

>> people) the arbitrary figure of 32,767 or 65,535 instead of 99,999?

>>

>

> You say the limit is 32767, or whatever - explaining it in terms of the

> hardware if you like. People can understand that perfectly well.

> Limits are quite natural in counting and measuring - wrapping is much

> rarer (though it does occur, such as with times and angles).

Yes, exactly. You travel east but when you hit 180E, it suddenly turns into 180W, and the next degree along will be 179W not 181E.

The integer values represented by N bits can be thought of as being arranged in a circle, here shown for N=3 as either unsigned, two's complement or signed magnitude:

         u3         i3         s3
    000   0    000   0    000  +0     Origin
    001  +1    001  +1    001  +1
    010  +2    010  +2    010  +2
    011  +3    011  +3    011  +3
    100  +4    100  -4    100  -0
    101  +5    101  -3    101  -1
    110  +6    110  -2    110  -2
    111  +7    111  -1    111  -3
    000   0    000   0    000  +0     Origin
    001  +1    001  +1    001  +1
    ...

Degrees of longtitude, if they were whole numbers rather than

continuous, would correspond most closely with the middle column (but

there would be 179E then 180W; no 180E).

Whatever column is chosen, wrapping behaviour is well-defined, even if

it may not be meaningful if your prefered result would need 6 bits to

represent; you don't want just the bottom 3.
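The three columns above can be expressed as functions of the raw bit pattern; a small sketch (mine, not from the thread):

```python
def u3(bits):
    # unsigned: 0..7
    return bits & 0b111

def i3(bits):
    # two's complement: -4..+3
    b = bits & 0b111
    return b - 8 if b >= 4 else b

def s3(bits):
    # signed magnitude: -3..+3, with both +0 and -0 mapping to 0
    b = bits & 0b111
    mag = b & 0b011
    return -mag if b & 0b100 else mag

# Wrapping is well-defined in every column:
assert u3(0b111 + 1) == 0      # 7 + 1 wraps to 0
assert i3(0b011 + 1) == -4     # +3 + 1 wraps to -4
assert s3(0b101) == -1         # sign bit set, magnitude 1
```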

But if you're in an aircraft flying along the equator, travelling 10

degrees east then 10 degrees west would normally get to you back to the

same longitude, whatever the start point, even when you cross the 180th

meridian.

Dec 2, 2021, 12:29:52 PM12/2/21

to

On 30/11/2021 10:28, Bart wrote:

> On 30/11/2021 08:07, James Harris wrote:

>> From another thread, discussion between David and Bart:

...
>> B> That's a crude way of defining arrays. A[i] is simply the i'th

>> B> element of N slots, you don't need to bring offsets into it.

See below about cardinal and ordinal numbers.

>>

>> Why call it 'i'th? I know people do but wouldn't it be easier to call

>> it 'element n' where n is its index? Then that would work with any

>> basing.

>

> The most common base I use is 1 (about 2/3 of the time). You have a

> 3-element array, the 1st is numbered 1, the last is 3, the 3rd is 3 too.

> All very intuitive and user-friendly.

>

> But this is that 3-element array as 3 adjoining cells:

>

> mmmmmmmmmmmmmmmmmmmmmmmmm

> m m m m

> m 1 m 2 m 3 m Normal indexing

> m +0 m +1 m +2 m Offsets

> m m m m

> mmmmmmmmmmmmmmmmmmmmmmmmm

>

> 0 1 2 3 Distance from start point

>

>

> The numbering is 1, 2, 3 as I prefer when /counting/. Or you can choose

> to use offsets from the first element as C does, shown as +0, +1, +2.

>

> There is also /measuring/, which applies more when each cell has some

> physical dimension, such as 3 adjoining square pixels. Or maybe these

> are three fence panels, and the vertical columns are the posts.

The natural place to count an element - any element - is when it is

complete; however, where I think the conflict appears is that if an

element is known to be indivisible then it can never be partially

present so we know when we see the start of it that it is complete. That

'trick' works for whole elements but does not work in the general case.

To explain, consider a decimal number and take the units. You may see

them as

1, 2, 3, etc

but now take the tens position. In those numbers the tens position is

zero so they can be seen as (in normal notation, not in C-form octal!)

01, 02, 03, etc

Similarly, the number of hundreds in those numbers is also zero, i.e.

with three digits they are

001, 002, 003, etc

The tens and the hundreds are each subdivided (into smaller elements

1/10th of their value). We have to wait for the units to tick past 9

before we add 1 to the tens column, and for the tens column to tick past

9 before we add another hundred. So the mathematically natural indexing

for tens and hundreds and all higher digit positions is from zero.

It's more consistent, then, to number the units from zero, too, but we

often find it natural to count them from 1. Here's an idea as to why:

Perhaps we think of counting units from 1 because we normally count

/whole/ objects. We don't need to wait for them to complete; since they

are indivisible we know they are complete when we first see them.

But that's a special case. The more general case is to start from zero.
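The digit argument can be checked mechanically; a small sketch (mine, not from the thread) in which the digit positions themselves number from zero:

```python
def digit(n, pos):
    # pos 0 = units, pos 1 = tens, pos 2 = hundreds: positions count from zero
    return n // 10**pos % 10

# For 1, 2, 3 the tens and hundreds digits are still zero: 001, 002, 003.
assert [digit(3, p) for p in (0, 1, 2)] == [3, 0, 0]

# The tens digit only becomes 1 after the units have ticked past 9.
assert digit(9, 1) == 0
assert digit(10, 1) == 1
```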

>

> Here, offsets are again used, but notionally considered to be measured

> from the first 'post'.

>

> In this case, an 'index' of 2.4 is meaningful, being 2.4 units from the

> left, and 40% along that 3rd cell.

...

>> Similarly, on the day a child is born do you say that he is one year old?

>

>

> This is 'measurement'; see above. However my dad always liked to round

> his age up to the next whole year; most people round down! So the child

> would be 0 years, but in its first year.

>

> However there is not enough resolution using years to accurately measure

> ages of very young children, so people also use days, weeks and months.

Partials, again. A person doesn't become 1 year old until he reaches 12 months, for example.

Cardinals and ordinals

Going back to your point at the beginning, as above the ordinal of

something is naturally one more than its cardinal number. Our /first/

year is when we are age zero whole years. In the 20th century the

century portion of the date was 19. Etc.

So in a zero-based array it would be inconsistent to refer to

A[1]

as the first element even though lots of people do it. It is, in fact,

the second. It's probably easiest to refer to it as

element 1

then the bounds don't matter.

--

James Harris

Dec 2, 2021, 12:39:59 PM12/2/21

to

On 30/11/2021 09:18, Dmitry A. Kazakov wrote:

> On 2021-11-30 09:07, James Harris wrote:

>> From another thread, discussion between David and Bart:

...
>> B> That's a crude way of defining arrays. A[i] is simply the i'th

>> B> element of N slots, you don't need to bring offsets into it.

>>

>> Why call it 'i'th? I know people do but wouldn't it be easier to call

>> it 'element n' where n is its index? Then that would work with any

>> basing.

>

> You are confusing position with index. Index can be of any ordered type.

> Position is an ordinal number: first, second, third element from the

> array beginning.

...

>> Why not call A[2] element 2?

>

> Because it would be wrong. In most languages A[2] means the array

> element corresponding to the index 2.

so why is A[2] not "element 2", just as A["XX"] would be element "XX"?

>

> Remember, array is a mapping:

>

> array : index -> element

>

> In well-designed languages it is also spelt as a mapping:

>

> A(2)

This is something I want to come back to elsewhere but since you mention it I'm curious, Dmitry, as to whether you would accept such a mapping

returning the address of, as in

A(2) = A(2) + 1

--

James Harris

Dec 2, 2021, 1:05:17 PM12/2/21

to

On 2021-12-02 18:39, James Harris wrote:

> On 30/11/2021 09:18, Dmitry A. Kazakov wrote:

>> On 2021-11-30 09:07, James Harris wrote:

>>> Why not call A[2] element 2?

>>

>> Because it would be wrong. In most languages A[2] means the array

>> element corresponding to the index 2.

>

> "Element 2" doesn't mean "second element"

Again, A[2] is the element corresponding to the index 2. Not "element 2," not "second element," just an element denoted by the index value 2.

> so why is A[2] not "element 2"

Because "element 2" is undefined, so far.

> just as A["XX"] would be element "XX"?

Nope. It is the element corresponding to the index "XX".

>> Remember, array is a mapping:

>>

>> array : index -> element

>>

>> In well-designed languages it is also spelt as a mapping:

>>

>> A(2)

>

> This is something I want to come back to elsewhere but since you mention

> it I'm curious, Dmitry, as to whether you would accept such a mapping

> returning the address of, as in

>

> A(2) = A(2) + 1

It does not return an address. A(2) denotes the array element corresponding to the index 2 on both sides. No addresses are involved. A is a mapping, mutable

in this case. It does not return anything.

You as always confuse implementation with semantics. There is an

uncountable number of valid implementations of a mapping. The programmer

does not care most of the time, because he presumes the compiler vendors

are sane people, until proven otherwise.
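Dmitry's view - that A(2) denotes the same element on both sides of an assignment, with no addresses in the semantics and the implementation left to the compiler - can be sketched in Python terms (my illustration; the class name and the 1-based convention are my own choices, not anyone's actual language):

```python
class Mapping:
    """A mutable mapping from index to element; indices may start anywhere."""
    def __init__(self, lwb, elements):
        self.lwb = lwb
        self.elements = list(elements)

    def __getitem__(self, i):
        # reading: the element corresponding to index i
        return self.elements[i - self.lwb]

    def __setitem__(self, i, v):
        # writing: the same element, denoted the same way
        self.elements[i - self.lwb] = v

A = Mapping(1, [10, 20, 30])   # 1-based: A[1]..A[3]
A[2] = A[2] + 1                # A[2] denotes the same element on both sides
assert A[2] == 21
```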

Dec 2, 2021, 3:11:46 PM12/2/21

to

On 02/12/2021 17:29, James Harris wrote:

> On 30/11/2021 10:28, Bart wrote:

> To explain, consider a decimal number and take the units. You may see

> them as

>

> 1, 2, 3, etc

>

> but now take the tens position. In those numbers the tens position is

> zero so they can be seen as (in normal notation, not in C-form octal!)

>

> 01, 02, 03, etc

>

> Similarly, the number of hundreds in those numbers is also zero, i.e.

> with three digits they are

>

> 001, 002, 003, etc

>

> The tens and the hundreds are each subdivided (into smaller elements

> 1/10th of their value). We have to wait for the units to tick past 9

> before we add 1 to the tens column, and for the tens column to tick past

> 9 before we add another hundred. So the mathematically natural indexing

> for tens and hundreds and all higher digit positions more is from zero.

> It's more consistent, then, to number the units from zero, too, but we

> often find it natural to count them from 1.

They're not really numbered, they're counted, and the number of tens goes from 0 to 9 in total.

Most people do count from zero, in that the start point when you have

nothing is zero; the next is designated 1; the next 2, and so on. The

last in your collection is designated N, and you have N things in all.

Except the tens in your example are not ordered nor individually

numbered, you just need the total.

I guess if you had two cars in your household, you would agree there

were '2' cars and not '1' (which would confuse everyone, and would mean

that anyone without a car would have, what, -1 cars? It doesn't work!).

But if you had to number the cars, with a number on the roof, or on the

keytags, you can choose to number them 0 and 1, or 1 and 2, or 5000 and

5001, if you needed a sequential order.

The tens digit in that column, however, must correspond to the

number of cars in a household, and not to the highest value of whatever

numbering scheme you favour.

>> This is 'measurement'; see above. However my dad always liked to round

>> his age up to the next whole year; most people round down! So the

>> child would be 0 years, but in its first year.

>>

>> However there is not enough resolution using years to accurately

>> measure ages of very young children, so people also use days, weeks

>> and months.

>

> Partials, again. A person doesn't become 1 year old until he reaches 12

> months, for example.

That's 'measurement' again - age is a physical measurement.

If you go back to my fence and fenceposts example, you have N panels and

N+1 posts for a straight fence.

If you number the /posts/ from 0 to N, then the number gives you the

physical distance from the start (in fence panel units).

You wouldn't number the panels to get that information, because it would

be inaccurate; the panels are too wide.
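The fence-post arithmetic can be checked mechanically; a small sketch (mine, not from the thread), reusing James's earlier example of an 'index' of 2.4:

```python
N = 3                                # panels, i.e. array elements
posts = list(range(N + 1))           # N+1 posts, numbered 0..N
assert posts == [0, 1, 2, 3]         # a post's number is its distance in panel units

# An 'index' of 2.4 is 2.4 units from the left, 40% along the 3rd panel.
panel, fraction = divmod(2.4, 1)
assert int(panel) + 1 == 3           # it falls in the 3rd panel (1-based)
assert round(fraction, 2) == 0.4
```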

The panels however do correspond to the elements of an array. This is

where I'd number them from 1 (since there is no reason to use 0 or

anything else); you'd probably use 0 for misguided reasons (perhaps too

much time spent coding in C or Python).

> Going back to your point at the beginning, as above the ordinal of

> something is naturally one more than its cardinal number. Our /first/

> year is when we are age zero whole years.

It's only zero whole years when it has floor() applied to round it down.

> In the 20th century the

> century portion of the date was 19. Etc.

>

> A[1]

>

> as the first element even though lots of people do it. It is, in fact,

> the second. It's probably easiest to refer to it as

>

> element 1

>

> then the bounds don't matter.

Very often you do need to refer to the first or the last. In a strictly 1-based scheme, they would be A[1] and A[N]; 0-based is A[0] and A[N-1].

X-based (since N is the length) gets ugly, eg. A.[A.lwb] and A.[A.upb]

or A[$].

However it looks you're itching to start your arrays from 0; then just

do so. You don't need an excuse.

I happen to think that 1-based is better:

* It's more intuitive and easier to understand

* It corresponds to how most discrete things are numbered in real life

* If there are N elements, the first is 1, and the last N; there is no

disconnect as there is with 0-based

* It plays well with the rest of a language, so for-loops can go from

1 to N instead of 0 to N-1.

* In N-way select (n | a,b,c |z), then n=1/2/3 selects 1st/2nd/3rd

* If you have a list indexed 1..N, then a search function can return

1..N for success, and 0 for failure. How would it work for 0-based

since 0 could be a valid return value?

* Such a return code will also be True in conditional (if x in A then...)

But despite the advantages, I still use 0-based too; it's just not the

primary choice.
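Bart's search-function bullet can be illustrated (my sketch, not code from the thread): with 1-based indexing, 0 is free to mean "not found" and doubles as false, while a 0-based API must pick a sentinel outside the index range.

```python
def find1(items, x):
    """1-based search: returns 1..N on success, 0 on failure."""
    for i, item in enumerate(items, start=1):
        if item == x:
            return i
    return 0

xs = ['a', 'b', 'c']
assert find1(xs, 'b') == 2        # the 2nd element
assert not find1(xs, 'z')         # 0 doubles as 'false' in a condition
# 0-based Python APIs need other conventions: str.find returns -1 on
# failure, while list.index raises an exception instead.
assert 'abc'.find('z') == -1
```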

Dec 2, 2021, 3:31:31 PM12/2/21

to

On 02/12/2021 18:05, Dmitry A. Kazakov wrote:

> On 2021-12-02 18:39, James Harris wrote:

>> On 30/11/2021 09:18, Dmitry A. Kazakov wrote:

>>> On 2021-11-30 09:07, James Harris wrote:

>

>>>> Why not call A[2] element 2?

>>>

>>> Because it would be wrong. In most languages A[2] means the array

>>> element corresponding to the index 2.

>>

>> "Element 2" doesn't mean "second element"

>

> Again, A[2] is the element corresponding to the index 2. Not "element

> 2," not "second element," just an element denoted by the index value 2.

>

>> so why is A[2] not "element 2"

>

> Because "element 2" is undefined, so far.

>

>> just as A["XX"] would be element "XX"?

>

> Nope. It is the element corresponding to the index "XX".

Just as the house corresponding with the number 48a is commonly called house 48a, then. I am suggesting referring to elements of arrays by

their labels rather than by their positions.

>

>>> Remember, array is a mapping:

>>>

>>> array : index -> element

>>>

>>> In well-designed languages it is also spelt as a mapping:

>>>

>>> A(2)

>>

>> This is something I want to come back to elsewhere but since you

>> mention it I'm curious, Dmitry, as to whether you would accept such a

>> mapping returning the address of, as in

>>

>> A(2) = A(2) + 1

>

> It does not return address. A(2) denotes the array element corresponding

> to the index 2 on both sides. No any addresses. A is a mapping, mutable

> in this case. It does not return anything.

>

> You as always confuse implementation with semantics. There is an

> uncountable number of valid implementations of a mapping. The programmer

> does not care most of the time, because he presumes the compiler vendors

> are sane people, until proven otherwise.

I'm not confusing implementation with semantics. In this case I certainly was referring at least to a reference or ideal implementation from which information (and other potential implementations with the same semantics) can be inferred.

But to the point, are you comfortable with the idea of the A(2) in

x = A(2) + 0

meaning the same mapping result as the A(2) in

A(2) = 0

?

--

James Harris

Dec 2, 2021, 3:49:55 PM12/2/21

to

Yes, in both cases the result is the array element corresponding to the index 2. That is the semantics of A(2).

Dec 2, 2021, 4:25:45 PM12/2/21

to

On 02/12/2021 20:11, Bart wrote:

> On 02/12/2021 17:29, James Harris wrote:

>> On 30/11/2021 10:28, Bart wrote:

>

>

>> To explain, consider a decimal number and take the units. You may see

>> them as

>>

>> 1, 2, 3, etc

>>

>> but now take the tens position. In those numbers the tens position is

>> zero so they can be seen as (in normal notation, not in C-form octal!)

>>

>> 01, 02, 03, etc

>>

>> Similarly, the number of hundreds in those numbers is also zero, i.e.

>> with three digits they are

>>

>> 001, 002, 003, etc

>>

>> The tens and the hundreds are each subdivided (into smaller elements

>> 1/10th of their value). We have to wait for the units to tick past 9

>> before we add 1 to the tens column, and for the tens column to tick

>> past 9 before we add another hundred. So the mathematically natural

>> indexing for tens and hundreds and all higher digit positions more is

>> from zero. It's more consistent, then, to number the units from zero,

>> too, but we often find it natural to count them from 1.

>

> They're not really numbered, they're counted, and the number of tens go

> from 0 to 9 in total.

So (whatever you prefer to call it) do you agree that the number line has the tens, hundreds, and above starting at zero and increasing to 9? If so why not apply that to the units digit, too, and say the natural first number is zero?

...

> I guess if you had two cars in your household, you would agree there

> were '2' cars and not '1' (which would confuse everyone, and would mean

> that anyone without a car would have, what, -1 cars? If doesn't work!).

>

> But if you had to number the cars, with a number on the roof, or on the

> keytags, you can choose to number them 0 and 1, or 1 and 2, or 5000 and

> 5001, if you needed a sequential order.

As I said, whole units do not have partial, incomplete phases, and the cars are whole units.

But if you were putting petrol in one of the cars would you count

yourself as having received a tankful when the first drop went in? No,

where elements are partial we don't count the whole until it is complete.

Similarly, if you sold one of the cars to a friend who was to pay you

£100 a month for it would you count yourself as having received the

payment after the first month? No, this is also partial so you'd count

it at the end.

Ergo it's only for indivisible units that 1-based can possibly be seen

as natural. It's more general, though, to begin counting from zero -

even if it is less familiar.

...

> The panels however do correspond to the elements of an array. This is

> where I'd number them from 1 (since there is no reason to use 0 or

> anything else); you'd probably use 0 for misguided reasons (perhaps too

> much time spent coding in C or Python).

No, I use 0 because it scales better. BTW, it sounds like the posts are also an array.

...

>> In the 20th century the century portion of the date was 19. Etc.

>

> Yeah, that confuses a lot of people, but not us, right?

But do you see the point of it? The first century /naturally/ had century number zero, not one, and the N'th century has century number

N - 1

IOW the numbering begins at zero.

That's not a convention, by the way, but how all numbering works: things

with partial phases begin at zero.

>

>>

>> A[1]

>>

>> as the first element even though lots of people do it. It is, in fact,

>> the second. It's probably easiest to refer to it as

>>

>> element 1

>>

>> then the bounds don't matter.

>

> Very often you do need to refer to the first or the last. In a strictly

> 1-based scheme, they would be A[1] and A[N]; 0-based is A[0] and A[N-1].

>

> X-based (since N is the length) gets ugly, eg. A.[A.lwb] and A.[A.upb]

> or A[$].

>

> However it looks you're itching to start your arrays from 0; then just

> do so. You don't need an excuse.

Not really; I wanted to test your position and see where the argument led me.

>

> I happen to think that 1-based is better:

>

> * It's more intuitive and easier to understand

It's easier on indivisible elements. That's fine if you only have a single, simple array. But if you have arrays being processed in nested

loops then it might be best if you didn't count the outer one as

complete until the first set of iterations of the inner one have

finished. That's why I asked you before if you start numbering your

three-digit numbers at 111...!

>

> * It corresponds to how most discrete things are numbered in real life

>

> * If there are N elements, the first is 1, and the last N; there is no

> dis-connect are there is with 0-based

>

> * It plays well with the rest of a language, so for-loops can go from

> 1 to N instead of 0 to N-1.

>

> * In N-way select (n | a,b,c |z), then n=1/2/3 selects 1st/2nd/3rd

With 0-based numbering an N-way select could instead be written

  n of (a, b)

If n is zero it will pick a; if one, b. If you treat integers as

booleans (as you do below) then it doubles as a boolean test in the

order false, true - the opposite of C's ?: operator.

>

> * If you have a list indexed 1..N, then a search function can return

> 1..N for success, and 0 for failure. How would it work for 0-based

> since 0 could be a valid return value?

That's a bit like treating the result of strcmp as a boolean when it is typically a signum or a difference.

As for the alternative, some options: -1, N, exception, designated

default value, boolean instead of index.

>

> * Such a return code will also be True in conditional (if x in A then...)

>

> But despite the advantages, I still use 0-based too; it's just not the

> primary choice.

Sure. For discrete units either will do - and if our programming is mainly in discrete units then we can become accustomed to thinking

1-based. Yet that begins to run out of steam when processing hierarchies.

--

James Harris

Dec 2, 2021, 4:42:33 PM12/2/21

to

On 02/12/2021 20:49, Dmitry A. Kazakov wrote:

> On 2021-12-02 21:31, James Harris wrote:

...
>> But to the point, are you comfortable with the idea of the A(2) in

>>

>> x = A(2) + 0

>>

>> meaning the same mapping result as the A(2) in

>>

>> A(2) = 0

>>

>> ?

>

> Yes, in both cases the result is the array element corresponding to the

> index 2. That is the semantics of A(2).

But if A were instead a user-defined function whose body ended with

  return v

then what would you want those A(2)s to mean and should they still mean

the same as each other? The latter expression would look strange to many.

I've been meaning to reply to Charles about the same issue but what you

said reminded me of it.

--

James Harris

Dec 2, 2021, 5:38:50 PM12/2/21

to

On 02/12/2021 21:25, James Harris wrote:

> On 02/12/2021 20:11, Bart wrote:

> As I said, whole units do not have partial, incomplete phases, and the

> cars are whole units.

>

> But if you were putting petrol in one of the cars would you count

> yourself as having received a tankful when the first drop went in? No,

> where elements are partial we don't count the whole until it is complete.

>

> Similarly, if you sold one of the cars to a friend who was to pay you

> £100 a month for it would you count yourself as having received the

> payment after the first month? No, this is also partial so you'd count

> it at the end.

>

> Ergo it's only for indivisible units that 1-based can possibly be seen

> as natural. It's more general, though, to begin counting from zero -

> even if it is less familiar.

Continuous measurements need to start from 0.0.
Discrete entities are counted, starting at 0 for none, then 1 for 1 (see

Xs below).

Some are in-between, where continuous quantities are represented as lots

of small steps. (Example: money in steps of £0.01, or time measured in

whole seconds.)

> ...

>

>> The panels however do correspond to the elements of an array. This is

>> where I'd number them from 1 (since there is no reason to use 0 or

>> anything else); you'd probably use 0 for misguided reasons (perhaps

>> too much time spent coding in C or Python).

>

> No, I use 0 because it scales better. BTW, it sounds like the posts are

> also an array.

The posts themselves don't store data. If you were to draw a diagram of bits or bytes or array elements in memory, they would be the lines that separate those elements.

> But do you see the point of it? The first century /naturally/ had

> century number zero, not one, and the N'th century has century number

>

> N - 1

>

> IOW the numbering begins at zero.

For me it means assigning sequential integers to a series of entities.

But you need an entity to hang a number from. With no entities, where

are you going to stick that zero?

>>

>> I happen to think that 1-based is better:

>>

>> * It's more intuitive and easier to understand

>

> It's easier on indivisible elements. That's fine if you only have a

> single, simple array. But if you have arrays being processed in nested

> loops then it might be best if you didn't count the outer one as

> complete until the first set of iterations of the inner one have

> finished. That's why I asked you before if you start numbering your

> three-digit numbers at 111...!

A three-digit number

  abc

has the value a*10^2 + b*10^1 + c*10^0.

The value of each of a,b,c is in the range 0..9 inclusive. That's just

how decimal notation works. Each digit represents a count, as I said.

I'm not sure what you're trying to argue here; that because 0 is used to

mean nothing, then that must be the start point for everything?

Here are some sets of Xs increasing in size:

            How many X's?   Numbered as?   Number of the Last?

   -              0              -                 -
   X              1              1                 1
   X X            2              1 2               2
   X X X          3              1 2 3             3

How would /you/ fill in those columns? I'd guess my '1 2 3' becomes '0 1

2', and that that last '3' becomes '2'.

But what about the first '3' on that last line; don't tell me it becomes

'2'! (Because then what happens to the '0'?)

Using your scheme (as I assume it will be), there is too much disconnect: a '0' in the first row, and two 0s in the second; a '1' in the second, and

two 1s in the third. Everything is out of step!

> Yes, you are talking about discrete units which are not made of parts.

>> But despite the advantages, I still use 0-based too; it's just not the

>> primary choice.

>

> Sure. For discrete units either will do - and if our programming is

> mainly in discrete units then we can become accustomed to thinking

> 1-based. Yet that begins to run out of steam when processing hierarchies.

In my language an array declaration looks like this:

  [A..B]T X # or [A:N] where N is the length (B=A+N-1)

So, an array X of T, indexed from A to B inclusive. Here, whether A is

0, 1 or anything else doesn't come into it.

I just need to be aware of it so that I don't assume a specific lower

bound. (But usually I will know when A is 1 so I can take advantage.)
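Bart's bound-agnostic style can be mimicked in Python terms; a sketch (mine - the names lwb/upb follow Bart's notation, and Python lists are 0-based underneath, so the bounds are simulated):

```python
class Arr:
    """An array with an arbitrary lower bound; upb = lwb + N - 1."""
    def __init__(self, lwb, elements):
        self.lwb, self.data = lwb, list(elements)

    @property
    def upb(self):
        return self.lwb + len(self.data) - 1

    def __getitem__(self, i):
        return self.data[i - self.lwb]

X = Arr(5000, ['car A', 'car B'])     # Bart's keytag numbering from 5000
# Bound-agnostic iteration: no assumption that lwb is 0, 1 or anything else.
assert [X[i] for i in range(X.lwb, X.upb + 1)] == ['car A', 'car B']
assert X.upb == 5001
```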

Dec 2, 2021, 7:08:28 PM12/2/21

to

On 02/12/2021 22:25, James Harris wrote:

> On 02/12/2021 20:11, Bart wrote:

>> On 02/12/2021 17:29, James Harris wrote:

>>> In the 20th century the century portion of the date was 19. Etc.

>>

>> Yeah, that confuses a lot of people, but not us, right?

>

> But do you see the point of it? The first century /naturally/ had

> century number zero, not one, and the N'th century has century number

>

> N - 1

>

> IOW the numbering begins at zero.

>

> That's not a convention, by the way, but how all numbering works: things

> with partial phases begin at zero.

>

Note, however, that the first century began with year 1 AD (or 1 CE, if you prefer). The preceding year was 1 BC. There was no year 0. This means the first century was the years 1 to 100 inclusive.

It really annoyed me that everyone wanted to celebrate the new

millennium on 01.01.2000, when in fact it did not begin until 01.01.2001.
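David's arithmetic can be captured in a one-line function (my sketch, not from the thread): with no year 0, the Nth century runs from year 100(N-1)+1 to 100N.

```python
def century(year):
    # Years 1..100 are the 1st century, 1901..2000 the 20th, etc.
    return (year + 99) // 100

assert century(1) == 1
assert century(100) == 1
assert century(2000) == 20      # 31.12.2000 was still the 20th century
assert century(2001) == 21      # the new millennium began 01.01.2001
```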

It would have been so much simpler, and fitted people's expectations

better, if years had been numbered from 0 onwards instead of starting

counting at 1.

Dec 2, 2021, 8:42:26 PM12/2/21

to

On 03/12/2021 00:08, David Brown wrote:

> On 02/12/2021 22:25, James Harris wrote:

>> On 02/12/2021 20:11, Bart wrote:

>>> On 02/12/2021 17:29, James Harris wrote:

>

>>>> In the 20th century the century portion of the date was 19. Etc.

>>>

>>> Yeah, that confuses a lot of people, but not us, right?

>>

>> But do you see the point of it? The first century /naturally/ had

>> century number zero, not one, and the N'th century has century number

>>

>> N - 1

>>

>> IOW the numbering begins at zero.

>>

>> That's not a convention, by the way, but how all numbering works: things

>> with partial phases begin at zero.

>>

> Note, however, that the first century began with year 1 AD (or 1 CE, if

> you prefer). The preceding year was 1 BC. There was no year 0. This

> means the first century was the years 1 to 100 inclusive.

So -1 was followed by +1?
> It really annoyed me that everyone wanted to celebrate the new

> millennium on 01.01.2000, when in fact it did not begin until 01.01.2001.

> It would have been so much simpler, and fitted people's expectations

> better, if years have been numbered from 0 onwards instead of starting

> counting at 1.

There is always the year before 1 AD, which can be an honorary year 0.

It must have been a big deal at 24:00 on 31-12-999 when a new century began, from the 10th to the 11th, and the century number changed not only from 9 to 10, but from 1 digit to 2 digits.

Then probably some spoilsport came along and said it didn't count, they

were still in the same century really, despite that '10' in the year,

and they'd have to wait until midnight on 31-12-1000.

Dec 3, 2021, 2:31:44 AM

On 03/12/2021 02:42, Bart wrote:

> On 03/12/2021 00:08, David Brown wrote:

>> On 02/12/2021 22:25, James Harris wrote:

>>> On 02/12/2021 20:11, Bart wrote:

>>>> On 02/12/2021 17:29, James Harris wrote:

>>

>>>>> In the 20th century the century portion of the date was 19. Etc.

>>>>

>>>> Yeah, that confuses a lot of people, but not us, right?

>>>

>>> But do you see the point of it? The first century /naturally/ had

>>> century number zero, not one, and the N'th century has century number

>>>

>>> N - 1

>>>

>>> IOW the numbering begins at zero.

>>>

>>> That's not a convention, by the way, but how all numbering works: things

>>> with partial phases begin at zero.

>>>

>> Note, however, that the first century began with year 1 AD (or 1 CE, if

>> you prefer). The preceding year was 1 BC. There was no year 0. This

>> means the first century was the years 1 to 100 inclusive.

>

> So -1 was followed by +1?

Yes. Although of course the idea of AD and BC numbering was developed
long afterwards. The people living in 1 BC didn't know their year was

called 1 BC :-)

>

>

>> It really annoyed me that everyone wanted to celebrate the new

>> millennium on 01.01.2000, when in fact it did not begin until 01.01.2001.

>

>> It would have been so much simpler, and fitted people's expectations

>> better, if years have been numbered from 0 onwards instead of starting

>> counting at 1.

>

> I'm sure we can all pretend that the start point was the year before 1

> AD, which can be an honorary year 0.

difference, since many BC dates are only known approximately anyway.

>

> It must have been a big deal on 24:00 on 31-12-999 when not only a new

> century began, from the 10th to the 11th, and the century year changed

> not only from 9 to 10, but from 1 digit to 2 digits.

>

extra digit.

And at that time, day was from the first hour starting about dawn - what

we call 06:00 - until the twelfth hour about sunset - what we call

18:00. The length of hours in the day and the night depended on the

time of year. They were really only tracked by monasteries, where they

had their obsession about prayers and masses at different times. For

example, they needed to know when the ninth hour was (about 15:00 modern

timing) for their "noon" prayers.

It's easy to assume that people saw the change to year 1000 (or 1001) as

a big thing or perhaps the time for the "second coming" or apocalypse,

but from the records we have, it does not seem to be the case. (I'm

talking about the UK and Europe here - folks like the Mayans and Chinese

always loved a really big party at calendar rollovers.) We have banking

records of people taking out 10 year loans in 998, for example, without

any indication that it was unusual.

> Then probably some spoilsport came along and said it didn't count, they

> were still in the same century really, despite that '10' in the year,

> and they'd have to wait until midnight on 31-12-1000.

>

Dec 3, 2021, 2:41:26 AM

On 2021-12-02 22:42, James Harris wrote:

> On 02/12/2021 20:49, Dmitry A. Kazakov wrote:

>> On 2021-12-02 21:31, James Harris wrote:

>

> ...

>

>>> But to the point, are you comfortable with the idea of the A(2) in

>>>

>>> x = A(2) + 0

>>>

>>> meaning the same mapping result as the A(2) in

>>>

>>> A(2) = 0

>>>

>>> ?

>>

>> Yes, in both cases the result is the array element corresponding to

>> the index 2. That is the semantics of A(2).

>

> Cool. If A were, instead, a function that, say, ended with

>

> return v

PL/1 had those, if I correctly remember. But no, it is not a function.

If you want to go for fully abstract array types, it is a procedure (and

a method):

procedure Setter (A : in out Array; I : Index; E : Element)

So

A(2) = 0

must compile into

Setter (A, 2, 0) or A.Setter (2, 0)

whatever notation you prefer.

The other one is a Getter:

function Getter (A : Array; I : Index) return Element;

> then what would you want those A(2)s to mean and should they still mean

> the same as each other?

In a language that does not have abstract arrays a programmer might

implement that using helper types decomposing it in the [wrong way] you

suggested. That would involve all sorts of helper types having

referential semantics, smart pointers etc. Unfortunately such

abstractions leak producing quite a mess of unreadable error messages.

You asked about methods and free functions. One of the leakage points is

that these helper types are unrelated and the operations on them might

get not fully visible in some context etc. This is why it is better to

provide abstract arrays on the language level.

Dec 3, 2021, 4:08:52 AM

On 02/12/2021 22:42, James Harris wrote:

> On 02/12/2021 20:49, Dmitry A. Kazakov wrote:

>> On 2021-12-02 21:31, James Harris wrote:

>

> ...

>

>>> But to the point, are you comfortable with the idea of the A(2) in

>>>

>>> x = A(2) + 0

>>>

>>> meaning the same mapping result as the A(2) in

>>>

>>> A(2) = 0

>>>

>>> ?

>>

>> Yes, in both cases the result is the array element corresponding to

>> the index 2. That is the semantics of A(2).

>

> Cool. If A were, instead, a function that, say, ended with

>

> return v

>

> then what would you want those A(2)s to mean and should they still mean

> the same as each other? The latter expression would look strange to many.

>

Do you mean like returning a reference in C++ style?

int a[10];

void foo1(int i, int x) {
    a[i] = x;
}

int& A(int i) {
    return a[i];
}

void foo2(int i, int x) {
    A(i) = x;
}

foo1 and foo2 do the same thing, and have the same code. Of course,

foo2 could add range checking, or offsets (for 1-based array), or have

multiple parameters for multi-dimensional arrays, etc. And in practice

you'd make such functions methods of a class so that the class owns the

data, rather than having a single global source of the data.

Dec 3, 2021, 7:24:50 AM

On 02/12/2021 07:37, David Brown wrote:

>>> [...] But there is no doubt that the numbers

>>> generated by the Peano axioms start at 0.

>> When Peano first wrote his axioms, he started at 1. Later

>> he wrote a version starting at 0. The foundational maths books on

>> my shelves, even modern ones, are split; it really matters very

>> little.

> It matters a lot once you get into the arithmetic - 0 is the additive

> identity.

"Additive identity" is meaningless before you have defined

addition; and once you have got to anywhere interesting, "0" is

defined anyway. It's really not important, except when you get to

rationals [when 1-based is better, as you don't have to make a

special case for when the denominator is zero].

> I suppose you /could/ define addition with the starting point

> "a + 1 = succ(a)" rather than "a + 0 = a", but it is all much easier and

> neater when you start with 0.

I don't see "a + 1 == a'" as interestingly harder than

"a + 0 = a". In some ways it's easier; if we abbreviate [eg]

3' [or succ(3)] as 4, in the usual way, then 1-based has [eg]

3+4 = (3+3)' = ((3+2)')' = (((3+1)')')' = 4''' = 5'' = 6' = 7,

whereas 0-based has

3+4 = (3+3)' = ((3+2)')' = (((3+1)')')' = ((((3+0)')')')' =

3'''' = 4''' = 5'' = 6' = 7,

and you have two extra steps with every addition.

> That is certainly how I learned it at

> university, and how I have seen it a few other places - but while I

> think I have a couple of books covering them, they are buried in the

> attic somewhere.

Fine; as I said, books vary, and it's at the whim of

the lecturer [if any -- it's commonly not taught at all, except

at the level of "if you really want to know how all this stuff

gets defined, look at (some book -- Landau in my case)"].

[...]

>>> I am quite confident that the idea of starting array indexes from 0 had

>>> nothing to do with surreals.

>> Surreal numbers were an example; they are part of the

>> explanation for mathematics also tending to become zero-based.

> Really? Again, I would suggest that they are far too esoteric for the

> purpose.

Again, I would repeat that they were an /example/ of the

way that /mathematics/, predominantly 1-based, has /tended/ to

become 0-based. That's not a /purpose/; it just so happens that

some relatively recent maths has found uses where 0-based seems

more natural than 1-based. There are still plenty where 1-based

remains more usual/natural.

> Constructions of surreal numbers will normally start with 0 -

> but so will constructions of other more familiar types, such as

> integers, reals, ordinals, cardinals, and almost any other numbers.

You're assuming the answer! As above, you can equally

get to integers [and so rationals and reals] from 1.

> Maybe it is just that with surreals, few people ever have much idea of

> what they are, or get beyond reading how they are constructed! (Some

> day I must get the book on them - it was Conway that developed them, and

> Knuth that wrote the book, right?)

Knuth wrote /a/ book on them; /the/ book is Conway's "On

Numbers and Games", but a more accessible version is "Winning Ways"

by Berlekamp, Conway and Guy [all three of whom, sadly, died within

a year and two days in 2019-20]; expensive to buy, but there is a

PDF freely available online. What most people don't realise is the

motivation: Conway couldn't see /why/ the step from rationals to

reals is so difficult. We define naturals eg by Peano, then get

by equivalence classes to integers and rationals, and then ...?

The usual constructions of reals seem so artificial, and not at

all related to what happens earlier. So Conway wondered what

would happen if we went the other way -- start from the concept

of a Dedekind section, forget that it relies on knowing about

the rationals, and just build on what we know. Thus we get the

idea of partitioning whatever numbers we know into two sets.

That is how we build the surreals, without exceptions or special

cases. Oh, we also get [combinatorial] games as a side-effect;

which is where it gets interesting to people like me, and to CS

more generally, and why it's not as esoteric as people think.

Composer of the day: www.cuboid.me.uk/andy/Music/Composers/Boccherini

Dec 3, 2021, 9:38:14 AM

On 03/12/2021 13:24, Andy Walker wrote:

> On 02/12/2021 07:37, David Brown wrote:

>>>> [...] But there is no doubt that the numbers

>>>> generated by the Peano axioms start at 0.

>>> When Peano first wrote his axioms, he started at 1. Later

>>> he wrote a version starting at 0. The foundational maths books on

>>> my shelves, even modern ones, are split; it really matters very

>>> little.

>> It matters a lot once you get into the arithmetic - 0 is the additive

>> identity.

>

> "Additive identity" is meaningless before you have defined

> addition; and once you have got to anywhere interesting, "0" is

> defined anyway. It's really not important, except when you get to

> rationals [when 1-based is better, as you don't have to make a

> special case for when the denominator is zero].

>

Rationals are easier when you have 0 :

ℚ = { p / q : p, q ∈ ℤ, q > 0 }

vs.

ℚ = { p / q : p, q ∈ ℕ⁺ } ∪ { 0 } ∪ { -p / q : p, q ∈ ℕ⁺ }

>> I suppose you /could/ define addition with the starting point

>> "a + 1 = succ(a)" rather than "a + 0 = a", but it is all much easier and

>> neater when you start with 0.

>

> I don't see "a + 1 == a'" as interestingly harder than

> "a + 0 = a". In some ways it's easier; if we abbreviate [eg]

> 3' [or succ(3)] as 4, in the usual way, then 1-based has [eg]

>

> 3+4 = (3+3)' = ((3+2)')' = (((3+1)')')' = 4''' = 5'' = 6' = 7,

>

> whereas 0-based has

>

> 3+4 = (3+3)' = ((3+2)')' = (((3+1)')')' = ((((3+0)')')')' =

> 3'''' = 4''' = 5'' = 6' = 7,

>

> and you have two extra steps with every addition.

efficient way of adding up! But calling the additive identity "0" or

"1" /does/ matter, because one choice makes sense and the other choice

is pointlessly confusing. (The definition of addition does not actually

require an identity.)

I can happily agree that you /can/ define Peano numbers starting with 1,

and I am sure some people think that is better. But personally I think

there are good reasons why 0 is the more common starting point (as far

as I could see from a statistically invalid and non-scientific google

search) - the big hint comes from Peano himself who started with 1, then

changed his mind and started with 0.

>

>> That is certainly how I learned it at

>> university, and how I have seen it a few other places - but while I

>> think I have a couple of books covering them, they are buried in the

>> attic somewhere.

>

> Fine; as I said, books vary, and it's at the whim of

> the lecturer [if any -- it's commonly not taught at all, except

> at the level of "if you really want to know how all this stuff

> gets defined, look at (some book -- Landau in my case)"].

constructed them from 0 in a Haskell-like functional programming

language as a practical exercise.

>

> [...]

>>>> I am quite confident that the idea of starting array indexes from 0 had

>>>> nothing to do with surreals.

>>> Surreal numbers were an example; they are part of the

>>> explanation for mathematics also tending to become zero-based.

>> Really? Again, I would suggest that they are far too esoteric for the

>> purpose.

>

> Again, I would repeat that they were an /example/ of the

> way that /mathematics/, predominantly 1-based, has /tended/ to

> become 0-based. That's not a /purpose/; it just so happens that

> some relatively recent maths has found uses where 0-based seems

> more natural than 1-based. There are still plenty where 1-based

> remains more usual/natural.

(It's not that I don't find surreals interesting, it's just that most

people have probably never heard of them. Mind you, this thread has

given me some information about them that I didn't know, so thanks for

that anyway!)

>

>> Constructions of surreal numbers will normally start with 0 -

>> but so will constructions of other more familiar types, such as

>> integers, reals, ordinals, cardinals, and almost any other numbers.

>

> You're assuming the answer! As above, you can equally

> get to integers [and so rationals and reals] from 1.

>

In my experience (as someone

with a degree in mathematics and theoretical computing, but not having

worked in a mathematical profession) it is usually simpler and easier,

and more common, to start from 0.

>> Maybe it is just that with surreals, few people ever have much idea of

>> what they are, or get beyond reading how they are constructed! (Some

>> day I must get the book on them - it was Conway that developed them, and

>> Knuth that wrote the book, right?)

>

> Knuth wrote /a/ book on them; /the/ book is Conway's "On

> Numbers and Games", but a more accessible version is "Winning Ways"

> by Berlekamp, Conway and Guy [all three of whom, sadly, died within

> a year and two days in 2019-20]; expensive to buy, but there is a

> PDF freely available online.

(It was sad that the inventor of "life" died of Covid.)

> What most people don't realise is the

> motivation: Conway couldn't see /why/ the step from rationals to

> reals is so difficult. We define naturals eg by Peano, then get

> by equivalence classes to integers and rationals, and then ...?

> The usual constructions of reals seem so artificial, and not at

> all related to what happens earlier. So Conway wondered what

> would happen if we went the other way -- start from the concept

> of a Dedekind section, forget that it relies on knowing about

> the rationals, and just build on what we know. Thus we get the

> idea of partitioning whatever numbers we know into two sets.

> That is how we build the surreals, without exceptions or special

> cases.

I had thought of surreals as trying to find gaps

and endpoints in the reals and filling them in (though I know the

construction doesn't do that).

I wonder if the lengths in TeX, which have 3 (IIRC) layers of infinities

and infinitesimals, were invented as a kind of computer approximation to

surreals?

> Oh, we also get [combinatorial] games as a side-effect;

> which is where it gets interesting to people like me, and to CS

> more generally, and why it's not as esoteric as people think.

>

All sorts of maths that is esoteric to most people has some odd

application or two. That is often what makes it interesting.

Dec 3, 2021, 11:26:42 AM

On 03/12/2021 14:38, David Brown wrote:

> Rationals are easier when you have 0 :

> ℚ = { p / q : p, q ∈ ℤ, q > 0 }

> vs.

> ℚ = { p / q : p, q ∈ ℕ⁺ } ∪ { 0 } ∪ { -p / q : p, q ∈ ℕ⁺ }

Or vs ℚ = { p, q : p ∈ ℤ, q ∈ ℕ }.

> [...] But calling the additive identity "0" or

> "1" /does/ matter, because one choice makes sense and the other choice

> is pointlessly confusing. (The definition of addition does not actually

> require an identity.)

No-one calls the additive identity 1, esp as it's not even

useful until you get to much more advanced maths, by which time you

already have 0 [part of ℤ, defined by equivalence classes of pairs

of members of ℕ].

> [...] In my experience (as someone

> with a degree in mathematics and theoretical computing, [...].

No such thing when I was a student!

> I had thought of surreals as trying to find gaps

> and endpoints in the reals and filling them in (though I know the

> construction doesn't do that).

No, rather that "filling in gaps" [ie, partitioning known

numbers] produces the reals and more. If the partitioning is

ordered, you get numbers; if unordered, you get games [of which

numbers are therefore a subset].

> I wonder if the lengths in TeX, which have 3 (IIRC) layers of infinities

> and infinitesimals, were invented as a kind of computer approximation to

> surreals?

Pass. I've never used [and don't like] TeX; we had real

experts in typography in my dept [consultants for major academic

publishers, exam boards, etc], and some of that rubbed off. They

disliked the "look" of Knuth's books, and even more so that we

kept being told that TeX knows best. They devoted their time to

tweaking Troff, which takes a more pragmatic view [and some of

the tweaks found their way back into "official" Troff]. I also

quite like Lout, FWIW.

[...]

> All sorts of maths that is esoteric to most people has some odd

> application or two. That is often what makes it interesting.

/All/ maths is esoteric to most people! A large majority

don't even know that there is maths beyond arithmetic, apart from

the algebra [etc] that never made any sense at all to them.

Dec 3, 2021, 12:49:28 PM

On 03/12/2021 17:26, Andy Walker wrote:

> On 03/12/2021 14:38, David Brown wrote:

>> Rationals are easier when you have 0 :

>> ℚ = { p / q : p, q ∈ ℤ, q > 0 }

>> vs.

>> ℚ = { p / q : p, q ∈ ℕ⁺ } ∪ { 0 } ∪ { -p / q : p, q ∈ ℕ⁺ }

>

> Or vs ℚ = { p, q : p ∈ ℤ, q ∈ ℕ }.

>

ℕ is ambiguous - you need to write something like ℕ⁺ unless it is clear

from earlier. While it is quite easy to write ℕ⁺, there is no good

argument for suggesting it is noticeably simpler than using ℤ, nor any

special case handling for 0.

>> [...] But calling the additive identity "0" or

>> "1" /does/ matter, because one choice makes sense and the other choice

>> is pointlessly confusing. (The definition of addition does not actually

>> require an identity.)

>

> No-one calls the additive identity 1, esp as it's not even

> useful until you get to much more advanced maths, by which time you

> already have 0 [part of ℤ, defined by equivalence classes of pairs

> of members of ℕ].

You wrote:

I don't see "a + 1 == a'" as interestingly harder than

"a + 0 = a". In some ways it's easier;

Presumably then that was a typo. (Fair enough, we all do that.)

>

>> [...] In my experience (as someone

>> with a degree in mathematics and theoretical computing, [...].

>

> No such thing when I was a student!

>

>> I had thought of surreals as trying to find gaps

>> and endpoints in the reals and filling them in (though I know the

>> construction doesn't do that).

>

> No, rather that "filling in gaps" [ie, partitioning known

> numbers] produces the reals and more. If the partitioning is

> ordered, you get numbers; if unordered, you get games [of which

> numbers are therefore a subset].

>

>> I wonder if the lengths in TeX, which have 3 (IIRC) layers of infinities

>> and infinitesimals, were invented as a kind of computer approximation to

>> surreals?

>

> Pass. I've never used [and don't like] TeX; we had real

> experts in typography in my dept [consultants for major academic

> publishers, exam boards, etc], and some of that rubbed off. They

> disliked the "look" of Knuth's books, and even more so that we

> kept being told that TeX knows best. They devoted their time to

> tweaking Troff, which takes a more pragmatic view [and some of

> the tweaks found their way back into "official" Troff]. I also

> quite like Lout, FWIW.

Preparation System. Knuth's early books had ugly typography - that's

why he made TeX. But of course there's a lot of scope for subjective

choice in the subject, and for picking formats, layouts, etc., that suit

your own needs. LaTeX and friends take a bit of work to use well, but I

find it is worth it. (That is especially for most people that can't

afford in-house typographers, so the alternative is LibreOffice or,

horrors, Word.)

>

> [...]

>> All sorts of maths that is esoteric to most people has some odd

>> application or two. That is often what makes it interesting.

>

> /All/ maths is esoteric to most people! A large majority

> don't even know that there is maths beyond arithmetic, apart from

> the algebra [etc] that never made any sense at all to them.

>

Dec 3, 2021, 5:03:19 PM

On 03/12/2021 17:49, David Brown wrote:

[Definition of rationals:]

> ℕ is ambiguous - you need to write something like ℕ⁺ unless it is clear

> from earlier.

It was clear in context! But ℕ⁺ is still a more primitive

concept than ℤ.

> While it is quite easy to write ℕ⁺, there is no good

> argument for suggesting it is noticeably simpler than using ℤ, nor any

> special case handling for 0.

My version didn't have any special casing for 0. Note that

according to the usual development you need four members of ℕ to

construct two members of ℤ [as two equivalence classes of members

of ℕ], of which one is logically redundant.

> You wrote:

> I don't see "a + 1 == a'" as interestingly harder than

> "a + 0 = a". In some ways it's easier;

> Presumably then that was a typo. (Fair enough, we all do that.)

If it was, I still don't see it. Did you miss the "'"

[usual abbreviation for the successor function]?

Recap:

ANW: ℕ is 1, 1', 1'', ...; "+" is defined by "a + 1 == a'"

and "a + b' = (a+b)'"; "ℤ" by equivalence classes on

pairs of members of "ℕ"; "ℚ" as a triple of members of

"ℕ" [equivalently, a member of "ℤ" and a member of "ℕ"].

DB: ℕ is 0, 0', 0'', ...; "+" is defined by "a + 0 == a"

and "a + b' = (a+b)'"; "ℤ" by equivalence classes on

pairs of members of "ℕ"; "ℚ" as a pair of members of

"ℤ" of which the second is > 0 [equivalently, a member

of "ℤ" and a member of "ℕ⁺"].

I don't see an interesting difference in difficulty, nor any

good reason other than fashion to choose one over the other.

Further recap: this started with whether Bart counted "1, 2,

3, ..." or "0, 1, 2, ...". IRL virtually everyone is first

taught to count "1, 2, 3, ...". Everything else comes later

[if at all].

I doubt whether I have anything further to contribute

to this thread, which is now diverging a long way from CS.


Dec 4, 2021, 12:50:36 AM

to

On Thursday, December 2, 2021 at 1:37:33 AM UTC-6, David Brown wrote:

> On 02/12/2021 01:11, Andy Walker wrote:

> > [...]

> >> Negative numbers long pre-date the general acceptance of 0 as a

> >> "number". They were used in accountancy, as well as by a few

> >> mathematicians. But their general use, especially in Europe, came a

> >> lot later.

> >

> > My impression is that accountants used red ink rather than

> > negative numbers. As late as the 1970s, hand/electric calculators

> > still used red numerals rather than a minus sign.

> >

> Many conventions have been used, in different countries, times, and

> cultures. "Red ink" is certainly a well-known phrase in modern

> English-speaking countries. But brackets, minus signs, and other

> methods are used. Go far enough back and people didn't write with ink

> at all.

>

Dang. Y'all giving me ideas.
--

better watch out

Dec 4, 2021, 6:29:46 AM

to

On 03/12/2021 23:03, Andy Walker wrote:

> On 03/12/2021 17:49, David Brown wrote:

> [Definition of rationals:]

>> ℕ is ambiguous - you need to write something like ℕ⁺ unless it is clear

>> from earlier.

>

> It was clear in context! But ℕ⁺ is still a more primitive

> concept than ℤ.

>

You can't say that your definition of ℕ in your definition of ℚ is clear
in the context - that would be begging the question. Obviously I know

what you meant by ℕ because I know how the rationals are defined. But

if you are giving the definition of something, you can't force readers

to assume the definition you want in order to figure out the meaning of

the things you use in your definition.

I can agree that the naturals - with or without 0 - are more primitive

than the integers. I don't see an advantage in that. Mathematics is

about building up concepts step by step - once you have a concept, you

use it freely for the next step. There is no benefit in going back to

more primitive stages.

>> While it is quite easy to write ℕ⁺, there is no good

>> argument for suggesting it is noticeably simpler than using ℤ, nor any

>> special case handling for 0.

>

> My version didn't have any special casing for 0. Note that

> according to the usual development you need four members of ℕ to

> construct two members of ℤ [as two equivalence classes of members

> of ℕ], of which one is logically redundant.

>

>> You wrote:

>> I don't see "a + 1 == a'" as interestingly harder than

>> "a + 0 = a". In some ways it's easier;

>> Presumably then that was a typo. (Fair enough, we all do that.)

>

> If it was, I still don't see it. Did you miss the "'"

> [usual abbreviation for the successor function]?

>

Yes, I missed it - the "'" indicator is fine on paper, but I find it can easily be missed in emails

or Usenet where typography is limited. With that sorted out, I agree

with you - it is entirely possible to start your addition recursive

definition with 1 here. I can't see a benefit from starting with 1, and

the 0 will come in very handy later on, but they are both valid

alternatives.

>

> I doubt whether I have anything further to contribute

> to this thread, which is now diverging a long way from CS.

>

That would be a shame - your contributions are valued in both the thread and the group!

Dec 6, 2021, 5:39:31 AM

to

On Tue, 30 Nov 2021 08:07:30 +0000

James Harris <james.h...@gmail.com> wrote:

> From another thread, discussion between David and Bart:

> D> But if you have just one starting point, 0 is the sensible one.
> D> You might not like the way C handles arrays (and I'm not going to

> D> argue about it - it certainly has its cons as well as its pros),

> D> but even you would have to agree that defining "A[i]" to be the

> D> element at "address of A + i * the size of the elements" is neater

> D> and clearer than one-based indexing.

>

> B> That's a crude way of defining arrays. A[i] is simply the i'th

> B> element of N slots, you don't need to bring offsets into it.

>

> Why call it 'i'th? I know people do but wouldn't it be easier to call

> it 'element n' where n is its index? Then that would work with any

> basing.

>

'n'th, 'i'th, 'x'th, ...
Does the letter choice matter?

Why wouldn't you say the 10th or 7th or 1st etc by using an actual

number to specify a specific item? Although I do like variables, this

isn't algebra.

> 'element n'

Chemistry ... Obtuse?

Well, "item" is shorter than "element". So, that's one up vote for

"item" ...

> B> With 0-based, there's a disconnect between the ordinal number of

> B> the element you want, and the index that needs to be used. So A[2]

> B> for the 3rd element.

>

> Why not call A[2] element 2?

>

> BTW, Bart, do you consider the first ten numbers as 1 to 10 rather

> than 0 to 9?

IMO, irrelevant.
As he has his choice of either with 0-based indexing.

If he wants the first ten elements of an array to be indexed by 0 to 9,

he can use 0 to 9.

If he wants the first ten elements of an array to be indexed by 1 to

10, he can use 1 to 10, skip using 0, but he needs to remember

to allocate one additional element.

> [What does Rod prefer?]

> (Let's hope I'm not talking to myself now.)

Oh boy, you didn't ask me that ...

(Oops, it seems that I am talking to myself now.)

Maybe I've been coding in C for too long now, as I prefer zero-based

indexing, even in non-C situations like personal to-do lists.

0-based works really well with C, but I do recall it being somewhat

"unnatural" at first, even though zero was the central starting point of

the signed number line in mathematics. For programming, I strongly

prefer unsigned only. This eliminates many coding errors.

E.g., for C, I know that the address of the 0'th (zeroth) element of an

"array" (&A[0]) is the same as the base address of the said "array"

named A. This can be convenient as no address calculation needs to be

computed because there is no actual indexing into the array when the

index is zero.

E.g., for C, you can do neat 0-based "tricks" like below for loops to

detect loop termination. I.e., the value of 10 below as MAX detects the

end-of-loop of the 10 values of variable "i" with 0..9 being the actual

printed values:

#define MAX 10

for(i=0;i!=MAX;i++)

And, if the language is designed correctly and the variable is declared

with file scope, using zero-based indexing means that variables don't

need to be initialized to zero. They'll be cleared to zero upon

program execution. I.e., the "i=0" above should be optional in a

language for all declared file scope variables, since such variables

should be initialized to or cleared to zero by default. E.g., a C

compiler may warn if "i=0" initialization is missing when "i" is

declared with auto or local scope (within a procedure), but C compilers

will generally not warn when "i" is declared with file or global scope

(or local static) as they are initialized to zero due to BSS.

--

"If Britain were to join the United States, it would be the

second-poorest state, behind Alabama and ahead of Mississippi,"

Hunter Schwarz, Washington Post

Dec 6, 2021, 7:24:20 AM

to

On 06/12/2021 10:41, Rod Pemberton wrote:

> On Tue, 30 Nov 2021 08:07:30 +0000

> James Harris <james.h...@gmail.com> wrote:

>> [What does Rod prefer?]

>> (Let's hope I'm not talking to myself now.)

> 0-based works really well with C,

Well, C invented it. (If it didn't, then it made it famous.)
> but I do recall it being somewhat

> "unnatural" at first, even though zero was the central starting point of

> the signed number line in mathematics.

Yeah, this is my fence/fencepost distinction.

If you draw XY axes on squared paper, and start annotating the positive

X axis as 0 (at Y-axis), 1, 2, 3 ..., then those figures will mark the

vertical divisions between the squares.

NOT the squares themselves. The squares are what correspond to C array

elements.

> For programming, I strongly

> prefer unsigned only. This eliminates many coding errors.

OK, so we can forget about that negative X axis!

> E.g., for C, you can do neat 0-based "tricks" like below for loops to

> detect loop termination. I.e., the value of 10 below as MAX detects the

> end-of-loop of the 10 values of variable "i" with 0..9 being the actual

> printed values:

>

> #define MAX 10

> for(i=0;i!=MAX;i++)

>

>

> And, if the language is designed correctly and the variable is declared

> with file scope, using zero-based indexing means that variables don't

> need to be initialized to zero. They'll be cleared to zero upon

> program execution. I.e., the "i=0" above should be optional in a

> language for all declared file scope variables, since such variables

> should be initialized to or cleared to zero by default.

That's great. Until the second time you execute this loop:

for(; i!=MAX; i++)

since now i will have value 10 (or whatever it ended up as after 1000

more lines of code, being a file-scope variable visible from any

function). Or when you execute this separate loop further on:

for(; i<N; i++)

Dec 6, 2021, 8:56:43 AM

to

On 06/12/2021 11:41, Rod Pemberton wrote:

>

> Maybe I've been coding in C for too long now, as I prefer zero-based

> indexing, even in non-C situations like personal to-do lists.

>

> 0-based works really well with C, but I do recall it being somewhat

> "unnatural" at first, even though zero was the central starting point of

> the signed number line in mathematics. For programming, I strongly

> prefer unsigned only. This eliminates many coding errors.

Out of curiosity, what kinds of coding errors do you eliminate by using
unsigned types?

>

> E.g., for C, I know that the address of the 0'th (zeroth) element of an

> "array" (&A[0]) is the same as the base address of the said "array"

> named A. This can be convenient as no address calculation needs to be

> computed because there is no actual indexing into the array when the

> index is zero.

>

> E.g., for C, you can do neat 0-based "tricks" like below for loops to

> detect loop termination. I.e., the value of 10 below as MAX detects the

> end-of-loop of the 10 values of variable "i" with 0..9 being the actual

> printed values:

>

> #define MAX 10

> for(i=0;i!=MAX;i++)

>

>

> And, if the language is designed correctly and the variable is declared

> with file scope, using zero-based indexing means that variables don't

> need to be initialized to zero. They'll be cleared to zero upon

> program execution. I.e., the "i=0" above should be optional in a

> language for all declared file scope variables, since such variables

> should be initialized to or cleared to zero by default. E.g., a C

> compiler may warn if "i=0" initialization is missing when "i" is

> declared with auto or local scope (within a procedure), but C compilers

> will generally not warn when "i" is declared with file or global scope

> (or local static) as they are initialized to zero due to BSS.

>

Loop variables should not have file scope (or any other form of

static lifetime). Indeed, they are normally local to the loop itself.

The idiomatic for loop in C is :

for (int i = 0; i < MAX; i++) { ... }

Dec 7, 2021, 7:48:43 AM

to

On Mon, 6 Dec 2021 12:24:18 +0000

Bart <b...@freeuk.com> wrote:

> On 06/12/2021 10:41, Rod Pemberton wrote:

> > On Tue, 30 Nov 2021 08:07:30 +0000

> > James Harris <james.h...@gmail.com> wrote:

> >> [What does Rod prefer?]

> >> (Let's hope I'm not talking to myself now.)

>

> > 0-based works really well with C,

>

> Well, C invented it. (If it didn't, then it made it famous.)

>

> > but I do recall it being somewhat

> > "unnatural" at first, even though zero was the central starting

> > point of the signed number line in mathematics.

>

> Yeah, this is my fence/fencepost distinction.

>

> If you draw XY axes on squared paper, and start annotating the

> positive X axis as 0 (at Y-axis), 1, 2, 3 ..., then those figures

> will mark the vertical divisions between the squares.

>

> NOT the squares themselves. The squares are what correspond to C

> array elements.

Well, they can represent the squares too along a single axis, either X
or Y. I.e., skip zero, use the other values.

This won't work for a 2-dimensional grid, but the same is true of

arrays, e.g., A[2][3]. I.e., what do you call the 3rd square up in

the 2nd column?

> > For programming, I strongly

> > prefer unsigned only. This eliminates many coding errors.

>

> OK, so we can forget about that negative X axis!

>

Yeah, we're down to one quadrant instead of four!

> > E.g., for C, you can do neat 0-based "tricks" like below for loops

> > to detect loop termination. I.e., the value of 10 below as MAX

> > detects the end-of-loop of the 10 values of variable "i" with 0..9

> > being the actual printed values:

> >

> > #define MAX 10

> > for(i=0;i!=MAX;i++)

> >

> >

> > And, if the language is designed correctly and the variable is

> > declared with file scope, using zero-based indexing means that

> > variables don't need to be initialized to zero. They'll be cleared

> > to zero upon program execution. I.e., the "i=0" above should be

> > optional in a language for all declared file scope variables, since

> > such variables should be initialized to or cleared to zero by

> > default.

>

> That's great. Until the second time you execute this loop:

>

> for(; i!=MAX; i++)

I.e., I'd argue that in general, very generally, loops are only

executed once within most C programs. However, obviously, if the loop

is re-used, the programmer must set i to 0, as shown previously, or to

1 in your case, prior to the re-use of i.

> since now i will have value 10 (or whatever it ended up as after 1000

> more lines of code, being a file-scope variable visible from any

> function). Or when you execute this separate loop further on:

>

> for(; i<N; i++)

My comment was about the default initialization of file scope

variables, not specifically about loops. If numbering and indexing

start with a value of one, then you'll have to "clear" the BSS

variables to a value of "one", except the binary representations for

what "one" is may vary depending on the data type, e.g., integer vs

float. Whereas, the binary representation for "zero" is almost always

all bits clear.

--

Dec 7, 2021, 7:52:20 AM

to

On Mon, 6 Dec 2021 14:56:41 +0100

David Brown <david...@hesbynett.no> wrote:

> On 06/12/2021 11:41, Rod Pemberton wrote:

>

> >

> > Maybe I've been coding in C for too long now, as I prefer zero-based

> > indexing, even in non-C situations like personal to-do lists.

> >

> > 0-based works really well with C, but I do recall it being somewhat

> > "unnatural" at first, even though zero was the central starting

> > point of the signed number line in mathematics. For programming, I

> > strongly prefer unsigned only. This eliminates many coding errors.

>

> Out of curiosity, what kinds of coding errors do you eliminate by

> using unsigned types?

When you expect the variable to function as unsigned, which is common
in C when working with characters, pointers, or binary data, but the

variable was actually declared as signed. If you add the value to

another integer, you'll end up with the "wrong" numeric result, i.e.,

other than what was expected because the variable was signed.

i=1000

x=0xFF (signed 8-bit char)

y=0xFF (unsigned 8-bit char)

i+x = 999 (unexpected)

i+y = 1255 (expected)

My earlier comment about zero-initialization wasn't

linked with the concept of a loop variable, or even a discussion of C

specifically, but was a generic statement for any file scope or global

variables, for some new language using C as a means to explain.

The other option, per the prior discussion, would require that the file

scope variables in BSS all be "cleared" to a value of one. As you know

from C, the representations for a value of one may be different for

each type of variable, e.g., integer vs float.

As for C, variable declarations within the for() loop are not valid

in ANSI C (C89); they are only valid in C99, C11, or later. So, one could

argue, that to ensure backwards code compatibility, hence portability

of C code, that declaring a variable somewhere within a procedure, such

as within a for() loop, should be avoided, yes? Think of C style guide

suggestions.

--

Dec 7, 2021, 9:07:13 AM

to

On 07/12/2021 12:52, Rod Pemberton wrote:

> On Mon, 6 Dec 2021 14:56:41 +0100

> David Brown <david...@hesbynett.no> wrote:

>

>> On 06/12/2021 11:41, Rod Pemberton wrote:

>>

>>>

>>> Maybe I've been coding in C for too long now, as I prefer zero-based

>>> indexing, even in non-C situations like personal to-do lists.

>>>

>>> 0-based works really well with C, but I do recall it being somewhat

>>> "unnatural" at first, even though zero was the central starting

>>> point of the signed number line in mathematics. For programming, I

>>> strongly prefer unsigned only. This eliminates many coding errors.

>>

>> Out of curiosity, what kinds of coding errors do you eliminate by

>> using unsigned types?

>

> When you expect the variable to function as unsigned, which is common

> in C when working with characters, pointers, or binary data, but the

> variable was actually declared as signed. If you add the value to

> another integer, you'll end up with the "wrong" numeric result, i.e.,

> other than what was expected because the variable was signed.

>

> i=1000

> x=0xFF (signed 8-bit char)

If x has int8 type then it has the value -1 not +255.
> y=0xFF (unsigned 8-bit char)

>

> i+x = 999 (unexpected)

This is not unexpected, only if you expect int8 to be able to represent

+255.

> i+y = 1255 (expected)

Unsigned has its own problems:

unsigned int i=1000;

unsigned int x=1001;

printf("%u\n",i-x);

printf("%f\n",(double)(i-x));

displays:

4294967295

4294967295.000000

Dec 7, 2021, 9:40:03 AM

to

On 07/12/2021 13:52, Rod Pemberton wrote:

> On Mon, 6 Dec 2021 14:56:41 +0100

> David Brown <david...@hesbynett.no> wrote:

>

>> On 06/12/2021 11:41, Rod Pemberton wrote:

>>

>>>

>>> Maybe I've been coding in C for too long now, as I prefer zero-based

>>> indexing, even in non-C situations like personal to-do lists.

>>>

>>> 0-based works really well with C, but I do recall it being somewhat

>>> "unnatural" at first, even though zero was the central starting

>>> point of the signed number line in mathematics. For programming, I

>>> strongly prefer unsigned only. This eliminates many coding errors.

>>

>> Out of curiosity, what kinds of coding errors do you eliminate by

>> using unsigned types?

>

> When you expect the variable to function as unsigned, which is common

> in C when working with characters, pointers, or binary data, but the

> variable was actually declared as signed.

It's a bad idea ever to use plain "char" and think of it as a number at
all - thus signedness makes no sense. Use "signed char" or "unsigned

char" if you need a signed or unsigned small type - or, preferably, uint8_t, int8_t,

or one of the other <stdint.h> types if you need maximal portability.

Pointers don't have a concept of signedness.

The only type that is guaranteed to work for general "binary data"

without a proper type, is "unsigned char".

These issues are not solved by "preferring to use unsigned", they are

solved by learning the language.

> If you add the value to

> another integer, you'll end up with the "wrong" numeric result, i.e.,

> other than what was expected because the variable was signed.

>

> i=1000

> x=0xFF (signed 8-bit char)

> y=0xFF (unsigned 8-bit char)

>

> i+x = 999 (unexpected)

> i+y = 1255 (expected)

>

Preferring unsigned plays no part in this - you have not used unsigned

ints here, nor would they help.

Do you mean that you prefer to use a type that can hold the data you put

into it, rather than having it truncated or converted? If so, then I

agree - that is sane practice for all types and all programming languages.

But your "unexpected" result is caused by someone trying to use a

"signed char" or plain "char" (without knowing the signedness) for

values outside its range and for something completely inappropriate. A

much more useful coding rule here is "don't use plain chars for numbers

or arithmetic - use <stdint.h> types", rather than "prefer unsigned".

I agree that people do make mistakes by assuming that plain "char" is

signed - I just disagree that "preferring unsigned" is helpful in

avoiding such mistakes.

(I personally use a lot more unsigned types than most, because types

such as uint8_t, uint16_t and uint32_t are most natural when dealing

with low-level embedded programming.)

Loop variables (in any language) are perhaps the least likely of any variable uses to

have static lifetime.

> The other option, per the prior discussion, would require that the file

> scope variables in BSS all be "cleared" to a value of one. As you know

> from C, the representations for a value of one may be different for

> each type of variable, e.g., integer vs float.

That would lose the simplicity of clearing everything to zero, as

well as making every other static-lifetime default initialised variable

wrong.

>

> As for C, variable declarations within the for() loop is not valid

> for ANSI C (C89), i.e., valid for C99 or C11 or later.

That is only true if C89 is what is meant by "C". If you want to pick an older standard, it is best to

specify it explicitly. (And I don't recommend using the term "ANSI C"

at all - people often use it to mean C89, when in fact it means "the

current ISO C standard" - i.e., C18 at the time of writing.)

Of course you are correct that putting declarations in the "for" loop

was introduced in C99. Rounded to the nearest percentage, 100% of C

code has been written since the introduction of C99, and probably at

least 98% since it became widely supported by common tools. There are a

few very niche situations where it makes sense to use pre-C99 today,

other than for maintaining old programs in the style in which they were

written. Other than that, C99 syntax is standard.

> So, one could

> argue, that to ensure backwards code compatibility, hence portability

> of C code, that declaring a variable somewhere within a procedure, such

> as within a for() loop, should be avoided, yes? Think of C style guide

> suggestions.

>

That's like saying software should be published as printouts in a

magazine, rather than, say, on a web page, for backwards compatibility.

Dec 7, 2021, 9:55:24 AM

to

On 07/12/2021 15:05, Bart wrote:

> On 07/12/2021 12:52, Rod Pemberton wrote:

>> On Mon, 6 Dec 2021 14:56:41 +0100

>> David Brown <david...@hesbynett.no> wrote:

>>

>>> On 06/12/2021 11:41, Rod Pemberton wrote:

>>>

>>>>

>>>> Maybe I've been coding in C for too long now, as I prefer zero-based

>>>> indexing, even in non-C situations like personal to-do lists.

>>>>

>>>> 0-based works really well with C, but I do recall it being somewhat

>>>> "unnatural" at first, even though zero was the central starting

>>>> point of the signed number line in mathematics. For programming, I

>>>> strongly prefer unsigned only. This eliminates many coding errors.

>>>

>>> Out of curiosity, what kinds of coding errors do you eliminate by

>>> using unsigned types?

>>

>> When you expect the variable to function as unsigned, which is common

>> in C when working with characters, pointers, or binary data, but the

>> variable was actually declared as signed. If you add the value to

>> another integer, you'll end up with the "wrong" numeric result, i.e.,

>> other than what was expected because the variable was signed.

>>

>> i=1000

>> x=0xFF (signed 8-bit char)

>

> If x has int8 type then it has the value -1 not +255.

>

Correct. "0xff" is interpreted as an integer constant (value 255).
When assigned to "x", it is converted using an implementation-dependent

algorithm (which is invariably modulo reduction, when using a two's

complement system) to the range of the target variable - arriving at -1.

I'd prefer if C had some way of initialising variables that did not have

such implicit conversions (also known as "narrowing conversions"). C++

has a method which works well in practice but is, IMHO, ugly :

int8_t x { 0xff };

instead of :

int8_t x = 0xff;

The first is an error in C++ because of the narrowing conversion, the

second works exactly like C.

A syntax such as :

int8_t x := 0xff;

would have been nicer IMHO, but it's too late for that in C (or C++).

>

>> y=0xFF (unsigned 8-bit char)

>>

>> i+x = 999 (unexpected)

>

> This is not unexpected, only if you expect int8 to be able to represent

> +255.

>

>> i+y = 1255 (expected)

>

> Unsigned has its own problems:

>

> unsigned int i=1000;

> unsigned int x=1001;

>

> printf("%u\n",i-x);

> printf("%f\n",(double)(i-x));

>

> displsys:

>

> 4294967295

> 4294967295.000000

>

That is also not unexpected.

(Other languages could reasonably handle this differently - C's

treatment of unsigned int as a modulo type is not the only practical way

to handle things. But if you know the basics of C, it is neither a

problem nor a surprise.)

Dec 7, 2021, 7:42:33 PM

to

When using signed integers, then you really only get problems with

overflows and such at quite high magnitudes, around +/- 2e9 for int32,

and considerably higher for int64 at +/- 9e18.

But you will normally work with values a long way from those limits,

unless results are stored in narrower types, so such problems are rare.

With unsigned numbers however, one of those problematic limits is zero,

which is really too close for comfort! So you will see problems even

with small, very ordinary calculations, such as 2 - 3, which here

underflows that zero.

Dec 8, 2021, 3:41:17 AM

to

On 08/12/2021 01:42, Bart wrote:

> On 07/12/2021 14:55, David Brown wrote:

>> On 07/12/2021 15:05, Bart wrote:

>

>>> Unsigned has its own problems:

>>>

>>> unsigned int i=1000;

>>> unsigned int x=1001;

>>>

>>> printf("%u\n",i-x);

>>> printf("%f\n",(double)(i-x));

>>>

>>> displays:

>>>

>>> 4294967295

>>> 4294967295.000000

>>>

>>

>> That is also not unexpected.

>>

>> (Other languages could reasonably handle this differently - C's

>> treatment of unsigned int as a modulo type is not the only practical way

>> to handle things. But if you know the basics of C, it is neither a

>> problem nor a surprise.)

>

> When using signed integers, then you really only get problems with

> overflows and such at quite high magnitudes, around +/- 2e9 for int32,

> and considerably higher for int64 at +/- 9e18.

Yes.
>

> But you will normally work with values a long way from those limits,

> unless results are stored in narrower types, so such problems are rare.

>

> With unsigned numbers however, one of those problematic limits is zero,

> which is really too close for comfort! So you will see problems even

> with small, very ordinary calculations, such as 2 - 3, which here

> underflows that zero.

>

Whatever kind of numbers you use, you have to apply a few brain cells.

You can't represent 1/3 with an integer, no matter how big it is. You

can't represent negative numbers with unsigned types. It's common

sense, not a "problematic limit". Anyone who finds it surprising that

you can't subtract 3 from 2 without signed numbers should give up their

programming career and go back to primary school. We have to have

/some/ standard of education in this profession!

Dec 8, 2021, 4:45:21 AM

Can the result be properly represented, or not? Given ordinary (ie.

smallish) signed values, it can. Given ordinary unsigned values, the

chances are 50% that it can't!

This is why I prefer signed types for general use to unsigned types. And

why my mixed arithmetic is performed using signed types.

Imagine working with unsigned float; where would you start with all the

potential problems!

C of course prefers to use unsigned for mixed arithmetic (although the

precise rules are complex). So here:

int a = 2;

unsigned b = 3;

double c = a-b;

printf("%f\n", c);

it prints 4294967295.000000. Same using b-4 instead of a-b.

If I do the same:

int a := 2

word b := 3

real c := a-b

println c

it shows -1.000000, for b-4 too. Fewer surprises.

I actually do all arithmetic using at least i64. Values of types u8, u16

and u32 are converted losslessly to i64 first. It's only when u64 is

involved that you need to start taking care, but my example uses u64

('word'), and that has more sensible behaviour than the C.

Dec 8, 2021, 6:07:38 AM

>

> This is why I prefer signed types for general use to unsigned types. And

> why my mixed arithmetic is performed using signed types.

Preferring signed types for general use is

fine, and the ability to subtract them and get negative numbers is one

of the reasons for that.

>

> Imagine working with unsigned float; where would you start with all the

> potential problems!

>

> C of course prefers to use unsigned for mixed arithmetic (although the

> precise rules are complex). So here:

The precise rules are simple, not complex.  Pretending they are

difficult does not help.  Personally, however, I don't think they are

good - I would have preferred rules that promoted both sides to a common

type that is big enough for all the values involved. Thus "signed int +

unsigned int" should be done as "signed long" (or "signed long long" if

necessary - or the rules for sizes should be changed too). Failing

that, it should be a constraint error (i.e., fail to compile).

C doesn't give me the rules I want here, so I use warnings and errors in

my compilation that flags such mixed arithmetic use as errors. The

result is that I can't get accidents with mixed arithmetic when

developing, and the code is fine for other compilers or flags because it

is just a slightly limited subset of C.

Other programmers make other choices, of course - that's just the way I

choose to handle this.

Please don't mistake my understanding of C's rules, my acceptance of

them, my appreciation that C is used by many people for many purposes

with different preferences and requirements, my working with C and

liking C, as meaning that I think C's rules are the way I personally

would have preferred.

>

> int a = 2;

> unsigned b = 3;

> double c = a-b;

>

> printf("%f\n", c);

>

> it prints 4294967295.000000. Same using b-4 instead of a-b.

In C, the types in an expression are

determined from the inside out (the actual calculations can be done in

any order, as long as the results match the sequence point

requirements). The type used on the left of an assignment operator has

no bearing on the types used on the right hand side (and vice versa).

This is the same in the solid majority of programming languages. It is

a simple and consistent choice that is easy to understand and use.

It is not the only option. In Ada, as I understand it, expressions are

influenced by the type they are assigned to. This is certainly true for

literals and it allows overloading functions based on the return type.

I don't know the full rules here (others here know better - indeed, much

of what /I/ know has come from postings here).

Applying the types from the outside in, so that in "c = a - b;" the type

of "c" is applied to those of "a" and "b" before the subtraction, is an

alternative to applying it from the inside out. It is not /better/, it

is /different/. It has some pros, and some cons. It is not in any

sense more "natural" or more "expected".

>

> If I do the same:

>

> int a := 2

> word b := 3

> real c := a-b

>

> println c

>

> it shows -1.000000, for b-4 too. Fewer surprises.

I agree that a result of -1 is more likely to be useful to the programmer,

but not that a reasonably competent C programmer would find the C

version surprising.

And you are mixing two separate issues here, which does not help your case.

>

> I actually do all arithmetic using at least i64. Values of types u8, u16

> and u32 are converted losslessly to i64 first. It's only when u64 is

> involved that you need to start taking care, but my example uses u64

> ('word'), and that has more sensible behaviour than the C.

My preferred rules for "a

- b" would first ensure that "a" and "b" are converted to a common type

that covers the whole range of both. If that can't be done, or if the

overflow characteristics of the original types are incompatible and an

overflow is possible, then there should be an error.

This is completely orthogonal as to whether "a - b" should be converted

to "real" before the subtraction, given that the result will be assigned

to a "real", or whether it should be evaluated first in the closest

common type specified by the language ("i64" in your language, "unsigned

int" in the C version) and /then/ converted to "real".

In particular, what does your language give for :

int a := 2

int b := 3

real c := b / a;

println c

Does it print 1, or 1.5 ?

The C version would give 1. Ada, as far as I could see in a quick test

on <https://godbolt.org>, will not accept mixing types in the same

expression or assignment without explicit casts.

Dec 8, 2021, 6:55:34 AM

On 08/12/2021 11:07, David Brown wrote:

> On 08/12/2021 10:45, Bart wrote:

>> C of course prefers to use unsigned for mixed arithmetic (although the

>> precise rules are complex). So here:

>

> The precise rules are simple, not complex.  Pretending they are

> difficult does not help.

Here is the table of rules for C: S means the operation is performed as

signed with a signed result; "." (chosen to make it clearer) means unsigned:

u8 u16 u32 u64 i8 i16 i32 i64

u8 S S . . S S S S

u16 S S . . S S S S

u32 . . . . . . . S

u64 . . . . . . . .

i8 S S . . S S S S

i16 S S . . S S S S

i32 S S . . S S S S

i64 S S S . S S S S

Here is the corresponding table for my language:

u8 u16 u32 u64 i8 i16 i32 i64

u8 . . . . S S S S

u16 . . . . S S S S

u32 . . . . S S S S

u64 . . . . S S S S

i8 S S S S S S S S

i16 S S S S S S S S

i32 S S S S S S S S

i64 S S S S S S S S

I think people can make up their own minds as to which has the simpler

rules!

(My table is missing rows/columns for i128/u128, but it's the same

pattern: unsigned/unsigned => unsigned, otherwise signed. I don't know

what C's would look like with 128-bit added.)

> Personally, however, I don't think they are

> good - I would have preferred rules that promoted both sides to a common

> type that is big enough for all the values involved. Thus "signed int +

> unsigned int" should be done as "signed long" (or "signed long long" if

> necessary - or the rules for sizes should be changed too). Failing

> that, it should be a constraint error (i.e., fail to compile).

>

> C doesn't give me the rules I want here,

Yeah, I get that feeling a lot. (Are you still wondering why I prefer my

language?)

>

> In particular, what does your language give for :

>

> int a := 2

> int b := 3

> real c := b / a;

>

> println c

>

>

> Does it print 1, or 1.5 ?

My languages have two divide operators: "/" and "%".

"%" means integer divide. "/" is supposed to be for floating point

divide, but that's only on one language; the static one will still do

integer divide when both operands are integers.

So M will give 1.0, Q will give 1.5.

But in both cases, it is the operator and the operand types that

determine what happens. It can't look beyond that, since I want the same

code to work in dynamic code where that information doesn't exist (c

will not even have a type until assigned to).

You would anyway want a term like A*B, to give the same result in terms

of value and type, no matter which expressions it is part of.

In languages that do it differently, A*B could give different results

even if repeated within the same expression!


Dec 8, 2021, 9:26:22 AM

Since posting that chart I'd been

thinking of tweaking it so that there were more signed operations; I'd

forgotten I'd already done it and was trying it out!

The current table is (arguably even simpler than before):

u8 u16 u32 u64 i8 i16 i32 i64

u8 S S S S S S S S

u16 S S S S S S S S

u32 S S S S S S S S

u64 S S S . S S S S

i8 S S S S S S S S

i16 S S S S S S S S

i32 S S S S S S S S

i64 S S S S S S S S

The only remaining unsigned ('.') operation is

for u64/u64.

Any scheme will give incorrect results: inappropriate signedness,

overflow etc on certain combinations. This was designed to minimise those

and give the most useful results for the most common values.

Explicit u64 types are not common; but quite common are u8 u16 u32 used

in arrays and structs, which are about saving space.

But this is a demonstration of the benefit:

u8 a:=2, b:=3

println a-b

Under the old chart, this displayed 18446744073709551615 (u8-u8 =>

u64-u64 => u64). Under the new one, it shows -1 (u8-u8 => i64-i64 => i64).

BTW this is the C table showing the operation and result types (both

sides promoted to the type shown):

u8 u16 u32 u64 i8 i16 i32 i64

u8 i32 i32 u32 u64 i32 i32 i32 i64

u16 i32 i32 u32 u64 i32 i32 i32 i64

u32 u32 u32 u32 u64 u32 u32 u32 i64

u64 u64 u64 u64 u64 u64 u64 u64 u64

i8 i32 i32 u32 u64 i32 i32 i32 i64

i16 i32 i32 u32 u64 i32 i32 i32 i64

i32 i32 i32 u32 u64 i32 i32 i32 i64

i64 i64 i64 i64 u64 i64 i64 i64 i64

And this is my own current chart:

u8 u16 u32 u64 i8 i16 i32 i64

u8 i64 i64 i64 i64 i64 i64 i64 i64

u16 i64 i64 i64 i64 i64 i64 i64 i64

u32 i64 i64 i64 i64 i64 i64 i64 i64

u64 i64 i64 i64 u64 i64 i64 i64 i64

i8 i64 i64 i64 i64 i64 i64 i64 i64

i16 i64 i64 i64 i64 i64 i64 i64 i64

i32 i64 i64 i64 i64 i64 i64 i64 i64

i64 i64 i64 i64 i64 i64 i64 i64 i64

Spot the odd-one-out.

Dec 8, 2021, 10:36:36 AM

On 08/12/2021 12:55, Bart wrote:

> On 08/12/2021 11:07, David Brown wrote:

>> On 08/12/2021 10:45, Bart wrote:

>

>>> C of course prefers to use unsigned for mixed arithmetic (although the

>>> precise rules are complex). So here:

>>

>> The precise rules are simple, not complex. Pretending they are

>> difficult does not help.

>

What is it with you and your campaign to claim everything C is bad, and
everything in your useless little private language is good? It doesn't

matter what anyone writes - you /always/ twist the facts, move the

goalposts or deliberately misinterpret what others write. (And yes,

your language is useless - no one else will ever use it. You have

made useful software with it and used it in your work in the past.

That's great, and genuinely praise-worthy. But it is dead now. Move

along.)

So - let's start with some kindergarten logic. Claiming that your rules

are simpler than C's does not make C's rules complex.

In a binary arithmetic expression with integer types, any type smaller

than "int" is first converted to an "int". Then if the two parts have

different types, they are converted to the bigger type with "unsigned"

types being treated as slightly bigger than the signed types.

It is /not/ hard. It is /not/ complex. You might not think it is

ideal, and I'd agree. But it really is not rocket science, and it

doesn't need a complicated table of inappropriate made-up types to make

it look more complicated.

Oh, and your method will screw up too, for some cases. /Any/ method

will in some cases, unless you have unlimited ranges for your integers

(like Python) or point-blank refuse mixed signed expressions (like Ada).

And your language will still screw up on overflows.

(And before you post your knee-jerk response, the fact that C gets

things wrong on overflow does not mean your language is right or better.)

<snip more pointless and annoying drivel>

>> In particular, what does your language give for :

>>

>> int a := 2

>> int b := 3

>> real c := b / a;

>>

>> println c

>>

>>

>> Does it print 1, or 1.5 ?

>

> My languages have two divide operators: "/" and "%".

>

> "%" means integer divide. "/" is supposed to be for floating point

> divide, but that's only on one language; the static one will still do

> integer divide when both operands are integers.

Genius.  Does it also use "and" as a keyword for the remainder after

division?  Nothing says "simple" and "intuitive" like picking different

meanings for your operators than all other languages.

>

> So M will give 1.0, Q will give 1.5.

>

That's your two languages that are proudly the same syntax, but handle

expressions in completely different ways?

If you want to keep posting about your own language, please feel free -

only you can tell if you are making things up as you go along. But

/please/ stop posting shite about other languages that you refuse to

understand.

Understand me correctly here - I really don't care if you like C or not.

I don't care if anyone else here likes it or not, uses it or not. I am

not interested in promoting C or any other language - I'll use what I

want to use, and others will use what they want.

But what I /do/ react against is lies, FUD, and misrepresentations. I

am not "pro-C" - I am "anti-FUD", and it just so happens that your

bizarre hatred of C means it is C you post rubbish about. I'd react

against anyone else deliberately and repeatedly writing nonsense about

other topics too.

Dec 8, 2021, 11:58:56 AM

On 08/12/2021 15:36, David Brown wrote:

> On 08/12/2021 12:55, Bart wrote:

>

> What is it with you and your campaign to claim everything C is bad, and

> everything in your useless little private language is good?

I said the rules are complex. You said they are simple. I disagreed, and

illustrated my point with a chart.

> In a binary arithmetic expression with integer types, any type smaller

> than "int" is first converted to an "int".  Then if the two parts have

> different types, they are converted to the bigger type with "unsigned"

> types being treated as slightly bigger than the signed types.

At least, they are simpler than the rules for type syntax. And not much

simpler than the rules for charting the Mandelbrot Set!

> It is /not/ hard. It is /not/ complex. You might not think it is

> ideal, and I'd agree. But it really is not rocket science, and it

> doesn't need a complicated table of inappropriate made-up types

What made-up types? And why are they inappropriate?

Are you sure you aren't twisting and making up things yourself?

> to make

> it look more complicated.

I think most people would be surprised at how untidy that chart is. /I/ was.

>>> Does it print 1, or 1.5 ?

>>

>> My languages have two divide operators: "/" and "%".

>>

>> "%" means integer divide. "/" is supposed to be for floating point

>> divide, but that's only on one language; the static one will still do

>> integer divide when both operands are integers.

>

> Genius. Does it also use "and" as a keyword for the remainder after

> division? Nothing says "simple" and "intuitive" like picking different

> meanings for your operators than all other languages.

"%" was used for integer divide in Pascal. I adopted it in the 1980s

when I needed distinct operators.

And I use "rem" for integer REMainder instead of "%"; "ixor" instead of

"^"; "ior" instead of "|" and "or" instead of "||". Maybe it's just me,

but I find them more readable.

Why, what do other languages use for integer divide?

>> So M will give 1.0, Q will give 1.5.

>>

>

> That's your two languages that are proudly the same syntax, but handle

> expressions in completely different ways?

Funnily enough, C and Python will also give 1.0 and 1.5 respectively.

But that of course is fine.


Dec 8, 2021, 12:13:59 PM

On 08/12/2021 17:58, Bart wrote:

> On 08/12/2021 15:36, David Brown wrote:

>> On 08/12/2021 12:55, Bart wrote:

>

>

>>

>> What is it with you and your campaign to claim everything C is bad, and

>> everything in your useless little private language is good?

>

> I said the rules are complex. You said they are simple. I disagreed, and

> illustrated my point with a chart.

A chart designed purely to make the simple rules of C appear complex -
it is FUD. You added those of your own language, which is utterly

irrelevant to C, purely to be able to claim that the rules of your

language are simple. Note that even if your language's rules are

simpler in some way, that does /not/ make C's rules complex!

>

>> In a binary arithmetic expression with integer types, any type smaller

>> than "int" is first converted to an "int".  Then if the two parts have

>> different types, they are converted to the bigger type with "unsigned"

>> types being treated as slightly bigger than the signed types.

>

> At least, they are simpler than the rules for type syntax. And not much

> simpler than the rules for charting the Mandelbrot Set!

>

>> It is /not/ hard. It is /not/ complex. You might not think it is

>> ideal, and I'd agree. But it really is not rocket science, and it

>> doesn't need a complicated table of inappropriate made-up types

>

> What made-up types? And why are they inappropriate?

There are no types of the names you used in C.  C has a perfectly good

set of fundamental types (regardless of what you personally might think

of them, or even what /I/ personally might think of them), and the rules

of C are given in terms of those types.

>

> Are you sure you aren't twisting and making up things yourself?

>

>> to make

>> it look more complicated.

>

> I think most people would be surprised at how untidy that chart is. /I/

> was.

You seem to find just about everything in C surprising.

But let's be clear here.  Do you think people familiar and experienced

with C programming will find C's rules surprising? Or do you just think

people who have never used C will find them surprising?

>

>

>>>> Does it print 1, or 1.5 ?

>>>

>>> My languages have two divide operators: "/" and "%".

>>>

>>> "%" means integer divide. "/" is supposed to be for floating point

>>> divide, but that's only on one language; the static one will still do

>>> integer divide when both operands are integers.

>>

>> Genius. Does it also use "and" as a keyword for the remainder after

>> division? Nothing says "simple" and "intuitive" like picking different

>> meanings for your operators than all other languages.

>

> "%" was used for integer divide in Pascal. I adopted it in the 1980s

> when I needed distinct operators.

>

> And I use "rem" for integer REMainder instead of "%"; "ixor" instead of

> "^"; "ior" instead of "|" and "or" instead of "||". Maybe it's just me,

> but I find them more readable.

>

> Why, what do other languages use for integer divide?

Most use /.  And in most languages, if they have % operator for

integers, it means modulus.  (Conventions differ regarding rounding and

signs when dividing by negative integers.)

>

>>> So M will give 1.0, Q will give 1.5.

>>>

>>

>> That's your two languages that are proudly the same syntax, but handle

>> expressions in completely different ways?

>

> Funnily enough, C and Python will also give 1.0 and 1.5 respectively.

>

> But that of course is fine.

I've no problem with different languages handling these in different

ways - just as I have no problem with different languages handling

integer promotions and implicit conversions in different ways. I merely

have a problem with claims that one method is "surprising" and another

somehow unsurprising, and I would question the benefit of making

languages designed specifically to be as similar in appearance and

syntax as possible while disagreeing on something that fundamental.

So it is /fine/ that your language promotes unsigned types to signed

types in mixed arithmetic. Those are the rules you chose, and if they

are clear and consistent, great. It is /wrong/ to say they are better,

or simpler, than other choices. OK?

Dec 8, 2021, 12:58:51 PM

On 08/12/2021 17:13, David Brown wrote:

> On 08/12/2021 17:58, Bart wrote:

>> On 08/12/2021 15:36, David Brown wrote:

>>> On 08/12/2021 12:55, Bart wrote:

>>

>>

>>>

>>> What is it with you and your campaign to claim everything C is bad, and

>>> everything in your useless little private language is good?

>>

>> I said the rules are complex. You said they are simple. I disagreed, and

>> illustrated my point with a chart.

> A chart designed purely to make the simple rules of C appear complex -

Does it correctly represent what you get when you apply those rules?
Then there's nothing underhand about it.

> it is FUD. You added those of your own language, which is utterly

> irrelevant to C, purely to be able to claim that the rules of your

> language are simple.

The chart is simpler partly because there is no split in the

type system between 32-bit and 64-bit types as there is in most desktop Cs.

But it is also simpler because I made it so.

>> What made-up types? And why are they inappropriate?

>

> There are no types of the names you used in C. C has a perfectly good

> set of fundamental types (regardless of what you personally might think

> of them, or even what /I/ personally might think of them), and the rules

> of C are given in terms of those types.

Using C's own type names would have

made for a rather wide and spaced-out chart.

(Or maybe I should have included char, signed char, unsigned char,

signed/unsigned long etc as well. Then it would really have been big

/and/ complex!)

That is a ludicrous quibble; this is a language-agnostic group, and

everyone here surely can figure out what those types represent.

Besides I wanted two charts for comparison; they need to use the same

annotations.

>>

>> Are you sure you aren't twisting and making up things yourself?

>>

>>> to make

>>> it look more complicated.

>>

>> I think most people would be surprised at how untidy that chart is. /I/

>> was.

>

> You seem to find just about everything in C surprising.

>

> But let's be clear here. Do you think people familiar and experienced

> with C programming will find C's rules surprising

Many will expect mixed signed/unsigned arithmetic to always be

done as unsigned. But according to that chart, only 44% of mixed

combinations are done as unsigned; most are signed.

> Or do you just think

> people who have never used C will find them surprising?

It has results that even people who have used C for many

years will find surprising.

>> Why, what do other languages use for integer divide?

>

> Most use /.

Python uses "/" for floating

point divide, and "//" for integer divide. Although Python and its "//"

came along some years after I chose "%".

So what else is there?

Wikipedia says (https://en.wikipedia.org/wiki/Division_(mathematics)):

"Names and symbols used for integer division include div, /, \, and %"

In my IL, I used DIV, IDIV for float and integer division, and IREM for

integer remainder. (Float remainder uses FMOD.)

I had once reserved "//" for designating rational numbers.

> And in most languages, if they have % operator for

> integers, it means modulus.

Looking at

https://en.wikipedia.org/wiki/Modulo_operation#In_programming_languages

it seems to be split between REM, MOD and %. I chose REM.

Some languages use more than one for a choice of behaviour.

I don't think "%" is the most common; where it is used, it's often for a

language with C-style syntax.

Dec 8, 2021, 2:05:22 PM

I post criticisms of quite a few languages I come across, although in

this group it might be largely C and Algol68 that come up.

C figures highly because I can't really get away from it; it's

everywhere. It's also the one whose purpose and use-cases most closely

match my own.

But it also annoys me that it is so deified despite being such a

dreadful language.

That is not surprising given when it was created, nearly 50 years ago.

But it hasn't moved on. Its aficionados seem to treat every misfeature

as an advantage.

> I'd react

> against anyone else deliberately and repeatedly writing nonsense about

> other topics too.

you don't have much of a choice about it; you have to rely on external

tools to make it useful. That's OK, many people are stuck with languages

they don't like.

But some of us can do something about it, yet that seems to annoy you

and you are constantly belittling people's efforts, especially mine.

Dec 9, 2021, 7:58:52 AM

On 08/12/2021 20:05, Bart wrote:

>

>

> I post criticisms of quite a few languages I come across, although in

> this group it might be largely C and Algol68 that come up.

>

> C figures highly because I can't really get away from it; it's

> everywhere. It's also the one whose purpose and use-cases most closely

> match my own.

>

> But it also annoys me that it is so deified despite being such a

> dreadful language.

This is where the communication problem lies - your annoyance is based
on two incorrect ideas.

First, you think C is "deified" - it is /not/. I really wish you could

understand that, as it would make discussions so much easier. You seem

to be fundamentally incapable of distinguishing between people who

understand C and use it (of which there are vast numbers), and people

who think C is the best language ever and completely flawless (of which

there are, to my knowledge, none).

Take me, as an example - because it's a lot easier to speak for myself

than for other people! I have a good understanding of the main C

language, and a subset of the standard library (there is a great deal in

it that I never use). I have read the standards, I keep up with changes

to the new standards. I have written a great deal of C code over the

years, almost all for small embedded systems (and a little for Linux).

I have used a wide range of C compilers for a wide range of

microcontrollers. Far and away the best C compiler I have seen is gcc,

which I know well and use for several targets.

I have worked in many different languages (I have at least some

experience with perhaps 20 programming languages, ranging from

functional programming, assembly, hardware description languages,

scripting languages, imperative languages, and more). I have used

assembly on a couple of dozen architectures over the years. I regularly

use several different languages for different types of programming.

I like programming in C. I think it is a good language for a lot of what I

do, and I think it is a good language for a lot of what other people do.

But I also think it is /not/ an appropriate language for many uses

people make of it, and it is not an appropriate language for people who

are not able or willing to learn it properly. It is a language that

trusts the programmer to know what they are doing - if you are not

worthy of that trust, don't use C.

I would drop it in a heartbeat if I had something better. I /do/ drop

it without a backwards glance when I have something better for the task

at hand. Thus on some embedded systems, C++ is more appropriate and I

use that. (On occasions that are thankfully rare now, assembly was a

better choice.) On PCs or bigger systems, I often use Python - but

sometimes other languages.

C is not perfect. I have never heard anyone suggest it is - though you,

Bart, repeatedly accuse people (including me) of saying so. There are a

number of sub-optimal aspects in C that there is quite general agreement

about, and a large number where some people think it could have been

better, but different people have different opinions. For the most

part, those who know about the language understand why things are the

way they are - whether it be for historical reasons, compatibility,

limitations of old systems, or for more modern reasons and uses. No one

is in any doubt that if a language were being designed today to do the

job of C, many aspects would be different. No one is in any doubt that

C is not perfect for their needs or desires. Nonetheless, it is a good

language that works well for many programmers.

It takes effort, skill, knowledge and experience to use any language

well. You need to understand the subset that is appropriate for your

usage - all languages, bar a few niche or specialist ones, have features

and flexibility well outside what makes sense for any particular

programmer's needs. You need to understand how to use the tools for the

language as an aid to developing good code, avoiding problems, and

getting good results in the end. If you fight with the tools, you will

fail. If you fight with the language, you will lose. If you avoid the

useful features of the language, you will only make life harder for

yourself. If you are determined to find fault and dislike in every

aspect of a language, you will not like the language and you will not be

productive with it.

Your second mistake is to think C is a "dreadful language". It is not.

You place artificial limitations on it that make it a poorer language,

you misunderstand its philosophy and design, you fail to make good use

of proper tools (and C was always intended to be used with helpful

tools), and in general your emphasis is on finding faults rather than

uses. You appear unable to believe that people can successfully use the

language.

There is certainly a place for criticism, especially constructive

criticism, in all languages - /none/ are anywhere close to being

universally perfect. But there is no benefit to anyone in a repetitive,

out of context and biased stream of abuse and negativity towards a

language (or anything else, for that matter).

>

> That is not surprising given when it was created, nearly 50 years ago.

> But it hasn't moved on. Its aficionados seem to treat every misfeature

> as an advantage.

everyone else. I don't treat things /you/ see as misfeatures that way.

In reality, there are very few misfeatures in C that cannot be avoided

by good use of tools, good general development practices, and

occasionally a little extra effort. This is the same in all programming

languages, though of course the details vary. For some reason, you

insist on avoiding good tools (and avoiding good use of tools), and

prefer to find ways to misuse every feature of C that you can.

(The primary reason I have for moving to C++ is to gain additional

features, not to move away from misfeatures.)

>

>> I'd react

>> against anyone else deliberately and repeatedly writing nonsense about

>> other topics too.

>

> You mention lots of things you don't like about C. But it sounds like

> you don't have much of a choice about it; you have to rely on external

> tools to make it useful. That's OK, many people are stuck with languages

> they don't like.

perfect for every task). And with good tools used well, it is a very

pleasant and effective language to work with. The same applies to any

good software developer with any language - you find a language that is

suitable for the task and fits your style, you find good tools that help

with the job, and development processes that work well. If you don't

have that, you won't like what you are doing and won't do it well. The

choice of programming language is irrelevant outside its suitability for

the task.

Perhaps you are just envious that I can happily and successfully work

with C, while you have failed? That would be a shame - I am happy, not

envious, that you have a language that you enjoy working with. And I

think it would be better if you avoided dealing with a language that you

clearly don't appreciate or enjoy.

>

> But some of us can do something about it, yet that seems to annoy you

> and you are constantly belittling people's efforts, especially mine.

>

I don't belittle your effort or your language - I belittle your

attitude to your language and to C, your egotism and narcissistic

viewpoint. When you say you prefer to code in your own language, and

had success with it, that's fine. When you say your language is an

alternative to C, you are wrong. When you say it is "better" than C,

you are wrong. When you say a particular given aspect is "better" than

the equivalent aspect of C, then you /might/ be /subjectively/ right -

i.e., it could be better in some ways for some people or some use-cases.

(And I have regularly agreed on such points.)