
Why no binary support in sprintf()?


Rick C. Hodgin

unread,
Aug 29, 2018, 8:03:30 PM8/29/18
to
I was looking today for a standard sprintf() format code to
print a binary number in the range 0..63, but could not find
any.

Is there one? If not, why not? That would seem to be a
fundamental need in many software apps, and especially so
back in the 70s/80s when C was created.

--
Rick C. Hodgin

Joe Pfeiffer

unread,
Aug 29, 2018, 8:08:57 PM8/29/18
to
I would presume because it has octal for PDP-11 users and hex for
everyone else. Actually printing out strings of 1's and 0's is only for
masochists.

Kenny McCormack

unread,
Aug 29, 2018, 8:38:39 PM8/29/18
to
In article <1b1sagl...@pfeifferfamily.net>,
IOW, Ricky to a T.

BTW, the easiest way to get base 2 output (the term "binary" is wrong
here), is to use sprintf to get octal, then use a lookup table on each
octal digit.
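A minimal sketch of that octal-then-lookup idea (the function name and the buffer-size assumption are mine, not from the post; the caller must supply a buffer big enough for the expanded digits):

```c
#include <stdio.h>

/* Convert an unsigned value to base-2 text by first formatting it as
   octal, then expanding each octal digit to three bits via a table. */
char *to_base2(char *buf, unsigned n)
{
    static const char *bits[8] = {
        "000", "001", "010", "011", "100", "101", "110", "111"
    };
    char oct[32];
    char *p = buf;
    int i;

    sprintf(oct, "%o", n);              /* e.g. 42 -> "52" */
    for (i = 0; oct[i] != '\0'; ++i)
        p += sprintf(p, "%s", bits[oct[i] - '0']);
    return buf;                         /* 42 -> "101010" */
}
```

Note that the leading octal digit expands to a full three bits, so e.g. 8 comes out as "001000" rather than "1000" -- a trimming pass would be needed if leading zeros are unwanted.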

--
If you think you have any objections to anything I've said above, please
navigate to this URL:
http://www.xmission.com/~gazelle/Truth
This should clear up any misconceptions you may have.

Chris M. Thomasson

unread,
Aug 29, 2018, 8:42:52 PM8/29/18
to
Check this cra% out:

https://groups.google.com/d/topic/comp.lang.c++/7u_rLgQe86k/discussion

It prints out two symbols for 0 and 1, well, 0 = 0 and 1 = 1...

;^)

Rick C. Hodgin

unread,
Aug 29, 2018, 9:07:43 PM8/29/18
to
C is a pretty low-level language. Having access to bits is
not outside of the realm of possibility for such a low-level
language which also supports things like <<=, >>=, << and >>,
as well as bitwise operators.

As for me, I do OS development, as well as low-level work on
compilers, assemblers, and many related utilities. I need
access to binary bits to be able to easily visualize some of
the things I work on. I doubt I am unique in that need.

I wound up writing this:

// Use as sprintf_bin(buffer, 5, tnValue);
s8* sprintf_bin(s8* buffer, s32 tnWidth, u32 tnBinary)
{
s32 lnI;
u32 lnMask = 1 << (tnWidth - 1);

// Iterate through each bit
for (lnI = 0; lnMask != 0; ++lnI, lnMask >>= 1)
buffer[lnI] = ((tnBinary & lnMask) == 0 ? '0' : '1');

// NULL-terminate
buffer[lnI] = 0;

// Pass-thru input pointer
return(buffer);
}

--
Rick C. Hodgin

Joe Pfeiffer

unread,
Aug 29, 2018, 10:54:41 PM8/29/18
to
"Rick C. Hodgin" <rick.c...@gmail.com> writes:

> On 08/29/2018 08:08 PM, Joe Pfeiffer wrote:
>> "Rick C. Hodgin" <rick.c...@gmail.com> writes:
>>
>>> I was looking today for a standard sprintf() format code to
>>> print a binary number in the range 0..63, but could not find
>>> any.
>>>
>>> Is there one? If not, why not? That would seem to be a
>>> fundamental need in many software apps, and especially so
>>> back in the 70s/80s when C was created.
>>
>> I would presume because it has octal for PDP-11 users and hex for
>> everyone else. Actually printing out strings of 1's and 0's is only for
>> masochists.
>
> C is a pretty low-level language. Having access to bits is
> not outside of the realm of possibility for such a low-level
> language which also supports things like <<=, >>=, << and >>,
> as well as bitwise operators.
>
> As for me, I do OS development, as well as low-level work on
> compilers, assemblers, and many related utilities. I need
> access to binary bits to be able to easily visualize some of
> the things I work on. I doubt I am unique in that need.

<snip>

Maybe not unique, but pretty close. Reading either octal or hex (I
learned octal first -- doing assembly code on a CDC 6400!) and seeing
bits is so automatic after about 15 minutes of exposure that it's hard
to imagine anyone actually needing to see the bit strings. When I was
teaching assembly code, I'd go over the hex-binary and octal-binary
conversions in much less than a lecture, and any reasonably competent
student never needed a refresher.

Decimal-binary, of course, took a week or so going through the
conversion algorithms.

jlad...@itu.edu

unread,
Aug 29, 2018, 11:32:26 PM8/29/18
to
On Wednesday, August 29, 2018 at 5:08:57 PM UTC-7, Joe Pfeiffer wrote:
I had a use case for printing raw binary. I was debugging a device in an embedded system which communicated over an I2C interface. The device configuration and data output were specified by several adjoined bit fields of unusual widths, totaling 64 bits in all(!). It was easier to see what I was doing in binary than in hex. As I recall, endianness was also an issue. The data sheet for the device was written big-endian but my SoC was little-endian. I wanted to read left to right along with the data sheet.
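That use case can be sketched as a dumper that splits a 64-bit register at the field boundaries a data sheet lists, MSB first (the function name and the field widths in the test are illustrative, not from any real part):

```c
#include <stdint.h>

/* Render a 64-bit value as binary text, with a space at each field
   boundary given in `widths` (listed MSB-first, as data sheets do).
   The widths are assumed to sum to 64. */
void dump_fields(char *out, uint64_t v, const int *widths, int nfields)
{
    int pos = 0, bit = 64, f, i;

    for (f = 0; f < nfields; ++f) {
        for (i = 0; i < widths[f]; ++i) {
            --bit;                                  /* next bit, MSB first */
            out[pos++] = (v >> bit) & 1u ? '1' : '0';
        }
        if (f != nfields - 1)
            out[pos++] = ' ';                       /* field separator */
    }
    out[pos] = '\0';
}
```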

Rick C. Hodgin

unread,
Aug 30, 2018, 12:03:43 AM8/30/18
to
When working on a kernel, you have various structures that are
bit-filled in various ways. Sometimes one bit, sometimes many
bits, but seeing them in binary is a good way to see their
various flags and whatnot.

Specifically, I am using it today for an assembly language tool
for showing opcodes and their sub-byte bit patterns, such as
for register allocation/use, flags on or off (such as
sign-extended or zero-extended), and so on.

--
Rick C. Hodgin

David Brown

unread,
Aug 30, 2018, 3:09:54 AM8/30/18
to
/Octal/ is for masochists, especially with the idiotic choice of "0" as
the prefix for octal constants.

Binary is for /real/ programmers. That's why we need "0b" binary
constants in C, like we have in C++ and a good many C compilers. And we
need binary support in printf!

Exaggeration aside, sometimes showing output in binary /is/ useful - I
have done it often. But it is not uncommon to want to divide up the
output with separators - perhaps every 4 bits, but maybe in more
application-specific patterns.
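That separator idea can be sketched like this (the function name, and grouping from the least-significant end, are my assumptions):

```c
/* Render the low `width` bits of `n` into `out`, inserting `sep`
   every `group` bits counted from the least-significant end,
   e.g. width 8, group 4 gives "1010_1101". */
void format_bin_grouped(char *out, unsigned n, int width, int group, char sep)
{
    int i, pos = 0;

    for (i = width - 1; i >= 0; --i) {
        out[pos++] = (n >> i) & 1u ? '1' : '0';
        if (i != 0 && i % group == 0)   /* boundary between groups */
            out[pos++] = sep;
    }
    out[pos] = '\0';
}
```

Application-specific patterns (say, 3-3-2) would just need a list of group widths instead of a single `group` value.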

mark.b...@gmail.com

unread,
Aug 30, 2018, 3:53:46 AM8/30/18
to
On Thursday, 30 August 2018 01:03:30 UTC+1, Rick C. Hodgin wrote:
> I was looking today for a standard sprintf() format code to
> print a binary number in the range 0..63, but could not find
> any.
>
> Is there one?

What does the standard say? I know you claim dyslexia, but if you
hope to be developing a language which is compatible with C but
improves on it, you need to be able to use the documentation.

> If not, why not?

40 years ago I had a colleague who would answer "what?" and "how?"
questions, but claimed immunity on "why?". However, the nearest thing
to a canonical answer would be if it were discussed in the standard
(I believe there are sections on rationale - I'm not particularly
familiar with the standard, but I'm not claiming to write compilers).
Otherwise Joe Pfeiffer has given a fair pragmatic summary.

> That would seem to be a
> fundamental need in many software apps, and especially so
> back in the 70s/80s when C was created.

When I started on DecSystem-10 systems, octal was the lingua franca
(this may relate to the 36-bit architecture) and that also applied
to the Prime 50-series machines I worked on in the 1980s (32-bit, but
derived from a 16-bit architecture - machine addressing was to 16-bit
"half-words" and a number of 16-bit concepts had stuck, with numbers
often represented as 1 binary value and 5 octals!). Hex subsequently
became the standard representation.
I suspect that for most people, it's easier to deal with than 32 bits
which can be easily "blurred".

The binary option is clearly valued by some (see David Brown's post)
and sufficiently so to be provided in C++ , some C implementations
and the language I mainly work in, Java.

David Brown

unread,
Aug 30, 2018, 4:38:16 AM8/30/18
to
On 30/08/18 09:53, mark.b...@gmail.com wrote:
> On Thursday, 30 August 2018 01:03:30 UTC+1, Rick C. Hodgin wrote:
>> I was looking today for a standard sprintf() format code to
>> print a binary number in the range 0..63, but could not find
>> any.
>>
>> Is there one?
>
> What does the standard say? I know you claim dyslexia, but if you
> are hope to be developing a language which is compatible with C but
> improves on it, you need to be able to use the documentation.
>

There is no conversion specifier for binary in the standard printf (or
scanf) families. I haven't seen it as an extension in any other printf
functions either.

>> If not, why not?
>
> 40 years ago I had a colleague who would answer "what?" and "how?"
> questions, but claimed immunity on "why?". However, the nearest thing
> to a canonical answer would be if it were discussed in the standard
> (I believe there are sections on rationale - I'm not particularly
> familiar with the standard, but I'm not claiming to write compilers).
> Otherwise Joe Pfeiffer has given a fair pragmatic summary.
>

There is a rationale document for C99 (and maybe also for C11)
describing some of the reasons for the changes compared to previous
standards. But there are only occasional "why?" answers given in the
standards themselves.

>> That would seem to be a
>> fundamental need in many software apps, and especially so
>> back in the 70s/80s when C was created.
>
> When I started on DecSystem-10 systems, octal was the lingua franca
> (this may relate to the 36-bit architecture) and that also applied
> to the Prime 50-series machines I worked on in the 1980s (32-bit, but
> derived from a 16-bit architecture - machine addressing was to 16-bit
> "half-words" and a number of 16-bit concepts had stuck, with numbers
> often represented as 1 binary value and 5 octals!). Hex subsequently
> became the standard representation.
> I suspect that for most people, it's easier to deal with than 32 bits
> which can be easily "blurred".

Octal is long outdated as a useful number system - but working with old
systems is part of C's forte. Generally, hex is the more sensible
choice since the computing world moved almost universally into
power-of-two sizes and 8-bit bytes. (And almost all exceptions have at
least multiple of four bit sizes.)

>
> The binary option is clearly valued by some (see David Brown's post)
> and sufficiently so to be provided in C++ , some C implementations
> and the language I mainly work in, Java.
>

C++ standardised 0b0101 style binary constants as they have been
implemented in many compilers for quite some time (in C and C++).
Compilers aimed at embedded development usually had extensions for some
kind of binary constants, and "0b" had emerged as the most common. It
is also more convenient in C++ since there is also a digit separator
there - it's a lot easier to read and write 0b0110'1010'1101'0110 than
0b0110101011010110.

AFAIK C++ doesn't have any form of binary format output, but the "cout
<< ..." style output is easily customised.

I'm not entirely sure why C has not adopted "0b" binary constants. I
think it's just that they are not used very often, and you can get
similar effects with sets of macros - the C standards are very
conservative about adding new features. On the other hand, compilers
already implement 0b constants, there are no compatibility issues, and
it would only involve a couple of new paragraphs in one section of the
standard.

As for output in binary format, I've used it mostly on smaller
embedded systems - systems where I usually don't want "printf" anyway
because it is too big and slow. Output is more likely to be via
specific functions like "debugHex2", "debugString", "debugBin8" sending
data directly on a serial port. So lack of printf binary support is not
something I feel strongly about - it would merely be a "nice to have" feature.

Ben Bacarisse

unread,
Aug 30, 2018, 6:07:11 AM8/30/18
to
"Rick C. Hodgin" <rick.c...@gmail.com> writes:

> On 08/29/2018 10:54 PM, Joe Pfeiffer wrote:
>> "Rick C. Hodgin" <rick.c...@gmail.com> writes:
>>
>>> On 08/29/2018 08:08 PM, Joe Pfeiffer wrote:
>>>> "Rick C. Hodgin" <rick.c...@gmail.com> writes:
>>>>
>>>>> I was looking today for a standard sprintf() format code to
>>>>> print a binary number in the range 0..63, but could not find
>>>>> any.
<snip>
> When working on a kernel, you have various structures that are
> bit-filled in various ways. Sometimes one bit, sometimes many
> bits, but to see them in binary is a good way to see their vari-
> ous flags and what not.

It's almost always better to print flags and short bit-fields in some
more mnemonic way than as raw bits. The flags and fields mean something,
and I would usually prefer to print some reminder of the meanings even
if it's just single letters.

For debugging during development (when you may not want to keep changing
the "to-string" routines to track changes), I've found hex to be good
enough.

--
Ben.

Malcolm McLean

unread,
Aug 30, 2018, 6:35:23 AM8/30/18
to
My use case is Huffman codes. However they are not really binary numbers,
they are binary strings. Often they appear in my source code as string
literals - not the most efficient representation, but easier to check
against a specification.

Bart

unread,
Aug 30, 2018, 7:34:07 AM8/30/18
to
On 30/08/2018 01:03, Rick C. Hodgin wrote:
> I was looking today for a standard sprintf() format code to
> print a binary number in the range 0..63, but could not find
> any.

For such a specific use, just create a function like that below.



---------------------------
#include <stdio.h>

const char* bin64(unsigned int n) {
    static const char* table[]={
        "000000", // 0
        "000001", // 1
        "000010", // 2
        "000011", // 3
        "000100", // 4
        "000101", // 5
        "000110", // 6
        "000111", // 7
        "001000", // 8
        "001001", // 9
        "001010", // 10
        "001011", // 11
        "001100", // 12
        "001101", // 13
        "001110", // 14
        "001111", // 15
        "010000", // 16
        "010001", // 17
        "010010", // 18
        "010011", // 19
        "010100", // 20
        "010101", // 21
        "010110", // 22
        "010111", // 23
        "011000", // 24
        "011001", // 25
        "011010", // 26
        "011011", // 27
        "011100", // 28
        "011101", // 29
        "011110", // 30
        "011111", // 31
        "100000", // 32
        "100001", // 33
        "100010", // 34
        "100011", // 35
        "100100", // 36
        "100101", // 37
        "100110", // 38
        "100111", // 39
        "101000", // 40
        "101001", // 41
        "101010", // 42
        "101011", // 43
        "101100", // 44
        "101101", // 45
        "101110", // 46
        "101111", // 47
        "110000", // 48
        "110001", // 49
        "110010", // 50
        "110011", // 51
        "110100", // 52
        "110101", // 53
        "110110", // 54
        "110111", // 55
        "111000", // 56
        "111001", // 57
        "111010", // 58
        "111011", // 59
        "111100", // 60
        "111101", // 61
        "111110", // 62
        "111111"  // 63
    };

    if (n>=64) return "******";
    return table[n];
}

int main(void){
    int i;
    for (i=0; i<=70; ++i) printf("%d: %s\n",i,bin64(i));
}

--
bart

Thiago Adams

unread,
Aug 30, 2018, 9:12:50 AM8/30/18
to

This is for all the people who replied to Rick's topic.

Rick will never stop spamming because he
gets attention here.

We are not winning this fight because we
are not working together.

I don't have any problem answering Rick on-topic,
but first he needs to stop spamming. Clearly he is
not doing that.


Scott Lurndal

unread,
Aug 30, 2018, 9:17:54 AM8/30/18
to
"Rick C. Hodgin" <rick.c...@gmail.com> writes:
>On 08/29/2018 08:08 PM, Joe Pfeiffer wrote:
>> "Rick C. Hodgin" <rick.c...@gmail.com> writes:

>> I would presume because it has octal for PDP-11 users and hex for
>> everyone else. Actually printing out strings of 1's and 0's is only for
>> masochists.
>
>C is a pretty low-level language. Having access to bits is
>not outside of the realm of possibility for such a low-level
>language which also supports things like <<=, >>=, << and >>,
>as well as bitwise operators.

Listen to the Professor. Masochists, indeed. If it had
been the least bit useful, it would have been added 40+ years ago.

>
>As for me, I do OS development, as well as low-level work on
>compilers, assemblers, and many related utilities. I need
>access to binary bits to be able to easily visualize some of
>the things I work on. I doubt I am unique in that need.

Most people have no problem reading hexadecimal numbers and
figuring out which bits are set and clear.

On the other hand, there is always the korn shell:

$ printf '0x%x\n' $(( (2#1101010100 << 22) | (2#011 << 16) | (2#0010 <<12) | (2#0010 << 8) | (2#100 << 5) | 2#11111 ))
0xd503229f

jadi...@gmail.com

unread,
Aug 30, 2018, 9:19:14 AM8/30/18
to
On Thursday, August 30, 2018 at 3:09:54 AM UTC-4, David Brown wrote:
> On 30/08/18 02:08, Joe Pfeiffer wrote:
I've actually had a legitimate use for octal codes. If you ever have to work
with avionics equipment with ARINC-429 interfaces, it's right up front and
center.

Rick C. Hodgin

unread,
Aug 30, 2018, 9:50:32 AM8/30/18
to
On 08/30/2018 09:12 AM, Thiago Adams wrote:
> Rick will never stop spam because he
> has attention here.

Thiago, it's not spam.

After the rapture, remember these posts and come back to
them and read them and then realize I was truly concerned
about your eternal soul.

You can still be saved at that time, but it will be much,
much more difficult. Don't take the mark required to buy
or sell, and ask Jesus to forgive your sin and feel your
life changed in so doing.

My concerns are truly for your eternal future, for your
lives both here in this world, and in eternity. It is
not spam, it is love applied.

And, FWIW, I would NEVER EVER EVER have believed anything
I was saying ... until it happened to me in 2004
when I was saved. In my salvation came real change, and
in that real change came this new focus and purpose in
my life.

You can look at any aspect of my life and see that I am
not simply coming here to spam. What you see here is a
true reflection of the change which is evident in all
areas of my life. That change makes me care for all of
the people around me, including those online in our
social networks and constructs, such as software and
hardware development.

I teach here to reach the people who will not go to a
church and hear the message. I come to where the people
are and teach them the best I can because I care about
them, their life, their eternity, their soul. I want
people to thrive and prosper and love and help and be
the people God created us to be.

Seriously. If you'd like to come and meet me and speak
to me personally and ask me questions face to face and
see my life and so on ... I live in Indianapolis, IN.
If you're ever on a connecting flight passing through
my city, let me know. I'll meet you at the airport and
you can see "Rick C. Hodgin" in person. I can tell you
about my multitude of failures in serving the Lord, as
well as my goals and intents in so doing.

Here is a major software project I've been working on
since 2012:

http://www.visual-freepro.org/

You'll see it's not spam. It's a changed life, and it
is teaching people here who may never have had the
opportunity to hear the truth to be given the chance. I
hope someday you'll come to see it for what it is, the
one rescue line God is giving you to be saved from the
final day of eternal judgment.

It is important. It is more important than other things,
which is why I teach it.

--
Rick C. Hodgin

mark.b...@gmail.com

unread,
Aug 30, 2018, 9:58:43 AM8/30/18
to
On Thursday, 30 August 2018 14:17:54 UTC+1, Scott Lurndal wrote:
> "Rick C. Hodgin" <rick.c...@gmail.com> writes:
...
> Listen to the Professor.

No. Rick is the Very Reverend. Fir is the Professor. Do keep up!

> Masochists, indeed. If it had
> been the least useful, it would have been added 40+ years ago.

Everyone is marching out of step except Rick and Bart.

Philipp Klaus Krause

unread,
Aug 30, 2018, 10:12:25 AM8/30/18
to
His question starting this thread was about a function from the C
standard library, and thus on-topic.

While the original question is a bit narrow, I more generally wonder why
C has no support for converting numbers in bases other than 010, 10 and
0x10 to strings.

After all, we _do_ have some support for converting from strings to
integers in strtol(), etc.
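Indeed, the parsing direction already covers arbitrary bases: strtol() takes an explicit base argument (2 to 36), so reading base-2 strings works today; it is only the formatted-output direction that the standard library lacks. A tiny wrapper (name mine) makes the point:

```c
#include <stdlib.h>

/* strtol's third argument selects the base; 2 parses binary digits. */
long parse_base2(const char *s)
{
    return strtol(s, NULL, 2);
}
```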

Philipp

Bart

unread,
Aug 30, 2018, 10:24:09 AM8/30/18
to
And, apparently, C++, Java and even some C versions:

On 30/08/2018 08:53, mark.b...@gmail.com wrote:
> The binary option is clearly valued by some (see David Brown's post)
> and sufficiently so to be provided in C++ , some C implementations
> and the language I mainly work in, Java.

Must be a lot of masochists around.

Scott Lurndal

unread,
Aug 30, 2018, 10:26:58 AM8/30/18
to
mark.b...@gmail.com writes:
>On Thursday, 30 August 2018 14:17:54 UTC+1, Scott Lurndal wrote:
>> "Rick C. Hodgin" <rick.c...@gmail.com> writes:
>...
>> Listen to the Professor.
>
>No. Rick is the Very Reverend. Fir is the Professor. Do keep up!

And Joe is a _real_ Professor, albeit retired.

Scott Lurndal

unread,
Aug 30, 2018, 10:28:22 AM8/30/18
to
Or if you've ever worked with 12-bit or 36-bit DEC systems, where Octal
was de rigueur.

Thiago Adams

unread,
Aug 30, 2018, 10:42:07 AM8/30/18
to
On Thursday, August 30, 2018 at 11:12:25 AM UTC-3, Philipp Klaus Krause wrote:
> Am 30.08.2018 um 15:12 schrieb Thiago Adams:
> >
> > This is for all people that replied Rick's topic.
> >
> > Rick will never stop spam because he
> > has attention here.
> >
> > We are not winning this fight because we
> > are not working together.
> >
> > I don't have any problems in answer Rick on-topics
> > but first he needs stop spam. Clearly he is not doing
> > that.
> >
> >
>
> His question starting this thread was about a function from the C
> standard library, and thus on-topic.

Rick chooses groups where he gets attention for
on-topic posts. He works with C and C++, and these
two groups are the most affected by his spam.

I believe he would prefer to stop spamming and communicate
with other people, instead of just spamming and never
receiving feedback on other topics.

What he has now is working for him: he posts spam, no one
can stop him, and he still participates in on-topic
discussions.

He doesn't have any reason or motivation to stop the spam.

Rick C. Hodgin

unread,
Aug 30, 2018, 10:46:23 AM8/30/18
to
On 08/30/2018 10:41 AM, Thiago Adams wrote:
> Rick will choose groups where he has attention
> on on-topic. He works with C and C++ and them these
> two groups are the most affect by his spam.

I have interests in C, C++, and hardware and OS kernel
design. Those are the groups where I go and read the
posts and participate, and also reach out to the people
there.

But these actions are not isolated. When I am at a gas
station I speak to people. When I am at the grocery
store, I will speak to people, etc.

It is always teaching, trying to give people information
they don't currently have.

I pray someday you come to see it, Thiago. You are a
valuable and remarkable creation of God, and I would
like to see you in Heaven.

--
Rick C. Hodgin

Thiago Adams

unread,
Aug 30, 2018, 1:07:16 PM8/30/18
to
On Thursday, August 30, 2018 at 11:46:23 AM UTC-3, Rick C. Hodgin wrote:
> On 08/30/2018 10:41 AM, Thiago Adams wrote:
> > Rick will choose groups where he has attention
> > on on-topic. He works with C and C++ and them these
> > two groups are the most affect by his spam.
>
> I have interests in C, C++, and hardware and OS kernel
> design. Those are the groups where I go and read the
> posts and participate, and also reach out to the people
> there.
>
> But these actions are not isolated. When I am at a gas
> station I speak to people. When I am at the grocery
> store, I will speak to people, etc.
>
> It is always teaching, trying to give people information
> they don't currently have.

What you are doing here is not teaching.

Every day, seeing your off-topic messages is like
finding spray paint on the public street I use.

This is vandalism. You do something you know no one likes.

Google Groups doesn't give an option to delete or hide the
message. The message stays there for a week in bold font,
just like a painting that I have to wait to disappear.

You should have your streets and your mailbox painted with
unsolicited advertisements; then you would understand
better what you are doing.


Keith Thompson

unread,
Aug 30, 2018, 1:52:36 PM8/30/18
to
Thiago Adams <thiago...@gmail.com> writes:
> On Thursday, August 30, 2018 at 11:46:23 AM UTC-3, Rick C. Hodgin wrote:
[...]
>> It is always teaching, trying to give people information
>> they don't currently have.
>
> What you are doing here is not teaching.

Please do not bypass my filters by reposting Rick's off-topic posts.

--
Keith Thompson (The_Other_Keith) ks...@mib.org <http://www.ghoti.net/~kst>
Working, but not speaking, for JetHead Development, Inc.
"We must do something. This is something. Therefore, we must do this."
-- Antony Jay and Jonathan Lynn, "Yes Minister"

Joe Pfeiffer

unread,
Aug 30, 2018, 2:18:38 PM8/30/18
to
When just about the entire instruction set is made up of three bit
fields (as was the case with the PDP-11), octal works great.

Joe Pfeiffer

unread,
Aug 30, 2018, 2:21:46 PM8/30/18
to
mark.b...@gmail.com writes:

> On Thursday, 30 August 2018 14:17:54 UTC+1, Scott Lurndal wrote:
>> "Rick C. Hodgin" <rick.c...@gmail.com> writes:
> ...
>> Listen to the Professor.
>
> No. Rick is the Very Reverend. Fir is the Professor. Do keep up!

I think he was referring to me as the (retired) professor, in this
case.

already...@yahoo.com

unread,
Aug 30, 2018, 2:28:23 PM8/30/18
to
On Thursday, August 30, 2018 at 9:18:38 PM UTC+3, Joe Pfeiffer wrote:

>
> When just about the entire instruction set is made up of three bit
> fields (as was the case with the PDP-11), octal works great.

I don't understand why.
The PDP-11 is a 16-bit computer. 16 is divisible by 4 and not divisible by 3. So, hex should be the natural choice.

Scott Lurndal

unread,
Aug 30, 2018, 2:40:29 PM8/30/18
to
The predecessor PDP-6, PDP-8, PDP-10 and PDP-12 lines all had word
sizes divisible by three (12, 18, 36). It was natural for DEC
to carry the octal notation to the PDP-11 and later the VAX.

Bart

unread,
Aug 30, 2018, 2:43:14 PM8/30/18
to
I wondered that too.

Apparently, instruction opcodes use 15 bits (5 fields of 3 bits), with
the top bit selecting byte or word operation (8 or 16 bits).

Other processors may not be so tidy. E.g. the related PDP-10, where the
groups might be 9, 4, 1, 4 and 18 bits starting from the lsb. However, that
still used octal extensively. Binary wouldn't really help here; you
would need to use custom formatting of an instruction to split up the
fields for display.

Same with x86, which although normally using hex, has many opcode bytes
grouped as 2-3-3 (from msb), with other groupings too. (Here I do often
use binary as well as custom formatting.)

Keith Thompson

unread,
Aug 30, 2018, 2:58:36 PM8/30/18
to
Because just about the entire instruction set is made up of three
bit fields. For example, the PDP-11 has 8 CPU registers (R0-R7),
and many instructions have one or two 3-bit fields that refer to
register numbers. There are also 8 addressing modes, also encoded
in 3 bits.

The 3-bit fields are right-aligned, so if you represent a 16-bit
instruction in octal, each octal digit after the first (which is 0
or 1) usually corresponds to a well defined field of the instruction.
In many cases, setting the high-order bit to 1 makes the instruction
operate on 8-bit bytes rather than on 16-bit words.
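A concrete sketch of that alignment, using the well-known PDP-11 encoding of MOV (opcode 01 octal): MOV R2, R3 assembles to 010203 octal, and each trailing octal digit maps straight onto a 3-bit mode or register field. The helper name is mine:

```c
/* Split a 16-bit PDP-11 double-operand instruction into its fields.
   For 010203 (octal): opcode 01 (MOV, word), source mode 0 register 2,
   destination mode 0 register 3. */
void pdp11_fields(unsigned inst, unsigned *op, unsigned *src, unsigned *dst)
{
    *op  = (inst >> 12) & 017;   /* opcode, including the byte/word bit */
    *src = (inst >> 6)  & 077;   /* source: mode(3) | register(3) */
    *dst =  inst        & 077;   /* destination: mode(3) | register(3) */
}
```

Conveniently, C's own octal literals (leading 0) let the encoding be written exactly as it appears in the processor handbook.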

already...@yahoo.com

unread,
Aug 30, 2018, 3:02:05 PM8/30/18
to
PDP-6 -> PDP-10 line is not related to PDP-11. Word-addressable mainframes vs byte-addressable mini.
PDP-8 -> PDP-12 also appear unrelated, except for also being mini.

Keith Thompson

unread,
Aug 30, 2018, 3:08:05 PM8/30/18
to
Doesn't VAX assembly language tend to use hexadecimal rather than
octal? PDP-11 instructions have 3-bit fields, but the VAX has 16
general-purpose registers (compared to 8 for the PDP-11).

Jorgen Grahn

unread,
Aug 30, 2018, 4:00:50 PM8/30/18
to
On Thu, 2018-08-30, Joe Pfeiffer wrote:
> "Rick C. Hodgin" <rick.c...@gmail.com> writes:
>
>> I was looking today for a standard sprintf() format code to
>> print a binary number in the range 0..63, but could not find
>> any.
>>
>> Is there one? If not, why not? That would seem to be a
>> fundamental need in many software apps, and especially so
>> back in the 70s/80s when C was created.
>
> I would presume because it has octal for PDP-11 users and hex for
> everyone else. Actually printing out strings of 1's and 0's is only for
> masochists.

I suspect that nails it, yes. I also suspect spelling out binary
wasn't popular even back in the 1970s, especially not in the
subcultures C came from. Maybe the 8-bit enthusiasts did it more.

I think I've occasionally written code to format binary numbers,
though. And I keep a table like this on a post-it by my computer,
because I can never remember what 0xd looks like:

8 000 0
9 001 1
a 10 010 2
b 11 011 3
c 12 100 4
d 13 101 5
e 14 110 6
f 15 111 7

/Jorgen

--
// Jorgen Grahn <grahn@ Oo o. . .
\X/ snipabacken.se> O o .

David Brown

unread,
Aug 30, 2018, 4:05:54 PM8/30/18
to
No one knows how to stop Rick from making religious posts in technical
groups. If there was a way to stop it, it would have been done long ago.

But we know of many ways to /provoke/ such posts. One guaranteed way is
to make posts like yours complaining about it.

If you don't want to make technical on-topic replies to a technical
on-topic thread started by Rick, that's your choice. I can respect that
- you are not alone in that choice. Others make the choice that any
appropriate C question or post makes a thread they can contribute to,
regardless of who posts it and what their history might be. That is
also fine, and a choice to be respected. It is up to the individual to
decide if it is the poster that is important, or the post.

However, posts complaining about a poster, or complaining about others
replying, are a sure-fire way of getting exactly the kind of unwanted
posts that cause the problem.

So please, choose to reply to on-topic posts with on-topic replies, or
don't reply at all. Don't make matters worse. (Once a thread has gone
off-topic, as this sub-thread has, you can post anything you like.)

Scott Lurndal

unread,
Aug 30, 2018, 4:16:27 PM8/30/18
to
They're related because they're all made by the Digital Equipment Corporation
at various plants in and around eastern Mass.

Hence a corporate uniformity across all lines.

Scott Lurndal

unread,
Aug 30, 2018, 4:27:31 PM8/30/18
to
Keith Thompson <ks...@mib.org> writes:
>sc...@slp53.sl.home (Scott Lurndal) writes:
>> already...@yahoo.com writes:
>>>On Thursday, August 30, 2018 at 9:18:38 PM UTC+3, Joe Pfeiffer wrote:
>>>
>>>>
>>>> When just about the entire instruction set is made up of three bit
>>>> fields (as was the case with the PDP-11), octal works great.
>>>
>>>I don't understand why.
>>>PDP-11 is 16-bit computer. 16 is divisible by 4 and not divisible by
>>>3. So, hex should be a natural choice.
>>
>> The predecessor PDP-6, PDP-8, PDP-10 and PDP-12 lines all had word
>> sizes divisible by three (12, 18, 36). It was natural for DEC
>> to carry the octal notation to the PDP-11 and later the VAX.
>
>Doesn't VAX assembly language tend to use hexadecimal rather than
>octal? PDP-11 instructions have 3-bit fields, but the VAX has 16
>general-purpose registers (compared to 8 for the PDP-11).
>

A mix of octal and hex - often depending on the past history
of the programmer. Programmers new to Macro-32 would use hex;
programmers coming over from Macro-11 often continued to use octal.
Macro-32 supported them all.


200$: MOVZWL #^X8000,R0 ;RETURN INVALID FP NUMBER

(from a focal interpreter source file).

^O is the octal prefix, ^X is the base-16 prefix
^A is the ASCII prefix and ^B is the binary prefix.

lack of prefix is decimal.

Note also that VMS 1.0 was running RSX-11
utilities (e.g. PIP) in compatibility mode; there
were only a few native utilities until the fully
native VMS 2.0 release.

Joe Pfeiffer

unread,
Aug 30, 2018, 6:21:00 PM8/30/18
to
Like I said: three bit fields. It had eight registers and eight
addressing modes, so the basic format of an instruction was typically
some variation on a four bit op code followed by four three bit fields
(that's a grotesque oversimplification, but it will do). So writing the
instruction in octal gave you the field contents pretty much directly.

Joe Pfeiffer

unread,
Aug 30, 2018, 6:22:07 PM8/30/18
to
PDP-11 yes, VAX no. The VAX had four bit fields where the PDP-11 had
three bit, so hex was used in notating the VAX.

Joe Pfeiffer

unread,
Aug 30, 2018, 6:23:15 PM8/30/18
to
Keith Thompson <ks...@mib.org> writes:

> sc...@slp53.sl.home (Scott Lurndal) writes:
>> already...@yahoo.com writes:
>>>On Thursday, August 30, 2018 at 9:18:38 PM UTC+3, Joe Pfeiffer wrote:
>>>
>>>>
>>>> When just about the entire instruction set is made up of three bit
>>>> fields (as was the case with the PDP-11), octal works great.
>>>
>>>I don't understand why.
>>>PDP-11 is 16-bit computer. 16 is divisible by 4 and not divisible by
>>>3. So, hex should be a natural choice.
>>
>> The predecessor PDP-6, PDP-8, PDP-10 and PDP-12 lines all had word
>> sizes divisible by three (12, 18, 36). It was natural for DEC
>> to carry the octal notation to the PDP-11 and later the VAX.
>
> Doesn't VAX assembly language tend to use hexadecimal rather than
> octal? PDP-11 instructions have 3-bit fields, but the VAX has 16
> general-purpose registers (compared to 8 for the PDP-11).

Yes. VAX used hex notation for exactly that reason.

Scott

unread,
Aug 30, 2018, 9:28:03 PM8/30/18
to
On Thu, 30 Aug 2018 22:05:44 +0200, David Brown
<david...@hesbynett.no> wrote:

>No one knows how to stop Rick from making religious posts in technical
>groups. If there was a way to stop it, it would have been done long ago.
...
>So please, choose to reply to on-topic posts with on-topic replies, or
>don't reply at all. Don't make matters worse. (Once a thread has gone
>off-topic, as this sub-thread has, you can post anything you like.)

The positive aspect here is that it's motivating me to dig out that
old GPL program I used to use, Newsproxy by name, dust it off and put
it back to work. It does all the twit filtering that my newsreader
does not. Unfortunately I no longer have the resources to build a
Windows VC++ project, so I'll have to saw off the GUI and wrap the
guts up in a Unix style daemon. Should be a nice weekend project (as
if I don't have enough of those already).

If it turns out half decent I'll post a source link, OK? It's no help
to the Google crowd, but if you can't be arsed to use NNTP like it was
meant to be used, I can't help you.

John Bode

unread,
Aug 30, 2018, 11:16:52 PM8/30/18
to
On Wednesday, August 29, 2018 at 7:03:30 PM UTC-5, Rick C. Hodgin wrote:
> I was looking today for a standard sprintf() format code to
> print a binary number in the range 0..63, but could not find
> any.
>
> Is there one?

No.

> If not, why not? That would seem to be a
> fundamental need in many software apps, and especially so
> back in the 70s/80s when C was created.
>

I can think of at least two reasons: hex and octal are much easier to read than straight
binary (it is *ridiculously* easy to transpose or miscount bits), and back in the days of
hardcopy terminals, anything that reduced output was a win.

Hex dumps aren't *that* hard to read (as long as you know which direction the bytes go).

Joe Pfeiffer

unread,
Aug 30, 2018, 11:37:09 PM8/30/18
to
"Rick C. Hodgin" <rick.c...@gmail.com> writes:

> On 08/29/2018 10:54 PM, Joe Pfeiffer wrote:
>> "Rick C. Hodgin" <rick.c...@gmail.com> writes:
>>
>>> On 08/29/2018 08:08 PM, Joe Pfeiffer wrote:
>>>> "Rick C. Hodgin" <rick.c...@gmail.com> writes:
>>>>
>>>>> I was looking today for a standard sprintf() format code to
>>>>> print a binary number in the range 0..63, but could not find
>>>>> any.
>>>>>
>>>>> Is there one? If not, why not? That would seem to be a
>>>>> fundamental need in many software apps, and especially so
>>>>> back in the 70s/80s when C was created.
>>>>
>>>> I would presume because it has octal for PDP-11 users and hex for
>>>> everyone else. Actually printing out strings of 1's and 0's is only for
>>>> masochists.
>>>
>>> C is a pretty low-level language. Having access to bits is
>>> not outside of the realm of possibility for such a low-level
>>> language which also supports things like <<=, >>=, << and >>,
>>> as well as bitwise operators.
>>>
>>> As for me, I do OS development, as well as low-level work on
>>> compilers, assemblers, and many related utilities. I need
>>> access to binary bits to be able to easily visualize some of
>>> the things I work on. I doubt I am unique in that need.
>>
>> <snip>
>>
>> Maybe not unique, but pretty close. Reading either octal or hex (I
>> learned octal first -- doing assembly code on a CDC 6400!) and seeing
>> bits is so automatic after about 15 minutes of exposure that it's hard
>> to imagine anyone actually needing to see the bit strings. When I was
>> teaching assembly code, I'd go over the hex-binary and octal-binary
>> conversions in much less than a lecture, and any reasonably competent
>> student never needed a refresher.
>>
>> Decimal-binary, of course, took a week or so going through the
>> conversion algorithms.
>
> When working on a kernel, you have various structures that are
> bit-filled in various ways. Sometimes one bit, sometimes many
> bits, but to see them in binary is a good way to see their vari-
> ous flags and what not.
>
> Specifically, I am using it today for an assembly language tool
> for showing opcodes and their sub-byte bit patterns, such as
> for register allocation/use, flags on or off (such as for sign-
> extended, or zero-extended), and so on.

I have written a number of device drivers, and play with projects
written in PIC assembly code as a hobby. So yes, I understand fields
with weird numbers of bits at bizarre offsets. Actually dumping a
word by bits is almost never helpful to me. Anything I'm going to look
at more than once I extract the fields using shifts and masks, and then
either print the contents in hex or decode and print as keyword values
as appropriate. Counting bits, like arithmetic in general, is something
computers do much better than I do, so I have the computer do it.

Melzzzzz

unread,
Aug 31, 2018, 12:38:56 AM8/31/18
to
I have killfiled him and only see replies here and there. I can't stop
him from spamming, but I don't have to read it...



--
press any key to continue or any other to quit...

Reinhardt Behm

unread,
Aug 31, 2018, 1:11:50 AM8/31/18
to
AT Thursday 30 August 2018 21:19, jadi...@gmail.com wrote:

> On Thursday, August 30, 2018 at 3:09:54 AM UTC-4, David Brown wrote:
>> On 30/08/18 02:08, Joe Pfeiffer wrote:
>> > "Rick C. Hodgin" <r...@gmail.com> writes:
>> >
>> >> I was looking today for a standard sprintf() format code to
>> >> print a binary number in the range 0..63, but could not find
>> >> any.
>> >>
>> >> Is there one? If not, why not? That would seem to be a
>> >> fundamental need in many software apps, and especially so
>> >> back in the 70s/80s when C was created.
>> >
>> > I would presume because it has octal for PDP-11 users and hex for
>> > everyone else. Actually printing out strings of 1's and 0's is only
>> > for masochists.
>> >
>>
>> /Octal/ is for masochists, especially with the idiotic choice of "0" as
>> the prefix for octal constants.
>>
>> Binary is for /real/ programmers. That's why we need "0b" binary
>> constants in C, like we have in C++ and a good many C compilers. And we
>> need binary support in printf!
>>
>> Exaggeration aside, sometimes showing output in binary /is/ useful - I
>> have done it often. But it is not uncommon to want to divide up the
>> output with separators - perhaps every 4 bits, but maybe in more
>> application-specific patterns.
>
> I've actually had a legitimate use for octal codes. If you ever have to
> work with avionics equipment with ARINC-429 interfaces, it's right up
> front and center.

But then you need octal in reverse.

--
Reinhardt

David Brown

unread,
Aug 31, 2018, 3:10:26 AM8/31/18
to
On 30/08/18 16:24, Bart wrote:
> On 30/08/2018 14:58, mark.b...@gmail.com wrote:
>> On Thursday, 30 August 2018 14:17:54 UTC+1, Scott Lurndal wrote:
>>> "Rick C. Hodgin" <rick.c...@gmail.com> writes:
>> ...
>>> Listen to the Professor.
>>
>> No. Rick is the Very Reverend. Fir is the Professor. Do keep up!
>>
>>> Masochists, indeed. If it had
>>> been the least useful, it would have been added 40+ years ago.
>>
>> Everyone is marching out of step except Rick and Bart.
>
> And, apparently, C++, Java and even some C versions:

There's a point of confusion here, which was perhaps my fault. C++ and
many C compiler support binary constants (like 0b1101) - they don't
support binary output in printf (or C++ cout <<).

(I can't answer for Java.)

>
> On 30/08/2018 08:53, mark.b...@gmail.com wrote:
>> The binary option is clearly valued by some (see David Brown's post)
>> and sufficiently so to be provided in C++ , some C implementations
>> and the language I mainly work in, Java.
>
> Must be a lot of masochists around.
>

David Brown

unread,
Aug 31, 2018, 3:22:40 AM8/31/18
to
On 30/08/18 21:07, Keith Thompson wrote:
> sc...@slp53.sl.home (Scott Lurndal) writes:
>> already...@yahoo.com writes:
>>> On Thursday, August 30, 2018 at 9:18:38 PM UTC+3, Joe Pfeiffer wrote:
>>>
>>>>
>>>> When just about the entire instruction set is made up of three bit
>>>> fields (as was the case with the PDP-11), octal works great.
>>>
>>> I don't understand why.
>>> PDP-11 is 16-bit computer. 16 is divisible by 4 and not divisible by
>>> 3. So, hex should be a natural choice.
>>
>> The predecessor PDP-6, PDP-8, PDP-10 and PDP-12 lines all had word
>> sizes divisible by three (12, 18, 36). It was natural for DEC
>> to carry the octal notation to the PDP-11 and later the VAX.
>
> Doesn't VAX assembly language tend to use hexadecimal rather than
> octal? PDP-11 instructions have 3-bit fields, but the VAX has 16
> general-purpose registers (compared to 8 for the PDP-11).
>

There is a modern (well, /relatively/ modern) microcontroller cpu, the
"msp430", that is strongly based on the PDP-11 ISA. It doesn't have the
historical connection with PDP or DEC - people come to the msp430 from
other microcontrollers, not mainframes or minis. In msp430 assembly or
C programming you use decimal, hex, or binary where convenient - not octal.

Personally, I cannot see the connection between having 3-bit fields in
an instruction set (as you often have in cpus with 8 registers) and
wanting to use octal. It makes no sense - you program these things in
C, or assembly, not manually constructing the bit fields for machine
code instructions. You write "MOV.W R3, R4" to move register 3 into
register 4. The registers are really names, not numbers. And you don't
use base 32 when working with processors with 32 registers.

But when you have a data word size of 12 bits, it's natural to choose
either 3-bit or 4-bit groupings for the data. And once that decision is
made, you get stuck with the habit.

David Brown

unread,
Aug 31, 2018, 3:29:57 AM8/31/18
to
This all depends on what you want to look at, and how you want to view
the data.

Quick, what is the pattern in this hex stream :

e0 70 38 1c 0e 07

Look at as a series of binary outputs, it's much easier.

11100000
01110000
00111000
00011100
00001110
00000111

Raw binary output, especially with large numbers of digits, is seldom
helpful. But /sometimes/ it is very useful.

Ben Bacarisse

unread,
Aug 31, 2018, 6:54:01 AM8/31/18
to
David Brown <david...@hesbynett.no> writes:
<snip>
> Personally, I cannot see the connection between having 3-bit fields in
> an instruction set (as you often have in cpus with 8 registers) and
> wanting to use octal. It makes no sense

There seems to be a strong anti-octal feeling around here! Octal/hex --
it makes no difference to me. Both are easy to read in my opinion, so
"making sense" is not really the point. All I need is any reason, even
a tiny one, for the balance to tip one way or the other and I'll go with
that.

By the way, on a 16-bit machine (like the PDP-11) I really like the fact
that the sign bit stands out in octal. 177140: negative, 077140:
positive.

> - you program these things in
> C, or assembly, not manually constructing the bit fields for machine
> code instructions. You write "MOV.W R3, R4" to move register 3 into
> register 4. The registers are really names, not numbers. And you don't
> use base 32 when working with processors with 32 registers.

But that's an argument against needing any readable representation of
the bits! If you never have to read an instruction stream other than a
nicely disassembled one then, sure, there's nothing to be said for
octal -- and hex is shorter for other things.

> But when you have a data word size of 12 bits, it's natural to choose
> either 3-bit or 4-bit groupings for the data. And once that decision is
> made, you get stuck with the habit.

This is true. I think habit is a big part of it and DEC had many
machines where octal was a very natural choice.

--
Ben.

Malcolm McLean

unread,
Aug 31, 2018, 7:10:48 AM8/31/18
to
On Friday, August 31, 2018 at 11:54:01 AM UTC+1, Ben Bacarisse wrote:
> David Brown <david...@hesbynett.no> writes:
> <snip>
> > Personally, I cannot see the connection between having 3-bit fields in
> > an instruction set (as you often have in cpus with 8 registers) and
> > wanting to use octal. It makes no sense
>
> There seems to be a strong anti-octal feeling around here! Octal/hex --
> it makes no difference to me. Both are easy to read in my opinion, so
> "making sense" is not really the point. All I need is any reason, even
> a tiny one, for the balance to tip one way or the other and I'll go with
> that.
>
A list of hexadecimal numbers of any length will look like hex, because
you've got a six in sixteen chance that any digit will be a letter.
However, octal looks like denary: you can read quite a few numbers before
you realise that they must be octal because 8 and 9 are missing.

David Brown

unread,
Aug 31, 2018, 7:58:22 AM8/31/18
to
On 31/08/18 12:53, Ben Bacarisse wrote:
> David Brown <david...@hesbynett.no> writes:
> <snip>
>> Personally, I cannot see the connection between having 3-bit fields in
>> an instruction set (as you often have in cpus with 8 registers) and
>> wanting to use octal. It makes no sense
>
> There seems to be a strong anti-octal feeling around here! Octal/hex --
> it makes no difference to me. Both are easy to read in my opinion, so
> "making sense" is not really the point. All I need is any reason, even
> a tiny one, for the balance to tip one way or the other and I'll go with
> that.

My gripe with octal is not, in fact, the octal itself. If someone wants
to write octal, that's their choice. But I don't see it as a
particularly useful feature. Indeed, I cannot think of any situation in
all my programming life (assembly, C, or anything else) where I have
thought octal would be useful - but I /have/ thought that base 4 would
sometimes be useful.

The thing that irritates me about octal is the notation. The use of
"0" as the prefix in C - that's just /wrong/. "023" should be 23. I'd
be fine with "0o23" (though not "0O23", for obvious reasons), "8#23",
"octal(23)", or perhaps "0k23" (avoiding the dangerous "o").

>
> By the way, on a 16-bit machine (like the PDP-11) I really like the fact
> that the sign bit stands out in octal. 177140: negative, 077140:
> positive.
>

Try decimal: -416 negative, 32352 positive. I guess if you have worked
with octal a lot, it seems natural. I've worked lots with decimal, hex
and binary, but not octal. I can see advantages of these three number
bases - I am simply at a loss to see much point in using octal.

>> - you program these things in
>> C, or assembly, not manually constructing the bit fields for machine
>> code instructions. You write "MOV.W R3, R4" to move register 3 into
>> register 4. The registers are really names, not numbers. And you don't
>> use base 32 when working with processors with 32 registers.
>
> But that's an argument against needing any readable representation of
> the bits! If you never have to read an instruction stream other than a
> nicely disassembled one then, sure, there's nothing to be said for
> octal -- and hex is shorter for other things.

Yes, exactly. But for some reason this turns up as an argument for
using octal with some old PDP machines - "it has 8 registers, so octal
is logical" or "the instructions are encoded in fields of 3 bit width,
so octal is logical". I don't believe /you/ have said anything like
that, but others here have.

>
>> But when you have a data word size of 12 bits, it's natural to choose
>> either 3-bit or 4-bit groupings for the data. And once that decision is
>> made, you get stuck with the habit.
>
> This is true. I think habit is a big part of it and DEC had many
> machines where octal was a very natural choice.
>

And since I don't have the habit, it is an unnatural choice for me!


David Brown

unread,
Aug 31, 2018, 8:01:48 AM8/31/18
to
Exactly. Spot the bug here:

static const int patterns[] = {
3210, 2103, 1032, 0321
};


Rick C. Hodgin

unread,
Aug 31, 2018, 8:17:23 AM8/31/18
to
Found it. :-)

CAlive has defined explicit formats (0y, 0n, 0o, 0d or nothing,
and 0x):

0y1010110 // Binary
0n0123 // Base-4 (for DNA research)
0o01234567 // Octal
0d0123456789 // Decimal
0x1234567890abcdef // Hexadecimal

There was also a format specified on one of the assembly
forums years ago I always liked:

2x1010110 // Binary
4x0123 // Base-4 (for DNA research)
8x01234567 // Octal
10x0123456789 // Decimal
16x1234567890abcdef // Hexadecimal

I had forgotten about that format until just now on this post.
I will add support for it to CAlive as well.

--
Rick C. Hodgin

Thiago Adams

unread,
Aug 31, 2018, 8:19:14 AM8/31/18
to
I don't want to extend this off-topic discussion.
I have already made my point; this is something
that needs collective thinking.



David Brown

unread,
Aug 31, 2018, 8:38:36 AM8/31/18
to
On 31/08/18 14:17, Rick C. Hodgin wrote:
> On 8/31/2018 8:01 AM, David Brown wrote:
>> On 31/08/18 13:10, Malcolm McLean wrote:
>>> A list of hexadecimal numbers of any length will look like hex, because
>>> you've got a six in sixteen chance that any digit will be a letter.
>>> However octal looks like denary, you can read quite a few numbers before
>>> you realise that they must be octal because 8 and 9 are missing.
>>
>> Exactly. Spot the bug here:
>>
>> static const int patterns[] = {
>> 3210, 2103, 1032, 0321
>> };
>
> Found it. :-)
>
> CAlive has defined explicit formats (0y, 0n, 0o, 0d or nothing,
> and 0x):
>
> 0y1010110 // Binary

Why not 0b1010110, as used elsewhere?

> 0n0123 // Base-4 (for DNA research)

I really can't see that being a big usage. Surely DNA programs would
prefer strings like "CGAT" rather than "0n0123" ?

And there are plenty of other uses of quaternary than just DNA. I have
used it for defining character fonts on a little display with 2-bit
greyscale pixels.

> 0o01234567 // Octal
> 0d0123456789 // Decimal
> 0x1234567890abcdef // Hexadecimal
>
> There was also a format specified on one of the assembly
> forums years ago I always liked:
>
> 2x1010110 // Binary
> 4x0123 // Base-4 (for DNA research)
> 8x01234567 // Octal
> 10x0123456789 // Decimal
> 16x1234567890abcdef // Hexadecimal
>
> I had forgotten about that format until just now on this post.
> I will add support for it to CAlive as well.
>

If I were making a language, I'd have convenient support for decimal (no
prefix), hex (0x123), binary (0b1010), and then a general format to
handle any unusual cases users might want. That might be the one you
have given, or Ada's 4#123# format, or something similar.


mark.b...@gmail.com

unread,
Aug 31, 2018, 8:41:13 AM8/31/18
to
On Friday, 31 August 2018 13:19:14 UTC+1, Thiago Adams wrote:

> I don't want to extend this off-topic,
> I have already made my point, this is something
> that needs a collective thinking.

Fortunately (or not) Usenet is more a benign anarchy than a democracy,
so collective thinking on this sort of thing is unlikely to progress
beyond an agreement to differ on the best way forward.

I'm of the opinion that it's reasonable to engage with Rick on topical
matters, and the best thing for his off topic stuff is to simply ignore
it - there doesn't seem to be any value in challenging him on it, as he
just takes that as a cue for more drivel.

As others have said, that's just my opinion - I am not directing you to
do (or not do) anything.

Bart

unread,
Aug 31, 2018, 8:54:17 AM8/31/18
to
Which version doesn't really need the comments?

Only 0x is acceptable because its use is so widespread.

See sig for working examples.

--
bart

c:\mapps>type test.m

proc start=
println 0x100
println 2x100
println 3x100
println 4x100
println 5x100
println 6x100
println 7x100
println 8x100
println 9x100
println 10x100
println 11x100
println 12x100
println 13x100
println 14x100
println 15x100
println 16x100
end

c:\mapps>test
256
4
9
16
25
36
49
64
81
100
121
144
169
196
225
256

c:\mapps>type test2.m

proc start=
println 0x100.1
println 2x100.1
println 3x100.1
println 4x100.1
println 5x100.1
println 6x100.1
println 7x100.1
println 8x100.1
println 9x100.1
println 10x100.1
println 11x100.1
println 12x100.1
println 13x100.1
println 14x100.1
println 15x100.1
println 16x100.1
end

c:\mapps>test2
256.062500
4.500000
9.333333
16.250000
25.200000
36.166667
49.142857
64.125000
81.111111
100.100000
121.090909
144.083333
169.076923
196.071429
225.066667
256.062500

c:\mapps>

And:

c:\mapps>type test3.m

proc start=
println 0x1p3
println 2x1e11
println 3x1e10
println 4x1e3
println 5x1e3
println 6x1e3
println 7x1e3
println 8x1e3
println 9x1e3
println 10x1e3
println 11x1e3
println 12x1e3
println 13x1e3
println 14x1e3
println 15x1p3
println 16x1p3
end

c:\mapps>test3
4096.000000
8.000000
27.000000
64.000000
125.000000
216.000000
343.000000
512.000000
729.000000
1000.000000
1331.000000
1728.000000
2197.000000
2744.000000
3375.000000
4096.000000

c:\mapps>

Bart

unread,
Aug 31, 2018, 8:58:49 AM8/31/18
to
On 31/08/2018 12:58, David Brown wrote:
> On 31/08/18 12:53, Ben Bacarisse wrote:
>> David Brown <david...@hesbynett.no> writes:
>> <snip>
>>> Personally, I cannot see the connection between having 3-bit fields in
>>> an instruction set (as you often have in cpus with 8 registers) and
>>> wanting to use octal. It makes no sense
>>
>> There seems to be a strong anti-octal feeling around here! Octal/hex --
>> it makes no difference to me. Both are easy to read in my opinion, so
>> "making sense" is not really the point. All I need is any reason, even
>> a tiny one, for the balance to tip one way or the other and I'll go with
>> that.
>
> My gripe with octal is not, in fact, the octal itself. If someone wants
> to write octal, that's their choice. But I don't see it as a
> particularly useful feature. Indeed, I cannot think of any situation in
> all my programming life (assembly, C, or anything else) where I have
> thought octal would be useful - but I /have/ thought that base 4 would
> sometimes be useful.
>
> The thing that irritates me about octal is the notation.  The use of
> "0" as the prefix in C - that's just /wrong/. "023" should be 23. I'd
> be fine with "0o23" (though not "0O23", for obvious reasons), "8#23",
> "octal(23)", or perhaps "0k23" (avoiding the dangerous "o").

You seem to be taking over my schtick...

Rick C. Hodgin

unread,
Aug 31, 2018, 9:02:41 AM8/31/18
to
On 8/31/2018 8:38 AM, David Brown wrote:
> On 31/08/18 14:17, Rick C. Hodgin wrote:
>> 0y1010110 // Binary
> Why not 0b1010110, as used elsewhere?

I support that format for legacy purposes, but to me it looks
like it may have been intended to be hexadecimal and someone
forgot to add the leading 0x. I actually feel the same way
about my 0d below and considered using 0s for that reason (the
s being for standard), but it was confusing to me as well.

Since decimal doesn't normally require any support, it is a
low point to contend with, but I can see binary being used
much more often.

I don't particularly like either format. I do like the ones
that were suggested on the assembly group years ago, with the
Nx prefix, where N is the base.

>> 0n0123 // Base-4 (for DNA research)
>
> I really can't see that being a big usage. Surely DNA programs would
> prefer strings like "CGAT" rather than "0n0123" ?

I have used it in my DNA research. That's why it's in there.
:-)

> And there are plenty of other uses of quaternary than just DNA. I have
> used it for defining character fonts on a little display with 2-bit
> greyscale pixels.

I will support other formats if they are standard.

>> 0o01234567 // Octal
>> 0d0123456789 // Decimal
>> 0x1234567890abcdef // Hexadecimal
>>
>> There was also a format specified on one of the assembly
>> forums years ago I always liked:
>>
>> 2x1010110 // Binary
>> 4x0123 // Base-4 (for DNA research)
>> 8x01234567 // Octal
>> 10x0123456789 // Decimal
>> 16x1234567890abcdef // Hexadecimal
>>
>> I had forgotten about that format until just now on this post.
>> I will add support for it to CAlive as well.
>
> If I were making a language, I'd have convenient support for decimal (no
> prefix), hex (0x123), binary (0b1010), and then a general format to
> handle any unusual cases users might want. That might be the one you
> have given, or Ada's 4#123# format, or something similar.

In CAlive, decimal doesn't require the format, but you can explicitly
use it to overcome certain bugs:

static const int patterns[] = {
3210,
2103,
1032,
0d_0321
^^^^^^^
};

Note: CAlive also allows any combination of _ and ' to inhabit
numbers, so you can do 0xb'8000 so as to divide out the
hex on 16-bit boundaries, etc. Same as 0xb_8000, and it
works for any number including base 10: 123_456.321'415,
evaluates to 123456.321415, as does 123'456.321_415, etc.
Within a number, _ and ' are ignored.

--
Rick C. Hodgin

Rick C. Hodgin

unread,
Aug 31, 2018, 9:10:38 AM8/31/18
to
On 8/31/2018 8:41 AM, mark.b...@gmail.com wrote:
> On Friday, 31 August 2018 13:19:14 UTC+1, Thiago Adams wrote:
>> I don't want to extend this off-topic,
>> I have already made my point, this is something
>> that needs a collective thinking.
> [snip]
> I'm of the opinion that it's reasonable to engage with Rick on topical
> matters, and the best thing for his off topic stuff is to simply ignore
> it - there doesn't seem to be any value in challenging him on it, as he
> just takes that as a cue for more drivel.

Not more drivel. When people post things which are inconsistent
with Biblical teaching, I correct the incorrect things being posted,
replacing them with Biblical knowledge.

I know many people won't believe the things I teach, and will even
rise up as Peter Cheung did and attack me and issue death threats
and the like, or as Leigh Johnston does still in comp.lang.c++ by
making mocking posts pretending to be me.

My hope is that after the rapture, that people will then remember
these posts and think to themselves, "Hmm... maybe what Rick was
talking about was true?" and then come back and re-read my posts
with a truth-seeking heart, rather than a fully dismissing heart.
This would allow them then (when they are truth-seeking) to receive
the teaching.

> As others have said, that's just my opinion - I am not directing you to
> do (or not do) anything.

I give this information as a witness and testimony for Jesus Christ.
I am not doing it for other reasons. For everyone in these groups,
it will serve as something that aids them in their faith and walk,
or gives a testimony against them on the day of their judgment as
Jesus says, "I sent my servant Rick to teach you about me, and you
not only rejected me, but you mocked him as well. Why?"

People will have no excuse for that and there will be much weeping
and gnashing of teeth because they will then, at that time only,
know what they have lost.

I would spare everyone that day of judgment by giving them the
opportunity to hear the teaching today, even if it means me taking
losses here in this world, because this world goes on for a tick
of the clock, but eternity never ends. I have my eyes set on our
eternal future, and not merely here upon this Earth.

-----
And as I've stated numerous times, I would never have believed
any of these things I teach had God not drawn me to His Son where
I asked forgiveness and was changed. The change took me completely
by surprise, and I am amazed by it fully even 14+ years later.

There really is new life in Jesus Christ. The old things change,
the new comes into existence, the old man dies, the new man is
born.

--
Rick C. Hodgin

Bart

unread,
Aug 31, 2018, 9:23:43 AM8/31/18
to
Seriously? You still support leading zero octals? WHY?

Anyway the problem is in spotting this potential error so that you can
apply the fix. Having that fix (which will make most people retch)
doesn't help in spotting or avoiding the pitfall.

Where there are columns of numbers that need to line up, you don't want
to have to insert extra blank space on the off-chance that some numbers
will need such a prefix. The idea of the leading zeros might be so that
each number is represented in the same width.

--
bart

Rick C. Hodgin

unread,
Aug 31, 2018, 9:38:32 AM8/31/18
to
On 8/31/2018 9:23 AM, Bart wrote:
>> In CAlive, decimal doesn't require the format, but you can explicitly
>> use it to overcome certain bugs:
>>
>>      static const int patterns[] = {
>>           3210,
>>           2103,
>>           1032,
>>        0d_0321
>>        ^^^^^^^
>
> Seriously? You still support leading zero octals? WHY?

Because there's 40+ years of source code that already has it.

CAlive doesn't look to break existing code. It looks to add
to C where C is lacking, to enable existing code to work as
it has all along, and to give people more modern programming
features and new abilities if they want to use those coding
aspects in their new code design, or maintenance updates.

> Anyway the problem is in spotting this potential error so that you can apply
> the fix. Having that fix (which will make most people retch) doesn't help in
> spotting or avoiding the pitfall.

Yes. Human developers are still required.

> Where there are columns of numbers that need to line up, you don't want to
> have to insert extra blank space on the off-chance that some numbers will
> need such a prefix. The idea of the leading zeros might be so that each
> number is represented in the same width.

I do a lot of visual programming in that way for that reason. It is
so very difficult for me to read certain types of code at times. By
giving my eyes some spatial cues, it becomes an order of magnitude or
more easier.

It's why my code is always so spaced out. I rarely have issues in
the moment when I'm there authoring new code, but when I come back
to it the next day, or a year later... then it is very hard without
the formatting.

I also type very fast, so adding the extra formatting doesn't take
too long. But I think even if I typed slowly, I would still do it.
It makes it that much faster for maintenance later.

--
Rick C. Hodgin

Bart

unread,
Aug 31, 2018, 9:54:12 AM8/31/18
to
On 31/08/2018 14:02, Rick C. Hodgin wrote:
> On 8/31/2018 8:38 AM, David Brown wrote:
>> On 31/08/18 14:17, Rick C. Hodgin wrote:
>>>      0y1010110           // Binary
>> Why not 0b1010110, as used elsewhere?
>
> I support that format for legacy purposes, but to me it looks
> like it may have been intended to be hexadecimal and someone
> forgot to add the leading 0x.  I actually feel the same way
> about my 0d below and considered using 0s for that reason (the
> s being for standard), but it was confusing to me as well.
>
> Since decimal doesn't normally require any support, it is a
> low point to contend with, but I can see binary being used
> much more often.
>
> I don't particularly like either format.  I do like the ones
> that were suggested on the assembly group years ago, with the
> Nx prefix, where N is the base.
>
>>>      0n0123              // Base-4 (for DNA research)
>>
>> I really can't see that being a big usage.  Surely DNA programs would
>> prefer strings like "CGAT" rather than "0n0123" ?
>
> I have used it in my DNA research.  That's why it's in there.
> :-)

That doesn't add up.

DNA sequences are usually longer than the 32 elements you will get in a
64-bit integer.

Even if these constants are to build big integers, you wouldn't normally
represent them as such in program source code.

I would expect the main requirement is to have a large array of such
elements, and the simplest way is to use a string of 8-bit characters.

But if the arrays are very long and you want to cut down on memory, then
you really want arrays of 2-bit elements. I suggest that would be a
useful language feature (once you have arrays of single bits of course).

(I don't have that in my static language, but the other can just about
manage it. There, an array of 1 billion 2-bit elements takes 250MB.
Since each element is unsigned, display of each is as 0,1,2,3 - in
decimal. Base-4 doesn't come into it, unless you convert a short
sequence into an ordinary integer and want to display that.)
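An array of 2-bit elements of the kind described above can be sketched in standard C; the names (`Packed2`, `p2_set`, `p2_get`) are illustrative, not from any existing implementation:

```c
#include <stddef.h>
#include <stdint.h>

/* An array of n 2-bit elements (values 0..3), packed four to a byte. */
typedef struct {
    uint8_t *bytes;   /* backing store, (n + 3) / 4 bytes */
    size_t   n;       /* element count */
} Packed2;

/* Store value v (0..3) at index i. */
static void p2_set(Packed2 *a, size_t i, unsigned v)
{
    unsigned sh = (unsigned)(i % 4) * 2;
    a->bytes[i / 4] = (uint8_t)((a->bytes[i / 4] & ~(3u << sh)) | ((v & 3u) << sh));
}

/* Fetch the element at index i. */
static unsigned p2_get(const Packed2 *a, size_t i)
{
    return (a->bytes[i / 4] >> ((i % 4) * 2)) & 3u;
}
```

With four elements per byte, a billion elements fit in the 250MB mentioned above.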

--
bart

Bart

Aug 31, 2018, 10:05:09 AM
On 31/08/2018 14:38, Rick C. Hodgin wrote:
> On 8/31/2018 9:23 AM, Bart wrote:
>>> In CAlive, decimal doesn't require the format, but you can explicitly
>>> use it to overcome certain bugs:
>>>
>>>      static const int patterns[] = {
>>>           3210,
>>>           2103,
>>>           1032,
>>>        0d_0321
>>>        ^^^^^^^
>>
>> Seriously? You still support leading zero octals? WHY?
>
> Because there's 40+ years of source code that already has it.

That virtually no-one uses, and most C programmers are probably not even
aware of until it bites.

Just get rid of it and stop perpetuating incredibly bad ideas from C.

> CAlive doesn't look to break existing code.  It looks to add
> to C where C is lacking,

Then you're just adding to the already hefty baggage.

If you must allow existing C source to be compiled (which I've delved
into and it is a nightmare), then at least deprecate the use of
significant leading zeros (as Python has done, I believe).

So 0321 would be an error. You either have to drop the zero, or use a
prefix or other modifier to clarify what it has to be.

> Yes.  Human developers are still required.

Get rid of significant leading zeros, and less human effort needs to be
expended in catching pointless errors.

--
bart

John Bode

Aug 31, 2018, 10:05:22 AM
Oh, yeah, sure, sometimes binary's exactly what's needed. It's just in my experience,
that was the exception, not the rule.

I honestly don't know why standard C never had a binary conversion specifier, except
that it just wasn't considered that important vs. hex and octal.
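In the absence of a standard specifier, the usual workaround is a small helper along these lines (a minimal sketch; the name `to_binary` is illustrative):

```c
/* Format v as exactly `bits` binary digits (plus a NUL) into buf.
   buf must have room for bits + 1 chars.  For the 0..63 range that
   started this thread, bits = 6. */
static void to_binary(char *buf, unsigned bits, unsigned long v)
{
    buf[bits] = '\0';
    for (unsigned i = 0; i < bits; i++)
        buf[bits - 1 - i] = (char)('0' + ((v >> i) & 1u));
}
```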

Ben Bacarisse

Aug 31, 2018, 10:05:34 AM
The data are rarely random. The actual proportion of A-Fs in a typical
hex representation is much lower than the theoretical 37.5%. For text
files, it's about 11% and for executable files, it's about 20% (from a
small survey on my machine). Obviously if you have lots of encrypted
files (and not further ASCII encoded), you will get close to the 37%.
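The theoretical figure follows directly from counting nibbles: on uniformly random bytes, 6 of the 16 nibble values print as a letter. A sketch of the measurement over a byte buffer:

```c
#include <stddef.h>

/* Fraction of hex digits that print as A-F for the given bytes:
   0.375 for uniformly random data, less for text or code. */
static double hex_letter_fraction(const unsigned char *buf, size_t n)
{
    size_t letters = 0;
    for (size_t i = 0; i < n; i++) {
        if ((buf[i] >> 4)   >= 10) letters++;
        if ((buf[i] & 0xFu) >= 10) letters++;
    }
    return n ? (double)letters / (double)(2 * n) : 0.0;
}
```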

> Exactly. Spot the bug here:
>
> static const int patterns[] = {
> 3210, 2103, 1032, 0321
> };

This illustrates something else entirely. It may be hard to spot, but
the base is being flagged up here. Malcolm's point applies when the
base is /not/ explicit.
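Spelled out, for anyone who has not yet been bitten: the fourth initializer above is silently octal.

```c
/* The fourth initializer has a leading zero, so it is octal:
   0321 == 3*64 + 2*8 + 1 == 209, not 321. */
static const int patterns[] = {
    3210, 2103, 1032, 0321
};
```

A conforming compiler accepts this without a murmur; patterns[3] is 209.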

--
Ben.

David Brown

Aug 31, 2018, 10:13:58 AM
On 31/08/18 15:02, Rick C. Hodgin wrote:
> On 8/31/2018 8:38 AM, David Brown wrote:
>> On 31/08/18 14:17, Rick C. Hodgin wrote:
>>> 0y1010110 // Binary
>> Why not 0b1010110, as used elsewhere?
>
> I support that format for legacy purposes, but to me it looks
> like it may have been intended to be hexadecimal and someone
> forgot to add the leading 0x. I actually feel the same way
> about my 0d below and considered using 0s for that reason (the
> s being for standard), but it was confusing to me as well.
>

An "0y" prefix is not going to have much legacy use - it has never been a
common extension in C. That doesn't mean there is not some C compiler
out there that /does/ have "0y" binary constants, but it is hardly going
to be worth supporting.

Multiple options means more effort - more effort when writing the tool,
more effort when writing the documentation, more effort for people
reading the documentation, and more effort for people understanding
other developers' code.  It may seem like a very simple thing to say "0s"
and "0d" both mean decimal - and it is surely quite easy to include in
your parser. But if no one uses these, they are pointless - and if
someone /does/ use them, then that means wasted effort and confusion for
more normal programmers trying to understand that code. Such multiple
options sound like great flexibility, but are really lose-lose situations.

> Since decimal doesn't normally require any support, it is a
> low point to contend with, but I can see binary being used
> much more often.
>

Binary constants are useful sometimes - not often, but occasionally.
And a "0b" prefix is going to be easy to understand. If you have a
general solution - Nx or whatever - then binary is possibly not worth
having as a separate case.

> I don't particularly like either format. I do like the ones
> that were suggested on the assembly group years ago, with the
> Nx prefix, where N is the base.
>
>>> 0n0123 // Base-4 (for DNA research)
>>
>> I really can't see that being a big usage. Surely DNA programs would
>> prefer strings like "CGAT" rather than "0n0123" ?
>
> I have used it in my DNA research. That's why it's in there.
> :-)

It's your language, so that's a good reason!
Drop the C concept of "0" being an octal prefix, and the bug disappears.

>
> Note: CAlive also allows any combination of _ and ' to inhabit
> numbers, so you can do 0xb'8000 so as to divide out the
> hex on 16-bit boundaries, etc. Same as 0xb_8000, and it
> works for any number including base 10: 123_456.321'415,
> evaluates to 123456.321415, as does 123'456.321_415, etc.
>          Within a number, _ and ' are ignored.
>

Digit separators are good.


Rick C. Hodgin

Aug 31, 2018, 10:22:51 AM
It's a way to pack things into half-nibbles:

0000___0000 // 8-bits, with two 4-bit nibbles
00''00___00''00 // 8-bits, with four 2-bit half-nibbles

Each half-nibble is the binary values 00, 01, 10, 11, which are the
decimal values 0, 1, 2, 3.

Using this packed format, it is easy to see how concatenated
sequences can be packed together and how searches and comparisons are
performed. You can also do transformations (depending on which of
the CG/AT values you ascribe to each).
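A minimal sketch of that packing in C, assuming an arbitrary A=0, C=1, G=2, T=3 assignment (an illustration only, not anyone's published scheme):

```c
#include <stddef.h>

/* Map a base letter to a 2-bit code.  The assignment A=0, C=1,
   G=2, T=3 is an arbitrary choice for illustration. */
static unsigned base_code(char c)
{
    switch (c) {
    case 'A': return 0;
    case 'C': return 1;
    case 'G': return 2;
    default:  return 3;   /* 'T' */
    }
}

/* Pack n bases four to a byte, lowest half-nibble first.
   out must hold (n + 3) / 4 bytes and be zeroed by the caller. */
static void pack_bases(unsigned char *out, const char *seq, size_t n)
{
    for (size_t i = 0; i < n; i++)
        out[i / 4] |= (unsigned char)(base_code(seq[i]) << ((i % 4) * 2));
}
```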

I have tried various forms and formats, lengths and depths, and
haven't yet found any kind of pattern that is discernible, though
I am convinced I am doing it wrong and just haven't hit on it yet.

This all started by looking at the pattern on Butterfly wings. I
noted how much it looked like UV texture data in 3D work:

http://uv2uv.spacymen.com/Tutorials_files/Tut003/Tut003.htm
http://uv2uv.spacymen.com/Tutorials_files/Tut003/Head_cyl.jpg

Butterfly wings:


https://www.insects.orkin.com/insecta/patterns/genus-caligo-owl-butterfly-wing-posters/

I see the texture there as being able to be wrapped around 3D
geometry. There are on some wings the appearance of that which
would be used as sprites, or key-frames between sprites, with a
type of video compression expansion of images from one to the
next, creating a smooth progression which, when mapped onto 3D
geometry, would reveal a 3D movie hidden in our collective DNA,
given to us by God in the beginning.


https://www.dreamstime.com/butterfly-wing-macro-closeup-macro-closeup-detail-beautiful-butterfly-insect-wing-pattern-full-color-bright-white-image107538358

See on that wing how there are many patterns showing what could
be a progression from one previous image to a new one. And these
things are present on many different kinds of butterfly wings,
with many kinds of various patterns.

I also considered the nature of the butterfly, it starts out as
a form which consumes and grows and enters a new state of trans-
formation, where it emerges on the other side of that transfor-
mation as this beautiful creation that is able to fly.

It is symbolic of mankind, who for millennia have simply eaten
and grown until we reach a certain level (of knowledge, tech-
nology), and then we enter a period of change where we begin
to contemplate these things with regards to God, and then we
emerge on the other side with the proper understanding ... and
we fly.

I also recognize that the "Monarch" Butterfly (the "King" but-
terfly) has what looks like a candle on it. And I remember the
Bible verse, "And God said let there be light, and there was
light." It made me think that the King's guidance (Jesus'
guidance) was given by that first source of light:

You can see the orange candle, and the yellow flame on
each wing, at the top, leading the way:
http://www.butterflyutopia.com/monarch_danaus_plexippus.html

-----
For the record:

I have yet to find anything indicating anything like that
actually exists. But, back in 2014 this thought occurred to me
spur of the moment when I went to a co-worker's desk and on his
computer's desktop he had a slideshow of butterflies. I saw it
in a second and it flashed in my mind without me thinking about
it. After that, I spent much time searching for it. I haven't
worked on it since 2015 though.

--
Rick C. Hodgin

jadi...@gmail.com

Aug 31, 2018, 10:23:00 AM
On Friday, August 31, 2018 at 1:11:50 AM UTC-4, Reinhardt Behm wrote:
Yes, you do typically need an octal reverse (I use a lookup table) to flip
the orientation.
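A sketch of that kind of reversal, computed bit by bit here rather than with the lookup table mentioned (A429 framing details vary by installation):

```c
#include <stdint.h>

/* Reverse the bit order of an 8-bit label, bit by bit.  A 256-entry
   lookup table does the same job faster. */
static uint8_t reverse_label(uint8_t label)
{
    uint8_t r = 0;
    for (int i = 0; i < 8; i++)
        r = (uint8_t)((r << 1) | ((label >> i) & 1u));
    return r;
}
```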

Ben Bacarisse

Aug 31, 2018, 10:28:52 AM
David Brown <david...@hesbynett.no> writes:

> On 31/08/18 12:53, Ben Bacarisse wrote:
>> David Brown <david...@hesbynett.no> writes:
>> <snip>
>>> Personally, I cannot see the connection between having 3-bit fields in
>>> an instruction set (as you often have in cpus with 8 registers) and
>>> wanting to use octal. It makes no sense
>>
>> There seems to be a strong anti-octal feeling around here! Octal/hex --
>> it makes no difference to me. Both are easy to read in my opinion, so
>> "making sense" is not really the point. All I need is any reason, even
>> a tiny one, for the balance to tip one way or the other and I'll go with
>> that.
>
> My gripe with octal is not, in fact, the octal itself. If someone wants
> to write octal, that's their choice. But I don't see it as a
> particularly useful feature. Indeed, I cannot think of any situation in
> all my programming life (assembly, C, or anything else) where I have
> thought octal would be useful - but I /have/ thought that base 4 would
> sometimes be useful.
>
> The things that irritates me about octal is the notation. The use of
> "0" as the prefix in C - that's just /wrong/. "023" should be 23. I'd
> be fine with "0o23" (though not "0O23", for obvious reasons), "8#23",
> "octal(23)", or perhaps "0k23" (avoiding the dangerous "o").

I agree, but that's got nothing to do with the point you made before and
the one I was responding to. C's choice is imported from B, and B was
widely used on 36-bit machines.  I think it should have been phased out,
but C's syntax is not "the notation" for octal, it's just one.

>> By the way, on a 16-bit machine (like the PDP-11) I really like the fact
>> that the sign bit stands out in octal. 177140: negative, 077140:
>> positive.
>
> Try decimal: -416 negative, 32352 positive.

Try decimal when examining a PDP-11 core dump.

> I guess if you have worked
> with octal a lot, it seems natural. I've worked lots with decimal, hex
> and binary, but not octal. I can see advantages of these three number
> bases - I am simply at a loss to see much point in using octal.
>
>>> - you program these things in
>>> C, or assembly, not manually constructing the bit fields for machine
>>> code instructions. You write "MOV.W R3, R4" to move register 3 into
>>> register 4. The registers are really names, not numbers. And you don't
>>> use base 32 when working with processors with 32 registers.
>>
>> But that's an argument against needing any readable representation of
>> the bits! If you never have to read an instruction stream other than a
>> nicely disassembled one then, sure, there's nothing to be said for
>> octal -- and hex is shorter for other things.
>
> Yes, exactly. But for some reason this turns up as an argument for
> using octal with some old PDP machines - "it has 8 registers, so octal
> is logical" or "the instructions are encoded in fields of 3 bit width,
> so octal is logical". I don't believe /you/ have said anything like
> that, but others here have.

If I haven't, it's a slip-up.  I would always examine a PDP-11 core file
in octal.
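The leading digit of a six-digit octal 16-bit word can only be 0 or 1 and is exactly the sign bit, which is why it stands out; a quick check in C using the values quoted above:

```c
#include <stdint.h>

/* A 16-bit word prints as six octal digits; the leading digit is
   0 or 1 and is exactly the sign bit of the value as int16_t. */
static int is_negative16(uint16_t w)
{
    return (w >> 15) & 1u;
}
```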

--
Ben.

Rick C. Hodgin

Aug 31, 2018, 10:29:16 AM
On 8/31/2018 10:05 AM, Bart wrote:
> On 31/08/2018 14:38, Rick C. Hodgin wrote:
>> CAlive doesn't look to break existing code.  It looks to add
>> to C where C is lacking,
>
> Then you're just adding to the already hefty baggage.

Yes. My goals are to serve people, not language needs. I wish
to give people the whole arsenal of tools in their toolbox, able
to tackle the jobs they have, and not to limit them by artificial
constraints.

CAlive will also be open source, so people can take what I offer
as a baseline, and then do what they will with it.

> If you must allow existing C source to be compiled (which I've delved into
> and it is a nightmare), then at least deprecate the use of significant
> leading zeros (as Python has done, I believe).

Why deprecate it? It's a thing. Let it live and keep it alive
once it's working. It only takes developmental effort, and after
that it's always available.  Why divert resources, when they are
so few, toward this end, when it may require other people to notably
re-work their existing software?

Features like this, long-established, may be used by people in
ways we do not visualize. To make decisions to break all of
that code when it is trivial to retain it ... that is the bad
decision in my opinion.

> So 0321 would be an error. You either have to drop the zero, or use a prefix
> or other modifier to clarify what it has to be.

Yes. I think generating that error is a wrong end-goal.

>> Yes.  Human developers are still required.
>
> Get rid of significant leading zeros, and less human effort needs to be
> expended in catching pointless errors.

I refuse to categorize use cases of something long-established as
you do here, Bart. I don't know which automated tool is writing
out source code using this information as data and it's all in
octal format. I wouldn't want to force people to re-write those
many engines in order to use CAlive if they wanted.

We clearly have different goals. And may I say, yours seems to
take a real hatred of mankind. You do not meet them where they
are, but try to lead them to where you think they should be. If
your goals are truly fundamental, then that kind of guidance can
be desirable, but in cases like this where you do not know why
people would use what they've had the ability to use for 40+
years ... it is wrong to conclude your way is right, and theirs
is wrong.

I think you need to place more stock in being a servant of
people, and less of a master.

--
Rick C. Hodgin

David Brown

Aug 31, 2018, 10:34:44 AM
Hey, just because we disagree on many things, does not mean we disagree
on /everything/.

There are a number of things about C that I don't like. There a quite a
few more that I think are not particularly nice, but they are not much
worse than alternatives and they are certainly bearable without trouble
- possibly with the help of common tools.

If gcc had a "-Woctal" warning, I'd be happier. But it doesn't - not
yet, anyway. It's on their list of things they would like, but not on
their list of things they need to get done.

Rick C. Hodgin

Aug 31, 2018, 10:36:33 AM
On 8/31/2018 10:13 AM, David Brown wrote:
> On 31/08/18 15:02, Rick C. Hodgin wrote:
>> On 8/31/2018 8:38 AM, David Brown wrote:
>>> On 31/08/18 14:17, Rick C. Hodgin wrote:
>>>> 0y1010110 // Binary
>>> Why not 0b1010110, as used elsewhere?
>>
>> I support that format for legacy purposes, but to me it looks
>> like it may have been intended to be hexadecimal and someone
>> forgot to add the leading 0x. I actually feel the same way
>> about my 0d below and considered using 0s for that reason (the
>> s being for standard), but it was confusing to me as well.
>>
>
> An "0y" prefix is not going to have much legacy use - it has never been a
> common extension in C. That doesn't mean there is not some C compiler
> out there that /does/ have "0y" binary constants, but it is hardly going
> to be worth supporting.

I've always used it in my tools.  And when it is available, other people
will either use it or not.

And, as I say, "0b" will also be supported.

> Multiple options means more effort - more effort when writing the tool,
> more effort when writing the documentation, more effort for people
> reading the documentation, and more effort for people understanding
> other developers code. It may seem like a very simple thing to say "0s"
> and "0d" both mean decimal - and it is surely quite easy to include in
> your parser. But if no one uses these, they are pointless - and if
> someone /does/ use them, then that means wasted effort and confusion for
> more normal programmers trying to understand that code. Such multiple
> options sound like great flexibility, but are really lose-lose situations.

We place value on different things, David. And I'm writing CAlive
as much for me and my software efforts, as for other people. I do
not intend to use 0b, but will use 0y, and if I ever have some kind
of organization where people are contributing to that entity, as
people today contribute to GNU, for example, then we will use this
new format and it will become part of that standard.

Those are my long-term goals, to have a software effort based on
things that I view as making more sense than other things being
applied by Christ-serving individuals, who are writing software
not simply to do it, but to honor Jesus with their offerings, to
do it right, to do it feature rich, to do it in a way which gives
people options and guidance and reasons why things are done the
way they are, which relate back to serving God in their founda-
tions.

>> Since decimal doesn't normally require any support, it is a
>> low point to contend with, but I can see binary being used
>> much more often.
>>
>
> Binary constants are useful sometimes - not often, but occasionally.
> And a "0b" prefix is going to be easy to understand. If you have a
> general solution - Nx or whatever - then binary is possibly not worth
> having as a separate case.

I disagree. Existing code uses 0b, so it will be supported. And
for the reasons stated above, 0y will be the way CAlive and my
Liberty Software Foundation (LibSF) are moving toward.
Legacy support demands it remains. I refuse to discount the hours
and weeks and man-years of labor people have applied toward their
software applications. That labor is substantial and should not
be undone for imposed reasons.

It's the whole reason I began working on Visual FreePro, when MS
decided to cancel Visual FoxPro. I wanted to give people a way
to keep that 25+ year labor effort with full business applications
being written, debugged, deployed, and in use for decades, a new
lease on life.

People have rejected it, however, because there's a cross on the
door. They would rather re-write their apps in .NET or Python
or some other language than face the cross in their application.

It's broken my heart many times.

>> Note: CAlive also allows any combination of _ and ' to inhabit
>> numbers, so you can do 0xb'8000 so as to divide out the
>> hex on 16-bit boundaries, etc. Same as 0xb_8000, and it
>> works for any number including base 10: 123_456.321'415,
>> evaluates to 123456.321415, as does 123'456.321_415, etc.
>>          Within a number, _ and ' are ignored.
>
> Digit separators are good.

So are JUJYFRUITS:

https://www.partycity.com/jujyfruits-55pc-790072.html

--
Rick C. Hodgin

Reinhardt Behm

Aug 31, 2018, 10:40:42 AM
No, I have designed my own A429 interface controller that automatically
fixes these reversed labels.

--
Reinhardt

Bart

Aug 31, 2018, 10:41:38 AM
In that case why not have a poll on whether people want to have this
leading-zero octal format perpetuated in a new language.

I (and it seems David Brown) vote No.

Note that numbers with leading zeros occur in everyday life, and those
/never/ represent octal, IME. Unless they are symbols (like in telephone
or account numbers), the numbers will still be decimal.

If a car odometer shows 077777, that will always mean 77,777 (miles or
km), never 32,767.

So the argument against octal in this form is strong (8x77777 etc is
fine). It was obviously a mistake in the design of C.

Why do you want to copy the same mistakes?

--
bart


jadi...@gmail.com

Aug 31, 2018, 10:42:46 AM
On Friday, August 31, 2018 at 8:38:36 AM UTC-4, David Brown wrote:
> On 31/08/18 14:17, Rick C. Hodgin wrote:
> > On 8/31/2018 8:01 AM, David Brown wrote:
> >> On 31/08/18 13:10, Malcolm McLean wrote:
> >>> A list of hexadecimal numbers of any length will look like hex, because
> >>> you've got a six in sixteen chance that any digit will be a letter.
> >>> However octal looks like denary, you can read quite a few numbers before
> >>> you realise that they must be octal because 8 and 9 are missing.
> >>
> >> Exactly. Spot the bug here:
> >>
> >> static const int patterns[] = {
> >> 3210, 2103, 1032, 0321
> >> };
> >
> > Found it. :-)
> >
> > CAlive has defined explicit formats (0y, 0n, 0o, 0d or nothing,
> > and 0x):
> >
> > 0y1010110 // Binary
>
> Why not 0b1010110, as used elsewhere?
>
> > 0n0123 // Base-4 (for DNA research)
>
> I really can't see that being a big usage. Surely DNA programs would
> prefer strings like "CGAT" rather than "0n0123" ?

Maybe for display, but for storage you may want to reduce memory footprint
by bitpacking the information into 2-bit elements.

> And there are plenty of other uses of quaternary than just DNA. I have
> used it for defining character fonts on a little display with 2-bit
> greyscale pixels.

I've worked with 4 brightness level LCD displays (temp/humidity sensor).
Granted these use cases are rather niche.  For a hobby-like language you
can throw in as many features as you want to support, look for syntax
overlap, and then always trim back I guess.

Rick C. Hodgin

Aug 31, 2018, 10:57:08 AM
On 8/31/2018 10:41 AM, Bart wrote:
> On 31/08/2018 15:29, Rick C. Hodgin wrote:
>> On 8/31/2018 10:05 AM, Bart wrote:
>> I think you need to place more stock in being a servant of
>> people, and less of a master.
>
> In that case why not have a poll on whether people want to have this
> leading-zero octal format perpetuated in a new language.

Because I am in the position of being a leader here. My goals are
to create a software effort that has its foundations in serving my
Lord and Savior, Jesus Christ. I'm not just writing software for
the sake of writing software. The Lord has blessed me with certain
abilities and insights and knowledge and opportunity and experience,
and I desire to return those things He first gave me back to Him.

> I (and it seems David Brown) vote No.

It's the same reason the church cannot conduct its business by going
out to the heathen communities and taking a poll. The church exists
with guidance, and that guidance is given of God (if the people there
in that leadership are working as they should), and that guidance is
to be taught to people, even if they won't hear it, or even upon their
hearing, do not understand.

The church of Christ (not physical buildings, but the teachings of
Jesus Christ alive in His disciples) exists for a reason. We are
here to give light to this dark world, and to focus people back to
what is important, and away from what is a lie or misleading goal
being given over by the enemy.

With CAlive, I am trying to be that same light, teaching people that
we are called to be software developers, but not to do so for arbi-
trary reasons, but rather to do so to honor God with our lives.

Indeed, we are to honor God with all of our efforts, thoughts, acts,
goals, deeds, and in all assistance.

> Note that numbers with leading zeros occur in everyday life, and those
> /never/ represent octal, IME. Unless they are symbols (like in telephone or
> account numbers), the numbers will still be decimal.

I store those account numbers as text in my applications. It is
the only way that makes sense, because Susie's number is 111-555-
1212 Ext 401, and Frank's number is 111-555-1213, and Stacy's
number is 111-555-1214, press 4, ask for 'Accounting'.

Unless you want to try to take into account in your design all of
the various forms of phone numbers, you store them as text.

We also have account numbers like 12345678.0000 in accounting.
Storing as a number, you have to then place those values out to
the four decimal places. It requires extra programming effort
at each point of use to make sure it's formatted properly, whereas
if you have on the input screen a common control that does format
it properly as text, then you merely convey the data.

There are a host of reasons why things can work one way or the
other, and it requires some comprehensive consideration in each
case to find the best course.

CAlive, for example, isn't going to just be for C-like code. I
have defined weakly typed "var" constructs, allowing for on-the-
fly variable forms to be used be they integer, floating point,
character, logical, date, datetime, etc., which then allows also
a host of functions which can conduct work on those var types,
allowing for a high-level inclusion of source code directly
alongside the code in C-like CAlive code.

CAlive's goals are more comprehensive than a simple C-like compiler.

> If a car odometer shows 077777, that will always mean 77,777 (miles or km),
> never 32,767.

Well, if you're storing odometer readings in octal, then there's
probably another issue that needs addressed first. :-)

"Tom, it's okay. These men in white lab coats, they are here
to help you. Don't be afraid. Go with them. They'll take
you to a place of peace and happiness for a little while."

> So the argument against octal in this form is strong (8x77777 etc is fine).
> It was obviously a mistake in the design of C.
>
> Why do you want to copy the same mistakes?

I don't view them as mistakes.  I view them as ease-of-access for
features that were desirable at the time. Remember B? And assembly?
They had a particular type of thinking at work in moving forward. It
does make sense why it's there, and in CAlive, this level of parsing
takes place immediately after lexing, so it's a trivial thing to
implement. And there would be nothing to prevent a flag from being
added "-nooctal" and then what you desire exists, and what I desire
exists.

--
Rick C. Hodgin

David Brown

Aug 31, 2018, 10:59:14 AM
On 31/08/18 16:36, Rick C. Hodgin wrote:
> On 8/31/2018 10:13 AM, David Brown wrote:
>> On 31/08/18 15:02, Rick C. Hodgin wrote:
>>> On 8/31/2018 8:38 AM, David Brown wrote:
>>>> On 31/08/18 14:17, Rick C. Hodgin wrote:
>>>>> 0y1010110 // Binary
>>>> Why not 0b1010110, as used elsewhere?
>>>
>>> I support that format for legacy purposes, but to me it looks
>>> like it may have been intended to be hexadecimal and someone
>>> forgot to add the leading 0x. I actually feel the same way
>>> about my 0d below and considered using 0s for that reason (the
>>> s being for standard), but it was confusing to me as well.
>>>
>>
>> An "0y" prefix is not going to have much legacy use - it has never been a
>> common extension in C. That doesn't mean there is not some C compiler
>> out there that /does/ have "0y" binary constants, but it is hardly going
>> to be worth supporting.
>
> I've always used in my tools. And when it is available, other people
> will either use it or not.
>

Compatibility with your own tools is, of course, a high priority. I
have just never seen it before.
I know you have your goals and your reasons, and it doesn't matter
whether I think of them. I am just giving friendly advice with an
alternate viewpoint - choice is not always a good thing. But if you
read my advice, think about, and decide not to take it, then that is fine.

>>> Since decimal doesn't normally require any support, it is a
>>> low point to contend with, but I can see binary being used
>>> much more often.
>>>
>>
>> Binary constants are useful sometimes - not often, but occasionally.
>> And a "0b" prefix is going to be easy to understand. If you have a
>> general solution - Nx or whatever - then binary is possibly not worth
>> having as a separate case.
>
> I disagree. Existing code uses 0b, so it will be supported. And
> for the reasons stated above, 0y will be the way CAlive and my
> Liberty Software Foundation (LibSF) are moving toward.
>

There is existing code using 0b, yes. But if you take 1000 random C or
C++ projects from github, sourceforge, the Debian source repositories,
or other common sources of C code, I expect you will find that none of
them have 0b binary constants. You'd have to look at embedded software
for microcontrollers - which is (if I understand your aims) not the
target for your tools.

Remember, every unnecessary and unused feature waters down the useful
features. Feature creep means delays, extra work, and it puts people
off. This applies to /all/ software, regardless of your motivations and
ambitions.

But as always, you decide for your language.
I gather you are thinking in terms of a "C mode" and "CAlive" mode for
your tools. It's understandable to want legacy support in C mode for
legacy software. It is less understandable to want legacy support for a
poor design decision in your new language mode.

> It's the whole reason I began working on Visual FreePro, when MS
> decided to cancel Visual FoxPro. I wanted to give people a way
> to keep that 25+ year labor effort with full business applications
> being written, debugged, deployed, and in use for decades, a new
> lease on life.
>
> People have rejected it, however, because there's a cross on the
> door. They would rather re-write their apps in .NET or Python
> or some other language than face the cross in their application.
>

Let's just say that your enthusiasm can be a bit much for people at times.

jadi...@gmail.com

Aug 31, 2018, 11:11:22 AM
On Friday, August 31, 2018 at 10:40:42 AM UTC-4, Reinhardt Behm wrote:
Most of my experience with A429 is translating A429 messages into human-
readable data for avionics or test engineers, whether it's displaying them
in an A429 packet sent between CDUs and MFDs in Wireshark, or taking a A429
message dump from a flight from a card like a GeFanuc CEI 830 and creating
a human readable dump of that data in a spreadsheet.

In both cases, I've had to deal with the issue where the octal label is in a
bit-wise inverted orientation relative to the rest of the data.  Your experience
is different than mine; I don't design A429 controllers.

But my point was that octal is niche but still exists in today's world. I'd
definitely include octal as a numeric representation if I were creating my
own programming language.

Best regards,
John D.

David Brown

Aug 31, 2018, 11:15:45 AM
I haven't seen any use of octal outside of C (I know other people have),
so for me the notation in C and the base are closely linked. (And this
is, after all, a C newsgroup - the C usage of octal is the important one
here.) And regardless of the notation, I fail to see octal as a
relevant number base in this era.

We have a lot more convenient facilities now than there were before.
Our tools happily convert between bases, or between alternative symbols
such as enumerations. There was a time when other bases were useful,
just as there was a time when people hand-assembled code to machine code
(done that) or edited punched cards by hand. There was a time when
36-bit machines were common and people programmed in B. There was a
time when people read core dumps from a PDP-11.

Those times are gone. Octal has very few real uses now.

So it is not that I dislike octal as such. I don't care about octal -
it doesn't matter any more than duodecimal. I /do/ care that the
programming languages I use, and the tools I use, waste space on such an
unnecessary relic. (I don't care very strongly here - this thread is a
bit out of proportion.) And I care that the language and tools allow
errors to go by undetected because of this relic. (Again, my gripe
about octal is tiny compared to my dislike of things like missing or
non-prototype function declarations.)

>>> By the way, on a 16-bit machine (like the PDP-11) I really like the fact
>>> that the sign bit stands out in octal. 177140: negative, 077140:
>>> positive.
>>
>> Try decimal: -416 negative, 32352 positive.
>
> Try decimal when examining a PDP-11 core dump.
>

No, I'll pass on that! (I have looked at memory dumps from msp430
microcontrollers, which have an ISA based closely on the PDP-11. I
looked at them in hex and ASCII, and would have hated to do so in octal.)

Rick C. Hodgin

Aug 31, 2018, 11:18:50 AM
On 8/31/2018 10:59 AM, David Brown wrote:
> On 31/08/18 16:36, Rick C. Hodgin wrote:
>> Those are my long-term goals, to have a software effort based on
>> things that I view as making more sense than other things being
>> applied by Christ-serving individuals, who are writing software
>> not simply to do it, but to honor Jesus with their offerings, to
>> do it right, to do it feature rich, to do it in a way which gives
>> people options and guidance and reasons why things are done the
>> way they are, which relate back to serving God in their founda-
>> tions.
>
> I know you have your goals and your reasons, and it doesn't matter
> what I think of them. I am just giving friendly advice with an
> alternate viewpoint - choice is not always a good thing. But if you
> read my advice, think about it, and decide not to take it, then that is fine.

I appreciate your advice. I often disagree with it, but I also
read what you write to other people and I agree with you on many
things as well. You are very knowledgeable and your advice, like
Ben's, is solid and reliable. People would do well to follow your
advice. I'm just trying to do something new, so our goals are
often incompatible because I place emphasis on things you don't,
and you place emphasis on things I don't.

>>>> Since decimal doesn't normally require any support, it is a
>>>> low point to contend with, but I can see binary being used
>>>> much more often.
>>>>
>>>
>>> Binary constants are useful sometimes - not often, but occasionally.
>>> And a "0b" prefix is going to be easy to understand. If you have a
>>> general solution - Nx or whatever - then binary is possibly not worth
>>> having as a separate case.
>>
>> I disagree. Existing code uses 0b, so it will be supported. And
>> for the reasons stated above, 0y will be the way CAlive and my
>> Liberty Software Foundation (LibSF) are moving toward.
>
> There is existing code using 0b, yes. But if you take 1000 random C or
> C++ projects from github, sourceforge, the Debian source repositories,
> or other common sources of C code, I expect you will find that none of
> them have 0b binary constants. You'd have to look at embedded software
> for microcontrollers - which is (if I understand your aims) not the
> target for your tools.

CAlive will work with the processor cores I am intending to create.
I do have one microcontroller project that's a goal, called Arlina.
Arlina is an N-bit processor that is configurable by wide software
words, a type of array-based assembly language.

If I ever get it developed CAlive will be the target app for that
hardware as well. And, during design, it is a consideration.

> Remember, every unnecessary and unused feature waters down the useful
> features. Feature creep means delays, extra work, and it puts people
> off. This applies to /all/ software, regardless of your motivations and
> ambitions.
>
> But as always, you decide for your language.

Understood. I have always taken a different view on things than
most people. I've looked at software like this: Once you write
something, once it's working and well-debugged, then it exists.
It's not like a car that will rust out over time. It's digital,
and so long as the underlying hardware hasn't changed, and so
long as the algorithm still exists in a form that can be compiled
today as it was 40+ years ago, then why jettison it?

I've always hated that about OS designs, for example. I would
like, today in 2018, to use a Windows XP interface. There's no
reason we couldn't have modern security on a Windows XP UI. And
I have vowed since I began my OS kernel project that I would
never break any backward compatibility on anything in moving for-
ward. I would also provide support for things which worked in
the past. I would never cause people to have to re-write any-
thing, but I would allow them to re-write anything they chose to
do so on.

>>> Drop the C concept of "0" being an octal prefix, and the bug disappears.
>>
>> Legacy support demands it remains. I refuse to discount the hours
>> and weeks and man-years of labor people have applied toward their
>> software applications. That labor is substantial and should not
>> be undone for imposed reasons.
>
> I gather you are thinking in terms of a "C mode" and "CAlive" mode for
> your tools. It's understandable to want legacy support in C mode for
> legacy software. It is less understandable to want legacy support for a
> poor design decision in your new language mode.

No. CAlive will support octal formats by default. And almost all of
the existing C code should compile in CAlive and work properly with
zero changes. There will be corner cases where things will produce
different results because of how CAlive handles intermediate
calculations, and sign-saturates values that overflow, but if you
are dealing with normal ranges of values, then CAlive should compile
nearly all commonly used features in C code without error, including
octal constant forms.

I do intend later on (early 2020s) to add C90 and C99 modes, which
will then overflow as they do today, and operate as other true C90
and C99 compilers when those modes are enabled, disabling then the
CAlive extensions by default, and allowing them to be re-introduced
one-by-one through additional flags.

That's the plan, Stan (James 4:15).

>> It's the whole reason I began working on Visual FreePro, when MS
>> decided to cancel Visual FoxPro. I wanted to give people a way
>> to keep that 25+ year labor effort with full business applications
>> being written, debugged, deployed, and in use for decades, a new
>> lease on life.
>>
>> People have rejected it, however, because there's a cross on the
>> door. They would rather re-write their apps in .NET or Python
>> or some other language than face the cross in their application.
>
> Let's just say that your enthusiasm can be a bit much for people at times.

Many people would say that. I feel like I am not doing enough,
because I know how much my sin cost my Lord and Savior. I think
that I could always be doing more, and I chastise myself when I
do not move more than I do. It keeps me moving forward toward
Him and His Kingdom.

--
Rick C. Hodgin

David Brown

Aug 31, 2018, 11:21:16 AM
On 31/08/18 16:41, Bart wrote:
> On 31/08/2018 15:29, Rick C. Hodgin wrote:
>> On 8/31/2018 10:05 AM, Bart wrote:
>
>> I think you need to place more stock in being a servant of
>> people, and less of a master.
>
> In that case why not have a poll on whether people want to have this
> leading-zero octal format perpetuated in a new language.
>
> I (and it seems David Brown) vote No.
>

A poll like that should be among potential users of the language - we
are not part of that. We can only say that /we/ would not want octals
(at least in a leading-zero format) in a new language, but it's Rick's
choice to decide for his language.


David Brown

Aug 31, 2018, 11:23:45 AM
On 31/08/18 16:42, jadi...@gmail.com wrote:
> On Friday, August 31, 2018 at 8:38:36 AM UTC-4, David Brown wrote:
>> On 31/08/18 14:17, Rick C. Hodgin wrote:
>>> On 8/31/2018 8:01 AM, David Brown wrote:
>>>> On 31/08/18 13:10, Malcolm McLean wrote:
>>>>> A list of hexadecimal numbers of any length will look like hex, because
>>>>> you've got a six in sixteen chance that any digit will be a letter.
>>>>> However octal looks like denary, you can read quite a few numbers before
>>>>> you realise that they must be octal because 8 and 9 are missing.
>>>>
>>>> Exactly. Spot the bug here:
>>>>
>>>> static const int patterns[] = {
>>>> 3210, 2103, 1032, 0321
>>>> };
>>>
>>> Found it. :-)
>>>
>>> CAlive has defined explicit formats (0y, 0n, 0o, 0d or nothing,
>>> and 0x):
>>>
>>> 0y1010110 // Binary
>>
>> Why not 0b1010110, as used elsewhere?
>>
>>> 0n0123 // Base-4 (for DNA research)
>>
>> I really can't see that being a big usage. Surely DNA programs would
>> prefer strings like "CGAT" rather than "0n0123" ?
>
> Maybe for display, but for storage you may want to reduce memory footprint
> by bitpacking the information into 2-bit elements.
>

Yes, of course - but you can interpret those letters and store them as
you want. We are talking about source code formats here, not storage
formats.

>> And there are plenty of other uses of quaternary than just DNA. I have
>> used it for defining character fonts on a little display with 2-bit
>> greyscale pixels.
>
> I've worked with 4 brightness level LCD displays (temp/humidity sensor).
> Granted these use cases are rather niche.

Agreed.

> For a hobby-like language you
> can throw in as many features as you want, look for syntax
> overlap, and then trim back later, I guess.
>

You certainly /could/ do that, but I would not advise it. You are
probably better starting with a small feature set and adding to it when
you have the limited version working.

David Brown

Aug 31, 2018, 11:48:43 AM
On 31/08/18 17:18, Rick C. Hodgin wrote:
> On 8/31/2018 10:59 AM, David Brown wrote:
>> On 31/08/18 16:36, Rick C. Hodgin wrote:
<snip>
>> There is existing code using 0b, yes. But if you take 1000 random C or
>> C++ projects from github, sourceforge, the Debian source repositories,
>> or other common sources of C code, I expect you will find that none of
>> them have 0b binary constants. You'd have to look at embedded software
>> for microcontrollers - which is (if I understand your aims) not the
>> target for your tools.
>
> CAlive will work with the processor cores I am intending to create.
> I do have one microcontroller project that's a goal, called Arlina.
> Arlina is an N-bit processor that is configurable by wide software
> words, a type of array-based assembly language.
>

There still will be little or no overlap in software - code for these
kinds of devices, especially at the level where binary constants are
convenient, is rarely portable.

> If I ever get it developed CAlive will be the target app for that
> hardware as well. And, during design, it is a consideration.
>
>> Remember, every unnecessary and unused feature waters down the useful
>> features. Feature creep means delays, extra work, and it puts people
>> off. This applies to /all/ software, regardless of your motivations and
>> ambitions.
>>
>> But as always, you decide for your language.
>
> Understood. I have always taken a different view on things than
> most people. I've looked at software like this: Once you write
> something, once it's working and well-debugged, then it exists.
> It's not like a car that will rust out over time. It's digital,
> and so long as the underlying hardware hasn't changed, and so
> long as the algorithm still exists in a form that can be compiled
> today as it was 40+ years ago, then why jettison it?
>

That is just not true. It is unfortunate, perhaps, but it is not true.
Software does not last forever. Underlying hardware /does/ change.
Requirements change. Expectations change. Compilers change. Languages
change. Use-cases change. Other software that interacts with the
software changes.

There is very little software that remains useful and substantially
unchanged over decades. (Certainly there is /some/ such software - but
it is a small proportion of software.) Some of this is arguably
unnecessary - things are changed just for the sake of changing them, or
to provoke new sales and sources of income. (Note that I am not making
moral judgements here about whether this is right or wrong - I am merely
saying it is "arguable".)

I agree with you that it is a good thing if modern tools can handle old
source code - it helps reduce wastage. But there should be limits - and
you should not support niche cases of old software when it is to the
detriment of working with new code. Support for - say - non-prototype
function declarations in a C compiler will let you compile some older
code. That support comes at the cost of poorer quality checking of code
written /now/. It is not a good trade-off. It is like putting lead in
petrol to support the tiny fraction of cars that still need it,
regardless of what it does to the environment or to people's health.

It is not that I am against re-use of existing code - far from it. But
I am not seeing a balance here - a considered decision looking at the
tradeoffs. I just see "my software will do everything!" - ambition,
enthusiasm and well-intentioned goals, but not enough realism and
consideration of the down-sides and costs to current programmers writing
new software.

> I've always hated that about OS designs, for example. I would
> like, today in 2018, to use a Windows XP interface. There's no
> reason we couldn't have modern security on a Windows XP UI. And
> I have vowed since I began my OS kernel project that I would
> never break any backward compatibility on anything in moving for-
> ward. I would also provide support for things which worked in
> the past. I would never cause people to have to re-write any-
> thing, but I would allow them to re-write anything they chose to
> do so on.

This is getting way off-topic for this off-topic thread, but maybe you'd
like ReactOS ? I seem to remember you disliking Linux and the BSD's,
though I can't remember why (there's no need to explain it - if the
reasons are good enough for you, that's all that matters).


Rick C. Hodgin

Aug 31, 2018, 12:12:01 PM
On 8/31/2018 11:48 AM, David Brown wrote:
> On 31/08/18 17:18, Rick C. Hodgin wrote:
>> Understood. I have always taken a different view on things than
>> most people. I've looked at software like this: Once you write
>> something, once it's working and well-debugged, then it exists.
>> It's not like a car that will rust out over time. It's digital,
>> and so long as the underlying hardware hasn't changed, and so
>> long as the algorithm still exists in a form that can be compiled
>> today as it was 40+ years ago, then why jettison it?
>
> That is just not true. It is unfortunate, perhaps, but it is not true.
> Software does not last forever. Underlying hardware /does/ change.
> Requirements change. Expectations change. Compilers change. Languages
> change. Use-cases change. Other software that interacts with the
> software changes.

That's my point... those things change for arbitrary reasons.
They don't need to. People change things because they want
to, and they don't place value on the prior labor effort.

I do. I will make sure backward compatibility always exists
for anything I write, no matter how obtuse it might seem in
the future.

> There is very little software that remains useful and substantially
> unchanged over decades. (Certainly there is /some/ such software - but
> it is a small proportion of software.) Some of this is arguably
> unnecessary - things are changed just for the sake of changing them, or
> to provoke new sales and sources of income. (Note that I am not making
> moral judgements here about whether this is right or wrong - I am merely
> saying it is "arguable".)

That's my point. There is no reason why, on the 80386,
Microsoft could not have introduced a 4004 emulator, to
allow apps written for it to work.

Things that involve labor should not be so easily discounted
or jettisoned.

> I agree with you that it is a good thing if modern tools can handle old
> source code - it helps reduce wastage. But there should be limits - and
> you should not support niche cases of old software when it is to the
> detriment of working with new code. Support for - say - non-prototype
> function declarations in a C compiler will let you compile some older
> code. That support comes at the cost of poorer quality checking of code
> written /now/. It is not a good trade-off. It is like putting lead in
> petrol to support the tiny fraction of cars that still need it,
> regardless of what it does to the environment or to people's health.

I happen to think the use of octal prefixes is a good idea. I
have wished there was something similar for hexadecimal. I also
like the way assemblers do it, with 0FAh, for example, rather
than 0xFA. But, both of them have value, so I support them both.

> It is not that I am against re-use of existing code - far from it. But
> I am not seeing a balance here - a considered decision looking at the
> tradeoffs. I just see "my software will do everything!" - ambition,
> enthusiasm and well-intentioned goals, but not enough realism and
> consideration of the down-sides and costs to current programmers writing
> new software.

I look at the writing of a compiler, and of designing its various
facets, as a one-time labor effort. I exert for a time, and then
accommodate the various needs, and then it exists from that point
forward.

My one-man exertion (or small team-of-five exertion) is very
trivial to the 10s of thousands to millions of developers who
may one day use CAlive. I consider those things in the design,
and I work for them.

>> I've always hated that about OS designs, for example. I would
>> like, today in 2018, to use a Windows XP interface. There's no
>> reason we couldn't have modern security on a Windows XP UI. And
>> I have vowed since I began my OS kernel project that I would
>> never break any backward compatibility on anything in moving for-
>> ward. I would also provide support for things which worked in
>> the past. I would never cause people to have to re-write any-
>> thing, but I would allow them to re-write anything they chose to
>> do so on.
>
> This is getting way off-topic for this off-topic thread, but maybe you'd
> like ReactOS ? I seem to remember you disliking Linux and the BSD's,
> though I can't remember why (there's no need to explain it - if the
> reasons are good enough for you, that's all that matters).

ReactOS isn't yet ready for prime time, but I am following its
design. I have considered contributing to it, but am still re-
solved to complete my own modern OS/2-like system, because I
believe OS/2 was phenomenally designed, and was going in the
right direction.

I have plans for ES/2, which is my nearly 100% compatible target
for a modern OS/2 written from scratch, being able to run any
and all OS/2 software that can be re-compiled in my CAlive compiler,
to generate compatible binaries for my kernel.

We'll see. Those are 2020s goals.

--
Rick C. Hodgin

David Brown

Aug 31, 2018, 12:40:55 PM
On 31/08/18 18:11, Rick C. Hodgin wrote:
> On 8/31/2018 11:48 AM, David Brown wrote:
>> On 31/08/18 17:18, Rick C. Hodgin wrote:
>>> Understood. I have always taken a different view on things than
>>> most people. I've looked at software like this: Once you write
>>> something, once it's working and well-debugged, then it exists.
>>> It's not like a car that will rust out over time. It's digital,
>>> and so long as the underlying hardware hasn't changed, and so
>>> long as the algorithm still exists in a form that can be compiled
>>> today as it was 40+ years ago, then why jettison it?
>>
>> That is just not true. It is unfortunate, perhaps, but it is not true.
>> Software does not last forever. Underlying hardware /does/ change.
>> Requirements change. Expectations change. Compilers change. Languages
>> change. Use-cases change. Other software that interacts with the
>> software changes.
>
> That's my point... those things change for arbitrary reasons.

/Sometimes/ the changes are arbitrary. Mostly they are not. And even
if things are changed just because someone wants to change them, does
not make it a bad thing.

Don't dismiss all changes just because some are bad. Don't assume that
all old software is useful or good just because some of it is.

> They don't need to. People change things because they want
> to, and they don't place value on the prior labor effort.
>
> I do. I will make sure backward compatibility always exists
> for anything I write, no matter how obtuse it might seem in
> the future.
>
>> There is very little software that remains useful and substantially
>> unchanged over decades. (Certainly there is /some/ such software - but
>> it is a small proportion of software.) Some of this is arguably
>> unnecessary - things are changed just for the sake of changing them, or
>> to provoke new sales and sources of income. (Note that I am not making
>> moral judgements here about whether this is right or wrong - I am merely
>> saying it is "arguable".)
>
> That's my point. There is no reason why, on the 80386,
> Microsoft could not have introduced a 4004 emulator, to
> allow apps written for it to work.

There are two /very/ good reasons why MS should not have made a 4004
emulator.

1. Rounded to the nearest part per million of MS users, no one would use it.

2. It would cost. It would take time and money to make it - that would
come out of /our/ pockets. It would have the risk of bugs, including
security risks. It would complicate existing software. It would mean
more work, more documentation, more testing.

A simple cost/benefit analysis here is showing a solid cost and zero
benefit - that is not a good sum.

>
> Things that involve labor should not be so easily discounted
> or jettisoned.
>

Of course they should be jettisoned when they are no longer of positive
benefit. At best, you put them in a museum.

Again, there is plenty of old stuff that /is/ useful. But that does not
mean that /all/ old stuff should be kept.

>> I agree with you that it is a good thing if modern tools can handle old
>> source code - it helps reduce wastage. But there should be limits - and
>> you should not support niche cases of old software when it is to the
>> detriment of working with new code. Support for - say - non-prototype
>> function declarations in a C compiler will let you compile some older
>> code. That support comes at the cost of poorer quality checking of code
>> written /now/. It is not a good trade-off. It is like putting lead in
>> petrol to support the tiny fraction of cars that still need it,
>> regardless of what it does to the environment or to people's health.
>
> I happen to think the use of octal prefixes is a good idea. I
> have wished there was something similar for hexadecimal. I also
> like the way assemblers do it, with 0FAh, for example, rather
> than 0xFA. But, both of them have value, so I support them both.
>

Well, okay. I have no argument against that other than what I have
already said.

>> It is not that I am against re-use of existing code - far from it. But
>> I am not seeing a balance here - a considered decision looking at the
>> tradeoffs. I just see "my software will do everything!" - ambition,
>> enthusiasm and well-intentioned goals, but not enough realism and
>> consideration of the down-sides and costs to current programmers writing
>> new software.
>
> I look at the writing of a compiler, and of designing its various
> facets, as a one-time labor effort. I exert for a time, and then
> accommodate the various needs, and then it exists from that point
> forward.
>

That is not realistic. But you already know I think that - and again,
it is your choice.

> My one-man exertion (or small team-of-five exertion) is very
> trivial to the 10s of thousands to millions of developers who
> may one day use CAlive. I consider those things in the design,
> and I work for them.

I really think that if you ever expect anyone else to use your languages
and tools, you need to talk to other people and listen to their
suggestions and advice. So far, I see CAlive being totally tuned to
your own personal thoughts and ideas, which are very significantly
different from other people's (here I am only talking about the
technical aspects of programming languages and tools). That's okay if
you are the only user for the tools. If you ever want anyone else to
help you with them, or to use them, then you need to listen to what
other people want and what they say. You don't have the authority or
the power to force people to accept your way of thinking about
programming (not that I think you'd want to force anyone). You can't
expect anyone to accept your significantly different ways of thinking
just like that - you need to do so gradually, with dialogue.

Rick C. Hodgin

Aug 31, 2018, 12:49:20 PM
On 8/31/2018 12:40 PM, David Brown wrote:
> On 31/08/18 18:11, Rick C. Hodgin wrote:
>> On 8/31/2018 11:48 AM, David Brown wrote:
>>> On 31/08/18 17:18, Rick C. Hodgin wrote:
>>>> Understood. I have always taken a different view on things than
>>>> most people. I've looked at software like this: Once you write
>>>> something, once it's working and well-debugged, then it exists.
>>>> It's not like a car that will rust out over time. It's digital,
>>>> and so long as the underlying hardware hasn't changed, and so
>>>> long as the algorithm still exists in a form that can be compiled
>>>> today as it was 40+ years ago, then why jettison it?
>>>
>>> That is just not true. It is unfortunate, perhaps, but it is not true.
>>> Software does not last forever. Underlying hardware /does/ change.
>>> Requirements change. Expectations change. Compilers change. Languages
>>> change. Use-cases change. Other software that interacts with the
>>> software changes.
>>
>> That's my point... those things change for arbitrary reasons.
>
> /Sometimes/ the changes are arbitrary. Mostly they are not. And even
> if things are changed just because someone wants to change them, does
> not make it a bad thing.
>
> Don't dismiss all changes just because some are bad. Don't assume that
> all old software is useful or good just because some of it is.

I do. Things don't need to change. People don't place value
on prior labor efforts. They should.

>> That's my point. There is no reason why, on the 80386,
>> Microsoft could not have introduced a 4004 emulator, to
>> allow apps written for it to work.
>
> There are two /very/ good reasons why MS should not have made a 4004
> emulator.
>
> 1. Rounded to the nearest part per million of MS users, no one would use it.
>
> 2. It would cost. It would take time and money to make it - that would
> come out of /our/ pockets. It would have the risk of bugs, including
> security risks. It would complicate existing software. It would mean
> more work, more documentation, more testing.
>
> A simple cost/benefit analysis here is showing a solid cost and zero
> benefit - that is not a good sum.

Some people would've used it if it existed. There is, in 2018,
a rich 4004 enthusiast base, with people writing emulators and
compilers to this day. The 4004 was the first modern commercially
available microprocessor, so it has that base, but it would also
have had a similar base of users had they included it back in
the day.

Microsoft (and other companies) did not do that because they did
not place value on prior labor efforts written for the 4004.
They saw the easier low-hanging fruit of new fields, new targets,
to abandon the old and force those who used old hardware and the
software that went with it, to upgrade.

It's a wrong philosophy in my opinion.

>> Things that involve labor should not be so easily discounted
>> or jettisoned.
>
> Of course they should be jettisoned when they are no longer of positive
> benefit. At best, you put them in a museum.
>
> Again, there is plenty of old stuff that /is/ useful. But that does not
> mean that /all/ old stuff should be kept.

When it comes to working software, I disagree. There's no need
to re-write something that's working ... unless there's a need
to re-write it. And to force that need upon someone because you
have broken some legacy support that existed for years previously,
that is wrong.

I'm sorry you disagree with my position here, but I truly believe
what I'm stating.

>> My one-man exertion (or small team-of-five exertion) is very
>> trivial to the 10s of thousands to millions of developers who
>> may one day use CAlive. I consider those things in the design,
>> and I work for them.
>
> I really think that if you ever expect anyone else to use your languages
> and tools, you need to talk to other people and listen to their
> suggestions and advice. So far, I see CAlive being totally tuned to
> your own personal thoughts and ideas, which are very significantly
> different from other people's (here I am only talking about the
> technical aspects of programming languages and tools). That's okay if
> you are the only user for the tools. If you ever want anyone else to
> help you with them, or to use them, then you need to listen to what
> other people want and what they say. You don't have the authority or
> the power to force people to accept your way of thinking about
> programming (not that I think you'd want to force anyone). You can't
> expect anyone to accept your significantly different ways of thinking
> just like that - you need to do so gradually, with dialogue.

CAlive will support the bulk of C out of the box, and will have full
C90 and C99 support at some time. No existing source code needs to
be altered to use it.

CAlive offers a new ABI that allows LiveCode (edit-and-continue) and
many may find it useful for development on that basis alone, to then
take the code out and compile it later in a C/C++ compiler.

We'll see. Time will tell. But regardless, this will be the flag-
ship tool used by LibSF and its many goals and efforts.

--
Rick C. Hodgin

Bart

Aug 31, 2018, 1:44:07 PM
On 31/08/2018 17:49, Rick C. Hodgin wrote:

> Some people would've used it if it existed.  There is, in 2018,
> a rich 4004 enthusiast base, with people writing emulators and
> compilers to this day.  The 4004 was the first modern commercially
> available microprocessor, so it has that base, but it would also
> have had a similar base of users had they included it back in
> the day.

It was a crude microcontroller.

I can't think of any reason why MS would bundle an emulator for that,
above an emulator for hundreds of other devices, or why they would do
any of that above an emulator for Win16 or Win64 for which there is a
genuine need. (I have a dozen apps for Win16 that no longer work.)

> Microsoft (and other companies) did not do that because they did
> not place value on prior labor efforts written for the 4004.
> They saw the easier low-hanging fruit of new fields, new targets,
> to abandon the old and force those who used old hardware and the
> software that went with it, to upgrade.

What in the name of *** have Microsoft to do with the 4004 anyway?

Are you confusing Microsoft with Intel by any chance?

And if you did mean Microsoft, why aren't you asking the same question
of Unix, or OSX, or Android? (Or do those OSes already include 4004
emulations? That would be a fact I wasn't aware of.)

> CAlive will support the bulk of C out of the box, and will have full
> C90 and C99 support at some time.  No existing source code needs to
> be altered to use it.

If you are talking about existing, debugged, working C code, then you
can already utilise that via an existing C compiler, of which there are
a few. You just arrange for it to produce a binary file that can be
linked into your program.

(And often, it will already be in that form; you just need the new
language to deal with or translate the interfaces to that code.)

If, on the other hand, you are talking about creating new programs,
surely you want those programs to use the more up-to-date language with
its new, safer, more convenient features?

Again, what you are saying doesn't stack up.

--
bart

Rick C. Hodgin

unread,
Aug 31, 2018, 1:56:51 PM8/31/18
to
On 8/31/2018 1:44 PM, Bart wrote:
> On 31/08/2018 17:49, Rick C. Hodgin wrote:
>
>> Some people would've used it if it existed.  There is, in 2018,
>> a rich 4004 enthusiast base, with people writing emulators and
>> compilers to this day.  The 4004 was the first modern commercially
>> available microprocessor, so it has that base, but it would also
>> have had a similar base of users had they included it back in
>> the day.
>
> It was a crude microcontroller.
>
> I can't think of any reason why MS would bundle an emulator for that, above
> an emulator for hundreds of other devices, or why they would do any of that
> above an emulator for Win16 or Win64 for which there is a genuine need. (I
> have a dozen apps for Win16 that no longer work.)

The 4004 was quite powerful, and could've been emulated on a
4.77 MHz 8088/8086 CPU probably in real-time. With a small
ISA adapter board for I/O it could've handled any devices
which were run by it from the more modern PC, giving people
a path forward.

The 4004 wasn't too bad either, given the time it came out.
It could do in software what had previously required much
custom hardware. A huge advantage, and many people bought
it.

The 80386 contained built-in support for the original 8086,
sans some of the external I/O ports and gate A20's access
for how memory wrapped around 1 MB.

The 80486 contains backward compatibility for the 80386,
save a few new instructions.

The AMD64 architecture contains backward compatibility for
the 8086 all the way through the most advanced and extended
x86 designs before it.

It's hard to do support like that in hardware, and is much
easier to do it in software (Intel's Itanium originally
had IA-32 support for x86-based CPUs in hardware, then they
later switched to a software emulation layer because it was
faster (given how they did the emulation in hardware)).

Having backward support for things is a good idea. I plan
to incorporate it into everything I ever produce in software
and hardware. When a clean break is needed, it will be a
new product line with a clear departure point, and support
for the old one will be maintained indefinitely.

>> Microsoft (and other companies) did not do that because they did
>> not place value on prior labor efforts written for the 4004.
>> They saw the easier low-hanging fruit of new fields, new targets,
>> to abandon the old and force those who used old hardware and the
>> software that went with it, to upgrade.
>
> What in the name of *** have Microsoft to do with the 4004 anyway?

Microsoft was MS-DOS, Windows, Windows 95, etc. They were
the company that dominated 90% and greater use of all the
modern PCs.

Have you not heard of them?

> Are you confusing Microsoft with Intel by any chance?

Nope. While the base architecture of the 4004 is seen in
the design of the 8086, 80386, 80486, etc., it's not quite
binary compatible, and Intel did not seek to add any binary
support for it in their 8086 design, though they could've
with minimal difficulty.

So, Microsoft, being a software company, could've written
that emulator and provided backward support. It would've
given people a reason to upgrade to an 8086, 80286, or
80386 as the original 8086-based emulator could've worked
just fine in those various machines.

> And if you did mean Microsoft, why aren't you asking the same question of
> Unix, or OSX, or Android? (Or do those OSes already include 4004 emulations?
> That would be a fact I wasn't aware of.)

Microsoft was bigger (money-wise, resource-wise, user-base-
wise, etc).

Android is a total departure from historical devices. It
deserves to be its own new thing. However, there are many
emulators written for Android.

>> CAlive will support the bulk of C out of the box, and will have full
>> C90 and C99 support at some time.  No existing source code needs to
>> be altered to use it.
>
> If you are talking about existing, debugged, working C code, then you can
> already utilise that via an existing C compiler, of which there are a few.
> You just arrange for it to produce a binary file that can be linked into your
> program.

With CAlive you compile the old, and make a transition to
the new. You also gain some features in CAlive because I
process data as data, and not as the limitations of the
type they're stored in. That's a paradigm shift for a low-
level language.

> (And often, it will already be in that form; you just need the new language
> to deal with or translate the interfaces to that code.)

Yeah. Everyone can keep on using whatever compilers they
have always used for their C (and some C++) code. But if
they want to use the new features of CAlive, they'll be
pleased to know they don't have to change much of anything
to make the migration, and then they gain the new features.

> If, on the other hand, you are talking about creating new programs, surely
> you want those programs to use the more up-to-date language with its new,
> safer, more convenient features?

How safe something is depends on the developer as much as
the features of the language.

> Again, what you are saying doesn't stack up.

You're just not looking at it correctly, Bart. You're also
not asking me questions waiting for my answers before you
conclude negative things about me or my ideas. You're going
on assumption, then concluding assumed negative things about
me and my ideas.

If you are interested in the truth, you should wait for the
discovery phase to yield information, so you can come to an
informed conclusion.

--
Rick C. Hodgin

Rick C. Hodgin

unread,
Aug 31, 2018, 2:03:57 PM8/31/18
to
On 8/31/2018 1:56 PM, Rick C. Hodgin wrote:
> The 4004 was quite powerful, and could've been emulated on a
> 4.77 MHz 8088/8086 CPU probably in real-time.  With a small
> ISA adapter board for I/O it could've handled any devices
> which were run by it from the more modern PC, giving people
> a path forward.
>
> The 4004 wasn't too bad either, given the time it came out.
> It could do in software what had previously required much
> custom hardware.  A huge advantage, and many people bought
> it.

I will say that the 4004 implementer, Federico Faggin, has
stated that the 4004's biggest weakness was its very small
external bus. He said if they had widened it to the full
12-bits of data it could access, that it would've provided
triple the performance it did. He said that was his worst
design consideration because it affected performance most
of all.

He was also frustrated that Intel didn't push its sales
and use more significantly, which is why he left Intel to
go and create the Z80.

He later developed the touch screen technology we use on
our various devices, and now he's trying to work on get-
ting computers to think by building models of neural nets
that mimic our own thinking processes.

https://en.wikipedia.org/wiki/Federico_Faggin

These legacy things should not be forgotten. And the les-
sons learned from their mistakes (in hindsight) should also
not be forgotten.

Some things really are fundamental in semiconductors, and
we need to address those needs for what they are.

--
Rick C. Hodgin

Bart

unread,
Aug 31, 2018, 2:37:48 PM8/31/18
to
On 31/08/2018 19:03, Rick C. Hodgin wrote:
> On 8/31/2018 1:56 PM, Rick C. Hodgin wrote:
>> The 4004 was quite powerful, and could've been emulated on a
>> 4.77 MHz 8088/8086 CPU probably in real-time.  With a small
>> ISA adapter board for I/O it could've handled any devices
>> which were run by it from the more modern PC, giving people
>> a path forward.
>>
>> The 4004 wasn't too bad either, given the time it came out.
>> It could do in software what had previously required much
>> custom hardware.  A huge advantage, and many people bought
>> it.
>
> I will say that the 4004 implementer, Federico Faggin, has
> stated that the 4004's biggest weakness was its very small
> external bus.  He said if they had widened it to the full
> 12-bits of data it could access, that it would've provided
> triple the performance it did.  He said that was his worst
> design consideration because it affected performance most
> of all.
>
> He was also frustrated that Intel didn't push its sales
> and use more significantly, which is why he left Intel to
> go and create the Z80.

Which doesn't emulate 4004 either. (You also seem to be skipping over
4040 and 8008.)

Z80 was a much better version of 8080: simpler power requirements,
simple square wave clock, simpler reset, simpler interfacing in general,
faster clock speed, dram support, extra registers, extra instructions...

And compatible with 8080 which is what people were interested in because
of CP/M. They weren't interested, even then, in running 4004 programs;
they would just use a 4004 if so.

So it is nonsensical to suggest that people running high-performance
64-bit machines now, 40 years later, are still interested in a 4004
emulator (for what purpose?). The few individuals who are, can just
download an emulator. Or write one.

I just can't see this is something that is anything at all to do with
Microsoft, other than you are looking for something to lay at their door
because of some personal issue you have with them.

>
> He later developed the touch screen technology we use on
> our various devices, and now he's trying to work on get-
> ting computers to think by building models of neural nets
> that mimic our own thinking processes.

It sounds like even he has moved on then.

--
bart

Rick C. Hodgin

unread,
Aug 31, 2018, 2:56:49 PM8/31/18
to
On 8/31/2018 2:37 PM, Bart wrote:
> And compatible with 8080 which is what people were interested in because of
> CP/M. They weren't interested, even then, in running 4004 programs; they
> would just use a 4004 if so.
>
> So it is nonsensical to suggest that people running high-performance 64-bit
> machines now, 40 years later, are still interested in a 4004 emulator (for
> what purpose?). The few individuals who are, can just download an emulator.
> Or write one.

My point is that once something is developed, it should be maintained
in legacy. If you don't want to incorporate it in hardware, then pro-
vide that software emulator for it, but still allow prior labor efforts
in software to run.

> I just can't see this is something that is anything at all to do with
> Microsoft, other than you are looking for something to lay at their door
> because of some personal issue you have with them.

Only because it's software. Intel could've done it too.

Intel did learn their lesson with the 80286, however, in providing
full backward compatibility with the 8086 in their new protected mode
hardware design, primarily because of how much software there was
written for their prior architecture. People needed a way to move
that content they already purchased forward, and not be forced to
replace it all up front.

The Z80 is still in mass manufacturing in 2018, with an estimated 4
to 8 billion devices having been manufactured since it was created.
Faggin has stated that it's impossible to know the number because
their hardware licensing allowed companies to embed Z80 cores through
a one-time license fee, rather than a per-unit license fee.

--
Rick C. Hodgin

Ian Collins

unread,
Aug 31, 2018, 8:09:51 PM8/31/18
to
On 31/08/18 23:58, David Brown wrote:
> On 31/08/18 12:53, Ben Bacarisse wrote:
>> David Brown <david...@hesbynett.no> writes:
>> <snip>
>>> Personally, I cannot see the connection between having 3-bit fields in
>>> an instruction set (as you often have in cpus with 8 registers) and
>>> wanting to use octal. It makes no sense
>>
>> There seems to be a strong anti-octal feeling around here! Octal/hex --
>> it makes no difference to me. Both are easy to read in my opinion, so
>> "making sense" is not really the point. All I need is any reason, even
>> a tiny one, for the balance to tip one way or the other and I'll go with
>> that.
>
> My gripe with octal is not, in fact, the octal itself. If someone wants
> to write octal, that's their choice. But I don't see it as a
> particularly useful feature. Indeed, I cannot think of any situation in
> all my programming life (assembly, C, or anything else) where I have
> thought octal would be useful - but I /have/ thought that base 4 would
> sometimes be useful.

There posts someone who hasn't had to work in MACRO-11! My first pure
project was on PDP-11s, using a mix of HLL and assembler. PDP-11 was
(along with 68K) the most logical and consistent assembly language I
have ever encountered.

--
Ian.

Chris M. Thomasson

unread,
Aug 31, 2018, 8:39:33 PM8/31/18
to
On 8/31/2018 4:10 AM, Malcolm McLean wrote:
> On Friday, August 31, 2018 at 11:54:01 AM UTC+1, Ben Bacarisse wrote:
>> David Brown <david...@hesbynett.no> writes:
>> <snip>
>>> Personally, I cannot see the connection between having 3-bit fields in
>>> an instruction set (as you often have in cpus with 8 registers) and
>>> wanting to use octal. It makes no sense
>>
>> There seems to be a strong anti-octal feeling around here! Octal/hex --
>> it makes no difference to me. Both are easy to read in my opinion, so
>> "making sense" is not really the point. All I need is any reason, even
>> a tiny one, for the balance to tip one way or the other and I'll go with
>> that.
>>
> A list of hexadecimal numbers of any length will look like hex, because
> you've got a six in sixteen chance that any digit will be a letter.
> However octal looks like denary, you can read quite a few numbers before
> you realise that they must be octal because 8 and 9 are missing.
>

Ciphertext:
___________________
0.05888309126700248 0.20480949594366504
0.2160703875185702 0.29710291350167534

37 56 5B 93 34 7C BE 4A 38 4F 22 81 7D 1C D5 FD 19 DB 65 99 4A 6C 8D C7
0F 14 54 B4
___________________

That decrypts to:
___________________
They will look like hex. ;^)
___________________

over on:

http://funwithfractals.atspace.cc/ffe

Copy and paste the data between the lines into the ciphertext box, and
click the decrypt button. ;^)

Reinhardt Behm

unread,
Aug 31, 2018, 9:16:49 PM8/31/18
to
No objection here.
But even with A429, using octal is just a convention, as it originally was in
C because DEC used octal.
I often use FORTH (not for Avionics). There I can use any base. Just set the
variable base to the desired value and then print. This is sometimes quite
handy.

--
Reinhardt

Reinhardt Behm

unread,
Aug 31, 2018, 9:24:40 PM8/31/18
to
AT Friday 31 August 2018 19:58, David Brown wrote:

> My gripe with octal is not, in fact, the octal itself. If someone wants
> to write octal, that's their choice. But I don't see it as a
> particularly useful feature. Indeed, I cannot think of any situation in
> all my programming life (assembly, C, or anything else) where I have
> thought octal would be useful - but I have thought that base 4 would
> sometimes be useful.
>
> The things that irritates me about octal is the notation. The use of
> "0" as the prefix in C - that's just /wrong/. "023" should be 23. I'd
> be fine with "0o23" (though not "0O23", for obvious reasons), "8#23",
> "octal(23)", or perhaps "0k23" (avoiding the dangerous "o").

This notation was just nonsense. I had people trying to write decimal init
values in a nice table form. To make it look tidy they wrote all values with
3 digits, using leading zeros. If no value contains the digits 8 or 9, the
compiler happily accepts this. The result was not what was intended.

PL/I has a better notation: '00101'b is binary, '00101'b3 is octal,
'00101'b4 is hex. They should have copied that; it existed before C was
created. But it is now much too late to complain. C is as it is; we just
have to accept such pitfalls.

--
Reinhardt

Malcolm McLean

unread,
Sep 1, 2018, 6:52:48 AM9/1/18
to
On Friday, August 31, 2018 at 3:05:34 PM UTC+1, Ben Bacarisse wrote:
> David Brown <david...@hesbynett.no> writes:
>
> > On 31/08/18 13:10, Malcolm McLean wrote:
> >> On Friday, August 31, 2018 at 11:54:01 AM UTC+1, Ben Bacarisse wrote:
> >>> David Brown <david...@hesbynett.no> writes:
> >>> <snip>
> >>>> Personally, I cannot see the connection between having 3-bit fields in
> >>>> an instruction set (as you often have in cpus with 8 registers) and
> >>>> wanting to use octal. It makes no sense
> >>>
> >>> There seems to be a strong anti-octal feeling around here! Octal/hex --
> >>> it makes no difference to me. Both are easy to read in my opinion, so
> >>> "making sense" is not really the point. All I need is any reason, even
> >>> a tiny one, for the balance to tip one way or the other and I'll go with
> >>> that.
> >>>
> >> A list of hexadecimal numbers of any length will look like hex, because
> >> you've got a six in sixteen chance that any digit will be a letter.
> >> However octal looks like denary, you can read quite a few numbers before
> >> you realise that they must be octal because 8 and 9 are missing.
>
> The data are rarely random. The actual proportion of A-Fs in a typical
> hex representation is much lower than the theoretical 37.5%. For text
> files, it's about 11% and for executable files, it's about 20% (from a
> small survey on my machine). Obviously if you have lots of encrypted
> files (and not further ASCII encoded), you will get close to the 37%.
>
Yes, actually in normal denary the low digits are more common than
the high ones. That's because valid numbers go in ranges. If the range
is 1 - 200, just over half the leading digits on an even distribution
will be 1 (the values 1, 10-19 and 100-199).

already...@yahoo.com

unread,
Sep 1, 2018, 2:19:04 PM9/1/18
to
Which does not change my opinion that *today* the presence of octal literals in C makes it a less reliable language than it would be without octals.
I have been programming in C professionally for 26 years, and for almost 30 years in total, and in all these years I don't remember one case where an octal literal was used intentionally. But I do remember probably close to a dozen cases where one was entered into code by mistake. And at least one case where finding the mistake took considerable time.