
Hard Forth

677 views

lehs

unread,
Aug 21, 2016, 4:04:25 AM8/21/16
to
Shouldn't it be possible, when using Forth, to address sectors on an external disk, and to read and write those sectors regardless of what the disk contains?

Mark Wills

unread,
Aug 21, 2016, 4:12:48 AM8/21/16
to
If you have no underlying OS then that's the only way you can do it. You'll interface directly with the disk controller.

Julian Fondren

unread,
Aug 21, 2016, 4:16:08 AM8/21/16
to
On Sunday, August 21, 2016 at 3:04:25 AM UTC-5, lehs wrote:
> Shouldn't it be possible, when using Forth, to address sectors on an external disk, and to read and write those sectors regardless of what the disk contains?

On a Linux-hosted Forth it's a simple matter to open a block device
(unix term) without a filesystem of a disk and access it directly.
/dev/sda , /dev/sdb , etc. Just open it like a file. Seek to a
position and read or write a block. I used a 4GB CompactFlash card as
a block library on the Sharp Zaurus like this for a while. It's also
convenient for data -- I had cells of prime factors starting at block
10000.
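Julian's approach can be sketched with the standard Forth-200x File-Access words. This is a sketch only, assuming a Gforth-style Linux-hosted Forth; the device path /dev/sdb, the 512-byte sector size, and the word names are illustrative assumptions (and raw device access needs root):

```forth
\ Sketch only: raw block-device access on a Linux-hosted Forth
\ using the standard File-Access words.  /dev/sdb, 512-byte
\ sectors, and all names here are assumptions for illustration.
0 VALUE dev-fd

: OPEN-DEV ( -- )
  S" /dev/sdb" R/W BIN OPEN-FILE THROW TO dev-fd ;

: SECTOR-READ ( addr sector# -- )
  512 UM* dev-fd REPOSITION-FILE THROW   \ seek to sector# * 512
  512 dev-fd READ-FILE THROW DROP ;      \ fill addr with one sector

: SECTOR-WRITE ( addr sector# -- )
  512 UM* dev-fd REPOSITION-FILE THROW
  512 dev-fd WRITE-FILE THROW ;          \ write one sector from addr
```

`UM*` produces the double-cell offset that `REPOSITION-FILE` expects, so this works past the 4 GB mark on a 32-bit system.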

If you use plain old files on a filesystem you'll get some benefits
(rsync; Linux managing filesystem cache for you), but you can easily
apply your own cleverness to your block words (f.e., instead of
rsyncing files to a backup, keep a journal of touched blocks numbers
and then write only those to the backup).


-- Julian

lehs

unread,
Aug 21, 2016, 4:17:03 AM8/21/16
to
On Sunday 21 August 2016 at 10:12:48 UTC+2, Mark Wills wrote:
> If you have no underlying OS then that's the only way you can do it. You'll interface directly with the disk controller.

Yes, but shouldn't there be words in any Forth implementation that allow it, whatever operating system the Forth is built upon?

lehs

unread,
Aug 21, 2016, 4:20:46 AM8/21/16
to
So Unix does what should be possible in Forth? The idea of Unix could work as an idea for Forth, but Forth shouldn't depend on Unix to do such things. In my opinion.

Julian Fondren

unread,
Aug 21, 2016, 4:41:37 AM8/21/16
to
On Sunday, August 21, 2016 at 3:20:46 AM UTC-5, lehs wrote:
> So Unix does what should be possible in Forth? The idea of Unix could work as an idea for Forth, but Forth shouldn't depend on Unix to do such things. In my opinion.

Oh, my mistake. You wanted to complain about the standard. Well,
there's no actual gap in the standard here. Using Forth-200x for
convenience:

http://www.forth200x.org/documents/html/file.html
^ is what you use on Linux if you're going to get a block device
yourself. Since hosted Forths tend not to automatically load block
libraries, you're left to managing the block store yourself.

http://www.forth200x.org/documents/html/block.html
^ is what you use to actually work with the block store. On an
embedded Forth you may not have to actually specify where your blocks
are stored; that tends to be determined by what's available.
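A sketch of how the two wordsets combine on a hosted system. `USE` is Gforth's (non-standard) word for attaching a file as the block store; the file name, block number, and word name below are made up for illustration:

```forth
\ Sketch: the Block wordset on a hosted Forth.  USE is
\ Gforth-specific; BLOCK, UPDATE and FLUSH are standard.
USE blocks.fb                  \ attach a file as the block store

: SAVE-GREETING ( -- )
  10000 BLOCK                  \ addr of block 10000's 1024-byte buffer
  S" Hello, blocks" ROT SWAP MOVE
  UPDATE                       \ mark the buffer as modified
  FLUSH ;                      \ write modified buffers back to disk
```

On an embedded Forth the `USE` line disappears: the block store is simply wherever the mass storage is.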


-- Julian

Mark Wills

unread,
Aug 21, 2016, 4:54:14 AM8/21/16
to
I see where you are going with this but in fairness, in this particular case I would argue that Forth has that covered via the BLOCK wordset. I think one could argue that Forth has a logical sector size of 1024 bytes. How that maps on to physical sector size (on my system physical sector size is 256 bytes) is a matter of implementation. On the old floppy systems the forth systems would directly access the disk. There was no file system on the disk (so they couldn't be read with 'normal' file management programs). Later systems used a file as a block file on a formatted disk. My system does this.
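The mapping Mark describes is simple arithmetic. A hypothetical sketch (the 256-byte physical sector size is his system's; the word names are invented for illustration):

```forth
\ Sketch: mapping one 1024-byte Forth block onto 256-byte
\ physical sectors, i.e. four sectors per block.  Names are
\ illustrative, not from any particular system.
256  CONSTANT /SECTOR
1024 CONSTANT /BLOCK
/BLOCK /SECTOR / CONSTANT SECS/BLK    \ = 4

: BLOCK>SECTORS ( blk# -- sector# #sectors )
  SECS/BLK * SECS/BLK ;               \ first sector, and how many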

lehs

unread,
Aug 21, 2016, 10:26:36 AM8/21/16
to
On Sunday 21 August 2016 at 10:54:14 UTC+2, Mark Wills wrote:
> I see where you are going with this but in fairness, in this particular case I would argue that Forth has that covered via the BLOCK wordset. I think one could argue that Forth has a logical sector size of 1024 bytes. How that maps on to physical sector size (on my system physical sector size is 256 bytes) is a matter of implementation. On the old floppy systems the forth systems would directly access the disk. There was no file system on the disk (so they couldn't be read with 'normal' file management programs). Later systems used a file as a block file on a formatted disk. My system does this.

But would BLOCK let me write bootstrap code to a secondary disk?

Coos Haak

unread,
Aug 21, 2016, 11:08:14 AM8/21/16
to
On Sun, 21 Aug 2016 07:26:34 -0700 (PDT), lehs wrote:

> On Sunday 21 August 2016 at 10:54:14 UTC+2, Mark Wills wrote:
>> I see where you are going with this but in fairness, in this particular case I would argue that Forth has that covered via the BLOCK wordset. I think one could argue that Forth has a logical sector size of 1024 bytes. How that maps on to physical sector size (on my system physical sector size is 256 bytes) is a matter of implementation. On the old floppy systems the forth systems would directly access the disk. There was no file system on the disk (so they couldn't be read with 'normal' file management programs). Later systems used a file as a block file on a formatted disk. My system does this.
>
> But would BLOCK let me write bootstrap code to a secondary disk?

If you use 3.5-inch diskettes, you have 1440 blocks on
the primary disk (numbered 0..1439) and 1440 blocks on the secondary,
numbered 1440..2879. Under MS-DOS the first 512 bytes hold the boot code,
and some 17 KB are taken by the directory. You can overwrite
that with your own code. Or you can use an offset of 17 and have
a disk with 1423 blocks, and never touch the boot sector with BLOCK.
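Coos's scheme amounts to biasing every block number before it reaches BLOCK, much like the old fig-Forth OFFSET. A hypothetical sketch (the word names are invented):

```forth
\ Sketch: keep BLOCK away from the boot sector and directory by
\ reserving the first 17 KiB (17 blocks of 1 KiB).  Illustrative
\ names only; fig-Forth did this with a variable called OFFSET.
17 CONSTANT #RESERVED                \ blocks set aside at the start

: SAFE-BLOCK ( blk# -- addr )
  #RESERVED + BLOCK ;                \ block 0 maps to physical block 17
```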

regards, Coos

Richard Owlett

unread,
Aug 21, 2016, 3:17:00 PM8/21/16
to
On 8/21/2016 3:04 AM, lehs wrote:
> Shouldn't it be possible, when using Forth, to address sectors on an external disk,
> and to read and write those sectors regardless of what the disk contains?

I've been following this thread and began wondering if everyone
is using the same definition of the word {the character sequence}
"Forth".

I see multiple possible definitions of "Forth/FORTH".

Question to lehs in particular and list in general -
Does http://www.forthos.org/ answer &/or raise pertinent questions?

lehs

unread,
Aug 21, 2016, 4:50:42 PM8/21/16
to
First, I'm not negative about the Forth standards. A standard is a compromise, and whether or not to use a standard is up to the Forther. Also, I don't want to define Forth. My idea of the definition of Forth is the core words we all know about.

But it seems to me that Forth systems should perhaps be more hardware-oriented. Algol didn't even have standards for input or output, since it was a language for algorithms. Forth is excellent for algorithms, but has always had rather general words for input and output.

These days, input and output mean a lot more than keyboards, screens and printers, but Forth seems to have stopped developing in that direction since the Forth community hijacked Forth from Charles Moore.

rickman

unread,
Aug 21, 2016, 5:37:35 PM8/21/16
to
On 8/21/2016 4:54 AM, Mark Wills wrote:
> I see where you are going with this but in fairness, in this particular case I would argue that Forth has that covered via the BLOCK wordset. I think one could argue that Forth has a logical sector size of 1024 bytes. How that maps on to physical sector size (on my system physical sector size is 256 bytes) is a matter of implementation. On the old floppy systems the forth systems would directly access the disk. There was no file system on the disk (so they couldn't be read with 'normal' file management programs). Later systems used a file as a block file on a formatted disk. My system does this.

It may not map to the physical sectors at all. Blocks can be
implemented in files which can be fragmented across the drive.

--

Rick C

Mark Wills

unread,
Aug 21, 2016, 5:51:46 PM8/21/16
to
Indeed. That's how my system works. The blocks file is a text file on the host OS formatted disk.

hughag...@gmail.com

unread,
Aug 21, 2016, 10:47:00 PM8/21/16
to
On Sunday, August 21, 2016 at 1:50:42 PM UTC-7, lehs wrote:
> These days, input and output mean a lot more than keyboards, screens and printers, but Forth seems to have stopped developing in that direction since the Forth community hijacked Forth from Charles Moore.

The Forth community hijacked Forth from Charles Moore???

What happened is that Elizabeth Rather wanted to hit FIG (Forth Interest Group) with a lawsuit to stop them from distributing FIG-Forth, which was largely copied from Forth Inc.'s MicroForth, and to also stop them from using the name "Forth" --- she wanted Forth to be proprietary to Forth Inc. --- Charles Moore refused to go along with the lawsuit (he was okay with FIG and with there being Forth programmers who didn't pay Forth Inc.) and this is why he had to leave Forth Inc. Without his support, Elizabeth Rather's lawsuit against FIG died --- he is the inventor of Forth after all --- he is the only person who could succeed in court with such a lawsuit.

The Forth community did not hijack Forth from Charles Moore --- he could have used lawsuits to exterminate the Forth community --- he didn't do this because he felt confident that he would always continue to be the best Forth programmer and that Forth Inc. would continue to lead the Forth community due to his skill --- after he got kicked out of Forth Inc. however, there were no programmers at Forth Inc. and so they just tried to keep his old code going forever.

PolyForth for the 16-bit x86 was limited to 64KB (application code, application data, compiler and dictionary) because it was ported directly from one of his old Forth systems that had been written for a 64KB system, most likely the PDP-11. This was pathetic! Everybody else in the world understood that the whole purpose of the x86 was to address more than 64KB and they figured out how to use the segment registers to do this. UR/Forth used a memory scheme roughly comparable to the Small memory-model. Also, according to my benchmarks, UR/Forth code speed was the same as Borland's Turbo C code speed using the Small memory model. By comparison, PolyForth compared badly to QBASIC in both the size of the supported programs and their speed. Even today, SwiftForth continues to be total crap; Forth Inc. is composed entirely of sales-people, and they have no programming talent.

As things turned out, Charles Moore didn't continue to be the best Forth programmer anyway --- he switched from Forth programming to Forth-processor design, and his designs were too far-fetched to be practical --- from what I've heard of his Forth programming, he wasn't all that good at it anyway, because he didn't understand the importance of general-purpose data-structures in regard to supporting large programs written by professional programmers.

Cecil Bayona

unread,
Aug 21, 2016, 11:29:49 PM8/21/16
to


On 8/21/2016 9:46 PM, hughag...@gmail.com wrote:

> What happened is that Elizabeth Rather wanted to hit FIG (Forth Interest Group) with a lawsuit to stop them from distributing FIG-Forth which was largely copied from Forth Inc.'s MicroForth, and to also stop them from using the name "Forth" --- she wanted Forth to be proprietary to Forth Inc. --- Charles Moore refused to go along with the lawsuit (he was okay with FIG and with there being Forth programmers who didn't pay Forth Inc.) and this is why he had to leave Forth Inc. Without his support, Elizabeth Rather's lawsuit against FIG died --- he is the inventor of Forth after all --- he is the only person who could succeed in court with such a lawsuit.
>

Is this documented somewhere? I would like to read it, but Google floods
you with bad links when I look for it. What lawsuit does not have the
word Forth in it?
--
Cecil - k5nwa

Ron Aaron

unread,
Aug 22, 2016, 12:03:28 AM8/22/16
to


On 22/08/2016 06:29, Cecil Bayona wrote:

> Is this documented somewhere? I would like to read it, but Google floods
> you with bad links when I looked for it. What lawsuit does not have the
> word forth in it?

Don't feed the trolls.

Cecil Bayona

unread,
Aug 22, 2016, 12:13:55 AM8/22/16
to
Even trolls need to eat once in a while.

Either there is proof or there isn't proof, I would like to know one way
or the other.

--
Cecil - k5nwa

Ron Aaron

unread,
Aug 22, 2016, 12:18:50 AM8/22/16
to


On 22/08/2016 07:13, Cecil Bayona wrote:

> Even trolls need to eat once in a while.
>
> Either there is proof or there isn't proof, I would like to know one way
> or the other.

You won't get any actual facts out of him. His MO is to engage in
ad-hominem attacks, repeatedly, ad nauseam.

He has serious problems, and is best ignored. I should have thought
that obvious by the tone and content of his posts...

hughag...@gmail.com

unread,
Aug 22, 2016, 1:08:22 AM8/22/16
to
Jeff Fox told me about this. I doubt that the lawsuit was actually filed because doing so would have required Charles Moore to agree to it, which he refused to do. My understanding is that he left Forth Inc. because the argument over the lawsuit was an irreconcilable difference between him and Elizabeth Rather --- without him, she had to give up on the lawsuit --- 12 years later though, ANS-Forth accomplished the same thing for her, in that ANS-Forth trapped the whole Forth community in a pit of incompetence that prevented them from writing commercial software in Forth.

You could ask Charles Moore, except that he doesn't care about the Forth community any more and won't talk to you --- that is pretty sad, that the inventor of Forth isn't on speaking terms with the Forth community --- maybe he thinks that we are all idiots who just make him look stupid, and he now regrets having invented Forth.

Cecil Bayona

unread,
Aug 22, 2016, 1:55:11 AM8/22/16
to
Second-hand testimony that would not hold up in court; actually third-hand. Jeff Fox is dead and I have not found any comment by him on that subject, and I would not think that Mr Moore would care to get involved one way or the other. So this is a case of he said, she said, most likely never to be resolved. Mr Moore was not happy with the standard, but he associated with FIG; for whatever that is worth, these are guesses as to motives.

I've seen him attend the SVFIG meetings and give some talks, so he is still involved, or was recently; he is 78 now and it shows. Jeff Fox's site had some videos of Mr Moore but they are gone; one of these days I will search for them, as I would like to hear what he had to say. From some articles I can see that the Forth community and he are heading in different directions; that is neither good nor bad, but it is what it is.

--
Cecil - k5nwa

hughag...@gmail.com

unread,
Aug 22, 2016, 2:27:10 AM8/22/16
to
It is well known that Charles Moore was not happy with the ANS-Forth Standard, yet his name appears in the ANS-Forth document as a contributor. Realistically, ANSI would not have certified ANS-Forth if they did not have the inventor's name listed --- they don't actually know anything about Forth --- all they care about is seeing Forth's inventor's name listed, and they give it their stamp of approval.

> I've seen him attend the SVFIG meetings and give some talks so he is
> still involved or was recently, he is 78 now and it shows. Jeff Fox site
> had some videos of Mr Moore but they are gone, one of these days I will
> search for them, I would like to hear what he had to say, but from some
> articles I can see the Forth Community and him are heading in different
> directions, that is neither good nor bad but it is what it is.

I find it odd that Charles Moore never visits comp.lang.forth --- he is the inventor of Forth --- you would think that he would be interested in what the Forth community is doing.

I also find it odd that Charles Moore gave up ownership of Forth Inc. to Elizabeth Rather --- he was the founder of the company --- you would think that he would have kicked her out, rather than leave himself. He didn't get paid anything for his shares in the corporation but gave them up completely.

The obvious explanation for this is that he was required by a court order to give up everything that he owned and everything that he cared about. Most likely, when he and Elizabeth Rather split up it went to court, and he was required to pay alimony. He was required to give up Forth Inc. and to also agree to not interfere with Forth Inc. sales by making contact with Forth Inc.'s customers or potential customers --- everybody on comp.lang.forth is a potential customer of Forth Inc. --- that would explain why he never visits comp.lang.forth.

Were Charles Moore and Elizabeth Rather legally married and they got divorced? Did they have a kid? If there was a kid, then Charles Moore would seriously lose everything that he owned and everything that he cared about, which seems to be what happened.

Well, it is certainly odd that the inventor of Forth never makes contact with any of us Forth programmers --- it seems unlikely that he hates us for programming in Forth, considering that he supports FIG --- more likely is that he is required by a court order to not make contact with us.

Anyway --- to hell with ANS-Forth and to hell with Forth-200x --- I owe them nothing!

lehs

unread,
Aug 22, 2016, 2:30:12 AM8/22/16
to
Stupid of me, joking about the Forth community and Charles Moore. Of course Forth could have been a trade secret, but the important issue is the development of Forth since the community and Charles Moore drifted apart.

Paul Rubin

unread,
Aug 22, 2016, 3:04:05 AM8/22/16
to
lehs <skydda...@gmail.com> writes:
> the important issue is the development of Forth since the community
> and Charles Moore drifted apart.

I think Chuck's interests moved away from the Forth language and into
Forth hardware some decades ago, so he hasn't cared much about how
people use Forth on conventional computers. He was involved with
GreenArrays, and he has traditionally attended Forth Day at SVFIG,
including last year.

I don't know if GreenArrays is very active these days, now that its
legal disputes are afaik resolved.

Elizabeth D. Rather

unread,
Aug 22, 2016, 3:42:19 AM8/22/16
to
On 8/21/16 5:29 PM, Cecil Bayona wrote:
>
>
> On 8/21/2016 9:46 PM, hughag...@gmail.com wrote:
>
>> What happened is that Elizabeth Rather wanted to hit FIG (Forth Interest Group) with a lawsuit to stop them from distributing FIG-Forth which was largely copied from Forth Inc.'s MicroForth, and to also stop them from using the name "Forth" --- she wanted Forth to be proprietary to Forth Inc. --- Charles Moore refused to go along with the lawsuit (he was okay with FIG and with there being Forth programmers who didn't pay Forth Inc.) and this is why he had to leave Forth Inc. Without his support, Elizabeth Rather's lawsuit against FIG died --- he is the inventor of Forth after all --- he is the only person who could succeed in court with such a lawsuit.
>>
>
> Is this documented somewhere?

No, because it didn't happen.

Cheers,
Elizabeth

> I would like to read it, but Google floods
> you with bad links when I looked for it. What lawsuit does not have the
> word forth in it?


--
==================================================
Elizabeth D. Rather (US & Canada) 800-55-FORTH
FORTH Inc. +1 310.999.6784
5959 West Century Blvd. Suite 700
Los Angeles, CA 90045
http://www.forth.com

"Forth-based products and Services for real-time
applications since 1973."
==================================================

rickman

unread,
Aug 22, 2016, 6:00:39 AM8/22/16
to
On 8/22/2016 2:27 AM, hughag...@gmail.com wrote:
>
> I find it odd that Charles Moore never visits comp.lang.forth --- he
> is the inventor of Forth --- you would think that he would be
> interested in what the Forth community is doing.

<<< much insane sounding ranting snipped >>>

Really? Why would anyone working in Forth want to come here and listen
to your insane rantings?

--

Rick C

HAA

unread,
Aug 23, 2016, 2:11:21 AM8/23/16
to
lehs wrote:
> ...
> Of course Forth could have
> been a trade secret,
> ...

From a speech by Chuck reprinted in FD v1n6 (1980) ...

"... The conclusion was that maybe it [FORTH] could be patented, but it
would take Supreme Court action to do it. NRAO wasn't interested. As
inventor I had fall-back rights but I didn't want to spend $10,000 either,
so FORTH was not patented. This probably was a good thing."



lehs

unread,
Aug 23, 2016, 2:45:54 AM8/23/16
to
Yes, but what I meant was that trade secrets are not much for a real conspiracy theorist.

https://iesho.blogspot.se/2015/02/21-murder-of-swedish-prime-minister.html

Albert van der Horst

unread,
Aug 29, 2016, 9:25:29 AM8/29/16
to
In article <f14aebd4-a311-4d91...@googlegroups.com>,
lehs <skydda...@gmail.com> wrote:
>On Sunday 21 August 2016 at 21:17:00 UTC+2, Richard Owlett wrote:
>> On 8/21/2016 3:04 AM, lehs wrote:
>> > Shouldn't it be possible, when using Forth, to address sectors on an external disk,
>> > and to read and write those sectors regardless of what the disk contains?
>>
>> I've been following this thread and began wondering if everyone
>> is using the same definition of the word {the character sequence}
>> "Forth".
>>
>> I see multiple possible definitions of "Forth/FORTH".
>>
>> Question to lehs in particular and list in general -
>> Does http://www.forthos.org/ answer &/or raise pertinent questions?
>First, I'm not negative about the Forth standards. A standard is a
>compromise, and whether or not to use a standard is up to the Forther.
>Also, I don't want to define Forth. My idea of the definition of Forth
>is the core words we all know about.
>
>But it seems to me that Forth systems should perhaps be more
>hardware-oriented. Algol didn't even have standards for input or output,
>since it was a language for algorithms. Forth is excellent for
>algorithms, but has always had rather general words for input and output.

Algol 68 had the most elaborate definition of transput (as they called
it) of its time, and it was part of the language.
It was not based on existing practice and never became widely used.

>
>These days, input and output mean a lot more than keyboards, screens and
>printers, but Forth seems to have stopped developing in that direction
>since the Forth community hijacked Forth from Charles Moore.

What do you want? Drivers in Forth developed for all hardware since
1970? I'm glad that manufacturers sometimes supply C files to access
devices, or even documentation.

Regards, Albert
--
Albert van der Horst, UTRECHT,THE NETHERLANDS
Economic growth -- being exponential -- ultimately falters.
albert@spe&ar&c.xs4all.nl &=n http://home.hccnet.nl/a.w.m.van.der.horst

Bruce Axtens

unread,
Sep 12, 2016, 7:16:34 AM9/12/16
to
On 22/08/2016 4:50 AM, lehs wrote:
> but Forth seems to have stopped develop in that direction since Forth community highjacked Forth from Charles Moore.
Maybe it's time to give up on Forth (by that name) and re-badge as "Moore".

lawren...@gmail.com

unread,
Sep 16, 2016, 2:43:32 AM9/16/16
to
On Sunday, August 21, 2016 at 8:17:03 PM UTC+12, lehs wrote:
>
> On Sunday 21 August 2016 at 10:12:48 UTC+2, Mark Wills wrote:
>>
>> If you have no underlying OS then that's the only way you can do it. You'll
>> interface directly with the disk controller.
>
> Yes, but shouldn't there be words in any Forth implementation that allows
> it, what ever operating system the Forth is built upon?

No. If you want to work at the level of byte I/O, then there is no essential difference between a block device and a file; it's just down to the file/device name you specify on the open.

If you want to work at a lower level, then perhaps you are asking for a SATA interface word set? And then that won’t work with onboard SSDs, so you need an M2 word set. And what about USB-connected devices? So you want a USB word set? DMA channels? Dealing with PCI versus PCI-E versus whatever? Interrupt servicing?

Do you want to bother about such low-level details or not?

lawren...@gmail.com

unread,
Sep 16, 2016, 2:54:28 AM9/16/16
to
On Monday, August 22, 2016 at 8:50:42 AM UTC+12, lehs wrote:
>
> Algol didn't even have standards for input or output, since it was a
> language for algorithms.

Algol 60 didn’t, but Algol 68 did. The reason for having the concept of an “algorithmic, not a programming language” back in those days was precisely because I/O was such a complicated and machine-dependent mess. To deal with that, the Algol 68 I/O layer description ended up taking a quarter (over 70 pages) of the entire language spec.

POSIX got rid of all that. So nowadays we can assume the existence of a common, fairly simple “stdio” layer on top of whatever the OS or hardware might actually be doing. Any OS that cannot provide such a thing simply isn’t worth using any more.

lawren...@gmail.com

unread,
Sep 16, 2016, 2:57:11 AM9/16/16
to
On Monday, August 22, 2016 at 2:47:00 PM UTC+12, hughag...@gmail.com wrote:
> Everybody else in the world understood that the whole purpose of the x86
> was to address more than 64KB and they figured out how to use the segment
> registers to do this.

x86 segmentation was an absolutely stupid pain in the bum. Other architectures (e.g. 68000-family) figured out how to address more memory *without* segmentation. And as a bonus, because you were using linear 32-bit addresses to begin with, the transition to full 32-bit processors (68020 and later) was fairly painless.

rickman

unread,
Sep 16, 2016, 3:30:49 AM9/16/16
to
Sure, it is easy to look back from this high vantage point and say,
"They should have gone this way". But Intel had an 8-bit processor they
wanted to be instruction- and register-compatible with, while Motorola
made little effort to be compatible with their existing 8-bit processors.

In the end Intel ended up with a full 32 bit CPU with 32 bit addressing
and a 64 bit CPU with 64 bit addresses. The purpose of the segment
registers now is not so much a way to utilize a large address space, but
to separate the various memory sections and support security.

--

Rick C

Andrew Haley

unread,
Sep 16, 2016, 4:03:17 AM9/16/16
to
lawren...@gmail.com wrote:
>
> POSIX got rid of all that. So nowadays we can assume the existence
> of a common, fairly simple 'stdio' layer on top of whatever the OS
> or hardware might actually be doing. Any OS that cannot provide such
> a thing simply isn't worth using any more.

Umm, what? This is comp.lang.forth. A brilliant language for small
embedded systems. An order of magnitude or two smaller than POSIX.
It doesn't need to run on top of a heavyweight OS.

Andrew.

lawren...@gmail.com

unread,
Sep 16, 2016, 4:19:10 AM9/16/16
to
On Friday, September 16, 2016 at 8:03:17 PM UTC+12, Andrew Haley wrote:

Anton Ertl

unread,
Sep 16, 2016, 4:37:43 AM9/16/16
to
lawren...@gmail.com writes:
>On Friday, September 16, 2016 at 8:03:17 PM UTC+12, Andrew Haley wrote:
>> Umm, what? This is comp.lang.forth. A brilliant language for small
>> embedded systems. An order of magnitude or two smaller than POSIX.
>> It doesn't need to run on top of a heavyweight OS.
>
>If you want to work at a lower level, then perhaps you are asking for a SATA interface word set? And then that won't work with onboard SSDs, so you need an M2 word set. And what about USB-connected devices? So you want a USB word set? DMA channels? Dealing with PCI versus PCI-E versus whatever? Interrupt servicing?

Systems with SATA, SSDs, M2, PCI, or PCI-E are not small embedded
systems; USB I'm not sure about. For the big systems that have these things,
having a POSIX layer in between is fine, and Forth is also a cool
language for that (although there is more competition there). For
small embedded systems like Arduinos, it seems to me that programmers
prefer to talk directly to the I/O hardware rather than through a
POSIX layer.

- anton
--
M. Anton Ertl http://www.complang.tuwien.ac.at/anton/home.html
comp.lang.forth FAQs: http://www.complang.tuwien.ac.at/forth/faq/toc.html
New standard: http://www.forth200x.org/forth200x.html
EuroForth 2016: http://www.euroforth.org/ef16/

Anton Ertl

unread,
Sep 16, 2016, 5:03:02 AM9/16/16
to
rickman <gnu...@gmail.com> writes:
>On 9/16/2016 2:57 AM, lawren...@gmail.com wrote:
>> On Monday, August 22, 2016 at 2:47:00 PM UTC+12, hughag...@gmail.com wrote:
>>> Everybody else in the world understood that the whole purpose of the x86
>>> was to address more than 64KB and they figured out how to use the segment
>>> registers to do this.
>>
>> x86 segmentation was an absolutely stupid pain in the bum. Other architectures (e.g. 68000-family) figured out how to address more memory *without* segmentation. And as a bonus, because you were using linear 32-bit addresses to begin with, the transition to full 32-bit processors (68020 and later) was fairly painless.
>
>Sure, it is easy to look back from this high vantage point and say,
>"They should have gone this way".

It was also obvious at the time, as evidenced by the fact that
everybody else (68k, Z8000, NS32K) provided longer addresses; OTOH,
Intel was first to market in the beyond-8-bit area, which got them the
IBM PC design win, so in a way their decisions were right.

>But Intel had an 8 bit processor they
>wanted to be instruction and register compatible with while Motorola
>make little effort to be compatible with their existing 8 bit processors.

Intel 8086 was not binary compatible with the 8080, but instead was
assembly-source compatible. Not sure how much this helped, but
Intel's focus on compatibility certainly helped later (80286, 386, and
especially 486 when the RISCs looked like the future), and when they
strayed from it (IA-64), they failed.

Anyway, Intel could have achieved assembly-source compatibility while
still providing a flat address space.

>In the end Intel ended up with a full 32 bit CPU with 32 bit addressing
>and a 64 bit CPU with 64 bit addresses. The purpose of the segment
>registers now is not so much a way to utilize a large address space, but
>to separate the various memory sections and support security.

The only purpose of segment registers (in 64-bit mode) is to provide
thread-local storage. Instead of extending segment descriptors for
64-bit use (which would have required more space), segments in 64-bit
mode were reduced in capability and can no longer be used for
security; they have not been used for security before that, anyway.
Segment fans always whine about security; paging goes home and fscks
the system:-).

Lars Brinkhoff

unread,
Sep 16, 2016, 6:16:02 AM9/16/16
to
an...@mips.complang.tuwien.ac.at (Anton Ertl) writes:
> Intel was first to market in the beyond-8-bit area, which got them the
> IBM PC design win, so in a way their decisions were right.

There are a few different versions of that story. Maturity and time to
market seems to have been one concern, but also cost:

68000 was carefully considered. "An excellent architecture chip, it
has proven to be a worthy competitor to the Intel-based
architecture." There were four major concerns:

1) 16 bit data path would require more bus buffers, therefore a more
expensive system board.

2) more memory chips for a minimum configuration.

3) while it had a performance advantage, the 68000 was not as memory
efficient.

4) Companion and support chips not as well covered as Intel.

He also felt that the 68000 didn't have as good software and support
tools, and the similar register model allowed the porting of 8080
tools to the 8086/8088.

"In summary the 8088 was selected because it allowed the lowest cost
implementation of an architecture that provided a migration path to
a larger address space and higher performance implementations.
Because it was a unique choice relative to competitive system
implementations, IBM could be viewed as a leader, rather than a
follower. It had a feasible software migration path that allowed
access to the large base of existing 8080 software."

http://yarchive.net/comp/ibm_pc_8088.html

Albert van der Horst

unread,
Sep 16, 2016, 7:06:45 AM9/16/16
to
In article <2016Sep1...@mips.complang.tuwien.ac.at>,
Anton Ertl <an...@mips.complang.tuwien.ac.at> wrote:
>lawren...@gmail.com writes:
>>On Friday, September 16, 2016 at 8:03:17 PM UTC+12, Andrew Haley wrote:
>>> Umm, what? This is comp.lang.forth. A brilliant language for small
>>> embedded systems. An order of magnitude or two smaller than POSIX.
>>> It doesn't need to run on top of a heavyweight OS.
>>
>>If you want to work at a lower level, then perhaps you are asking for a
>>SATA interface word set? And then that won’t work with onboard SSDs, so
>>you need an M2 word set. And what about USB-connected devices? So you
>>want a USB word set? DMA channels? Dealing with PCI versus PCI-E versus
>>whatever? Interrupt servicing?
>
>Systems with SATA, SSDs, M2, PCI, or PCI-E are not small embedded
>systems; USB not sure. For the big systems that have these things,
>having a POSIX layer in between is fine, and Forth is also a cool
>language for that (although there is more competition there). For
>small embedded systems like Arduinos, it seems to me that programmers
>prefer to talk directly to the I/O hardware rather than through a
>POSIX layer.

It set me thinking. The first POSIX-like
thing is to have devices-as-a-file.
It is easy to come up with a nice interface for an adc.
1. devices are /dev/adc1 /dev/adc# as many as there are
2. "/dev/adc1" xxx OPEN-FILE would give a fd.
xxx could be a resolution. All initialisation would be done for
you. If the resolution is not available, you get an error ior.
3. fd PAD 2 READ-FILE would get a 16-bit value from the adc.

This is all very nice and portable, and doable, but in the
context of Arduinos, noforth, etc. nobody would even think of it.

Standardizing on INIT-ADC1 ADC1@
would however not be too far-fetched for Forth.
An important annoyance solved by this could be the
conflict if the adc port is in use for something else, e.g. SPI.
In that case one would get a compile-time error, which
would be nice (and ahead of the C development systems).
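A minimal sketch of that register-style word set, including the load-time
conflict check; the register names, addresses, and bit layout here are
invented for illustration and not taken from any real controller:

```forth
\ Sketch only: register names, addresses and bits are hypothetical.
$40012000 CONSTANT ADC1-CR            \ assumed control register
$40012004 CONSTANT ADC1-DR            \ assumed data register

: INIT-ADC1 ( -- )    1 ADC1-CR ! ;             \ enable the converter
: ADC1@     ( -- u )  ADC1-DR @ $FFFF AND ;     \ read a 16-bit sample

\ Load-time pin bookkeeping: each driver claims its pins while its
\ source is loaded, so a conflict (e.g. with SPI) aborts the compile
\ instead of surfacing in the field.
VARIABLE ADC1-PIN-USED
: CLAIM-ADC1 ( -- )
   ADC1-PIN-USED @ ABORT" adc1 pin already in use"
   TRUE ADC1-PIN-USED ! ;
CLAIM-ADC1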

In short : POSIX? No!

>
>- anton

Anton Ertl

unread,
Sep 16, 2016, 9:24:35 AM9/16/16
to
Lars Brinkhoff <lars...@nocrew.org> writes:
>an...@mips.complang.tuwien.ac.at (Anton Ertl) writes:
>> Intel was first to market in the beyond-8-bit area, which got them the
>> IBM PC design win, so in a way their decisions were right.
>
>There are a few different versions of that story. Maturity and time to
>market seems to have been one concern, but also cost:
[reasons snipped]
>http://yarchive.net/comp/ibm_pc_8088.html

Yes, there were a number of factors involved, and the 8086
segmentation was not something in favour (but also not something
detrimental, which may have to do with the memory sizes at the time).

One interesting aspect is that both Intel and Motorola had two CPU
projects going on in the late 70s, one an evolutionary design, and the
other a forward-looking groundbreaking design. The evolutionary
results were the 8086 and the 6809 (assembly-source compatible to the
8080 and the 6800, respectively), while the groundbreaking results
were the iAPX432 and the 68000. Intel expected the 8086 to be
surpassed by the iAPX432 soon (as happened with 6809 and 68000); IBM
did not expect a long future for the IBM PC, either, so the 8088 was a
good match. The 6809 was too small (marketed as 8-bit, and limited to
64k address space) for the IBM PC, though, and as stated, the 68000
was, in a way, too large.

Andrew Haley

unread,
Sep 16, 2016, 10:40:03 AM9/16/16
to
Anton Ertl <an...@mips.complang.tuwien.ac.at> wrote:
> rickman <gnu...@gmail.com> writes:
>>On 9/16/2016 2:57 AM, lawren...@gmail.com wrote:
>
> It was also obvious at the time, as evidenced by the fact that
> everybody else (68k, Z8000, NS32K) provided longer addresses; OTOH,
> Intel was first to market in the beyond-8-bit area, which got them the
> IBM PC design win, so in a way their decisions were right.

Indeed so.

> Intel 8086 was not binary compatible with the 8080, but instead was
> assembly-source compatible. Not sure how much this helped,

I think it helped a lot. It got them Microsoft BASIC, which was a
very big deal at the time.

Andrew.

Andrew Haley

unread,
Sep 16, 2016, 10:45:09 AM9/16/16
to
lawren...@gmail.com wrote:
> On Friday, September 16, 2016 at 8:03:17 PM UTC+12, Andrew Haley wrote:
It depends on the hardware, which may or may not have such things.
Forth is an "any level" language, equally happy in small and large
systems.

Andrew.

JUERGEN

unread,
Sep 16, 2016, 1:29:46 PM9/16/16
to
I cannot understand the reasoning:
I would agree with: keep Forth, backward compatibility, ANS ... and continue; this is in fact what the commercial SW manufacturers do for backward compatibility.
MPE has introduced SockPuppet to bridge the gap to C - one of the most important evolutions of Forth - and it seems not to be appreciated by the Forth community. Forth++ and MOORE have been open for the last 30 years, but not many branches came out of them, which indicates that commercially the interest was limited.

Chuck always did what he wanted. He did not like ANS; instead he did the Forth processors, which took over the world - or might do in the future.

rickman

unread,
Sep 17, 2016, 2:13:10 AM9/17/16
to
On 9/16/2016 4:37 AM, Anton Ertl wrote:
> rickman <gnu...@gmail.com> writes:
>> On 9/16/2016 2:57 AM, lawren...@gmail.com wrote:
>>> On Monday, August 22, 2016 at 2:47:00 PM UTC+12, hughag...@gmail.com wrote:
>>>> Everybody else in the world understood that the whole purpose of the x86
>>>> was to address more than 64KB and they figured out how to use the segment
>>>> registers to do this.
>>>
>>> x86 segmentation was an absolutely stupid pain in the bum. Other architectures (e.g. 68000-family) figured out how to address more memory *without* segmentation. And as a bonus, because you were using linear 32-bit addresses to begin with, the transition to full 32-bit processors (68020 and later) was fairly painless.
>>
>> Sure, it is easy to look back from this high vantage point and say,
>> "They should have gone this way".
>
> It was also obvious at the time, as evidenced by the fact that
> everybody else (68k, Z8000, NS32K) provided longer addresses; OTOH,
> Intel was first to market in the beyond-8-bit area, which got them the
> IBM PC design win, so in a way their decisions were right.
>
>> But Intel had an 8 bit processor they
>> wanted to be instruction and register compatible with while Motorola
>> make little effort to be compatible with their existing 8 bit processors.
>
> Intel 8086 was not binary compatible with the 8080, but instead was
> assembly-source compatible. Not sure how much this helped, but
> Intel's focus on compatibility certainly helped later (80286, 386, and
> especially 486 when the RISCs looked like the future), and when they
> strayed from it (IA-64), they failed.
>
> Anyway, Intel could have achieved assembly-source compatibility while
> still providing a flat address space.

Huh? The 8080 has a flat address space. How do you extend that to a
larger address space without segments? The basic instructions assume 16
bit addresses. Are you suggesting a totally different instruction set
on *top* of the 8 bit register instructions?


>> In the end Intel ended up with a full 32 bit CPU with 32 bit addressing
>> and a 64 bit CPU with 64 bit addresses. The purpose of the segment
>> registers now is not so much a way to utilize a large address space, but
>> to separate the various memory sections and support security.
>
> The only purpose of segment registers (in 64-bit mode) is to provide
> thread-local storage. Instead of extending segment descriptors for
> 64-bit use (which would have required more space), segments in 64-bit
> mode were reduced in capability and can no longer be used for
> security; they have not been used for security before that, anyway.
> Segment fans always whine about security; paging goes home and fscks
> the system:-).

You seem to be totally ignoring the incremental nature of the
development of the technology. Remember how the 8086 was available
earlier than the 68000? That's because they didn't try to solve
tomorrow's problems today. The 68000 has numerous issues such as higher
transistor count and slower throughput compared to the 8086. It was
only if you looked many years ahead that the 68000 looked like a better
design. In the end the x86 line dominated, so clearly it was the better
choice.

--

Rick C

lawren...@gmail.com

unread,
Sep 17, 2016, 2:46:44 AM9/17/16
to
On Saturday, September 17, 2016 at 6:13:10 PM UTC+12, rickman wrote:
> The 68000 has numerous issues such as higher
> transistor count and slower throughput compared to the 8086.

Yet Microsoft Windows (on x86) was not able to match the performance of the Macintosh (on 680x0). It was only when all the Windows code was able to move to a flat 32-bit address space (late 1990s onwards) that it was finally able to pull ahead. The segmented-address issue delayed this transition by about a decade compared to the Macintosh.

> In the end the x86 line dominated, so clearly it was the better
> choice.

It was a triumph of marketing over technology.

Andrew Haley

unread,
Sep 17, 2016, 3:02:11 AM9/17/16
to
lawren...@gmail.com wrote:
> On Saturday, September 17, 2016 at 6:13:10 PM UTC+12, rickman wrote:
>> The 68000 has numerous issues such as higher
>> transistor count and slower throughput compared to the 8086.
>
> Yet Microsoft Windows (on x86) was not able to match the performance
> of the Macintosh (on 680x0).

Well, sure, if you need a 32-bit address space, then having true
32-bit hardware helps. As for the rest of it, I don't know how you
can compare two very different code bases on two very different
processors and declare that the difference is due to the processors.

> It was only when all the Windows code was able to move to a flat
> 32-bit address space (late 1990s onwards) that it was finally able
> to pull ahead. The segmented-address issue delayed this transition
> by about a decade compared to the Macintosh.

>> In the end the x86 line dominated, so clearly it was the better
>> choice.
>
> It was a triumph of marketing over technology.

I think it was a triumph of manufacturing and process technology.
Motorola didn't keep up, initially with RISC, losing Sun and then
Apple, then with Intel.

Andrew.

Mark Wills

unread,
Sep 17, 2016, 3:05:00 AM9/17/16
to
X86 dominance in the market had nothing to do with it being better. Anyone who's ever written a line of assembler on both devices knows the 68K is the better device by a mile. The X86 success was purely down to IBM's decision to use it in their new PC line, and possibly the dominance of CP/M prior to that.

lawren...@gmail.com

unread,
Sep 17, 2016, 3:41:16 AM9/17/16
to
On Saturday, September 17, 2016 at 7:02:11 PM UTC+12, Andrew Haley wrote:
> Lawrence D’Oliveiro wrote:
>
>> On Saturday, September 17, 2016 at 6:13:10 PM UTC+12, rickman wrote:
>>> The 68000 has numerous issues such as higher
>>> transistor count and slower throughput compared to the 8086.
>>
>> Yet Microsoft Windows (on x86) was not able to match the performance
>> of the Macintosh (on 680x0).
>
> Well, sure, if you need a 32-bit address space, then having true
> 32-bit hardware helps. As for the rest of it, I don't know how you
> can compare two very different code bases on two very different
> processors and declare that the difference is due to the processors.

Either that, or Microsoft writes crap code...

m...@iae.nl

unread,
Sep 17, 2016, 4:28:41 AM9/17/16
to
On Saturday, September 17, 2016 at 9:05:00 AM UTC+2, Mark Wills wrote:
> X86 dominance in the market had nothing to do with it being better.
> Anyone who's ever written a line of assembler on both devices
> knows the 68K is the better device by a mile.

I did write *a lot* of assembly language for both
processors, an embedded board with CP/M for the 68K
and a 386 PC under 32bit Windows NT. I liked (or
disliked) both architectures equally well. There was
no objective technical reason to call either of them
(much) 'better' than the other one. At some point
the x86 developed much faster (higher clock
speeds, more memory, FP co-processor), and the 68K
simply ceased to be a serious alternative.

An x86 versus 68K dispute is only useful to identify
the real engineers from the dilettantes.

-marcel

lawren...@gmail.com

unread,
Sep 17, 2016, 5:04:48 AM9/17/16
to
On Saturday, September 17, 2016 at 8:28:41 PM UTC+12, m...@iae.nl wrote:
> There was no objective technical reason to call either of them
> (much) 'better' than the other one.

68K had a linear 32-bit address space. More registers. Better addressing modes. A much less painful transition to full 32-bit operation.

Did I mention it had a 32-bit linear address space?

> At some point the x86 developed itself much faster (higher clock
> speeds, more memory, FP co-processor), and 68K simply stopped to
> be a serious alternative.

That only happened because Intel could afford to spend ten times as much as Motorola on developing its processors. It needed to, to keep up.

Andrew Haley

unread,
Sep 17, 2016, 11:20:46 AM9/17/16
to
Mark Wills <markwi...@gmail.com> wrote:

> X86 dominance in the market had nothing to do with it being
> better. Anyone who's ever written a line of assembler on both
> devices knows the 68K is the better device by a mile.

You can't judge the quality of a processor solely by its instruction
set architecture. It's just as important to think about the issue
rate, code density, and so on. These days what's really important
is just as likely to be memory bandwidth, cache behaviour, and so on.

> The X86 success was purely down to IBM's decision to use it in their
> new PC line, and possibly the dominance of CPM prior to that.

And Intel's execution. They are very good at that.

Andrew.

Anton Ertl

unread,
Sep 17, 2016, 11:38:20 AM9/17/16
to
rickman <gnu...@gmail.com> writes:
>On 9/16/2016 4:37 AM, Anton Ertl wrote:
>> Anyway, Intel could have achieved assembly-source compatibility while
>> still providing a flat address space.
>
>Huh? The 8080 has a flat address space. How do you extend that to a
>larger address space without sectors. The basic instructions assume 16
>bit addresses. Are you suggesting a totally different instruction set
>on *top* of the 8 bit register instructions?

There are various ways to do it.

The 65C816 extended the 6502 instruction set to support a 16MB address
space, and was not only assembly language compatible, but even binary
compatible.

The MIPS, SPARC, and Power architectures were extended from 32-bit to
64-bit and were not only assembly language compatible, but even binary
compatible, and similar things happened to the S/360 architecture.

The 386 architecture included the 8086 and 80286 architecture through
16-bit modes.

Similarly, the AMD64 architecture has modes for running 386
architecture programs and 8086 and 80286 programs in addition to its
64-bit mode for its 64-bit architecture.

ARM went in the same direction with ARMv8: It has a 64-bit
architecture and a 32-bit architecture and modes for executing
programs in these architectures.

I am sure there are other examples.

>>> In the end Intel ended up with a full 32 bit CPU with 32 bit addressing
>>> and a 64 bit CPU with 64 bit addresses. The purpose of the segment
>>> registers now is not so much a way to utilize a large address space, but
>>> to separate the various memory sections and support security.
>>
>> The only purpose of segment registers (in 64-bit mode) is to provide
>> thread-local storage. Instead of extending segment descriptors for
>> 64-bit use (which would have required more space), segments in 64-bit
>> mode were reduced in capability and can no longer be used for
>> security; they have not been used for security before that, anyway.
>> Segment fans always whine about security; paging goes home and fscks
>> the system:-).
>
>You seem to be totally ignoring the incremental nature of the
>development of the technology. Remember how the 8086 was available
>earlier than the 68000? That's because they didn't try to solve
>tomorrow's problems today. The 68000 has numerous issues such as higher
>transistor count and slower throughput compared to the 8086. It was
>only if you looked many years ahead that the 68000 looked like a better
>design.

I did not have to look ahead. I just had to look at the assembly
language. Anyway, I don't know what this has to do with my refutation
of your claim about segments being used for security these days.
Although, come to think of it, maybe with "now" you mean a time when
the 80286 was current and there were attempts to use protected mode
for security (maybe in OS/2 1.x, now 25 years gone).

> In the end the x86 line dominated, so clearly it was the better
>choice.

They had to invent a new architecture for 32 bits, while the 68000
line could continue with its existing architecture. The 8086 does not
dominate, AMD64 (and ARMv8) dominates; AMD64 still has 8086 in some
dark corner, but it's hardly used these days, if at all.

As for better choice: If IBM had chosen the 68000, it would have
dominated, and it would clearly have been the better choice.

As for performance,
<http://performance.netlib.org/performance/html/dhrystone.data.col0.html>
lists an 8MHz 8086 (ATT PC6300) at 0.44 Dhrystone MIPS and a 7.16MHz
68000 (Amiga 1000) at 0.54 Dhrystone MIPS, and an IBM PC (4.77MHz
8088) at 0.22 Dhrystone MIPS. So no, they did not choose the 8088 for
performance.

Anton Ertl

unread,
Sep 17, 2016, 11:49:46 AM9/17/16
to
m...@iae.nl writes:
>On Saturday, September 17, 2016 at 9:05:00 AM UTC+2, Mark Wills wrote:
>> X86 dominance in the market had nothing to do with it being better.
>> Anyone who's ever written a line of assembler on both devices
>> knows the 68K is the better device by a mile.
>
>I did write *a lot* of assembly language for both
>processors, an embedded board with CP/M for the 68K
>and a 386 PC under 32bit Windows NT. I liked (or
>disliked) both architectures equally well.

The 386 has a different architecture than the 8086, and is in fact
more orthogonal than the 68000: with the exception of a few relatively
rare instructions (and 8086 holdouts) and addressing modes, every
instruction and addressing mode could use every register; i.e., the
386 is a register machine. By contrast, the 8086 was a mess of
special-purpose instructions and especially addressing modes. There's
a reason the registers are not called R0-R7 like in a from-the-start
register architecture (such as the PDP-11).

>An x86 versus 68K dispute is only useful to identify
>the real engineers from the dilettantes.

The use of "x86" is useful to identify dilettantes. Real engineers
know that 8086, IA-32, and AMD64 are different architectures and won't
mix them by applying one moniker to refer to some or all of them,
because that just confuses.

rickman

unread,
Sep 17, 2016, 4:51:05 PM9/17/16
to
On 9/17/2016 2:46 AM, lawren...@gmail.com wrote:
> On Saturday, September 17, 2016 at 6:13:10 PM UTC+12, rickman wrote:
>> The 68000 has numerous issues such as higher
>> transistor count and slower throughput compared to the 8086.
>
> Yet Microsoft Windows (on x86) was not able to match the performance of the Macintosh (on 680x0). It was only when all the Windows code was able to move to a flat 32-bit address space (late 1990s onwards) that it was finally able to pull ahead. The segmented-address issue delayed this transition by about a decade compared to the Macintosh.

Huh? How can you compare two different machines with two different OSes
and say the difference is purely the hardware?


>> In the end the x86 line dominated, so clearly it was the better
>> choice.
>
> It was a triumph of marketing over technology.

That's your opinion. The x86 was the design the world wanted at the
time. In other words, it best met the needs of users. Only gear heads
argue about which CPU *should* have won the race.

--

Rick C

rickman

unread,
Sep 17, 2016, 4:52:02 PM9/17/16
to
On 9/17/2016 3:04 AM, Mark Wills wrote:
> X86 dominance in the market had nothing to do it being better. Anyone who's ever written a line of assembler on both devices knows the 68K is the better device by a mile. The X86 success was purely down to IBM's decision to use it in their new PC line, and possibly the dominance of CPM prior to that.

Your analysis is one dimensional.

--

Rick C

lawren...@gmail.com

unread,
Sep 17, 2016, 8:22:21 PM9/17/16
to
On Sunday, September 18, 2016 at 3:38:20 AM UTC+12, Anton Ertl wrote:
> The MIPS, SPARC, and Power architectures were extended from 32-bit to
> 64-bit and were not only assembly language compatible, but even binary
> compatible, and similar things happened to the S/360 architecture.

Can’t speak for the others, but I did do some code generation for the PowerPC instruction set. It was clear that was designed to be a 64-bit architecture from the outset, so the 32-bit chips were basically operating on cut-down subsets. This would have made it fairly straightforward to transition code to the full 64-bit processors--analogously to how the original Motorola 68000 16-bit chip was really running a cut-down 32-bit architecture.

lawren...@gmail.com

unread,
Sep 17, 2016, 8:24:12 PM9/17/16
to
On Sunday, September 18, 2016 at 8:51:05 AM UTC+12, rickman wrote:
>
> On 9/17/2016 2:46 AM, Lawrence D’Oliveiro wrote:
>>
>> On Saturday, September 17, 2016 at 6:13:10 PM UTC+12, rickman wrote:
>>>
>>> The 68000 has numerous issues such as higher
>>> transistor count and slower throughput compared to the 8086.
>>
>> Yet Microsoft Windows (on x86) was not able to match the performance
>> of the Macintosh (on 680x0). It was only when all the Windows code was
>> able to move to a flat 32-bit address space (late 1990s onwards) that
>> it was finally able to pull ahead. The segmented-address issue delayed
>> this transition by about a decade compared to the Macintosh.
>
> Huh? How can you compare two different machines with two different OS
> and say the difference is purely the hardware?

Andrew Haley

unread,
Sep 18, 2016, 4:44:41 PM9/18/16
to
rickman <gnu...@gmail.com> wrote:
> Only gear heads argue about which CPU *should* have won the race.

Well, yeah, but gearheads 'r us. :-)

Andrew.

Bernd Paysan

unread,
Oct 12, 2016, 8:50:18 PM10/12/16
to
Am Sat, 17 Sep 2016 16:51:03 -0400 schrieb rickman:

> That's your opinion. The x86 was the design the world wanted at the
> time.

You mean IBM. IBM chose the 8088 over the 68000 due to time-to-market
pressure. At that time, the 68000 was just being debugged, and the 8088
was ready. It was Microsoft which convinced IBM to actually use a 16 bit
processor; they had planned to use an 8 bit processor originally. It was
supposed to be a home computer, nothing serious, as they feared
cannibalizing their workstation market (which it eventually did, but
first, it cannibalized the typewriter market, something IBM could live
with).

> In other words, it best met the needs of users. Only gear heads argue
> about which CPU *should* have won the race.

Of course we argue about what should have won from a technical point of
merit. Mundane marketing issues like availability on the market at a
specific point in time doesn't count. For several years to come after
the initial PC, the 68k competitors like the Atari ST, Mac, and Amiga were
all better offerings than the original IBM PC, and those machines had found a
lot of customers, too. That changed when the clones came and pushed the
price down. And at that point in time, the 386 was already there; the
clone makers were the first to introduce the 386 (Compaq, IIRC), just
with ISA, when IBM wanted to transition people to Microchannel, a failed
attempt at regaining control. Later, PCI replaced ISA; it came from
Intel and was open.

--
Bernd Paysan
"If you want it done right, you have to do it yourself"
net2o ID: kQusJzA;7*?t=uy@X}1GWr!+0qqp_Cn176t4(dQ*
http://bernd-paysan.de/

Bernd Paysan

unread,
Oct 12, 2016, 9:06:07 PM10/12/16
to
Am Fri, 16 Sep 2016 08:32:57 +0000 schrieb Anton Ertl:

> For small embedded systems like Arduinos, it seems to me that
> programmers prefer to talk directly to the I/O hardware rather than
> through a POSIX layer.

Apart from the register-based IO on embedded controllers, which are
addressed by <regname> @ and ! (or maybe some other memory space if
necessary), typical IO on controllers are serial interfaces, and those
have an abstraction layer in Forth: TYPE, EMIT, and EMIT? for writing,
KEY and KEY? for reading and checking for available data.

The @ and ! for these registers correspond more to ioctl on POSIX, and
ioctl is far less well-defined than files and stdio.
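A minimal sketch of that abstraction layer over a memory-mapped UART; the
register addresses and status bits below are invented for illustration:

```forth
\ Sketch only: UART-DR / UART-SR and their bit layout are hypothetical.
$4000C000 CONSTANT UART-DR            \ data register (assumed)
$4000C004 CONSTANT UART-SR            \ status register (assumed)

: EMIT? ( -- f )  UART-SR @ 2 AND 0<> ;          \ transmitter ready?
: KEY?  ( -- f )  UART-SR @ 1 AND 0<> ;          \ byte received?
: EMIT  ( c -- )  BEGIN EMIT? UNTIL  UART-DR ! ;
: KEY   ( -- c )  BEGIN KEY? UNTIL  UART-DR @ $FF AND ;
: TYPE  ( c-addr u -- )  OVER + SWAP ?DO  I C@ EMIT  LOOP ;
```

Code written against EMIT, KEY, and TYPE then runs unchanged whether those
words talk to a UART, a USB CDC endpoint, or a host OS console.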

Albert van der Horst

unread,
Oct 12, 2016, 9:24:12 PM10/12/16
to
In article <ntmlnn$qa9$3...@dont-email.me>, Bernd Paysan <be...@net2o.de> wrote:
>Am Sat, 17 Sep 2016 16:51:03 -0400 schrieb rickman:
>
>> That's your opinion. The x86 was the design the world wanted at the
>> time.
>
>You mean IBM. IBM choose the 8088 over the 68000 due to time to market
>pressure. At that time, the 68000 was just being debugged, and the 8088
>was ready. It was Microsoft which convinced IBM to actually use a 16 bit
>processor; they had planned to use an 8 bit processor originally. It was
>supposed to be a home computer, nothing serious, as they feared
>cannibalizing their workstation market (which it eventually did, but
>first, it cannibalized the typewriter market, something IBM could live
>with).
>
>> In other words, it best met the needs of users. Only gear heads argue
>> about which CPU *should* have won the race.
>
>Of course we argue about what should have won from a technical point of
>merit. Mundane marketing issues like availability on the market at a
>specific point in time doesn't count. For several years to come after
>the initial PC, the 68k competitors like Atari ST, Mac, Amiga, where all
>better offerings than the original IBM PC, and those machines had found a
>lot of customers, too. That changed when the clones came and pushed the

I remember how all technically savvy people immediately bought an Atari
or an Amiga, as soon as they became available. Then Veltman had his
Schoonschip program running on the Mac, Atari and Amiga.
(Started in the 1980s, this got him the Nobel Prize around 2000.)

>price down. And at that point in time, the 386 was already there; the
>clone makers were the first to introduce the 386 (Compaq, IIRC), just
>with ISA, when IBM wanted to transition people to Microchannel, a failed
>attempt of gaining back control. Later, PCI replaced ISA, it came from
>Intel and was open.

The Intel 386 inspired nobody until Linus Torvalds came around.

Groetjes Albert

>
>--
>Bernd Paysan

rickman

unread,
Oct 12, 2016, 11:34:30 PM10/12/16
to
You can exclude such commercial aspects as availability, but then why
not exclude price? If you want to be a gearhead and look at what
"should" have happened, then why consider any of the practicality issues?

I guess I'm just getting too old to care about such things. I'd rather
look ahead and see where things are going.

--

Rick C

Andrew Haley

unread,
Oct 13, 2016, 5:03:01 AM10/13/16
to
Bernd Paysan <be...@net2o.de> wrote:
> Am Sat, 17 Sep 2016 16:51:03 -0400 schrieb rickman:
>
>> In other words, it best met the needs of users. Only gear heads argue
>> about which CPU *should* have won the race.
>
> Of course we argue about what should have won from a technical point of
> merit.

Well, yes. Gearheads 'r us. :-)

> Mundane marketing issues like availability on the market at a
> specific point in time doesn't count. For several years to come
> after the initial PC, the 68k competitors like Atari ST, Mac, Amiga,
> where all better offerings than the original IBM PC,

Better than the original PC, sure, but the PC did have the open bus
architecture which was extremely well-documented. This spawned a
thriving industry.

Andrew.

lawren...@gmail.com

unread,
Oct 13, 2016, 6:04:57 PM10/13/16
to
On Thursday, October 13, 2016 at 10:03:01 PM UTC+13, Andrew Haley wrote:
> Better than the original PC, sure, but the PC did have the open bus
> architecture which was extremely well-documented. This spawned a
> thriving industry.

The “original” PC--the LINC <https://www.youtube.com/watch?v=ZgPfLWt5FWE>?