
Re: 68000 assembly language programming


Jonathan de Boyne Pollard

unread,
Sep 2, 2011, 5:45:52 AM9/2/11
to
> I was once urged to write a book on 68000 assembly language programming.
> I did in fact write one on 80x86 assembly language programming, but the
> reviewers didn't like it because they said I'd picked the wrong
> processor. They were pretty well unanimous in saying that the 80x86 line
> was going to fizzle out and the Motorola 6800/68000/etc were the wave of
> the future. As a result, the publisher made some bad marketing decisions
> based on the assumption that the potential market was vanishingly small.

They'd have been on safer ground had they decided that it was the
_assembly language programming_ part that was going to become
comparatively unimportant, rather than the processor architecture.
(When I'm next in the same part of the country as my library, I'll check
how many books that I have that deal with 80x86 assembly language
programming, rather than with the rather different beasts of _MS-DOS
system and applications programming_.)

Peter Flass

unread,
Sep 2, 2011, 7:37:36 AM9/2/11
to

The 68xx (and 68xxx) chops *should have* become more important. The
architecture was miles better than the x86, and in fact seemed to me to
be the only microprocessor architecture that hadn't been thrown together
higlly-piglly out of whatever could be made to fit on a chip.

Just goes to show that better doesn't necessarily equal market success.

wolfgang kern

unread,
Sep 2, 2011, 1:27:06 PM9/2/11
to

Peter Flass answered Jonathan de Boyne Pollard:

Aside from the fact that CLAX is an x86 group, I find nothing wrong with
Motorola CPUs for a certain range of usage. The HC11 actually was (and is)
an MCU/CPU which covered a lot of situations in the past and still may do
today.

I don't think that 68xxx is any better than x86; it's just different.

OTOH, I always preferred the early Z80-based MCUs (NSC800/1600/Z280),
because these little things allowed for everything required then.

But if only the language used is of concern here, I'd recommend following
the terminology used in the vendor's manuals and nothing else.

A programmer can really avoid a lot of misunderstanding by coding in a
language which reflects the hardware's opportunities; otherwise it ends
up as 'portable HLL bloatware', as too often seen.

__
wolfgang


hanc...@bbs.cpcn.com

unread,
Sep 2, 2011, 10:33:08 PM9/2/11
to
On Sep 2, 7:37 am, Peter Flass <Peter_Fl...@nospicedham.Yahoo.com>
wrote:

> The 68xx (and 68xxx) chops *should have* become more important.  The
> architecture was miles better than the x86, and in fact seemed to me to
> be the only microprocessor architecture that hadn't been thrown together
> higlly-piglly out of whatever could be made to fit on a chip.

Why was it better?

> Just goes to show that better doesn't necessarily equal market success.

Betamax vs. VHS.

Nathan Baker

unread,
Sep 3, 2011, 2:06:58 AM9/3/11
to

"Peter Flass" <Peter...@nospicedham.Yahoo.com> wrote in message
news:j3qf5r$4f2$1...@dont-email.me...

>> They'd have been on safer ground had they decided that it was the
>> _assembly language programming_ part that was going to become
>> comparatively unimportant, rather than the processor architecture. (When
>> I'm next in the same part of the country as my library, I'll check how
>> many books that I have that deal with 80x86 assembly language
>> programming, rather than with the rather different beasts of _MS-DOS
>> system and applications programming_.)
>

A 'web browser' is the only application you need. ;)

> The 68xx (and 68xxx) chops *should have* become more important. The
> architecture was miles better than the x86, and in fact seemed to me to be
> the only microprocessor architecture that hadn't been thrown together
> higlly-piglly out of whatever could be made to fit on a chip.
>

You wanted "higgly-piggly" there. [to stay a.u.e. relevant {see header}]

> Just goes to show that better doesn't necessarily equal market success.

Better in what way? If you want 68xxx-style 'syntax' in a x86 world, that
*has* been done.

Nathan.
--
http://clax.inspiretomorrow.net/
http://www.fysnet.net/faq/index.htm


R H Draney

unread,
Sep 3, 2011, 2:13:37 AM9/3/11
to
hanc...@bbs.cpcn.com filted:
>
>On Sep 2, 7:37 am, Peter Flass <Peter_Fl...@nospicedham.Yahoo.com>
>wrote:
>
>> The 68xx (and 68xxx) chops *should have* become more important. The

>> architecture was miles better than the x86, and in fact seemed to me to
>> be the only microprocessor architecture that hadn't been thrown together
>> higlly-piglly out of whatever could be made to fit on a chip.
>
>Why was it better?
>
>
>
>> Just goes to show that better doesn't necessarily equal market success.
>
>Betamax vs. VHS.

Hydrox vs. Oreo....r


--
Me? Sarcastic?
Yeah, right.

Peter Moylan

unread,
Sep 3, 2011, 3:26:01 AM9/3/11
to

The Intel designers were educated on IBM machines, and the Motorola
designers were educated on Motorola machines; and it shows.


>
> Just goes to show that better doesn't necessarily equal market success.

The Intel processors might look like a great steaming pile of crap if
you look at the instruction set, the confusingly specialised registers,
etc. Nevertheless the Intel designs were superior in a number of other
directions. Fitting an entire floating point coprocessor onto the same
chip as the main processor was pretty impressive at the time, even if
it's now commonplace. Stealing the segmentation approach[1] was a
brilliant idea, even if the software people wasted the opportunity to
use it. (Although it still mystifies me that Intel wasn't sued for
patent violation. The original design must surely have been patented,
and it was lifted with bit-for-bit copying accuracy.) Perhaps most
importantly, Intel knew how to fabricate very complicated chips with an
acceptable yield, leading to the whole thing being affordable.

Motorola had a much cleaner approach to processor design, but it wasn't
nearly as impressive when it came to things like memory management,
caches, etc.

A point that's sometimes overlooked is that Intel was also in the
business of designing and selling microcontrollers and various other
bits of microelectronics. That made the company very visible to engineers.

[1] Which early computer was it that had exactly the same segment
descriptors as the 80286? I used to know, and it's slipped out of my
mind. Whichever one it was, it was recognised at the time as a major
advance in concept, but not practical because the hardware was too
expensive.

--
Peter Moylan, Newcastle, NSW, Australia. http://www.pmoylan.org
For an e-mail address, see my web page.


Peter Flass

unread,
Sep 3, 2011, 8:14:43 AM9/3/11
to
On 9/3/2011 2:06 AM, Nathan Baker wrote:
> "Peter Flass"<Peter...@nospicedham.Yahoo.com> wrote in message
> news:j3qf5r$4f2$1...@dont-email.me...
>>> They'd have been on safer ground had they decided that it was the
>>> _assembly language programming_ part that was going to become
>>> comparatively unimportant, rather than the processor architecture. (When
>>> I'm next in the same part of the country as my library, I'll check how
>>> many books that I have that deal with 80x86 assembly language
>>> programming, rather than with the rather different beasts of _MS-DOS
>>> system and applications programming_.)
>>
>
> A 'web browser' is the only application you need. ;)
>
>> The 68xx (and 68xxx) chops *should have* become more important. The
>> architecture was miles better than the x86, and in fact seemed to me to be
>> the only microprocessor architecture that hadn't been thrown together
>> higlly-piglly out of whatever could be made to fit on a chip.
>>
>
> You wanted "higgly-piggly" there. [to stay a.u.e. relevant {see header}]

I didn't notice that this was also posted to a.u.e -- I tried about six
different ways of spelling this and none looked right, so I gave up. I
probably should have googled it.

>
>> Just goes to show that better doesn't necessarily equal market success.
>
> Better in what way? If you want 68xxx-style 'syntax' in a x86 world, that
> *has* been done.

This goes back a ways. Another poster also asked this, and I have long
forgotten. Years ago, maybe back in the 70s I looked at the common
architectures in use with an eye to building a homebrew system: 8080,
6800, 6502, 1802 and probably others. Probably a lot of my reasoning
had to do with ease of interfacing the chip (not an architecture issue).
I thought the architecture was more "regular" - I liked the similarity
to the PDP-11. At the time I also preferred the memory-mapped I/O. I
do know that after some analysis I formed the opinion that the 6800 was
better. Most of my complaints with current x86 architecture don't
relate to the original chips but to the contortions Intel went thru to
try to maintain compatibility, that cluttered the newer chips up with a
lot of cruft.

greenaum

unread,
Sep 3, 2011, 8:37:58 AM9/3/11
to
On Fri, 2 Sep 2011 19:33:08 -0700 (PDT), hanc...@bbs.cpcn.com
sprachen:

>Betamax vs. VHS.

This is a bit of a myth. For one thing, Betamax decks were more
expensive, due to Sony's licensing strategy (something they seem to
have got right eventually!). Also, the tapes didn't run for as long,
and the quality wasn't THAT much better. Late 70s TVs weren't
particularly sharp of picture so that you'd notice.

--

--------------------------------------------------------------------------------
"There's nothing like eating hay when you're faint," the White King remarked to Alice, as he munched away.
"I should think throwing cold water over you would be better," Alice suggested: "--or some sal-volatile."
"I didn't say there was nothing better," the King replied. "I said there was nothing like it."
Which Alice did not venture to deny.


greenaum

unread,
Sep 3, 2011, 8:38:27 AM9/3/11
to
On Sat, 3 Sep 2011 02:06:58 -0400, "Nathan Baker"
<nathan...@nospicedham.gmail.com> sprachen:

>You wanted "higgly-piggly" there. [to stay a.u.e. relevant {see header}]

"Higgledy-Piggledy" in English.

John Dunlop

unread,
Sep 3, 2011, 8:42:11 AM9/3/11
to
Peter Flass:

> [Nathan Baker:]
>
>> [Peter Flass:]
>>
>> [...] thrown together higlly-piglly


>>
>> You wanted "higgly-piggly" there. [to stay a.u.e. relevant {see header}]
>
> I didn't notice that this was also posted to a.u.e -- I tried about six
> different ways of spelling this and none looked right, so I gave up. I
> probably should have googled it.

In its entry for "higgledy-piggledy", the OED records the form
"higgley-piggley", among others, but only for the 17th C., and
there's a separate entry for "higly-pigly". Why would that be?
Anyway, I've not heard it pronounced in four syllables before.

--
John

Bill Leary

unread,
Sep 3, 2011, 8:49:08 AM9/3/11
to
wrote in message
news:41595a6a-1f91-4457...@d18g2000yqm.googlegroups.com...

> On Sep 2, 7:37 am, Peter Flass <Peter_Fl...@nospicedham.Yahoo.com>
> wrote:
>> The 68xx (and 68xxx) chops *should have* become more important.
>> The architecture was miles better than the x86, and in fact seemed
>> to me to be the only microprocessor architecture that hadn't been
>> thrown together higlly-piglly out of whatever could be made to fit
>> on a chip.
>
> Why was it better?

At the time I often heard the word "orthogonal" used to describe the
architecture. A regular set of registers, with very few limitations on
their usage and thus a very regular instruction set. I programmed assembly
on the 68000 and the 80x86 at the same time and one point I recall is that
there was very rarely any "move this to that register so I can do math on
it."  In general, if the value you were working on was in a register, you
could just do the math on it. Plus no segment registers. Megabytes of
linear address space, so you didn't have to load a segment register and an
offset to get to something. And you could compare addresses and you didn't
have to "regularize" them first for the comparison to be meaningful.

Probably more, but those are the ones that jump to mind from back then.

- Bill

Jonathan de Boyne Pollard

unread,
Sep 3, 2011, 11:40:56 AM9/3/11
to
>> The 68xx (and 68xxx) chops *should have* become more important. The
>> architecture was miles better than the x86, and in fact seemed to me
>> to be the only microprocessor architecture that hadn't been thrown
>> together higlly-piglly out of whatever could be made to fit on a chip.
>>
> Why was it better?
>
For the machine code programmer, the instruction encoding for the 6809
and 680xx family was better than that of the 8086 because it was simpler
and more orthogonal. For the assembly language programmer, the 680xx
family register set was more straightforward than that of the 8086. M.
Moylan has, however, pointed out some of the 680xx family comparative
deficiencies. It wasn't all roses.

Tim Roberts

unread,
Sep 3, 2011, 7:03:59 PM9/3/11
to
Peter Moylan <inv...@nospicedham.peter.pmoylan.org.invalid> wrote:
>
>The Intel processors might look like a great steaming pile of crap if
>you look at the instruction set, the confusingly specialised registers,
>etc. Nevertheless the Intel designs were superior in a number of other
>directions. Fitting an entire floating point coprocessor onto the same
>chip as the main processor was pretty impressive at the time, even if
>it's now commonplace.

Well, in all fairness, that didn't actually happen until the 80486, some 10
years into the lifetime of the series.
--
Tim Roberts, ti...@probo.com
Providenza & Boekelheide, Inc.

John Levine

unread,
Sep 4, 2011, 12:02:37 AM9/4/11
to
>>The Intel processors might look like a great steaming pile of crap if
>>you look at the instruction set, the confusingly specialised registers,
>>etc. Nevertheless the Intel designs were superior in a number of other
>>directions. Fitting an entire floating point coprocessor onto the same
>>chip as the main processor was pretty impressive at the time, even if
>>it's now commonplace.

>Well, in all fairness, that didn't actually happen until the 80486,
>some 10 years into the lifetime of the series.

It was educational to compare the 486 to the i860, their short-lived,
hard-to-program RISC chip. They came out at the same time, same data
formats, same memory architecture, same manufacturing process. To a
first approximation, the i860 was twice as fast as the i486, entirely
due to the bigger register sets and easier instruction decoding.

It was hard to make a fair comparison, since the i860 exposed its
instruction pipeline: to get full performance you needed to hand-write
loops using instructions like "store the result from three multiplies
ago". But I gather it was still twice as fast in practice.

These days I expect the difference would be a lot less since x86
implementations do all that stuff for you.

R's,
John

Charles Richmond

unread,
Sep 4, 2011, 1:37:07 AM9/4/11
to
On 9/3/11 1:06 AM, Nathan Baker wrote:
> "Peter Flass"<Peter...@nospicedham.Yahoo.com> wrote in message
> news:j3qf5r$4f2$1...@dont-email.me...
>
> [snip...] [snip...] [snip...]

>
>> The 68xx (and 68xxx) chops *should have* become more important. The
>> architecture was miles better than the x86, and in fact seemed to me to be
>> the only microprocessor architecture that hadn't been thrown together
>> higlly-piglly out of whatever could be made to fit on a chip.
>>
>
> You wanted "higgly-piggly" there. [to stay a.u.e. relevant {see header}]
>

The *right* words are actually "higgledy-piggledy"...


--
+----------------------------------------+
| Charles and Francis Richmond |
| |
| plano dot net at aquaporin4 dot com |
+----------------------------------------+

HT-Lab

unread,
Sep 4, 2011, 4:38:51 AM9/4/11
to
On 03/09/2011 08:26, Peter Moylan wrote:
...

>
> The Intel designers were educated on IBM machines, and the Motorola
> designers were educated on Motorola machines; and it shows.
>>
>> Just goes to show that better doesn't necessarily equal market success.
>
> The Intel processors might look like a great steaming pile of crap if
> you look at the instruction set, the confusingly specialised registers,
> etc. Nevertheless the Intel designs were superior in a number of other
> directions.

Including code density.

http://www.csl.cornell.edu/~vince/papers/iccd09/iccd09_density.pdf

Hans
www.ht-lab.com

Peter Flass

unread,
Sep 4, 2011, 8:37:03 AM9/4/11
to
On 9/3/2011 7:03 PM, Tim Roberts wrote:
> Peter Moylan<inv...@nospicedham.peter.pmoylan.org.invalid> wrote:
>>
>> The Intel processors might look like a great steaming pile of crap if
>> you look at the instruction set, the confusingly specialised registers,
>> etc. Nevertheless the Intel designs were superior in a number of other
>> directions. Fitting an entire floating point coprocessor onto the same
>> chip as the main processor was pretty impressive at the time, even if
>> it's now commonplace.
>
> Well, in all fairness, that didn't actually happen until the 80486, some 10
> years into the lifetime of the series.

Yes. Looking back at the 8086 vs. 6800 it's hard not to have your
thinking colored by all that came after. From one angle it's great that
x86 has all those special shorter instructions that reference eax, etc.
From the other, it's often seemed to me that if you eliminated the
special cases you'd have a not-so-bad architecture. The whole thing is
optimized for code size.

As someone else pointed out, the assembler syntax is just awful, but
that's another issue too.

Olafur Gunnlaugsson

unread,
Sep 4, 2011, 8:48:08 AM9/4/11
to
On 03/09/2011 13:37, greenaum wrote:
> On Fri, 2 Sep 2011 19:33:08 -0700 (PDT), hanc...@bbs.cpcn.com
> sprachen:
>
>> Betamax vs. VHS.
>
> This is a bit of a myth. For one thing, Betamax decks were more
> expensive, due to Sony's licensing strategy (something they seem to
> have got right eventually!).

It was more of a mechanism price issue. By the time Betamax had arrived,
at least two factories were turning out budget VHS mechanisms that Sony
could not price-match. That resulted in budget (for the time) recorders
from Orion (an OEM maker), Sharp and Sanyo that Sony had no answer for,
and it probably did more to entrench the format than anything else.

> Also, the tapes didn't run for as long,

Only in Europe. In the NTSC countries and the Asian countries the
running times of VHS were lower, since the tape ran at faster speeds.
Matsushita delayed the introduction of VHS for PAL systems for 18 months
because they envisioned problems with the higher bandwidth needed for
PAL versus NTSC; in the meantime they had managed to improve the picture
quality to such a degree that they could afford to slow the tape down.

180 min cassettes are 120 min in a US-spec recorder. If I remember
correctly, the first tapes available for NA VHS were 80 min long, which
was not enough to record a movie off TV, since the adverts made the show
time of the average movie on TV 90 to 120 min long.

> and the quality wasn't THAT much better. Late 70s TVs weren't
> particularly sharp of picture so that you'd notice.

You did notice, quite easily. Some TVs from the time, like Grundig, were
extremely good; I preferred buying second-hand late-70s/early-80s TVs to
new wide-screen ones up until the time I went LCD a few years back.

Edmund H. Ramm

unread,
Sep 4, 2011, 11:04:22 AM9/4/11
to
In <41595a6a-1f91-4457...@d18g2000yqm.googlegroups.com> hanc...@bbs.cpcn.com writes:

> Why was it better?

(Almost) orthogonal command set and linear addressing, to name just
two reasons. None of that braindead segment register stuff.

Eddi ._._.
--
e-mail: dk3uz AT arrl DOT net | AMPRNET: dk...@db0hht.ampr.org
Linux/m68k, the best U**x ever to hit an Atari!

Jonathan de Boyne Pollard

unread,
Sep 4, 2011, 12:46:59 PM9/4/11
to
> Looking back at the 8086 vs. 6800 it's hard not to have your thinking
> colored by all that came after.
>
.... on both sides. The 6800 didn't have the large general purpose
register set of the 680xx family, for instance.

> As someone else pointed out, the assembler syntax is just awful, but
> that's another issue too.
>

And then there's the machine code. Opcode prefix bytes, for example.

Jonathan de Boyne Pollard

unread,
Sep 4, 2011, 12:54:45 PM9/4/11
to
> At the time I also preferred the memory-mapped I/O.
>
The world at large does today, if my experience of PCI devices is
anything to go by.

Walter Bushell

unread,
Sep 4, 2011, 2:08:59 PM9/4/11
to
In article
<IU.D20110904.T1...@J.de.Boyne.Pollard.localhost>,
Jonathan de Boyne Pollard
<J.deBoynePoll...@nospicedham.NTLWorld.COM> wrote:

One always could write one's own assembler, but machine code requires
change of processor.

--
Ignorance is no protection against reality. -- Paul J Gans

Ahem A Rivet's Shot

unread,
Sep 4, 2011, 2:55:39 PM9/4/11
to
On Sun, 04 Sep 2011 17:46:59 +0100

Jonathan de Boyne Pollard
<J.deBoynePoll...@nospicedham.NTLWorld.COM> wrote:

> > Looking back at the 8086 vs. 6800 it's hard not to have your thinking
> > colored by all that came after.
> >
> .... on both sides. The 6800 didn't have the large general purpose
> register set of the 680xx family, for instance.

	The 6800 isn't really very closely related to the 68000; the 6809
is closer to being an 8-bit ancestor.

--
Steve O'Hara-Smith | Directable Mirror Arrays
C:>WIN | A better way to focus the sun
The computer obeys and wins. | licences available see
You lose and Bill collects. | http://www.sohara.org/

Rugxulo

unread,
Sep 4, 2011, 4:30:27 PM9/4/11
to
Hi,

On Sep 4, 3:38 am, HT-Lab <han...@nospicedham.htminuslab.com> wrote:
>
> > Nevertheless the Intel designs were superior in a number of other
> > directions.
>
> Including code density.
>
> http://www.csl.cornell.edu/~vince/papers/iccd09/iccd09_density.pdf

Here's a better link (to the same thing):

http://www.deater.net/weave/vmwprod/asm/ll/

Yes, x86 code can be compact but usually isn't. I blame bad compilers,
but all the extra alignment for alleged speed doesn't help (even more so
with SSE2). Lacking decent "smartlinkers" also hurts (and they are
usually only for Pascal-ish compilers, not C ... why?).

I think Wirth's Lilith machine was touted as being even more compact
vs. traditional machines. Here's a link:

http://www.modulaware.com/mdlt52.htm

"[H]igh code density was considered to be of paramount importance for
complex system implementation on small workstations. Lilith was
organized as a word-addressed 16-bit computer, M-code as a byte
stream. Memory size was 2^16 words (128K bytes)."

"Two factors contributed primarily to the achieved high code density,
which turned out to be superior to commercial processors by the
remarkable factor of 2.5 (M68000) to 3.5 (I8086).

1. The use of several address and operand lengths of 4, 8, 16, and 32
bits. It turned out that more than 70% of all instruction parameters
had values between 0 and 15, in which case the operand was packed
together with the operation code in a single byte.

2. Lilith was equipped with a stack for intermediate results occurring
in the evaluation of expressions. This stack was implemented as a 16-
word, fast SRAM. M-code therefore contained no register numbers, as
they were implied by the stack scheme."

Peter Flass

unread,
Sep 4, 2011, 7:25:57 PM9/4/11
to

I think a stack architecture, being a 0- or 1-address scheme, will
always generate more compact code. (I'm willing to be contradicted on
this). I believe the disadvantages of a stack architecture are
mitigated by having the topmost "n" stack elements be registers.

Peter Moylan

unread,
Sep 4, 2011, 8:36:53 PM9/4/11
to
Edmund H. Ramm wrote:
> In <41595a6a-1f91-4457...@d18g2000yqm.googlegroups.com> hanc...@bbs.cpcn.com writes:
>
>> Why was it better?
>
> (Almost) orthogonal command set and linear addressing, to name just
> two reasons. None of that braindead segment register stuff.

I'd like to jump in here and defend segmentation. It was an excellent
idea that never got the attention that it deserved. These days, most OS
designers use the paging hardware to simulate segmentation, but that
requires throwing away some desirable features that would have been
supported by the segmentation hardware.

Paging and segmentation are conceptually different, and logically ought
to be supported by two entirely different layers of the operating
system. Segmentation is all about protection. Paging is for disk
swapping. By throwing the two concepts into the same pot you get a less
clean system design.

There were two factors that killed off the widespread use of
segmentation. The first was a desire to have a linear address space.
Now, why would anyone want to have a linear address space? In good
modular program design, it's better to have nonlinear addresses of the
form (module, address within module). Unfortunately the C standard, when
strictly interpreted, requires a linear address space, even if
higher-level languages don't. Without a linear address space, pointer
arithmetic won't work in all cases. Try, for example, to find a meaning
for (p1-p2), where p1 and p2 are pointers into two different segments.

Now, a little thought will show that pointer arithmetic fails in a
segmented address space ONLY when doing things that nobody but an
extremely stupid programmer would do. It wouldn't be hard to change the
C standard to say that the result of doing something stupid is
undefined. Nevertheless, some people cling to saying "the standard
allows it, so it should be legal".

(There were also some legitimate complaints about performance issues.
For example, the "task segment" concept looks like an excellent way to
do thread switching. In practice, though, it turned out to be a lot
faster to use methods that ignored the task segment.)

The second thing that killed off segmentation was Intel's fault. The
entire point of segmentation is protection, but Intel chose to release a
processor that had segment registers but didn't have segment protection.
In addition the segment size limits were too small. In other words, the
8086 had segmentation that didn't implement segmentation. No wonder
everyone looked at the result and decided that segmentation was a stupid
idea. In that processor, it was indeed a stupid idea. The problem was
fixed in the 80286, but by then it was too late. Segmentation got a
dirty name, and it never recovered. Some CS departments don't even teach
about segmentation any more; it's been swept under the rug.

John Byrns

unread,
Sep 4, 2011, 8:15:49 PM9/4/11
to
In article <2umdnSU4uZgWS_zT...@westnet.com.au>,
Peter Moylan <inv...@nospicedham.peter.pmoylan.org.invalid> wrote:

> Peter Flass wrote:
> > On 9/2/2011 5:45 AM, Jonathan de Boyne Pollard wrote:
> >>> I was once urged to write a book on 68000 assembly language programming.
> >>> I did in fact write one on 80x86 assembly language programming, but the
> >>> reviewers didn't like it because they said I'd picked the wrong
> >>> processor. They were pretty well unanimous in saying that the 80x86 line
> >>> was going to fizzle out and the Motorola 6800/68000/etc were the wave of
> >>> the future. As a result, the publisher made some bad marketing decisions
> >>> based on the assumption that the potential market was vanishingly small.
> >>
> >> They'd have been on safer ground had they decided that it was the
> >> _assembly language programming_ part that was going to become
> >> comparatively unimportant, rather than the processor architecture. (When
> >> I'm next in the same part of the country as my library, I'll check how
> >> many books that I have that deal with 80x86 assembly language
> >> programming, rather than with the rather different beasts of _MS-DOS
> >> system and applications programming_.)
> >
> > The 68xx (and 68xxx) chops *should have* become more important. The
> > architecture was miles better than the x86, and in fact seemed to me to
> > be the only microprocessor architecture that hadn't been thrown together
> > higlly-piglly out of whatever could be made to fit on a chip.
>
> The Intel designers were educated on IBM machines, and the Motorola
> designers were educated on Motorola machines; and it shows.

What conclusions are we to draw from this? I know what an IBM computer is,
although it isn't obvious to me what sort of machines Intel designers were
educated on; I don't see the connection between Intel and IBM machines. I'm
even more confused about what it means that "the Motorola designers were educated on
Motorola machines"? For that matter what was a "Motorola machine" in the
relevant time period, an MDP-1000, a.k.a. a rebadged General Automation SPC-12?
I see zero architectural similarity between the MC6800 and the MDP-1000 except
that they both used 8 bit wide memory systems.

I don't know anything about the design history of the MC6800, although there was
a large transistor level schematic diagram of it on the wall outside my office
door that showed every transistor in it, not plots of the masks but an actual
schematic showing wires, transistors, capacitors and etc. A schematic of the
largest current Intel offering would be interesting to see, I wonder how big a
sheet of paper would be required, would it fit in the state of Texas?

I do know a little about the design of the MC68K that followed the MC6800. At
the time I was involved in the design of crypto algorithms and chips to
implement them and interfaced with the Motorola semiconductor group in Phoenix.
The original 68K architecture specification was done by a group of Motorola
employees that came out of the Motorola Data Processing Center and were
disciples of the IBM religion. Sometime in the latter half of the 1970s, I
along with several other people were given copies of the preliminary 68K
architecture document and asked to write reports on what we thought of the
design. The design was right out of the IBM 370 POP. Being a devotee of the
PDP-11 I didn't much care for the proposed architecture, I wrote my report
detailing changes I felt should be made and sent if off. A short time later I
stopped by the 68K design center, which I think was at Motorola's new Austin
facility, on my way home from Phoenix, although I may be confused about that, to
discuss the proposed architecture with them. I didn't hear anything more from
the 68K group until the chip hit the market a couple of years later. Needless
to say I was very pleased to find that the actual production 68K architecture
was full of ideas borrowed from the PDP-11 and was without a hint of the
original 370 lite proposal. I wish I still had a copy of that original 68K
proposal, as well as my criticisms of it and suggestions for improving it. Not
wanting to ruffle too many feathers I think my suggestions were along the lines
of marrying some of the PDP-11 addressing modes to the IBM architecture.

The irony is that a couple of years later Motorola reprogrammed the
microprogram ROM in the original 68K for IBM, to emulate the 370
instruction set. IIRC the IBM-educated designers at Intel provided a
floating-point chip for the same IBM project that implemented the IBM
floating-point instructions.

--
Regards,

John Byrns

Surf my web pages at, http://fmamradios.com/

John Byrns

unread,
Sep 4, 2011, 8:33:10 PM9/4/11
to
In article <2umdnSU4uZgWS_zT...@westnet.com.au>,
Peter Moylan <inv...@nospicedham.peter.pmoylan.org.invalid> wrote:

> Peter Flass wrote:
> > On 9/2/2011 5:45 AM, Jonathan de Boyne Pollard wrote:
> >>> I was once urged to write a book on 68000 assembly language programming.
> >>> I did in fact write one on 80x86 assembly language programming, but the
> >>> reviewers didn't like it because they said I'd picked the wrong
> >>> processor. They were pretty well unanimous in saying that the 80x86 line
> >>> was going to fizzle out and the Motorola 6800/68000/etc were the wave of
> >>> the future. As a result, the publisher made some bad marketing decisions
> >>> based on the assumption that the potential market was vanishingly small.
> >>
> >> They'd have been on safer ground had they decided that it was the
> >> _assembly language programming_ part that was going to become
> >> comparatively unimportant, rather than the processor architecture. (When
> >> I'm next in the same part of the country as my library, I'll check how
> >> many books that I have that deal with 80x86 assembly language
> >> programming, rather than with the rather different beasts of _MS-DOS
> >> system and applications programming_.)
> >
> > The 68xx (and 68xxx) chops *should have* become more important. The
> > architecture was miles better than the x86, and in fact seemed to me to
> > be the only microprocessor architecture that hadn't been thrown together
> > higlly-piglly out of whatever could be made to fit on a chip.
>
> The Intel designers were educated on IBM machines, and the Motorola
> designers were educated on Motorola machines; and it shows.

What conclusions are we to draw from this? I know what an IBM computer is,
although it isn't obvious to me what sort of machines Intel designers were
educated on, and I don't see the connection between Intel and IBM machines. I'm
even more confused about what it means that "the Motorola designers were
educated on Motorola machines". For that matter, what was a "Motorola machine"
in the relevant time period: an MDP-1000, a.k.a. a rebadged General Automation
SPC-12? I see zero architectural similarity between the MC6800 and the
MDP-1000 except that they both used 8-bit-wide memory systems.

I don't know anything about the design history of the MC6800, although there was
a large transistor-level schematic diagram of it on the wall outside my office
door that showed every transistor in it: not plots of the masks, but an actual
schematic showing wires, transistors, capacitors, etc. A schematic of the
largest current Intel offering would be interesting to see; I wonder how big a
sheet of paper would be required. Would it fit in the state of Texas?

I do know a little about the design of the MC68K that followed the MC6800. At
the time I was involved in the design of crypto algorithms and chips to
implement them, and interfaced with the Motorola semiconductor group in
Phoenix. The original 68K architecture specification was done by a group of
Motorola employees who came out of the Motorola Data Center and were disciples
of the IBM religion. Sometime in the latter half of the 1970s, I, along with
several other people, was given copies of the preliminary 68K architecture
document and asked to write reports on what we thought of the design. The
design was right out of the IBM 370 of the time. Being a devotee of the
PDP-11, I didn't care much at all for the proposed architecture; I wrote my
report detailing changes I felt should be made and sent it off. A short time
later I stopped by the 68K design center, which I think was at Motorola's new
Austin facility, on my way home from Phoenix (although I may be confused about
that) to discuss the proposed architecture with them. I didn't hear anything
more from the 68K group until the chip hit the market a couple of years later.
Needless to say, I was very pleased to find that the actual production 68K
architecture was full of ideas borrowed from the PDP-11 and was without a hint
of the original "370 lite" proposal. I wish I still had a copy of that
original 68K proposal, as well as my criticisms of it and suggestions for
improving it. Not wanting to ruffle too many feathers, I think my suggestions
were along the lines of marrying some of the PDP-11 addressing modes to the
IBM architecture.

The irony is that a couple of years later Motorola reprogrammed the
microprogram ROM in the original 68K, to emulate the 370 instruction set for
IBM. IIRC the IBM-educated designers at Intel supplied a floating point chip
which emulated the IBM floating point instruction set for the same IBM project
that used the modified M68K.

Robert Wessel

unread,
Sep 4, 2011, 10:56:12 PM9/4/11
to
On Mon, 05 Sep 2011 10:36:53 +1000, Peter Moylan
<inv...@nospicedham.peter.pmoylan.org.invalid> wrote:

>Edmund H. Ramm wrote:
>> In <41595a6a-1f91-4457...@d18g2000yqm.googlegroups.com> hanc...@bbs.cpcn.com writes:
>>
>>> Why was it better?
>>
>> (Almost) orthogonal command set and linear addressing, to name just
>> two reasons. None of that braindead segment register stuff.
>
>I'd like to jump in here and defend segmentation. It was an excellent
>idea that never got the attention that it deserved. These days, most OS
>designers use the paging hardware to simulate segmentation, but that
>requires throwing away some desirable features that would have been
>supported by the segmentation hardware.
>
>Paging and segmentation are conceptually different, and logically ought
>to be supported by two entirely different layers of the operating
>system. Segmentation is all about protection. Paging is for disk
>swapping. By throwing the two concepts into the same pot you get a less
>clean system design.


x86's implementation of segments had enough problems that many people
were put off. First, there were never enough segment registers (even
once FS and GS got added), and reloading segment registers was
painfully slow. In 16-bit mode the 64K limit was a serious issue; the
maximum number of segments a program could have (~8K) was far too
small; and setting up new segments was horribly slow.


>There were two factors that killed of the widespread use of
>segmentation. The first was a desire to have a linear address space.
>Now, why would anyone want to have a linear address space? In good
>modular program design, it's better to have nonlinear addresses of the
>form (module, address within module). Unfortunately the C standard, when
>strictly interpreted, requires a linear address space, even if
>higher-level languages don't. Without a linear address space, pointer
>arithmetic won't work in all cases. Try, for example, to find a meaning
>for (p1-p2), where p1 and p2 are pointers into two different segments.
>
>Now, a little thought will show that pointer arithmetic fails in a
>segmented address space ONLY when doing things that nobody but an
>extremely stupid programmer would do. It wouldn't be hard to change the
>C standard to say that the result of doing something stupid is
>undefined. Nevertheless, some people cling to saying "the standard
>allows it, so it should be legal".


The C standard does *not* define arithmetic involving pointers
pointing to different objects. It does largely imply that *within* an
object addresses appear to be linear. For example, in:

int a;
void f()
{
    int b;
    int *p;
    p = malloc(...);
}

a, b, p and the storage pointed to by p, can all be in different
segments.

Computing "&a-p" is *not* defined C. (Of course, on many
compilers, especially those implementing C in a flat address space, it
works anyway.)


>(There were also some legitimate complaints about performance issues.
>For example, the "task segment" concept looks like an excellent way to
>do thread switching. In practice, though, it turned out to be a lot
>faster to use methods that ignored the task segment.)
>
>The second thing that killed off segmentation was Intel's fault. The
>entire point of segmentation is protection, but Intel chose to release a
>processor that had segment registers but didn't have segment protection.
>In addition the segment size limits were too small. In other words, the
>8086 had segmentation that didn't implement segmentation. No wonder
>everyone looked at the result and decided that segmentation was a stupid
>idea. In that processor, it was indeed a stupid idea. The problem was
>fixed in the 80286, but by then it was too late. Segmentation got a
>dirty name, and it never recovered. Some CS departments don't even teach
>about segmentation any more; it's been swept under the rug.


More the other way around. 8086 segmentation was not great, but it
was not that hard to use, especially since you didn't usually have
that many segments. Protected mode on the 286 brought the pain to the
forefront, not least because the limits were actually enforced, you
lost the use of a good chunk of a segment register since you couldn't
store into CS anymore, and segment register reloads went from being a
bit slow to being very slow (which of course was exacerbated by now
having only three segment registers you could use to address data).

Roberto Waltman

unread,
Sep 5, 2011, 1:33:40 AM9/5/11
to
Robert Wessel wrote:

>x86's implementation of segments had enough problems that many people
>were put off. First, there were never enough segment registers (even
>once FS and GS got added), and reloading segment registers was
>painfully slow. And in 16 bit mode, the 64K limit was a serious
>issue, and finally the maximum number of segments a program could have
>(~8K), was far too small, and setting up new segments was horribly
>slow.

I once read an interview with Ashton-Tate's "Chief Scientist" (I cannot
recall the name), who was responsible for the development of
Framework, an excellent office suite for IBM PCs.
(Excellent considering what was available at the time and the limited
resources of early PCs; I'm talking about 8088s.)
He said that the x86's segmented architecture increased the time
needed to develop Framework by 30%, compared to what could have been
done with a flat/linear address space.
--
Roberto Waltman

[ Please reply to the group.
Return address is invalid ]

Single Stage to Orbit

unread,
Sep 5, 2011, 3:32:17 AM9/5/11
to
On Sat, 2011-09-03 at 17:26 +1000, Peter Moylan wrote:
> [1] Which early computer was it that had exactly the same segment
> descriptors as the 80286? I used to know, and it's slipped out of my
> mind. Whichever one it was, it was recognised at the time as a major
> advance in concept, but not practical because the hardware was too
> expensive.

I've been trying to find out which computer that was for ages, but
nobody seems to know anything about it. Even Wikipedia had nothing.
--
Tactical Nuclear Kittens

io_x

unread,
Sep 5, 2011, 4:30:15 AM9/5/11
to

"Peter Moylan" <inv...@nospicedham.peter.pmoylan.org.invalid> ha scritto nel
messaggio news:Z4qdnau2oKM1hPnT...@westnet.com.au...

> Edmund H. Ramm wrote:
>> In <41595a6a-1f91-4457...@d18g2000yqm.googlegroups.com>
>> hanc...@bbs.cpcn.com writes:
> for (p1-p2), where p1 and p2 are pointers into two different segments.

I think it all goes well because the function malloc() works using that
kind of pointer subtraction and addition; the day that's not possible,
you have to rewrite malloc.

I think it could be impossible to preserve all the good behaviours
of malloc in a segmented system,
but possibly I'm wrong about that...

Buon Giorno

Jonathan de Boyne Pollard

unread,
Sep 5, 2011, 5:40:58 AM9/5/11
to
> I don't know anything about the design history of the MC6800, although there was
> a large transistor level schematic diagram of it on the wall outside my office
> door that showed every transistor in it, not plots of the masks but an actual
> schematic showing wires, transistors, capacitors and etc. A schematic of the
> largest current Intel offering would be interesting to see, I wonder how big a
> sheet of paper would be required, would it fit in the state of Texas?

There are a couple of RTL models of the 80386 in VHDL knocking around
the WWW. Convergent's model comes to some 1200 lines, I am told. But
it's also not functionally complete. The University of Cincinnati's
(also incomplete) RTL model of the MC68000 comes to 1300 lines of VHDL.

Jonathan de Boyne Pollard

unread,
Sep 5, 2011, 5:48:54 AM9/5/11
to
>> Bootstrap programs still do such things now. The bootstrap program in
>> your hard discs' MBRs replaces itself, at the same memory location, with
>> a program loaded from a VBR, and then restarts the program.

> AFTER it relocates itself from 0:7C00 to 0:0600. :-)

No. That's not actually true for all MBR bootstrap programs. It's not
true for at least two of Microsoft's; and it's not true for two of mine.

Jonathan de Boyne Pollard

unread,
Sep 5, 2011, 6:07:53 AM9/5/11
to
>> Looking back at the 8086 vs. 6800 it's hard not to have your thinking
>>> colored by all that came after.
>>>
>> .... on both sides. The 6800 didn't have the large general purpose
>> register set of the 680xx family, for instance.
>
> The 6800 isn't really related to the 68000 very closely, the 6809
> is closer to being an 8 bit ancestor.

That's certainly arguable. But the 6809 doesn't have the large general
purpose register set of the 680xx family, *either*. The point remains
that the 680xx family does nowadays colour one's perceptions of the 680x
processors, just as the 80486DX colours one's perceptions of the 8086.

Peter Flass

unread,
Sep 5, 2011, 8:16:13 AM9/5/11
to

Seems like it's a tradeoff of development time vs. security and sharing
of code. If you look at what Windows, Linux, etc. go through to share
libraries vs. what would have to be done in a segmented system (i.e.
nothing), you can see this. Done right, segmentation would also provide
(somewhat) better protection against clobbering storage.

Ahem A Rivet's Shot

unread,
Sep 5, 2011, 9:19:22 AM9/5/11
to
On Mon, 05 Sep 2011 11:07:53 +0100

Jonathan de Boyne Pollard
<J.deBoynePoll...@nospicedham.NTLWorld.COM> wrote:

I'd put the 6800 beside the 8080, the 6809 beside the Z80, and the
68000 beside the 8086 in terms of development stages (and times).

greenaum

unread,
Sep 5, 2011, 9:56:49 AM9/5/11
to
On Sun, 04 Sep 2011 13:48:08 +0100, Olafur Gunnlaugsson
<oligun...@nospicedham.gmail.com> sprachen:

>Matsushita delayed the introduction of VHS for PAL systems for 18 months
>because they envisioned problems with the higher bandwidth needed for
>PAL versus NTSC,

>180 min cassettes are 120 min in a USA spec recorder,

Why? There are 100 more lines, but 10 fewer fields per second. I thought
they worked out as needing the same bandwidth, about 15,000 lines per
second.

--

--------------------------------------------------------------------------------
"There's nothing like eating hay when you're faint," the White King remarked to Alice, as he munched away.
"I should think throwing cold water over you would be better," Alice suggested: "--or some sal-volatile."
"I didn't say there was nothing better," the King replied. "I said there was nothing like it."
Which Alice did not venture to deny.


Jonathan de Boyne Pollard

unread,
Sep 5, 2011, 9:34:14 AM9/5/11
to
> I'd like to jump in here and defend segmentation.[...]

You only get to do so if other people can jump in and defend the C
language and C standard from the charges that you make against them. (-:

> There were two factors that killed of the widespread use of
> segmentation. The first was a desire to have a linear address space.
> Now, why would anyone want to have a linear address space? In good
> modular program design, it's better to have nonlinear addresses of the
> form (module, address within module). Unfortunately the C standard, when
> strictly interpreted, requires a linear address space, even if
> higher-level languages don't. Without a linear address space, pointer
> arithmetic won't work in all cases. Try, for example, to find a meaning
> for (p1-p2), where p1 and p2 are pointers into two different segments.
>
> Now, a little thought will show that pointer arithmetic fails in a
> segmented address space ONLY when doing things that nobody but an
> extremely stupid programmer would do. It wouldn't be hard to change the
> C standard to say that the result of doing something stupid is
> undefined. Nevertheless, some people cling to saying "the standard
> allows it, so it should be legal".

The standard didn't, and still doesn't (despite pressure), define any
such thing. The standard didn't need to be changed, because *as it was
written* it actually permitted implementations that did as Win16 and
16-bit OS/2 implementations did with far and huge pointers and pointer
arithmetic. One got undefined behaviour under the C standard from
simply using the value of an invalid pointer, for example. This
accorded with the behaviour of segmented pointers in the x86 world where
just loading an invalid selector into a selector register, before even
manipulating the pointer value or dereferencing it, causes a general
protection exception. One also got (and still gets) undefined behaviour
under the C standard for subtracting one pointer from another when they
don't point to objects in the same array (or to one past the end of the
array).

I'm afraid that we don't get to blame the standard or the language for
this. The blame can be laid squarely at the door of the programmers who
wanted things *that the language didn't guarantee* to work *anyway*,
despite that.

Peter Moylan

unread,
Sep 5, 2011, 10:31:01 AM9/5/11
to

By pure coincidence, rewriting malloc (for a microcontroller) was my job
today at work. It requires a small amount of pointer arithmetic, but not
much. The only real difficulty is in avoiding program bugs. Well, OK,
making it efficient is also hard work, and making it thread-safe is
something that some implementers forget, but those considerations have
nothing to do with linearity or otherwise of the address space. Most
typically, you'd make your heap one large segment, so that in any case
all pointers lie inside the same segment.

I have vague memories of some object-oriented computer architectures in
which every object returned by the equivalent of malloc lives in its own
hardware-protected segment, so it's impossible to make mistakes like
running off the end of an allocated structure. That's the best way to
preserve good behaviours. Of course there's a performance hit, but if
you're building that sort of computer you take care to minimise the
inefficiencies.

With most memory implementations, a C-like malloc operation has the very
undesirable feature that there's no protection against having your
pointers wander into a forbidden region.

Peter Moylan

unread,
Sep 5, 2011, 10:33:57 AM9/5/11
to
John Byrns wrote:

> I'm even more confused what it means that "the Motorola designers
> were educated on Motorola machines"?

Sorry, that was poor proof-reading on my part. I meant DEC machines.

Peter Moylan

unread,
Sep 5, 2011, 11:10:58 AM9/5/11
to
Ahem A Rivet's Shot wrote:
> On Mon, 05 Sep 2011 11:07:53 +0100
> Jonathan de Boyne Pollard
> <J.deBoynePoll...@nospicedham.NTLWorld.COM> wrote:
>
>>>> Looking back at the 8086 vs. 6800 it's hard not to have your thinking
>>>>> colored by all that came after.
>>>>>
>>>> .... on both sides. The 6800 didn't have the large general purpose
>>>> register set of the 680xx family, for instance.
>>> The 6800 isn't really related to the 68000 very closely, the 6809
>>> is closer to being an 8 bit ancestor.
>> That's certainly arguable. But the 6809 doesn't have the large general
>> purpose register set of the 680xx family, *either*. The point remains
>> that the 680xx family does nowadays colour one's perceptions of the 680x
>> processors, just as the 80486DX colours one's perceptions of the 8086.
>
> I'd put the 6800 beside the 8080, the 6809 beside the Z80, and the
> 68000 beside the 8086 in terms of development stages (and times).
>
In times, yes; but the 6809 was a genuine advance over the 6800, while
the Z80 was just an 8080 with a few kludges tacked on.

Marven Lee

unread,
Sep 5, 2011, 11:21:26 AM9/5/11
to

Jonathan de Boyne Pollard wrote:
>> I was once urged to write a book on 68000 assembly language programming.
>> I did in fact write one on 80x86 assembly language programming, but the
>> reviewers didn't like it because they said I'd picked the wrong
>> processor. They were pretty well unanimous in saying that the 80x86 line
>> was going to fizzle out and the Motorola 6800/68000/etc were the wave of
>> the future. As a result, the publisher made some bad marketing decisions
>> based on the assumption that the potential market was vanishingly small.
>
> They'd have been on safer ground had they decided that it was the
> _assembly language programming_ part that was going to become
> comparatively unimportant, rather than the processor architecture. (When
> I'm next in the same part of the country as my library, I'll check how
> many books that I have that deal with 80x86 assembly language programming,
> rather than with the rather different beasts of _MS-DOS system and
> applications programming_.)

I remember being stuck with my Amiga 4000/030 while everyone else
was upgrading to Pentium or faster PCs. I didn't do much assembly
language programming on it but I remember writing a 16-bit fixed
point Mandelbrot generator to see how much faster I could make it
than the code I wrote in C. I think it turned out to be a bit faster, but I
can't remember by how much. Maybe it was pointless optimizing
for a 25MHz 68030 considering the speed of the Pentium PCs available
at the time; however, I had to wait a few more years for my first PC.

I found 68K assembler hard as I had the mindset that I had to treat the
registers as a form of cache and that I had to avoid moving data between
registers and memory wherever possible. Keeping track of what variable
was in what register seemed tricky to me although I'm sure there were
ways of defining labels for the registers. Learning x86 afterwards was a
lot easier as I never really thought of the registers as a cache but as a
place to hold intermediate results or for forming addresses.

The book I used to learn assembly was Mastering Amiga Assembler but
that only covered application programming and not systems programming.
Later a friend gave me his copy of Assembly Language and Systems Programming
for the M68000 family. When I read it I got the impression that systems
programming and the MMU in particular were quite complex subjects. In any
case the chapters on the FPU and MMU were useless to me as the
68EC030 had neither.

I recall that the 68020 and above had lots of funky addressing modes but I
don't remember using any of them. I think I'm right in saying that the
'040 and '060 supported the complex addressing modes but that it was
faster to calculate the effective address with a sequence of simpler
instructions.

--
Marv


Peter Brooks

unread,
Sep 5, 2011, 11:24:32 AM9/5/11
to
On Sep 5, 4:31 pm, Peter Moylan

<inva...@nospicedham.peter.pmoylan.org.invalid> wrote:
>
>
> With most memory implementations, a C-like malloc operation has the very
> undesirable feature that there's no protection against having your
> pointers wander into a forbidden region.
>
If your pointers are wandering into forbidden regions then, maybe,
it'd be wise to try using a safer programming language. Either that or
keep a firmer grip on your pointers: that's the only thing that sort
of variable understands.

Michael Black

unread,
Sep 5, 2011, 11:50:19 AM9/5/11
to
On Tue, 6 Sep 2011, Peter Moylan wrote:


>> I'd put the 6800 beside the 8080, the 6809 beside the Z80, and the
>> 68000 beside the 8086 in terms of development stages (and times).
>>
> In times, yes; but the 6809 was a genuine advance over the 6800, while
> the Z80 was just an 8080 with a few kludges tacked on.
>

And that's a good way to describe the Z80: the 8080 did not have an
orthogonal set of opcodes, and the Z80 just built on it, adding fancy
instructions that did only specific things.

For the 6809, Motorola spent a lot of time analyzing actual programs in 6800
code to see where to move things in the next generation. There was a
three-part article in Byte about the 6809's development. It was a much
more complicated CPU than the 6800.

The 68000 was developed more or less at the same time, but it was mostly a
start from scratch, unlike the 8086, which sort of built on the 8080.
The 68000 actually seemed less complicated than the 6809, but that's more
because there were fewer instructions; the added addressing modes made
up for it. I remember looking at the set of instructions for the 68000
and thinking "there isn't much to it; that's an improvement over the
6809?", but later it was obviously quite capable, just in a cleaner
way than the 6809.

In both cases, they benefited from what had come before. The Z80 wasn't a
radical improvement over the 8080, it was just fancier with things tacked
on. There was somewhat of a gap before they started work on the 6809 and
68000, and that I think gave them a better perspective on where to go.

Of course, when I first looked at 8080 mnemonics, after using the 6502, I
thought the 8080 ones were odd. And that remained, which is why I went
to a 6809 and then a 68000. One reason I didn't run Linux as early as I
could have (I went to the 6809 in 1984 so I could run Microware OS-9, said to
be "Unix-like", and some of it was) was because I was waiting to find a
used Mac that was good enough to run Linux. That never happened, so
finally I got a used Pentium in 2001, the first time an Intel CPU had
been in my main computer. And of course by that point the hardware
didn't really matter that much; things have become so complicated that I'm
not going to hand-assemble like I did in the 6502 days, and the CPUs are
fast enough that compiling C programs is not a slow process, so
assembly language for me is now abstract.
Michael

John Levine

unread,
Sep 5, 2011, 2:03:27 PM9/5/11
to
>Seems like it's a tradeoff of development time vs. security and sharing
>of code. If you look at what windows, Linux, etc. go thru to share
>libraries vs. what would have to be done in a segmented system (i.e.
>nothing) you can see this. Done right, segmentation would also provide
>(somewhat) better protection against clobbering storage.

I wouldn't say it's nothing. Look at Multics, the ur-segmented
system: every routine was really two segments, a shared read-only one
for the code and an unshared read-write one (which would be COW now)
for the data, with a fair amount of cruft to deal with the linkage.

Segmented architectures suffer from the same two problems of any other
architecture, address size and performance, and they suffer from it
worse. In segmented systems the segments sooner or later turn out to
be too small, and you don't have enough of them. Segmented addressing
is always slower than flat addressing because each new segment
reference has to fetch all the descriptor stuff for the segment. The
Intel 286 did a uniquely bad job of dealing with all of these issues
(really, how hard would it have been to notice when you reloaded the
same value into a segment register) but it's always an issue.

Multics died for a variety of reasons, but one of the reasons was
surely that it was so slow. On the same computer, Multics could
support maybe 20 users, DTSS which was built like a transaction system
with an unsegmented process architecture could support 100.

You might want to look at the Burroughs large system architecture,
first implemented in the B6500 in 1969 and still around today as
the Unisys Clearpath. It's not exactly segmented, but it has all
the goodness that segments give you.

R's,
John

Tim Roberts

unread,
Sep 5, 2011, 3:40:01 PM9/5/11
to
gree...@nospicedham.yahoo.co.uk (greenaum) wrote:
>
>On Sun, 04 Sep 2011 13:48:08 +0100, Olafur Gunnlaugsson
><oligun...@nospicedham.gmail.com> sprachen:
>
>>Matsushita delayed the introduction of VHS for PAL systems for 18 months
>>because they envisioned problems with the higher bandwidth needed for
>>PAL versus NTSC,
>
>Why? There's 100 more lines, but 10 less fields per second. I thought
>they worked out as needing the same bandwidth, about 15,000 lines per
>second.

You are correct. The pixel rate is the same.
--
Tim Roberts, ti...@probo.com
Providenza & Boekelheide, Inc.

Tim Roberts

unread,
Sep 5, 2011, 3:42:14 PM9/5/11
to
John Levine <jo...@nospicedham.iecc.com> wrote:
>
>Multics died for a variety of reasons, but one of the reasons was
>surely that it was so slow. On the same computer, Multics could
>support maybe 20 users, DTSS which was built like a transaction system
>with an unsegmented process architecture could support 100.
>
>You might want to look at the Burroughs large system architecture,
>first implemented in the B6500 in 1969 and still around today as
>the Unisys Clearpath. It's not exactly segmented, but it has all
>the goodness that segments give you.

The Control Data Cyber 180 architecture was also segment- and ring-based. It
had 15 rings, and the operating system actually used 9 of those rings to
separate user from subsystem from kernel. It resulted in a very good
security model, but by the time it was introduced, mainframes were
irrelevant.

James Kuyper

unread,
Sep 5, 2011, 2:17:38 PM9/5/11
to
On 09/05/2011 09:34 AM, Jonathan de Boyne Pollard wrote:

You really should include attribution lines for whoever it was who made
the comments you're referring to.

Since I've seen no previous messages on this topic in comp.std.c, I
conclude that you added comp.std.c to the cross-postings of a message
that was posted in some other newsgroup(s). A closer examination also
indicates that you dropped comp.lang.asm.x86 from the "Reply-To" list.
Standard usenet netiquette requires that you mention any such changes in
the message body.

>> I'd like to jump in here and defend segmentation.[...]
>
> You only get to do so if other people can jump in and defend the C
> language and C standard from the charges that you make against them. (-:

...


>> form (module, address within module). Unfortunately the C standard, when
>> strictly interpreted, requires a linear address space, even if
>> higher-level languages don't. Without a linear address space, pointer
>> arithmetic won't work in all cases. Try, for example, to find a meaning
>> for (p1-p2), where p1 and p2 are pointers into two different segments.

I'm curious, how does the strict interpretation of the C standard, that
the uncredited poster of the previous message refers to, deal with
section 6.5.6p9: "When two pointers are subtracted, both shall point to
elements of the same array object, or one past the last element of the
array object; the result is the difference of the subscripts of the two
array elements." in such a way as to support the conclusion that a
linear address space is mandatory?

...


>> Now, a little thought will show that pointer arithmetic fails in a
>> segmented address space ONLY when doing things that nobody but an
>> extremely stupid programmer would do. It wouldn't be hard to change the
>> C standard to say that the result of doing something stupid is
>> undefined.

Violating a "shall" that occurs outside of a constraint section
is one of the three ways listed in section 4p2 whereby the behavior of
a program can become undefined. Therefore, the standard already says
that the behavior of a program that performs such a subtraction is
undefined. No change is needed; which I suppose could be considered
the difficulty->0 limit of difficult changes.
--
James Kuyper

Peter Flass

unread,
Sep 5, 2011, 4:19:47 PM9/5/11
to
On 9/5/2011 11:21 AM, Marven Lee wrote:
>
> I found 68K assembler hard as I had the mindset that I had to treat the
> registers as a form of cache and that I had to avoid moving data between
> registers and memory wherever possible. Keeping track of what variable
> was in what register seemed tricky to me although I'm sure there were
> ways of defining labels for the registers. Learning x86 afterwards was a
> lot easier as I never really thought of the registers as a cache but as a
> place to hold intermediate results or for forming addresses.

You develop a lot of funky notions when first learning assembler. The
process of getting rid of them makes learning assembler for any other
machine much easier.

Phil Carmody

unread,
Sep 5, 2011, 4:18:43 PM9/5/11
to
Peter Moylan <inv...@nospicedham.peter.pmoylan.org.invalid> writes:
> There were two factors that killed of the widespread use of
> segmentation. The first was a desire to have a linear address space.
> Now, why would anyone want to have a linear address space? In good
> modular program design, it's better to have nonlinear addresses of the
> form (module, address within module). Unfortunately the C standard, when
> strictly interpreted, requires a linear address space, even if
> higher-level languages don't.

Chapter and verse, please.

> Without a linear address space, pointer
> arithmetic won't work in all cases. Try, for example, to find a meaning
> for (p1-p2), where p1 and p2 are pointers into two different segments.

My bet is that they're different objects if they're in different segments.
In which case, there *shouldn't* be a meaning to p1-p2. You've just invoked
the Nuddsy one.

Phil
--
"Religion is what keeps the poor from murdering the rich."
-- Napoleon

Phil Carmody

unread,
Sep 5, 2011, 4:39:47 PM9/5/11
to
HT-Lab <han...@nospicedham.htminuslab.com> writes:
> On 03/09/2011 08:26, Peter Moylan wrote:
> ...
> >
> > The Intel designers were educated on IBM machines, and the Motorola
> > designers were educated on Motorola machines; and it shows.
> >>
> >> Just goes to show that better doesn't necessarily equal market success.
> >
> > The Intel processors might look like a great steaming pile of crap if
> > you look at the instruction set, the confusingly specialised registers,
> > etc. Nevertheless the Intel designs were superior in a number of other
> > directions.
>
> Including code density.
>
> http://www.csl.cornell.edu/~vince/papers/iccd09/iccd09_density.pdf

Unfortunately every benchmark is a toy. Practically every routine
fits in 8 registers, so that's favouring an architecture with 8
registers, as it can do all calculation in registers, and also have
short instructions. (Notice how 6502 with so few registers suffers
so much.)

I know that real-world code that I have compiled on both an x86 and
my Alpha has shown only a moderate code size increase, nowhere near
the several-to-one ratio that the study of toy inner loops claims.

Peter Flass

unread,
Sep 5, 2011, 4:53:50 PM9/5/11
to
On 9/5/2011 2:03 PM, John Levine wrote:
>> Seems like it's a tradeoff of development time vs. security and sharing
>> of code. If you look at what windows, Linux, etc. go thru to share
>> libraries vs. what would have to be done in a segmented system (i.e.
>> nothing) you can see this. Done right, segmentation would also provide
>> (somewhat) better protection against clobbering storage.
>
> I wouldn't say it's nothing. Look at Multics, the ur-segmented
> system. Every routine was really two segments, a shared read-only one
> for the code and an unshared read-write one (which would be COW now)
> for the data, with a fair amount of cruft to deal with the linkage.

I was thinking of things like having to reference code and data in dso's
on Linux thru the PLT [? I have my copy of _Linkers and Loaders_ sitting
on a shelf in the next room, but I have to re-read that section].
Program relocation is never required in a segmented system - everything
starts at offset zero.

I believe Windows has the opposite problem if it still handles this like
OS/2. Shared libraries are loaded and relocated once, and are mapped at
the same virtual address by all users. This is a bit like VM's
DCSS (discontiguous saved segments).

>
> Segmented architectures suffer from the same two problems of any other
> architecture, address size and performance, and they suffer from it
> worse. In segmented systems the segments sooner or later turn out to
> be too small, and you don't have enough of them.

No matter what you have it's always too small and there aren't enough of
them. Segmented architectures undoubtedly hit these limits sooner than
flat memory architectures.

> Segmented addressing
> is always slower than flat addressing because each new segment
> reference has to fetch all the descriptor stuff for the segment. The
> Intel 286 did a uniquely bad job of dealing with all of these issues
> (really, how hard would it have been to notice when you reloaded the
> same value into a segment register) but it's always an issue.

With all the work that's gone into, for example, cache on x86 systems
today, you certainly could cache as many segment descriptors as
required. I'm not aware of work done on locality of *segment*
references analogous to what has been done with paging. This is
probably just my lack of knowledge. How many segments would a program
access in a given time? Probably the "segment working set" is a
relatively small number, though probably greater than 6. A reasonable
architecture today could easily cache 16 or 32 or more descriptors.
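Not any real CPU's mechanism, but a toy C sketch of the idea: a small fully-associative cache of segment descriptors with LRU replacement, so that reloading a selector whose descriptor is already cached could skip the memory fetch. All names, field layouts, and sizes here are invented for illustration.

```c
#include <stdint.h>
#include <string.h>

#define NWAYS 16  /* illustrative "segment working set" capacity */

typedef struct { uint32_t base, limit; uint8_t attrs; } seg_desc;

typedef struct {
    uint16_t selector[NWAYS];
    seg_desc desc[NWAYS];
    uint32_t stamp[NWAYS];   /* last-use time, for LRU */
    uint32_t clock;
    int      valid[NWAYS];
} desc_cache;

void cache_init(desc_cache *c) { memset(c, 0, sizeof *c); }

/* Returns 1 on hit (descriptor copied to *out), 0 on miss --
   on a miss the "CPU" would fetch from the GDT/LDT in memory. */
int cache_lookup(desc_cache *c, uint16_t sel, seg_desc *out) {
    for (int i = 0; i < NWAYS; i++)
        if (c->valid[i] && c->selector[i] == sel) {
            c->stamp[i] = ++c->clock;  /* refresh LRU position */
            *out = c->desc[i];
            return 1;
        }
    return 0;
}

/* Install a descriptor after a miss, evicting the least recently used. */
void cache_fill(desc_cache *c, uint16_t sel, const seg_desc *d) {
    int victim = 0;
    for (int i = 0; i < NWAYS; i++) {
        if (!c->valid[i]) { victim = i; break; }
        if (c->stamp[i] < c->stamp[victim]) victim = i;
    }
    c->selector[victim] = sel;
    c->desc[victim]     = *d;
    c->stamp[victim]    = ++c->clock;
    c->valid[victim]    = 1;
}
```

With 16 or 32 ways, repeated reloads of the same handful of selectors (the common case the 286 handled so badly) would hit in the cache nearly every time.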

>
> Multics died for a variety of reasons, but one of the reasons was
> surely that it was so slow. On the same computer, Multics could
> support maybe 20 users, DTSS which was built like a transaction system
> with an unsegmented process architecture could support 100.

I won't argue, but I'll point out that Multics was designed from the
start as a general-purpose timesharing system, while DTSS originally
started out (AFAIK) as a BASIC-only system. The resource requirements
are very different.

>
> You might want to look at the Burroughs large system architecture,
> first implemented in the B6500 in 1969 and still around today as
> the Unisys Clearpath. It's not exactly segmented, but it has all
> the goodness that segments give you.

Actually the B5500, and presumably the B5000, go back even earlier. I
think that everyone who used these systems loved them.

I like to be contrarian, and segments are mostly "the road not taken."
I'm not sure segmented architectures got a fair shot. The hardware was
more complicated at a time when every gate counted, so the tradeoffs
were (AFAIK) never completely evaluated.


Message has been deleted

Peter Flass

unread,
Sep 5, 2011, 4:57:46 PM9/5/11
to
On 9/5/2011 3:42 PM, Tim Roberts wrote:
>
> The Control Data Cyber 180 architecture was also segment and ring based. It
> had 15 rings, and the operating system actually used 9 of those rings to
> separate user from subsystem from kernel. It resulted in a very good
> security model, but by the time it was introduced, mainframes were
> irrelevant.

Mainframes have *never* become irrelevant;-)

Marven Lee

unread,
Sep 5, 2011, 6:10:11 PM9/5/11
to

Peter Moylan wrote:
> The Intel processors might look like a great steaming pile of crap if
> you look at the instruction set, the confusingly specialised registers,
> etc. Nevertheless the Intel designs were superior in a number of other
> directions. Fitting an entire floating point coprocessor onto the same
> chip as the main processor was pretty impressive at the time, even if
> it's now commonplace. Stealing the segmentation approach[1] was a
> brilliant idea, even if the software people wasted the opportunity to
> use it. (Although it still mystifies me that Intel wasn't sued for
> patent violation. The original design must surely have been patented,
> and it was lifted with bit-for-bit copying accuracy.) Perhaps most
> importantly, Intel knew how to fabricate very complicated chips with an
> acceptable yield, leading to the whole thing being affordable.
>
> Motorola had a much cleaner approach to processor design, but it wasn't
> nearly as impressive when it came to things like memory management,
> caches, etc.

I suppose one of the good things about the 386 was that all variants had
an MMU as standard unlike the EC versions of 68030s and 040s.

I sometimes wonder what would have happened if Commodore shipped
all of their big box Amigas with processors that had MMUs, whether
AmigaOS would have evolved into a system with some form of memory
protection, or if they could have written a replacement OS and included
a dual boot feature. Of course the market for big box Amigas must have
been insignificant compared to their 500/600/1200 computers.

I'm sure it's been thought of before but one way of adding protection
would have been to run all of the existing parts of the kernel and
applications in supervisor mode but run new applications as user-mode
processes in their own address spaces.

Old applications could still bring the system down as they'd be running in
the kernel. It would still have been a win for stability even if only a
small number of new applications began taking advantage of memory
protection.

A monitor would have to run in supervisor mode alongside the existing kernel
and old applications. Traps and exceptions from new user-mode applications
would transfer control to the monitor which would then work out what
underlying Amiga kernel routines to call. The monitor would also be
responsible for keeping track of what resources such as files, memory and
other objects were in use.

The monitor would grow quite large to cover most of the Amiga's existing
library calls. Perhaps in the beginning the monitor could just handle the
basics of process, memory management and filesystem calls. That way a lot
of command line tools could have been protected and would have been the
easiest place to start. Possibly more than one type of monitor could exist
to provide different personalities.

I think it would have been possible but I'm sure a lot of Amigans will
tell me it isn't. I can dream though!


--
Marv


Anne & Lynn Wheeler

unread,
Sep 5, 2011, 5:29:57 PM9/5/11
to
John Levine <jo...@nospicedham.iecc.com> writes:
> Multics died for a variety of reasons, but one of the reasons was
> surely that it was so slow. On the same computer, Multics could
> support maybe 20 users, DTSS which was built like a transaction system
> with an unsegmented process architecture could support 100.

tss/360 was extremely slow and bloated on (segment virtual address)
360/67. I ran emulated fortran edit, compile & execute with cp67 and got
better performance & response with 35 simulated users than tss/360 with
four simulated users. tss/360 besides being heavily bloated would memory
map all the segment stuff and demand page ... compared to cp67 which
would use simulated real i/o that did larger block transfers.

there was a period where lots of other locations started to pick up & use
cp67 ... originally done at cambridge science center which had 768kbyte
360/67, 104 pageable pages after fixed storage. One was grenoble science
center that had 1mbyte 360/67, 155 pageable pages (after fixed storage),
approx. 50% more available real storage than cambridge. As an undergraduate in
the 60s for cp67, I did my own multiprogramming level controls, page
thrashing controls, and global LRU page replacement algorithms ... in
contrast to what was published in academic literature circa 1968. In
the 70s, Grenoble decided to modify cp67 with local LRU page replacement
and working set controls ... from academic literature from 1968 (and
Grenoble published article on the results in CACM in the early 70s).
Note that with similar workload, Grenoble supported 35 users with similar
throughput and response as the Cambridge system did for 75 users (grenoble
360/67 had 1.5 times the real storage of the cambridge 360/67 but only
supported half as many users ... local LRU and "working set
dispatcher"). misc. past posts

In the early 80s, this raised its head with somebody working at Tandem
(co-worker of Jim Gray) doing a PhD at Stanford on global LRU page
replacement ... and there was stiff academic resistance to anything
other than "local LRU". Jim was aware of my work in Global LRU as
undergraduate in the 60s ... as well as the global/local comparison
between the cambridge & grenoble systems (global and other stuff
on the cambridge system outperforming the much larger grenoble system with
local & other academically "acceptable" approaches).
http://www.garlic.com/~lynn/subtopic.html#clock
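For readers who haven't met the algorithm being contrasted with "local LRU": a toy C sketch of the clock (second-chance) approximation to global LRU page replacement. This illustrates the general technique only; it is not Wheeler's actual CP/67 code, and the frame count is arbitrary.

```c
#define NFRAMES 8

typedef struct {
    int referenced[NFRAMES];  /* hardware reference bits, set on access */
    int hand;                 /* clock hand position */
} clock_state;

/* Pick a victim frame globally, across all users: sweep the hand,
   clearing reference bits, until an unreferenced frame is found. */
int clock_select(clock_state *c) {
    for (;;) {
        if (!c->referenced[c->hand]) {
            int victim = c->hand;
            c->hand = (c->hand + 1) % NFRAMES;
            return victim;
        }
        c->referenced[c->hand] = 0;          /* give a second chance */
        c->hand = (c->hand + 1) % NFRAMES;
    }
}
```

The point of contention in the 70s-80s literature was exactly this: one global sweep over all real storage versus per-process "local LRU" with working-set controls.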

later, i did memory mapped filesystem for cms ... i worked hard to
provide for large block transfers and to avoid many of the tss/360
bottlenecks (that i had seen) and got on order of three times the
thruput of base cms filesystem. misc. past posts
http://www.garlic.com/~lynn/submain.html#mmap

I had problem with segment & sharing because CMS was using lots of
conventions and applications borrowed from os/360 ... and some
of the internal characteristics caused me all sorts of problems
with segment sharing ... misc. past posts discussing some of the
segment sharing problems doing memory mapped filesystem for CMS
http://www.garlic.com/~lynn/submain.html#adcon

note that the internal (failed) Future System picked up a lot of stuff
from tss/360 with regard to memory mapped and "single level store" misc.
past posts
http://www.garlic.com/~lynn/submain.html#futuresys

--
virtualization experience starting Jan1968, online at home since Mar1970

Anne & Lynn Wheeler

unread,
Sep 5, 2011, 6:03:38 PM9/5/11
to

John Levine <jo...@nospicedham.iecc.com> writes:
> Multics died for a variety of reasons, but one of the reasons was
> surely that it was so slow. On the same computer, Multics could
> support maybe 20 users, DTSS which was built like a transaction system
> with an unsegmented process architecture could support 100.

2nd attempt

Peter Moylan

unread,
Sep 5, 2011, 6:36:15 PM9/5/11
to
Morten Reistad wrote:
> In article <A-ednaGGWue1QPnT...@westnet.com.au>,
> Peter Moylan <inv...@nospicedham.peter.pmoylan.org.invalid> wrote:

>> With most memory implementations, a C-like malloc operation has the very
>> undesirable feature that there's no protection against having your
>> pointers wander into a forbidden region.
>

> It is a reasonably simple task to write a malloc that sets bounds on
> such pointers.

I wasn't talking about the malloc itself - you can usually assume that
that was written by a competent programmer - but what is done with the
chunk of memory after it gets mallocated.

Peter Moylan

unread,
Sep 5, 2011, 6:43:36 PM9/5/11
to

In my own private projects I do use a safer programming language. At
work I get less choice. My problem is that most of the programming I do
at work is for embedded microcontrollers, where typically Hobson
controls your choice of language. C compilers are widespread. Compilers
for other languages are ... you know, that word that I can't use in case
Dan is reading this thread. The opposite of well-done.

Keeping a firm grip on your pointers is a good idea, but they're
inclined to misbehave any time your attention wanders.

Peter Moylan

unread,
Sep 5, 2011, 6:45:06 PM9/5/11
to

Several people have pointed this out. I guess I shouldn't have relied on
a distant memory. Sorry.

Anne & Lynn Wheeler

unread,
Sep 5, 2011, 7:08:38 PM9/5/11
to

something weird with this post ... it was posted to my news server
... but took possibly an hour before it showed up (thinking it was lost,
resent it after 30mins) ... when it finally showed up, both had munged email
address.

Frank Kotler

unread,
Sep 5, 2011, 7:29:00 PM9/5/11
to
Anne & Lynn Wheeler wrote:
> something weird with this post ... it was posted to my news server
> ... but took possibly an hour before it showed up (thinking it was lost,
> resent it after 30mins) ... when it finally showed up, both had munged email
> address.

[apprentice moderator's note]
You(se guys) are posting to a moderated newsgroup. When/if I'm awake and
on the ball, I'm approving your posts, and putting you on the
"whitelist" - which means that your posts will be "auto-approved" within
10 or 15 minutes, I forget what I've got "sleep" set to. When/if the
moderator gives me the word, we're probably going to "pull the plug" on
this thread, since few of these posts relate to x86(-64) assembly
language programming, which is the topic of c.l.a.x86. As an
alternative, news:alt.lang.asm is appropriate for assembly language
other-than-x86. The guys at news:comp.arch may have something to add.
AFAIK, news:alt.os.assembly is "dead"... although they're probably happy
to see some "traffic"...

It's a pleasure to see some traffic here - a few posts have actually
been "on topic", and it's a pleasure to hear from you guys (some of whom
I "know" from other venues). But you probably don't really want to be
posting here - it delays all posts...

As the conductor said to the hobo, "Ain't my train, son. There ain't
nothing in the world that I can do."

Best,
Frank

Peter Flass

unread,
Sep 5, 2011, 9:08:07 PM9/5/11
to
On 9/5/2011 7:29 PM, Frank Kotler wrote:
> Anne & Lynn Wheeler wrote:
>> something weird with this post ... it was posted to my news server
>> ... but took possibly an hour before it showed up (thinking it was lost,
>> resent it after 30mins) ... when it finally showed up, both had munged email
>> address.
>
> [apprentice moderator's note]
> You(se guys) are posting to a moderated newsgroup.

I believe if you cross-post to a moderated and an unmoderated newsgroup
all posts are held up pending moderator approval.

Anne & Lynn Wheeler

unread,
Sep 5, 2011, 9:41:42 PM9/5/11
to

John Levine <jo...@nospicedham.iecc.com> writes:
> Multics died for a variety of reasons, but one of the reasons was
> surely that it was so slow. On the same computer, Multics could
> support maybe 20 users, DTSS which was built like a transaction system
> with an unsegmented process architecture could support 100.

re:
http://www.garlic.com/~lynn/2011l.html#6 segments and sharing, was 68000 assembly language programming

one of the last adtech conferences in the 70s was in POK ... we
presented 16-way 370 SMP and the 801 group was presenting 801/risc.
... the FS failure and mad rush to get products back into 370 product
pipeline also cannibalized adtech groups ... misc. past posts mentioning
FS
http://www.garlic.com/~lynn/submain.html#futuresys

somebody in the 801 group was making statements that vm370 couldn't
support 16way smp because he had looked at the production shipped code
and it didn't contain any smp support.

the 801 group then presented inverted page tables and 16 segment
"registers" ... 16 256mbyte segments (32bit addressing). I pointed out
16 segments were way too small a number. The response was 801 is a
"closed" system with no (hardware) protection domains ... inline
application/library code can change segment register values as easily as
general purpose register values (security would be achieved by the pl.8
compiler only generating correct programs and cp.r loader would only
load valid pl.8 compiled programs).

(the amount of vm370 code written to support 16way 370 smp ... was
enormously less than the amount of code that the 801 group had yet to
write)

later 801 email reference in the 80s
http://www.garlic.com/~lynn/2006t.html#email810812
in this comp.arch post
http://www.garlic.com/~lynn/2006t.html#7 32 or even 64 registers for x86-64?

romp/801 was originally going to be a follow-on to the displaywriter. When
that project was killed, there was a search to find some other use, and a
unix workstation was decided on ... the group that had done the AT&T unix port
to the ibm/pc as pc/ix was hired to do aixv2 running on the pc/rt. for the unix
port, however, romp/801 had to have a hardware protection domain added
... which eliminated the ability to change segment register values with
inline application/library code (instead requiring kernel calls).

i was dragged into working out how to package multiple small shared
segments into a single large 256mbyte segment ... old email refs
http://www.garlic.com/~lynn/2006y.html#email841114c
http://www.garlic.com/~lynn/2006y.html#email841127
in this comp.arch post
http://www.garlic.com/~lynn/2006y.html#36 Multiple mappings

misc. (other) old 801/risc email
http://www.garlic.com/~lynn/lhwemail.html#801

Nathan Baker

unread,
Sep 5, 2011, 11:04:05 PM9/5/11
to

"Jonathan de Boyne Pollard"
<J.deBoynePoll...@nospicedham.NTLWorld.COM> wrote in message
news:IU.D20110905.T0...@J.de.Boyne.Pollard.localhost...
>>> Bootstrap programs still do such things now. The bootstrap program in
>>> your hard discs' MBRs replaces itself, at the same memory location, with
>>> a program loaded from a VBR, and then restarts the program.
>
> > AFTER it relocates itself from 0:7C00 to 0:0600. :-)
>
> No. That's not actually true for all MBR bootstrap programs. It's not
> true for at least two of Microsoft's; and it's not true for two of mine.

I agree that this discussion belongs in comp.lang.asm.x86, but I fail to see
why the other groups are necessary. Also, your cross-posting habit may
develop a few side-effects that you may not be aware of. Consider that CLAX
is moderated:

CLAX86 Policy & Technical Issues FAQ:
http://clax.inspiretomorrow.net/clax86.html
{note - I.5. is "tongue-in-cheek" .. meant to make the reader think}

What happens if I decide to reject an article? Will AUE people (or AFC
folk) start to complain about my sudden *censorship* power over their
'unmoderated' group??

A couple options exist:

1) Cross-post to either 'alt.lang.asm' or 'comp.arch' -- these two are not
moderated.

2) Continue to cross-post to CLAX, but set the 'Followup-To:' header to
just CLAX -- that way, further replies come to CLAX but don't automatically
cause the other groups to suffer moderation. (consult your newsreader docs
on how to set this header)
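For example, a sketch of the relevant article headers (group list as discussed above):

```
Newsgroups: comp.lang.asm.x86,alt.lang.asm,comp.arch
Followup-To: comp.lang.asm.x86
```

With headers like these, the article still appears in all three groups, but replies default to CLAX only.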

Nathan.
P.S. - I give no apologies for poor gramar... and certainly not for a
preposition a construct may end in.


ArarghMai...@not.at.arargh.com

unread,
Sep 5, 2011, 11:08:58 PM9/5/11
to
On Mon, 05 Sep 2011 10:48:54 +0100, Jonathan de Boyne Pollard
<J.deBoynePoll...@nospicedham.NTLWorld.COM> wrote:

>>> Bootstrap programs still do such things now. The bootstrap program in
>>> your hard discs' MBRs replaces itself, at the same memory location, with
>>> a program loaded from a VBR, and then restarts the program.
>
> > AFTER it relocates itself from 0:7C00 to 0:0600. :-)
>
>No. That's not actually true for all MBR bootstrap programs. It's not
>true for at least two of Microsoft's; and it's not true for two of mine.

Well, it's true for all of the MS MBR programs that I know of, and
for all the ones that I wrote.
--
ArarghMail108 at [drop the 'http://www.' from ->] http://www.arargh.com
BCET Basic Compiler Page: http://www.arargh.com/basic/index.html

To reply by email, remove the extra stuff from the reply address.

Nathan Baker

unread,
Sep 5, 2011, 11:43:07 PM9/5/11
to
[re-posting to 'alt.usage.english' for all and sundry peruse, edification,
and such]

"Frank Kotler" <fbko...@nospicedham.myfairpoint.net> wrote in message
news:j43n2h$ttj$1...@speranza.aioe.org...

Frank Kotler

unread,
Sep 5, 2011, 11:40:25 PM9/5/11
to

Correct. I should have made that more clear, perhaps. I don't know *why*
it's like that, but it is. Seems "wrong" to me, but I was not consulted.:)

Best,
Frank

Message has been deleted

Adam Funk

unread,
Sep 6, 2011, 6:30:42 AM9/6/11
to
On 2011-09-05, Peter Moylan wrote:

> Peter Brooks wrote:

>> If your pointers are wandering into forbidden regions then, maybe,
>> it'd be wise to try using a safer programming language. Either that or
>> keep a firmer grip on your pointers - that's the only thing that sort
>> of variable understands..
>
> In my own private projects I do use a safer programming language. At
> work I get less choice. My problem is that most of the programming I do
> at work is for embedded microcontrollers, where typically Hobson
> controls your choice of language. C compilers are widespread. Compilers
> for other languages are ... you know, that word that I can't use in case
> Dan is reading this thread. The opposite of well-done.

I've heard of cooking hardware (especially when fans fail), but how do
you cook a compiler?


[Pruning FUs a bit, but I don't mind if someone sets them back.]


--
The internet is quite simply a glorious place. Where else can you find
bootlegged music and films, questionable women, deep seated xenophobia
and amusing cats all together in the same place? [Tom Belshaw]

Bob Masta

unread,
Sep 6, 2011, 8:20:43 AM9/6/11
to
On Mon, 05 Sep 2011 10:36:53 +1000, Peter Moylan
<inv...@nospicedham.peter.pmoylan.org.invalid> wrote:

>I'd like to jump in here and defend segmentation. It was an excellent
>idea that never got the attention that it deserved. These days, most OS
>designers use the paging hardware to simulate segmentation, but that
>requires throwing away some desirable features that would have been
>supported by the segmentation hardware.

From an assembly language perspective, segmentation made
array handling easy, since you could set the segment to the
start of the array and then use the array index without an
offset.

Also, segmentation made relocation (of small programs
anyway) dead simple: To relocate the code, just change the
segment. No mucking around with the code itself.
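Both points come down to how real mode forms addresses: the linear address is segment*16 + offset, so relocating code means changing only the segment value. A tiny C sketch of that arithmetic (the function name is illustrative):

```c
#include <stdint.h>

/* Real-mode 8086 address formation: segments are 16-byte
   paragraphs, so linear = segment * 16 + offset. */
uint32_t linear(uint16_t seg, uint16_t off) {
    return ((uint32_t)seg << 4) + (uint32_t)off;
}
```

Note that the mapping is many-to-one: 0000:7C00 and 07C0:0000 name the same byte, which is why a small program can be "relocated" purely by loading a different segment value while every offset in the code stays untouched.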

Best regards,


Bob Masta

DAQARTA v6.02
Data AcQuisition And Real-Time Analysis
www.daqarta.com
Scope, Spectrum, Spectrogram, Sound Level Meter
Frequency Counter, FREE Signal Generator
Pitch Track, Pitch-to-MIDI
Science with your sound card!

Message has been deleted
Message has been deleted

wolfgang kern

unread,
Sep 6, 2011, 1:36:31 PM9/6/11
to

Peter Moylan said:
> Edmund H. Ramm wrote:
>> In <41595a6a-1f91-4457...@d18g2000yqm.googlegroups.com>
>> hanc...@bbs.cpcn.com writes:

>>> Why was it better?

>> (Almost) orthogonal command set and linear addressing, to name just
>> two reasons. None of that braindead segment register stuff.

> I'd like to jump in here and defend segmentation. It was an excellent
> idea that never got the attention that it deserved. These days, most OS
> designers use the paging hardware to simulate segmentation, but that
> requires throwing away some desirable features that would have been
> supported by the segmentation hardware.

I too found segmentation the much lower-overhead way to protect
system software like an OS kernel from being touched by whomsoever.

> Paging and segmentation are conceptually different, and logically ought
> to be supported by two entirely different layers of the operating
> system. Segmentation is all about protection. Paging is for disk
> swapping. By throwing the two concepts into the same pot you get a less
> clean system design.

Yeah, paging with attributes of course offers another safety opportunity,
and since x86 long mode won't work without paging at all, it's wise to use
this (even if a bit paranoid) protection for system functions in memory.

> There were two factors that killed off the widespread use of
> segmentation. The first was a desire to have a linear address space.
> Now, why would anyone want to have a linear address space? In good
> modular program design, it's better to have nonlinear addresses of the
> form (module, address within module). Unfortunately the C standard, when
> strictly interpreted, requires a linear address space, even if

> higher-level languages don't. Without a linear address space, pointer


> arithmetic won't work in all cases. Try, for example, to find a meaning
> for (p1-p2), where p1 and p2 are pointers into two different segments.

Oh yeah, paging may help but also introduces guessing where things reside!
With any luck, CLAX may remain a LL-ASM group :)
so here all abstractions should be a thing to avoid!
OTOH, of course a HLL programmer may earn money faster than any ASM
or lowest-level coder like me. But the quality (size and speed) may
bring back this delayed disadvantage in the form of trustworthiness and
reliability. My way may be just an extraordinary example, but I never
ever had to send 'upgrades' aka bugfixes to my clients.

I see nothing wrong with pointer arithmetic; a pointer on x86 CPUs is
nothing else than a 32(64)-bit value kept in a register or in RAM.
__
wolfgang


wolfgang kern

unread,
Sep 6, 2011, 2:28:10 PM9/6/11
to

John Levine mentioned:

[x86-segmentation and code/data segments ...]

I once used a Z280 CPU (true 16-bit) together with an Intel graphics chip.
The design itself was totally wrong (in terms of optimal timing), but
I could make it work, even though I sold only a few machines.

Zilog's Z280 and followers actually used separate code and data segments,
determined by how the hardware was connected to it (separate busses).

I overruled this behaviour with a hardware gate to make data parts also
executable and have the code segment as read/write memory.

Today I hesitate to build my own mainboards because I cannot compete
with Asian low-cost mass distribution anymore.

__
wolfgang (stopped PC-hw-production 1997)


wolfgang kern

unread,
Sep 6, 2011, 3:19:38 PM9/6/11
to

Arargh wrote:

>>>> Bootstrap programs still do such things now. The bootstrap program in
>>>> your hard discs' MBRs replaces itself, at the same memory location,
>>>> with
>>>> a program loaded from a VBR, and then restarts the program.
>>> AFTER it relocates itself from 0:7C00 to 0:0600. :-)

>>No. That's not actually true for all MBR bootstrap programs. It's not
>>true for at least two of Microsoft's; and it's not true for two of mine.

> Well, it's true for all of the MS MBR programs that I know of, and
> for all the ones that I wrote.

This relocation could be a (M$?) standard which I won't follow.
My first-stage boot loader already moves several sectors from disk
to above the first MB [usually 63.5KB to the HMA].

__
wolfgang

wolfgang kern

unread,
Sep 6, 2011, 2:48:13 PM9/6/11
to

Frank Kotler replied to an Anne&Lynn Wheeler-post.

I'd recommend whitelisting the well-known Wheeler twins; even though I can't
remember which of the two is the programmer, I remember that the knowledge
base from these girls is worth comparing to Beth's. ASM groups could
really improve their hit count if just a few ladies would post again.
__
wolfgang


wolfgang kern

unread,
Sep 6, 2011, 2:08:50 PM9/6/11
to
Peter Flass mentioned:

>>> x86's implementation of segments had enough problems that many people
>>> were put off. First, there were never enough segment registers (even
>>> once FS and GS got added), and reloading segment registers was
>>> painfully slow. And in 16 bit mode, the 64K limit was a serious
>>> issue, and finally the maximum number of segments a program could have
>>> (~8K), was far too small, and setting up new segments was horribly
>>> slow.

>> I read once an interview with Ashton-Tate's "Chief Scientist" (cannot
>> recall the name,) whom was responsible for the development of
>> Framework, an excellent office suite for IBM PCs.
>> (Excellent considering what was available at the time and the limited
>> resources of early PCs. I'm talking about 8088s...)
>> He said that the x86's segmented architecture increased the time
>> needed to develop Framework by 30%, compared to what could be done
>> with a flat/linear addressing space.

> Seems like it's a tradeoff of development time vs. security and sharing of
> code. If you look at what windows, Linux, etc. go thru to share libraries
> vs. what would have to be done in a segmented system (i.e. nothing) you
> can see this. Done right, segmentation would also provide (somewhat)
> better protection against clobbering storage.

Fully agree here; the price for protection was either:
* never tell how you use any given hardware ...
or
* follow the CPU manufacturers' hints on how to hide opportunities

believe it or not, my way was (and still is) a working combination
of both :) And it has worked well for me for several decades.

Of course the time spent on programming in LL is much higher than ...
But code quality (in terms of size and speed) becomes unbeatable then.
So make your own decision:
fast selling may need HLL, but making customers believe in you may call
for increased reliability and transparency together with code-size/speed
performance.

Just my few cents on the matter..
__
wolfgang


wolfgang kern

unread,
Sep 6, 2011, 3:03:19 PM9/6/11
to

Walter Bushell said:

>>> Looking back at the 8086 vs. 6800 it's hard not to have your thinking
>>> colored by all that came after.

>> .... on both sides. The 6800 didn't have the large general purpose
>> register set of the 680xx family, for instance.

>>> As someone else pointed out, the assembler syntax is just awful, but
>>> that's another issue too.

I see only a matter of familiarity (I can live with both).

>> And then there's the machine code. Opcode prefix bytes, for example.

> One always could write one's own assembler, but machine code requires
> change of processor.

As I'm one of the last machine code programmers on this planet, I see no
reason to change the hardware for programming :)
But of course I need to upgrade/modify my machine code editor's comment and
code-size lines for every new brand, including GPUs [ATI/AMD and Nvidia].

__
wolfgang


Bill Leary

unread,
Sep 6, 2011, 9:42:32 PM9/6/11
to
"wolfgang kern" wrote in message news:j45rot$5v7$4...@newsreader2.utanet.at...

> As I'm one of the last machine code programmers on this planet, I
> see no reason to change the hardware for programming :)

Do you mean "machine code" literally there?

As in 0x38, 0x84, 0x28 and so on?

If so, I thought I was the last person doing that. :)

- Bill

Nathan Baker

unread,
Sep 6, 2011, 11:45:15 PM9/6/11
to

"wolfgang kern" <now...@never.at> wrote in message
news:j45ros$5v7$3...@newsreader2.utanet.at...
>
> I'd recommend whitelisting the well-known Wheeler twins; even though I
> can't remember which of the two is the programmer, I remember that the
> knowledge base from these ladies is worth comparing to Beth's. ASM groups
> could really improve their hit-count if just a few ladies would post again.
>

Umm... That is a "wife and husband" account... and both of them are
programmers. My understanding is that Lynn does most of the forum & usenet
activity.

Nathan.


Frank Kotler

unread,
Sep 7, 2011, 4:40:28 AM9/7/11
to

We've got a guy posting on the Nasm forum who does it in *decimal*!!!

http://www.magicschoolbook.com/computing/os-project

You guys are *sane* in comparison! :)

Best,
Frank

P.S. Dropping a.u.e - I can't imagine that this is on-topic there!

Olafur Gunnlaugsson

unread,
Sep 7, 2011, 5:01:31 AM9/7/11
to
On 05/09/2011 20:40, Tim Roberts wrote:
> gree...@nospicedham.yahoo.co.uk (greenaum) wrote:
>>
>> On Sun, 04 Sep 2011 13:48:08 +0100, Olafur Gunnlaugsson
>> <oligun...@nospicedham.gmail.com> sprachen:
>>
>>> Matsushita delayed the introduction of VHS for PAL systems for 18 months
>>> because they envisioned problems with the higher bandwidth needed for
>>> PAL versus NTSC,
>>
>> Why? There's 100 more lines, but 10 less fields per second. I thought
>> they worked out as needing the same bandwidth, about 15,000 lines per
>> second.
>
> You are correct. The pixel rate is the same.

No, he is not: the maximum signal bandwidth for an NTSC signal is 4.2 MHz;
it is 5.1 MHz for PAL.

The pixel rate is 8.4 MHz for NTSC and 10.2 MHz for PAL.

http://www.maxim-ic.com/app-notes/index.mvp/id/750
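As an aside, the quoted ~15,000-lines-per-second figure does hold for both systems; the disagreement here is about bandwidth per line, not line rate. A quick check with the standard nominal numbers (these figures come from the TV standards themselves, not from the posts):

```python
# Nominal line rates: total lines per frame times frames per second.
ntsc_lines_per_sec = 525 * (30000 / 1001)   # NTSC: 525 lines at ~29.97 fps
pal_lines_per_sec = 625 * 25                # PAL: 625 lines at 25 fps

print(round(ntsc_lines_per_sec))  # 15734
print(round(pal_lines_per_sec))   # 15625
```

The line rates differ by under 1%, so PAL's extra signal bandwidth goes into more resolution per line rather than more lines per second.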

wolfgang kern

unread,
Sep 7, 2011, 5:12:13 AM9/7/11
to

Bill Leary asked:

I wrote:
>> As I'm one of the last machine code programmers on this planet, I
>> see no reason to change the hardware for programming :)

> Do you mean "machine code" literally there?

Oh yes.

> As in 0x38, 0x84, 0x28 and so on?

Yeah, except that I don't need to type any '0x' nor an appended 'h'.

> If so, I thought I was the last person doing that. :)

I too once declared myself the last one :)
But Rick C. Hodgin told me that I'm wrong, so we aren't alone!

I mainly use my own disassembler which got the hexadecimal
opcode-field editable to create code. This way I immediately see
the correct meaning of my input beside code-size and alignment.
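For readers who have never seen it done: "programming in machine code" means writing the opcode bytes themselves. A toy illustration (the bytes are standard 8086 encodings; the snippet is just for flavor, not wolfgang's actual tooling):

```python
# A hand-assembled 16-bit x86 fragment, written as raw opcode bytes
# the way a machine-code programmer enters it in a hex editor.
code = bytes([
    0xB8, 0x34, 0x12,  # mov ax, 0x1234 (B8 = mov ax,imm16; imm is little-endian)
    0x40,              # inc ax
    0xC3,              # ret
])
print(code.hex(" "))  # b8 34 12 40 c3
print(len(code))      # 5 -- the code-size figure an opcode editor displays
```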

__
wolfgang

Peter Moylan

unread,
Sep 7, 2011, 7:29:17 AM9/7/11
to
It's a dying art, but it's not yet entirely dead.

On the other hand, I don't think we'll ever return to the days where we
used to edit binary paper tapes by gluing the chads back in.

Peter Flass

unread,
Sep 7, 2011, 8:12:17 AM9/7/11
to
What's the advantage of doing this as opposed to using the assembler and
inspecting the generated machine code? I'm not sure what architecture
you're writing for, but I would think that ip- (or pc-) relative
branches would be a bear to keep up with as you make code changes.
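Peter's point about relative branches can be made concrete. An x86 short jump, EB disp8, stores the displacement from the end of the two-byte instruction; insert or delete a byte between the jump and its target and the stored displacement goes stale. A rough sketch of the bookkeeping (the helper is hypothetical, just to show the arithmetic):

```python
def short_jmp(target, jmp_addr):
    # EB disp8: displacement is measured from the end of the 2-byte instruction.
    disp = target - (jmp_addr + 2)
    assert -128 <= disp <= 127, "target out of short-jump range"
    return bytes([0xEB, disp & 0xFF])

# Jump at 0x100 to a target at 0x110 encodes a displacement of 0x0E:
print(short_jmp(0x110, 0x100).hex())  # eb0e

# Insert one byte ahead of the target and the jump must be re-patched by hand:
print(short_jmp(0x111, 0x100).hex())  # eb0f
```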

Peter Duncanson (BrE)

unread,
Sep 7, 2011, 8:48:54 AM9/7/11
to
On Wed, 07 Sep 2011 21:29:17 +1000, Peter Moylan
<inv...@nospicedham.peter.pmoylan.org.invalid> wrote:

>Bill Leary wrote:
>> "wolfgang kern" wrote in message
>> news:j45rot$5v7$4...@newsreader2.utanet.at...
>>> As I'm one of the last machine code programmers on this planet, I
>>> see no reason to change the hardware for programming :)
>>
>> Do you mean "machine code" literally there?
>>
>> As in 0x38, 0x84, 0x28 and so on?
>>
>> If so, I thought I was the last person doing that. :)
>
>It's a dying art, but it's not yet entirely dead.
>
>On the other hand, I don't think we'll ever return to the days where we
>used to edit binary paper tapes by gluing the chads back in.

Ah. Happy memories.

I have one particularly bad memory of paper tape. There were two types
of reader. One was mechanical. The reader attempted to poke a row of
spring-loaded metal rods through the tape. Rods would go through holes
but not through non-holes. Other readers were optical. They attempted to
"poke" light through the tape. Light would go through holes but not
through non-holes.

This was normally no problem. Just once I was faced with the results of
someone attempting to read a tape that had come from a non-computing,
mechanical-reader only, environment. It was an oiled tape. The resulting
mess on an optical paper tape reader took some time to clear up. Not
only did it interfere with the optics, but more nastily it messed up the tape
feed mechanism, which relied on friction between the tape and the feed
rollers, and between the feed rollers and the feed-roller brake pads.

--
Peter Duncanson, UK
(in alt.usage.english)

James Silverton

unread,
Sep 7, 2011, 9:29:16 AM9/7/11
to
On 9/7/2011 7:29 AM, Peter Moylan wrote:
> Bill Leary wrote:
>> "wolfgang kern" wrote in message
>> news:j45rot$5v7$4...@newsreader2.utanet.at...
>>> As I'm one of the last machine code programmers on this planet, I
>>> see no reason to change the hardware for programming :)
>>
>> Do you mean "machine code" literally there?
>>
>> As in 0x38, 0x84, 0x28 and so on?
>>
>> If so, I thought I was the last person doing that. :)
>
> It's a dying art, but it's not yet entirely dead.
>
> On the other hand, I don't think we'll ever return to the days where we
> used to edit binary paper tapes by gluing the chads back in.
>
Not quite gluing the chads back in but covering torn tape with opaque
thin sticky tape and repunching with a hand punch. I had to do that for
a whole day after the incident I mentioned yesterday when the tape fell
off the reel onto me.

--


James Silverton, Potomac

I'm *not* not.jim....@verizon.net

Mike Barnes

unread,
Sep 7, 2011, 10:42:28 AM9/7/11
to
Peter Moylan <inv...@nospicedham.peter.pmoylan.org.invalid>:
>I don't think we'll ever return to the days where we
>used to edit binary paper tapes by gluing the chads back in.

Glue? Luxury.

--
Mike Barnes
Cheshire, England

Charlie Gibbs

unread,
Sep 7, 2011, 1:05:45 PM9/7/11
to
In article <icpe67ttajbq4j89u...@4ax.com>,
ma...@nospicedham.peterduncanson.net (BrE) writes:

> I have one particularly bad memory of paper tape. There were two types
> of reader. One was mechanical. The reader attempted to poke a row of
> spring-loaded metal rods through the tape. Rods would go through holes
> but not through non-holes. Other readers were optical. They attempted
> to "poke" light through the tape. Light would go through holes but not
> through non-holes.
>
> This was normally no problem. Just once I was faced with the
> results of someone attempting to read a tape that had come from
> a non-computing, mechanical-reader only, environment. It was an
> oiled tape. The resulting mess on an optical paper tape reader
> took some time to clear up. Not only did it interfer with the
> optics, more nastily it messed up the tape feed mechansim which
> relied on friction between the tape and the feed rollers, and
> between the feed rollers and the feed-roller brake pads.

Interesting. Nearly all the paper tape work I did involved oiled
tape. Aside from the routine cleaning required by any unit - optical
or mechanical - the optical reader we used had no problem with oiled
tapes (the capstan and pinch roller worked just fine).

For me, the nightmare was tapes which someone had repaired with
those bits of perforated sticky tape. The oil in the tape attacked
the adhesive, turning it into a gummy residue which jammed the
tape under the read head. That's when we got stuck with messy,
time-consuming cleanup jobs. White glue worked much better for
patching oiled tapes.

When you mentioned tape readers with mechanical fingers, I thought
you were going to mention how they handled chadless tapes effortlessly,
while optical readers, not having the fingers to push the still-attached
chad aside, couldn't read them.

--
/~\ cgi...@kltpzyxm.invalid (Charlie Gibbs)
\ / I'm really at ac.dekanfrus if you read it the right way.
X Top-posted messages will probably be ignored. See RFC1855.
/ \ HTML will DEFINITELY be ignored. Join the ASCII ribbon campaign!

Charlie Gibbs

unread,
Sep 7, 2011, 1:13:06 PM9/7/11
to
In article <Pine.LNX.4.64.11...@darkstar.example.net>,
et...@nospicedham.ncf.ca (Michael Black) writes:

> On Tue, 6 Sep 2011, Peter Moylan wrote:
>
>>> I'd put the 6800 beside the 8080, the 6809 beside the Z80, and the
>>> 68000 beside the 8086 in terms of development stages (and times).
>>>
>> In times, yes; but the 6809 was a genuine advance over the 6800,
>> while the Z80 was just an 8080 with a few kludges tacked on.

Just as the 8086 was an 8080 with a few different kludges
tacked on.

> And that's a good way to refer to the Z80, because if the 8080 did not
> have an orthogonal set of opcodes, the Z80 just built on it, adding
> fancy instructions that did only specific things.

What irritates me about the Zilog mnemonics is their attempt to
create an illusion of orthogonality where none exists. LD this
and that, but oops, sorry, not that.

I really wish the 68000 had come out just a few months sooner...

Charlie Gibbs

unread,
Sep 7, 2011, 1:30:41 PM9/7/11
to
In article <f4n8679eferjqupc1...@4ax.com>,
use...@nospicedham.rwaltman.com (Roberto Waltman) writes:

> Robert Wessel wrote:
>
>> x86's implementation of segments had enough problems that many people
>> were put off. First, there were never enough segment registers (even
>> once FS and GS got added), and reloading segment registers was
>> painfully slow. And in 16 bit mode, the 64K limit was a serious
>> issue, and finally the maximum number of segments a program could
>> have (~8K), was far too small, and setting up new segments was
>> horribly slow.
>
> I once read an interview with Ashton-Tate's "Chief Scientist" (I cannot
> recall the name), who was responsible for the development of
> Framework, an excellent office suite for IBM PCs.
> (Excellent considering what was available at the time and the limited
> resources of early PCs. I'm talking about 8088s...)
> He said that the x86's segmented architecture increased the time
> needed to develop Framework by 30%, compared to what could be done
> with a flat/linear addressing space.

My software still contains remnants of the kludges I wrote to work
around the 64K barrier when dealing with large in-memory tables
to avoid the various horrors of segment overrun, wrap-around, etc.
Like so many others, Intel's poor implementation of segmentation
has left a bad taste in my mouth (I thought I was through with such
things after leaving behind the 360's 4K base/offset scheme).
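The 64K horrors Charlie mentions fall straight out of real-mode address formation: linear = segment*16 + offset, with the 16-bit offset wrapping before the segment ever advances. A minimal illustration of that wrap-around (plain real-mode arithmetic, not anyone's actual workaround code):

```python
def linear(seg, off):
    # Real-mode address formation: 16-byte paragraphs, 16-bit offset wrap.
    return (seg << 4) + (off & 0xFFFF)

print(hex(linear(0x1000, 0x0000)))   # 0x10000 -- segment base
print(hex(linear(0x1000, 0xFFFF)))   # 0x1ffff -- last byte of the segment
# One byte past the end wraps back to the base instead of advancing:
print(hex(linear(0x1000, 0x10000)))  # 0x10000 -- the "segment wrap-around"
```

Walking a table larger than 64K therefore meant renormalizing the segment register by hand every time the offset approached the limit, which is exactly the kludge being described.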

The simultaneous rise of the x86 and C accounted for my move
away from assembly-language programming.

Peter Duncanson (BrE)

unread,
Sep 7, 2011, 2:29:16 PM9/7/11
to
On 07 Sep 11 09:05:45 -0800, "Charlie Gibbs"
<cgi...@nospicedham.kltpzyxm.invalid> wrote:

>When you mentioned tape readers with mechanical fingers, I thought
>you were going to mention how they handled chadless tapes effortlessly,
>while optical readers, not having the fingers to push the still-attached
>chad aside, couldn't read them.

Where I was, chadless tapes were rejected as though they were works of the Devil.

Skitt

unread,
Sep 7, 2011, 2:09:35 PM9/7/11
to
I seem to remember that some optical readers had trouble with shiny
greenish mylar tapes. The matte black paper tapes worked fine.

That was a long time ago ...

--
Skitt (SF Bay Area)
http://come.to/skitt

sidd

unread,
Sep 7, 2011, 6:13:51 PM9/7/11
to
On Wednesday 07 September 2011 04:40, Frank Kotler wrote:


> http://www.magicschoolbook.com/computing/os-project

o, nice, thank you for the link

> You guys are *sane* in comparison! :)

only in some respects

sidd

ArarghMai...@not.at.arargh.com

unread,
Sep 7, 2011, 8:39:53 PM9/7/11
to

Could be.

Since there is a standard that the BIOS loads either a boot sector
or a MBR to 0:7C00, it follows that those programs expect to be
loaded there. (There are some old BIOSs that set the registers
wrong, so I allow for that when I write either of these routines.)
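The "registers set wrong" case being allowed for here is usually CS:IP aliasing: in real mode, 0000:7C00 and 07C0:0000 name the same linear byte, and a few BIOSes jump to the latter while most code assumes the former. A quick check of the arithmetic:

```python
def linear(seg, off):
    # Real-mode segmented address: 16-byte paragraphs.
    return (seg << 4) + off

# Both spellings land on the standard boot-sector load address:
print(hex(linear(0x0000, 0x7C00)))  # 0x7c00
print(hex(linear(0x07C0, 0x0000)))  # 0x7c00
# The conventional MBR self-relocation target mentioned below:
print(hex(linear(0x0000, 0x0600)))  # 0x600
```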

In that case, a MBR has to move itself somewhere else in order to
load a boot sector. It doesn't have to be 0:0600. I just copied MS
about that.

The last floppy boot sector I wrote actually occupies the first two
sectors of a floppy, and everything else gets shoved down by 1
sector. Boots OK and, IIRC, works OK under Win98, which is the only
place I tested it.

arargh
--
ArarghMail108 at [drop the 'http://www.' from ->] http://www.arargh.com
BCET Basic Compiler Page: http://www.arargh.com/basic/index.html

To reply by email, remove the extra stuff from the reply address.
