
6502 anyone ?


john

Apr 11, 2017, 8:07:54 AM

Whilst poking around to shortcut my hardware development problems
I came across the following links.

Since I suspect most of us here are a bunch
of old fogeys, I thought they might be of interest:

http://apple2.x10.mx/CHOCHI/
http://hackaday.com/2014/08/16/an-fpga-based-6502-computer/

So far I've seen one that claims to go up to 200 MHz.
Something a bit different anyway.

--

john

=========================
http://johntech.co.uk

"Bleeding Edge Forum"
http://johntech.co.uk/forum/

=========================

rickman

Apr 11, 2017, 11:31:30 PM
On 4/11/2017 8:07 AM, john wrote:
>
> Whilst poking around to shortcut my hardware development problems
> I came across the following links.
>
> Since I suspect most of use here are a bunch
> of old fogeys I thought they might be of interest,
>
> http://apple2.x10.mx/CHOCHI/
> http://hackaday.com/2014/08/16/an-fpga-based-6502-computer/
>
> So far I've seen one that claims to go up to 200Mhz.
> Something a bit different anyway.

FPGA designs for old CPUs are a dime a dozen. You can find everything
from a PDP-11 to an 1802 to an IBM360 for FPGAs. Nearly all of them run
way faster than the original.

--

Rick C

Mark Wills

Apr 12, 2017, 4:27:51 AM
Also, the 6502, whilst a very simple device and quite fast, has a
crap instruction set and no registers.

Yack. Just yack.

Thanks for reminding me of writing assembly (in hex) on my Commodore 64
and Atari 800XL computers. I would have been better off being outside
with my friends trying to get my hands up girls' bras. As it happens,
that came a little later :-)

Lars Brinkhoff

Apr 12, 2017, 4:46:15 AM
Mark Wills wrote:
> Also, the 6502, whilst a very simple device and quite fast, has a
> crap instruction set and no registers.

I haven't done any serious 6502 programming, so I wouldn't know.
Certainly, handling anything beyond 8 bits is going to be awkward, since
it's very insistent on being an 8-bitter.

But at least the instruction set *encoding* is quite regular and
pleasant. I wrote my own Forth assembler for it, and it turned out to
be very simple.

Mark Wills

Apr 12, 2017, 4:57:21 AM
I found the Z80 *much* nicer to code for, because it had proper 16-bit
instructions, the BC, DE, HL registers etc., and some nice macro instructions
(LDIR etc.)

However, I never looked back once I got to work on the 68K and the 9900s.
The TMS9900 family has my all-time favourite instruction set, followed
by the 68K family. I've looked at x86, but that too looks vile, so I always
stayed away from it. I'll just use Forth or something else instead!

john

Apr 12, 2017, 5:08:34 AM
In article <ock6so$33k$3...@dont-email.me>, gnu...@gmail.com says...
I think you missed the real point, but to be fair I didn't say much because I thought
it was obvious.

An FPGA board already configured with a 6502 running Forth and C,
for $30.

Still - if that's all you want to take from it, fair enough.

john

Apr 12, 2017, 5:25:54 AM
In article <86h91uu...@molnjunk.nocrew.org>, lars...@nocrew.org
says...
There is a 16-bit version which I think is also running Forth.
In fact the people on 6502.org seem quite keen on Forth.
Far more so than here, it seems.

Lars Brinkhoff

Apr 12, 2017, 6:24:21 AM
Mark Wills wrote:
> The TMS9900 family has my all time favourite instruction set, followed
> by the 68K family. I've looked at x86 but that too looks vile so I
> always stayed away from it.

I was almost born and raised with the 68000. The x86 instruction set
looks obnoxious and the encoding is even worse! However, my assemblers
for those two turned out to be almost the same in size, which felt like
a disappointment.

Of course, "size of assembler written by random dude" isn't a good
metric for instruction set complexity.

Albert van der Horst

Apr 12, 2017, 6:26:32 AM
In article <86h91uu...@molnjunk.nocrew.org>,
Indeed.
There are misconceptions about the 6502.
The 6502 is the purest 8-bit processor, yet very powerful. The
implementation of FYSFORTH on the Apple II made it into an actually
usable piece of equipment in the Fysisch Laboratorium of Utrecht
University.
For the chip real estate it occupies, it is one of the best.

If you know the chip you realize that it has 128 16-bit registers
on the zero page. If you don't appreciate that, you're in good company:
Andrew Tanenbaum didn't realize that fact and dismissed the 6502
as a potential target for a C compiler.

Groetjes Albert
--
Albert van der Horst, UTRECHT,THE NETHERLANDS
Economic growth -- being exponential -- ultimately falters.
albert@spe&ar&c.xs4all.nl &=n http://home.hccnet.nl/a.w.m.van.der.horst

Lars Brinkhoff

Apr 12, 2017, 6:53:48 AM
Albert van der Horst wrote:
> If you know the chip you realize that it has 128 16 bit registers on
> the zero page.

Of course proper use of the zero page is important in 6502 programming.
But can you honestly call them registers? They reside in external RAM,
which goes against the usual meaning of a register. Even if you ignore
that implementation detail, the programming model still doesn't expose
them as 16-bit register-like entities. You can only deal with chunks of
8 bits at a time.
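
As a minimal illustration (PTR and CONST are hypothetical names; #< and #> are the usual assembler notation for the low and high bytes of a constant), even adding a 16-bit constant to a zero-page pair means going through the accumulator one byte at a time:

        ; add a 16-bit constant to the zero-page pair PTR/PTR+1
                CLC
                LDA PTR         ; low byte first
                ADC #<CONST
                STA PTR
                LDA PTR+1       ; then the high byte, picking up the carry
                ADC #>CONST
                STA PTR+1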

Mark Wills

Apr 12, 2017, 8:10:36 AM
Meh. I don't see them as registers. I see them as operands.

1) They're in RAM
2) They're still 8-bit.

Doing anything in more than 8 bits SUCKS on the 6502!

It is surprisingly fast though, given all the thrashing it has to
do (8 bits, and only three registers).

Anton Ertl

Apr 12, 2017, 9:09:37 AM
Right. And you always have to go through one of the real registers to
get something in there or out of there. The only register-like quality
the zero page had was that you could use two-byte units for
addressing, similar to index or address registers in other CPUs.
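
A minimal sketch of that register-like use, assuming two hypothetical zero-page pairs PTR and DST already set up as pointers:

        ; copy 16 bytes from where PTR points to where DST points,
        ; using zero-page pairs as makeshift address registers
                LDY #15
        LOOP:   LDA (PTR),Y     ; indirect indexed load via zero page
                STA (DST),Y     ; indirect indexed store via zero page
                DEY
                BPL LOOP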

- anton
--
M. Anton Ertl http://www.complang.tuwien.ac.at/anton/home.html
comp.lang.forth FAQs: http://www.complang.tuwien.ac.at/forth/faq/toc.html
New standard: http://www.forth200x.org/forth200x.html
EuroForth 2016: http://www.euroforth.org/ef16/

rickman

Apr 12, 2017, 12:20:04 PM
On 4/12/2017 8:55 AM, Anton Ertl wrote:
> The only register-like quality
> the zero page had was that you could use two-byte units for
> addressing, similar to index or address registers in other CPUs.

Isn't that the important part that is otherwise missing in the 6502?

--

Rick C

hughag...@gmail.com

Apr 12, 2017, 5:38:01 PM
On Wednesday, April 12, 2017 at 1:57:21 AM UTC-7, Mark Wills wrote:
> On Wednesday, 12 April 2017 09:46:15 UTC+1, Lars Brinkhoff wrote:
> > Mark Wills wrote:
> > > Also, the 6502, whilst a very simple device and quite fast, has a
> > > crap instruction set and no registers.
> >
> > I haven't done any serious 6502 programming, so I wouldn't know.
> > Certainly, handling anything beyond 8 bits is going to be awkward, since
> > it's very insistent on being an 8-bitter.
> >
> > But at least the instruction set *encoding* is quite regular and
> > pleasant. I wrote my own Forth assembler for it, and it turned out to
> > be very simple.
>
> I found the Z80 *much* nicer to code for, because it had proper 16 bit
> instructions, BC, DE, HL registers etc, and some nice macro instructions
> (LDIR etc.)

I thought the 65c02 was great! It had a very powerful, well-designed instruction set. A 1 MHz 6502 was comparable in speed to a 2 MHz Z80, and the 65c02 used in the Apple-IIc was 3.6 MHz --- the 65c02 was available up to 8 MHz.

The major problem with the 65c02 was that the JMP indirect instruction used the X register. It should have used the Y register because X was already being used as the Forth data-stack pointer. If I could build a time-machine and go back to the 1970s, I would fix this flaw in the 65c02, as well as the problem of being limited to 64KB --- a 65c02 with 128KB of memory (64KB for code and data each) with the JMP indirect flaw fixed would be great --- this would really change the course of history, as the 65c02 computers would out-perform MS-DOS so well as to kill MS-DOS before it lumbered out of the gate.

Anyway, the best Forth system available was ISYS Forth for the Apple-IIc --- this was STC and did code-optimization. It had a lot of support for floating-point. I wrote a cross-compiler in 16-bit UR/Forth that allowed me to develop Forth code on an MS-DOS machine and cross-compile into the Apple-IIc. I used this to write a program that did symbolic math. I got all the way through being able to do derivatives, and simplifying the equations down fully. I was able to display math equations with Greek characters and various math symbols. I didn't do integrals though because I was already at the limit of what the Apple-IIc was capable of (integrals are pretty difficult to program; I would be hard-pressed to do that even today).

I also programmed the 6809 in assembly-language on the Radio Shack Color Computer. I never got into Z80, although I have studied it --- I wasn't impressed with its design --- it has major design flaws!

lehs

Apr 12, 2017, 8:16:38 PM
How did you do the simplifications? How was it organized?

HAA

Apr 12, 2017, 8:43:35 PM
Mark Wills wrote:
> ...
> Also, the 6502, whilst a very simple device and quite fast, has a
> crap instruction set and no registers.

Keeping in mind it was intended to be a low-cost, bare-bones
alternative to the 6800 aimed at microcontroller work, the limited
register set/size and missing instructions are understandable.
The failure to correct existing broken instructions wasn't.

>
> Yack. Just yack.

You can blame Wozniak and the hobbyists for the appearance of
the 6502 in systems for which it wasn't intended. Tramiel and co.
weren't going to complain. They were making buckets of money
riding the crest of a craze.



hughag...@gmail.com

Apr 12, 2017, 9:10:01 PM
On Wednesday, April 12, 2017 at 5:16:38 PM UTC-7, lehs wrote:
> How did you do the simplifications? How was it organized?

Um, it has been over 25 years, so my memory of how it was organized is foggy. IIRC, I had these kinds of nodes:
1.) binary operator (such as + etc.) and two pointers to other nodes which were the parameters.
2.) unary operator (such as LN etc.) and one pointer to another node which was the parameter.
3.) symbol
4.) number

ISYS didn't have a heap. IIRC, I wrote this in a simple manner. I had two heaps, both of which just did an ALLOCATE by appending onto the pile of existing nodes. I would build the equation in one heap, then I would process the equation and move the new nodes to the other heap which was empty. Then I would empty the heap I had been working from. So, my equation just went back and forth between the two heaps. Each time I processed the equation, it would get smaller or stay the same size, so I only needed to make sure the initial heap was large enough to hold the derivative and then there was no danger of overflow. There was no FREE for freeing individual nodes --- I was only able to empty the entire heap of all nodes.
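
A hedged sketch of that append-only allocation, using made-up zero-page names (FREE for the active heap's free pointer, NODE for the result); emptying a heap is just resetting FREE to the heap's base:

        ; allocate A bytes from the active heap by bumping FREE
        ALLOC:  TAY             ; save the requested size
                LDA FREE
                STA NODE        ; the new node starts at the free pointer
                LDA FREE+1
                STA NODE+1
                TYA
                CLC
                ADC FREE        ; advance the free pointer past the node
                STA FREE
                BCC DONE
                INC FREE+1      ; carry into the high byte
        DONE:   RTS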

Processing the equation was pretty simple. I would just do a pass over the equation looking for a particular pattern. Every time that I found the pattern, I would modify the equation. Sometimes I had to do a modification and then try again with patterns I had already tried. For example:

    +
      5
      +
        x
        6

I do the pass that looks for constants added to each other, but I don't find anything. Then I do a pass that rearranges same operators, so I get this:

    +
      x
      +
        5
        6

Then I try again looking for constants added to each other, and I find this pattern. So I get this:

    +
      x
      11

When I did all my pattern tests and found nothing, and I had already tried every permutation of modifications, I knew that I was done. This seems inefficient, but it worked reasonably well. On my Laser-128, which had a 3.6 MHz 65c02, pretty big and complicated equations could get converted into a derivative and fully simplified in no more than 15 seconds (IIRC). The Apple-IIc had a 2 MHz 65c02, but I didn't own that because it was too expensive.

Getting the derivatives was pretty easy. There are only a few rules. This produced bloated equations though. A lot of the simplifications are done by a person without even thinking of them as simplifications, because they are so obvious. My program just did a dumb application of the rules though, so the resulting equation was quite bloated and didn't look like anything that a person would write. After it was simplified though, it looked the same as what a person would write.

It wasn't a useful program. With a little bit of practice, a person using pencil-and-paper could beat it in speed. Also, integrals are much more difficult for a program --- but with a little bit of practice a person can do those pretty fast with pencil-and-paper --- a program that can do this would be considered AI though (really beyond what could be accomplished with an Apple-IIc).

The Apple IIc had 128KB of memory. I used one 48KB bank for the part of the program that processed equations (this took a lot of memory because of the two heaps). I used the other 48KB bank for the part of the program that displayed equations on the screen (this took a lot of memory because I had my own character set with Greek letters and math symbols, and I used the high-res graphics screen rather than the character screen for the display). I had a 16KB common area at the top that contained all the Forth primitives and other code that was used by both sub-programs.

hughag...@gmail.com

Apr 12, 2017, 9:45:46 PM
The 65c02 was a disappointment. It could have been so much better! If the JMP indirect instruction had used the Y register, the 65c02 would have been a very good processor for VMs. You could have 128 primitives each with a one-byte code. Your generated code would be quite compact. Most of the VM instructions would be one byte. Some of them would be two or three bytes because they have an operand. The NEXT routine would be very fast (note for Mark Wills: this NEXT would be about an order of magnitude faster than anything that could be done on the Z80).
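
For illustration, a minimal sketch of such a byte-code NEXT (IP and TABLE are hypothetical names, not ISYS code), showing where the X-versus-Y choice bites: the dispatch clobbers X, which a Forth would rather keep as its data-stack pointer:

        ; fetch the next one-byte token and dispatch through a table of addresses
        NEXT:   LDY #0
                LDA (IP),Y      ; get the token
                INC IP          ; advance the virtual instruction pointer
                BNE DISPATCH
                INC IP+1
        DISPATCH:
                ASL A           ; token * 2 = offset into the address table
                TAX             ; clobbers X, the Forth data-stack pointer
                JMP (TABLE,X)   ; 65c02 absolute indexed indirect jump

Had that jump indexed with Y instead, X could have stayed dedicated to the data stack instead of being saved and restored around every dispatch.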

In the 1980s, what people needed was a processor that could run a VM efficiently. Generating machine-code was not realistic (on any processor, including the Z80) because memory was too small. A VM however can be very efficient for memory, allowing people to write big programs. We could have had multiple VM systems --- Apple Pascal, Forth, Promal, etc. --- anybody could write a VM that was appropriate to whatever language they preferred. Eventually one of these languages would prove to be popular, then some chip-maker would build a processor that ran that VM as its machine-language to obtain a boost in speed of maybe a couple of orders of magnitude.

The 65c02 designers totally F'ed that up though --- they used the X register, not realizing that the X register is already used as the data-stack pointer in Forth --- they should have used the Y register that is typically a general-purpose register not dedicated to anything.

Also, they should have made the 65c02 address 128KB of memory. This would not have been difficult! All they needed was a 17-bit address-bus with the high-bit set when reading machine-code and unset when accessing data --- so, you have 64KB for machine-code and another 64KB for data (including the user's programs that are written in a VM of some kind).

These two features (the JMP indirect with Y rather than X and the 128KB memory) would have made the 65c02 far superior to the Z80 and pretty competitive with the i8088. CP/M and MS-DOS would not have gotten off the ground. Eventually we would have had a good OS available --- CP/M and MS-DOS were corporate products and were dull as dishwater --- the 65c02 programmers were a lot more intelligent and innovative by nature, so they would have done better. A Forth OS would have been pretty cool! The OS and language would both use the same command-line, so there would be no distinction between a "program" and a "function."

Michael Barry

Apr 13, 2017, 1:41:59 AM
On Wednesday, April 12, 2017 at 5:10:36 AM UTC-7, Mark Wills wrote:
> On Wednesday, 12 April 2017 11:26:32 UTC+1, Albert van der Horst wrote:
> > In article <************>,
It "does" 8-bit tasks very efficiently, and 8-bits is good enough for
a lot of simple stuff, then and even today. Literally millions of
custom cores are executing 6502 code right now, hidden all over the
world in toys, pacemakers, and many other devices. It was the first
assembly language that I learned, and is still my favorite, for a
variety of reasons. <<<< 6502 forever!!! >>>

Mike B.

lehs

Apr 13, 2017, 2:46:18 AM
Artificial Intelligence is just like this - smart programming to extend the limits of what is possible for a computer to do. Some people believe that computers or their programs are sometime in the future going to become alive, think and experience as humans do. But since a computer program can never experience being a human being, it will never be able to experience human reality and make use of human language the way humans (are forced to) do.

Lars-Erik Svahn
https://forthmath.blogspot.se
https://github.com/Lehs/ANS-Forth-libraries

rickman

Apr 13, 2017, 3:00:57 AM
On 4/13/2017 2:46 AM, lehs wrote:
>
> Artificial Intelligence is just like this - smart programming to
> extend the limits of what is possible for a computer to do. Some
> people believe that computers or their programs sometime in the
> future are going to become alive, think and experience as humans. But
> since a computer program never can experience to be a human being it
> will never be able to experience human reality and make use of human
> language the way humans (is forced to) do.

What part of existence is unique to humans? If you mean a machine will
never be perfectly like a human, that may be true in the same way all
animals have different existences. A dog can't experience existence
like a bird can. But that doesn't mean both aren't alive, thinking and
experiencing life.

In the same way, a machine can in theory become self-aware and have an
intelligence that experiences its own existence.

--

Rick C

lehs

Apr 13, 2017, 3:37:17 AM
Not in theory, but in the imagination.

I restrict this to the situation with classical computers and human beings, because otherwise the question is totally mind-blowing. Human beings are very different from each other, but we have in common that we are born under human circumstances and are shaped by those. Our relation to human language is shaped by our situation, and we recognize thousands of words and billions of formulations of situations in trivial reality such as fashion, friendship, threats, duties, mobbing, happiness, plans, drugs... Our relation to those and more important things is reflected in human language. A computer speaking human language is a fake, formulating sentences created by humans about a context experienced by humans.

Chess is one form of human communication. The players communicate by moving pieces on a board according to a few strict rules. Nowadays the best programs can beat the best human players, but they do it by searching possible states far beyond what humans are able to. Yet the number of states is much larger than the number of particles in the universe. Imagine programs trying to get an overview of all human situations by piling on CPU operations. As far as I know, there are no AI programming techniques other than those that come down to these ALU operations.

lehs

Apr 13, 2017, 3:42:21 AM
Cont.
Since computer programs can't experience growing up as humans, there is no way for them to have all this knowledge. And the ALU will never be aware of itself.

Joel Rees

Apr 13, 2017, 5:14:58 AM
On Wednesday, April 12, 2017 at 7:26:32 PM UTC+9, Albert van der Horst wrote:
> In article <86h91uu...@molnjunk.nocrew.org>,
> Lars Brinkhoff wrote:
> >Mark Wills wrote:
> >> Also, the 6502, whilst a very simple device and quite fast, has a
> >> crap instruction set and no registers.
> >
> >I haven't done any serious 6502 programming, so I wouldn't know.
> >Certainly, handling anything beyond 8 bits is going to be awkward, since
> >it's very insistent on being an 8-bitter.
> >
> >But at least the instruction set *encoding* is quite regular and
> >pleasant. I wrote my own Forth assembler for it, and it turned out to
> >be very simple.

That has to be considered at least moderately serious programming.

> Indeed.
> There are misconceptions about the 6502.
> 6502 is the purest 8-bits processor yet very powerful. The
> implementation of FYSFORTH on the Apple II made it in an actually
> usable piece of equipment in the Fysisch Laboratorium of the Utrecht
> University.
> For the chip real estate it occupies it is one of the best.
>
> If you know the chip you realize that it has 128 16 bit registers

Potentially 128 index register substitutes, which, because you don't have to load them into X (displacing whatever was in X) to use them as index registers, really are that useful.

As compared to the 6800 which also has potentially 128 index register substitutes in page zero, but requires precisely displacing whatever is in X to use them as such.

As accumulators or targets of 16 bit math, the zero page on the 6502 had pretty much the same set of disadvantages as the zero page on the 6800.

> on the zero page. If you don't appreciate that, you're in good company,
> Andrew Tannenbaum didn't realize that fact and dismissed the 6502
> as a potential target for a c-compiler.

Index registers are pretty important for high-level languages, which is why the 6502 was an early target for Pascal virtual machines.

Similar, but slightly slower, VMs could have been built for the 6800.

C and VMs seem to be anathema, but there really isn't any reason the VM has to have an inner interpreter. Really, even on a large-register CPU, the C run time essentially defines a VM that is imposed on the CPU and its register set.

It's a very flexible VM, but it is still a VM. That's perhaps the biggest reason that it's hard to get a classic Forth to co-exist with a C run-time. You have to understand both VMs and get them properly mapped to each other.

(Now, the 6809, which had both better support for the direct/zero page and memory indirection, did not have memory indirection on the direct/zero page. And that was kind of unfortunate, except I wonder how many more effective gates it would have required. I think it could have been shoe-horned into the index post-byte.

As it was, the 6809 only had, IIRC, about a thousand more gates than the 6502, and it could indirect through absolute variables or variables on the stack, so you didn't really feel the lack of indirection on the direct/zero page. Well, it would have made user page variables more convenient, but, again, with four actual indexable registers you still didn't feel it so much as on the 6800.)

>
> Groetjes Albert
> --
> Albert van der Horst, UTRECHT,THE NETHERLANDS
> Economic growth -- being exponential -- ultimately falters.

Economic growth only has to be exponential when it is only the exponential growth that is accepted as meaningful. And that's why you picked that quote for a siggy, right?

> albert@spe&ar&c.xs4all.nl &=n http://home.hccnet.nl/a.w.m.van.der.horst

Anton Ertl

Apr 13, 2017, 7:09:48 AM
Joel Rees <joel...@gmail.com> writes:
>As it was, the 6809 only had, IIRC, about a thousand more gates than the 6502

<https://en.wikipedia.org/wiki/Transistor_count>:

6502: 3510 transistors
6809: 9000 transistors

jim.bra...@ieee.org

Apr 13, 2017, 9:33:28 AM
On Thursday, April 13, 2017 at 6:09:48 AM UTC-5, Anton Ertl wrote:
> <https://en.wikipedia.org/wiki/Transistor_count>:
6809 and Z80 have very similar transistor counts.
IMHO Z80 makes very good use of memory bandwidth.
IMHO 6809 was/is the best of breed: most capable instruction set.
(and last to be introduced)

Which was fastest (at the same clock rate)?

Mark Wills

Apr 13, 2017, 10:24:05 AM
On Thursday, 13 April 2017 14:33:28 UTC+1, jim.bra...@ieee.org wrote:

> Which was fastest(at the same clock rate)?

IIRC the 6502 had the shortest instruction cycle times, but it typically
got less work done per instruction than (for example) a Z80.

I got the overall impression that the 6502 was faster though.

It's been a long time and I was just a kid, ~13/14 years old. I was yet to
discover girls or Dire Straits. :-)

Andrew Haley

Apr 13, 2017, 10:51:23 AM
Mark Wills <markwi...@gmail.com> wrote:
> On Thursday, 13 April 2017 14:33:28 UTC+1, jim.bra...@ieee.org wrote:
>
>> Which was fastest(at the same clock rate)?
>
> IIRC the 6502 had the shortest instruction cycle times, but it typically
> got less work done per instruction than a (for example) Z80.
>
> I got the overall impression that the 6502 was faster though.

6502 instructions took between 2-12 microseconds @ 1MHz. The Z80 was a
bit more complicated, but a Z80 machine cycle was 3 or 4 clocks, and
instructions took from 1-6 of those cycles, so for a 2.5 MHz Z80
there was a speed advantage over the 1MHz 6502, especially given the
more powerful instructions, but it depended on what you were doing.
Later on, the Z80A was 4 MHz and 6502s ran at 2 MHz.

> It's been a long time and I was just kid. ~13/14 years old. I was yet to
> discover girls or Dire Straits. :-)

LOL! :-)

Andrew.

rickman

Apr 13, 2017, 12:57:15 PM
On 4/13/2017 3:42 AM, lehs wrote:
> Cont.
> Since computer programs can't experience growing up as humans there is no way for it to have all this knowledge. And the ALU will never be aware of it self.

Nothing you have said precludes a machine becoming self aware. In
particular the above statement is false. Computers can and *do* learn.
There have been a number of computer programs that learn from experience.

http://bfy.tw/BDB2

--

Rick C

rickman

Apr 13, 2017, 1:00:28 PM
On 4/13/2017 5:14 AM, Joel Rees wrote:
> On Wednesday, April 12, 2017 at 7:26:32 PM UTC+9, Albert van der
> Horst wrote:
>> In article <86h91uu...@molnjunk.nocrew.org>, Lars Brinkhoff
>> wrote:
>>>
>>> But at least the instruction set *encoding* is quite regular and
>>> pleasant. I wrote my own Forth assembler for it, and it turned
>>> out to be very simple.
>
> That has to be considered at least moderately serious programming.

What, writing an assembler? We wrote an assembler for an IBM360 as a
class assignment once.

--

Rick C

Andrew Haley

Apr 13, 2017, 1:02:01 PM
:-)

I'm surprised that anybody can repeat the "computers only do what
they're told and can't invent anything" line, given the recent work at
DeepMind with AlphaGo. This really wasn't a brute-force tree search
of the kind used by chess programs.

Andrew.

Anton Ertl

Apr 13, 2017, 1:22:01 PM
Andrew Haley <andr...@littlepinkcloud.invalid> writes:
>6502 instructions took between 2-12 microseconds @ 1MHz.

They take 2-7 cycles, i.e., 2-7 microseconds @ 1MHz.

lehs

Apr 13, 2017, 1:42:14 PM
That's a straw-man. I agree that computers can learn, of course. But they can never experience to be humans or express human thoughts as a result of a mind.

I don't know what AI is, but AI programming is a technique.

Andrew Haley

Apr 13, 2017, 1:56:00 PM
lehs <skydda...@gmail.com> wrote:
>
> That's a straw-man.

It's your straw man, if so. You made the foolish "chess computer"
argument.

> I agree with that computers could learn, of course. But they can
> never experience to be humans or express human thoughts as a result
> of a mind.

Of course not, because humans are by definition us, the species that
evolved, made out of meat. But that tells us nothing about whether
machines not made out of meat can have thoughts or minds.

Andrew.

lehs

Apr 13, 2017, 2:13:42 PM
Mind has nothing to do with AI programming, unlike chess programming.

rickman

Apr 13, 2017, 2:23:40 PM
You are using very odd phrasing. What exactly do you mean by "they can
never experience to be humans"? As I have said, they can't "be humans"
in the same way an ant can't be a dog. But there is no reason to think
a machine can't be self aware and capable of similar thought to the
thoughts of a human.

The statement that a machine can't "express human thoughts as a result
of a mind" is just plain unsupportable. You keep making claims, but
offer nothing to support them.

--

Rick C

rickman

Apr 13, 2017, 2:24:25 PM
If a mind is different from a program, what is it?

--

Rick C

lehs

Apr 13, 2017, 2:57:18 PM
I was discussing programming and wrote that AI programming is a technique. It's not alchemy or something, just smart technique built upon scientific results - often with astounding results. There is no computer science dealing with the mind.

What is mind? I ask you: what is space? Time? What is anything? You can't answer such questions with descriptions of models. Models are models.

Human thinking and language is about human living. You can't expect a robot to experience waking up with its head in an ashtray after being intoxicated with alcohol the night before. Of course robots will communicate, and already do, and of course they will be able to communicate with humans in human language. But that communication would be a kind of interface communication through some kind of protocol.

rickman

Apr 13, 2017, 3:15:42 PM
Ok, if you say so.

So tell me, am I a human or a robot? Can you tell? If you can't tell,
what is the difference?

--

Rick C

hughag...@gmail.com

Apr 13, 2017, 4:08:58 PM
On Thursday, April 13, 2017 at 10:22:01 AM UTC-7, Anton Ertl wrote:
> Andrew Haley <andr...@littlepinkcloud.invalid> writes:
> >6502 instructions took between 2-12 microseconds @ 1MHz.
>
> They take 2-7 cycles, i.e., 2-7 microseconds @ 1MHz.

I noticed that too --- Andrew Haley was making stuff up again --- don't you Forth-200x committee members normally stick together though? Why did you correct him?

Anyway, getting back to ISYS Forth and my ISYS-compatible cross-compiler, it used a split data-stack. We had two stacks, one for the high-byte and one for the low-byte. The high-byte stack was addressed with $40,X and the low-byte stack was addressed with $0,X --- so, only a single DEX was needed to make room on the data-stack for a new 16-bit cell, and only a single INX was needed to DROP a 16-bit cell. Very fast! Also, made the machine-code smaller.
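
A hedged sketch of that layout (LO, HI and the labels are made up, not ISYS source): with low bytes at 0,X and high bytes at $40,X, pushing or dropping a cell moves X by only one:

        ; push the 16-bit value held in zero-page LO/HI onto the split stack
        PUSH:   DEX             ; a single DEX makes room for a 16-bit cell
                LDA LO
                STA $00,X       ; low byte goes to the low-byte stack
                LDA HI
                STA $40,X       ; high byte goes to the parallel high-byte stack
                RTS

        ; DROP is then a single INX, instead of INX INX for a layout
        ; that keeps the two bytes of a cell adjacent
        DROP:   INX
                RTS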

By comparison, the Rockwell R65F11 Forth system had the high-byte and low-byte juxtaposed, so two DEX were needed to make room on the data-stack for a new 16-bit cell, and two INX were needed to DROP a 16-bit cell. Very slow!

One company that I worked for had tried the Rockwell Forth system in the past, but found it to be ponderously slow. After that experience, they gave up on Forth and switched to C. The Rockwell Forth used ITC which is extremely inefficient on the 6502. It is also unnecessary. A JSR instruction only takes 3 bytes, whereas an ITC link takes two, so ITC is only slightly smaller than STC. In ISYS Forth, as mentioned, DROP compiled into one byte. Also, ISYS Forth had jump-termination, meaning that a JSR followed by an RTS was converted into a JMP --- so 3 bytes in ISYS Forth compared to 4 bytes in an ITC system. Some code optimization done in ISYS Forth increased the size of the machine-code, but most of it decreased the size of the machine-code. In my cross-compiler, it was possible to turn on or off each kind of optimization as needed, so it was possible to optimize for speed or to optimize for size. Even when optimizing for size however, my code was about an order of magnitude faster than R65F11 code.

On a related note, I see that Stephen Pelc employs Randy Dumse of Rockwell R65F11 infamy:
http://www.mpeforth.com/sample-page/46-2/
I think this really shows the level of Forth skill that is accepted at MPE!

HAA

Apr 13, 2017, 11:02:17 PM
hughag...@gmail.com wrote:
> On Wednesday, April 12, 2017 at 5:43:35 PM UTC-7, HAA wrote:
> > ...
> > Keeping in mind it was intended to be a low-cost, bare-bones,
> > alternative to the 6800 aimed at microcontroller work, the limited
> > register set/size and missing instructions are understandable.
> > The failure to correct existing broken instructions wasn't.
> > ...
> > You can blame Wozniak and the hobbyists for the appearance of
> > the 6502 in systems for which it wasn't intended. Tramiel and co.
> > weren't going to complain. They were making buckets of money
> > riding the crest of a craze.
>
> The 65c02 was a disappointment. It could have been so much better! If the JMP indirect
> instruction had used the Y register, the 65c02 would have been a very good processor
> for VMs. You could have 128 primitives each with a one-byte code. Your generated code
> would be quite compact. Most of the VM instructions would be one byte. Some of them
> would be two or three bytes because they have an operand. The NEXT routine would be
> very fast (note for Mark Wills: this NEXT would be about an order of magnitude faster
> than anything that could be done on the Z80).
> ...

Making the 6502 better would have entailed a total redesign. The 6502 was built
around small. Lack of 16-bit regs, limited zero page and fixed stack all became
problematic for larger apps. Intel took the opposite approach when upgrading
the 8008 to 8080. The 8080 was well suited for writing an OS and apps such
as language compilers.



rickman

Apr 14, 2017, 1:21:30 AM
On 4/13/2017 11:01 PM, HAA wrote:
>
> Making the 6502 better would have entailed a total redesign. The 6502 was built
> around small. Lack of 16-bit regs, limited zero page and fixed stack all became
> problematic for larger apps. Intel took the opposite approach when upgrading
> the 8008 to 8080. The 8080 was well suited for writing an OS and apps such
> as language compilers.

I still have an 8008 computer somewhere. It is amazing just how much
impact that design had on all the following Intel CPUs. The most
obvious was that the first three phases of the memory cycle were T1, T2 and
T3. These corresponded to the output of the two halves of the address
and the data movement in the 8008, because it was packaged in an 18-pin
DIP, so the 8-bit data bus was multiplexed for a 14-bit address and 8
bits of data. The next several generations of Intel CPUs had similar
arrangements even though they had full-size address buses, although some
multiplexed the data with the address, like the 8085.

--

Rick C

hughag...@gmail.com

Apr 14, 2017, 2:23:10 AM
Well, I'm not suggesting a total redesign.

1.) Make the JMP indirect use Y rather than X.

2.) Make code and data reside in separate 64KB memories (with a 17-bit address bus).

3.) Get rid of the indirect,X addressing mode because it is almost never used.

Note that #3 would have made the 6502 smaller --- this could free up some chip space to allow some new instructions --- a multiply would be hugely helpful!

BTW: Abrash's famous book has a chapter titled: "Strange Fruit of the 8080" --- describes how the 8080 affected the future x86 chips in weird ways.

Also, I don't think the 8080 was:
"well suited for writing an OS and apps such as language compilers."
CP/M wasn't all that great.

Joel Rees

Apr 14, 2017, 11:06:05 AM
On Thursday, April 13, 2017 at 8:09:48 PM UTC+9, Anton Ertl wrote:
> Joel Rees writes:
> >As it was, the 6809 only had, IIRC, about a thousand more gates than the 6502
>
> <https://en.wikipedia.org/wiki/Transistor_count>:
>
> 6502: 3510 transistors

My memory was more in the range of 6000+. Might be remembering a later part, however.

They do give a reference for that. The count is repeated in a number of places, but seems traceable to one book.

> 6809: 9000 transistors

My memory is in the range of 7000+.

I note that they don't show a reference, but that the number 9000 (too round, really) does show up around the web. So do other numbers. In particular, another wikipedia page

https://en.wikipedia.org/wiki/Microprocessor_chronology

indicates 11,000 transistors more than the 8086, which I am not going to believe.

(Some point out that different sources will count the transistors differently, and one count may be gates, where another may be actual junctions, ...)

Do you have a reference other than, say,

http://marc.info/?t=127528078100001&r=1&w=2

which has both counts and mentions the question of how transistors are counted?