
6 GHz stack machine


Stephen Pelc

Jul 2, 2021, 7:49:54 AM
An MPE client is currently designing a new dual-stack machine. The
predicted performance is 6 GHz (instructions per second). 40 CPUs
occupy less than 1 square mm.

It's for real, and they have a paying client for it.

Depending on life, there may be more information at EuroForth 21 in
Rome in September. I have my EU Covid passport already.

Stephen
--
Stephen Pelc, ste...@vfxforth.com
MicroProcessor Engineering Ltd - More Real, Less Time
133 Hill Lane, Southampton SO15 5AF, England
tel: +44 (0)23 8063 1441, +44 (0)78 0390 3612, +34 649 662 974
http://www.mpeforth.com - free VFX Forth downloads

Brian Fox

Jul 2, 2021, 9:34:32 AM
On 2021-07-02 7:49 AM, Stephen Pelc wrote:
> An MPE client is currently designing a new dual-stack machine. The
> predicted performance is 6 GHz (instructions per second). 40 CPUs
> occupy less than 1 square mm.
>
> It's for real, and they have a paying client for it.
>
> Depending on life, there may be more information at EuroForth 21 in
> Rome in September. I have my EU Covid passport already.
>
> Stephen
>

That's really exciting, Stephen.

I have always found it tragic that Chuck's CPU ideas didn't find a home
in the bigger world.

If it's allowed, can you tell us anything about the architecture?
- Which family does it lean towards? shBoom perhaps?
- Maybe it's a clean-room design.
- Is there a target application area?
- Are they going to make it C-friendly with a few extra registers?

And with my business hat on: once it's developed, is there a solid
go-to-market strategy? Without one it is an academic curiosity.
(again)



Stephen Pelc

Jul 2, 2021, 10:40:38 AM
On 2 Jul 2021 at 15:34:28 CEST, "Brian Fox" <bria...@rogers.com> wrote:

> On 2021-07-02 7:49 AM, Stephen Pelc wrote:
>> An MPE client is currently designing a new dual-stack machine. The
>> predicted performance is 6 GHz (instructions per second). 40 CPUs
>> occupy less than 1 square mm.
>>
>> It's for real, and they have a paying client for it.
>>
>> Depending on life, there may be more information at EuroForth 21 in
>> Rome in September. I have my EU Covid passport already.
>>
>> Stephen
>>
>
> That's really exciting Stephen.
>
> I have always found it tragic that Chuck's CPU ideas didn't find a home
> in the bigger world.

One of the interesting parts of this design is that it is done in Verilog
using industry-standard tool chains and is prototyped in FPGAs.

The designers are getting paid for the chips. How much will be open is
yet to be seen/decided. The initial application is pretty specialised.

> If it's allowed can you tell us anything about the architecture.

It's all their own work. I can't tell you more yet.

> And with my business hat on, once it's developed is there a solid
> go to market strategy? Without that it is an academic curiosity.
> (again)

I'm working for the techies, not the capital people. It's for a practical
application that attracts capital people.

Stephen

Clive Arthur

Jul 2, 2021, 11:09:39 AM
I hope they've read the PSC1000 manual. Fetch and store to the return
stack and the ability to drop multiple items from it are dead handy for
locals.

e.g.
3r@ - copy 3rd R to TOS
5r! - TOS to 5th R
6 rdrop - drop top 6 items from R
and of course r> and >r
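As a sketch of how such words behave (a toy Python model of the two stacks, not the PSC1000's actual implementation; the function names and encoding are mine):

```python
# Toy model: data stack and return stack as Python lists, tops at the end.
# Clive's nR@ / nR! / n rdrop become r_fetch(n), r_store(n), rdrop(n);
# index 1 is the top of the return stack.
dstack = []
rstack = []

def to_r():               # >r : move TOS to the return stack
    rstack.append(dstack.pop())

def r_from():             # r> : move top of return stack to TOS
    dstack.append(rstack.pop())

def r_fetch(n):           # 3r@ : copy the n-th return-stack item to TOS
    dstack.append(rstack[-n])

def r_store(n):           # 5r! : store TOS into the n-th return-stack item
    rstack[-n] = dstack.pop()

def rdrop(n):             # 6 rdrop : drop the top n return-stack items
    del rstack[-n:]

# Six "locals" go to the return stack; read the 3rd, overwrite the 5th,
# then discard all six in one operation.
for v in (10, 20, 30, 40, 50, 60):
    dstack.append(v)
    to_r()
r_fetch(3)                # TOS is now 40 (3rd item from the top)
dstack.append(99)
r_store(5)                # 5th item (was 20) becomes 99
rdrop(6)                  # all six locals gone at once
```

The point of `n rdrop` shows in the last line: freeing all the locals is one operation rather than six `r>`/`drop` pairs.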

--
Cheers
Clive

Marcel Hendrix

Jul 2, 2021, 11:45:08 AM
On Friday, July 2, 2021 at 1:49:54 PM UTC+2, Stephen Pelc wrote:
> An MPE client is currently designing a new dual-stack machine. The
> predicted performance is 6 GHz (instructions per second). 40 CPUs
> occupy less than 1 square mm.
>
> It's for real, and they have a paying client for it.
>
> Depending on life, there may be more information at EuroForth 21 in
> Rome in September. I have my EU Covid passport already.

Integer / floating-point? 'C' or Forth-like?
How much memory can it address (and how)?
OS?

-marcel

none albert

Jul 3, 2021, 7:00:03 AM
In article <sbna7h$8r8$1...@dont-email.me>,
That feature was present in the FIETS project in the 80s in FIG chapter
Holland. Chuck Moore with the NOVIX beat us to it,
so we never went far with it. We made an emulator, though.
Glad to hear that this feature is useful. With us it never
got battle-tested.
http://www.keesmoerman.nl/e_forth.html
Choose "Forth Processors".
Note the 1984 award on that page!
>
>--
>Cheers
>Clive
>

Groetjes Albert
--
"in our communism country Viet Nam, people are forced to be
alive and in the western country like US, people are free to
die from Covid 19 lol" duc ha
albert@spe&ar&c.xs4all.nl &=n http://home.hccnet.nl/a.w.m.van.der.horst

Paul Rubin

Jul 3, 2021, 12:21:04 PM
albert@cherry.(none) (albert) writes:
>>3r@ - copy 3rd R to TOS
>>5r! - TOS to 5th R
>>6 rdrop - drop top 6 items from R
>>and of course r> and >r
>
> That feature was present in the FIETS project in the 80's in FIG
> chapter Holland. Chuck Moore with the NOVIX beat us to it,

The Novix was able to reach into the interior of the R stack like that?
Of course then R is more like a register file.

> http://www.keesmoerman.nl/e_forth.html

This page looks interesting even though I can't read it. I'll try
Google Translate on it when I get a chance.

Fourthy Forth

Jul 4, 2021, 1:17:16 AM
Stephen, thank you. Good news. A pity GA did not do it after all these years. GA is so difficult!

Ilya Tarasov

Jul 4, 2021, 11:00:52 AM
Friday, July 2, 2021 at 14:49:54 UTC+3, Stephen Pelc:
> An MPE client is currently designing a new dual-stack machine. The
> predicted performance is 6 GHz (instructions per second). 40 CPUs
> occupy less than 1 square mm.

So clickbaiting. Does 6 GHz mean 40 × 150 MHz? That is an IPS figure (instructions
per second), not a clock speed. If an instruction needs more than 1 cycle on average,
CPI (cycles per instruction) is needed as well.

An Intel CPU with 6 cores, 2 threads per core, and a 4 GHz clock speed...
6*2*4 = 48 GHz, hmm?
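Ilya's arithmetic, spelled out (a back-of-the-envelope sketch; reading 6 GHz as an aggregate figure with a 150 MHz per-core clock and CPI = 1 is his hypothesis, not a published spec):

```python
# Aggregate instruction rate = cores * clock / CPI (CPI = cycles per instruction).
def aggregate_ips(cores, clock_hz, cpi=1.0):
    return cores * clock_hz / cpi

# 40 cores at a hypothetical 150 MHz, one cycle per instruction:
print(aggregate_ips(40, 150e6))      # 6 billion instructions/second ("6 GHz")

# The Intel comparison, counting 6 cores * 2 threads at 4 GHz the same way:
print(aggregate_ips(6 * 2, 4e9))     # 48 billion - "48 GHz, hmm?"
```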

> It's for real, and they have a paying client for it.

I would wonder at a silicon chip designed without a real goal. Of course, toys or
the results of an author's ambitions may be made just to show they can exist.

Jurgen Pitaske

Jul 4, 2021, 1:38:40 PM
Clickbaiting on CLF - who is there to clickbait?
It might just attract the Peter Forths and Hugh Aguilars.
And they have been stumm until now.

I am quite surprised about your negativity.

If a customer wants to have a chip as you describe it - fine, they pay for it.
They must have a reason to do it. An ASIC will not be cheap.

Non-disclosure Agreements are there for a REASON.
If people here like it or not is actually irrelevant.
This is normal business practice.

I do appreciate the data that Stephen is allowed to disclose.
Something is happening with a chip design and Forth - should be great news for all of us here.
We should all be positive about it.
Nothing to do with clickbaiting.

Assuming they know what they are doing,
they might have heard of Intel or others,
and there are multi-processor ARM chips out there now,
and they could possibly have seen
that these chips do not achieve what is required in their application.
https://arstechnica.com/information-technology/2020/03/amperes-altra-is-80-arm-cores-of-cloud-native-power-efficient-cpu/


A RISC-V can easily do a 5 GHz clock frequency
https://www.techpowerup.com/275463/risc-v-processor-achieves-5-ghz-frequency-at-just-1-watt-of-power
and an FPGA RISC-V running Mecrisp will be faster than the 150 MHz you want.
And you could run as many cores as fit into the relevant FPGA.
Though unfortunately without a multiprocessor Forth.
As an FPGA, NOW. Free of charge for the cores...

But the designers there might have had some spare gates
to add the pre-dividers to achieve the reduced clock speed you want.

So, I assume they must have a reason and an application that pays for the project cost.

I am actually more interested in hearing about two things:

What is the target application? Probably AI or crypto mining.
How are these CPUs coupled / how do they communicate?

Will it be available for the general market, or is this just an internal project and custom design?

I hope it is for the general market, otherwise sales for MPE would unfortunately be limited to project work.
And we want to see commercial Forth software grow - at least I want to see it.
The best proof of what Forth can achieve.

Or will there be a multiprocessor VFX - a GreenArrays++++
for a chip we can all buy a couple of design kits for...
This is hopefully not covered by NDA and Stephen can reveal it.

Fingers crossed for MPE ( and the chip ).

Paul Rubin

Jul 4, 2021, 2:19:54 PM
Ilya Tarasov <ilya74....@gmail.com> writes:

> Friday, July 2, 2021 at 14:49:54 UTC+3, Stephen Pelc:
>> An MPE client is currently designing a new dual-stack machine. The
>> predicted performance is 6 GHz (instructions per second). 40 CPUs
>> occupy less than 1 square mm.
>
> So clickbaiting. 6 GHz meaning 40*150MHz?

I took it to mean 40 × 6 GHz = 240 GIPS total. We also don't know the die
size, or what else is on the die. It could be some kind of array
processor for machine vision, or any number of other things.

Ilya Tarasov

Jul 4, 2021, 2:30:06 PM
Sunday, July 4, 2021 at 21:19:54 UTC+3, Paul Rubin:
Well, I really wonder at people who can take 6 GHz to be a clock speed.
We must not be disoriented by news stories and journalists' reviews. In reality, a clock
speed above 1 GHz is not easy to achieve, regardless of technology node.
The reasons are subtle physical effects, non-ideal parameters, variations, non-ideal
routing, etc. etc. If someone draws a schematic and gets about 160 ps
(1/6 GHz) as the total gate delay - OK, I can tell he is entering the chip-making
world for the first time.

Paul Rubin

Jul 4, 2021, 3:47:11 PM
Ilya Tarasov <ilya74....@gmail.com> writes:
> Well, I'm really wonder to see people who can estimate 6 GHz as a
> clock speed.

Chip designers, it sounds like.

> In reality, clock speed above 1 GHz is not easy to achieve, regardless
> of technology node.

Well, chip designers manage to do it.

> If someone draws a schematic and receive about 160 ps (1/6 Ghz) as a
> summary gate delay

Do EDA tools not mostly automate that? I.e. the designer writes HDL
rather than drawing a schematic. The tools handle all the layout etc.

Ilya Tarasov

Jul 4, 2021, 3:49:06 PM
> Clickbaiting on CLF - who is there to clickbait?
> It might just attract the Peter Forths and Hugh Aguilars.
> And they have been stumm until now.

People are free in general and I have no goal of attracting anyone.
You are talking about a kind of cult, with a certain hierarchy and
'allowed' phrases about key points.

> I am quite surprised about your negativity.

Not every mention of Forth should generate a positive reaction.

> If a customer wants to have a chip as you describe it - fine, they pay for it.
> They must have a reason to do it. An ASIC will not be cheap.

I see no list of chip features. There is only the message 'Forth has had another
success, but we will not tell you the details'.

> I do appreciate the data that Stephen is allowed to disclose.
> Something is happening with a chip design and Forth - should be great news for all of us here.
> We should all be positive about it.

I was quite positive (in general) 10+ years ago when the TechnoForth TF16 CPU was implemented
in 0.35 um silicon. After this, TechnoForth (who claimed to be the only true Forth team in Russia)
went bankrupt. Slowly, step by step, but inevitably. The silicon implementation was a kind of last
chance for them to keep up appearances with no real base. Indeed, their code was poor and the overall
architecture inconsistent with the application domain and technology. Their position, however, was
'it is Forth, and we are professional Forthers, so you have no chance to understand our wisdom'.
Until bankruptcy.

> Nothing to do with clockbaiting.

6 GHz is clickbaiting.
Chuck Norris can easily do many things. Can you do the same? Don't tell me about chip topology unless
you have routed chips personally. This is a true art with a huge list of potential problems.

> and an FPGA RISC-V running mecrisp will be faster than the 150 MHZ you want.
> And you could run as many cores as fit into the relevant FPGA.
> Then unfortunately without a multiprocessor Forth.
> As FPGA NOW. Free of Charge for the cores...

A JFF design (real Forth CPUs, though).
http://fforum.winglion.ru/viewtopic.php?f=3&t=3309&p=48880#p48880

It seems you will never turn to real activity...
Keep collecting rumors about possible Forth applications ;)

> Will it be available for the general market or is this just an internal project and custom design.
>
> I hope it is for the general market, otherwise sales for MPE would be limited to project work unfortunately.

Exactly the same as for TF16 or SeaForth. Another round of empty dreams?

> Fingers crossed for MPE ( and the chip ).

Crossed fingers will certainly prevent you from doing something :)))

Jurgen Pitaske

Jul 4, 2021, 3:50:42 PM
Ilya,
we can only take what Stephen is allowed to mention.
Let us be happy that something happens on the Forth front.

How much these clock speeds or instruction execution times vary in either direction is unclear,
and will only be known when the design has been signed off at the silicon manufacturer
- or after testing when the chip exists, as you know -
and it is not really important for now.
Let's hope the world has access to these chips and they create clickbait for Forth everywhere.

Ilya Tarasov

Jul 4, 2021, 3:57:19 PM
> > If someone draws a schematic and receive about 160 ps (1/6 Ghz) as a
> > summary gate delay
> Do EDA tools not mostly automate that? I.e. the designer writes HDL
> rather than drawing a schematic. The tools handle all the layout etc.

OMG, you are completely missing this. EDA tools for silicon are not
masterpiece generators; they look like MS Paint with a basic set of features,
and you need to create something really smart. Automation is very limited
because there is much factorial-based complexity in place & route algorithms.
Even for FPGAs, fully automated optimal design is impossible. Consider that
you now have not a predefined layout with only routing required (the routing
predefined as well, needing only to be connected at special points), but a clean
piece of silicon with many layers and many variants of every logic gate. If the
idea is 'Intel can do it, so we can do it too, especially because we are touched
by the Forth spirit', you will definitely fail.

Jurgen Pitaske

Jul 4, 2021, 4:00:53 PM
99% of the people here with fingers crossed - including YOU - can only make it better,
as none of us
are involved in this design
or the VFX software adaptations.

So, let us just be supportive.
And positive.
The future will tell - or do you have anything more positive to offer?

Ilya Tarasov

Jul 4, 2021, 4:02:11 PM

> Ilya,
> we can only take what Stephen is allowed to mention.
> Let us be happy that something happens on the Forth front.

I will be happy if many Forth projects are active, with many
approaches and a rich base of practical applications. Shrinking
Forth to a limited set of 'blessed' leaders and a fan club will lead
to the failure of false leaders and the disappointment of fans.

Jurgen Pitaske

Jul 4, 2021, 4:13:06 PM
A language is a tool for applications
- the old hammer and nail example comes to mind.

If a language - Forth included -
is not used in real applications
the world will not bother
but just use other tools which work better - or they like more.

People who like Forth, or hobbyists,
will continue to use the 100+ Forth variants they now play with for the next 50 years or longer.

Stephen Pelc

Jul 4, 2021, 4:24:31 PM
On 4 Jul 2021 at 17:00:50 CEST, "Ilya Tarasov" <ilya74....@gmail.com>
wrote:

> Friday, July 2, 2021 at 14:49:54 UTC+3, Stephen Pelc:
>> An MPE client is currently designing a new dual-stack machine. The
>> predicted performance is 6 GHz (instructions per second). 40 CPUs
>> occupy less than 1 square mm.
>
> So clickbaiting. 6 GHz meaning 40*150MHz? This is IPS parameter (Instruction
> per second), not clock speed. If instruction needs more than 1 cycle in
> average,
> CPI (cycle per instruction) is needed as well.

What I can say is limited by what I know and what I am allowed to say. MPE is
doing some tool-making for them. They have completed a fair number of chips.

No clickbaiting. Let's assume that 6 GHz is a target figure for some unknown
process - i.e. I don't know what it is. From what I do know, I would expect a figure
in excess of 2 GHz. That's per CPU. And I know very little about chip and FPGA design.

150 MHz per CPU would happen on an FPGA.

>> It's for real, and they have a paying client for it.
>
> I wonder to see a silicon chip designed without a real goal. Of course, toys or
> results of author's ambitions may be done just to show it may exist.

Yes, there's a goal but it's not published yet. For Marcel's benefit,
it's a 32 bit integer CPU with floating point and custom instructions. The
custom instructions help with the goal.

Stephen

Ilya Tarasov

Jul 4, 2021, 5:47:32 PM
> So, let us just be supportive.
> And positive.
> The future will tell - or do you have anything more positive to offer?

The following things are just an illusion of support:
- likes
- subscriptions
- automatic acceptance of every news item about Forth

The following things are closer to support:
- experimental/modelling verifications
- technical questions
- discussions
- clarifying counterexamples to avoid negative effects
- comparisons and methodology summaries
- comparisons and methodology summarizing

Hugh Aguilar

Jul 4, 2021, 8:40:42 PM
On Sunday, July 4, 2021 at 1:00:53 PM UTC-7, jpit...@gmail.com wrote:
> On Sunday, 4 July 2021 at 20:49:06 UTC+1, Ilya Tarasov wrote:
> > 6 GHz is clickbaiting.
> > > A RISC-V can easily do 5GHz clock frequency
> > > https://www.techpowerup.com/275463/risc-v-processor-achieves-5-ghz-frequency-at-just-1-watt-of-power
> > Chuck Norris can easily do many things. Do you can the same? Don't tell me about chip topology unless
> > you have chips routed by you personally. This is a true art with a huge list of potential problems.

Good analogy, Ilya!
I have found, when trying to discuss MFX and the out-of-order scheduling of the instructions,
that people will say this is easy, that they read a magazine article about it, etc.
Maybe easily done by Chuck Norris! lol In practice, not done by anybody other than me.

> > > Fingers crossed for MPE ( and the chip ).
> > Crossed fingers will certainly prevent you from doing something :)))

Another good analogy, Ilya!
It is certainly difficult to get any work done with the fingers crossed. lol

> 99% of the people here with fingers crossed - including YOU - can only make it better,
> as none of us
> are involved in this design
> or the VFX software adaptations.
>
> So, let us just be supportive.
> And positive.

Juergen Pintaske disgusts me.
He wants the Forth community to cross their fingers for Stephen Pelc. lol
Shall we also bring Stephen coffee, shine his shoes, and sharpen his pencils?
How about a free blow-job? The Forth community is queuing up for the privilege!

> The future will tell - or do you have anything more positive to offer?

I wrote MFX back in 1994. That seemed to me like something positive to offer.
MFX was certainly a lot of work --- I had to use my brain to figure out how to do it!
Tom Hart now refuses to admit that I wrote MFX --- must be mad because
I never brought him coffee, shined his shoes or sharpened his pencils.
If he was hoping for a blow-job he should have hired John Passaniti instead of me.

Paul Rubin

Jul 5, 2021, 1:12:18 AM
Ilya Tarasov <ilya74....@gmail.com> writes:
> OMG, you are completely missing this. EDA tools for silicon are not
> masterpiece generators, it looks like MS Paint with basic set of features,
> and you need to create something really smart.

Ok, that's interesting to hear. I was under the impression that you
write some HDL, and the EDA tools and the fab house take care of most of the rest.

> Automatization is very limited because there are many factorial-based
> complexity in place&route algorithms.

I think this specific issue is not too bad and some tools use SAT
solvers for routing. The worst case instances are very hard to solve,
but they don't come up that often in practice. I've heard solving SAT
compared with freezing water: if it's below temperature X, it's solid
ice and that's easy to understand. If it's above X, it's liquid water
and that's also easy to understand. It's only difficult if the
temperature is almost exactly X, so you get a complicated phase change
phenomenon that is very hard to analyze. Similarly, the "hard" SAT
instances seem to all have a certain critical density of clauses.

Here is a good tutorial on SAT and SMT solvers:

https://yurichev.com/writings/SAT_SMT_by_example.pdf
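The clause-density "phase change" described above can be seen with a brute-force checker on tiny random 3-SAT instances (an illustration only; real SAT solvers are vastly more sophisticated, and the variable count is kept small so exhaustive search stays fast):

```python
import itertools
import random

def satisfiable(clauses, n_vars):
    """Brute-force CNF satisfiability; literals are nonzero ints, negative = negated."""
    for bits in itertools.product((False, True), repeat=n_vars):
        if all(any(bits[abs(lit) - 1] == (lit > 0) for lit in clause)
               for clause in clauses):
            return True
    return False

def random_3sat(n_vars, n_clauses, rng):
    """Random 3-SAT: each clause picks 3 distinct variables, each randomly negated."""
    return [tuple(rng.choice((v, -v))
                  for v in rng.sample(range(1, n_vars + 1), 3))
            for _ in range(n_clauses)]

# Around ~4.3 clauses per variable, random 3-SAT flips from almost always
# satisfiable to almost always not - the "hard" instances cluster there.
rng = random.Random(0)
n = 8
for ratio in (2.0, 4.3, 7.0):
    sat = sum(satisfiable(random_3sat(n, int(ratio * n), rng), n)
              for _ in range(20))
    print(f"{ratio} clauses/var: {sat}/20 satisfiable")
```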

> If an idea is 'Intel can do it, so we can do it too, especially
> because we are touched by the Forth spirit', you will definitely fail.

I think the OKAD approach mirrors other stuff written in the 1980s
after Mead & Conway's book "Introduction to VLSI Systems" shook things up
a lot. You'd lay out rectangles on a screen and, iirc, if a red wire
crossed a green wire, the intersection was a transistor. People did
design chips using those methods, including the GA144, but it was a lot
of work and probably becomes unmanageable for much more complex chips.

I took one of those classes so I have a little bit of familiarity with
that old stuff, but today's stuff is much fancier and I don't know much
about it, beyond having looked at some HDL code here and there.

Jurgen Pitaske

Jul 5, 2021, 3:03:41 AM
It was clear you would raise your ugly head as I had said in the first lines.

But that you would post more disgusting stuff than usual was a surprise -
and as usual not related to the theme of the post.

Rick C

Jul 5, 2021, 9:51:43 AM
On Monday, July 5, 2021 at 1:12:18 AM UTC-4, Paul Rubin wrote:
> Ilya Tarasov <ilya74....@gmail.com> writes:
> > OMG, you are completely missing this. EDA tools for silicon are not
> > masterpiece generators, it looks like MS Paint with basic set of features,
> > and you need to create something really smart.
> Ok, that's interesting to hear, I was under the impression that you
> write some HDL and the EDA and the fab house take care of (most of the rest).

You can do that, but with devices that are not full custom. There are degrees of customization in ASICs.

Well, there used to be gate arrays, which are like metal-layer-programmed FPGAs. But I seem to recall they have largely been squeezed out by FPGAs getting bigger and faster. They should still be around, since the full-custom parts continue to get more expensive as the feature size gets smaller. Gate arrays are the traditional way of lowering the cost of FPGAs once you have been in production and are confident in the design.

I know Xilinx started a program where they use the same die as their production parts, but only test to your design requirements, lowering the test time and defect rate, and so the cost.


> > If an idea is 'Intel can do it, so we can do it too, especially
> > because we are touched by the Forth spirit', you will definitely fail.
> I think the OKAD approach mirrors other stuff written in the 1980's
> after Mead & Conway's book "Introduction to VLSI Design" shook things up
> a lot. You'd lay out rectangles on a screen and iirc, if a red wire
> crossed a green wire, the intersection was a transitor. People did
> design chips using those methods, including the GA144, but it was a lot
> of work and probably becomes unmanageable for much more complex chips.
>
> I took one of those classes so I have a little bit of familiarity with
> that old stuff, but today's stuff is much fancier and I don't know much
> about it, beyond having looked at some HDL code here and there.

I seem to recall some of the Forth community who were involved in the GA144 would denigrate SPICE because they tried to model the transistors used in the part and got a poor result. But SPICE is just a tool, not a model, and the garbage-in, garbage-out rule definitely applies. It was a bit strange they held up this example as some sort of proof that traditional tools don't work, in spite of the reality of working chips with any number of transistors you can imagine.

--

Rick C.

- Get 1,000 miles of free Supercharging
- Tesla referral code - https://ts.la/richard11209

Ilya Tarasov

Jul 5, 2021, 2:23:54 PM
Monday, July 5, 2021 at 08:12:18 UTC+3, Paul Rubin:
> Ilya Tarasov <ilya74....@gmail.com> writes:
> > OMG, you are completely missing this. EDA tools for silicon are not
> > masterpiece generators, it looks like MS Paint with basic set of features,
> > and you need to create something really smart.
> Ok, that's interesting to hear, I was under the impression that you
> write some HDL and the EDA and the fab house take care of (most of the rest).

I didn't write the EDA tools. Even using the tools is a kind of art. Industry-leading tools
like Cadence or Synopsys are large sets of utilities with a complex design flow.
There is no 'pushbutton flow'.

> Here is a good tutorial on SAT and SMT solvers:

Looks like a single screw for building a car.

> https://yurichev.com/writings/SAT_SMT_by_example.pdf
> > If an idea is 'Intel can do it, so we can do it too, especially
> > because we are touched by the Forth spirit', you will definitely fail.
> I think the OKAD approach mirrors other stuff written in the 1980's
> after Mead & Conway's book "Introduction to VLSI Design" shook things up
> a lot. You'd lay out rectangles on a screen and iirc, if a red wire
> crossed a green wire, the intersection was a transitor. People did
> design chips using those methods, including the GA144, but it was a lot
> of work and probably becomes unmanageable for much more complex chips.

OKAD is a good example of how wrong and naive people may be. VLSI design is
not laying out rectangles.

There is a story about a worker who wanted to assemble a television set.
He annoyed engineers by asking them for a schematic for a specific CRT.
They gave him such a schematic in the end. When after a while they asked
him ironically how he was doing with the TV, he brought them to his home and
they were shocked. The entire wall was covered with a sheet of plywood,
on which parts were nailed, assembled exactly as shown in the diagram.
This TV worked! OKAD looks the same. Printed circuit boards differ
significantly from circuits, and silicon dies differ significantly from circuits
and HDLs. There are so many design rules, collected in huge
technology files. These rules are validated in CAD, and it is a difficult
skill to understand which rules, and how tightly, should be set in a project.


Paul Rubin

Jul 6, 2021, 2:35:16 AM
Ilya Tarasov <ilya74....@gmail.com> writes:
> There are so many design rules that are collected in huge
> technology files. These rules are validated in CAD, and it is a difficult
> skill to understand which rules and how tightly should be set in a project.

Design rules in the "old days" were pretty simple: X unit line width, Y
units between lines, Z extra space around any via. There may have been
one or two other parameters but DRC was pretty simple. There was a
timing simulator that worked by estimating capacitance of rectangles.
People did make working digital chips this way. SPICE was more for
analog chips and if you were after higher performance than you could get
with the simple stuff.
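Those "old days" rules are simple enough to sketch as code (illustrative lambda-style numbers, not any real process's rule deck):

```python
# Minimal design-rule check over axis-aligned rectangles (x0, y0, x1, y1),
# in lambda units. Rules: minimum feature width and minimum same-layer spacing.
MIN_WIDTH = 2
MIN_SPACING = 3

def width_ok(rect):
    x0, y0, x1, y1 = rect
    return min(x1 - x0, y1 - y0) >= MIN_WIDTH

def spacing_ok(a, b):
    ax0, ay0, ax1, ay1 = a
    bx0, by0, bx1, by1 = b
    dx = max(bx0 - ax1, ax0 - bx1)   # gap along x; negative means overlap
    dy = max(by0 - ay1, ay0 - by1)   # gap along y; negative means overlap
    if dx < 0 and dy < 0:
        return True                  # rectangles overlap: same shape/net
    return max(dx, dy) >= MIN_SPACING

wires = [(0, 0, 10, 2), (0, 5, 10, 7)]            # two parallel wires, gap of 3
print(all(width_ok(w) for w in wires))            # True: both are 2 units wide
print(spacing_ok(*wires))                         # True: gap meets MIN_SPACING
print(spacing_ok((0, 0, 10, 2), (0, 3, 10, 5)))   # False: gap of only 1
```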

It's interesting to hear that the workflow from HDL to finished chips is
not as simple as I'd imagined. Oh well.

Jurgen Pitaske

Jul 6, 2021, 3:13:52 AM
Just for people who might have the time and look for some background:
One extreme - all of the aspects involved in ASICs and the complexity:
https://anysilicon.com/
And the other extreme, probably the closest there is to Chuck's way at the time
https://www.asic-gmbh.de/array_engl.html
Here you can even do Mixed-Signal ASIC prototyping using a breadboard
https://www.asic-gmbh.de/breadboard_engl.html
I remember the ASIC they designed with EEPROM at the time
- no microprocessor yet on-board - used by a model train company.
Manufactured by Hughes microelectronics

Ilya Tarasov

Jul 6, 2021, 4:11:59 AM
Tuesday, July 6, 2021 at 09:35:16 UTC+3, Paul Rubin:
> Ilya Tarasov <ilya74....@gmail.com> writes:
> > There are so many design rules that are collected in huge
> > technology files. These rules are validated in CAD, and it is a difficult
> > skill to understand which rules and how tightly should be set in a project.
> Design rules in the "old days" were pretty simple: X unit line width, Y
> units between lines, Z extra space around any via. There may have been
> one or two other parameters but DRC was pretty simple. There was a
> timing simulator that worked by estimating capacitance of rectangles.
> People did make working digital chips this way. SPICE was more for
> analog chips and if you were after higher performance than you could get
> with the simple stuff.

It works up to 250 or 180 nm, maybe. Just as a breadboard and wires can be
used for an 8 MHz MCU in a DIP package. High-frequency effects will add
problems at a 100 MHz clock, and will prevent DDR3 memory from working
at all - a multilayer PCB with controlled impedance, differential routing,
shielding, decoupling capacitors, etc. is strictly required.

For ASICs, at least two major technology shifts took place. The first was at 130-90
nm, when wire delay became equal to or greater than gate delay. Tricks with
short spikes, asynchronous schemes, and many other things are gone.
A design must be fully synchronous, clocked by a carefully built clock tree.
The second is near 28 nm, with several non-obvious problems. A clock tree is no
longer able to cover an entire chip area - welcome to Globally Asynchronous,
Locally Synchronous (GALS) architectures. Variations (process, voltage,
temperature) are huge, so timing analysis is very complex. For ASICs, local
overheating is a little bonus - it is not enough to place and route; active gates
may be so dense that the temperature rises too high in a certain small area.
Oh, and above 1 GHz a pack of routing tricks is also needed. For example, a net
may require differential and co-planar routing with shielding around curves.
This is black magic known by people who are deeply inside the physical design
process - I cannot provide a complete list of what is needed.
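The 160 ps figure mentioned earlier in the thread is just the clock period at 6 GHz; a rough budget shows why it is so punishing (the gate-delay and flop-overhead values below are assumptions for illustration, not data from any real process):

```python
# How many levels of logic fit in one clock period, after subtracting a
# fixed per-stage overhead (flop clk-to-q + setup + clock skew/jitter)?
# All delay values are illustrative assumptions.
GATE_DELAY = 15e-12   # assumed gate + local-wire delay per logic level
OVERHEAD = 50e-12     # assumed fixed sequencing overhead per pipeline stage

def max_logic_levels(f_hz, gate_delay_s=GATE_DELAY, overhead_s=OVERHEAD):
    period_s = 1.0 / f_hz
    return int((period_s - overhead_s) // gate_delay_s)

print(1.0 / 6e9)               # ~167 ps period: the "160 ps" in question
print(max_logic_levels(6e9))   # only a handful of gate levels per stage
print(max_logic_levels(1e9))   # far more headroom at 1 GHz
```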

> It's interesting to hear that the workflow from HDL to finished chips is
> not as simple as I'd imagined. Oh well.

Even for FPGAs, synthesis and implementation are separated in the CAD
workflow. For ASICs, implementation is completely different from
synthesis, heavily dependent on the factory and technology process.

Ilya Tarasov

Jul 6, 2021, 4:42:17 AM

> Good analogy, Ilya!
> I have found, when trying to discuss MFX and the out-of-ordering of the instructions,
> that people will say this is easy, and they read a magazine article about it, etc..
> Maybe easily done by Chuck Norris! lol In practice, not done by anybody other than me.

There are a lot of things which cannot be done on a first attempt. Yes, out-of-order
is not an easy thing, and it certainly requires some exercises to gain experience.
It's funny that many hobbyists expect any problem to solve itself, simply
because Forth is being used.

> Juergen Pintaske disgusts me.
> He wants the Forth community to cross their fingers for Stephen Pelc. lol
> Shall we also bring Stephen coffee, shine his shoes, and sharpen his pencils?
> How about a free blow-job? The Forth community is queuing up for the privilege!

'The larger the cabinet, the louder it falls' :)
I saw the SP-Forth community fall and TechnoForth fall. Ok, SP-Forth still exists and is
available to download, but it is very far from success. Both teams wanted to be leaders
of their communities and were caught in a kind of deadlock. They started waiting for
followers who would write applications, but their fans started waiting for bright
news about their success. No serious activity as a result. I told Andrey Cherezov
he should think about strategic plans, but he was blinded by the temporary success
of SP-Forth. His arguments were very like what I can see here. Ok when... ;)

Fourthy Forth

Jul 8, 2021, 1:31:45 PM
Maybe somebody does not know how to design a Forth process.

Regular material, without deviation, always has statistically the same action and interference. Not E=mc2, but it tells a lot..

Hugh Aguilar

Jul 8, 2021, 8:36:31 PM
On Tuesday, July 6, 2021 at 1:42:17 AM UTC-7, Ilya Tarasov wrote:
> > I have found, when trying to discuss MFX and the out-of-ordering of the instructions,
> > that people will say this is easy, and they read a magazine article about it, etc..
> > Maybe easily done by Chuck Norris! lol In practice, not done by anybody other than me.
> There are a lot of things which cannot be done from a first attempt. Yes, out-of-order
> is not an easy thing, and certainly require some exercices to get at least experience.

Well, I solved the out-of-ordering of the instructions for the MiniForth on the first attempt.
I had a solution that worked, anyway.
Steve Brault complained that in some cases my assembler's solution was not optimal.
It was possible to hand-code machine-language that did the same thing but was more
efficient in the sense of having fewer NOP instructions inserted.
I said that it was necessary for the assembly-language programmer to help the assembler
by writing his code in a "riscified" manner. Steve Brault complained that this was not
documented anywhere and he did not know what I meant.
I said that this was very similar to what Michael Abrash described in:
"Zen of Code Optimization" that covered the Pentium with its U and V pipes.
The idea is simple. You hold values in registers for as long as possible. You don't load a
register and then immediately use the register. You load the register, you do something
unrelated, then you use the register --- the idea is that the unrelated code will parallelize
with loading the register and/or with using the register. I never heard any further complaints
from Steve Brault, so I assume he understood what I was telling him --- I never saw any of
his assembly-language code though (except the function that did 16-bit integer addition),
so I don't know what quality level he was achieving. I never saw any of the motion-control
code written in MFX because the motion-control program was proprietary to Testra.
John and Tom Hart were very afraid that I would steal it and go start my own company
selling motion-control boards in competition with Testra. That was paranoia. I wasn't
going to do that, and this was way beyond my ability anyway.
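The scheduling idea Hugh describes (separate a load from its first use so unrelated work can hide the latency) can be sketched with a toy model. This is only an illustration of the load-use hazard, not MFX's actual algorithm; the three-field instruction tuples are invented for the example:

```python
def stalls(seq):
    """Count one-cycle stalls: an instruction reads a register that was
    loaded by the immediately preceding instruction (load-use hazard)."""
    n = 0
    for prev, cur in zip(seq, seq[1:]):
        op, dst, _ = prev
        if op == "load" and dst in cur[2]:
            n += 1
    return n

# Each instruction: (opcode, destination register, source registers)
naive = [
    ("load", "r1", []),
    ("add",  "r3", ["r1", "r2"]),  # uses r1 immediately -> stall
    ("load", "r4", []),
    ("add",  "r6", ["r4", "r5"]),  # uses r4 immediately -> stall
]
scheduled = [
    ("load", "r1", []),
    ("load", "r4", []),            # unrelated work hides r1's latency
    ("add",  "r3", ["r1", "r2"]),
    ("add",  "r6", ["r4", "r5"]),
]
print(stalls(naive), stalls(scheduled))  # 2 0
```

On the real MiniForth the assembler inserted NOPs when it could not find such unrelated work, which is why "riscified" source ordering produced tighter code.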

> It's funny many hobbyists expect that any problem will be solved by itself, simply
> because the Forth will be used.

Programmers tend to be overly focused on the programming language, and not
focused enough on algorithms --- but algorithms can be ported between languages.

Telling people about writing MFX doesn't impress them. They always tell me that
they don't use Forth so all of this is irrelevant --- they are C programmers.
In actuality, a VLIW processor could be built to run C code, and my assembler
ideas would transfer over to it smoothly. Nobody ever builds VLIW processors though.
You are the only person I am acquainted with who knows what a VLIW processor is.
Quite a lot of people use the term VLIW as a synonym for "super-duper."

We had a thread with this hilarious title: "Zero Instruction Computing?"
https://groups.google.com/g/comp.lang.forth/c/dPvjIMFtRVA/m/PEPLhCtvBgAJ

On Sunday, December 27, 2020 at 11:59:07 PM UTC-7, gnuarm.del...@gmail.com wrote:
> It is a VLIW format with individual control points.

Whatever!

Fourthy Forth

Aug 5, 2021, 1:17:11 AM
On Monday, 5 July 2021 at 6:24:31 am UTC+10, Stephen Pelc wrote:
> On 4 Jul 2021 at 17:00:50 CEST, "Ilya Tarasov" <ilya74....@gmail.com>
> wrote:
> > Friday, 2 July 2021 at 14:49:54 UTC+3, Stephen Pelc:

> doing some tool-making for them. They have completed a fair number of chips.
..
> it's a 32 bit integer CPU with floating point and custom instructions. The
> custom instructions help with the goal.
>
> Stephen

Is this GA company. Done number chips and advanced 32bit list. Will run around in circles streaming, or from memory plus stream?

Brad Eckert

Aug 5, 2021, 1:46:42 AM
That is impressive. It must be at a reasonably small process node like 28nm, which is affordable for MPW prototype chips.
At 28nm, you get 8M bits of RAM per square mm. Let's suppose this chip has 5M bits of RAM.
That would be 128K bits, or 4K 32-bit words, per core. Sounds about right.
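Those figures are easy to sanity-check. A quick back-of-envelope in Python, using the assumed numbers above (8 Mbit/mm^2 density, 5 Mbit total RAM, 40 cores - none of these are confirmed by MPE):

```python
chip_ram_bits = 5_000_000   # assumed total on-chip RAM
cores = 40                  # from Stephen's post: 40 CPUs in under a square mm

bits_per_core = chip_ram_bits // cores   # 125,000 bits, i.e. roughly "128K bits"
words_per_core = bits_per_core // 32     # ~3.9K 32-bit words per core

print(bits_per_core, words_per_core)     # 125000 3906
```

So "4K 32-bit words per core" is a round-number approximation of about 3.9K.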

I watched Chuck Moore's interview where he talked about designing his own computer chips with his own tools. He looked good.
My eyeroll moment was when he said that he couldn't build large, fast RAMs. Presumably they aren't OKAD-friendly.
Okay, but isn't that the trick? Modern processors are big RAMs with processing logic bolted on here and there.

At today's prices, small companies can build their own even less ambitious Forth chips that make sense at the 130nm to 350nm nodes.
130nm is very popular because the masks can be made by laser instead of much slower e-beam and there's no need for
multi-layer phase shift masks. The wafer costs aren't too bad either.
With the current supply crunch, I suspect more companies are asking "Why are we still buying off-the-shelf MCUs?".

What's more is that Forth is the most elegant way of computing ever invented. It taps into mathematical principles that are only now
being discussed in terms of the mathematics of functional programming. Concatenative lambda calculus supported directly in
hardware is very good. Hardware stacks, also very good. GreenArrays proved that stacks are in fact green.

This will help create more Forth programmers. The good thing about a language ahead of its time is that its time hasn't passed.
Perhaps Forthers treat programming the way the French treat food. Would that make C the equivalent of English cuisine?



Jurgen Pitaske

Aug 5, 2021, 5:40:19 AM
Just had a look at the latest news on the GreenArrays website, copied from there:

Latest developments:

As of Spring 2021, shipments of the EVB002 evaluation kit and of G144A12 chips continue to be made.
The arrayForth 3 integrated development system is in use with no reported problems.
Design of a new chip, G144A2x, continues; this will be upward compatible with the G144A12,
with significant improvements.
Development of Application Notes, including that of a software defined GPS receiver, continues.

Has there been a hint somewhere when this new chip will be out? It seems to be shifting.

dxforth

Aug 5, 2021, 6:09:50 AM
On 5/08/2021 15:46, Brad Eckert wrote:
> ...
> This will help create more Forth programmers. The good thing about a language ahead of its time is that its time hasn't passed.
> Perhaps Forthers treat programming the way the French treat food. Would that make C the equivalent of English cuisine?
>

If Forth is anything like the cheap frozen French import pastries that
turn up on local supermarket shelves, then it suits consumers with more
imagination than taste.

Stephen Pelc

Aug 5, 2021, 8:25:48 AM
On 5 Aug 2021 at 06:17:10 BST, "Fourthy Forth" <fourth...@gmail.com> wrote:
>
> Is this GA company.

No.

> Done number chips and advanced 32bit list. Will run around in circles
> streaming,
> or from memory plus stream?

I don't understand what you mean. Programs run from memory with separate
areas for code and data.

Stephen

Stephen Pelc

Aug 5, 2021, 8:29:45 AM
On 5 Aug 2021 at 06:46:41 BST, "Brad Eckert" <hwf...@gmail.com> wrote:
>
> Perhaps Forthers treat programming the way the French treat food. Would that
> make C the equivalent of English cuisine?

Please, USAnian cuisine - lots of fat and sugar.

Stephen

Fourthy Forth

Aug 5, 2021, 10:02:02 AM
On Thursday, 5 August 2021 at 10:25:48 pm UTC+10, Stephen Pelc wrote:
> On 5 Aug 2021 at 06:17:10 BST, "Fourthy Forth" <fourth...@gmail.com> wrote:
> >
> > Is this GA company.
>
> No.

Thank you

> > Done number chips and advanced 32bit list. Will run around in circles
> > streaming,
> > or from memory plus stream?
> I don't understand what you mean. Programs run from memory with separate
> areas for code and data.
>
> Stephen

Parallel serial process, small memory, program hard, vers normal

Brad Eckert

Aug 6, 2021, 3:58:58 PM
On Friday, July 2, 2021 at 6:34:32 AM UTC-7, Brian Fox wrote:
> On 2021-07-02 7:49 AM, Stephen Pelc wrote:
> > An MPE client is currently designing a new dual-stack machine. The
> > predicted performance is 6 GHz (instructions per second). 40 CPUs
> > occupy less than 1 sqare mm.
> >
> > It's for real, and they have a paying client for it.
> >
> > Depending on life, there may be more information at EuroForth 21 in
> > Rome in September. I have my EU Covid passport already.
> >
> > Stephen
> >
> That's really exciting Stephen.
>
> I have always found it tragic that Chuck's CPU ideas didn't find a home
> in the bigger world.
>
I would suppose it uses lessons from shBoom but puts stacks in hardware.
In a modern chip, wire delays trump logic delays.
A hardware stack is a bidirectional shift register where all of the bits are adjacent so there are no long wires.
That allows stacks to run at high speed, much faster than a decently-sized memory.
So, a shBoom type of architecture with 8-bit instructions in a 32-bit group makes sense.
6 GHz instructions means 1.5 GHz memory.
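The fetch-group arithmetic in that last line follows from packing four 8-bit instruction slots into each 32-bit memory word, shBoom-style. A small sketch (the big-endian slot order is an assumption for illustration, not the actual chip's encoding):

```python
def pack(ops):
    """Pack four 8-bit instruction slots into one 32-bit fetch group,
    first slot in the most significant byte."""
    assert len(ops) == 4 and all(0 <= op <= 0xFF for op in ops)
    word = 0
    for op in ops:
        word = (word << 8) | op
    return word

def unpack(word):
    """Recover the four slots in execution order."""
    return [(word >> shift) & 0xFF for shift in (24, 16, 8, 0)]

group = pack([0x01, 0x02, 0x03, 0x04])
print(hex(group))                              # 0x1020304
assert unpack(group) == [0x01, 0x02, 0x03, 0x04]

# One 32-bit fetch feeds four instruction slots, so a 1.5 GHz
# instruction fetch sustains 4 * 1.5e9 = 6e9 slots per second.
```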

Rick C

Aug 17, 2021, 8:32:21 AM
I believe that much static RAM would consume a fair amount of current. In devices like this power consumption is typically an important detail to control. So it would be useful to learn what the device actually is using.


> I watched Chuck Moore's interview where he talked about designing his own computer chips with his his own tools. He looked good.
> My eyeroll moment was when he said that he couldn't build large, fast RAMs. Presumably they aren't OKAD-friendly.
> Okay, but isn't that the trick? Modern processors are big RAMs with processing logic bolted on here and there.

"Modern" processors aren't designed to run Forth apps and vice versa. Isn't that the issue being addressed by such chips?


> At today's prices, small companies can build their own even less ambitious Forth chips that make sense at the 130nm to 350nm nodes.
> 130nm is very popular because the masks can be made by laser instead of much slower e-beam and there's no need for
> multi-layer phase shift masks. The wafer costs aren't too bad either.
> With the current supply crunch, I suspect more companies are asking "Why are we still buying off-the-shelf MCUs?".

ADC, DAC and other analog I/Os as well as the many digital peripherals... that's a big part of the reason. There's a lot of IP amortized in such off the shelf MCUs, not to mention the large code base and tool sets. Even when building a custom chip it is not very common to roll your own CPU to put in it. There has to be a compelling case to support such an investment.


> What's more is that Forth is the most elegant way of computing ever invented. It taps into mathematical principles that are only now
> being discussed in terms of the mathematics of functional programming. Concatenative lambda calculus supported directly in
> hardware is very good. Hardware stacks, also very good. GreenArrays proved that stacks are in fact green.

Not to be snide, but I wasn't aware that GreenArrays proved much of anything with the GA144 except that a CPU chip could be designed that sounded so good and was nearly completely ignored by anyone building products. It is one of the most unrealistic CPUs ever conceived, worse than the RCA 1802 COSMAC. As odd as it was, it has found a home in the space community... somehow.


> This will help create more Forth programmers. The good thing about a language ahead of its time is that its time hasn't passed.
> Perhaps Forthers treat programming the way the French treat food. Would that make C the equivalent of English cuisine?

You can't say the time for Forth has passed. I'm not sure it ever existed... did it? Certainly it is not yet to come.

--

Rick C.

+ Get 1,000 miles of free Supercharging
+ Tesla referral code - https://ts.la/richard11209

Brad Eckert

Aug 20, 2021, 2:46:09 PM
On Tuesday, August 17, 2021 at 5:32:21 AM UTC-7, gnuarm.del...@gmail.com wrote:
> I believe that much static RAM would consume a fair amount of current. In devices like this power consumption is typically an important detail to control. So it would be useful to learn what the device actually is using.
That's an interesting thought, but it seems like being performance-oriented they aren't running off batteries. But isn't this a loaded question? Otherwise, why would you start off your response with a non-issue?
> "Modern" processors aren't designed to run Forth apps and vice versa. Isn't that the issue being addressed by such chips?
They are designed to run apps. Apps are CPU coupled to memory. I would imagine "the issue" being addressed has to do with the C programming model.
Maybe they are philosophically opposed to RISC's overhead of nested calls. Of course, saving every return address to the stack isn't cheap. Having stacks in memory, definitely not cheap.

But isn't this a language problem? Now we are penalizing factoring?

> ADC, DAC and other analog I/Os as well as the many digital peripherals... that's a big part of the reason. There's a lot of IP amortized in such off the shelf MCUs, not to mention the large code base and tool sets. Even when building a custom chip it is not very common to roll your own CPU to put in it. There has to be a compelling case to support such an investment.

I would characterize that as an investment in our youth. Of course, there is the sunk cost problem. But, Google (why does it have to be them?) found a way around some of that. eFabless gives your kids (or your inner kid) a playground to just do interesting things in.

> Not to be snide, but I wasn't aware that GreenArrays proved much of anything with the GA144 except that a CPU chip could be designed that sounded so good and was nearly completely ignored by anyone building products. It is one of the most unrealistic CPUs ever conceived, worse than the RCA 1802 COSMAC. As odd as it was, it has found a home in the space community... somehow.

Isn't this about where you are personally? Chuck Moore did in fact greatly enrich GlobalFoundries by fixing their broken fab models when it really mattered. The benefits to humanity of Chuck Moore are profound. That is a life well-lived.

Paul Rubin

Aug 20, 2021, 2:56:07 PM
Brad Eckert <hwf...@gmail.com> writes:
> Chuck Moore did in fact greatly enrich GlobalFoundries by fixing their
> broken fab models when it really mattered.

Wait, what? Do you mean OUR Chuck Moore? You may be confusing him with
another Chuck Moore:

https://en.wikipedia.org/wiki/Charles_R._Moore_(computer_engineer)

Brad Eckert

Aug 20, 2021, 3:53:44 PM
Wait, what? There's moore than one?

Yeah, that whole temperature model being important when nobody thought it mattered.
Well of course it matters if scaling matters. So, GlobalFoundries became the guys who could deliver.

Shouldn't we be building Forth systems just because we can? Just to pay tribute to such a remarkable human being as Charles H. Moore?

Forth was always doomed because of the sunk-cost problem. Sorry, no hiding the source in libraries, the source is the library. So yes. Forth was always ahead of its time. It doesn't play in the 3D money game. It's a whole different thing. Oh the times, they are a-changing.

C is a legacy of what? What did the 20th century get you? Bigger KaBooms? How will the industry paradigm overcome the same old problems that can't be ignored anymore? Isn't library code the real problem? Notice how Chuck always challenged his thinking. What a wizard.

The soul of a machine, isn't that the heart of the problem? This separation of the creator from his/her creation for reasons of financial empire. This empire of illusion. Too bad Chuck fell for the illusion, but it gave us Forth. Libraries are not your friend because they are built on this legacy.
No, if you are out to build something to materially benefit all of humanity, isn't Forth really your only sustainable option?

Look at Woody Harrelson as a role model for this. Lives in Maui not far from our own Elizabeth Conklin. Had his wedding bands custom made from gold dust panned from streams in Northern California. Not far from Chuck's old stomping grounds. He could have gotten that gold off the market. But he didn't.

Paul Rubin

Aug 20, 2021, 4:37:18 PM
Brad Eckert <hwf...@gmail.com> writes:
> Well of course it matters if scaling matters. So, GlobalFoundries
> became the guys who could deliver... Shouldn't we be building Forth
> systems just because we can? Just to pay tribute to such a remarkable
> human being as Charles H. Moore?

Your entire post is completely confusing, but particularly: are you
saying that Charles H. Moore (the inventor of Forth) had something to do
with GlobalFoundries? Or are you thinking of Charles R. Moore, a
different person who was an architect at AMD, which later spun off its
fab division to become GlobalFoundries? It sounds more likely to me
that you are thinking of Charles R. Moore, but I can't tell.

> Too bad Chuck fell for the illusion, but it gave us Forth. Libraries
> are not your friend because they are built on this legacy.

I completely don't understand what point you are making about libraries.

> No, if you are out to build something to materially benefit all of
> humanity, isn't Forth really your only sustainable option?

This I don't understand either. Forth is interesting historically and
maybe in the present day, but of course there are other ways to write
software, and even ways to materially benefit humanity without involving
software.

> Look at Woody Harrelson as a role model for this. Lives in Maui not
> far from our own Elizabeth Conklin. Had his wedding bands custom made
> from gold dust panned from streams in Northern California.

That sounds pretty cool. We haven't heard from Elizabeth here lately.
I hope she comes back.

Brad Eckert

Aug 20, 2021, 6:31:22 PM
Oh, I thought you were making a joke. My mistake. My recollection of the Charles H. Moore saga involves him desperately trying to make his OKAD tool
work with the foundry-supplied transistor models. These models were bad, and he made no secret of his use of GlobalFoundries as his supplier. The resulting corrections to that fab's "secret sauce" would be reasonably assumed to have important downstream ramifications.
Of course, if it hadn't been "our Chuck" it would have been someone with real money so the problem would have been fixed, after the crucial market window.

Other than that, my post is more to clarify my own thoughts on the metaphysics of sustainably sourced computing. Way too Zen. How this plays out in the hardware world is anyone's guess. Google is pioneering a new model that allows software guys to make real chips on real foundries. Yup, there are all kinds of languages and all kinds of tools. Did you write them? Did someone who loves you write them? What were their motivations?

Are those motivations something you want to buy into? Google's FOSS commitment seemed to be a showstopper for the fabs, but they found a way. Their way is something I can buy into.

I own an Apple smartphone and sure wish I didn't. Yup, I think the old duderino is about to go Android. I would love Apple if they weren't so cut-throat gangster. Apple is the new Philip Morris. Not something anyone should buy into, but that's me.

You seem to share the same view of historical Forth as Charles H. Moore. It's an interesting footnote, but it's not going anywhere. Forth is words and stack. What we build around that is up to us. That's why the multicore Forth chip - because they can. C is also an interesting historical footnote. But as you can see, it's still around. Why? Libraries. You know, the things that tie us to our past. The things Chuck tried to warn us about. Isn't the point of computing to move beyond the structures of the past? Isn't that why he left Forth Inc? Everyone wanted to canonize their past work, just like the C guys, instead of leaving Forth open to endless conceptualization. Here's your virtual machine, this is what Forth is. No it isn't, it's only one embodiment of an underlying informatics based on stacks.

I can see how reinventing the wheel doesn't make business sense. But what if reinventing the wheel is the point? That's what makes computer programming a kind of Zen practice. Keep taking away the old, not continue to build upon it. Libraries make us slaves to our past. Standing on the shoulders of giants is only as good as the giants themselves. If you're in for the hero's journey, Forth is probably more your thing. Yoda and floating rocks sold separately.


Paul Rubin

Aug 20, 2021, 11:08:29 PM
Brad Eckert <hwf...@gmail.com> writes:
> C is also an interesting historical footnote. But as you can see, it's
> still around. Why? Libraries. You know, the things that tie us to our
> past. The things Chuck tried to warn us about.

C is still relevant because of the humongous amount of historical
programs written in it, by which I mean large programs like the Linux
kernel, not libraries in particular. There really isn't that much
library code for C compared with other languages. You have to do more
yourself. C is historical in that relatively few people today, when
they decide to embark on a large new software project, choose to write
it in C. C programming today is either small embedded applications, or
maintenance programming of older projects.

Library ecosystems spring up around new languages relatively quickly:
look at Go, Rust, Ruby, etc., all of which are much newer than C and
have plenty of libraries. This is especially true since you can usually
call C code from other languages.

Anton Ertl

Aug 21, 2021, 4:14:25 AM
Paul Rubin <no.e...@nospam.invalid> writes:
>Your entire post is completely confusing, but particularly: are you
>saying that Charles H. Moore (the inventor of Forth) had something to do
>with GlobalFoundries? Or are you thinking of Charles R. Moore, a
>different person who was an architect at AMD, which later spun off its
>fab division to become GlobalFoundries?

Looking at the wikipedia page of Charles R. Moore, he was a computer
architect, and while that is adjacent to circuit design, I doubt that
he made substantial contributions to circuit design.

Concerning Charles H. Moore, he went from programming down to
programming language design (Forth), to computer architecture (Novix
ff.), and finally to circuit design (OKAD).

However, he has always used outdated processes (and I remember the
name MOSIS, but not Globalfoundries), so it's not very likely that he
finds problems and solutions that make a difference for a lot of other
products.

He also uses the processes in unusual ways: At EuroForth he mentioned
that he had a problem with one particular design rule (basically his
design was not dense enough), and Bernd Paysan commented that usual
circuits don't have a problem with that rule; so it can easily be that
he discovered things about the process that are not in the usual
models, but it's less clear that this discovery helps other circuits.

- anton
--
M. Anton Ertl http://www.complang.tuwien.ac.at/anton/home.html
comp.lang.forth FAQs: http://www.complang.tuwien.ac.at/forth/faq/toc.html
New standard: http://www.forth200x.org/forth200x.html
EuroForth 2021: https://euro.theforth.net/2021

Jurgen Pitaske

Aug 21, 2021, 5:45:23 AM
A slight correction:

> However, he has always used outdated processes (and I remember the
> name MOSIS, but not Globalfoundries), so it's not very likely that he
> finds problems and solutions that make a difference for a lot of other
> products.

These were not outdated processes of MOSIS at the time.
Chuck had to use processes that he could afford
and that were available for the way he generated the GDSII.

He had simulated what would come out,
so the process Chuck selected was suitable and good enough for his prototypes.

See Wikipedia https://en.wikipedia.org/wiki/MOSIS
It seems that the MOSIS website is dead now,
so any new GreenArrays chips would need a new set of masks to be paid for at a new suitable fab.

Hopefully, GA still has sufficient wafers and dies to bridge the gap until the new version comes out - whenever this might be.
No dates for availability on the GA website.

Jurgen Pitaske

Aug 21, 2021, 6:03:32 AM
Just to add to the timeline:
https://news.ycombinator.com/item?id=3267428
GA144, available since 2011,
and not many applications known unfortunately over the last 10 years.

Anton Ertl

Aug 21, 2021, 7:55:34 AM
Jurgen Pitaske <jpit...@gmail.com> writes:
>On Saturday, 21 August 2021 at 09:14:25 UTC+1, Anton Ertl wrote:
>> However, he has always used outdated processes (and I remember the
>> name MOSIS, but not Globalfoundries), so it's not very likely that he
>> finds problems and solutions that make a difference for a lot of other
>> products.
>
>These were not outdated processes of MOSIS at the time.

They were many generations behind the leading edge. Such processes
are usually used for existing, working designs (designed at a time
when the process was leading edge or close to it), not for new
designs, so whatever he found out about the process did not help many
others.

>Chuck had to use processes that he could afford
>and that were available for the way he generated the GDSII.

This may explain, but does not contradict what I wrote.

>See Wikipedia https://en.wikipedia.org/wiki/MOSIS
>It seems that the MOSIS Website is dead now,

Works for me, albeit only with JavaScript. They seem to no longer do
their own manufacturing (if they ever did; I used to think so), but
currently serve as an intermediary to the big foundries TSMC and
Globalfoundries; it seems that you can still get very old processes
from these foundries. MOSIS offers TSMC 350nm; to give you an idea
how outdated that is: the Pentium Pro (1995) was manufactured in a
350nm process. There obviously still are circuits manufactured in
that process (otherwise the line would have been shut down), but not
new designs.

Jurgen Pitaske

Aug 21, 2021, 8:17:16 AM
I am sorry, but your information about processes needs some updating, I assume.
Processes are there to be used.
If they are not relevant anymore, they stop being available.
Chuck had to use what he could afford. And the latest processes were out of reach.
And the volumes ordered did not justify it anyway.

> >Chuck had to use processes that he could afford
> >and that were available for the way he generated the GDSII.
> This may explain, but does not contradict what I wrote.

What you wrote is incorrect and does not change if you introduce a maybe.

If it works your way,
there will be only one latest process per foundry, and everybody can basically throw away the masks with the first delivery,
as it will be superseded by a newer process when you order next time.
This is not software - this is silicon.

And regarding 350nm - you are basically proving my point - contradicting yours.

Jurgen Pitaske

Aug 21, 2021, 12:45:50 PM
It makes an interesting comparison of 2 people who did Forth Processors:

On one side there is Charles H. Moore, who did it all,
including the 144 CPU GA144.

Design system: his own, unchecked by others to verify quality
Commercial impact: not known
Applications: not known
Volumes sold: not known
Products manufactured: not known

On the other hand there is Bernd Paysan,
who released the b16 code at about the same time,
with the code made available on the Internet.
This is not really comparable, but it is another Forth processor:

Design System: Standard supplier design tools for chip design
Commercial impact: b16 definitely used in commercial volume products
Applications: Seems to be battery control
Volumes sold: must be about 10 000 to 100 000 at least, otherwise not commercially viable

We can only hope
that the processor IP was paid for well,
and still is,
as there were two processor versions used in these commercial applications,
according to the Internet, commented on by Bernd himself:
https://comp.lang.forth.narkive.com/89qfT2c3/msl16-fpga-forth-processors

S Jack

Aug 21, 2021, 1:44:37 PM
On Friday, August 20, 2021 at 5:31:22 PM UTC-5, Brad Eckert wrote:
our past. The things Chuck tried to warn us about. Isn't the point of computing to move beyond the structures of the past? Isn't that why he left Forth Inc? Everyone wanted to canonize their

"Standards are a big impediment in the evolution of Forth into Super Forth."
-- Nietzsche

Anton Ertl

Aug 21, 2021, 1:44:43 PM
Jurgen Pitaske <jpit...@gmail.com> writes:
>On the other hand there is Bernd Paysan,
>who released the b16 Code at about the same time,

Bernd Paysan was later. He took inspiration from Chuck Moore's
designs with 5-bit instructions (from MuP20 in 1990 to c18 in 2001).

>Design System: Standard supplier design tools for chip design
>Commercial impact: b16 definitely used in commercial volume products
>Applications: Seems to be battery control

It was used in several products from the company (or sequence of
companies as it was taken over) he worked for. One was Hi-Fi systems
for cars. Later they changed business and developed battery
controllers.

>Volumes sold: must be about 10 000 to 100 000 at least, otherwise not commercially viable

The battery controller was built into hundreds of millions, maybe
billions of smartphones and tablet computers, maybe they are still
using the b16, maybe not. The Hi-Fi application certainly had a much
smaller volume in terms of numbers of CPUs.

Paul Rubin

Aug 21, 2021, 3:38:57 PM
an...@mips.complang.tuwien.ac.at (Anton Ertl) writes:
> [MOSIS] They seem to no longer do their own manufacturing (if they
> ever did; I used to think so), but currently serve as an intermediary
> to the big foundries TSMC and Globalfoundries

MOSIS is an MPW (multi-project wafer) shuttle service and always has
been, afaict. It is an industry-academic consortium that makes a lot of
chips for VLSI design classes in universities, and that sort of thing.
It gathers together chip designs from multiple users, then combines the
designs onto a single wafer, gets the wafers fabbed at wherever
depending on the process, deals with cutting up and packaging the chips
etc.

I don't know whether MOSIS is US-only. cmp.imag.fr is another service
like that, based in France. They have lots of processes available
including some relatively advanced ones.

Google will now do something like this for free for FOSS projects. I
don't remember the specifics. But if you want to make a Forth chip,
here is your chance. Some more info:

https://www.theregister.com/2020/07/03/open_chip_hardware/

Rick C

Aug 22, 2021, 11:55:25 PM
On Friday, August 20, 2021 at 2:46:09 PM UTC-4, Brad Eckert wrote:
> On Tuesday, August 17, 2021 at 5:32:21 AM UTC-7, gnuarm.del...@gmail.com wrote:
> > I believe that much static RAM would consume a fair amount of current. In devices like this power consumption is typically an important detail to control. So it would be useful to learn what the device actually is using.
> That's an interesting thought, but it seems like being performance-oriented they aren't running off batteries. But isn't this a loaded question? Otherwise, why would you start off your response with a non-issue?

Sorry, I have no idea what you are referring to as "a loaded question". Power consumption is not purely an issue when running from batteries. Heat dissipation can also be important. I know much of today's more advanced electronics consider power consumption even when plugged in. Otherwise desktop computing would have been in the kW range by now.


> > "Modern" processors aren't designed to run Forth apps and vice versa. Isn't that the issue being addressed by such chips?
> They are designed to run apps. Apps are CPU coupled to memory. I would imagine "the issue" being addressed has to do with the C programming model.
> Maybe they are philosophically opposed to RISC's overhead of nested calls. Of course, saving every return address to the stack isn't cheap. Having stacks in memory, definitely not cheap.

Sorry, not following this thought.


> But isn't this a language problem? Now we are penalizing factoring?
> > ADC, DAC and other analog I/Os as well as the many digital peripherals... that's a big part of the reason. There's a lot of IP amortized in such off the shelf MCUs, not to mention the large code base and tool sets. Even when building a custom chip it is not very common to roll your own CPU to put in it. There has to be a compelling case to support such an investment.
> I would characterize that as an investment in our youth. Of course, there is the sunk cost problem. But, Google (why does it have to be them?) found a way around some of that. eFabless gives your kids (or your inner kid) a playground to just do interesting things in.

Sorry, not following your logic. Creating a new MCU is not inexpensive even ignoring the cost of spinning silicon. A CPU and all the peripherals around it are IP that must be developed, debugged, documented and verified before anyone builds a single chip. It is only the more advanced processes that require significant investment in the actual silicon fabrication.

I don't see how sunk cost enters into the issue unless you have already designed a CPU and surrounding IP and are throwing that away.


> > Not to be snide, but I wasn't aware that GreenArrays proved much of anything with the GA144 except that a CPU chip could be designed that sounded so good and was nearly completely ignored by anyone building products. It is one of the most unrealistic CPUs ever conceived, worse than the RCA 1802 COSMAC. As odd as it was, it has found a home in the space community... somehow.
> Isn't this about where you are personally? Chuck Moore did in fact greatly enrich GlobalFoundries by fixing their broken fab models when it really mattered. The benefits to humanity of Chuck Moore are profound. That is a life well-lived.

Sorry? Are you suggesting the design and fabrication of the GA144 was a charity effort to improve someone's tools for a fab?

Chuck Moore's life achievements should not be conflated with the company GreenArrays. GA has accomplished very little unless there have been advances that I am not aware of.

I'm glad GA built the chip. I just wish they had actually considered a market and attempted to design the chip to address one. The GA144 is a very interesting device, but it was an effort based on the idea of, "build it and they will buy", but they didn't. It got some press at the time and then faded into obscurity.

--

Rick C.

-- Get 1,000 miles of free Supercharging
-- Tesla referral code - https://ts.la/richard11209

Fourthy Forth

Aug 23, 2021, 9:46:26 AM
On Monday, 23 August 2021 at 1:55:25 pm UTC+10, gnuarm.del...@gmail.com wrote:
> On Friday, August 20, 2021 at 2:46:09 PM UTC-4, Brad Eckert wrote:
> > On Tuesday, August 17, 2021 at 5:32:21 AM UTC-7, gnuarm.del...@gmail.com wrote:
> > > I believe that much static RAM would consume a fair amount of current. In devices like this power consumption is typically an important detail to control. So it would be useful to learn what the device actually is using.
> > That's an interesting thought, but it seems like being performance-oriented they aren't running off batteries. But isn't this a loaded question? Otherwise, why would you start off your response with a non-issue?

Parasitic memory, capacitance of surface, very long refresh, high speed, not many transistors. Mr Moore not want to use. He know this, but GA not do it. Decent size, take up as much as 144 processors. 144 go, 1000 new process, big memory, standard outside memory. 144 like little boy, who think it can melt snow, but very small, very cold boy, with little pee. 1000 advanced processors, with big memory, like firetruck, very cold, lose little water, then hose snow. Done.


> Sorry, I have no idea what you are referring to as "a loaded question". Power consumption is not purely an issue when running from batteries. Heat dissipation can also be important. I know much of today's more advanced electronics consider power consumption even when plugged in. Otherwise desktop computing would have been in the kW range by now.
> > > "Modern" processors aren't designed to run Forth apps and vice versa. Isn't that the issue being addressed by such chips?
> > They are designed to run apps. Apps are CPU coupled to memory. I would imagine "the issue" being addressed has to do with the C programming model.
> > Maybe they are philosophically opposed to RISC's overhead of nested calls. Of course, saving every return address to the stack isn't cheap. Having stacks in memory, definitely not cheap.
> Sorry, not following this thought.
> > But isn't this a language problem? Now we are penalizing factoring?
> > > ADC, DAC and other analog I/Os as well as the many digital peripherals... that's a big part of the reason. There's a lot of IP amortized in such off the shelf MCUs, not to mention the large code base and tool sets. Even when building a custom chip it is not very common to roll your own CPU to put in it. There has to be a compelling case to support such an investment.
> > I would characterize that as an investment in our youth. Of course, there is the sunk cost problem. But, Google (why does it have to be them?) found a way around some of that. eFabless gives your kids (or your inner kid) a playground to just do interesting things in.
> Sorry, not following your logic. Creating a new MCU is not inexpensive even ignoring the cost of spinning silicon. A CPU and all the peripherals around it are IP that must be developed, debugged, documented and verified before anyone builds a single chip. It is only the more advanced processes that require significant investment in the actual silicon fabrication.
>
> I don't see how sunk cost enters into the issue unless you have already designed a CPU and surrounding IP and are throwing that away.
> > > Not to be snide, but I wasn't aware that GreenArrays proved much of anything with the GA144 except that a CPU chip could be designed that sounded so good and was nearly completely ignored by anyone building products. It is one of the most unrealistic CPUs ever conceived, worse than the RCA 1802 COSMAC. As odd as it was, it has found a home in the space community... somehow.
> > Isn't this about where you are personally? Chuck Moore did in fact greatly enrich GlobalFoundries by fixing their broken fab models when it really mattered. The benefits to humanity of Chuck Moore are profound. That is a life well-lived.
> Sorry? Are you suggesting the design and fabrication of the GA144 was a charity effort to improve someone's tools for a fab?
>
> Chuck Moore's life achievements should not be conflated with the company GreenArrays. GA has accomplished very little unless there have been advances that I am not aware of.

All trade secret, survive for years on paid work, could be hundreds of millions in one, we are not supposed to be told.

> I'm glad GA built the chip. I just wish they had actually considered a market and attempted to design the chip to address one. The GA144 is a very interesting device, but it was an effort based on the idea of, "build it and they will buy", but they didn't. It got some press at the time and then faded into obscurity.

Build what need, and they buy. Build what want, and they buy. SpaceX, but need better.

But we decide. 6 GHz chip built by French company.

Jurgen Pitaske

Aug 25, 2021, 4:17:16 AM
Just to add some of the successes of the good old 1802, as it was not that bad;
it was actually the first CMOS microprocessor worldwide, especially for low-power applications:

If you could afford it, you could drive down the motorway at 150 miles per hour, enabled by the BMW Motronic, which was 1802-based.

And after your trip you could use your German intelligent phone based on the 1802, which won the design against the TMS1000.

Or if you had to call from the petrol station, why not use the 1802-based British AGI payphone. Here I actually wrote some test routines, in addition to providing the technical support.

Or you could find your next flight details on the French teletext terminal, via the VIS system controlled by the 1802.

All of these are just a few examples from Europe between 1979 and 1984 that come to mind,
so there must be a lot more in Europe and the rest of the world,
like the Nordic TELMAC hobby computer
and, as we probably all know, the ELF.

All of these projects and additional customers needed support to understand this processor,
so I put together and published the BMP802; you can find the PDF here:
http://www.exemark.com/CDP1802%20Microprocessor%20IP%20in%20VHDL.htm
It might soon be part of my bookshelf, as I just got permission to re-publish it here:
https://www.amazon.co.uk/Juergen-Pintaske/e/B00N8HVEZM
42 years after the original was published in 1980.

The maker scene then was the 1802 and the AMSAT amateur satellite systems,
based on the 1802 and Forth,
see https://www.amazon.co.uk/gp/product/B07SGWCSKT/ref=dbs_a_def_rwt_bibl_vppi_i17.
And there was a Forth from FORTH, Inc., to my knowledge the first Forth for embedded applications,
and an 1802 Forth version from MPE.

And the 1802 group is still very active now as it has been for many years
https://groups.io/g/cosmacelf/message/148

Rick C

Aug 25, 2021, 2:41:43 PM
On Wednesday, August 25, 2021 at 4:17:16 AM UTC-4, jpit...@gmail.com wrote:
> Just to add some of the successes of the good old 1802, as it was not that bad
> actually the first CMOS microprocessor worldwide for low power applications especially ... :

But that was because it was CMOS, not because it was a good processor.


> If you could afford it, you could drive down the motorway at 150 miles per hour , enabled by the BMW Motronic - 1802 based.

Is that some sort of metric, the fact that when it was new it found its way into a product? So it did significantly better than the GA144. Not a high bar.


> And after your trip you could use your German intelligent phone based on the 1802 - won against the TMS1000.

The reference point is a 4 bit processor? The bar is getting lower.


> Or if you had to call from the petrol station, why not use the 1802 based British AGI payphone. Here I actually wrote some test routines for this in addition to the technical support .
>
> Or finding out your next flight details using the French teletext terminal using the VIS system controlled by the 1802.
>
> All of these were just a few between 1979-1984 and in Europe that come to mind,
> so there must be a lot more in Europe and the rest of the world,
> like the Nordic TELMAC hobby computer
> and as we probably all know - ELF.

Exactly, while the 8080 was flying off the shelves the 1802 was finding a few crumbs. While CPU/MCUs in general continued to increase capability and speed the 1802 remained stuck in first gear. It's a cute puppy with a brown spot around one eye.


> All of these projects and additional customers needed support to understand this processor,
> so I put together and published the BMP802, you find the PDF here,
> http://www.exemark.com/CDP1802%20Microprocessor%20IP%20in%20VHDL.htm
> and it might be soon part of my bookshelf, as I just got permission to re-publish it here
> https://www.amazon.co.uk/Juergen-Pintaske/e/B00N8HVEZM
> 42 years after the original was published in 1980.
>
> The maker scene then was 1802 and amateur satellite systems AMSAT,
> based on the 1802 and Forth,
> see https://www.amazon.co.uk/gp/product/B07SGWCSKT/ref=dbs_a_def_rwt_bibl_vppi_i17.
> And there was a Forth from Forth INC., to my knowledge the first Forth for Embedded Applications
> and an 1802 Forth version from MPE.
>
> And the 1802 group is still very active now as it has been for many years
> https://groups.io/g/cosmacelf/message/148

I would compare that to the Forth community and that's no compliment.

The real claim to fame for the 1802 is simply that they made a rad hard, space qualified version which has flown around the solar system. Otherwise it would be entirely on the scrap heap of processing history along with many other devices that had their day and are no more.

--

Rick C.

-+ Get 1,000 miles of free Supercharging
-+ Tesla referral code - https://ts.la/richard11209

Fourthy Forth

Aug 25, 2021, 4:08:03 PM
Please, no more evil

Rick C

Aug 25, 2021, 5:29:54 PM
Yes, no more evil indeed!

--

Rick C.

+- Get 1,000 miles of free Supercharging
+- Tesla referral code - https://ts.la/richard11209

Jurgen Pitaske

Aug 26, 2021, 3:41:26 AM
What you basically stated:

The HW and SW designers at BMW were idiots to use the 1802 - and obviously their management.
The same statement you meant regarding the other examples I used.
Obviously the RCA team were incompetent to design and manufacture the 1802 and IOs in the first place.

So the same you would probably state about the people who pay you.
Are they aware of this?
Might be worth a post on LinkedIn.

How arrogant can one person be? You are definitely a great example.

One evil person like you is definitely enough here.

Fortunately your knowledge is in PC104 and not in Forth as you state yourself.
http://www.arius.com/

https://www.linkedin.com/in/ariusinc/

Related to Forth, from LinkedIn:

Rick Collins
Electrical Engineering Design and Production Services

I use Forth very intermittently.
So I have to relearn it for nearly every project.
Often it is hard to read my own code a year later.

Reply
See profile for Paul Bennett IEng MIET
Paul Bennett IEng MIET

Systems Engineer at HIDECS Consultancy

As they say, "practice makes perfect" and like anything, practicing helps with keeping it in mind.

Forth is strange to many, but if you know words and numbers, then you are at least half way there. Numbers are parameters that are placed on the stack. Words are named functions that work on items on the stack, consuming or depositing numbers during their performance. Words reside in the dictionary. You can extend the system by adding new functions (words) into the dictionary. Some of the functions already in the dictionary will assist in doing that compilation.

As for making it easier to read your code many years later, always include comments that explain what and how a word functions.

Rick C

Aug 26, 2021, 3:07:10 PM
Wow! I used to think Hugh was the only real nutter here and others were whipped into a frenzy by his insanity. Clearly there are others who don't know how to have a civil discourse so as to discuss a topic rationally and can be every bit as insane as Hugh. I've made excuses for some, but clearly I was wrong in doing that. I don't even want to reply to his post as he may see that as an attack like Hugh does.

What can I say? Each of our posts speaks for itself.

--

Rick C.

++ Get 1,000 miles of free Supercharging
++ Tesla referral code - https://ts.la/richard11209

Hugh Aguilar

Aug 27, 2021, 12:44:45 AM
On Thursday, August 26, 2021 at 12:07:10 PM UTC-7, gnuarm.del...@gmail.com wrote:
> On Wednesday, August 25, 2021 at 5:29:54 PM UTC-4, Rick C wrote:
> > On Wednesday, August 25, 2021 at 4:08:03 PM UTC-4, Fourthy Forth wrote:
> > > Please, no more evil
> > Yes, no more evil indeed!
> Wow! I used to think Hugh was the only real nutter here and others were whipped into a frenzy by his insanity. Clearly there are others who don't know how to have a civil discourse so as to discuss a topic rationally and can be every bit as insane as Hugh. I've made excuses for some, but clearly I was wrong in doing that. I don't even want to reply to his post as he may see that as an attack like Hugh does.
>
> What can I say? Each of our posts speak for ourselves.

This post really speaks for itself --- TROLL!
Rick Collins is making an unprovoked attack against me in the hopes of getting an angry response.
He is also making an attack on the RCA 1802 processor in the hopes of getting an angry response.
Nobody has an emotional investment in any of this, so he won't get his desired angry response.

I had to look up the RCA 1802 on Wikipedia to get an overview. It seems somewhat quirky.
Quirky processors are part of micro-controller programming. On a scale of 1 to 10, the
MiniForth was a 10 --- the RCA 1802 was maybe 5, the 8051 4, the 6502 or Z80 were a 3,
the MC68000 was a 2 (having separate address and data registers), the 8086 was a 2
(having segment registers), the ARM was a 2 (conditional execution), the MSP430 or PIC24 a 1, etc..

If you can't handle a quirky ISA, just become a C programmer and ignore efficiency.

Jurgen Pitaske

Nov 29, 2021, 10:45:53 AM
On Thursday, 5 August 2021 at 13:29:45 UTC+1, Stephen Pelc wrote:
> On 5 Aug 2021 at 06:46:41 BST, "Brad Eckert" <hwf...@gmail.com> wrote:
> >
> > Perhaps Forthers treat programming the way the French treat food. Would that
> > make C the equivalent of English cuisine?
> Please, USAnian cuisine - lots of fat and sugar.
>
> Stephen

As Christmas is close - is there any news regarding this processor system?

After all of the news that Greg gave us on zoom recently - the race is on ...

Rick C

Nov 29, 2021, 11:17:01 AM
On Friday, July 2, 2021 at 7:49:54 AM UTC-4, Stephen Pelc wrote:
> An MPE client is currently designing a new dual-stack machine. The
> predicted performance is 6 GHz (instructions per second). 40 CPUs
> occupy less than 1 square mm.
>
> It's for real, and they have a paying client for it.
>
> Depending on life, there may be more information at EuroForth 21 in
> Rome in September. I have my EU Covid passport already.

Yeah, Life! So 4000 of these CPUs would fit on a 1 cm square chip, which could be used to implement a real-time (or faster than real-time) implementation of the Game of Life.

Was anything presented at EuroForth? I assume not since we didn't hear about it. I'm wondering what the app is and how the chip differs from the GA device. I assume since it has an application it won't have the same "lack of purpose" limitations?

--

Rick C.

--- Get 1,000 miles of free Supercharging
--- Tesla referral code - https://ts.la/richard11209
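Rick's die-area joke above checks out arithmetically. A minimal sketch (CPU density taken from Stephen's figures; the 1 cm die and the idea of one grid patch per CPU are purely hypothetical):

```python
# Back-of-envelope check of the figures in this thread: 40 CPUs per
# square mm (Stephen's number), on a hypothetical 1 cm x 1 cm die.
CPUS_PER_MM2 = 40
DIE_MM2 = 10 * 10              # 1 cm square = 100 mm^2
print(CPUS_PER_MM2 * DIE_MM2)  # 4000 CPUs

def life_step(grid):
    """One generation of Conway's Life on a toroidal list-of-lists grid."""
    h, w = len(grid), len(grid[0])
    nxt = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # Count the eight neighbours, wrapping at the edges.
            n = sum(grid[(y + dy) % h][(x + dx) % w]
                    for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                    if (dy, dx) != (0, 0))
            nxt[y][x] = 1 if n == 3 or (n == 2 and grid[y][x]) else 0
    return nxt
```

The per-cell rule is tiny, which is why Life maps so naturally onto an array of minimal CPUs, each owning a patch of the grid.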

Stephen Pelc

Dec 1, 2021, 6:37:44 AM
On 29 Nov 2021 at 16:45:52 CET, "Jurgen Pitaske" <jpit...@gmail.com> wrote:

> As Christmas is close - is there any news regarding this processor system?
>
> After all of the news that Greg gave us on zoom recently - the race is on ...

I would love to say more, but we're under an NDA.

Stephen
--
Stephen Pelc, ste...@vfxforth.com
MicroProcessor Engineering Ltd - More Real, Less Time
133 Hill Lane, Southampton SO15 5AF, England
tel: +44 (0)23 8063 1441, +44 (0)78 0390 3612, +34 649 662 974
http://www.mpeforth.com - free VFX Forth downloads

Jurgen Pitaske

Dec 1, 2021, 8:46:03 AM
I am still under a few NDAs from the past - so I know.

Could your customer at least agree to give a rough year / quarter when there is some news?
This would stop nagging here.
Your first post was in July 2021, so a rough guess would be 6 months for design, and then 6 months to get first silicon; so more news in the middle of next year?

Stephen Pelc

Dec 2, 2021, 6:04:20 AM
On 1 Dec 2021 at 14:46:02 CET, "Jurgen Pitaske" <jpit...@gmail.com> wrote:

> I am still under a few NDAs from the past - so I know.
>
> Could your customer at least agree to give a rough year / quarter when there
> is some news?
> This would stop nagging here.
> Your first post was in July 2021, so a rough guess would be 6 months for
> design, and the 6 months to get first silicon - so more news middle of next
> year?

You make a reasonable assumption, but to quote Sir Humphrey, "I couldn't
possibly comment".

Rick C

Dec 2, 2021, 8:15:57 AM
On Thursday, December 2, 2021 at 7:04:20 AM UTC-4, Stephen Pelc wrote:
> On 1 Dec 2021 at 14:46:02 CET, "Jurgen Pitaske" <jpit...@gmail.com> wrote:
>
> > I am still under a few NDAs from the past - so I know.
> >
> > Could your customer at least agree to give a rough year / quarter when there
> > is some news?
> > This would stop nagging here.
> > Your first post was in July 2021, so a rough guess would be 6 months for
> > design, and the 6 months to get first silicon - so more news middle of next
> > year?
> You make a reasonable assumption, but to quote Sir Humphrey, "I couldn't
> possibly comment".

But I thought you just did!

--

Rick C.

--+ Get 1,000 miles of free Supercharging
--+ Tesla referral code - https://ts.la/richard11209

Wayne morellini

Dec 2, 2021, 11:54:48 AM
I've posted on the YouTube video about the QDCA; researchers are starting to talk about tens of terahertz in future, playing with the clocking, which is what I was planning on doing. The clock stabilisation mechanism greatly slows it down. But they are already making other advances. Magnetic Quantum Dot Cellular Automata was the only technology that fit GA's immediate goals and fab problems. But it was all work they could have done a decade before last, to run at their current speeds at a fraction of the energy. This means a million processors at 64 bits, all running equivalent to 500 MHz plus, was a possibility. There would be a heap of background applications and IoT for that sort of technology, scaled up or down.

But GA is not producing anything but specialist applications. At this stage it doesn't matter, because it's not for us. Bring on 6 GHz, 60 GHz. If it was MQDCA, it could do that at a lot less energy than the GA. People seem to irrationally focus on the will, not the way, when it's the figuring out that is the way. This was something they could have spent time figuring out. The technology they are using is end-of-life technology, and not as interesting as this, which was a beginning-of-life technology. It's just a shame things didn't work out, as 500 MHz was not sexy, but practical. Just that M/QD has much wider potential due to its near-perfect power efficiency. You should be able to 3D print them. Though it is not ideal, it hopefully will be practical.

Rick C

Dec 3, 2021, 2:04:23 AM
Yeah, that's stack machines for you. Lots and lots of research, but no one is designing them to be used to solve applications problems. The GA144 was a technology that required a solution to how to use it before anyone could consider what apps it might be used for. In other words, it was a solution looking for a problem and never really found a good match.

--

Rick C.

-+- Get 1,000 miles of free Supercharging
-+- Tesla referral code - https://ts.la/richard11209

Paul Rubin

Dec 3, 2021, 4:38:05 AM
Wayne morellini <waynemo...@gmail.com> writes:
> Magnetic Quantum Dot Cellular Automata, was the only technology that
> fit GA's immediate goals and fab problems.

I don't think anyone has made a microprocessor from quantum dots, I
don't see stack architectures as being especially suited or unsuited for
them, and my impression is that GA's chip worked as intended using
conventional fab techniques. It just didn't solve a problem that
couldn't be handled by more conventional means.

Wayne morellini

Dec 3, 2021, 6:54:31 AM
Yes, I don't think I ever heard of such a thing. After the 500 MHz limitation, everything went quiet, except for FPGA work with the occasional talk about it coming out, but I never found those either (I would see a new article, then try to find it again, and not be able to relocate it). But FPGA is a lot faster than that too now. And you literally could fit in maybe a million more circuits for the same peak energy load (though those are really old figures and maybe it's a lot lower in real life). People were probably interested in speed. But magnetic computing is one of the main end technologies in computing. I had hoped to get closer to 1-2 THz. These guys are talking about tens of THz. You max out most applications. But I see no reason that it doesn't suit a stack machine. But with the brain damage now, I'm missing something.

You can basically make such a test processor out of ping pong balls, wires and magnets. It's one of the test sample technologies in the book "Design and Test of Digital Circuits by Quantum-Dot Cellular Automata" by Fabrizio Lombardi and Jing Huang, which I have on my bookshelf in front of me. Now, that's an interesting one for teaching processor design to high school students, seeing something like that in action with the balls swinging around.

Andy Valencia

Dec 3, 2021, 1:07:09 PM
Rick C <gnuarm.del...@gmail.com> writes:
> ... The GA144 was a
> technology that required a solution to how to use it before anyone could
> consider what apps it might be used for. In other words, it was a solution
> looking for a problem and never really found a good match.

I briefly, with my network processor hat on, looked at how to morph GA144 to
become a decent solution. The short version was: nothing which the people
with their hand on the GA144 rudder would ever consider.

My memory is that each node, after overhead, is good for less than 100 bytes
of RAM? A GA144 degenerates into a really hard-to-use 14 KB part. It
has some true innovations, but new ideas need to morph as they meet reality,
and I never saw that happen.

Andy Valencia
Home page: https://www.vsta.org/andy/
To contact me: https://www.vsta.org/contact/andy.html
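Andy's "14 KB" figure above is easy to sanity-check. The per-node constants below are my recollection of the GA144's F18A node (64 words of 18-bit RAM), so treat them as assumptions rather than datasheet values:

```python
# Rough arithmetic behind the "14 KB part" remark; the per-node
# numbers are assumptions from memory, not verified datasheet facts.
NODES = 144
RAM_WORDS_PER_NODE = 64        # 18-bit words of RAM per node (assumed)
BITS_PER_WORD = 18

raw_bytes = NODES * RAM_WORDS_PER_NODE * BITS_PER_WORD // 8
print(raw_bytes)               # 20736 bytes, about 20 KB raw

USABLE_BYTES_PER_NODE = 100    # Andy's "less than 100 bytes after overhead"
print(NODES * USABLE_BYTES_PER_NODE)  # 14400, i.e. the ~14 KB figure
```

The gap between the raw total and the usable total is the per-node overhead Andy mentions: code for the node's own housekeeping and inter-node communication eats into each 64-word store before any application data fits.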

Paul Rubin

Dec 3, 2021, 1:40:16 PM
Wayne morellini <waynemo...@gmail.com> writes:
> [Quantum dots] But, you literally could fit in maybe 1 million more
> circuitry for the same peak energy load... These guys are talking
> about 10's of Thz. You max out most applications. But, I see no
> reason that it doesn't suit a stack machine.

What I mean is that this is science fiction technology at the moment.
If it becomes viable for microprocessors, then it would presumably be
fine for stack machines, but also for non-stack machines.

Rick C

Dec 3, 2021, 4:30:47 PM
I agree, but I think the real issue is why chase pie in the sky implementations at 10s of THz when just a few GHz would be a significant improvement? It is normal for technology to proceed in steps rather than great leaps forward.

In general the world has rejected stack machines for many, many years. What stack machines really need is a stack machine application, not a stack machine implementation.

--

Rick C.

-++ Get 1,000 miles of free Supercharging
-++ Tesla referral code - https://ts.la/richard11209

Wayne morellini

Dec 3, 2021, 6:25:04 PM
Thanks Andy.

I've also tried in the past. I've successfully done it with top companies, seeing good changes to the usability of products. But when people get into their own thinking, and misconstrue things, it's time to walk. I was dying, so why waste my time.

Minimalist equals knick-knack. You have to have a balanced level of minimalism to get the maximum benefit. I even came up with minimalist proposals for a multi-tier memory interface, cross-linked auto communications, and simple caching, replacing lots of instruction execution on more advanced workloads, for a lot less energy. If only one chip had a direct-execution 640 KB address space (the external cache memory chip spoken of decades back), as I hoped, it would have made things a lot better. I've had a problem with brain disease, so I'm not as good as I used to be, but I have spent a lot of time in the past exploring alternative execution design. I've also considered an 8-bit alternative to the chip, with a lot of execution features. But I can't remember them, though I think I have a design document somewhere. This was accompanied by 16- and 32-bit versions, from memory. It's all rubbish after I tried to design decimal logic gates instead years ago. That's something I've held close to my chest, but the damage was setting in. The way I was trying to do the levels was difficult. However, my collection of design mechanisms would make a great processor, even today. I still think 10 GHz+ could be done on silicon.

It has been the easiest time for GA to design a successful processor. All they needed to do was target Android and JavaScript, and sell into that market. That market covers many devices. So, while they might not have been producing top mobile phone chips, there are lesser products that use Java/JavaScript, even cheap phone chipsets. After sales in those areas, they could have afforded to move up to better mobile chipsets.

I can tell you one thing. For the workloads they are looking at with the glasses. They should be dusting off the advanced 32 bit processor proposal. Which would be something you guys are interested in!


Wayne.

Wayne morellini

Dec 3, 2021, 6:35:47 PM
I'm not saying that they should do a THz processor. I'm saying that QDCA is a decent bet. They should look at the 500 MHz+ advances and do a few-GHz version. When the THz, if it ever does, gets worked out, they can move on to that. No science fiction involved; it's what scientists are actually working towards. It's business: you plan for the future and take a bet on which direction to start taking steps in. It's often not clear when using external innovations.

Stack machines aren't the problem. The ShBoom and the RTX went on to have success. It's the implementation we are concerned about.

Paul Rubin

Dec 3, 2021, 6:53:02 PM
Rick C <gnuarm.del...@gmail.com> writes:
> In general the world has rejected stack machines for many, many years.
> What stack machines really need is a stack machine application, not a
> stack machine implementation.

Yeah the main attraction of stack machines (if there is one at all) is
very tiny implementation for a useful amount of processing. That can be
helpful for deeply embedded micropower things, or on resource limited
FPGA's. For example there's a new FPGA with 1000 LUT4's that will
supposedly cost around 50 cents per unit. I don't think a RISC-V soft
cpu will fit in there, but a single GA or b16-like cpu node might fit.
That would let you compute some stuff and also have enough left over for
some logic functions.
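To give a feel for how small the dual-stack programming model under discussion is, here is a toy interpreter. It is purely illustrative: the opcode names and encoding are invented for this sketch and do not correspond to the GA144, the b16, or any real part.

```python
# A toy dual-stack (MISC-style) machine: a data stack for operands and
# a return stack for call/return. Opcodes are invented for illustration.
def run(program, data=None):
    ds, rs = list(data or []), []      # data stack, return stack
    pc = 0
    while pc < len(program):
        op, *arg = program[pc]
        pc += 1
        if op == "lit":    ds.append(arg[0])           # push a literal
        elif op == "dup":  ds.append(ds[-1])
        elif op == "drop": ds.pop()
        elif op == "swap": ds[-1], ds[-2] = ds[-2], ds[-1]
        elif op == "+":    ds.append(ds.pop() + ds.pop())
        elif op == "*":    ds.append(ds.pop() * ds.pop())
        elif op == ">r":   rs.append(ds.pop())         # move to return stack
        elif op == "r>":   ds.append(rs.pop())
        elif op == "call": rs.append(pc); pc = arg[0]
        elif op == "ret":
            if not rs:                 # returning from top level: halt
                break
            pc = rs.pop()
        elif op == "jz":               # jump if top of stack is zero
            if ds.pop() == 0:
                pc = arg[0]
    return ds

# Square 7 via a subroutine at address 3 (dup *):
prog = [("lit", 7), ("call", 3), ("ret",),
        ("dup",), ("*",), ("ret",)]
print(run(prog))  # [49]
```

Note that a call costs one return-stack push and a return one pop, with no register save/restore, which is the cheap-factoring argument made elsewhere in the thread; the whole decoder is a handful of cases, which is why such a core can plausibly fit where a RISC-V soft CPU cannot.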

Rick C

Dec 3, 2021, 8:31:55 PM
You talk about planning as if it were inevitable these things will get designed and built. Where does the money come from? With no track record to speak of the hard part is finding someone who wants to start spending millions and millions of dollars on totally unproven design ideas.


> Stack machines aren't the problem. Shboom, and rtx went on to have success. It's implementation we are concerned about.

What success? They may have found a few design wins. I think the RTX gets used in space apps because it is rad hard (very hard to come by in general). There's nothing about this pedigree that would attract the sort of investor who will pay for such grandiose chips.

--

Rick C.

+-- Get 1,000 miles of free Supercharging
+-- Tesla referral code - https://ts.la/richard11209

Wayne morellini

unread,
Dec 3, 2021, 11:26:41 PM12/3/21
to
On Saturday, December 4, 2021 at 11:31:55 AM UTC+10, gnuarm.del...@gmail.com wrote:
> On Friday, December 3, 2021 at 6:35:47 PM UTC-5, Wayne morellini wrote:
> > On Saturday, December 4, 2021 at 7:30:47 AM UTC+10, gnuarm.del...@gmail.com wrote:
> > > On Friday, December 3, 2021 at 1:40:16 PM UTC-5, Paul Rubin wrote:
> > > But, I see no
> > > > > reason that it doesn't suit a stack machine.
> > > > What I mean is that this is science fiction technology at the moment.
> > > > If it becomes viable for microprocessors, then it would presumably be
> > > > fine for stack machines, but also for non-stack machines.
> > > I agree, but I think the real issue is why chase pie in the sky implementations at 10s of THz when just a few GHz would be a significant improvement? It is normal for technology to proceed in steps rather than great leaps forward.
> > >
> > > In general the world has rejected stack machines for many, many years. What stack machines really need is a stack machine application, not a stack machine implementation.
> > >
> > > --
> > I'm not saying that they should do a Thz processor. I'm saying that QDCA is a descent bet. That they should look at the 500mhz+ advances and do a few GHz version. When the Thz, if it ever does, gets worked out,, they can move onto that. No science fiction involved, it's what scientists are actually working towards. It's business, you plan for the future and take a bet on which direction to start taking steps into. It's often not clear when using external innovations.
> You talk about planning as if it were inevitable these things will get designed and built. Where does the money come from? With no track record to speak of the hard part is finding someone who wants to start spending millions and millions of dollars on totally unproven design ideas.

Wherever it:s conventional silicon or not, it requires money. It's about where they will be in time, even 5. It's about commerce survival, and having the edge. They have to have something in offer for people to buy.

> > Stack machines aren't the problem. Shboom, and rtx went on to have success. It's implementation we are concerned about.
> What success? They may have found a few design wins. I think the RTX gets used in space apps because it is rad hard (very hard to come by in general). There's nothing about this pedigree that would attract the sort of investor who will pay for such grandiose chips.

Those were significant designs of their day. Again, there is only a normal-level design being talked about at this stage. If they earn money, then they can advance to better designs. At the moment, it's not optimal, so something has to change to continue.

Marcel Hendrix

unread,
Dec 4, 2021, 4:42:46 AM12/4/21
to
On Saturday, December 4, 2021 at 12:53:02 AM UTC+1, Paul Rubin wrote:
[..]
> Yeah the main attraction of stack machines (if there is one at all) is
> very tiny implementation for a useful amount of processing. That can be
> helpful for deeply embedded micropower things, or on resource limited
> FPGA's. For example there's a new FPGA with 1000 LUT4's that will
> supposedly cost around 50 cents per unit. I don't think a RISC-V soft
> cpu will fit in there, but a single GA or b16-like cpu node might fit.
> That would let you compute some stuff and also have enough left over for
> some logic functions.

By the time you have finished that, that FPGA has a successor with
16,000 LUT4's and as a company you have no follow-up to fend off
the RISC-V guys. There might be a possibility when a time comes
that it is impossible to increase instructions/Watt or the number
of LUTs/mm^2.

-marcel

Paul Rubin

unread,
Dec 4, 2021, 5:25:03 AM12/4/21
to
Marcel Hendrix <m...@iae.nl> writes:
> By the time you have finished that, that FPGA has a successor with
> 16,000 LUT4's

It's not a hugely complicated thing to put a small cpu and some logic
into an fpga. There will not be a 16000 LUT, 50 cent FPGA anytime soon.
There has not been such a thing as a 50 cent FPGA at all in the past, as
far as I know. When there is one with 16000 luts for 50 cents, the one
with 1000 luts will be 5 cents. This is still important for some kinds
of products. Yes you can buy 50 cent microprocessors with 10s of
kilobytes of ram, but if your application only needs a few dozen bytes,
you can use a 3 cent processor.

https://jaycarlson.net/2019/09/06/whats-up-with-these-3-cent-microcontrollers/

Wayne morellini

unread,
Dec 4, 2021, 8:13:52 AM12/4/21
to
Paul, I like to look at things from different angles, to get different modalities and different results. For instance, how fast can they make these things go these days? I remember the resistive alternative, which I was waiting to use before it was taken off the market: 5 GHz a decade before last, and it seemed to indicate there was possibly headroom to 20 GHz (how hot, I don't know). Such small dies allow massive heat sinking, where you could overclock a basic 32-bit processor design up beyond the equivalent of gigahertz. That is a much quicker way to achieve similar results than doing a custom processor (ship a product with the FPGA to profit, then implement the FPGA processor in custom silicon). Many ways to skin a ... I had been waiting for the magnetic QDCA FPGA instead.

Brian Fox

unread,
Dec 4, 2021, 9:40:43 AM12/4/21
to
Does Java qualify as a stack machine application?

I remember that the ShBoom CPU was capable of running Java at near-native speed.
Nobody cared or...
the entire go-to-market strategy was flawed and/or underfunded (my guess).

Having lived on both the engineering dept. side and the "make $XX million with this
product or you are fired!" side, I find I have to remind the tech side that a better
product only gives you permission to begin doing business.
Business success comes from money and psychology,
i.e. the "art" of marketing (one-to-many)
and sales (one-to-one) communication.

Rick C

unread,
Dec 4, 2021, 10:24:59 AM12/4/21
to
On Friday, December 3, 2021 at 11:26:41 PM UTC-5, Wayne morellini wrote:
> On Saturday, December 4, 2021 at 11:31:55 AM UTC+10, gnuarm.del...@gmail.com wrote:
> > On Friday, December 3, 2021 at 6:35:47 PM UTC-5, Wayne morellini wrote:
> > > On Saturday, December 4, 2021 at 7:30:47 AM UTC+10, gnuarm.del...@gmail.com wrote:
> > > > On Friday, December 3, 2021 at 1:40:16 PM UTC-5, Paul Rubin wrote:
> > > > But, I see no
> > > > > > reason that it doesn't suit a stack machine.
> > > > > What I mean is that this is science fiction technology at the moment.
> > > > > If it becomes viable for microprocessors, then it would presumably be
> > > > > fine for stack machines, but also for non-stack machines.
> > > > I agree, but I think the real issue is why chase pie in the sky implementations at 10s of THz when just a few GHz would be a significant improvement? It is normal for technology to proceed in steps rather than great leaps forward.
> > > >
> > > > In general the world has rejected stack machines for many, many years. What stack machines really need is a stack machine application, not a stack machine implementation.
> > > >
> > > > --
> > > I'm not saying that they should do a Thz processor. I'm saying that QDCA is a descent bet. That they should look at the 500mhz+ advances and do a few GHz version. When the Thz, if it ever does, gets worked out,, they can move onto that. No science fiction involved, it's what scientists are actually working towards. It's business, you plan for the future and take a bet on which direction to start taking steps into. It's often not clear when using external innovations.
> > You talk about planning as if it were inevitable these things will get designed and built. Where does the money come from? With no track record to speak of the hard part is finding someone who wants to start spending millions and millions of dollars on totally unproven design ideas.
> Wherever it:s conventional silicon or not, it requires money. It's about where they will be in time, even 5. It's about commerce survival, and having the edge. They have to have something in offer for people to buy.

Sorry, I don't understand what you are saying, literally.


> > > Stack machines aren't the problem. Shboom, and rtx went on to have success. It's implementation we are concerned about.
> > What success? They may have found a few design wins. I think the RTX gets used in space apps because it is rad hard (very hard to come by in general). There's nothing about this pedigree that would attract the sort of investor who will pay for such grandiose chips.
> Those where significant designs of their day. Again, there is only normal level design being talked about at this stage. If they earn money, then they can advance to better designs. At the moment, its not optimal, so something has to change to continue.

Significant to whom? They were tiny blips on the RADAR screen. I'm still waiting for someone to show any real world advantages to stack processor chips in the real world of today.

The GA144 attempted to be the universal peripheral laden MCU, but failed in being an MCU at all. The proponents talked about how low power the individual processors were and the low power when not processing, but the programming was so complex they came up with a virtual machine implementation that negated the power savings. In fact, everything they provided was a dollar short of being useful to a user.

Partly, the problem of the GA144 was overcoming the entrenchment of conventional processors, but that is the world at this point. Talking about some power savings or speed advantage or even the flexibility of peripherals is not of much use if it solves a problem the users don't have while creating problems users don't have with conventional solutions.

So how would money be made from such designs?

--

Rick C.

+-+ Get 1,000 miles of free Supercharging
+-+ Tesla referral code - https://ts.la/richard11209

Rick C

unread,
Dec 4, 2021, 10:30:00 AM12/4/21
to
I think you missed something. He is not talking about arbitrary devices. He specifically mentioned a $0.50 FPGA. I don't know whose chip this is, but I vaguely recall someone announcing that. You won't find a 16,000 LUT FPGA at that price for more than a decade, if even then. Even when you put a RISC-V into a low priced FPGA you will find it runs much, much slower than a small stack machine and uses a lot more power. I believe there have been some impressive implementations of RISC-V in FPGAs that use very little space. However, they run very slowly.

--

Rick C.

++- Get 1,000 miles of free Supercharging
++- Tesla referral code - https://ts.la/richard11209

Wayne morellini

unread,
Dec 4, 2021, 10:36:22 AM12/4/21
to
You are right, but it includes a process of steps and strategy.
A good product only gives you something to sell. I remember, years ago, being told that paradigm shifts require a 10x improvement to be accepted; that makes the product a lot more marketable. Some of the issues with ShBoom were market changes: HP and PowerPC took away the target workstation company, Apollo. Then there was the whole Java thing itself, which didn't really take off. JavaScript has now taken hold somewhat. Java is still OK, but Android and JavaScript have taken the limelight. I think JavaScript is more or less fine for many consumer-side things.

The biggest market is phones and tablets. On the lower end are JavaScript phones based on a fork of the Firefox OS, KaiOS. For a lot of third-world gadgets, JavaScript is OK. This leads to an opportunity for a common architecture for an Android-like phone chipset, with or without phone features (2+ chips), where the chips are suitable for phones, gadgets, and the Internet of Things. Of course, you don't have to use JavaScript or C; you can use MISC programming to get more out of it. Maybe a third, more basic chip or two might also be helpful. But these are a common architecture with some features left out or included, fabbed on the same die. A good place to start is with one or two basic chips, and work towards better chips from there. For us, we can use it as MISC, and be hired as the assembler-like performance programmers. Live translation of JavaScript to native code is probably going to be good, but MISC better. So we are talking about the full-screen $10 phone market with billions of potential customers; $30 phones, even better and superior to other options; and $100 phones, which might be 100 million potential customers, just using one chip. If you can reach 10% of those sales, that pays for a future chip generation, where you can aim at good cheap mid-range phones and gaming phones for normal games.
The problem is that they have run out their patent advantage without taking advantage of the now costly, hot phone-chipset market, so anybody can do it. However, they are the experts to go to and hire for this, so they still have potential in the area. Low-end phones could be a lot better, and even mid-range phones are too highly priced; that is a market vulnerability. I'm not talking about giant leaps here; I'm talking about reading the land and taking small steps, working up to giant leaps. If only they had done this 20 years ago we would have had the leap, but there wasn't one, so now there are small steps again, and maybe a multi-GHz version of an enhanced array with some 32-bit chips, when funded. The current prices of high-end phones are driven, in part, by the prices of the top chipsets. This elevator has room under it, and as much as they might try to reverse its direction, its architecture can't get to the lowest floors well. I think.

Wayne morellini

unread,
Dec 4, 2021, 10:40:58 AM12/4/21
to
OK, basically simple: just overclocking potential to get to a higher speed, despite the power. It means you have an interim alternative step for trying a stack design. It's business.

Rick C

unread,
Dec 4, 2021, 10:41:06 AM12/4/21
to
No, there will never be 5 cent FPGAs. The low end price of FPGAs is limited by testing costs. FPGAs have very large numbers of interconnections (contrary to what Hugh thinks) and many bitstream loads are required to test them all. That's why you can save money on quantity buys by specifying your design and allowing the chips you buy to be tested only on the fabric in use. This detracts from the flexibility of after sale updates, but that's often not used anyway and this allows the FPGA cost to approach the cost of an ASIC.

I didn't see anything in the Padauk article about how they achieve the price, but it is entirely possible they do it by not testing the parts, or maybe by testing them in parallel while on the wafer.

--

Rick C.

+++ Get 1,000 miles of free Supercharging
+++ Tesla referral code - https://ts.la/richard11209

Rick C

unread,
Dec 4, 2021, 10:45:57 AM12/4/21
to
I'm sorry, but I only understand about 10% of what you write. I often can't identify the verbs in your sentences. I don't get any of this.

--

Rick C.

---- Get 1,000 miles of free Supercharging
---- Tesla referral code - https://ts.la/richard11209

Wayne morellini

unread,
Dec 4, 2021, 11:00:13 AM12/4/21
to
On Sunday, December 5, 2021 at 1:24:59 AM UTC+10, gnuarm.del...@gmail.com wrote:
> On Friday, December 3, 2021 at 11:26:41 PM UTC-5, Wayne morellini wrote:
> > On Saturday, December 4, 2021 at 11:31:55 AM UTC+10, gnuarm.del...@gmail.com wrote:
> > > On Friday, December 3, 2021 at 6:35:47 PM UTC-5, Wayne morellini wrote:
> > > > On Saturday, December 4, 2021 at 7:30:47 AM UTC+10, gnuarm.del...@gmail.com wrote:
> > > > > On Friday, December 3, 2021 at 1:40:16 PM UTC-5, Paul Rubin wrote:
> > > > > But, I see no
> > > > > > > reason that it doesn't suit a stack machine.
> > > > > > What I mean is that this is science fiction technology at the moment.
> > > > > > If it becomes viable for microprocessors, then it would presumably be
> > > > > > fine for stack machines, but also for non-stack machines.
> > > > > I agree, but I think the real issue is why chase pie in the sky implementations at 10s of THz when just a few GHz would be a significant improvement? It is normal for technology to proceed in steps rather than great leaps forward.
> > > > >
> > > > > In general the world has rejected stack machines for many, many years. What stack machines really need is a stack machine application, not a stack machine implementation.
> > > > >
> > > > > --
> > > > I'm not saying that they should do a Thz processor. I'm saying that QDCA is a descent bet. That they should look at the 500mhz+ advances and do a few GHz version. When the Thz, if it ever does, gets worked out,, they can move onto that. No science fiction involved, it's what scientists are actually working towards. It's business, you plan for the future and take a bet on which direction to start taking steps into. It's often not clear when using external innovations.
> > > You talk about planning as if it were inevitable these things will get designed and built. Where does the money come from? With no track record to speak of the hard part is finding someone who wants to start spending millions and millions of dollars on totally unproven design ideas.
> > Wherever it:s conventional silicon or not, it requires money. It's about where they will be in time, even 5. It's about commerce survival, and having the edge. They have to have something in offer for people to buy.
> Sorry, I don't understand what you are saying, literally.

It's pretty simple. The spell auto corrector trashed the sentence without me realising. Maybe where they will be in 5 years. About commercial survival. That's all.

> > > > Stack machines aren't the problem. Shboom, and rtx went on to have success. It's implementation we are concerned about.
> > > What success? They may have found a few design wins. I think the RTX gets used in space apps because it is rad hard (very hard to come by in general). There's nothing about this pedigree that would attract the sort of investor who will pay for such grandiose chips.
> > Those where significant designs of their day. Again, there is only normal level design being talked about at this stage. If they earn money, then they can advance to better designs. At the moment, its not optimal, so something has to change to continue.
> Significant to whom? They were tiny blips on the RADAR screen. I'm still waiting for someone to show any real world advantages to stack processor chips in the real world of today.

Well, objectively, they were out there and successful.
That's it.


> The GA144 attempted to be the universal peripheral laden MCU, but failed in being an MCU at all. The proponents talked about how low power the individual processors were and the low power when not processing, but the programming was so complex they came up with a virtual machine implementation that negated the power savings. In fact, everything they provided was a dollar short of being useful to a user.

Now, that is a historical blip. What you say is true.

> Partly, the problem of the GA144 was overcoming the entrenchment of conventional processors, but that is the world at this point. Talking about some power savings or speed advantage or even the flexibility of peripherals is not of much use if it solves a problem the users don't have while creating problems users don't have with conventional solutions.

They have been operating by presenting solutions to businesses. So, for them, in that way, it has been an employment opportunity. The businesses have seen a potential there, and we wouldn't even know if it was in the bionic eye, or their hearing aids.

> So how would money be made from such designs?

What they have been doing, yes, but it is a scary design for people, and the regular ARM is more comfortable. In the 1980s it would have been a great design, even in the 1990s, but it really needed the 18-bit 640 kB address space, even back then, for at least one processor with access, if not most or all of them. I wonder why, for nearly two decades. I remember you talked about doing software radio with it, but it was just too out there and restrictive for a modern high-data-rate format. My recent design proposals are suitable for that, but this needs to work at lower data rates, where a custom ASIC can dominate it. I am concentrating on how changing tack might produce a better marketable product.

Wayne morellini

unread,
Dec 4, 2021, 11:02:55 AM12/4/21
to
Don't worry about it.

Rick C

unread,
Dec 4, 2021, 11:41:29 AM12/4/21
to
On Saturday, December 4, 2021 at 11:00:13 AM UTC-5, Wayne morellini wrote:
> On Sunday, December 5, 2021 at 1:24:59 AM UTC+10, gnuarm.del...@gmail.com wrote:
> > On Friday, December 3, 2021 at 11:26:41 PM UTC-5, Wayne morellini wrote:
> > > On Saturday, December 4, 2021 at 11:31:55 AM UTC+10, gnuarm.del...@gmail.com wrote:
> > > > On Friday, December 3, 2021 at 6:35:47 PM UTC-5, Wayne morellini wrote:
> > > > > On Saturday, December 4, 2021 at 7:30:47 AM UTC+10, gnuarm.del...@gmail.com wrote:
> > > > > > On Friday, December 3, 2021 at 1:40:16 PM UTC-5, Paul Rubin wrote:
> > > > > > But, I see no
> > > > > > > > reason that it doesn't suit a stack machine.
> > > > > > > What I mean is that this is science fiction technology at the moment.
> > > > > > > If it becomes viable for microprocessors, then it would presumably be
> > > > > > > fine for stack machines, but also for non-stack machines.
> > > > > > I agree, but I think the real issue is why chase pie in the sky implementations at 10s of THz when just a few GHz would be a significant improvement? It is normal for technology to proceed in steps rather than great leaps forward.
> > > > > >
> > > > > > In general the world has rejected stack machines for many, many years. What stack machines really need is a stack machine application, not a stack machine implementation.
> > > > > >
> > > > > > --
> > > > > I'm not saying that they should do a Thz processor. I'm saying that QDCA is a descent bet. That they should look at the 500mhz+ advances and do a few GHz version. When the Thz, if it ever does, gets worked out,, they can move onto that. No science fiction involved, it's what scientists are actually working towards. It's business, you plan for the future and take a bet on which direction to start taking steps into. It's often not clear when using external innovations.
> > > > You talk about planning as if it were inevitable these things will get designed and built. Where does the money come from? With no track record to speak of the hard part is finding someone who wants to start spending millions and millions of dollars on totally unproven design ideas.
> > > Wherever it:s conventional silicon or not, it requires money. It's about where they will be in time, even 5. It's about commerce survival, and having the edge. They have to have something in offer for people to buy.
> > Sorry, I don't understand what you are saying, literally.
> It's pretty simple. The spell auto corrector trashed the sentence without me realising. Maybe where they will be in 5 years. About commercial survival. That's all.

Still not following. "Maybe where they will be in 5 years" is not a sentence, no verb. Whose commercial survival???


> > > > > Stack machines aren't the problem. Shboom, and rtx went on to have success. It's implementation we are concerned about.
> > > > What success? They may have found a few design wins. I think the RTX gets used in space apps because it is rad hard (very hard to come by in general). There's nothing about this pedigree that would attract the sort of investor who will pay for such grandiose chips.
> > > Those where significant designs of their day. Again, there is only normal level design being talked about at this stage. If they earn money, then they can advance to better designs. At the moment, its not optimal, so something has to change to continue.
> > Significant to whom? They were tiny blips on the RADAR screen. I'm still waiting for someone to show any real world advantages to stack processor chips in the real world of today.
> Well, objectively they were out there and successful
> That's it.

Were they? I don't know what definition of "successful" you are using. I suppose you could call the RTX successful in that they sold more than a handful, but what happened with the Shboom that would be called "success"???


> > The GA144 attempted to be the universal peripheral laden MCU, but failed in being an MCU at all. The proponents talked about how low power the individual processors were and the low power when not processing, but the programming was so complex they came up with a virtual machine implementation that negated the power savings. In fact, everything they provided was a dollar short of being useful to a user.
> Now, that is a historical blip. What you say is true.
> > Partly, the problem of the GA144 was overcoming the entrenchment of conventional processors, but that is the world at this point. Talking about some power savings or speed advantage or even the flexibility of peripherals is not of much use if it solves a problem the users don't have while creating problems users don't have with conventional solutions.
> They have been operating on presenting solutions to businesses. So, for them. In that way, it has been an employment opportunity. The businesses hi neatly have seen a potential there, and we wouldn't even know if it was in the bionic eat, or their hearing aids here.

I wasn't aware that anyone at GA was actually an employee in the sense of drawing a significant salary. If the company were selling any real quantity of parts, they would report the sales even if not the customer. I think they bought some thousands of chips and are still working on selling those.


> > So how would money be made from such designs?
> What they have been doing, yes, but it is scary design for people, and the regular arm is more comfortable. In the 1980's, it would have been a great design, even in the 1990's, but it really needed the 18 bit 640kB address space, even back then. At least one processor with access, if not most or all of them. Now. I wonder why for nearly two decades. I remember, you talked about doing software radio with it, but it was just st too our there and restrictive for a modern high datarate format. My recent designs proposals are suitable for that, but this needs to work at lower data rates. Where a custom asic can dominate it. I am concentrating on how changing tac might produce a better marketable product.

In the 1980s feature sizes crossed 1 um. The GA144 would have been a much larger chip (around a square inch) and run much more slowly.

The GA144 could implement a software radio easily. It samples at software determined rates up to MHz. You might be able to tune the FM band, but the AM band for sure. I don't recall the frequencies used for hand held unlicensed radios in the US, but they are probably in the UHF, so not as practical.

To create a product you typically start with the requirements and look for technology to implement it. The GA144 was a technology experiment to see what the chip could do with no application in mind. Maybe the device being designed now by another company will have a purpose.

--

Rick C.

---+ Get 1,000 miles of free Supercharging
---+ Tesla referral code - https://ts.la/richard11209

Anton Ertl

unread,
Dec 4, 2021, 1:06:27 PM12/4/21
to
Rick C <gnuarm.del...@gmail.com> writes:
>No, there will never be 5 cent FPGAs. The low end price of FPGAs is
>limited by testing costs.

That might be an opportunity for a minimal-area processor core (e.g.,
something like the b16): Instead of testing the FPGA on an expensive
testing machine, put a low-area core on the FPGA, and it does the
testing. Of course you still need a testing machine that puts power
to the die, gives the testing command and reads the result of testing,
but that could be much cheaper than the more powerful testing machines
used now.

- anton
--
M. Anton Ertl http://www.complang.tuwien.ac.at/anton/home.html
comp.lang.forth FAQs: http://www.complang.tuwien.ac.at/forth/faq/toc.html
New standard: http://www.forth200x.org/forth200x.html
EuroForth 2021: https://euro.theforth.net/2021

Paul Rubin

unread,
Dec 4, 2021, 5:44:32 PM12/4/21
to
Brian Fox <bria...@brianfox.ca> writes:
> Does Java qualify as a stack machine application?

No it really doesn't. The JVM is a stack virtual machine but high
performance applications translate the JVM code into register machine
code. Also I believe (not sure) that the JVM stack can be indexed like
an array, so it is not a true stack with only LIFO access (other than
the top few elements).
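A toy version of that translation (the opcodes are invented, not real JVM bytecode): in straight-line code the stack depth at each instruction is known statically, so a JIT can map every stack slot to a fixed virtual register and emit three-address code with no runtime stack traffic.

```python
def stack_to_register(code):
    """Rewrite stack ops as three-address ops; slot N becomes register rN."""
    out, depth = [], 0
    for op, *args in code:
        if op == "push":
            out.append(("mov", f"r{depth}", args[0]))
            depth += 1
        elif op in ("add", "mul"):        # binary ops pop two, push one
            depth -= 1
            out.append((op, f"r{depth-1}", f"r{depth-1}", f"r{depth}"))
    return out

# (a + b) * c as stack code...
stack_code = [("push", "a"), ("push", "b"), ("add",),
              ("push", "c"), ("mul",)]
# ...becomes register code: mov r0,a / mov r1,b / add r0,r0,r1 / ...
for insn in stack_to_register(stack_code):
    print(insn)
```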

Paul Rubin

unread,
Dec 4, 2021, 5:50:29 PM12/4/21
to
Rick C <gnuarm.del...@gmail.com> writes:
> The GA144 could implement a software radio easily. It samples at
> software determined rates up to MHz.... I don't recall the frequencies
> used for hand held unlicenced radios in the US, but they are probably
> in the UHF, so not as practical.

Software radios often have conventional mixer stages, so the SDR part is
digital demodulation of the mixer output. The sample rate doesn't have
to be anywhere near as high as the carrier.
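A numeric sketch of that point, with arbitrary example frequencies: the mixer multiplies the RF signal against a local oscillator, producing difference and sum terms; an analog low-pass keeps only the difference, so the ADC and CPU deal with a 100 kHz IF rather than a 100 MHz carrier.

```python
import math

carrier_hz = 100e6    # RF carrier -- never sampled directly
lo_hz      = 99.9e6   # local oscillator
if_hz      = carrier_hz - lo_hz   # 100 kHz intermediate frequency
sample_hz  = 1e6      # 1 MS/s is ample for a 100 kHz IF

# cos(wc t) * cos(wlo t) = 1/2 cos((wc-wlo)t) + 1/2 cos((wc+wlo)t);
# in a real receiver an analog low-pass ahead of the ADC removes the
# sum term, leaving only the slow difference term to digitize.
t = [n / sample_hz for n in range(1000)]
mixed = [math.cos(2 * math.pi * carrier_hz * x) *
         math.cos(2 * math.pi * lo_hz * x) for x in t]

print(if_hz)  # 100000.0 -- the only frequency the digital side tracks
```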

> The GA144 was a technology experiment to see what the chip could do
> with no application in mind.

Or certainly without a clear enough picture of how to implement the
envisioned application. The underpants gnome school of design.