
Whereabouts of Charles H. Moore?


Liang Ng

Sep 2, 2017, 9:18:58 PM
Whereabouts of Charles H. Moore?

Along with Satoshi Nakamoto, Charles H. Moore has become the most mysterious programmer on my list.

I have not been able to find recent web results regarding Moore.

Can anyone shed some light?

Elizabeth D. Rather

Sep 2, 2017, 9:59:12 PM
He has said that he is retiring from active participation in
Forth-related activities. We wish him well in his retirement.

Cheers,
Elizabeth

--
Elizabeth D. Rather
FORTH, Inc.
6080 Center Drive, Suite 600
Los Angeles, CA 90045
USA

hughag...@gmail.com

Sep 2, 2017, 11:18:31 PM
On Saturday, September 2, 2017 at 6:59:12 PM UTC-7, Elizabeth D. Rather wrote:
> On 9/2/17 3:18 PM, Liang Ng wrote:
> > Whereabouts of Charles H. Moore?
> >
> > Along with Satoshi Nakamoto, Charles H. Moore has become the most mysterious programmer on my list.
> >
> > I have not been able to find recent web results regarding Moore.
> >
> > Can anyone shed some light?
>
> He has said that he is retiring from active participation in
> Forth-related activities. We wish his well in his retirement.

He said this to you, personally?
That seems very unlikely --- he left Forth Inc. in 1982 --- why would he be talking to you at all?

You always claim to be in communication with famous people. I think that you are making this stuff up --- they aren't talking to you.

You are a name-dropper.

JUERGEN

Sep 10, 2017, 1:28:09 PM
Elizabeth, all the best to Chuck, but is there any person or group that will take care of the OKAD package for designing chips in the future?
As far as I understand, Chuck is the only one who knows how to use it, so if he retires, that leaves nobody, and the GA144 is the last chip designed using it. And no re-design of this chip either.
Greg had told me in the past that this software is the jewel in the GA crown ...

Elizabeth D. Rather

Sep 10, 2017, 7:12:31 PM
As far as I know, GreenArrays is still operating, and I assume that Greg
and probably others have learned OKAD. Greg would be the one to ask.

rickman

Sep 11, 2017, 2:21:06 AM
I'm pretty sure others at GA know how to use the design package. I believe
they were working on a 32-bit version of the F18A. Has anyone heard news of
that project? It seems to have been running for a long time; it may be
dead by now.

--

Rick C

Viewed the eclipse at Wintercrest Farms,
on the centerline of totality since 1998

gavino himself

Sep 13, 2017, 4:25:44 PM
Hugh, be polite.

billand...@gmail.com

Sep 16, 2017, 8:21:18 AM

> Can anyone shed some light?

Pretty bad if Gavino calls you out on your stupid post.

I see Gregg Bailey on the team at Minerva. Wondering if there is a hardware opportunity there for Greenarrays.

https://minerva.com/#team

JUERGEN

Sep 16, 2017, 2:59:40 PM
It might be worth checking then, or having it confirmed, whether this is still a valid product, as the target technology might be obsolete. Sorry, not being negative, just realistic, having worked in silicon and ASICs in the past.

JUERGEN

Sep 16, 2017, 3:02:12 PM
I just sent an email to Chuck and Greg at Greenarrays this morning - let's see what happens.

rickman

Sep 16, 2017, 5:31:01 PM
If by "target technology" you are referring to the semiconductor process
used to make the chips, Green Arrays uses a "mature" process of 180 nm for
the GA144. This feature size was first used in 1999, nearly 20 years ago
and is already obsolete by any measure. It is offered by a few foundries
because the equipment is fully amortized and they can charge for making
devices based only on the operating costs (which are also low). In other
words, if you can't afford anything better, they will still sell parts to
you made on this very outdated node.

http://www.greenarraychips.com/home/documents/greg/180nm.htm

This is not unique to Green Arrays although they are carrying the idea to an
extreme. Many MCUs are made on older technology nodes for two reasons. One
is to minimize the cost of the fab line which can also lower the cost of the
die depending on the amount of logic in the design. A large, complex chip
will need to be made in a more modern technology to keep the area low. A
chip that is already small in an old technology won't be any cheaper made
with a smaller feature size, as the pad size can limit the die size. But few
chips sold today are made at 180 nm; 130 nm, 90 nm, or even 65 nm is more
common for MCUs and other small devices.

That said, 180 nm will still be available at some foundries for some time to
come.
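The pad-limit point above can be put as a back-of-envelope calculation. Everything below is hypothetical (wafer prices, die sides, the assumed 4x logic shrink); it only illustrates why a pad-limited die doesn't get cheaper on a newer node:

```python
import math

def die_cost(wafer_cost, die_side_mm, wafer_diameter_mm=300):
    """Crude cost per die: wafer cost divided by dies that fit by area."""
    wafer_area = math.pi * (wafer_diameter_mm / 2) ** 2
    dies_per_wafer = int(wafer_area / die_side_mm ** 2)
    return wafer_cost / dies_per_wafer

# Hypothetical: logic area shrinks ~4x from 180 nm to 90 nm, but the pad
# ring fixes a minimum die side of 2 mm on either node.
logic_side_180, logic_side_90, pad_limit = 2.0, 1.0, 2.0
side_180 = max(logic_side_180, pad_limit)
side_90 = max(logic_side_90, pad_limit)    # pad-limited: die doesn't shrink

cost_180 = die_cost(wafer_cost=1500, die_side_mm=side_180)
cost_90 = die_cost(wafer_cost=4000, die_side_mm=side_90)  # newer node, pricier wafer
print(f"180 nm: ${cost_180:.3f}/die   90 nm: ${cost_90:.3f}/die")
```

Same die size on both nodes, so the newer node's higher wafer price only raises the per-die cost.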

Paul Rubin

Sep 16, 2017, 8:47:56 PM
rickman <gnu...@gmail.com> writes:
> If by "target technology" you are referring to the semiconductor
> process used to make the chips, Green Arrays uses a "mature" process
> of 180 nm for the GA144.

I thought Juergen might be referring to OKAD itself, but I can't speak
for him.

Jeff Fox says GA used 180 nm because, at least at the time, that was the
node with the lowest leakage current. I know CMP (cmp.imag.fr) still
offers MOSIS-like multi-project wafers in 0.35 micron.

> But few chips sold today are made at 180 nm; 130 nm, 90 nm, or even
> 65 nm is more common for MCUs and other small devices.

It looks a lot more expensive to get started with.

Elizabeth D. Rather

Sep 16, 2017, 9:17:49 PM
Minerva has been Greg's company since the 1960's. For years it consisted
of a small number of long-time customers for whom he developed and
maintained a high-volume, high security transaction processing network,
based on a native (and highly adapted) implementation of polyFORTH. At
least one of them had/has ~1,000 users. I hadn't realized they had
gotten into cryptocurrency!

We were discussing security in another thread... Greg probably knows
more about security than anyone in the Forth community. Among other
things, he wrote all the security algorithms for the smart card project
in the 90's.

Since he's been the managing director of Greenarrays for some years, I'm
sure that if he knows of any application for their technology he's using it!

rickman

Sep 16, 2017, 9:25:33 PM
Yeah, that's why people design chips in old technologies: cost, both
recurring and non-recurring. But I wouldn't expect that to be a barrier to
using something a bit more modern, like 90 nm or even 65 nm, if there is a
reasonably sized market for the device.

I expect the real reason for going with 180 nm is the non-recurring costs.
Leakage is not so significant an issue for a chip that is being used.
Leakage is a problem if the chip is *very* large, so the leakage current is
very high, or if the chip is largely shut down doing nothing, in which case
it is hard to justify needing 144 processors. The GA144 doesn't have
power-down modes; either a CPU is running or it's not. Even having one node
running uses 4 mA. With all nodes suspended, the chip draws tens of uA,
which is high compared to most MCUs.
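Taking those figures at face value (4 mA with one node running; the suspended figure is assumed here to be 30 uA, since only "10's of uA" is given), a quick duty-cycle average shows where low leakage would and wouldn't matter. The duty cycles are made up for illustration:

```python
ACTIVE_MA = 4.0        # one node running (figure from the post)
SUSPENDED_MA = 0.03    # "10's of uA" -> assume 30 uA

def avg_current_ma(duty_cycle):
    """Average draw for a node active duty_cycle of the time, suspended otherwise."""
    return duty_cycle * ACTIVE_MA + (1 - duty_cycle) * SUSPENDED_MA

for duty in (1.0, 0.1, 0.001):
    print(f"duty {duty:6.3f}: {avg_current_ma(duty):.4f} mA average")
```

Only at very low duty cycles does the suspended current dominate, which matches the point that low static current isn't much of a feature for a chip that is actually running.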

The GA144 is a *very* imbalanced chip. The contrast between its raw
performance and its *many* shortcomings is just too great for the device to
be viable in all but a very few applications. Heck, most people can't
figure out how to create a design with it. Keeping the static current low
was not much of a feature for a chip with so many limitations. But then,
using a 65 nm technology for this chip wouldn't solve any of its problems
either.

Elizabeth D. Rather

Sep 16, 2017, 9:27:30 PM
On 9/16/17 2:21 AM, billand...@gmail.com wrote:
>
Minerva is an outgrowth of Athena (same goddess), which has been Greg's
company since the 1960's. For years Athena consisted of a small number
of long-time customers for whom he developed and maintained a
high-volume, high security transaction processing network, based on a
native (and highly adapted) implementation of polyFORTH. At least one of
them had/has ~1,000 users. I hadn't realized he had gotten into
cryptocurrency!

We were discussing security in another thread... Greg probably knows
more about computer security than anyone in the Forth community. Among
other things, he wrote all the security algorithms for the smart card
project in the 90's. From his bio on the Minerva site: "During the past
half century Greg's work through ATHENA has covered a wide variety of
applications, such as network protocols, cryptography, military
intelligence fusion, real-time control algorithms, directed energy
weapon system battle management, real-time transaction processing,
cartography, and physical security systems."

rickman

Sep 16, 2017, 9:37:06 PM
Elizabeth D. Rather wrote on 9/16/2017 9:17 PM:
>
> Since he's been the managing director of Greenarrays for some years, I'm
> sure that if he knows of any application for their technology he's using it!

The oddest thing about Green Arrays is that they don't cite any applications
that use the chip. You would think if they had many customers there would
be at least one who would be touting how they are using such a unique chip
in their product.

Elizabeth D. Rather

Sep 16, 2017, 9:57:51 PM
As luck would have it, Greg just phoned me, because he says Greenarrays
has been getting messages (presumably from some of you) about Chuck's
status. Greg assures me that Chuck is not retired; he is still doing
work for Greenarrays. However, he has withdrawn his online presence
because it was taking too much of his time. Greg says he is planning to
be at Forth Day this year and looks forward to seeing some of you there.

JUERGEN

Sep 17, 2017, 10:17:23 AM
I agree with all the points you make, and would just add one: for analog interfaces, larger feature sizes work better. But chip size is cost.
I should have made my point clearer in the other post:
Is the company / node where the GA144 was made still available, and is the process unchanged after so many years? Unlikely (but possible), as processes are always optimized, except for very special military ones that have to last forever ...
So, if this exact process at this exact company is not available anymore, your masks can go in the bin - all processes are different, even at the same feature size. Second-sourcing the same process in other facilities, even within the same company, has been a problem in the past, but anyway.
And if the masks are gone, we have a completely new scenario - then GA could go anywhere, including 10 nm, or a shuttle run, just part of a wafer
- but then plus the cost of a new mask set - and many more masks than in the past
- and a new OKAD system, if adaptable
- or else use standard tools
- or NOW the FPGA solution I had been hoping for - no mask cost, memory of any size, and an ASIC if the application pays for it.

Liang Ng

Sep 18, 2017, 12:46:33 AM
It is interesting how this thread is turning into a computer architecture discussion -- which was my passion once, now just a remote hobby.

But then, I might revive my interest.

Looking at how consumer electronics evolve, consumers will demand bigger and bigger television and monitor screens. Putting graphics chips directly into monitors might also solve the heat dissipation problem.

The other application that would take up lots of CPU power is robotics. If chips can be laid out in a 2D array, why not a 3D array?

Your opinions?

Andrew Haley

Sep 18, 2017, 4:23:28 AM
Liang Ng <lsn...@gmail.com> wrote:

> It is interesting how this thread is turning into a computer
> architecture discussion -- which was my passion once, now just a
> remote hobby.
>
> But then, I might revive my interests.
>
> Looking at how consumer electronics evolve, consumers will demand
> bigger and bigger television or monitor screen.

Up to the limit of resolution, sure. Beyond that is VR, but we know
that the eye only scans a tiny part of the visual field, so there's
plenty of room for economization.

> The other application that would take up lots of CPU power would be
> robotics. If chips can be laid in 2D array, why not 3D array?

Heat is the usual problem. IBM has the densest CPU package I've seen,
and that requires quite a lot of plumbing. :-)

Andrew.

Alex

Sep 18, 2017, 5:31:01 AM
On 18-Sep-17 09:23, Andrew Haley wrote:
> Liang Ng <lsn...@gmail.com> wrote:

>
>> The other application that would take up lots of CPU power would be
>> robotics. If chips can be laid in 2D array, why not 3D array?
>
> Heat is the usual problem. IBM has the densest CPU package I've seen,
> and that requires quite a lot of plumbing. :-)
>
> Andrew.
>

From a storage perspective, there's also 64-layer 3D NAND at 3 bits per
cell, which is 3D in a couple of aspects: increased layers over planar,
more bits per cell, and finer etchings (down to 15 nm).

Etching "holes" through multiple layers is hard but not impossible at
reasonable layer counts, but we're close to the limit of materials
physics. There's also the issue of shrinkage to a handful of electrons
in a single cell. At these scales we're well into quantum-effects
territory.

There's talk of stacking the stacks ("string stacking") in a single
package, which is another aspect of 3Ding these things, but the industry
hasn't done that yet. Plus, getting the wiring right in 3D structures isn't easy.

The problems at nano scale are very hard to solve. We're approaching
several physical scaling limits that are absolute and can't be passed
very rapidly indeed. I'm reminded of an old telecoms adage; bandwidth
you can fix with engineers, but to fix latency you need the help of God.

--
Alex

Ilya Tarasov

Sep 18, 2017, 7:47:30 AM
On Sunday, September 17, 2017 at 17:17:23 UTC+3, JUERGEN wrote:
Moving to new nodes, 28 nm and below, is VERY hard and complex. To meet topology requirements, an RTL designer must clearly understand the challenges that come with advanced nodes. There are many issues with the GA144 chip, and the major ones are:
1. Is this design purely synchronous? From the overview this is not clear.
2. OKAD seems to be useless because it has no DRC (Design Rule Checking). This is an enormously important part of advanced nodes, with tons of tricky physical effects (from diffraction and EM crosstalk to even quantum effects, starting from approximately 20 nm and below). Drawing GDSII from scratch, as in OKAD, will definitely lead you to failure.
3. System-level modelling becomes more and more important as chip size grows. We may have 1000 cores, but how do we manage them all? How can we dispatch tasks, exchange data, etc. with so many cores? We should multiply cores by clock speed only after getting answers to those questions.

Liang Ng

Sep 18, 2017, 9:23:07 AM
I am the author of Fifth / 5GL (the Fifth Generation Graph Language), an intermediary scripting language that translates and simplifies JavaScript and PHP (theoretically applicable to ALL programming languages), based on Reverse Polish Notation syntax, hence the pun on Forth.

https://www.linkedin.com/pulse/glava-webgl-actor-viewer-architecture-%E4%BC%8D%E6%A8%91%E7%9B%9B-%E5%8D%9A%E5%A3%AB-liang-ng-ph-d-?published=t

http://5gws.epizy.com/glx/glavax/trackball.html?i=1

The novelty I introduced is a JSON Input Output Memory (JSIOM), where each RPN (Forth-like) command stores its output as a JSON object, to be used by the next command as input. In fact, as memory is no longer a luxury, the output of ALL commands can be accessed by any command.
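As described, the JSIOM idea works roughly like the Python sketch below: each command writes its output into a shared JSON-serializable store, and any later command can read any earlier output. The command names and store layout here are invented for illustration and are not taken from Fifth itself:

```python
import json

jsiom = {}  # shared memory: command name -> last output (JSON-serializable)

def run(name, func, *arg_names):
    """Run a command whose inputs are the stored outputs of earlier commands."""
    args = [jsiom[a] for a in arg_names]
    jsiom[name] = func(*args)
    return jsiom[name]

# A tiny RPN-flavored pipeline: load -> scale -> sum
run("load", lambda: {"values": [1, 2, 3]})
run("scale", lambda d: {"values": [v * 10 for v in d["values"]]}, "load")
run("sum", lambda d: {"total": sum(d["values"])}, "scale")

print(json.dumps(jsiom, indent=2))   # every intermediate result is still visible
```

The `jsiom` dict plays the role of the shared memory; nothing is popped, so every intermediate result stays addressable by any later command.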

Due to a bug/feature of how this free website is set up, you can view all the code by omitting the filename and exploring the directory:

http://5gws.epizy.com/glx/

http://5gws.epizy.com

In short, Fifth would be a way to attract younger programmers to Forth: give them something they can use to program the 2D/3D web with a Forth-like syntax. Then, eventually, port the web apps using low-level Forth drivers.

In the example above, I used Fifth to simplify three.js (WebGL). The future implications of this are, I hope:

i) the Fifth / Forth-like language used to "translate" WebGL can be mapped directly to a low-level OpenGL driver;

ii) the VR/3D industry may take the next decade to grow before it saturates. During this time, chips will spread out over 2D geometry (large screen displays, VR goggles).

Demand for 3D chip layouts may begin after 2025 or 2030, when human-level artificial intelligence needs sufficient training data in the VR/3D world to emerge.

When we have 3D chips, robots may keep them within carbon fibre bones, much as humans evolved to place the brain in the head, the highest place in the anatomy, where dangers are minimized.


Alex

Sep 18, 2017, 10:25:56 AM
On 18-Sep-17 10:31, Alex wrote:
> We're approaching several physical scaling limits that are absolute and
> can't be passed very rapidly indeed.

I missed out a few commas.

We're approaching several physical scaling limits, that are absolute and
can't be passed, very rapidly indeed.



--
Alex

Albert van der Horst

Sep 18, 2017, 12:27:27 PM
In article <ALqdnWcY0KZEWyjE...@supernews.com>,
My guess is still this. I remember reading somewhere that Chuck Moore said
his laptop died. I imagine it was the only laptop that could read the
floppies OKAD was on or run OKAD's graphics. Furthermore, there are no
docs of OKAD to speak of, be it user manuals or design documents.

So OKAD is lost forever, with only a lacking chip to remember it by.

It reminds me of SHRDLU, a famous early artificial intelligence program.
It took a lot of effort to re-establish those early results, and
I'm not sure they even succeeded.


Groetjes Albert
--
Albert van der Horst, UTRECHT,THE NETHERLANDS
Economic growth -- being exponential -- ultimately falters.
albert@spe&ar&c.xs4all.nl &=n http://home.hccnet.nl/a.w.m.van.der.horst

rickman

Sep 18, 2017, 11:28:09 PM
First, I don't know what fab or process GA uses for the GA144, but I'd bet
it is still available. If they picked a nearly 20-year-old process, it will
very *likely* be around for some time to come.

To think that the loss of the ability to make the exact same chip would
somehow justify the use of a bleeding-edge process is a bit out of line with
reality. While going to 130 or 90 or even 65 nm would cost a fair bit of
change, it is a paltry sum compared to making a single set of 15 nm masks.
I don't think 10 nm is anything but a wet dream, even at Intel, at the moment.
Regardless, the non-recurring engineering (NRE) costs for modern processes
are *ginormous*, while older processes have much lower NRE costs.

rickman

Sep 19, 2017, 12:01:04 AM
Ilya Tarasov wrote on 9/18/2017 7:47 AM:
>
> Moving to new nodes, 28 nm and below, is VERY hard and complex. To meet topology requirements, an RTL designer must clearly understand the challenges that come with advanced nodes. There are many issues with the GA144 chip, and the major ones are:
> 1. Is this design purely synchronous? From the overview this is not clear.

Some call the GA144 asynchronous, while others say it is synchronous with
locally generated clocks. Each CPU runs independently, with a clock created
from delay lines of differing delays selected by the instruction being run.
So the cycle time depends on the instruction, and each processor runs
independently. There are no global clock lines.
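That per-instruction timing can be modelled in a few lines. The instruction names and delay values below are invented; the real F18A delays are not quoted in this thread:

```python
# Toy model: each instruction selects a delay, so a node's "clock" period
# varies with what it executes. All delay values are hypothetical.
DELAY_NS = {"nop": 1.5, "add": 1.5, "fetch": 5.0, "store": 5.0}

def run_time_ns(program):
    """Total execution time of an instruction sequence on one node."""
    return sum(DELAY_NS[op] for op in program)

prog = ["fetch", "add", "add", "store"]
print(f"{run_time_ns(prog):.1f} ns")   # the memory ops dominate this sequence
```

The point is simply that with no global clock, total time is a sum of per-instruction delays rather than a cycle count times a fixed period.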


> 2. OKAD seems to be useless because it has no DRC (Design Rule Checking). This is an enormously important part of advanced nodes, with tons of tricky physical effects (from diffraction and EM crosstalk to even quantum effects, starting from approximately 20 nm and below). Drawing GDSII from scratch, as in OKAD, will definitely lead you to failure.

The approach in OKAD is to understand the details of the process well enough
to construct basic transistors and interconnect, then wire them together to
produce larger blocks, in a similar way to constructing an application in
Forth from low-level words working upward. I have no idea how well this
would work in a cutting-edge process, but there is a lot less dependency on
the tools finding your errors when your design is as simple as the F18A.


> 3. System-level modelling becomes more and more important as chip size grows. We may have 1000 cores, but how do we manage them all? How can we dispatch tasks, exchange data, etc. with so many cores? We should multiply cores by clock speed only after getting answers to those questions.

These problems seem to exist in the present GA144 chip. Funny, in FPGAs
with 10,000 logic elements we don't have this problem of "dispatch" and
scheduling. You don't need to schedule logic when it is not multitasking,
and dispatch is automatic: inputs change, outputs update. A chip with
10,000 F18A cores could be designed in a similar way, where you don't worry
about keeping cores busy all the time, or even worry if a core is used as
nothing more than a wire.
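That wiring-not-scheduling style can be sketched as nodes connected by channels, firing whenever input arrives. The topology and node functions below are invented for illustration, with one node doing nothing but forwarding (a core used as a wire):

```python
from collections import deque

class Node:
    """A core that fires whenever input arrives and pushes output downstream."""
    def __init__(self, func, downstream=None):
        self.func, self.downstream = func, downstream
        self.inbox = deque()

    def send(self, value):
        self.inbox.append(value)
        while self.inbox:                      # fire on arrival, no scheduler
            out = self.func(self.inbox.popleft())
            if self.downstream:
                self.downstream.send(out)

results = []
sink = Node(lambda v: results.append(v) or v)
wire = Node(lambda v: v, downstream=sink)      # a node used purely as a wire
doubler = Node(lambda v: v * 2, downstream=wire)

for v in (1, 2, 3):
    doubler.send(v)
print(results)   # [2, 4, 6]
```

No node is "dispatched"; each one simply reacts to data on its input channel, and the wire node costs nothing but latency.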

In fact, a chip like the GA1000 might be able to do something that FPGAs
have found difficult: partial reconfiguration on the fly. F18A CPUs can do
some things that allow them to be updated easily. They can execute code
arriving on a comms port. It might be practical to pull code from an
external repository to reconfigure the logic of CPU nodes on the fly.

billand...@gmail.com

Sep 22, 2017, 9:51:48 PM
> Minerva is an outgrowth of Athena (same goddess), which has been Greg's
> company since the 1960's. ...

> Cheers,
> Elizabeth

Thanks Elizabeth. I didn't know any of this history.

I watched Greg's live broadcast. I recollect he said they were planning on taking on some apprentices. Terrific opportunity for the right people.