Google Groups no longer supports new Usenet posts or subscriptions. Historical content remains viewable.

What is greenarrays up to lately?

1,057 views

polymorph self

Mar 12, 2015, 12:54:08 AM
Can anyone speak to some forth working in the field apps greenarrays has achieved?

What kind of projects are they up to?

JUERGEN

Mar 15, 2015, 5:08:14 PM
On Thursday, March 12, 2015 at 4:54:08 AM UTC, polymorph self wrote:
> Can anyone speak to some forth working in the field apps greenarrays has achieved?
>
> What kind of projects are they up to?

It seems not much, as the whole clf community does not know anything - and even Greenarrays does not care to give any comment.

It seems that the core used in the Green Arrays Project is not relevant to the whole CLF community.

I can understand that the HW is too expensive to try it out,
and Greenarrays does not use any of the existing SW emulations either.

But the existing Forth community does not take it seriously, otherwise a simulation in Forth of the core would have been done.

The only guess I could have is that the military applications pay for the salaries of the people involved.

So part of the next generation star wars - or Mars?
Well just guessing - it seems too many NDAs involved?

hughag...@gmail.com

Mar 15, 2015, 10:10:35 PM
On Sunday, March 15, 2015 at 2:08:14 PM UTC-7, JUERGEN wrote:
> On Thursday, March 12, 2015 at 4:54:08 AM UTC, polymorph self wrote:
> > Can anyone speak to some forth working in the field apps greenarrays has achieved?

> > What kind of projects are they up to?
> The only guess I could have is that the military applications pay for the salaries of the people involved.
>
> So part of the next generation star wars - or Mars?
> Well just guessing - it seems too many NDAs involved?

The only application that I can think of for such a highly-parallel low-level system is code-breaking --- the number of possible keys can be narrowed down, but you still have a lot of possible keys and they just have to be exhaustively searched to find the one that works --- massive parallelism is the way to go, but each processor can be quite simple as it is just doing a simple logical operation.
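The search pattern described above - partition a reduced keyspace across many simple workers, each applying the same cheap test to its slice - can be sketched in a few lines. This is a hypothetical illustration with a toy XOR "cipher" and sequential "workers"; nothing here is GA144-specific.

```python
# Toy exhaustive key search: many simple workers, one cheap test each.
# The 16-bit XOR "cipher" and all names are illustrative only.

def check(key, plain, cipher):
    """Does this key decrypt the ciphertext back to the plaintext?"""
    return all(p ^ key == c for p, c in zip(plain, cipher))

def search(slice_range, plain, cipher):
    """One worker's share of the keyspace."""
    for key in slice_range:
        if check(key, plain, cipher):
            return key
    return None

secret = 0xBEEF
plain = [1, 2, 3, 4]
cipher = [p ^ secret for p in plain]

# Partition the 2**16 keys among 8 "workers"; run them here sequentially.
found = None
for w in range(8):
    found = search(range(w * 8192, (w + 1) * 8192), plain, cipher) or found
print(hex(found))  # → 0xbeef
```

On real parallel hardware each slice would run on its own node; the point is that the per-worker logic stays trivially simple.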

GA would be terrible for "star wars" missile-defense, or a Mars mission --- both of these primarily involve fast precise motion-control (the MiniForth/RACE would be a better choice) --- there is no need for parallel processing.

Albert van der Horst

Mar 16, 2015, 2:03:53 PM
In article <42d25675-3529-4aa7...@googlegroups.com>,
JUERGEN <epld...@aol.com> wrote:
>On Thursday, March 12, 2015 at 4:54:08 AM UTC, polymorph self wrote:
>> Can anyone speak to some forth working in the field apps greenarrays
>> has achieved?
>>
>> What kind of projects are they up to?
>
>It seems, not much, as the whole clf community does not know anything -
>and even Greenarrays does not care to give any comment.
>
>It seems that the core used in the Green Arrays Project is not relevant
>to the whole CLF community.
>
>I can understand that the HW is too expensive to try it out,
>and Greenarrays does not use any of the existing SW emulations either.
>
>But the existing Forth community does not take it seriously, otherwise a
>simulation in Forth of the core would have been done.

It has been done, and it is running the parpi program, where the
actual hardware doesn't. See
http://arrayfactor.org/

>
>The only guess I could have is that the military applications pay for
>the salaries of the people involved.
>
>So part of the next generation star wars - or Mars?
>Well just guessing - it seems too many NDAs involved?
>
--
Albert van der Horst, UTRECHT,THE NETHERLANDS
Economic growth -- being exponential -- ultimately falters.
albert@spe&ar&c.xs4all.nl &=n http://home.hccnet.nl/a.w.m.van.der.horst

Jan Coombs <Jan-54 >

Mar 16, 2015, 3:42:16 PM
On Sun, 15 Mar 2015 14:08:11 -0700 (PDT)
JUERGEN <epld...@aol.com> wrote:

> On Thursday, March 12, 2015 at 4:54:08 AM UTC, polymorph self wrote:
> > Can anyone speak to some forth working in the field apps greenarrays has achieved?
> >
> > What kind of projects are they up to?
>
> It seems, not much, as the whole clf community does not know anything - and even Greenarrays does not care to give any comment.
>
> It seems that the core used in the Green Arrays Project is not relevant to the whole CLF community.
>
> I can understand that the HW is too expensive to try it out,
> and Greenarrays does not use any of the existing SW emulations either.

It is possible to add link ports to the j1 or b16 processors so that
they can be linked into an array.

The processor can then boot from a link port. I've just started testing
this on the j1, but the b16 is also close.

> But the existing Forth community does not take it seriously, otherwise a simulation in Forth of the core would have been done.

It's a bit more exciting if the simulation can export the design to the FPGA tools; then you are closer to having a chip.

Jan Coombs.
--
email valid, else fix dots and hyphen
jan4clf2014@murrayhyphenmicroftdotcodotuk


polymorph self

Mar 17, 2015, 12:58:53 AM
robotic factories will set humanity free!!

esp ones building houses far apart large and cheap

and growing fruit in huge robofarms!

I'd love to see a forth pc built from GreenArrays shit!! maybe bill gates would hire a hit squad tho

JUERGEN

Mar 17, 2015, 3:19:58 PM
Polymorph, what the hell are you talking about? It does not make sense
Or are you just a Greenarrays based RLG - Random Word Generator?
Not Forth Words in this context.

rickman

Mar 17, 2015, 4:24:15 PM
On 3/15/2015 5:08 PM, JUERGEN wrote:
> On Thursday, March 12, 2015 at 4:54:08 AM UTC, polymorph self wrote:
>> Can anyone speak to some forth working in the field apps greenarrays has achieved?
>>
>> What kind of projects are they up to?
>
> It seems, not much, as the whole clf community does not know anything - and even Greenarrays does not care to give any comment.
>
> It seems that the core used in the Green Arrays Project is not relevant to the whole CLF community.
>
> I can understand that the HW is too expensive to try it out,
> and Greenarrays does not use any of the existing SW emulations either.

The hardware is not expensive. I have two or three GA144 chips on
boards from http://www.schmartboard.com/ I haven't fired them up for
lack of a truly interesting project...


> But the existing Forth community does not take it seriously, otherwise a
> simulation in Forth of the core would have been done.

I don't follow that reasoning. Actually there have been several
simulators produced. I would expect the one by GA themselves was done
in Forth although I don't know that for sure. Someone else did one that
was better in some regard, but I don't recall what it was. They did a
demo for the SVFIG monthly meeting once. If you want to search through
their videos or maybe contact someone there they might be able to point
you to the right video.


> The only guess I could have is that the military applications pay for
> the salaries of the people involved.

You don't need the military to pay salaries. Last I heard no one takes
a salary... Trust me, there are no military projects using the GA144.
GA as a company doesn't meet the minimum requirements for their devices
being used in a military project.


> So part of the next generation star wars - or Mars?
> Well just guessing - it seems too many NDAs involved?

For sure no Mars projects without a *lot* more work. To go on a space
craft the chips have to be rad hardened.

Last I heard they had a project to produce a 32 bit version of the
CPU... an F32 I suppose. We'll see if they fix any of the many utility
issues of the GA144.

--

Rick

Jason Damisch

Mar 17, 2015, 4:24:20 PM
On Tuesday, March 17, 2015 at 12:19:58 PM UTC-7, JUERGEN wrote:

> Polymorph, what the hell are you talking about? It does not make sense
> Or are you just a Greenarrays based RLG - Random Word Generator?
> Not Forth Words in this context.

Another name for Polymorph Self is Gavino.

Jason

rickman

Mar 17, 2015, 4:26:53 PM
On 3/16/2015 3:42 PM, Jan Coombs <Jan-54 wrote:
>
> It's a bit more exciting if the simulation can export the design to
> the FPGA tools, then you are closer to having a chip.

I'm not sure what that means. What design? The chip or the code? I'm
not sure why the simulator is the right tool to be producing code. The
simulator is used to run the code you have produced.

--

Rick

JUERGEN

Mar 17, 2015, 4:57:46 PM
Sorry Rick, is there a link that works? I used yours and it got me nowhere useful, sorry. No GA, no price.

Which simulators in Forth are there - sorry - no links again - no info.
If you comment - please add helpful info - no grey area.
I was asking the whole CLF community - and it seems there is no relevant info.

So you are just supporting my assumption - no known projects.

As I assumed we are all guessing - no official information. Possibly NDA covered projects.

Thanks for the feedback.
Or do we rather wait for the F64, F128 ...

My ASIC background tells me 3-6 months for mask sets, another 3-6 months for chips - so we are talking 2016 / 2017.

Paul Rubin

Mar 17, 2015, 5:36:54 PM
JUERGEN <epld...@aol.com> writes:
>> The hardware is not expensive. I have two or three GA144 chips on
>> boards from http://www.schmartboard.com/
> Sorry Rick, Is there a link that works? I used yours and it got me
> nowhere useful, sorry. No GA, no price.

http://www.schmartboard.com/index.asp?page=products_csp&id=532

> Which Simulators in Forth are there - sorry - no links again - no
> info.

I believe the arrayforth simulator is written in forth, but its source
code is not public. You can download a windows binary from here:

http://www.greenarraychips.com/home/support/download-02b.html

Paul Rubin

Mar 17, 2015, 7:37:01 PM
Paul Rubin <no.e...@nospam.invalid> writes:
> I believe the arrayforth simulator is written in forth, but its source
> code is not public. You can download a windows binary from here:
>
> http://www.greenarraychips.com/home/support/download-02b.html

Actually the source may be here:

https://github.com/DRuffer/arrayForth

rickman

Mar 17, 2015, 11:16:10 PM
I don't know what to tell you. I tried it and the link works fine.
Takes you right to the Schmartboard home page.


> Which Simulators in Forth are there - sorry - no links again - no info.
> If you comment - please add helpful info - no grey area.
> I was asking the whole CLF community - and it seems there no relevant info.

Sorry. I don't have links handy. If you want to find the GA simulator
you just download the tools. I can't give you any more clear
instructions than that.

I already told you about the other simulator. It was presented at the
SVFIG monthly talks and you can either contact someone with SVFIG who
may be able to point you to the presentation on youtube or they may have
an online index of their monthlies.


> So you are just supporting my assumption - no known projects.

GA has indicated that they have sold chips to customers who use them in
products. They have provided no details that I am aware of. One
alleged app involved a rather complicated hearing aid algorithm that
maxed out a pair of TMS6700 DSPs or something similar. Again no details
on it going into production.


> As I assumed we are all guessing - no official information. Possibly NDA covered projects.

Both the hearing aid app and the 32 bit version were discussed by
reputable sources connected to GA.


> Thanks for the feedback.
> Or do we rather wait for the F64, F128 ...
>
> My ASIC background tells me 3-6 months for mask sets, another 3-6
> months for chips - so we are talking 2016 / 2017.

How do you know what point in the process they are currently at? They
may not even have started layout or they may have already ordered mask sets.

Why are you so interested? I would love to hear about anything they are
doing, but I'm not holding my breath in any way. They are a shoestring
operation designing a unique product. Don't expect them to meet
anyone else's idea of a schedule or other measures of progress.

--

Rick

Howerd

Mar 18, 2015, 3:09:59 AM
On Tuesday, 17 March 2015 21:36:54 UTC, Paul Rubin wrote:
Hi Paul,

Yes - the SoftSim GA144 simulator is written in ArrayForth.
The link you gave :
http://www.greenarraychips.com/home/support/download-02b.html
is indeed a Windows binary, but when run it installs ArrayForth, and when you run the command file Okad.bat in the C:\GreenArrays\EVB001 folder you can run the simulator by typing
softsim
( in normal QWERTY mode, just type softsim and press the Enter key )

The full source is there from block 148 - you can view it by typing
148 edit
after which you escape the world of QWERTY and ASCII, and have to use the editor keypad commands. You can press the Escape key to exit the editor.

BTW, IMHO the ArrayForth Windows/Native program is superb.
The executable file Okad2-42c-pd.exe is 38k bytes, and is a full Windows interface to the 313k colorForth file OkadWork.cf.
The OkadWork.cf file can also be run natively, and in either case uses LZ77 to decompress 1500 or more 1K colorForth blocks.
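For readers unfamiliar with the scheme: LZ77 output is a stream of literal bytes and back-references into the already-decompressed data, so the decompressor is tiny. A minimal sketch in Python, with a hypothetical token format for illustration only (the actual OkadWork.cf layout is not publicly documented):

```python
def lz77_decompress(tokens):
    """Decompress a stream of LZ77 tokens.

    Each token is either a literal byte value, or an (offset, length)
    pair that copies bytes from already-produced output.  Generic
    illustration of the scheme, not the colorForth file format.
    """
    out = bytearray()
    for tok in tokens:
        if isinstance(tok, tuple):      # back-reference: (offset, length)
            offset, length = tok
            for _ in range(length):     # byte-at-a-time copy allows overlap
                out.append(out[-offset])
        else:                           # literal byte
            out.append(tok)
    return bytes(out)

# "abc" as three literals, then a self-overlapping back-reference
print(lz77_decompress([97, 98, 99, (3, 6)]))  # → b'abcabcabc'
```

The byte-at-a-time copy is what lets a back-reference reach into data it is itself producing, which is how short repeats expand cheaply.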

I like the fact that the GA144 development environment has the same character as the GA144 itself - small, powerful and elegant :-)
Awesome.

Best regards,
Howerd





Paul Rubin

Mar 18, 2015, 3:23:49 AM
Howerd <how...@yahoo.co.uk> writes:
> The full source is there from block 148 - you can view it by typing
> 148 edit...
> I like the fact that the GA144 development environment has the same
> character as the GA144 itself - small, powerful and elegant :-)

I don't use Windows and haven't tried running Colorforth but I've looked
at some of the html dumps (the html converter utility is also in there,
written in Colorforth) on greenarrays.com and also from that github
repo. It's just amazing--it's as if the brains that wrote it came from
another planet. Something like those Vikings who managed to navigate
the world using nothing but a piece of magnetized rock as a GPS ;-).

Albert van der Horst

Mar 18, 2015, 7:05:43 AM
>On Thursday, March 12, 2015 at 4:54:08 AM UTC, polymorph self wrote:
>> Can anyone speak to some forth working in the field apps greenarrays has achieved?
>>
>> What kind of projects are they up to?
>
>It seems, not much, as the whole clf community does not know anything - and even
>Greenarrays does not care to give any comment.
>
>It seems that the core used in the Green Arrays Project is not relevant to the whole CLF community.
>
>I can understand that the HW is too expensive to try it out,
>and Greenarrays does not use any of the existing SW emulations either.
>
>But the existing Forth community does not take it seriously, otherwise a simulation in Forth of
>the core would have been done.

You didn't look hard enough. Leon Koning (current chairman of the Dutch
FIG chapter) made one with a liberal license (BSD)
www.arrayfactor.org
It is written in Factor which counts as a Forth-related language.

<SNIP>
Groetjes Albert

Dennis Ruffer

Mar 18, 2015, 10:00:47 AM
That's mine, but it's an unfinished attempt at moving arrayForth into GForth's vmgen.

Nothing to see there yet. ;(

DaR

JUERGEN

Mar 18, 2015, 6:54:45 PM
Thanks Paul, found it now, but it seems to be just a chip; I was expecting a minimum working system.

This is why I wrongly assumed that this word set would have been available somewhere as a GA1 - just a simple core in SW.
Anybody could then link as many as necessary and run it - look at it,
at least this would be my approach, simulated in a free Forth like VFX.
During simulation speed is not necessary.

How many people on here:

Have bought a chip or a system

Have done some applications or tests with it

Are using it in a design?

Is this product commercially viable and why and in which application?

rickman

Mar 18, 2015, 7:27:16 PM
On 3/18/2015 6:54 PM, JUERGEN wrote:
> On Tuesday, March 17, 2015 at 11:37:01 PM UTC, Paul Rubin wrote:
>> Paul Rubin <no.e...@nospam.invalid> writes:
>>> I believe the arrayforth simulator is written in forth, but its source
>>> code is not public. You can download a windows binary from here:
>>>
>>> http://www.greenarraychips.com/home/support/download-02b.html
>>
>> Actually the source may be here:
>>
>> https://github.com/DRuffer/arrayForth
>
> Thanks Paul, found it now, but it seems just a chip, I was expecting a minimum working system.

I can't say I understand what you are asking about. You are asking
about a simulator for the GA144, no? What do you mean a "working system"?


> This is why I wrongly assumed that this word set would have been somewhere as a GA1 - just as simple core in SW.
> Anybody could then link as many as necessary and run it - look at it,
> at least this would be my approach, simulated in a free Forth like VFX.
> During simulation speed is not necessary.
>
> How many people on here:
>
> Have bought a chip or a system
>
> Have done some applications or tests with it
>
> Are using it in a design?
>
> Is this product commercially viable and why and in which application?

I have bought the GA144 soldered on a board from Schmartboard. I have
done nothing with it. I thought it might make a good DDS and will
explore that when I get around to it. I also am interested in seeing
just how low power I can get in a digital receiver design for a very low
frequency, low bandwidth signal. Two apps, no progress on either.

At one time I thought it might do a good job in a design I am currently
using an FPGA for. But the same reason other MCUs are not viable
candidates excludes this chip. It can't bit bang the 30 MHz serial port
that is easy to do in the FPGA. The GA144, with 144 processors of ~700
MIPS each, is too slow to handle a peripheral interface.

--

Rick

Paul Rubin

Mar 18, 2015, 7:44:50 PM
JUERGEN <epld...@aol.com> writes:
> [Schmartboard] Thanks Paul, found it now, but it seems just a chip, I
> was expecting a minimum working system.

Yeah, there's just the GA eval board for that. Rickman had an interest
in making a simpler dev board product, but I think there was not enough
interest to make it worth his while. If you want to buy a lot of them
then you and he should talk ;-).

> This is why I wrongly assumed that this word set would have been
> somewhere as a GA1 - just as simple core in SW.
> Anybody could then link as many as necessary and run it - look at it,

It isn't all that simple. Besides just interpreting the instruction
set, you probably want debugging and monitoring capabilities to trace
what's happening in the cores, you have to simulate the communication
on-chip and off, and you probably want to be able to perturb the timings
of individual cores in order to check for or debug the usual race
conditions and other timing faults. Overall it's a nontrivial amount of
programming. It's been done a few times in various languages.
Arrayforth is the one I know of that's in Forth, though the dialect is
apparently Colorforth.
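The timing-perturbation point above can be illustrated with a toy scheduler: node programs run as coroutines, and a seeded random scheduler chooses which node advances next, so different seeds exercise different interleavings of the same programs. All names are hypothetical Python for illustration; this is nothing like GA's actual tooling.

```python
import random

def node(name, shared, log):
    """A toy node: three 'instructions', yielding after each one."""
    for step in range(3):
        shared[name] = step
        log.append((name, step))
        yield

def run(seed):
    """Run two nodes under a seeded scheduler that perturbs their
    relative timing; different seeds give different interleavings."""
    random.seed(seed)
    shared, log = {}, []
    nodes = [node("a", shared, log), node("b", shared, log)]
    while nodes:
        n = random.choice(nodes)        # pick which node advances next
        try:
            next(n)
        except StopIteration:
            nodes.remove(n)
    return log

log = run(1)
print([s for n, s in log if n == "a"])  # each node's own steps stay ordered: [0, 1, 2]
```

Re-running the suite across many seeds is a cheap way to shake out ordering assumptions, which is exactly the class of bug blocking comms between async nodes tends to hide.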

> at least this would be my approach, simulated in a free Forth like VFX.

VFX isn't free, unless you count the limited evaluation version. It's
very full featured and high performance though, more so than any of the
free Forths. There are some other Forths that are free, though less
fancy than VFX.

> During simulation speed is not necessary.

The GA144 with all nodes going at once is stupendously fast, on the
order of 100 billion instructions per second in aggregate (144 nodes at
roughly 700 MIPS each). If your simulation is 1000x slower you're
doing pretty good. That drastically affects how useful it is. You
probably want to use a multicore PC running a parallel simulation,
complicating the software even more. I don't know if Arrayforth Softsim
attempts this.

> How many people on here:
> Have bought a chip or a system
> Have done some applications or tests with it
> Are using it in a design?
> Is this product commercially viable and why and in which application?

I know a few people here have bought the GA144 or its predecessor, the
40 core Seaforth chip, and done things with them, including the hearing
aid that Rick mentioned, plus various research projects. I don't know
of any products with it that have actually reached the point of being
manufactured and sold, but some have reached various stages of
development and demos.

The GA144's commercial viability I guess can only be shown by products
around it being shipped and we can't say that has happened til it
happens. A few of us have tried to figure out ways to use it, but found
it had too many annoying limitations for the apps that we thought of.
It seems like a "version 1" product that is very interesting and has
cool ideas, but is not really there yet. There has been mention of a
32-bit version in the works, that may be more usable.

rickman

Mar 18, 2015, 8:24:09 PM
On 3/18/2015 7:44 PM, Paul Rubin wrote:
> JUERGEN <epld...@aol.com> writes:
>> [Schmartboard] Thanks Paul, found it now, but it seems just a chip, I
>> was expecting a minimum working system.
>
> Yeah, there's just the GA eval board for that. Rickman had an interest
> in making a simpler dev board product, but I think there was not enough
> interest to make it worth his while. If you want to buy a lot of them
> then you and he should talk ;-).

Now I understand what he was complaining about. Yes, the Schmartboard
device is just the chip mounted on a board, not even a very good mount
as they don't really provide for decoupling capacitance. I think it was
a compromise between a general-purpose carrier for the package size of
the GA144 and something actually like an eval board. Thing is I have
never seen another device in that package so I suspect Schmart doesn't
sell many of these without a GA144. They should have at least added caps.


>> This is why I wrongly assumed that this word set would have been
>> somewhere as a GA1 - just as simple core in SW.
>> Anybody could then link as many as necessary and run it - look at it,
>
> It isn't all that simple. Besides just interpreting the instruction
> set, you probably want debugging and monitoring capabilities to trace
> what's happening in the cores, you have to simulate the communication
> on-chip and off, and you probably want to be able to perturb the timings
> of individual cores in order to check for or debug the usual race
> conditions and other timing faults. Overall it's a nontrivial amount of
> programming. It's been done a few times in various languages.
> Arrayforth is the one I know of that's in Forth, though the dialect is
> apparently Colorforth.
>
>> at least this would be my approach, simulated in a free Forth like VFX.
>
> VFX isn't free, unless you count the limited evaluation version. It's
> very full featured and high performance though, more so than any of the
> free Forths. There are some other Forths that are free, though less
> fancy than VFX.

If I understand what he is asking, I think his approach might actually
work. The GA144 has 144 async nodes. So making them async in software
should be entirely suitable. Comms would be programmed just as it works
on the chip: write to another node and stop simulating that node until the
word is read.

I guess the idea of single stepping gets a bit messy when the processors
are async. Still, that is all in the existing simulator to a limited
extent.
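A minimal sketch of that blocking-comms model: a writing node suspends until its word is read, and a reading node suspends until a word arrives. Cooperative generators stand in for F18 nodes; all names are hypothetical Python, not GA's simulator.

```python
# One shared port between a writer node and a reader node.
class Port:
    def __init__(self):
        self.word = None
        self.full = False

def writer(port, values):
    for v in values:
        while port.full:      # suspended: previous word not yet read
            yield
        port.word, port.full = v, True
        yield

def reader(port, count, received):
    for _ in range(count):
        while not port.full:  # suspended: nothing written yet
            yield
        received.append(port.word)
        port.full = False
        yield

port, received = Port(), []
tasks = [writer(port, [10, 20, 30]), reader(port, 3, received)]
while tasks:                  # trivial round-robin scheduler
    for t in tasks[:]:
        try:
            next(t)
        except StopIteration:
            tasks.remove(t)
print(received)               # → [10, 20, 30]
```

The rendezvous falls out of the `while ... yield` loops: neither side proceeds until the other has done its half, which mirrors the write-then-stall behavior described above.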


>> During simulation speed is not necessary.
>
> The GA144 with all nodes going at once is stupendously fast, on the
> order of 100 billion instructions per second in aggregate. If your simulation is 1000x slower you're
> doing pretty good. That drastically affects how useful it is. You
> probably want to use a multicore PC running a parallel simulation,
> complicating the software even more. I don't know if Arrayforth Softsim
> attempts this.

Reminds me of an airplane engine I heard about near the end of the
propeller days. It had a separate turbocharger on each cylinder. The
total horsepower generated by all the turbos was more than the power
delivered to the prop! The GA144 is a bit like that. Lots of MIPS, but
they can't be harnessed quite so literally. So I look at it like I do
hardware. I don't care if a gate is only toggling at 0.1% of its max
rate. I care if it is doing the job I need it to do.

Don't call the F18s CPUs. Call them logic and maybe people will quit
worrying about keeping them running at full tilt.


>> How many people on here:
>> Have bought a chip or a system
>> Have done some applications or tests with it
>> Are using it in a design?
>> Is this product commercially viable and why and in which application?
>
> I know a few people here have bought the GA144 or its predecessor, the
> 40 core Seaforth chip, and done things with them, including the hearing
> aid that Rick mentioned, plus various research projects. I don't know
> of any products with it that have actually reached the point of being
> manufactured and sold, but some have reached various stages of
> development and demos.

It is entirely possible that there are GA144s being sold in products.
If there is an NDA GA can't tell us. A company selling a product might
not want anyone to know what is inside. If I were using the GA144 in a
product, I wouldn't want anyone to know. GA is not a very sturdy
company and the customer might have concerns about the longevity, lol


> The GA144's commercial viability I guess can only be shown by products
> around it being shipped and we can't say that has happened til it
> happens. A few of us have tried to figure out ways to use it, but found
> it had too many annoying limitations for the apps that we thought of.
> It seems like a "version 1" product that is very interesting and has
> cool ideas, but is not really there yet. There has been mention of a
> 32-bit version in the works, that may be more usable.

I'm not holding my breath that the 32 bit version will be better unless
they listen to the complaints about what is wrong with it. I have not seen
an indication that anyone really recognizes the problems with the GA144.
Personally, I don't think the 18 bit word size was one of them.

--

Rick

Jan Coombs <Jan-54 >

Mar 19, 2015, 7:24:42 AM
On Tue, 17 Mar 2015 16:26:53 -0400
rickman <gnu...@gmail.com> wrote:

> On 3/16/2015 3:42 PM, Jan Coombs <Jan-54 wrote:
> >
> > It's a bit more exciting if the simulation can export
> > the design to the FPGA tools, then you are closer to
> > having a chip.
>
> I'm not sure what that means. What design? The chip or the code?

Both. The simulator consumes the source code for the hardware design,
one file of which contains a simulation model for the target
processor's code ROM, and also FPGA-family-dependent code to
initialize a matching RAM block on the chip.


> I'm
> not sure why the simulator is the right tool to be producing code. The
> simulator is used to run the code you have produced.

It is right because the development is test driven,
and good because the edit-test loop is very short.


In more detail:

For each new feature a test is added to the test suite.
Once a processor has grown to a certain size this might
just mean adding a new target code fragment.

The simulator [1] is used to run a suite of tests
indicating that the last development step is not yet
achieved, and also that achieving it has not broken the
hardware design elsewhere.

During a test run the simulator also exports Verilog
and/or VHDL files for the chip tools. This adds
confidence that the hardware source is valid.

By occasionally running the FPGA tools it can be seen
that the code is still synthesizable, and that chip
dependent features are catered for, for example, RAM
initialisation to hold the target code.

That's why I feel "closer to having a chip"

Test driven development for hardware is still new to
me; I understand the theory, but still fiddle... :)

[1] http://www.myhdl.org/start/why.html
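The add-a-test / run-the-suite rhythm described above can be shown with a toy target, shrunk here to a two-instruction stack machine in plain Python. This only illustrates the edit-test loop; the real flow uses MyHDL (linked above at [1]) for simulation and Verilog/VHDL export.

```python
def simulate(program):
    """Run a list of ("lit", n) / ("add",) instructions; return the stack.

    A deliberately tiny stand-in for a simulated target processor.
    """
    stack = []
    for op, *args in program:
        if op == "lit":
            stack.append(args[0])
        elif op == "add":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        else:
            raise ValueError(f"unknown opcode: {op}")
    return stack

# The test suite: each new feature adds a code fragment plus its
# expected result, and the whole suite reruns on every change.
TESTS = [
    ([("lit", 2)], [2]),                        # feature 1: literals
    ([("lit", 2), ("lit", 3), ("add",)], [5]),  # feature 2: addition
]

for program, expected in TESTS:
    assert simulate(program) == expected
print("all tests pass")
```

Adding the next instruction means first appending a failing fragment to `TESTS`, then growing `simulate` until the suite passes again, which keeps the edit-test loop as short as Jan describes.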

JUERGEN

Mar 19, 2015, 10:15:12 AM
I think I just give up for 2015. Thanks for all the inputs.

JUERGEN

Mar 19, 2015, 10:22:27 AM
It seems that Wave Semiconductor is designing a product similar to the GA144 then.
http://www.wavesemi.com/products.html
A bit of competition always helps to sharpen products and ideas...

JUERGEN

Mar 19, 2015, 10:34:58 AM
On Thursday, March 19, 2015 at 12:24:09 AM UTC, rickman wrote:
Any decent dev board nowadays has a USB-to-TTL chip and the minimum HW interface to make a working system that can communicate with the PC.
Too much? Well, a USB-to-TTL cable is enough.
But to put it more bluntly:
If a company does not supply the dev system and/or has not defined a suitable market for the chip - well, it will probably not fly, and it remains an interesting internal research project.
And it seems not even the Forth community has done anything with it.
There were not many answers for:

Paul Rubin

Mar 19, 2015, 4:23:36 PM
JUERGEN <epld...@aol.com> writes:
> If a company does not supply the dev system and/or has defined a
> suitable market for the chip - well it will probably not fly, and it
> is an interesting internal research project.

GA sells a dev board that is priced commensurately with other
professional dev boards from the pre-Raspberry-Pi era, and that has the
features you mention. It's certainly affordable to the engineers the
company sees as working on serious products, and I suspect some of the
cost is because of an expected support load that the user would probably
create for GA's own engineers. The interest in an alternative board is
because the GA board is too expensive for typical hobbyists who are now
used to Arduinos and Launchpads.

>> >> How many people on here:
>> >> Have bought a chip or a system
>> >> Have done some applications or tests with it
>> >> Are using it in a design?

I think the answer for the first two questions is "very few but
definitely not zero". For the third question it might be zero: we don't
know for sure.

I myself have spent some time studying the GA documentation and writing
some untested code for it, but haven't bought hardware or run the
Windows simulator.

>> >> Is this product commercially viable and why and in which application?

There's definitely been at least one commercial project (the hearing
aid) that reached advanced development, but I don't know if it has
actually entered production or shipped to customers (or if it is even
still alive). As Rick says, it's possible that the chip is being used
in other applications where the info isn't public or hasn't reached CLF.

Rick mentioned a potential digital oscilloscope application, and might
be interested in this:

https://www.picotech.com/support/topic14347.html

Someone mentioned it to me (I think that's the product) and according to
the person's description, it uses the Beaglebone's on-chip realtime
coprocessors to implement an oscilloscope that goes up to 100 MHz.

Jan Coombs <Jan-54 >

Mar 19, 2015, 6:21:00 PM
On Thu, 19 Mar 2015 07:22:26 -0700 (PDT)
JUERGEN <epld...@aol.com> wrote:
>
> Ist seems that Wave Semiconductor is designing a product similar to Ga144 then.
> http://www.wavesemi.com/products.html
> A bit of competition always helps to sharpen products and ideas...

Perhaps there is a small patent gap between Chuck's work and
Achronix's async gate array. Google doesn't find much more:

Clockless Programmable Logic Startup Preps Product
http://electronics360.globalspec.com/article/4227/clockless-programmable-logic-startup-preps-product

Leveraging FDSOI to Achieve both Low Power AND High Speed
http://www.soiconsortium.org/fully-depleted-soi/presentations/september-2014-fd-soi-forum/Wave%20Semi%20FDSOI%20Forum_dist.pdf



BTW, this claim used to be in the body of the wikipedia article at:

http://en.wikipedia.org/wiki/Asynchronous_circuit#Asynchronous_CPU

SEAforth Overview: "... asynchronous circuit design throughout the chip. There is no central clock with billions of dumb nodes dissipating useless power. ... the processor cores are internally asynchronous themselves ..."

Is it true? Are Chuck's chips internally asynchronous, or only asynchronous at the boundary of each processor core?


Just asking, Jan Coombs.

Bernd Paysan

unread,
Mar 19, 2015, 7:14:25 PM3/19/15
to
wrote:

> On Thu, 19 Mar 2015 07:22:26 -0700 (PDT)
> Is it true? Are Chucks chips internally asynchronous, or only
> asynchronous at the boundary of each processor core?

There are a few different delay elements within each core to determine the
speed of different operations. So it's not a single clock per core, either.

What's clearly wrong is the claim that a synchronous device has "billions
of dumb nodes which dissipate useless power"; if you synthesize your
Verilog to save power, you'll enable the clock gating option, and then
only those nodes that actually need the clock receive it. Sleeping cores
wouldn't.

All CPUs now, including the performance-optimized ones, are done that
way, as power dissipation is the main problem with modern processes: you
have to get it down to a reasonable value.

Chuck stopped using mainstream EDA tools in the early 90s, and his
statements are all true for the EDA situation 20 years ago.

--
Bernd Paysan
"If you want it done right, you have to do it yourself"
http://bernd-paysan.de/
net2o ID: kQusJzA;7*?t=uy@X}1GWr!+0qqp_Cn176t4(dQ*

lynx

unread,
Mar 19, 2015, 7:40:43 PM3/19/15
to
In <141925bb-230a-4966...@googlegroups.com> Howerd <how...@yahoo.co.uk> writes:

>Yes - the SoftSim GA144 simulator is written in ArrayForth.
>The link you gave :
>http://www.greenarraychips.com/home/support/download-02b.html
>is indeed a Windows binary, but when run it installs ArrayForth, and when you run the command file Okad.bat in the C:\GreenArrays\EVB001 folder you can run the simulator by typing
>softsim
>( in normal QWERTY mode, just type softsim and press the Enter key )

>The full source is there from block 148 - you can view it by typing
>148 edit
>after which you escape the world of QWERTY and ASCII, and have to use the editor keypad commands. You can press the Escape key to exit the editor.

I'm not sure why folks are under the impression that everything
GreenArrays has on tap is "closed" source, so I'm glad you clarified
this. The 'softsim' is even touched on in the arrayForth tutorial (pdf)
on the GA website, and therefore should have been hard for anyone to
miss.

With that said, I believe the only bits not available to the public are
the PC arrayForth kernel (which I'm assuming is pretty similar to the
versions of colorForth Jeff Fox passed around prior to the formation of
GA), the Win32 wrapper for the arrayForth executable, and the actual
Okad program Chuck uses to design his chips. Everything else is there,
including the source to the chip ROM, as well as GA144 versions of
polyForth and eForth in their entirety. And the licensing for much of
it--while not "liberal" in the BSD sense--is such that anyone who
intends to develop for GA chips using GA software won't find themselves
unreasonably restricted from doing what they please. Personally, I
think GA supplies enough source and documentation to make the evaluation
board a pretty good value, even if it remains relatively expensive in an
absolute sense.

rickman

unread,
Mar 20, 2015, 12:12:45 AM3/20/15
to
On 3/19/2015 7:24 AM, Jan Coombs <Jan-54 wrote:
> On Tue, 17 Mar 2015 16:26:53 -0400
> rickman <gnu...@gmail.com> wrote:
>
>> On 3/16/2015 3:42 PM, Jan Coombs <Jan-54 wrote:
>> >
>> > It's a bit more exciting if the simulation can export
>> > the design to the FPGA tools, then you are closer to
>> > having a chip.
>>
>> I'm not sure what that means. What design? The chip or the code?
>
> Both. The simulator consumes the source code for the
> hardware design. One file of which contains a simulation
> model for the target processor code ROM, and also FPGA
> chip family dependant code to initialize a matching RAM
> block on the chip.

This is not clear to me at all.


>> I'm
>> not sure why the simulator is the right tool to be producing code. The
>> simulator is used to run the code you have produced.
>
> It is right because the development is test driven,
> and good because the edit-test loop is very short.
>
>
> In more detail:
>
> For each new feature a test is added to the test suite.
> Once a processor has grown to a certain size this might
> just mean adding a new target code fragment.
>
> The simulator [1] is used to perform a suite of tests
> to indicate that the last development step is not yet
> achieved, also, that achieving it has not broken the
> hardware design elsewhere.
>
> During a test run the simulator also exports Verilog
> and/or VHDL files for the chip tools. This adds
> confidence that the hardware source is valid.
>
> By occasionally running the FPGA tools it can be seen
> that the code is still synthesizable, and that chip
> dependent features are catered for, for example, RAM
> initialisation to hold the target code.
>
> That's why I feel "closer to having a chip"
>
> Test driven development for hardware is still new to
> me; I understand the theory, but still fiddle... :)
>
> [1] http://www.myhdl.org/start/why.html

You are talking about code describing the operation of the chip, not a
simulator to test your software. If you were designing a chip you would
use an HDL like Verilog, VHDL or one of the others. Otherwise you would
be writing a whole lot more than just the code for the chip, you would
be writing the simulator and synthesis engine too.

I can't say I follow what you are trying to say really. You seem to be
saying you would run a simulation on your code which is the test suite
to test the HDL? I think that would be most easily done in the HDL
simulator, no? That is what I did when I designed a CPU. So in that
sense I was *always* running the FPGA tools.

What tools are you talking about exactly?

--

Rick

rickman

unread,
Mar 20, 2015, 12:22:45 AM3/20/15
to
On 3/19/2015 10:22 AM, JUERGEN wrote:
>
> It seems that Wave Semiconductor is designing a product similar to the GA144 then.
> http://www.wavesemi.com/products.html
> A bit of competition always helps to sharpen products and ideas...

Lol, if you think GA gives a rat's rear about any competition, you
haven't been paying attention. If GA did care about competition, they
would have paid a little attention to it when they developed the GA144.
In reality there is no competition when there is no market. I don't
know anyone who has been able to identify the market the GA144 is
targeted to...

Also note that Wave Semi is still early on in the development process
and is not very interested in customers like you and I. "Deeper detail
about Azure is available under NDA to a limited number of qualified
customers. Contact Wave Marketing to discuss your application and how
it might benefit from an Azure implementation." In other words, if you
have *big* bucks, give us a call. Otherwise, don't bother.

--

Rick

rickman

unread,
Mar 20, 2015, 12:39:52 AM3/20/15
to
Sometimes I really don't understand what you are saying and I don't
think it is a language barrier. I think you write your thoughts without
enough background for others (well, me anyway) to understand what is
behind them.

I have several USB to TTL UART converters that only cost about $5 on
ebay. I can't see the issue with using one of these rather than
demanding it be on the eval board. The Schmartboard adapter is not
really intended to be an "eval" board, it is just an adapter to let you
use the GA144 in a circuit of your own design. The point is that if you
want to do something with the chip for a much lower cost than the GA
eval board you can use this to cobble your own system together.

Heck, the type of board you were looking for to support an MSP430 a few
months back was not much more than a CPU on a board with I/O pins. That
is what the Schmartboard product is. If that doesn't suit you, then sorry.


> but to put it more bluntly:
> If a company does not supply the dev system and/or has defined a suitable market for the chip - well it will probably not fly, and it is an interesting internal research project.
> And it seems not even the Forth community has done anything with it.

It is not a Forth chip, why would the Forth community care about it? It
is an embedded processor which is being ignored by the embedded
processing community. It does have some potential in signal processing
applications. Many DSP chips are a bit limited in peripheral support
like the GA144, but they often have much more memory. Many DSP apps
need lots of memory.


> There were not many answers for:
>>>> How many people on here:
>>>> Have bought a chip or a system
>>>> Have done some applications or tests with it
>>>> Are using it in a design?
>>>> Is this product commercially viable and why and in which application?

Yep, not many answers... did you really expect much more?

--

Rick

rickman

unread,
Mar 20, 2015, 12:51:58 AM3/20/15
to
On 3/19/2015 4:23 PM, Paul Rubin wrote:
>
> Rick mentioned a potential digital oscilloscope application, and might
> be interested in this:
>
> https://www.picotech.com/support/topic14347.html
>
> Someone mentioned it to me (I think that's the product) and according to
> the person's description, it uses the Beaglebone's on-chip realtime
> coprocessors to implement an oscilloscope that goes up to 100 MHz.

Is that 100 MHz bandwidth or 100 MHz sample rate? Most often any MHz
number on such a scope project is sample rate while commercial scopes
cite the usable bandwidth.

I believe it was when I was trying to determine the feasibility of the
scope project that I gave up on the GA144 in general. There are some
gaping holes in the timing data provided for the GA144 and when I tried
to dig a bit deeper into it I was told the info was "confidential" or
something to that effect. Seems if you learn the details of the timing
it might enable you to reverse engineer the comms port design or
something. So you can't get detailed timing on I/O operations and so
you can't use the I/O at close to its max capabilities. I was told to
write my app and measure the performance... lol I guess GA
understands the concept of shooting from the hip and if it doesn't work,
just scrapping the design.

I shouldn't say I've given up entirely on the GA144. As I believe I
mentioned before there are still two projects I might try the GA144 in.
One is a DDS design where the GA144 could provide 5 simultaneous
outputs which could be related signals or completely independent. The 5
DAC inputs could be used for clocking, synchronization or other related
features.
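The DDS idea above reduces to a phase accumulator plus a sine lookup
table. A minimal Python sketch of that technique (the 18-bit accumulator
width just echoes the GA144 word size; all names and parameters here are
illustrative, not from any GA document):

```python
import math

def dds_samples(freq_hz, clock_hz, n, acc_bits=18, table_bits=8):
    """Phase-accumulator DDS: add a fixed tuning word every clock tick
    and use the top bits of the accumulator to index a sine table."""
    table = [math.sin(2 * math.pi * i / (1 << table_bits))
             for i in range(1 << table_bits)]
    tuning_word = round(freq_hz * (1 << acc_bits) / clock_hz)
    acc, out = 0, []
    for _ in range(n):
        out.append(table[acc >> (acc_bits - table_bits)])
        acc = (acc + tuning_word) & ((1 << acc_bits) - 1)  # wrap = one cycle
    return out

# One cycle of a 1 kHz sine at a 48 kHz sample clock:
samples = dds_samples(1000.0, 48000.0, 48)
```

The tuning resolution is clock_hz / 2**acc_bits, which is why DDS
designs favor wide accumulators even when the output table is small.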

The other project is a receiver app, rather an SDR (software defined
radio) for low frequency signals. This could be a direct to antenna
design if I can get the input threshold stable enough. In theory this
could run from a very low power source such as a single AA battery or
even scavenge power from the environment. The GA144 might be low power
enough to pull this off.
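For the SDR idea, the per-sample work is essentially a mixer and a
decimating low-pass filter. A toy Python sketch of digital
down-conversion (illustrative only; nothing here is GA144-specific, and
the sample rates are made up):

```python
import cmath, math

def downconvert(samples, lo_freq, fs, decim):
    """Mix a real input against a complex local oscillator, then
    boxcar-average and decimate to get baseband I/Q samples."""
    mixed = [s * cmath.exp(-2j * math.pi * lo_freq * n / fs)
             for n, s in enumerate(samples)]
    return [sum(mixed[i:i + decim]) / decim
            for i in range(0, len(mixed) - decim + 1, decim)]

fs = 100_000.0                      # 100 kS/s input, for illustration
sig = [math.cos(2 * math.pi * 10_000.0 * n / fs) for n in range(1000)]
iq = downconvert(sig, 10_000.0, fs, 50)   # tune to the 10 kHz tone
```

A tone at the LO frequency lands at DC with half the input amplitude;
the boxcar average keeps it while averaging out the 2x-LO image.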

--

Rick

Mark Wills

unread,
Mar 20, 2015, 6:07:47 AM3/20/15
to
On Friday, 20 March 2015 04:22:45 UTC, rickman wrote:
> I don't
> know anyone who has been able to identify the market the GA144 is
> targeted to...

I have the same problem. It seems to me that the concept is cool
but they got carried away with the number of cores. It needs:

a) Far fewer cores
b) Far more resources (ROM/RAM) per core
c) A word size that is more readily recognised (16/24/32 bit)
d) Specialised cores (or perhaps even only one core) to deal
with getting I/O on/off the board.

8 cores would probably be enough, with, say 1K words ROM, 0.5K words RAM
and (heaven) a common memory area (multi-port) that all the cores could
access.

Oh dear. I think I just described the Parallax Propeller! Though I think
GA could probably put something together that is much faster and more
power efficient than the Propeller.

JUERGEN

unread,
Mar 20, 2015, 11:20:56 AM3/20/15
to
Karl Fant holds the patent on asynchronous NCL (Null Convention Logic), the only real-world asynchronous Boolean logic I have seen until now.
Synthesis and technology files exist to make this Boolean logic work, since there is no timing factor included, and signals through gates run at different speeds compared to mere clock wires across the chip. I know Karl from the past at Theseus Logic. He is or was part of Wave Semiconductor.
If I remember correctly, some of the first computers were implemented asynchronously, so nothing new really, but still difficult to test even now.
Often an asynchronous wrapper is put around synchronous blocks to decouple them.

JUERGEN

unread,
Mar 20, 2015, 11:24:52 AM3/20/15
to
Bernd, even if you disable gates from changing by gating them, you still have the complete clock system across the chip, and its capacitances have to be driven. I assume this is what is meant, and that it can only be reduced by shortening the clock tree - independent of what you stated?

Anton Ertl

unread,
Mar 20, 2015, 11:37:43 AM3/20/15
to
JUERGEN <epld...@aol.com> writes:
>Bernd, even if you disable gates from changing by gating them, you still
>have the complete clock system across the chip and the capacities have to
>be driven.

They used that approach for the first Alpha (21064) in 1992, and the
clock used up IIRC 30% of the power of the chip, and it ran only at
200MHz. This approach did not scale, so they introduced clock
domains, each domain with its own clock generation, fed by a common
clock, but with some possible skew wrt the clock in the next domain;
therefore pushing data across clock domains loses a cycle for every
boundary crossed. At least that's what I gather from the explanations
why shift and multiply are slow on the original Pentium 4 (from 2001).

- anton
--
M. Anton Ertl http://www.complang.tuwien.ac.at/anton/home.html
comp.lang.forth FAQs: http://www.complang.tuwien.ac.at/forth/faq/toc.html
New standard: http://www.forth200x.org/forth200x.html
EuroForth 2014: http://www.euroforth.org/ef14/

JUERGEN

unread,
Mar 20, 2015, 11:41:38 AM3/20/15
to
Mark, you basically describe what I was after in different words - an implementation of 8 GA1 cores in a FPGA, coupled asynchronously if you want - you are free about RAM and ROM to suit your application - even different RAM/ROM per core.

If even this would not be a good solution - why bother?

The market has been telling GA for a year or two now: you might have an interesting chip - but in none of our applications do we see an advantage in using it.

If it were different, well, GA would need 50 apps engineers.

JUERGEN

unread,
Mar 20, 2015, 11:49:12 AM3/20/15
to
There was a comparison of the synchronous and the asynchronous DLX implementations at http://www.ics.forth.gr/carv/async/demo/dlx_asic.html
It seems to have disappeared now, unfortunately.

JUERGEN

unread,
Mar 20, 2015, 12:04:15 PM3/20/15
to
Better look here now: http://www.ics.forth.gr/carv/aspida/

rickman

unread,
Mar 20, 2015, 12:31:41 PM3/20/15
to
Perhaps you don't understand where the power goes in a clock tree. Just
as in a real tree, there are many, many leaves and few higher level
trunks (lower trunks in a real tree). These main trunks do not
dissipate most of the power, it is the branches feeding the lower level
nodes and the nodes themselves that consume most of the power. So when
a section of the chip is gated off from the clock, all the routing beyond
the gate stops toggling and dissipates essentially no power.

Clk --+--+--+---X----+--+--+---O
      |  |  |        |  |  +---O
      |  |  |        |  +------O
      |  |  |        +---------O
      |  |  +---X----+--+--+---O
      |  |           |  |  +---O
      |  |           |  +------O
      |  |           +---------O
      |  +------X----+--+--+---O
      |              |  |  +---O
      |              |  +------O
      |              +---------O
      +---------X----+--+--+---O
                     |  |  +---O
                     |  +------O
                     +---------O

Here the X represents the clock gate, the lines are clock routes and the
O are FFs. You can see how much of the routing is in the lower levels
and how the gating will reduce power in the clock routing.
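The diagram's point can be checked with a little arithmetic: in a
fanout-of-4 tree, each level has four times as many route segments as
the one above it, so the leaf levels dominate. A quick sketch (my own
illustration, not from the post):

```python
def segments_per_level(fanout, levels):
    """Clock-route segment count at each tree level; level 0 is the
    trunk branching, the last level feeds the flip-flops."""
    return [fanout ** (lvl + 1) for lvl in range(levels)]

counts = segments_per_level(4, 4)   # 4-way branching, 4 levels deep
total = sum(counts)                 # 4 + 16 + 64 + 256
leaf_share = counts[-1] / total     # fraction of routing at the leaves
```

With four levels, the last level alone is over 75% of all segments, so
gating a subtree near its root removes most of its toggling wire.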

--

Rick

rickman

unread,
Mar 20, 2015, 12:43:04 PM3/20/15
to
On 3/20/2015 6:07 AM, Mark Wills wrote:
> On Friday, 20 March 2015 04:22:45 UTC, rickman wrote:
>> I don't
>> know anyone who has been able to identify the market the GA144 is
>> targeted to...
>
> I have the same problem. It seems to me that the concept is cool
> but they got carried away with the number of cores. It needs:
>
> a) Far fewer cores
> b) Far more resources (ROM/RAM) per core
> c) A word size that is more readily recognised (16/24/32 bit)
> d) Specialised cores (or perhaps even only one core) to deal
> with getting I/O on/off the board.

Everyone focuses on the cores. The cores are not the problem. Thinking
of the F18 core the same way you think of an ARM A9 is the problem. The
F18 array is not marketable because they didn't do a very good job of
integrating it into any application.

I have said before that I think of the GA144 more like an FPGA, but with
small, fast processors rather than logic elements as the primitive
nodes. No one worries in an FPGA that the LUT4 logic element is being
used as an inverter. Don't worry about using an F18 as a wire.

The problem with the GA144 is what it can't do because the processors
are not fast enough and they wouldn't include a hard core to do the job.
USB 480 Mbps or Ethernet 100 Mbps for example. When you ask them how
to deal with this they say to add another chip. What chip would that
be??? ANOTHER MCU that was designed with real-world applications in
mind and actually supports the conventional I/Os in use today. Same
issue with the 1.8 volt I/Os and the software driven memory interface.
They just aren't suitable for today's needs.


> 8 cores would probably be enough, with, say 1K words ROM, 0.5K words RAM
> and (heaven) a common memory area (multi-port) that all the cores could
> access.

If you add all that memory to an F18 core it would slow down since the
memory would run slower. I don't have a major problem with the memory.
It would be nice though if there were some memory blocks on the chip,
but Chuck was designing processors, not memory.


> Oh dear. I think I just described the Parallax Propeller! Though I think
> GA could probably put something together that is much faster and more
> power efficient than the Propeller.

There is nothing magical about either chip. What in the Propeller do
you want to leave out? What in the GA144 do you want to retain? No
free lunches: make it more like the Propeller and it will lose the
GA144 advantages.

--

Rick

Jason Damisch

unread,
Mar 20, 2015, 12:48:52 PM3/20/15
to
On Friday, March 20, 2015 at 9:43:04 AM UTC-7, rickman wrote:

> There is nothing magical about either chip. What in the Propeller do
> you want to leave out? What in the GA144 do you want to retain? No
> free lunches, make it more like the Propeller and it will loose the
> GA144 advantages.

One must factor their program across multiple cores. If you can't
think that way, get another chip and forget about it.

rickman

unread,
Mar 20, 2015, 12:49:51 PM3/20/15
to
On 3/20/2015 11:41 AM, JUERGEN wrote:
>
> The market tells GA for a year or 2 now: you might have an interesting chip - but...

Stop right there. That is why they built the chip, because it was
interesting, not because it suited any application.

--

Rick

Bernd Paysan

unread,
Mar 20, 2015, 2:44:18 PM3/20/15
to
JUERGEN wrote:
> Bernd, even if you disable gates from changing by gating them, you still
> have the complete clock system across the chip and the capacities have to
> be driven. I assume this is what is meant. And only by reducing the clock
> tree length this can be reduced - independent of what you stated I assume?

No, I don't think you understand how clock gating works. This is not a gate
at each flip-flop, this is a gate at the part of the clock tree which is
turned off. I.e. let's assume you have a 100 cores with a 100 DFFs each.
So when everything is turned on, you have a tree with 10000 leaves. When
everything is turned off, you have a tree with 100 leaves (the clock gates),
consuming in the order of 100 times less (realistically, a linear factor
more, due to the fact that the wire length is bigger on the first nodes of
the clock system).

In the days before clock gating was everywhere, the clock tree consumed
about 30% of the overall power, and it was never off. So between idle and
full use, there was often just a factor of 3.

Now we have clock gates everywhere, and significantly bigger factors between
all-on and all-off.
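Bernd's 100-cores-of-100-DFFs example can be sanity-checked by counting
the clock endpoints that still toggle (a toggle count only; real power
also scales with per-node wire capacitance, which his "linear factor"
caveat covers):

```python
def toggling_endpoints(cores, dffs_per_core, idle_cores):
    """Count clock endpoints still switching: an active core exposes
    all its DFF clock pins, an idle core only its clock gate."""
    active = cores - idle_cores
    return active * dffs_per_core + idle_cores

busy = toggling_endpoints(100, 100, 0)    # everything running
idle = toggling_endpoints(100, 100, 100)  # all cores gated off
ratio = busy / idle                       # on the order of 100x
```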

Bernd Paysan

unread,
Mar 20, 2015, 3:32:55 PM3/20/15
to
Anton Ertl wrote:

> JUERGEN <epld...@aol.com> writes:
>>Bernd, even if you disable gates from changing by gating them, you still
>>ha= ve the complete clock system across the chip and the capacities have
>>to be = driven.
>
> They used that approach for the first Alpha (21064) in 1992, and the
> clock used up IIRC 30% of the power of the chip, and it ran only at
> 200MHz. This approach did not scale, so they introduced clock
> domains, each domain with its own clock generation, fed by a common
> clock, but with some possible skew wrt the clock in the next domain;
> therefore pushing data across clock domains loses a cycle for every
> boundary crossed. At least that's what I gather from the explanations
> why shift an multiply are slow on the original Pentium 4 (from 2001).

Yes, these clock domains were an intermediate step between fully-synchronous
ungated clock trees and the fully synchronous gated clock trees that are now
used. That intermediate step was necessary, because the clock balancing
tools weren't up to the task at that time.

The original Pentium 4 was probably even worse, because it had that double-
pumped ALU, and this wasn't just a different clock domain, but one with
twice the frequency.

Paul Rubin

unread,
Mar 21, 2015, 12:05:12 AM3/21/15
to
rickman <gnu...@gmail.com> writes:
> Is that 100 MHz bandwidth or 100 MHz sample rate?

Hmm, I don't have any more info than what I wrote, but yeah, 100 MHz
sample rate sounds more plausible.

> there are still two projects I might try the GA144 in. One is a DDS
> design... The other project is a receiver app, rather an SDR
> (software defined radio) for low frequency signals.

I wonder if the BBB PRU's could be used for those. The BBB has two
realtime coprocessors that are 32-bit RISC cores running at 200 MHz.
They have some DSP-like instructions including MAC, and have 4k of
dedicated sram per PRU plus another 4k shared between the two of them.
It's one of the BBB's attractions over the raspberry pi.

rickman

unread,
Mar 21, 2015, 4:05:19 AM3/21/15
to
There is no reason why the BBB couldn't be used. But that will never be
a low power design. Heck, I burnt out a phone charger I was using to
power an rPi. 0.7 Amps is not enough... or more likely the phone
charger wasn't really capable of 0.7 Amps continuously even though it
says so.

Also, the BBB and rPi only have one analog output if I remember
correctly. That would require an add on card drawing more power. A DDS
is one of the few apps the GA144 is nearly a complete solution. The
only thing needed is a rate clock, a user interface and analog buffering
for the outputs.

--

Rick

Jan Coombs <Jan-54 >

unread,
Mar 21, 2015, 9:29:28 AM3/21/15
to
On Fri, 20 Mar 2015 08:41:37 -0700 (PDT)
JUERGEN <epld...@aol.com> wrote:

> On Friday, March 20, 2015 at 10:07:47 AM UTC, Mark Wills wrote:
> > On Friday, 20 March 2015 04:22:45 UTC, rickman wrote:
> > > I don't
> > > know anyone who has been able to identify the market the GA144 is
> > > targeted to...
> >
> > I have the same problem. It seems to me that the concept is cool
> > but they got carried away with the number of cores. It needs:
> >
> > a) Far fewer cores
> > b) Far more resources (ROM/RAM) per core
> > c) A word size that is more readily recognised (16/24/32 bit)
> > d) Specialised cores (or perhaps even only one core) to deal
> > with getting I/O on/off the board.
> >
> > 8 cores would probably be enough, with, say 1K words ROM, 0.5K words RAM
> > and (heaven) a common memory area (multi-port) that all the cores could
> > access.
>
>
> Mark, you basically describe what I was after in different words
> - an implementation of 8 GA1 cores in a FPGA, coupled asynchronously
> if you want - you are free about RAM and ROM to suit your application
> - even different RAM/ROM per core.

Perhaps you should try the very similar, open, and proven b16
processor core?

You could get eight into a X02-7000 FPGA for example, with about
3kB RAM each, and space left for some custom peripherals. (But
nine cores might be better for symmetry)

I've half finished a wrapper for the b16 which adds interprocessor
links, and modified the core so that it can execute code and boot
from an interprocessor link.

If you really need the 18b data width, then we could also increase
the number of instructions in each b16 fetch from three and a bit
to three and a half.

Would you like me to finish it, or would you be interested in
taking over this project?

Paul Rubin

unread,
Mar 21, 2015, 12:16:19 PM3/21/15
to
rickman <gnu...@gmail.com> writes:
> There is no reason why the BBB couldn't be used. But that will never
> be a low power design. Heck, I burnt out a phone charger I was using
> to power an rPi. 0.7 Amps is not enough...

Oh I see. Yes, true. Here are some beefier ones. I'd probably get the
2 amp since they recommend it for the BBB.

http://www.adafruit.com/products/501 1 amp
http://www.adafruit.com/products/1995 2 amp
http://www.adafruit.com/products/1466 4 amp

> Also, the BBB and rPi only have one analog output if I remember
> correctly. That would require an add on card drawing more power.

Oh that's interesting. I vaguely thought there were quite a lot of them
on the BBB and on the more recent rPi's (model B+ etc). Could that
limitation only have been on the original Pi models?

rickman

unread,
Mar 21, 2015, 2:02:49 PM3/21/15
to
Don't quote me on that. I'm trying to remember, but too lazy to look it
up. I know the BBB has one ADC input, but I don't recall any DAC
outputs actually. It may have a PWM output which would not be suitable
at all. Or maybe the video outputs could be usurped, but I'm not sure
they are analog either.

The point is that the GA144 has 5 ADC and 5 DAC built in which is one of
the very nice features. There are 5 edge cores, each with one ADC and
one DAC built in. I believe Chuck has said he should have put the ADC
and the DAC in separate cores so they may do that in the next version of
the GAxxx.

When it comes to power consumption, the GA144 can be right up there with
any of the high end ARM processors. But that is only with all 144 cores
running full tilt and in any app I can think of that would never happen.
With the cores loafing most of the time and running full tilt only
when needed it can be a very low power chip.

--

Rick

Jan Coombs <Jan-54 >

unread,
Mar 21, 2015, 7:03:22 PM3/21/15
to
On Fri, 20 Mar 2015 00:12:44 -0400
rickman <gnu...@gmail.com> wrote:

> On 3/19/2015 7:24 AM, Jan Coombs <Jan-54 wrote:
> > On Tue, 17 Mar 2015 16:26:53 -0400
> > rickman <gnu...@gmail.com> wrote:
> >
> >> On 3/16/2015 3:42 PM, Jan Coombs <Jan-54 wrote:
> >> >
> >> > It's a bit more exciting if the simulation can export
> >> > the design to the FPGA tools, then you are closer to
> >> > having a chip.
> >>
> >> I'm not sure what that means. What design? The chip or the code?
> >
> > Both. The simulator consumes the source code for the
> > hardware design. One file of which contains a simulation
> > model for the target processor code ROM, and also FPGA
> > chip family dependant code to initialize a matching RAM
> > block on the chip.
>
> This is not clear to me at all.

Anton writes a lot, it's his job. I've only read a little of
his work, but treasure a throwaway comment of his:

'Some things have to be seen to be believed, but
Most things have to be believed to be seen'

The hardware development approach and prime tool I use are very
different. If you want to know how I do it, you might have to
believe that the new methods I'm using could be good choices.

I've read our postings through quite a few times, and checked the
link to the tools which I gave, which gives extensive details of
the tool and rationale, but still fail to see what is not clear.

Jan Coombs

rickman

unread,
Mar 21, 2015, 9:14:29 PM3/21/15
to
The statements I was replying to and are quoted above are not clear to
me. I have no idea what is meant by "The simulator consumes the source
code for the hardware design." "Consuming" is not a term I often use
with the design process. Input is more common and is clear to me. Is
that what you mean?

Your description of what I think are the input files to the design
process is not at all clear to me. I have never thought of the design
files as being separated by RAM and ROM. I just don't understand what
you are describing. Are you trying to say there is an HDL file for the
processor and there is a separate file which contains the instruction
opcodes that are to be loaded into the program memory? Perhaps you are
describing a process where the program RAM can be loaded separately from
the processor design to prevent having to change the bitstream file each
time the program changes?

The original remark I didn't understand is:

> It's a bit more exciting if the simulation can export the design to
the FPGA tools, then you are closer to having a chip.

Here I don't understand the idea that a simulator would export a design.
A simulator has a design as its input. The FPGA tools would use that
same design to generate the bitstream for the FPGA. Simulation is just
a way to verifying the design does what you want before you program it
into a chip.

I think you have an image in your mind of what you are talking about and
I can't reconstruct that from what you write.

I thought we were talking about a simulator that is for testing code
written for a given processor. You seem to be talking about a simulator
for the HDL that describes a processor. The former is often much more
efficient at running code while the latter is useful for debugging the
processor design itself.

Are you describing a different development model? Can you be a little
more clear in your use of terms?

BTW, your quote from Anton did not help me understand any of this at all.

--

Rick

JUERGEN

unread,
Mar 22, 2015, 3:38:06 AM3/22/15
to
Rick, I would have to assume you allow for some slack in language.

Simulator written in SW (any language - even Forth), then the same running model transferred to the FPGA.

You could use Impulse with C to Fpga, or rewrite in VHDL/Verilog.

On the PC is running what is required.
- or you partition the design and move it to the FPGA in slices/blocks, the rest on the PC.

Has anybody done a Forth that compiles to FPGA?
C to FPGA has been done a few times - see Impulse and Celoxica.

rickman

unread,
Mar 22, 2015, 6:35:31 PM3/22/15
to
I'm not sure which language you mean, the language you are talking in or
the language you are talking about simulating.

> Simulator written in SW ( any language - even Forth ) then the same
running model transferred to the FPGA.

Why do this? The established method is to code your processor
description in an HDL (as I said, there are several with VHDL and
Verilog being the most popular) and simulate it using an HDL simulator.
At that point you have a complete description of the processor design
which can be synthesized to bits for an FPGA or a layout for a chip design.

What is the reason for abandoning this method? It works well and has
lots of precedent, meaning there are very usable tools available.


> You could use Impulse with C to Fpga, or rewrite in VHDL/Verilog.
>
> On the PC is running what is required.
> - or you partition the design and move it to the FPGA in slices/blocks, the rest on the PC.
>
> Has anybody done a Forth that compiles to FPGA?
> C to FPGA has been done a few times - see Impulse and Celoxica.

I have thought about Forth to generate hardware. It would not be
"descriptive" as in typical HDLs where you describe the behavior and let
the tool figure out the implementation. It would be descriptive by
describing the logical elements and you build hierarchically like a
Forth program. Simulation is done by recompiling with a primitives
library for simulation rather than synthesis. I have not taken the
first step in realizing this.
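Rick's idea above - a structural, hierarchical description where recompiling against a different primitives library switches between simulation and synthesis - can be caricatured in a few lines. This is purely a hypothetical sketch of the concept (no such Forth tool exists); the class and function names are mine, and a netlist back end here just emits text lines rather than real FPGA configuration:

```python
# Sketch of a structural HDL where the primitives library is swappable:
# the same hierarchical description either computes values (simulation)
# or emits netlist lines (a stand-in for synthesis).

class SimPrimitives:
    """Primitives library for simulation: gates compute values."""
    def and2(self, a, b): return a & b
    def xor2(self, a, b): return a ^ b

class NetlistPrimitives:
    """Primitives library for 'synthesis': gates emit netlist lines."""
    def __init__(self):
        self.netlist, self.n = [], 0
    def _wire(self):
        self.n += 1
        return f"w{self.n}"
    def and2(self, a, b):
        w = self._wire(); self.netlist.append(f"AND2 {a} {b} -> {w}"); return w
    def xor2(self, a, b):
        w = self._wire(); self.netlist.append(f"XOR2 {a} {b} -> {w}"); return w

def half_adder(lib, a, b):
    """A hierarchical 'word' built from primitives, like a colon definition."""
    return lib.xor2(a, b), lib.and2(a, b)

# Same description, two back ends:
s, c = half_adder(SimPrimitives(), 1, 1)   # simulate: s=0, c=1
net = NetlistPrimitives()
half_adder(net, "a", "b")                  # 'synthesize': fills net.netlist
```

The point is only that the design description never changes; which library is bound at compile time decides whether you get behavior or structure.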

--

Rick

Paul Rubin

Mar 22, 2015, 7:18:16 PM
rickman <gnu...@gmail.com> writes:
> I have thought about Forth to generate hardware. It would not be
> "descriptive" as in typical HDLs where you describe the behavior and
> let the tool figure out the implementation. It would be descriptive
> by describing the logical elements and you build hierarchically like a
> Forth program.

That is basically what OKAD is, I believe. The target is custom silicon
rather than FPGA equations.

Bernd Paysan

Mar 22, 2015, 8:06:09 PM
FPGAs have primitives like LUTs, flip-flops, and routing resources. If you
want to go the bottom-up way on the FPGA, you would describe LUT functions
and build your more complex logic by assembling more LUTs.
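To make Bernd's point concrete: a 4-input LUT is nothing but a 16-entry truth table, and bigger functions are just more LUTs wired together. A rough Python sketch (the function names and the choice of a full adder as the example are mine, and the 16-bit init-value encoding mirrors the usual FPGA convention rather than any specific vendor's):

```python
# A 4-input LUT evaluated from a 16-bit truth-table init value,
# with input 'a' as the least-significant select bit.
def lut4(truth_table, a, b, c, d):
    index = a | (b << 1) | (c << 2) | (d << 3)
    return (truth_table >> index) & 1

# Derive a LUT init value from the desired boolean function.
def init_value(fn):
    return sum(fn(i & 1, (i >> 1) & 1, (i >> 2) & 1, (i >> 3) & 1) << i
               for i in range(16))

SUM_LUT   = init_value(lambda a, b, cin, _: a ^ b ^ cin)
CARRY_LUT = init_value(lambda a, b, cin, _: (a & b) | (cin & (a ^ b)))

def full_adder(a, b, cin):
    """More complex logic assembled from two LUT4 primitives."""
    s    = lut4(SUM_LUT, a, b, cin, 0)
    cout = lut4(CARRY_LUT, a, b, cin, 0)
    return s, cout
```

Synthesis tools do essentially this in reverse: boil your logic down to init values and then worry about placement and routing.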

rickman

Mar 22, 2015, 10:20:14 PM
On 3/22/2015 8:06 PM, Bernd Paysan wrote:
> Paul Rubin wrote:
>
>> rickman <gnu...@gmail.com> writes:
>>> I have thought about Forth to generate hardware. It would not be
>>> "descriptive" as in typical HDLs where you describe the behavior and
>>> let the tool figure out the implementation. It would be descriptive
>>> by describing the logical elements and you build hierarchically like a
>>> Forth program.
>>
>> That is basically what OKAD is, I believe. The target is custom silicon
>> rather than FPGA equations.
>
> FPGAs have primitives like LUTs, flip-flops, and routing resources. If you
> want to go the bottom up way on the FPGA, you would describe LUT functions,
> and build your more complex logic from assembling more LUTs.

Back in the old days we did FPGA design using schematic drawings. They
were usually hierarchical. More advanced drawings were essentially
macros with parameters that could specify bus width, relative placement
and other important features. One person I knew was able to squeeze out
the maximum performance by doing hand placement using these macros. He
initially resisted the change over to HDL until the FPGA apps engineers
showed him he could do the same sort of hierarchical design in VHDL
complete with all the Xilinx specific parameters. He never looked back,
lol.

I'm not crazy about logic inference (letting the compiler choose how to
implement your code). But that is how 99% of designs get done. It can
be a lot of work hand crafting FPGAs. I can't see the benefit in
designing your own language for FPGA design. I expect custom chip
design is different and for a simple chip, hand crafting can get you a
lot. But there is no way you could hand craft a large complex chip
unless it was very regular like the GA144 or an FPGA. Heck, even FPGAs
are getting pretty complex these days with many specialized function
blocks and hard cores. I don't suppose they hand craft much of that.

If the GA144 had some hard macro functions like 480 Mbps USB or 100 Mbps
Ethernet, it would be a much more useful device, but would have been
much harder to design.

--

Rick

rickman

Mar 22, 2015, 10:38:11 PM
On 3/12/2015 12:54 AM, polymorph self wrote:
> Can anyone speak to some forth working in the field apps greenarrays has achieved?
>
> What kind of projects are they up to?

Oh yeah, I just remembered a project I was discussing with another
engineer for a control loop processor. It needed 5 ADCs and DACs or
something like that. He selected a four-channel ADC with a 48 kHz
sample rate and used two of them harnessed to an ARM Cortex-M4. He
wasn't sure the I/O would be fast enough, and I realized the GA144 might
do the entire thing in one device. One limitation was his perceived need
for high resolution on the ADC, which the GA144 might not quite have
been able to achieve.

The real deal killer was that he wanted programming talent that lived
within 10 miles of his house so they could work together at a moment's
notice. Oh well... But the GA144 might serve very well in such a
controller app. They typically have low I/O count requirements but need
a fair amount of processing. Here the 18-bit word size would be a
benefit too.

--

Rick

Jason Damisch

Mar 23, 2015, 11:02:10 AM
On Sunday, March 22, 2015 at 7:20:14 PM UTC-7, rickman wrote:

> If the GA144 had some hard macro functions like 480 Mbps USB or 100 Mbps
> Ethernet, it would be a much more useful device, but would have been
> much harder to design.

Gripe, Moan, Complain,

Let's see -you- create your own circuit design system,
your own microprocessor, and then your own OS with your
own applications and then come to the realization that....

You can please some of the people most of the time, and
you can please most of the people some of the time, but
you can't please all of the people all of the time.

That is, unless you are Sun/Oracle. Then the gods love
you and everybody just agrees.

Jason

rickman

Mar 23, 2015, 4:13:47 PM
No I didn't create my own circuit design system,
I didn't create my own microprocessor...(well actually I did, but we
won't get into that)
I didn't create my own OS...

But the devices I build don't sit on the shelf wanting someone to use
them either...

I think that most people discussing the GA144 in a negative light are
really lamenting that it fell so short of what it could have been.

Sometimes I think of designing as exploring a large mansion with many,
many rooms. You find rooms that have no lights, some rooms have no
furniture, some rooms have no heat. To live in a room you need to find
one that provides everything you need and does it well. Exploring and
finding a room that has some really nice artwork on the wall doesn't
make up for the lack of running water anywhere in the hallway. You
can't live in it.

--

Rick

Jan Coombs <Jan-54 >

Apr 8, 2015, 4:51:15 AM
On Sat, 21 Mar 2015 21:14:28 -0400
rickman <gnu...@gmail.com> wrote:

> On 3/21/2015 7:03 PM, Jan Coombs <Jan-54 wrote:
> > On Fri, 20 Mar 2015 00:12:44 -0400
> > rickman <gnu...@gmail.com> wrote:
> >
> >> On 3/19/2015 7:24 AM, Jan Coombs <Jan-54 wrote:
> >>> On Tue, 17 Mar 2015 16:26:53 -0400
> >>> rickman <gnu...@gmail.com> wrote:
> >>>
> >>>> On 3/16/2015 3:42 PM, Jan Coombs <Jan-54 wrote:
> >>>> >
> >>>> > It's a bit more exciting if the simulation can export
> >>>> > the design to the FPGA tools, then you are closer to
> >>>> > having a chip.
> >>>>
> >>>> I'm not sure what that means. What design? The chip or the code?

Sorry, here are two other people's descriptions of the MyHDL
design flow: a pretty one [1] and a professional one [2].

Offer:

I'd like to share stack processor designs, and would be
happy to package these so that they can be used with no
hardware design tool knowledge.

These can be run in simulation on a PC, or on a cheap FPGA
development board, provided it has a USB comm port, and I
have the tools to target and program it.

I need to finish some work to connect the b16 or j1 tools,
written in gforth, to the processor, so please book early
for next winter!

Jan Coombs.
--

[1] http://old.myhdl.org/doku.php/projects:python_hardware_processor
[2] http://www.jandecaluwe.com/hdldesign/digmac.html

rickman

Apr 10, 2015, 12:47:20 AM
On 4/8/2015 4:51 AM, Jan Coombs <Jan-54 wrote:
> On Sat, 21 Mar 2015 21:14:28 -0400
> rickman <gnu...@gmail.com> wrote:
>
>> On 3/21/2015 7:03 PM, Jan Coombs <Jan-54 wrote:
>>> On Fri, 20 Mar 2015 00:12:44 -0400
>>> rickman <gnu...@gmail.com> wrote:
>>>
>>>> On 3/19/2015 7:24 AM, Jan Coombs <Jan-54 wrote:
>>>>> On Tue, 17 Mar 2015 16:26:53 -0400
>>>>> rickman <gnu...@gmail.com> wrote:
>>>>>
>>>>>> On 3/16/2015 3:42 PM, Jan Coombs <Jan-54 wrote:
>>>>>> >
>>>>>> > It's a bit more exciting if the simulation can export
>>>>>> > the design to the FPGA tools, then you are closer to
>>>>>> > having a chip.
>>>>>>
>>>>>> I'm not sure what that means. What design? The chip or the code?
>
> Sorry, here's two other people's description of the MyHDL
> design flow, a pretty one [1], and a professional one [2]

I'm a little unclear still. I think there is a Python program to run on
a soft core, which is compiled by another Python program running on a PC.
There is also a MyHDL program which describes the soft core that the
first Python program runs on. It is not clear where each program runs
and how, nor what is simulated and where. I suppose the program for the
soft core can be loaded into the simulated RAM of the MyHDL soft core
when simulated, but I don't know enough about MyHDL to know if it
includes a simulator or not.

Once the simulation is running correctly the entire design is converted
to another HDL (Verilog or VHDL) and synthesized for the target device.

I'm not at all clear why this is better than the more conventional
method of coding the target processor application in whatever language
you want where the executable code is loaded into the target RAM for
simulation and/or execution. I have simulated such code in the VHDL
simulator. Seems simple enough to me.
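The conventional flow described here - executable code loaded into the target's RAM and then stepped by the simulated (or real) core - can be caricatured with a toy machine. This is a made-up 3-instruction stack core for illustration only, not the j1, b16, or any real design:

```python
# Toy stack core: a host-side "assembled" program is loaded into the
# simulated RAM, then the core fetches and executes from that RAM.

PUSH, ADD, HALT = 0, 1, 2   # made-up opcodes

def run(program, max_steps=100):
    ram = list(program)      # program image loaded into simulated RAM
    stack, pc = [], 0
    for _ in range(max_steps):
        op = ram[pc]
        if op == PUSH:       # push the literal stored in the next cell
            stack.append(ram[pc + 1]); pc += 2
        elif op == ADD:      # pop two, push their sum
            b, a = stack.pop(), stack.pop(); stack.append(a + b); pc += 1
        elif op == HALT:     # stop and expose the stack for inspection
            return stack
    raise RuntimeError("no HALT reached")

# Compute 2 + 3 on the simulated core:
result = run([PUSH, 2, PUSH, 3, ADD, HALT])
```

In a VHDL simulation the same idea holds, just with the RAM initialized from a file and the core described in the HDL instead of Python.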

I guess I don't see a problem with the existing approach. But then I
don't code in C or python. I code in assembler or Forth.


> Offer:
>
> I'd like to share stack processor designs, and would be
> happy to package these so that they can be used with no
> hardware design tool knowledge.
>
> These can be run in simulation on a PC, or on a cheap FPGA
> development board, provided it has a USB comm port, and I
> have the tools to target and program it.
>
> I need to finish some work to connect the b16 or j1 tools,
> written in gforth, to the processor, so please book early
> for next winter!
>
> Jan Coombs.
>


--

Rick