Embedded: how low can it Go?


folkn...@googlemail.com

unread,
Nov 11, 2009, 11:56:57 AM
to golang-nuts
I notice mentions of ARM along with x86 targeting, which is cool, but I
can't help wondering just how low Go can be targeted.

Could it, for example, be used down at the ARM Cortex-M3 or M0 level,
where flash and RAM are very limited?

It would be wonderful to have safer alternatives to C/C++ in the
lower embedded markets.

Not to mention the messaging semantics of send and receive and other
cool features etc..

Kai Backman

unread,
Nov 11, 2009, 2:43:01 PM
to folkn...@googlemail.com, golang-nuts
On Wed, Nov 11, 2009 at 8:56 AM, folkn...@googlemail.com
<folkn...@googlemail.com> wrote:
> Could it, for example, be used down at the ARM Cortex-M3 or M0 level,
> where flash and RAM are very limited?

The code generator that 5g uses was originally developed for Plan 9 and
should be able to target quite old hardware. I've since introduced one
incompatibility by using strex/ldrex for cas, but that's quite easy to
fix. A bigger deficiency is that there is currently no soft-float
support, but you will be fine as long as you don't use floats. Soft-float
support is planned but not yet done.

I've been looking at running on an AT91SAM7S256, which has 256 KB of
flash and 64 KB of RAM. I'll try to shoehorn at least part of the
runtime into that space.

> It would be wonderful to have safer alternatives to C/C++ in the
> lower embedded markets.

Concur .. :-)

Kai

Ian Lance Taylor

unread,
Nov 12, 2009, 1:14:09 AM
to folkn...@googlemail.com, golang-nuts
"folkn...@googlemail.com" <folkn...@googlemail.com> writes:

> I notice mentions of ARM along with x86 targeting, which is cool, but I
> can't help wondering just how low Go can be targeted.
>
> Could it, for example, be used down at the ARM Cortex-M3 or M0 level,
> where flash and RAM are very limited?

In principle, yes. But Go does carry a sizable runtime compared to C,
so one would hit limits with Go before hitting them with C.

Ian

a...@folknology.com

unread,
Nov 12, 2009, 10:41:57 AM
to golang-nuts


On Nov 12, 6:14 am, Ian Lance Taylor <i...@google.com> wrote:
C++ is often cut down to size by feature removal for the low-end
embedded market, so I wonder whether Go could be reduced for such targets?

How big would the runtime be, and what would be good candidates for
leaving out on a small target? Any ideas?

regards
Al

Jan Mennekens

unread,
Nov 12, 2009, 11:12:30 AM
to golang-nuts
First of all, allow me to say I'm *very* enthusiastic about the 'go' language -- I have been waiting since my Occam development years for something similar. Especially the revival of CSP in a usable form is great! I really hope this will lead somewhere.

Me too, I would be interested in using 'go' for embedded development -- for small systems, given the right runtime, it might even replace the complete OS.
In order to do that, I think we need (correct me if I'm wrong):
- an easily portable, bootable runtime (and reasonably small, as remarked before)
- priorities in goroutines (real-time behavior is too much to ask, I know, but some form of prioritization would be needed, IMHO)
- a way to couple channels to hardware, e.g. replace an interrupt or timer with a channel input (à la Occam)
- using channels to do inter-processor communications

I haven't studied the libraries extensively yet, so I don't know what's possible, or how much work it would represent.

Any thoughts?

Jan

Ian Lance Taylor

unread,
Nov 12, 2009, 6:04:20 PM
to a...@folknology.com, golang-nuts
"folkn...@googlemail.com" <a...@folknology.com> writes:

> C++ is often cut down to size by feature removal for the low-end
> embedded market, so I wonder whether Go could be reduced for such targets?
>
> How big would the runtime be, and what would be good candidates for
> leaving out on a small target? Any ideas?

The obvious thing to leave out would be type reflection.

Ian

Ian Lance Taylor

unread,
Nov 12, 2009, 6:17:24 PM
to Jan Mennekens, golang-nuts
Jan Mennekens <jan.me...@gmail.com> writes:

> First of all, allow me to say I'm *very* enthusiastic about the 'go' language -- I have been waiting since my Occam development years for something similar. Especially the revival of CSP in a usable form is great! I really hope this will lead somewhere.
>
> Me too, I would be interested in using 'go' for embedded development -- for small systems, given the right runtime, it might even replace the complete OS.
> In order to do that, I think we need (correct me if I'm wrong)
> - an easily portable, bootable runtime (and reasonably small, as remarked before)

Yes.


> - priorities in goroutines (real-time behavior is too much to ask, I know, but some form of prioritization would be needed, IMHO)

Yes, most likely.


> - a way to couple channels to hardware, e.g. replace an interrupt or timer with a channel input (à la Occam)

Clearly there needs to be some access to hardware, but I don't know
that a channel represents the best model.


> - using channels to do inter-processor communications

Yes, assuming they share memory.


> I haven't studied the libraries extensively yet, so I don't know what's possible, or how much work it would represent.

Well, it's quite a bit of work. The current runtime assumes that it
is running on top of a Unix kernel. I don't think you would have to
change much other than the runtime, but a substantial part of the
runtime would have to be rewritten.

Ian

LarryP

unread,
Nov 17, 2009, 10:35:01 AM
to golang-nuts
Greetings all,

IMHO, the first thing needed to make Go work in a small-footprint/
bare-metal application is implementing malloc sensibly for the target
environment. (Caveat: I note that a sensible implementation is a highly
subjective thing.)

A casual glance at Go's memory allocator shows that it expects lots of
memory to play with (and some hard-coded numbers for that). IMHO,
those numbers ought to be collected into a single place (a .h file),
and probably conditionally defined (#ifdef default, #ifdef small,
#ifdef yourVeryOwnMemModel). I'm not wild about profiles, but
sufficient control to make Go work on smaller targets would be quite
useful.

What is/should be Go's behavior when malloc fails?
malloc's behavior is defined (it returns a NULL pointer), but what does
Go's runtime do when malloc returns a NULL pointer?
Does the language spec address this?


Stacks/segmentation/memory management:
It's less clear to me whether memory management is required. My hope
(and guess, based on the IMHO C-ish roots) is that Go could be made to
work without memory management. But a bare-metal implementation may
involve either constraining stacks (so they don't need to be
segmented), or somehow indicating to the runtime environment which
stacks are/aren't allowed to segment.
IMHO, the out-of-stack behavior would also need to be nailed down.

What do others (esp. those interested in embedded applications of go)
think?

-- Larry



On Nov 12, 6:17 pm, Ian Lance Taylor <i...@google.com> wrote:
> Jan Mennekens <jan.mennek...@gmail.com> writes:
> > First of all, allow me to say I'm *very* enthusiastic about the 'go' language -- I have been waiting since my Occam development years for something similar. Especially the revival of CSP in a usable form is great! I really hope this will lead somewhere.
>
> > Me too, I would be interested in using 'go' for embedded development -- for small systems, given the right runtime, it might even replace the complete OS.

Lars Pensjö

unread,
Nov 17, 2009, 10:47:01 AM
to golang-nuts
Usually, a good principle for small embedded systems is not to use
dynamic memory allocation. If so, you don't need malloc. Otherwise it
can be hard to prove the correctness of a program when memory is
scarce.

That way you won't spend time doing garbage collection, another issue
if you want improved real-time behavior.

Rick R

unread,
Nov 17, 2009, 10:55:41 AM
to golang-nuts
On Tue, Nov 17, 2009 at 10:47 AM, Lars Pensjö <lars....@gmail.com> wrote:
> Usually, a good principle for small embedded systems is not to use
> dynamic memory allocation. If so, you don't need malloc. Otherwise it
> can be hard to prove the correctness of a program when memory is
> scarce.
>
> That way you won't spend time doing garbage collection, another issue
> if you want improved real-time behavior.


Given that Go puts its stacks and closures on the heap, it will be hard to avoid malloc.

LarryP

unread,
Nov 17, 2009, 11:03:05 AM
to golang-nuts

On Nov 17, 10:47 am, Lars Pensjö <lars.pen...@gmail.com> wrote:
> Usually, a good principle for small embedded systems is to not use
> dynamic memory allocation.
<snip>

IMHO, Go is hard to use (effectively) without triggering some run-time
allocation of memory.
Many of the Go libraries seem to assume the Go memory allocator is
there. I'm not convinced that Go without some (albeit limited)
version of memory allocation would be workable (or worthwhile vs. C).
If there are ways around this, I'm all ears.

-- Larry

James Snyder

unread,
Nov 23, 2009, 12:29:21 AM
to golang-nuts
I would disagree to some extent that having to do run-time allocation
of memory would be the death of making this (or other languages) work
on embedded targets. I stumbled upon this thread because over the
past several months I've been working with a project that runs a
lightly modified version of Lua (REPL, compiler, VM, essentially the
whole standard Lua 5.1.4) on ARM7TDMI, Cortex-M3, AVR-32 and a few
other architectures (including x86): http://www.eluaproject.net/
While I can't speak extensively about the ins and outs of using
particular allocators on the platform, Lua runs just fine in 64kB of
RAM (even better with some tricks to keep immutable data in flash).
We're not talking about writing hugely complicated programs, but these
are embedded targets anyways :-)

We're using newlib's malloc in most cases, dlmalloc for cases where
we've got extra RAM that isn't on-chip, and have a few other
experimental allocators that one of the developers has been testing.

I've not looked at the go sources, but based on some earlier comments
in this thread I suspect that the expectations of standard UNIX
infrastructure may be a larger problem. Either that, or how much
memory overhead the runtime has. What OS features are expected
besides malloc?

James Snyder

unread,
Nov 23, 2009, 8:30:53 PM
to Ian Lance Taylor, golang-nuts
On Mon, Nov 23, 2009 at 9:31 AM, Ian Lance Taylor <ia...@google.com> wrote:
> James Snyder <jbsn...@gmail.com> writes:
>
>> I've not looked at the go sources, but based on some earlier comments
>> in this thread I suspect that the expectations of standard UNIX
>> infrastructure may be a larger problem.  Either that, or how much
>> memory overhead the runtime has.  What OS features are expected
>> besides malloc?
>
> The current runtime and syscalls packages would certainly have to be
> extensively overhauled to support a low level embedded system.  They
> expect to be able to clone new threads of execution when running a
> system call which may block.  This assumes an underlying scheduler.

I was afraid of that, given the whole concurrency focus :-) I assume
each thread needs a stack of its own and that some sort of green
threading isn't much of an option? Having multiple threads, each with
its own stack, would be a quick and easy way to fill up all available
RAM on one of these devices.

In poking around for information related to Go & threads, I've
stumbled upon the SplitStacks work that you're doing for GCC. I wonder
if this might have some positive implications not only for huge thread
counts and address space limitations on 32-bit architectures, but also
for low memory, low thread-count situations?

I should probably also point out, in context with the above, that many
of these really low level embedded environments are MMU-less :-)


> Core packages like os and net expect underlying support for a file
> system and for network connections.  Those would have to be supported
> or stubbed out.

Handling certain system calls, including those related to filesystems
isn't too hard with newlib:
http://www.sourceware.org/newlib/libc.html#Stubs

As far as networking goes, there are light weight TCP/IP stacks for
microcontrollers like uIP
(http://www.sics.se/~adam/uip/index.php/Main_Page) and lwip
(http://www.sics.se/~adam/lwip/).

>
> Ian
>



--
James Snyder
Biomedical Engineering
Northwestern University
jbsn...@gmail.com
PGP: http://fanplastic.org/key.txt
Phone: (847) 448-0386

Pete Wilson

unread,
Nov 25, 2009, 12:22:05 PM
to golang-nuts
My experience is that a key feature of embedded systems is handling
interrupts (duh!); and that the trouble with interrupt routines is
that you can't use your favourite lock/mutex/message-passer mechanisms
to have the ISRs play nice with the rest of the software.

This only leads to tears.

I'd *much* prefer to have interrupts look like messages (which happen
to be sourced from hardware) delivered via channels to ordinary
goroutines.

But this *does* require priorities - a goroutine which fields
'hardware messages' must be preferentially and preemptively scheduled
above others (and quickly, too). Priorities don't need to be
language'd, but a standard 'embedded' package would be a Good Thing.

Anyone want to take a stab at that? Can we "do embedded" with a set of
interfaces and then slowly try to implement and see where, if
anywhere, we need language changes rather than runtime changes?

The other 'obvious big problem' is non-determinism introduced by GC.
I suspect that if the only 'allocation' done is done implicitly by the
system (for goroutine stacks, for example) - as would be typical in
many OS-free embedded systems - the problem is fairly limited; the
runtime can do a traditional free when an appropriate return is
encountered. So, yes: a good malloc.

One way or another, an embedded Go also has to be able to communicate
between 'processes' (multiple address spaces); even if a given core or
controller doesn't support an MMU, an embedded system frequently has
multiple intelligences which need to communicate, so providing Go channel
communication between them (probably with split channel endpoints -
one in each address space, or using a hardware message-forwarder) is needed.

And of course it (the embedded program, not the toolchain) *has* to be
able to run on bare metal, or bare metal with (something like) the
MCAPI layer welded onto it.

Final point: yes, this isn't what go was invented for, but the space
*does* need an efficient, safe language/runtime which supports
concurrency and communication, and doing an 'embedded go' is more
likely to achieve acceptance and success than creating yet another
language (for our attempt, look at plasma in http://opensource.freescale.com/fsl-oss-projects/)

-- Pete



On Nov 12, 5:17 pm, Ian Lance Taylor <i...@google.com> wrote:
> Jan Mennekens <jan.mennek...@gmail.com> writes:
> > First of all, allow me to say I'm *very* enthusiastic about the 'go' language -- I have been waiting since my Occam development years for something similar. Especially the revival of CSP in a usable form is great! I really hope this will lead somewhere.
>
> > Me too, I would be interested in using 'go' for embedded development -- for small systems, given the right runtime, it might even replace the complete OS.

Bob Cunningham

unread,
Nov 26, 2009, 4:37:48 AM
to Pete Wilson, golang-nuts
> On Nov 12, 5:17 pm, Ian Lance Taylor<i...@google.com> wrote:
>> Jan Mennekens<jan.mennek...@gmail.com> writes:
>>> First of all, allow me to say I'm *very* enthusiastic about the 'go' language -- I have been waiting since my Occam development years for something similar. Especially the revival of CSP in a usable form is great! I really hope this will lead somewhere.
>>
>>> Me too, I would be interested in using 'go' for embedded development -- for small systems, given the right runtime, it might even replace the complete OS.
>>> In order to do that, I think we need (correct me if I'm wrong)
>>> - an easily portable, bootable runtime (and reasonably small, as remarked before)
>>
>> Yes.
>>
>>> - priorities in goroutines (real-time behavior is too much to ask, I know, but some form of prioritization would be needed, IMHO)
>>
>> Yes, most likely.
>>
>>> - a way to couple channels to hardware, e.g. replace an interrupt or timer with a channel input (à la Occam)
>>
>> Clearly there needs to be some access to hardware, but I don't know
>> that a channel represents the best model.
>>
>>> - using channels to do inter-processor communications
>>
>> Yes, assuming they share memory.
>>
>>> I haven't studied the libraries extensively yet, so I don't know what's possible, or how much work it would represent.
>>
>> Well, it's quite a bit of work. The current runtime assumes that it
>> is running on top of a Unix kernel. I don't think you would have to
>> change much other than the runtime, but a substantial part of the
>> runtime would have to be rewritten.
>>
>> Ian

<Moved top-post here for reply>

On 11/25/2009 09:22 AM, Pete Wilson wrote:
> My experience is that a key feature of embedded systems is handling
> interrupts (duh!); and that the trouble with interrupt routines is
> that you can't use your favourite lock/mutex/message-passer mechanisms
> to have the ISRs play nice with the rest of the software.
>
> This only leads to tears.
>
> I'd *much* prefer to have interrupts look like messages (which happen
> to be sourced from hardware) delivered via channels to ordinary
> goroutines.
>
> But this *does* require priorities - a goroutine which fields
> 'hardware messages' must be preferentially and preemptively scheduled
> above others (and quickly, too). priorities don't need to be
> language'd, but a standard 'embedded' package would be a Good Thing.

An issue that is more important to me is latency: Servicing an interrupt late may easily be worse than not servicing it at all. Having a varying latency can lead to bugs that are maddeningly difficult to uncover.

But missing an interrupt is also pretty bad: A delivery guarantee and some sort of failure check may be needed if the delivery system isn't sufficiently robust.

The #1 cause of late and lost interrupts is overly-long interrupt handler routines. Only the minimum possible work should be performed in the interrupt state, so the system will be ready to handle the next interrupt as promptly as possible.

Taken together, these mean that interrupt messages must be fast, reliable, and consistent.

> Anyone want to take a stab at that? Can we "do embedded" with a set of
> interfaces and then slowly try to implement and see where, if
> anywhere, we need language changes rather than runtime changes?

Interrupts cause a change of context, so we'd need enough C/asm code to send the appropriate goroutine a message, schedule that goroutine for immediate resumption, then leave the interrupt context.

> The other 'obvious big problem' is non-determinism introduced by gc.
> I suspect that if the only 'allocation' done is done implicitly by the
> system (for goroutine stacks, for example) - as would be typical in
> many OS-free embedded systems - the problem is fairly limited; the
> runtime can do a traditional free when an appropriate return is
> encountered. So, yes: a good malloc.

There are several viable approaches to handling GC in a real-time/embedded context, most include the following features:
1. GC must be interruptable, resumable (preferably), and restartable.
2. GC is normally disabled, and is enabled only by the lowest-priority goroutine (the "idle" goroutine).
3. It is best if the GC can be run in small increments, on smaller contexts, rather than globally. This may mean using multiple small heaps, and GC-ing them one at a time.

The malloc can be brain-damaged simple. In a real-time/embedded system, memory is *never* returned to the system after it has been allocated to the application. So, in effect, you allocate all the memory you need to the application one time, at startup, and let GC handle it after that.

Better yet, careful design can completely remove the need for any/all GC runs in many real-time/embedded systems. I once worked on a Java app that ran fine for months with GC disabled.

> One way or another, an embedded go also has to be able to communicate
> between 'processes' (multiple address spaces); even if a given core or
> controller doesn't support an MMU, an embedded system frequently has
> multiple intelligences which need to communicate; providing go channel
> communication between them (probably with split channel endpoints -
> one in each address space, or using a hardware message-forwarder).

Without an MMU, this is a non-issue, since just about any messaging system will work. Copying should be avoided to ensure minimal latency, which means single-write/no-copy messaging, which in turn means either passing around buffer pointers behind the scenes, or writing directly to the destination goroutine's address space (easy without an MMU!).

> And of course it (the embedded program, not the toolchain) *has* to be
> able to to run on bare metal, or bare metal with (something like) the
> MCAPI layer welded onto it

Bootstrapping is always needed: I doubt any high-level compiler will ever issue the instructions needed to configure and handle interrupts, and to switch and restore contexts. Something more than a PC-style BIOS will be needed: OpenEFI or OpenFirmware/OpenBoot should do nicely!

I don't see any obvious blocks that would prevent going directly from OpenEFI/OpenFirmware/OpenBoot to Go code. Each is more than powerful enough to set up whatever initial environment is needed by the Go runtime.

> Final point: yes, this isn't what go was invented for, but the space
> *does* need an efficient, safe language/runtime which supports
> concurrency and communication, and doing an 'embedded go' is more
> likely to achieve acceptance and success than creating yet another
> language (for our attempt, look at plasma in http://opensource.freescale.com/fsl-oss-projects/)

One thing to consider is that Go has some nice features that make it useful for creating domain-specific languages. We may want to run a tailored version of Go that is optimized for embedded use. Call it "RT-Go".

Remember, even in an embedded real-time system, it is normally the case that much of the code can run outside of real-time. Use "RT-Go" only where it is needed, and Generic Go everywhere else (such as for the GUI).

The first step would be to write an RT system simulator for Go, and see how Go behaves in that environment (running a Go runtime under Go). Determine which language features present issues, and which packages need updating or replacing. Only after all that has been tested, debugged and thoroughly analyzed would it be worth considering a hardware port, so work on bootstrapping the Go runtime can proceed separately, in parallel.

Pete Wilson

unread,
Nov 26, 2009, 12:57:26 PM
to golang-nuts
I think we're in "violent agreement" on the fundamentals of most of
these points.

Behind my mutterings on interrupts-by-messages was a desire to see
better hardware (that is, hardware which thinks it's sending/receiving
a message rather than "raising an interrupt" or "servicing an
interrupt"), both in the processor and in "interrupt controllers",
along with a much better programming paradigm. No interrupts, just
messages. In both hardware and software.
Yes; and almost as good a description of "the hardware should send a
message rather than generate an interrupt" as one could hope for.

>
> > Anyone want to take a stab at that? Can we "do embedded" with a set of
> > interfaces and then slowly try to implement and see where, if
> > anywhere, we need language changes rather than runtime changes?
>
> Interrupts cause a change of context, so we'd need enough C/asm code to send the appropriate goroutine a message, schedule that goroutine for immediate resumption, then leave the interrupt context.

I had in mind eventual migration to a slightly different hardware
model. We'll need to send messages between processors. Good hardware
(cheap, not overcomplex) will expedite and simplify this. Same
mechanisms can be used for 'messages from hardware devices'.

In the interim, yes, gaskets will be needed.
>
> > The other 'obvious big problem' is non-determinism introduced by gc.
> > I suspect that if the only 'allocation' done is done implicitly by the
> > system (for goroutine stacks, for example) - as would be typical in
> > many OS-free embedded systems - the problem is fairly limited; the
> > runtime can do a traditional free when an appropriate return is
> > encountered. So, yes: a good malloc.
>
> There are several viable approaches to handling GC in a real-time/embedded context, most include the following features:
> 1. GC must be interruptable, resumable (preferably), and restartable.
> 2. GC is normally disabled, and is enabled only by the lowest-priority goroutine (the "idle" goroutine).
> 3. It is best if the GC can be run in small increments, on smaller contexts, rather than globally.  This may mean using multiple small heaps, and GC-ing them one at a time.
>
> The malloc can be brain-damaged simple.  In a real-time/embedded system, memory is *never* returned to the system after it has been allocated to the application.  So, in effect, you allocate all the memory you need to the application one time, at startup, and let GC handle it after that.
>
> Better yet, careful design can completely remove the need for any/all GC runs in many real-time/embedded systems.  I once worked on a Java app that ran fine for months with GC disabled.

I think we're in violent agreement.

>
> > One way or another, an embedded go also has to be able to communicate
> > between 'processes' (multiple address spaces); even if a given core or
> > controller doesn't support an MMU, an embedded system frequently has
> > multiple intelligences which need to communicate; providing go channel
> > communication between them (probably with split channel endpoints -
> > one in each address space, or using a hardware message-forwarder).
>
> Without an MMU, this is a non-issue, since just about any messaging system will work.  Copying should be avoided to ensure minimal latency, which means single-write/no-copy messaging, which in turn means either passing around buffer pointers behind the scenes, or writing directly to the destination goroutine's address space (easy without an MMU!).

Nope, I was unclear. I was trying to point out that sending messages
between processors in different chips without shared memory is
similar to sending messages between separate address spaces. Even
un-MMU'd devices need to send messages between chips. Most
(over-generalisation) low-end machines don't have shared memory, or
don't have cache coherence, or have shared chunks of memory with funny
properties.
>
> > And of course it (the embedded program, not the toolchain) *has* to be
> > able to to run on bare metal, or bare metal with (something like) the
> > MCAPI layer welded onto it
>
> Bootstrapping is always needed:  I doubt any high-level compiler will ever issue the instructions needed to configure and handle interrupts, and to switch and restore contexts.  Something more than a PC-style BIOS will be needed: OpenEFI or OpenFirmware/OpenBoot should do nicely!

Again, I'm looking forward to the glorious day when systems think that
messages and goroutines and channels (or whatever melange we end up
with) are as basic and fundamental to systems design as interrupts,
functions, loops, and the supporting operations in CPUs and support
stuff are today.

Throw your mind waaaaayyyy back to the Inmos transputer. It provided
(sloppy thinking) a set of facilities which enabled one to have
thousands of communicating threads, arguing the one with the other via
message exchange, scheduled (with a horribly primitive two-level
scheduler, along with timeslicing on backward branches for the lower
priority) all sans any "RTOS", since the ops were in the hardware
(well, microcode :-)

The go universe is richer (because of libraries, reflection etc) than
that universe, but a nice proof of possibility.

>
> I don't see any obvious blocks that would prevent going directly from OpenEFI/OpenFirmware/OpenBoot to Go code.  Each is more than powerful enough to set up whatever initial environment is needed by the Go runtime.
>
> > Final point: yes, this isn't what go was invented for, but the space
> > *does* need an efficient, safe language/runtime which supports
> > concurrency and communication, and doing an 'embedded go' is more
> > likely to achieve acceptance and success than creating yet another
> > language (for our attempt, look at plasma in http://opensource.freescale.com/fsl-oss-projects/)
>
> One thing to consider is that Go has some nice features that make it useful for creating domain-specific languages.  We may want to run a tailored version of Go that is optimized for embedded use.  Call it "RT-Go".
>
> Remember, even in an embedded real-time system, it is normally the case that much of the code can run outside of real-time.  Use "RT-Go" only where it is needed, and Generic Go everywhere else (such as for the GUI).

GUI??? my engine controller don't need no stinking GUI!!
But yes.

>
> The first step would be to write an RT system simulator for Go, and see how Go behaves in that environment (running a Go runtime under Go).  Determine which language features present issues, and which packages need updating or replacing.  Only after all that has been tested, debugged and thoroughly analyzed would it be worth considering a hardware port, so work on bootstrapping the Go runtime can proceed separately, in parallel.

Ah. Experiment, *then* define? Refreshing!

Thanks for thoughts

-- Pete

a...@folknology.com

unread,
Dec 11, 2009, 7:50:56 AM
to golang-nuts
You might be interested in what XMOS have been up to with both their
hardware and their programming language XC, which substitutes events for
interrupts and guarantees latencies via hardware-based round-robin
threads/processes. Incidentally, they also use a CSP concurrency
model; well worth checking out.

http://www.xmos.com/support/documentation
http://www.xmos.com/published/xc_en (PDF)

regards
Al

roger peppe

unread,
Dec 11, 2009, 8:31:58 AM
to a...@folknology.com, golang-nuts
2009/12/11 folkn...@googlemail.com <a...@folknology.com>:
> http://www.xmos.com/published/xc_en (PDF)

that's interesting. very much occam-influenced.

i particularly like their select functions - i could see some version of them
working in go - in particular they could make it possible
to use select on private data types without exposing the
underlying channel.

timer channels would be great in go too.

it's also interesting how many restrictions they have
on channel use (e.g. one-to-one only, receive-only select),
and i wonder how necessary they are to get decent
performance and/or h/w channel capability.

perhaps some future version of go might
go that route in some respects. (e.g. you can get
a chan on a h/w port but you only get the reader (<-chan T) or
the writer (chan<- T) end and it's an error to share it around).

Pete Wilson

unread,
Dec 11, 2009, 12:19:15 PM
to golang-nuts


On Dec 11, 7:31 am, roger peppe <rogpe...@gmail.com> wrote:
> 2009/12/11 folknol...@googlemail.com <a...@folknology.com>:
>
> > http://www.xmos.com/published/xc_en (PDF)
>
> that's interesting. very much occam-influenced.
>

..and you can also buy dirt cheap eval hardware - four cores in a
single chip, connects to your favourite computer via USB for power,
program load and debug.

El-cheapo ($99): https://www.xmos.com/products/development-kits/xc-5-development-kit

I bought one (over a year ago). Other, more expensive kits are available.

No formal connection with XMOS - just wanting cheap, interesting,
scalable hardware on which to play with this stuff.

-- Pete

Pete Wilson

unread,
Dec 11, 2009, 12:26:35 PM12/11/09
to golang-nuts

> ..and you can also buy dirt cheap eval hardware - four cores in a
> single chip, connects to your favourite computer via USB for power,
> program load and debug.
>
> El-cheapo ($99): https://www.xmos.com/products/development-kits/xc-5-development-kit
>


Oops. I bought this one:

https://www.xmos.com/products/development-kits/xc-1-development-kit

-- P

smosher

unread,
Dec 11, 2009, 3:04:34 PM12/11/09
to golang-nuts
On Nov 12, 6:17 pm, Ian Lance Taylor <i...@google.com> wrote:
> Jan Mennekens <jan.mennek...@gmail.com> writes:
> >  - a way to couple channels to hardware, e.g. replace an interrupt or timer with a channel input (à la Occam)
>
> Clearly there needs to be some access to hardware, but I don't know
> that a channel represents the best model.

Channels and interrupts seem about right to me, if you guarantee that
an available receive on an interrupt channel causes execution to
switch immediately (okay, that sounds like it can be messy.) Of course
it's not like installing conventional handlers is troublesome either.
Other things like registers aren't really such a good fit. I was
wondering how Go would deal with those.

> >  - using channels to do inter-processor communications
>
> Yes, assuming they share memory.

I've been FIFOing between ARM cores because their main body of shared
memory was too slow to use, so channels seem really good here whether
or not they do share memory (unless that is somehow also suboptimal on
some devices.) Of course I was not passing pointers around.

Matt

unread,
Dec 11, 2009, 5:14:23 PM12/11/09
to golang-nuts
On Dec 11, 8:31 am, roger peppe <rogpe...@gmail.com> wrote:
> 2009/12/11 folknol...@googlemail.com <a...@folknology.com>:
> that's interesting. very much occam-influenced.

This is shameless self-promotion in a way, but it is on the thread of
open (GPL/LGPL) CSP-based languages and runtimes in embedded spaces...

A project I've contributed to (in the occam-related space) is the
Transterpreter project. It provides a portable bytecode interpreter
for the occam-pi programming language. This January we'll be releasing
full Arduino support. We've run on the H8, ARM, and a number of
embedded platforms in the past.

http://www.occam-pi.org/
http://www.transterpreter.org/

If you're looking for a way to do CSP-based programming in the
embedded space, you might check out the project. We run everything but
the dynamic elements of occam-pi in roughly 16K of flash (and a
reasonably small number of words of RAM) on the Atmega328, and can run
the full language on 32-bit platforms in not-much-more. Interrupts get
latched in as waitable events, meaning you can easily set up interrupt-
driven channels, etc.

Arduino-related work will be released at http://www.concurrency.cc/
(bringing information from other sites together into a more usable
format), with IDE and book support for all major platforms. Source for
the whole project lives at

http://projects.cs.kent.ac.uk/projects/kroc/trac/

Cheers,
Matt

David Anderson

unread,
Dec 11, 2009, 5:30:26 PM12/11/09
to Matt, golang-nuts
Just a quick +1 on this, even if it is sliding slightly off-topic. The
Transterpreter codebase is very clean (my personal first indicator of
a smoothly run project), and offers a very interesting environment to
play with in the embedded space.

As an example of how easy it is to port and get running, I got a very
basic Transterpreter running on the Lego Mindstorms NXT (ARM7
platform, 64k ram) in about an hour, give or take. Since I already had
most of the device drivers ready, it really was as simple as hooking
the Transterpreter build into the firmware cross-compile (ANSI C, so
perfect clean build first time), and firing up the VM in
kernel_main(). Sadly I failed to follow through with that project (we
had a couple of important drivers missing at the time that greatly
reduced the VM's usefulness), but the structure the Transterpreter
provides really does make it a matter of hours/a few days to get it up
and running once you have device drivers.

</halfway-off-topic>

- Dave

Bob Cunningham

unread,
Dec 11, 2009, 6:04:02 PM12/11/09
to a...@folknology.com, golang-nuts
Wow. I can use this. Thanks!

-BobC

a...@folknology.com

unread,
Dec 13, 2009, 6:03:13 PM12/13/09
to golang-nuts
Slightly off topic

But if the XC language is of interest, the XMOS folks have recently
started a new online community, XCore, to support developers, which is
worth checking out for insight:
http://www.xcore.com/

Hope I am not out of place linking to it here, but I have found it
really useful.

At some point it would be great if the powers that be at Google on the
Go project could get together with the XMOS guys and perhaps hatch a Go
implementation; I am sure others here would be interested in that
happening.



Pete Wilson

unread,
Dec 19, 2009, 10:16:31 AM12/19/09
to golang-nuts
Off-topic:

I'm interested in the code density of the original T800 etc. It looked
excellent at the time, but compilers weren't that cunning back then.

Does anything related to the transterpreter universe let me compile C
for it with a good high-performance compiler? (C so that relatively
fair comparisons can be made with other ISAs). Details privately to
pe...@kivadesigngroupe.com, if so, please.

Thanks!

On Dec 11, 4:14 pm, Matt <jad...@gmail.com> wrote:
> On Dec 11, 8:31 am, roger peppe <rogpe...@gmail.com> wrote:
>
> > 2009/12/11 folknol...@googlemail.com <a...@folknology.com>:
> > that's interesting. very much occam-influenced.
>
> This is shameless self-promotion in a way, but it is on the thread of
> open (GPL/LGPL) CSP-based languages and runtimes in embedded spaces...
>

> A project I've contributed to (in the occam-related space) is the Transterpreter project. It provides a portable bytecode interpreter


> for the occam-pi programming language. This January we'll be releasing
> full Arduino support. We've run on the H8, ARM, and a number of
> embedded platforms in the past.
>

> http://www.occam-pi.org/
> http://www.transterpreter.org/


>
> If you're looking for a way to do CSP-based programming in the
> embedded space, you might check out the project. We run everything but
> the dynamic elements of occam-pi in roughly 16K of flash (and a
> reasonably small number of words of RAM) on the Atmega328, and can run
> the full language on 32-bit platforms in not-much-more. Interrupts get
> latched in as waitable events, meaning you can easily set up interrupt-
> driven channels, etc.
>

> Arduino-related work will be released at http://www.concurrency.cc/

Aram Hăvărneanu

unread,
Aug 8, 2012, 9:39:43 AM8/8/12
to steve....@gmail.com, golan...@googlegroups.com, pe...@kivadesigngroupe.com
The transputer had hardware channels, IIRC.

--
Aram Hăvărneanu

Pete Wilson

unread,
Aug 8, 2012, 11:24:29 AM8/8/12
to golan...@googlegroups.com, pe...@kivadesigngroupe.com, steve....@gmail.com
Steve

As I said in my private reply (finger trouble :-( ), back in the 1980s a company called Inmos sold a microprocessor family (the transputer family, a horrid name) which included support for 'block-structured goroutines': the magic word par introduced a collection of simple or compound statements to be executed in parallel, the block not completing until all elements had finished. It also had message-passing over channels, plus DMA-driven interprocessor links which allowed the construction of arbitrarily large multiprocessor systems, the links looking like channels to the hardware. The message-passing and the simple priority-oriented process ('goroutine') scheduling were done in hardware; sending a message was hardly more expensive than passing arguments to a function. The language, occam, didn't allow recursion (OK for small embedded stuff), so the per-process stack could be allocated at compile time to be exactly the right size. Real software could run in the on-chip 4K SRAM; an external interface let you add as much or as little external SRAM/DRAM/EEPROM as you wanted. Oh, and the language had a formal semantics, so you could prove many helpful properties of your system (can't crash, won't deadlock, has the same meaning as a sequential program, etc.).
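The par block described here maps fairly directly onto today's Go, minus the hardware scheduler and the compile-time stack sizing. A rough sketch (par here is an invented helper, not a language feature):

```go
package main

import (
	"fmt"
	"sync"
)

// par runs its arguments concurrently and returns only when all of
// them have finished - a Go analogue of occam's PAR block: the block
// does not complete until every branch has.
func par(fs ...func()) {
	var wg sync.WaitGroup
	for _, f := range fs {
		wg.Add(1)
		go func(f func()) {
			defer wg.Done()
			f()
		}(f)
	}
	wg.Wait()
}

func main() {
	ch := make(chan int, 2)
	par(
		func() { ch <- 1 },
		func() { ch <- 2 },
	)
	close(ch) // safe: par guarantees both sends have completed
	sum := 0
	for v := range ch {
		sum += v
	}
	fmt.Println(sum)
}
```

The difference, of course, is that on the transputer the scheduling and the channel sends were single hardware instructions, whereas here they are runtime calls.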

So: Go plus all its libraries wouldn't fit in the target hardware you mention; but then C plus all its libraries wouldn't either. However, the basic runtime ('bare metal Go') with some omissions (such as reflection) should be highly practical for Go or an equivalent language. But that's not the target of the Google Go team.



On Wednesday, August 8, 2012 6:40:59 AM UTC-5, steve....@gmail.com wrote:
Pete Wilson wrote: <<<Behind my mutterings on interrupts-by-messages was a desire to see better hardware (that is, hardware which thinks it's sending/receiving a message rather than "raising an interrupt" or "servicing an interrupt"), both in the processor and in "interrupt controllers", along with a much better programming paradigm. No interrupts, just messages. In both hardware and software.>>>

These are great ideas, but in embedded firmware, we're still grateful to C for rescuing us from the primordial assembler swamp! ;-) Is there any possibility of Go running in 32K of flash and 4K of RAM? This would leave room for an application too, in the memory of some of the larger processors we deal with (e.g. 64K flash, 8K RAM). When we can do that, I'll chat all day to you about message-passing and other such sophistications. ;-)

"Who cares, wins"

Pete Wilson

unread,
Aug 8, 2012, 11:30:35 AM8/8/12
to golan...@googlegroups.com, pe...@kivadesigngroupe.com, steve....@gmail.com
On a separate perspective, I started (professional) life doing an embedded system on a Ferranti FM1600B: a 24-bit machine, 96KB of main memory (core storage - you could unplug it and wheel it over to another machine), around a microsecond cycle time. The system was CAAIS, for the Royal (British) Navy. We autotracked radar-detected targets, displayed on multiple screens, had an inter-ship/intercomputer networking/messaging system, etc.

Paul Borman

unread,
Aug 8, 2012, 11:38:46 AM8/8/12
to Pete Wilson, golan...@googlegroups.com, steve....@gmail.com
There is a lot of software written in C for devices with limited RAM, say 4K in this case. You just don't use all the GNU library cruft; you use an appropriate runtime. A C runtime can be very, very tiny. GNU libraries generally can't be.

For these devices C is quite appropriate while Go is not.

    -Paul

Pete Wilson

unread,
Aug 8, 2012, 11:48:43 AM8/8/12
to golan...@googlegroups.com, Pete Wilson, steve....@gmail.com
I think we're in violent agreement. You could write programs in Go or in C, compile them, and give both a tiny runtime (C's would be smaller), but there's no way you could provide, in 4K, the complete library of either language. And with Go, you'd need 'embedded Go', with some facilities removed as well. Oh, the CAAIS system was written in Coral 66, but the language spec allowed assembler :-)

Paul Borman

unread,
Aug 8, 2012, 12:56:07 PM8/8/12
to Pete Wilson, golan...@googlegroups.com, steve....@gmail.com
We are in agreement on half :-)

Go is not Go without garbage collection and goroutines. Go simply requires a much larger runtime than C does, so I don't see how Go could be used for these devices. Someone could define eGo, a version of Go without those things, but it would most certainly not be Go, as you would need to do manual memory management. It would program much more like C than Go.

    -Paul

Steve Merrick

unread,
Aug 8, 2012, 1:05:08 PM8/8/12
to Paul Borman, Pete Wilson, golan...@googlegroups.com
Pete, I remember the transputer, Occam, the FM1600B, and Doug Rawson-Harris patching FM1600B software, while it was running, in front of a party of visitors from the MOD. :-) Nevertheless, C runs with ease on a platform such as I described. As Paul pointed out, you just don't include all those big libraries in your build. From the sound of it, us embedded firmware designers will have to carry on waiting for an alternative to C. [Maybe Vala, I wonder?]

George Thomas

unread,
Aug 8, 2012, 2:28:53 PM8/8/12
to golan...@googlegroups.com, Paul Borman, Pete Wilson, steve....@gmail.com
Hi Pete and Paul,

I did try building the GCC version of Go for the tiny AVR architecture. I was able to get it built without the Go libraries.

I did find dependencies on the C headers time.h and signal.h, which are not really available on that architecture.

For a language with concurrency built in, would it be able to run without these libraries and an operating system?

Is there a way to get these features running on such architectures?

Paul Borman

unread,
Aug 8, 2012, 3:01:13 PM8/8/12
to George Thomas, golan...@googlegroups.com, Pete Wilson, steve....@gmail.com
You cannot run Go without the Go runtime so you need at least that package.  That will pull in garbage collection, goroutine scheduling and so on.

    -Paul

errorde...@gmail.com

unread,
Feb 21, 2014, 9:24:39 PM2/21/14
to golan...@googlegroups.com
Ah, seems like a long-lived thread; thought I'd pick it up. Headers like time.h are pretty easy to just implement. Have you tried anything else since? It'd be great if you could provide some background to this... I was also thinking that running GC on a multicore MCU (e.g. LPC4350) would work, if we had a pauseless GC.

--
Ilya

minux

unread,
Feb 22, 2014, 11:27:01 PM2/22/14
to errorde...@gmail.com, golang-nuts
On Fri, Feb 21, 2014 at 9:24 PM, <errorde...@gmail.com> wrote:
Ah, seems like a long-lived thread, though I'd pick-up. The headers like time.h are pretty easy to just implement, have you tried anything else since? It'd be great if you could provide some background to this... I was also thinking that running GC on a multicore MCU (e.g. LPC4350) would work, if we have pauseless GC.
1. The gc toolchain can't generate Thumb (or Thumb-2) code (gccgo can, but there isn't a bare-metal runtime yet).
2. Even 256KB of RAM is not enough for the current Go runtime. (The runtime is always a problem when running Go on embedded systems; we need a more compact runtime, and the current runtime also relies heavily on a virtual memory environment.)
3. We don't have a pauseless GC. (Although I don't think a multicore MCU necessarily requires a pauseless GC; in fact, is there any pauseless GC available for MCUs?)

rmfr...@gmail.com

unread,
Dec 16, 2015, 8:23:08 AM12/16/15
to golang-nuts, errorde...@gmail.com
A very late note on this…

A few weeks ago, I got to wondering what the smallest system that could usefully run Go programs would be. It seems to be a 16-bit MCU; anything with less than about 16K of RAM probably isn't going to support the heap adequately, though you might just possibly get away with 4K. I suppose the runtime would have to be extensively rewritten to fit, but I think it is possible. It would demand, I think, a considerably different coding style than is currently used in Go; any numeric type over 16 bits (and that includes rune!) would carry a performance penalty.
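A sketch of what that narrow-type style might look like (countASCII is invented for illustration; rune is an alias for int32, which is why byte-wise loops would be preferred on such a target, and the uint16 index also caps strings at 64K):

```go
package main

import "fmt"

// countASCII counts the ASCII bytes in a string using only 8- and
// 16-bit arithmetic: a uint16 index and byte comparisons, with no rune
// (int32) decoding anywhere - the kind of style a 16-bit Go port would
// push you toward.
func countASCII(s string) uint16 {
	var n uint16
	for i := uint16(0); int(i) < len(s); i++ {
		if s[i] < 0x80 { // single-byte UTF-8, i.e. ASCII
			n++
		}
	}
	return n
}

func main() {
	// "é" is two non-ASCII bytes in UTF-8, so only h, l, l, o count.
	fmt.Println(countASCII("héllo"))
}
```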
