
FPGA C Compiler on sourceforge.net (TMCC derivative)


air_...@yahoo.com

Nov 2, 2005, 10:56:39 AM
The project has released a beta based on an enhanced TMCC, and is
looking for help to bring FpgaC up to a production-stable level.

Home page at http://fpgac.sourceforge.net/
Project page at http://sourceforge.net/projects/fpgac

Rene Tschaggelar

Nov 2, 2005, 11:34:47 AM
air_...@yahoo.com wrote:

Why are those guys so keen on C? Suggesting
compatibility with something while having the least
readability?

Rene

Jan Panteltje

Nov 2, 2005, 1:02:26 PM
On a sunny day (2 Nov 2005 07:56:39 -0800) it happened air_...@yahoo.com
wrote in <1130946999.6...@g14g2000cwa.googlegroups.com>:

Interesting.
Here is something to think about, from the libc.info that ships with GNU C.
Maybe you already support these?
I recently found out it is better to use these in C than 'int' and 'short',
as some programs I wrote stopped working when compiled on AMD64...
(header file structures with 'int', 'short', and 'BYTE').

Integers
========

The C language defines several integer data types: integer, short
integer, long integer, and character, all in both signed and unsigned
varieties. The GNU C compiler extends the language to contain long
long integers as well.

The C integer types were intended to allow code to be portable among
machines with different inherent data sizes (word sizes), so each type
may have different ranges on different machines. The problem with this
is that a program often needs to be written for a particular range of
integers, and sometimes must be written for a particular size of
storage, regardless of what machine the program runs on.

To address this problem, the GNU C library contains C type
definitions you can use to declare integers that meet your exact needs.
Because the GNU C library header files are customized to a specific
machine, your program source code doesn't have to be.

These `typedef's are in `stdint.h'.

If you require that an integer be represented in exactly N bits, use
one of the following types, with the obvious mapping to bit size and
signedness:

* int8_t

* int16_t

* int32_t

* int64_t

* uint8_t

* uint16_t

* uint32_t

* uint64_t
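Jan's point about header structures can be made concrete with a short C sketch (the struct below is a hypothetical example, not taken from any of the programs he mentions): declare an on-wire or on-disk structure with the exact-width types and its field sizes no longer depend on the compiler's word size.

```c
#include <stdint.h>

/* A hypothetical on-wire header. With the exact-width types from
 * <stdint.h> the field sizes are the same on every conforming
 * platform. With 'int', 'short', or 'long' they would follow the
 * machine's word size (e.g. 'long' is 4 bytes on 32-bit x86 but
 * 8 bytes on AMD64/Linux), silently changing the layout. */
struct header {
    uint8_t  version;
    uint16_t flags;
    uint32_t length;
    int64_t  timestamp;
};

/* C guarantees these widths for the exact-width types. */
_Static_assert(sizeof(uint8_t)  == 1, "uint8_t is one byte");
_Static_assert(sizeof(uint16_t) == 2, "uint16_t is two bytes");
_Static_assert(sizeof(uint32_t) == 4, "uint32_t is four bytes");
_Static_assert(sizeof(int64_t)  == 8, "int64_t is eight bytes");
```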


air_...@yahoo.com

Nov 2, 2005, 4:43:03 PM

Rene Tschaggelar wrote:
> Why are those guys so keen on C ? Suggesting
> compatibility with something while having least
> readability ?
>
> Rene

The description at the project page pretty much says it all:

"C provides an excellent alternative to VHDL/Verilog for algorithmic
expression of tasks targeting FPGAs for reconfigurable computing."

Project page at http://sourceforge.net/projects/fpgac

The object is to use FPGAs as computing engines, with less concern
about how to describe circuits in an HDL for synthesis, as most
hardware designers would with VHDL or Verilog. VHDL and Verilog are the
rough equivalent of programming in assembly language: the
implementation languages directly express clocks, registers, and
wires, which adds a tremendous design-state factor and skill level to
their use.

Simplified C-to-netlist compilers such as FpgaC attempt to hide
most of the synthesis details, to ease the design burden for hardware
implementations of applications with rich algorithmic complexity.
These applications range from network stacks in FPGAs for wire-speed
performance at gigabit rates, to richly parallel algorithms such
as searching, which are performance-limited by the serial nature of
traditional CPU/memory architectures.

FPGAs in this decade are simply building blocks for high performance
computing, not just a dense PLD to express hardware functions for the
logic designer.

air_...@yahoo.com

Nov 2, 2005, 5:03:54 PM

Jan Panteltje wrote:
> Interesting.
> here is something to think about from the libc.info from gcc C.
> maybe you already support these?
> I recently found out it is better to use these in C than 'int' and 'short',
> as some programs I wrote stopped working when compiled on AMD 64....
> (header file structures with 'int' and 'short' and 'BYTE' ).

The native types, including long long as 64-bit, are in the parser and
emit the expected word length. The current netlist generator only
builds signed values, which are width+1 in size. Properly implementing
unsigned is part of the next pass of work at bringing FpgaC in line
with expected normal C implementations.

One of the interesting parts of FpgaC is that

int VeryWideInt:512;

will build a 512 bit plus sign integer ... which doesn't make a very
fast counter, as the carry tree is pretty slow, but you get the
precision asked for.
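The slow carry chain behind that observation can be sketched in ordinary C (a hypothetical illustration of the arithmetic, not FpgaC output): a 512-bit add decomposed into 64-bit limbs, where each limb's carry-in depends on the limb below it, the same serial dependency that limits a wide ripple-carry adder in the fabric.

```c
#include <stdint.h>

#define LIMBS 8  /* 8 x 64 bits = 512 bits */

/* Add two 512-bit numbers stored as little-endian 64-bit limbs.
 * Each limb's carry-in depends on the limb below it, so the loop
 * body cannot start limb i until limb i-1 is done -- the software
 * analogue of the long carry chain in a 512-bit hardware adder. */
static void add512(uint64_t r[LIMBS],
                   const uint64_t a[LIMBS],
                   const uint64_t b[LIMBS])
{
    unsigned carry = 0;
    for (int i = 0; i < LIMBS; i++) {
        uint64_t s = a[i] + b[i];     /* may wrap around */
        unsigned c1 = (s < a[i]);     /* carry out of a[i] + b[i] */
        r[i] = s + carry;             /* fold in carry from limb below */
        carry = c1 | (r[i] < s);      /* carry propagates serially */
    }
}
```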

typedef is also currently missing from FpgaC's parser, and will be
added soon so that standard header files can also be imported into an
FpgaC project.

Eric Smith

Nov 2, 2005, 5:21:12 PM
Rene Tschaggelar wrote:
> Why are those guys so keen on C ? Suggesting compatibility with
> something while having least readability ?

air_...@yahoo.com writes:
> The description at the project page pretty much says it all:
>
> "C provides an excellent alternative to VHDL/Verilog for algorithmic
> expression of tasks targeting FPGAs for reconfigurable computing."

That doesn't explain *why* it's an excellent alternative. I can just
as easily state that "C provides a terrible alternative to VHDL/Verilog
for algorithmic expression of tasks targeting FPGAs for reconfigurable
computing". So why is their statement any more accurate than mine?

Eric Smith

Nov 2, 2005, 5:22:41 PM
air_...@yahoo.com writes:
> One of the interesting parts of FpgaC is that
>
> int VeryWideInt:512;
>
> will build a 512 bit plus sign integer ... which doesn't make a very
> fast counter, as the carry tree is pretty slow, but you get the
> precision asked for.

Why is that any better than

signal VeryWideInt : unsigned(511 downto 0);

in VHDL? I expect that this can just as easily be expressed in Verilog,
too.

air_...@yahoo.com

Nov 2, 2005, 5:49:19 PM

I can build them at a schematic level too, so why is any HDL better?

I can even wire them out of TTL, so why is using an FPGA better?

Heck, I can even wire them out of diodes or vacuum tubes, ....

So why waste people's time bitching about others' preferences, and the
tools they use to implement them? If you like VHDL, certainly use it.

have a nice day ;)

Eric Smith

Nov 2, 2005, 5:56:20 PM
air_...@yahoo.com writes:
> I can build them at a schematic level too, so why is any HDL better?
> I can even wire them out of TTL, so why is using an FPGA better?
> Heck, I can even wire them out of diodes or vacuum tubes, ....
>
> So why waste people's time bitching about others' preferences, and the
> tools they use to implement them? If you like VHDL, certainly use it.

I'm not the one claiming that any particular HDL is better than
another. But I'm trying to understand the hype about using C as
an HDL. Where is the actual benefit?

air_...@yahoo.com

Nov 2, 2005, 6:08:33 PM

There are a few hundred thousand engineers on the planet that can
express large complex algorithms in C, a few tens of thousands of
engineers that can express large complex algorithms in VHDL/Verilog,
and probably a few thousand that can actually grasp the test vector
space and simulation effort necessary to get a large VHDL/Verilog
design working for large complex algorithms. So access to design
talent is one.

There are clear advantages to being able to write, test, and debug large
complex algorithms on a traditional processor with a source code
debugger, then moving the nearly finished product to FPGAs for
deployment and performance. So access to advanced software development
tools is two.

The embedded logic analyzer cores are a very poor substitute when
debugging complex algorithms with lots of state and data.

air_...@yahoo.com

Nov 2, 2005, 6:45:19 PM

Eric Smith wrote:
> That doesn't explain *why* it's an excellent alternative. I can just
> as easily state that "C provides a terrible alternative to VHDL/Verilog
> for algorithmic expression of tasks targeting FPGAs for reconfigurable
> computing". So why is their statement any more accurate than mine?

There are probably a few hundred billion statements of C/C++ IP for
designs which contain easily reusable code segments (i.e., cores) in
nearly every application area. Probably a few trillion, when you
include privately held IP in addition to what is on SourceForge and
other open source repositories.

I suspect the total IP coded in VHDL/Verilog is three to four orders
of magnitude less.

So that is three reasons why C can be an excellent alternative for
reconfigurable computing projects, and for the home hobbyist that
already knows C and would like to use an FPGA as a reconfigurable
platform for their robotic or other project.

Jim Granville

Nov 2, 2005, 7:31:46 PM

How about some examples of some real applications that can be coded
in either, with the resulting source examples and the FPGA resource
mapping that results?

I presume a mixed-language design is possible? An example of that as
well would assist understanding.

Otherwise, it's all arm-waving.....

-jg

air_...@yahoo.com

Nov 2, 2005, 7:33:34 PM

Eric Smith wrote:
> I'm not the one claiming that any particular HDL is better than
> another. But I'm trying to understand the hype about using C as
> an HDL. Where is the actual benefit?

You have it backwards. The intent is not to use C for hardware design,
but to use FPGAs for computing. There is a grey area in between, but
the viewpoints are from completely different ends of the problem design
space.

air_...@yahoo.com

Nov 2, 2005, 7:44:47 PM

Jim Granville wrote:
> How about some examples, of some real applications, that can be coded
> in either, and the resulting source examples, and the FPGA resource
> mapping that results ?
>
> I presume a mixed-language design is possible ? - an example of that
> as well, would assist understanding.
>
> Otherwise, it's all arm-waving.....

Most applications of reconfigurable computing are not hardware design
applications, so any pure HDL may be the wrong tool, as its design
focus is at the gate/register level.

Reconfigurable computing is about taking traditional C/C++
applications and pushing the resource-intensive parts into netlists
for a performance gain ... frequently as much as 200 times the fastest
RISC/CISC processors, by removing memory latency and ALU pipelines
(both of which are serial resources) from the critical path.

Moving the front end of web servers, router/classifier logic, and
TCP/IP stacks into several large FPGAs ... Virtex-II Pros with RocketIO
and PPC back-end engines ... is one example. These applications are
already written in C, and get married to the external hardware with
logic typically written in VHDL/Verilog.

Likewise, most protocol converters which interface different fibre
connections are frequently being pushed into FPGAs to maintain
wire-speed operation.

Mixed C, VHDL, Verilog, and schematic are all very likely. C is just
one more implementation tool.

Eric Smith

Nov 2, 2005, 8:43:43 PM
air_...@yahoo.com writes:
> There are a few hundred thousand engineers on the planet that can
> express large complex algorithms in C, and a few tens of thousands of
> engineers that can express large complex algorithms in VHDL/Verilog,

Of those few hundred thousand that know C, very few have any clue how
to design hardware. If you turn them loose with C as an HDL, you're
going to end up with really crappy hardware, just like when programmers
are thrust into Verilog or VHDL.

It's not knowledge of the syntax of a particular language that's the
problem. The semantics of hardware design are fundamentally different
from the semantics of sequential software design.

> and probably a few thousand that can actually grasp the test vector
> space and simulation effort necessary to actually get a large
> VHDL/Verilog design working for large complex algorithms.

Are you claiming that test vectors and simulation aren't needed when
using C as an HDL? I'd be very skeptical of any such assertion.

> The embedded logic analyzer cores are a very poor substitute when
> debugging complex algorithms with lots of state and data.

What's that got to do with your choice of HDL? If you want to know
what's going on inside your FPGA, that's not fundamentally any easier
with C as your HDL than it is with Verilog or VHDL.

Eric

Eric Smith

Nov 2, 2005, 8:45:57 PM
air_...@yahoo.com writes:
> You have it backwards. The intent is not to use C for hardware design,
> but to use FPGA's for computing. There is a grey area in between, but
> the view point is from completely different ends of the problem design
> space.

You're right, I wasn't aware of that distinction.

Still, if you're going to use reconfigurable computing, surely each
configuration is a hardware design, and much better expressed in a
language optimized for hardware design, rather than a language optimized
for strictly sequential operation.

Eric

Rene Tschaggelar

Nov 3, 2005, 2:56:59 AM
air_...@yahoo.com wrote:

Thanks. It is a pity that C was chosen. The choice
of C means some guys want to adhere to a standard,
instead of flexibly adapting to the problems ahead.
There should be a way to tell the compiler how
quick I want an operation to be. Do I want a
one-cycle result with an enormous lookup table, do
I want an N-digit multiply-accumulate loop, do I
want an NlogN solution, or is it even less important?
Is this being solved by a bunch of #pragma directives?

I'd be interested in the first floating point unit
coming out of such a compiler.

Rene

Simon Peacock

Nov 3, 2005, 4:32:21 AM
HDL has the benefit that you can include abstract ideas in your FPGA.
That's why HDL is better.

I can also see the point of using C as a base language... but I can just
imagine the latest Windows: instead of crashing once a week, it now
crashes every millisecond :-). Poor software is still poor software...
C isn't typed strongly enough to use without adding other tools, so it
would be at the bottom of my list of recommended software. Also,
anything you create in C you will need to be able to verify. Failing to
do so will only lead to hours and hair loss.

Pascal would have been a better choice IMO.

Simon


"Eric Smith" <er...@brouhaha.com> wrote in message
news:qhfyqel...@ruckus.brouhaha.com...

Jim Granville

Nov 3, 2005, 4:39:38 AM
Simon Peacock wrote:
> HDL has the benefit that you can include abstract ideas in your FPGA.
> That's why HDL is better.
>
> I can also see the point of using C as a base language... but I can just
> imagine the latest Windows: instead of crashing once a week, it now
> crashes every millisecond :-). Poor software is still poor software...
> C isn't typed strongly enough to use without adding other tools, so it
> would be at the bottom of my list of recommended software. Also,
> anything you create in C you will need to be able to verify. Failing to
> do so will only lead to hours and hair loss.
>
> Pascal would have been a better choice IMO.

Or Modula-2, or IEC 61131, ... or even better, something like :


http://research.microsoft.com/fse/asml/
"AsmL is the Abstract State Machine Language. It is an executable
specification language based on the theory of Abstract State Machines."

-jg

c d saunter

Nov 3, 2005, 6:20:35 AM
Rene Tschaggelar (no...@none.net) wrote:

: Why are those guys so keen on C ? Suggesting
: compatibility with something while having least
: readability ?

: Rene

One might ask the same question about hardware engineers and
Perl - why is this such a commonly used tool? I'd venture to
guess that after being forced to accept VHDL or Verilog as the
prime language, the idea of using Perl or C derivatives for
hardware programming doesn't seem so twisted.

cds

Robin Bruce

Nov 3, 2005, 6:41:45 AM
I think we have to accept that high-level languages are going to be the
future for FPGAs. Not to say that HDLs will be replaced entirely, but
they'll be largely supplanted by the HLLs. Algorithms are easier to
verify: testing can be done using a software compiler and, provided
you can trust your tools and hardware infrastructure, you shouldn't
need to do extensive hardware testing of the implemented algorithm. Why
should we need to know what's going on inside the FPGA? Development
time, too, is vastly reduced.

I think C has been selected as the starting point for most HLL-to-HDL
tools not because of its eminent suitability for the task, but because
it decreases the pain in switching to the new tool. C syntax is
familiar, it's a good jumping-on point. However, my experience in using
these tools tells me that hopes for massive re-use of legacy code are
still very much a pipe-dream. You will still have to understand the
underlying hardware. You will have to understand the spatial, temporal
and memory tradeoffs, and understand how to infer the pipelining and
parallelism that is most suitable. What HLLs free you up from is the
need to fiddle about with the timing on pipelines and other such
details. I can change the mix of ALUs in a complex pipelined algorithm
easily and painlessly. I don't need to go and manually re-time my
pipeline to account for the changes (and so know I won't make an
off-by-one error, introducing a fiendish bug).

First generation tools are far from perfect, but they will see use
because they significantly decrease development time. Your HLL-designed
system may not be as efficient as the best possible VHDL design, but if
it's good enough and you get to market months before the competition,
you'll come out on top.

Once the user base has been built up, I see the tools maturing and
becoming less and less C-like. New languages will be demanded to better
express parallelism and pipelining and to account for heterogeneous
processing units and memory structures.

Sorry if I've gone on a bit... :)

Robin

Thomas Reinemann

Nov 3, 2005, 6:44:15 AM
air_...@yahoo.com wrote:

> Eric Smith wrote:
>
>>That doesn't explain *why* it's an excellent alternative. I can just
>>as easily state that "C provides a terrible alternative to VHDL/Verilog
>>for algorithmic expression of tasks targeting FPGAs for reconfigurable
>>computing". So why is their statement any more accurate than mine?
>
>
> There are probably a few hundred billion statements of C/C++ IP for
> designs which contain easily reusable code segments (i.e., cores) in
> nearly every application area. Probably a few trillion, when you
> include privately held IP in addition to what is on SourceForge and
> other open source repositories.
OK, so you want to map GUIs, database engines, programming languages,
and so on directly onto an FPGA. Perhaps it makes sense to map 1% of
all applications to an FPGA. FPGAs offer massive parallelism, therefore
only applications/problems which utilize this parallelism should be
implemented in FPGAs. They are all a kind of communication system or
signal processing system.

>
> I suspect, the total IP coded in VHDL/Verilog is three to four orders
> of magnitude less.

Maybe, but it uses the hardware very efficiently.


>
> So that is three reasons why C can be an excellent alternative for
> reconfigurable computing projects, and for the home hobbyist that
> already knows C and would like to use an FPGA as a reconfigurable
> platform for their robotic or other project.

Never, since most of them think in sequential algorithms and don't
understand the advantages of hardware.

Bye Tom

Martin Ellis

Nov 3, 2005, 6:40:53 AM
Jim Granville wrote:
> How about some examples, of some real applications, that can be coded
> in either, and the resulting source examples, and the FPGA resource
> mapping that results ?

Here's a reference that was posted here a while ago and that I'm just
following up on now:

"Survey of C-based Application Mapping Tools for Reconfigurable Computing"
http://klabs.org/mapld05/program_sessions/session_c.html

On p14, the C-based implementation performs faster than the VHDL
implementation, despite the VHDL being developed after a 'semester-long
endeavor into the algorithm's parallelism'.

They point to one of their own references that describes the
implementation, but I guess you'd probably need to ask them for the
resulting source.

I guess Celoxica can probably give you some references to C-based examples
too.

Note that this sort of example is a more likely application in the HPC
community rather than the hardware design community per se.

Martin

Robin Bruce

Nov 3, 2005, 7:07:37 AM
>Never, since most of them think in sequential algorithms and don't
>understand the advantages of hardware.

What are you saying? That people who don't understand hardware don't
make good hardware designs? Why would that make VHDL better than an
HLL-to-VHDL tool?

I could just as easily say that people who've never heard of algorithms
don't understand the advantages of a microprocessor, therefore
assembler is better than C. It's a non sequitur...

Martin Ellis

Nov 3, 2005, 7:09:10 AM
Thomas Reinemann wrote:

> FPGAs offer massive parallelism, therefore only
> application/problems which utilize this parallelism should be
> implemented in FPGAs. They are all a kind of communication system or
> signal processing system.

All? Perhaps you should read these for other high-performance computing
applications that can be accelerated using FPGAs:

@MISC{compton00reconfigurable,
author = {K. Compton and S. Hauck},
title = {Reconfigurable Computing: A Survey of Systems and Software},
year = {2000},
text = {K. Compton, S. Hauck, Reconfigurable Computing: A Survey
of Systems and Software, submitted to ACM Computing Surveys, 2000.},
url = {http://citeseer.nj.nec.com/compton00reconfigurable.html},
}

@ARTICLE{hauck98roles,
author = {Scott Hauck},
title = {{The Roles of FPGAs in Reprogrammable Systems}},
journal = {Proceedings of the IEEE},
year = {1998},
volume = {86},
number = {4},
pages = {615--638},
month = {Apr},
url = {http://citeseer.nj.nec.com/hauck98roles.html},
}

>> I suspect, the total IP coded in VHDL/Verilog is three to four orders
>> of magnitude less.

> Maybe, but it uses the hardware very efficiently

Isn't that what people said about assembly language? And GOTO statements?

>> So that is three reasons why C can be an excellent alternative for
>> reconfigurable computing projects, and for the home hobbyist that
>> already knows C and would like to use an FPGA as a reconfigurable
>> platform for their robotic or other project.

> Never, since most of them think in sequential algorithms and don't
> understand the advantages of hardware.

Yawn. I wonder when people from traditional hardware design backgrounds
will get over this kind of attitude.

So what if some hobbyists don't 'get' it at first? People aren't born
hardware designers, nor software programmers. Are you really saying you've
never made any mistakes while you were learning?

It's not like using a HLL for FPGA design is only useful for hobbyists
anyway.

Martin

air_...@yahoo.com

Nov 3, 2005, 1:16:10 PM

Martin Ellis wrote:

> Thomas Reinemann wrote:
> >> I suspect, the total IP coded in VHDL/Verilog is three to four orders
> >> of magnitude less.
>
> > Maybe, but it uses the hardware very efficiently
>
> Isn't that what people said about assembly language? And GOTO statements?

Isn't that what people said about Schematic based designs when HDLs
popped up?

The reality is that the HLLs targeting reconfigurable computing on
FPGAs already get very good fits, just as HDLs do, simply because the
back-end optimizers in the tool chains for space/time tradeoffs,
partitioning, mapping, and routing yield the same benefits for HLLs.
The biggest difference is that most HLLs hide implementation details
that create design risks, details which are considered expert tools
for HDL users, thus allowing coders with less hardware experience to
realize functional designs with a small performance penalty. Given
that the speedups from a RISC/CISC-architecture CPU to FPGAs, where
there is parallelism, are often one to three orders of magnitude, this
small efficiency loss is completely mouse nuts. Gaining that extra
efficiency would require an experienced HDL coder and significant
delays in the development schedule, each of which has marginal
cost-benefit gains in comparison to the huge gains made by using
reconfigurable computing with FPGAs.

Heck, Impulse C is said to use VHDL as the netlist technology to
optimize the fit. FpgaC even has some experimental code that uses
Verilog as the netlist technology instead of XNF. Even the XNF outputs
allow for representing the output design as basic gates or packed LUTs
with equations, letting the user decide which will do the best
technology mapping. Each of these choices gives the back-end tool chain
considerable room to optimize the HLL-produced netlist for the target
technology, just as VHDL and Verilog designs expect.

> It's not like using a HLL for FPGA design is only useful for hobbyists
> anyway.

Celoxica C and Impulse C are becoming thriving products with expensive,
high-value tool chains. Others are likely to become successful as well,
and it's very likely that Xilinx or Altera or another FPGA company will
offer a C HLL/HDL as their flagship tool chain as reconfigurable
computing takes off and drives high-end FPGA revenues. Some expect that
may be in the form of SystemC, if that technology really takes off as a
system-level design specification tool.

Doesn't matter. There are few low-cost C HLL/HDL tools available for
students, hobbyists, and low-budget design shops. The total cost of the
Celoxica tool chain for a modest-sized development team can easily run
to the cost of several engineers for the multiple licenses needed.
Small 1-10 man shops like mine simply cannot afford Celoxica licenses,
so I've used a mix of TMCC and Verilog for several projects to meet my
budget.

There are several interesting specialty C HLL/HDL research tools that
knock your socks off. Several data-flow C compilers have been presented
at conferences that would be awesome for some projects if they ever
became a real product that was affordable or released GPL (search for
the ROCCC and PiCoGA projects, and work by Oskar Mencer). Sarah's
partial-evaluation C compiler called HARPE generates some awesome logic
optimizations, which coupled with her async work would make a killer
tool to add to one's tool chain for certain types of projects (see
http://findatlantis.com/syncpe_paper.pdf). And there's Mihai Budiu's
research at CMU, which produced the ASH tool chain (see
http://www.cs.cmu.edu/~mihaib/research/research.html ). Other projects
like SA-C at ColoState.edu (see http://www.cs.colostate.edu/cameron/)
and Spark at UCSD (see http://mesl.ucsd.edu/spark/) and a few dozen
others are all exploring and showing good solid gains in how to map
HLLs to FPGAs and win big.

C as an HLL-to-netlist toolchain is here to stay, and will probably
only get better with time. C as an HDL (in the form of Handel-C by
Celoxica) is clearly here.

Andy Peters

Nov 3, 2005, 1:46:12 PM
Robin Bruce wrote:
> >Never, since most of them think in sequential algorithms and don't
> >understand the advantages of hardware.
>
> What are you saying? That people who don't understand hardware don't
> make good hardware designs?

That's essentially it. Let's define "good" hardware designs: best use
of hardware resources with highest performance (clock speed).

> Why would that make VHDL better than a HLL-to-VHDL tool?

Because VHDL and Verilog are designed from the ground up to take
advantage of the parallelism inherent in hardware designs. Sequential
programming languages such as C are not, and much hackery has to happen
for C to map well to hardware.

As an example, think about how you could implement a FIR filter in C
for a DSP, and then think about how you could implement the same filter
on an FPGA. I suppose one could write a tool that's smart enough to
translate the C description of a FIR filter into efficient hardware,
but one presumes that sufficient constraints would need to be put into
the "hardware C" for the tools to work well. But if the intent is to
take high-level C developed by a software guy and have it map to
hardware as well as it runs on a DSP, well, I just think you'll leave a
lot of FPGA performance on the table.
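To make the FIR comparison concrete, here is a minimal sequential C sketch (the tap count and names are hypothetical, for illustration only): each multiply-accumulate folds into a single running sum, which a DSP executes one tap at a time, whereas an FPGA implementation would instantiate one multiplier per tap and sum the products in a parallel adder tree.

```c
#include <stddef.h>

#define NTAPS 4  /* hypothetical filter length */

/* Sequential FIR dot product as a software engineer would write it:
 * one multiply-accumulate per tap, one after another, with a serial
 * dependency on 'acc'. A hardware mapping would instead instantiate
 * NTAPS multipliers side by side and combine the products with an
 * adder tree, trading area for throughput. */
static int fir(const int coeff[NTAPS], const int delay[NTAPS])
{
    int acc = 0;
    for (size_t i = 0; i < NTAPS; i++)
        acc += coeff[i] * delay[i];  /* each step waits on the last */
    return acc;
}
```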

> I could just as easily say that people who've never heard of algorithms
> don't understand the advantages of a microprocessor, therefore
> assembler is better than C. It's a non sequitur...

You could, but the statement is irrelevant.

-a

Jim Granville

Nov 3, 2005, 1:51:59 PM
Martin Ellis wrote:
> Jim Granville wrote:
>
>> How about some examples, of some real applications, that can be coded
>>in either, and the resulting source examples, and the FPGA resource
>>mapping that results ?
>
>
> Here's a reference that was posted here a while ago and I'm just following
> up just now:
>
> "Survey of C-based Application Mapping Tools for Reconfigurable Computing"
> http://klabs.org/mapld05/program_sessions/session_c.html
>
> On p14, the C-based implementation performs faster than the VHDL
> implementation, despite the VHDL being developed after a 'semester-long
> endeavor into the algorithm's parallelism'.
<snip>

Thanks, very interesting link.

I like this oxymoron, on p5 :
" * Companies create proprietary ANSI C-based language
* Languages do not have all ANSI C features
and this very important point
* Must adhere to specific programming “style” for maximum optimization
"

The benchmarks are useful - and show the choice is very much a lottery.
One benchmark they did not give was what the results would be if
generic C, from a generic graduate, was thrown at these tools.

Source snippets are important, because these solutions are not C, but
C-based. The devil is in the details....

-jg

Robin Bruce

Nov 3, 2005, 2:01:21 PM
I'm afraid if you throw generic C at these tools, it won't compile...

Robin Bruce

Nov 3, 2005, 2:23:04 PM
Andy,

First, you've misunderstood my post. The previous poster seemed to
suggest that there was some kind of link between people not
understanding hardware and HLLs being inferior to VHDL. I didn't see
what the level of experience of people who use HLLs has to do with
whether or not HLLs are superior to HDLs. This was what led to my
intentionally fatuous comment:


> I could just as easily say that people who've never heard of algorithms
> don't understand the advantages of a microprocessor, therefore
> assembler is better than C. It's a non sequitur...

OK, as for your other comments:


>> much hackery has to happen for C to map well to hardware.

Well, much 'hackery' obviously has happened, as there are tools that
map C well to hardware. We're not talking about what might happen;
we're talking about what is happening.

>I suppose one could write a tool that's smart enough to
>translate the C description of a FIR filter into efficient hardware,
>but one presumes that sufficient constraints would need to be put into
>the "hardware C" for the tools to work well.

Check slide 13 of Brian Holland's presentation from MAPLD: you'll find
all 3 HLL tools tested beat the VHDL for FIR implementation.

>But if the intent is to take
>high-level C developed by a software guy and have it map to hardware as
>well as it runs on a DSP, well, I just think you'll leave a lot of FPGA
>performance on the table.

Who said that's what we're trying to do? We're talking about high-level
languages not so we can compile legacy code. We're doing it so we can
rapidly infer reliable hardware using a more concise expression than
that achieved using HDLs while paying a minimal price in lost potential
performance.

Martin Ellis

Nov 3, 2005, 2:32:04 PM
to
Jim Granville wrote:
> I like this oxymoron, on p5 :
> " * Companies create proprietary ANSI C-based language
> * Languages do not have all ANSI C features

I don't think that's an oxymoron. Just because a language is proprietary,
it does not mean it can't be based on ANSI C.

Sure it might read a bit funny - but we've all tried to cram too much onto
slides before.

> and this very important point

> * Must adhere to specific programming “style” for maximum optimization

Yes. That's a very important point.

> The benchmarks are useful - and show the choice is very much a lottery.
> One benchmark they did not give was what the results would be if generic
> C, from a generic graduate, was thrown at these tools.

I think you've got the wrong end of the stick there.

This isn't about taking arbitrary C code and compiling it to an FPGA.
You simply can't do that.

Reason: It's possible to write architecture specific code (that is, code
specific to a particular ISA) in C. Self-modifying code and dynamic code
generation are examples of this.

For example, linkers and JIT compilers modify code which is then executed.
You couldn't compile that efficiently to an FPGA - it would need
re-synthesis every time the code was modified.

Another reason is that a compiler can't guess which inner loops are program
'hot-spots', and thus good candidates for synthesis. Such information is
application-domain specific.

Concisely: the aim isn't to be able to take a program written by someone
who knows nothing about hardware (at least, not yet). The aim is to be
able to develop hardware acceleration for a given algorithm.

One advantage of C-based languages is that when trying to accelerate an
algorithm, it might not be clear which parts to synthesise - this can
require some trial-and-error for difficult problems, and also depends on
some rather arbitrary parameters. It's easier to move a computation unit
from software to hardware, or vice-versa, if the languages are similar.
There's also a whole raft of software-based optimisations that can be
applied before the hardware optimisations even get a look-in.

Another, is that for some projects, a C simulation is developed to check the
algorithms anyway. For example, Timothy Miller did a software model for
the OpenGraphics project. The practice isn't uncommon.

> Source snippets are important, because these solutions are not C, but
> C-based. The devil is in the details....

For the reasons above, it is - in general - necessary to provide a compiler
with some pragmas or other hints that describe what code would be a good
candidate for synthesis.

However, the solutions are often close enough to C that it's possible to
execute the program entirely in software, as well as compile to an
object code/bitstream target. That's useful for the intended applications.

Nobody's pretending C-based synthesis is a complete replacement for HDL,
only that for some applications/projects it's a very compelling
alternative.

Martin

air_...@yahoo.com

Nov 3, 2005, 2:51:50 PM
to

Eric Smith wrote:
> Still, if you're going to use reconfigurable computing, surely each
> configuration is a hardware design, and much better expressed in a
> language optimized for hardware design, rather than a language optimized
> for strictly sequential operation.

Actually, Handel-C (Celoxica), Impulse-C (a derivative of StreamsC), ASH,
SA-C, HarPE, Spark, and even FpgaC all take a subset of the language
and, to various degrees, add extensions to make it a language optimized
for hardware design, some much more rigorous than others.

Celoxica's long-term goal is to compete head-to-head with VHDL/Verilog,
and they are doing pretty well at it so far.

Impulse-C with its VHDL backend is meant to take communication-based
designs (AKA RPC or MPI or other cluster-based communication libraries)
and use FPGAs as computing nodes with clearly defined streams to build
pipelined system designs. The use of the VHDL backend leaves lots of
room to optimize the resulting design at a low level.

SA-C and ASH are clearly targeting high-performance designs, actually
large high-performance designs, with the intent of getting as good a
hardware fit as VHDL/Verilog or better by picking a specification
language higher than VHDL/Verilog and lower than a full C/C++, one that
is highly optimizable, to give a better design yield than mid-level
experienced coders would get with VHDL/Verilog. Actually, all the C HLL
offerings pretty much share this goal of trying to do better than the
average VHDL/Verilog coder.

ASH and HarPE go after optimizations that even a skilled VHDL/Verilog
coder is likely to miss, or even decide to avoid in the effort to
keep the VHDL/Verilog code readable and maintainable.

air_...@yahoo.com

Nov 3, 2005, 3:15:30 PM
to

Martin Ellis wrote:
> This isn't about taking arbitrary C code and compiling it to an FPGA.
> You simply can't do that.

Actually you could, and in the future, when a few million LUTs are
cheaper than a fairly fast CPU, some people probably will, by using mixed
technologies inside the FPGA ... a combination of application-specific
CPU cores and generic netlists. Already Xilinx is targeting that market
with PPC cores and MicroBlaze cores as an addition to the FPGA logic
synthesis.

> Another reason is that a compiler can't guess which inner loops are program
> 'hot-spots', and thus good candidates for synthesis. Such information is
> application-domain specific.

Actually, that is only partially true. It's been common for some time
to use profiler input from actual runs to guide the compiler
optimizations for later builds. This happens to be one sweet spot that
lcc exploits to beat gcc and pcc executables. See
http://www.cs.princeton.edu/software/lcc/doc/linux.html

> However, the solutions are often close enough to C that it's possible to
> execute the program entirely in software, as well as compile to a
> object code/bitstream target. That's useful for the intended applications.

Actually, it's very easy to write C to the subset implemented by a
particular C-to-netlist HDL that, with a few #ifdef's, is usable in
either environment, and can accelerate development testing and debugging
by doing most, if not all, of the high-level debugging in a well
structured source code debugging environment.

Others take it a step farther, and use rigorous type checking combined
with super "lint" tools that provide verifiable correct-construction
checking. In a lot of ways, the original C++ development was exactly
that, and was implemented as a front-end preprocessor for standard C,
which was specifically meant to be only lightly typed so that it was a
productive low-level systems programming language only marginally higher
level than PDP-11 assembly language.

It's actually fairly easy to code in C++ with abstract types that
directly implement (i.e. emulate) the abstract hardware types that would
result after synthesis, to yield a strict HLL development environment
with strict typing and verifiable designs, and still translate to a
subset dialect of C (or Verilog) for synthesis.

> Nobody's pretending C-based synthesis is a complete replacement for HDL,
> only that for some applications/projects it's a very compelling
> alternative.

Choke, cough .... ummmm ... Celoxica, ASH, and a few other projects
really have that goal. Celoxica has very clear guidelines, just as VHDL
and Verilog have, to allow the coder to understand just what registers
and logic will be instantiated.

When you stop and think about it ... there is very little difference
between the
coding syntax of Handel-C and a subset of Verilog.

Martin Ellis

Nov 3, 2005, 3:36:22 PM
to
air_...@yahoo.com wrote:
> Martin Ellis wrote:
>> This isn't about taking arbitrary C code and compiling it to an FPGA.
>> You simply can't do that.
>
> Actually you could, and in the future, when a few million LUTs are
> cheaper than a fairly fast CPU, some people probably will, by using mixed
> technologies inside the FPGA ... a combination of application-specific
> CPU cores and generic netlists. Already Xilinx is targeting that market
> with PPC cores and MicroBlaze cores as an addition to the FPGA logic
> synthesis.

Um. Go back to the self-modifying code example and you'll see that it can't
always work. To some extent, I agree with you, but the programs *have* to
be sensible and well behaved (type safe to some extent).

And that excludes *arbitrary* C.

>> Another reason is that a compiler can't guess which inner loops are
>> program 'hot-spots', and thus good candidates for synthesis. Such
>> information is application-domain specific.

> It's been common for some time to use profiler input from actual runs
> to guide the compiler optimizations for later builds.

Yes, I know about this technique, and I thought about mentioning it here,
but didn't for brevity. I don't call that fully automatic though. You
need to seed it with some appropriate test data.

>> However, the solutions are often close enough to C that it's possible to
>> execute the program entirely in software, as well as compile to a
>> object code/bitstream target. That's useful for the intended
>> applications.
>
> Actually, it's very easy to write C to the subset implemented by a
> particular C to netlist HDL, that with a few #ifdef's is usable in either
> environment, and can accellerate development testing and debugging by
> doing most, if not all, the high level debugging in a well structured
> source code debugging environment.

That's exactly what I'm arguing. Why are you arguing with me?

<!-- Snip further stuff about type-safety and C++ -->

>> Nobody's pretending C-based synthesis is a complete replacement for HDL,
>> only that for some applications/projects it's a very compelling
>> alternative.

> Choke, cough .... ummmm ... Celoxica, ASH, and a few other projects
> really have that goal. Celoxica has very clear guidelines, just as VHDL
> and Verilog have, to allow the coder to understand just what registers and
> logic will be instantiated.

As far as I know, Celoxica's products and similar offerings only target
digital designs for FPGAs.

I'm not aware of any C-based language that was intended to cover ASIC
manufacture, or that would cover, say, the 'Standard VHDL Analog and
Mixed-Signal Extensions', for example.

I could be very wrong there, and I'd be interested to know if they do intend
to target those aspects of HDLs though, if you can point me at any
references.

> When you stop and think about it ... there is very little difference
> between the coding syntax of Handel-C and a subset of Verilog.

Sure. The syntax is very similar, but syntax is normally the least
interesting part of a language.


Martin


Jim Granville

Nov 3, 2005, 3:42:51 PM
to
Robin Bruce wrote:

<snip>


>
>>But if the intent is to take
>>high-level C developed by a software guy and have it map to hardware as
>>well as it runs on a DSP, well, I just think you'll leave a lot of FPGA
>>peformance on the table.
>
>
> Who said that's what we're trying to do? We're talking about high-level
> languages not so we can compile legacy code. We're doing it so we can
> rapidly infer reliable hardware using a more concise expression than
> that achieved using HDLs while paying a minimal price in lost potential
> performance.

Which has a lot in common with the ASM-HLL debates on microcontrollers.

The best solutions will come from a mix of tools
- but the sad reality is that the marketing departments' drive is to
push the hot new thing as a silver bullet, and any suggestion or example
of mixing HLL/HDL might be seen as admitting that their hot new thing is
not actually the universal new tool....

There is another, more recent shift in FPGA's, which means
a 'Sea of DSP' deployed in the FPGA, and that is missing from
this link:


"Survey of C-based Application Mapping Tools for Reconfigurable Computing"
http://klabs.org/mapld05/program_sessions/session_c.html

The HLL -> HDL path misses the alternative of HLL -> FPGA running HLL,
and the best tool set will be one that allows a softer migration
between opcodes and registers.

The next generation of FPGAs will be interesting to watch, as we are
steadily getting more coarse & complex blocks, in BlockRAM and
DSP-able blocks, with each release.
This may outflank the efforts to create C -> registers?

-jg

Jeremy Stringer

Nov 3, 2005, 5:13:19 PM
to
Robin Bruce wrote:
> I think we have to accept that high-level languages are going to be the
> future for FPGAs. Not to say that HDLs will be replaced entirely, but
> they'll be largely supplanted by the HLLs. Algorithms are easier to

<snip>

> First generation tools are far from perfect, but they will see use
> because they significantly decrease development time. Your HLL-designed
> system may not be as efficient as the best possible VHDL design, but if
> it's good enough and you get to market months before the competition,
> you'll come out on top.

Or cynically speaking, we may just get bigger, faster and cheaper FPGAs,
so that it doesn't really *matter* how efficient you are, merely that
you're in the ballpark. I think this has happened to a certain extent
in the software world anyway...

Jeremy

Ray Andraka

Nov 3, 2005, 6:01:25 PM
to
Robin Bruce wrote:

>well, much 'hackery' obviously has happened, as there are tools that
>map C well to hardware. We're not talking about what might happen,
>we're talking about what is happening.
>
>
>

What tools would those be? I've yet to see a tool that will take C
code that has not been so badly bastardized that it no longer looks
much like C code and turn out even half-decent hardware. All of them
require proprietary extensions to the C language to sufficiently
describe hardware, as well as a very specific and stilted programming
style that is as foreign to C programmers as VHDL or Verilog is.

--
--Ray Andraka, P.E.
President, the Andraka Consulting Group, Inc.
401/884-7930 Fax 401/884-7950
email r...@andraka.com
http://www.andraka.com

"They that give up essential liberty to obtain a little
temporary safety deserve neither liberty nor safety."
-Benjamin Franklin, 1759


air_...@yahoo.com

Nov 3, 2005, 8:31:06 PM
to

Ray Andraka wrote:
> What tools would those be? I've yet to see a tool that will take C
> code that has not been so badly bastardized that it no longer looks
> much like C code and turn out even half decent hardware. All of them
> require proprietary extensions to the C language to sufficiently
> describe hardware, as well as a very specific and stilted programming
> style that is as foriegn to C programmers as VHDL or verilog is.

Which is very true, Celoxica being a prime example, as code written for
their target as an HDL would be very tough to get to run on a RISC/CISC
machine and do anything meaningful.

You have to move up the food chain to a C++ design with heavy operator
overloading before you can get close to having the same source target
both environments, if you are going to introduce HDL features into C.
Std C just lacks the native types that get introduced with HDL features
in Handel-C.

So, that leaves two distinctly different camps each trying to use the
same or similar tools for two opposite goals ... the HDL guys designing
hardware, and the reconfigurable computing guys just trying to gain a
faster computing platform with FPGAs.

Personally, I'm comfortable with using VHDL/Verilog for HDL and a
fairly generic C-to-netlist tool (FpgaC) for general reconfigurable
computing, and a mix of tools for gluing projects together (SoCs).

The "which HDL is better" debate is pretty much preference- and
requirements-based, and impossible to win as a general case.

I do think we will see HLLs that target particular technologies that
are well defined and difficult to code easily ... the whole pipelined
data path problem for distributed arithmetic and filters is already
shaping up that way with core generators (which are in fact simple
forms-based HLLs).

Benjamin Ylvisaker

Nov 3, 2005, 9:15:23 PM
to
On 3 Nov 2005 12:15:30 -0800
air_...@yahoo.com wrote:

>
> Martin Ellis wrote:
> > Nobody's pretending C-based synthesis is a complete replacement
> > for HDL, only that for some applications/projects it's a very
> > compelling alternative.
>
> Choke, cough .... ummmm ... Celoxica, ASH, and a few other projects
> really have that goal. Celoxica has very clear guidelines, just as
> VHDL and Verilog have, to allow the coder to understand just what
> registers and logic will be instantiated.

If by ASH you mean the application specific hardware project run by
Seth Goldstein and Mihai Budiu (until he graduated) at CMU, then you
have gotten the wrong impression. I spent a semester in that research
group, and they most certainly do not intend to replace HDLs with C.
They have developed some very interesting compiler technologies that
can generate surprisingly efficient circuits from nearly arbitrary C
code, but even they wouldn't claim that C is an appropriate
replacement for HDLs in all cases. They are spinning their compiler
as a tool that can be used by many more people than traditional HDL
synthesis tools, but the quality of the circuits they produce is still
far from optimized HDL-based designs.

Benjamin

Robin Bruce

Nov 4, 2005, 6:59:47 AM
to
Ray,

OK, maybe I should rephrase what I said, reading it again myself I
don't quite agree with it :). What I meant was that there are tools out
there that can map HLL well to hardware. I didn't mean to suggest that
they are ANSI C. I realise it's a bit of an abuse of the language to
describe these things as C, but I tend to describe the C-inspired
languages of these tools as C, and talk about ANSI C when I want to
make it clear I'm talking about canonical C.

It does seem that most of the tools out there are extensions to C. I
should say at this point that I've never used any of the tools that
have been discussed so far on this board, so I'll leave it to someone
else to talk about how close they are to C. I'm a research engineer
based in Nallatech, and I've been working with a tool being developed
there, DIME-C. I can safely say that DIME-C is to all intents and
purposes a subset of C, so everything you do in it can be compiled with
a gcc compiler. You can't have pointers, and you have to go round the
houses sometimes to avoid breaking your pipelines, but it's definitely
recognisable as C. If anyone is particularly interested, I could send
them some examples of the code. I don't want to bring DIME-C into the
debate though, I'm interested in finding out more about what's out
there, rather than in doing marketing :)

Cheers,

Robin

Robin Bruce

Nov 4, 2005, 7:02:07 AM
to
Jim,

I agree with you about the value of mixing and matching HLL and HDL
solutions in your system. You might want to design the core of your
algorithm using an HLL tool, then link it up to control and memory
systems that you've designed using HDL.

I also can see the constant changing of the underlying hardware as
being a challenge to those who are developing C-to-hardware tools. I
think perhaps it favours those who target their systems at a single
architecture, like SRC with their Carte programming environment.
Handel-C has seen a lot of real success; the most notable in my mind is
it being used in an effort by Lockheed Martin to create a space system
to dock with Hubble
(http://klabs.org/mapld05/presento/220_feifarek_p.ppt). My
understanding of Handel-C is that the basic package contains generic HDL
routines for the common operations (data array storage and retrieval,
fixed-point multiplication etc.) but the user is given the option to
supplement this with implementation-specific routines. So in Xilinx
V-II this would be BRAMs for data array storage and use of the 18x18
multipliers for the fixed-point stuff.

With each new generation of FPGA, you'll need to update your underlying
library routines. As I recall Peter Alfke saying at this year's FPL, to
get the best out of FPGAs, you need to target your architectures,
generic just won't cut it (apologies to Peter if that's not what he was
getting at).

So I guess I agree with you Jim, in that C -> registers is not the best
approach.

Apologies for my ignorance, but can I ask you to expand on "alternative
of HLL -> FPGA Running HLL and the best tool set"? I wasn't sure what
you meant.

Cheers,

Robin

Kolja Sulimma

Nov 4, 2005, 8:05:45 AM
to
air_...@yahoo.com wrote:
> There are a few hundred thousand engineers on the planet that can
> express large complex algorithms in C,
Yes, but Java and C# have essentially the same syntax without any of
the defunct side-effect issues that make C so damn hard to synthesize
but do not have any real use anyway.

But instead of using an existing standard language that can be
synthesized, they define a new language of their own and call it "C,
but you are not allowed to use this and that".

BTW, VHDL has the same issues.

> There are clear advantages to being able write, test, and debug large
> complex algorithms on a traditional processor with a source code
> debugger and moving the nearly finished product to FPGAs for deployment
> and performance. So access to advanced software development tools is
> two.

Yes, but it also is a clear advantage to have a language that allows
for explicit parallelism, processes, signals and events at a finer
grain than threads.

Such languages have existed for traditional processors for a long time
now; Algol comes to mind as a very old example. You know, it is very
easy to map explicit parallelism onto a serial machine. But it is very
hard to extract parallelism from a serial description.

I am a big fan of high level synthesis from algorithmic descriptions
that do not describe the hardware details, but I am sure that C is "A
Really Bad Choice (TM)".

Heck, C is a really bad choice for serial CPUs to begin with. If you
build hardware, you want your compiler to do as many compile time checks
as possible. There's not much that you can catch at compile time with
plain C.


Kolja Sulimma

air_...@yahoo.com

Nov 4, 2005, 8:49:42 AM
to

Benjamin Ylvisaker wrote:
> I spent a semester in that research
> group, and they most certainly do not intend to replace HDLs with C.
> They have developed some very interesting compiler technologies that
> can generate surprisingly efficient circuits from nearly arbitrary C
> code, but even they wouldn't claim that C is an appropriate
> replacement for HDLs in all cases. They are spinning their compiler
> as a tool that can be used by many more people than traditional HDL
> synthesis tools, but the quality of the circuits they produce is still
> far from optimized HDL-based designs.

I was not suggesting that C-syntax HDLs will obsolete VHDL/Verilog
and other technology-specific HDLs, but just as you clearly state, the
goal is to produce "efficient circuits from nearly arbitrary C", which
from the presentations by the group is very, very close to that of a
good HDL with a good-to-excellent coder. With FPGAs increasing in size
at roughly the same Moore's Law rate, and similar performance increases,
the end result is that "surprisingly efficient circuits from nearly
arbitrary C" is more than good enough to replace VHDL/Verilog for the
majority of algorithmic and systems level designs to allow faster time
to market, with a larger pool of talent (being able to draw on C
coders), with good enough performance and fit so that expensive fine
tuning and coding in VHDL/Verilog will NOT be required.

The clear intent is to produce acceptable circuits with less talented
engineers for a variety of target applications, ranging from full
systems to reconfigurable computing.

air_...@yahoo.com

Nov 4, 2005, 9:15:28 AM
to

Kolja Sulimma wrote:
> Yes, but Java and C# have essentially the same syntax without any of the
> defunct side effect issues that make C so damn hard to synthesize but do
> not have any real use anyway.

The issue is having to subset a language and its expected runtime
environment. Both Java and C# borrow heavily from the same subset of C
syntax, and both have similar problems of porting arbitrary code from a
traditional sequential execution environment to FPGAs. The lack of
"real addressable memory" results in difficulties in dynamic
allocation, and runtime architectures that expect pointers to create
and manage data objects.

Starting a "my language is better than your language debate" is really
non-productive,
as the real test is does it exist in a usable form today for this
application, and the
assumption or assertions are in the end only validated by availability
and use as the
true test of what is the best for target applications. One clear
standard is access to
trained labor pools, as frequently the "best" tools for briliant
experienced engineers
create unmanagable complexities for less talented, skilled, experienced
engineers
that will have to maintain the project over it's life. Managing
concurrency has always
been a tough one for less skilled engineers unable to grasp global
system architecture
and state enough to protect from the hazards and deadlocks.

There is a lot of room in this new HLL-to-netlist market ... we would
ALL like to see affordable usable tools that do a better job. Bitching
that C tools are not up to some higher standard is pretty
non-productive, when the existing broadly used tools are at an even
lower standard.

There are few affordable open source tools for students, hobbyists, and
small development shops for FPGAs .... I don't see any that meet your
minimum requirements; maybe your talents will make them available?

Robin Bruce

Nov 4, 2005, 10:15:00 AM
to
> With FPGAs increasing in size at roughly the same Moore's Law rate, and
> similar performance increases, the end result is that "surprisingly
> efficient circuits from nearly arbitrary C" is more than good enough to
> replace VHDL/Verilog for the majority of algorithmic and systems level
> designs to allow faster time to market, with a larger pool of talent
> (being able to draw on C coders), with good enough performance and fit
> so that expensive fine tuning and coding in VHDL/Verilog will NOT be
> required.

Wow, that's a long sentence :) and one I broadly agree with. I'm not so
sure about the phrase "nearly arbitrary C". I don't know all the tools
though, so I'm presenting my limited experiences here, not what I think
is universally true... So no flaming. :)

Is there any tool out there that can produce code that rivals good VHDL
when written by a "C coder"? I'm currently working with a very bright
undergrad who has never used VHDL before, and I've got them using a
C-to-VHDL tool. To effectively use the tool, they're having to
understand why they need shortcuts that avoid "/, % and *" as much as
possible. They need to think about memory management in a very new way,
in a land where BRAM is not limitless. They're also having to consider
the differences between BRAM, SRAM and registered values and their
effects on performance. They need to understand what will break the
pipeline and what won't, what will result in big pipeline latency and
what won't. They need explanations as to why a few small changes can
quarter the final slice count. To work effectively, you need to
understand what kind of hardware you are inferring by what you
write. Two functionally equivalent statements can compile to two very
different VHDL projects. In my experience, HLLs free you up from the
drudgeries of HDLs, but they don't yet free you up from the need for an
understanding of the underlying hardware. With nothing but a C
knowledge you can get something big and slow, but for small and fast,
you need to know what you're inferring.

Cheers,

Robin

air_...@yahoo.com

Nov 4, 2005, 10:43:46 AM
to

Robin Bruce wrote:
>Two functionally equivalent statements can compile to two very
> different VHDL projects. In my experience, HLLs free you up from the
> drudgeries of HDLs, but they don't yet free you up from the need for an
> understanding of the underlying hardware. With nothing but a C
> knowledge you can get something big and slow, but for small and fast,
> you need to know what you're inferring.

That's a clear problem, even with C coders doing any kind of device
driver on a traditional system. The pool of C coders that understand
hardware well enough to write drivers is a small percentage of actual
coders. The difference is that it's relatively easy to teach the
high-level aspects of hardware from a systems perspective to train new
device driver writers and maintainers, and one good low-level engineer
can mentor some half dozen others with less skill and maintain the
quality level necessary for production work. I've done so on several
cross-architecture porting projects with undergrad students.

I believe there is a similar leverage in using C coders for FPGA work:
you do not need EEs with logic-level design skills to develop FPGA
projects, but you do need to teach C coders about the hardware
architecture models that the HLL is going to produce after synthesis.
Even today it's necessary to teach C coders about cycle counting, as
assembly language coding is no longer a basic skill.
Performance-sensitive designs require teaching about working sets,
cache performance issues, CPU pipelines, and a number of issues that a
typical C++ or other object-oriented coder remains clueless about.
There is a continuous design space:

BitLevelLogic<===============================>AbstractAlgorithms

Schematic designs sit on the left, VHDL/Verilog somewhere left of
center, and C-based HLLs somewhere just right of center, with better
HLLs to come even farther right of center. EEs tend to design to the
left of center, and HLL coders to the right of center under this model,
and the better the HLL efficiently hides the hardware, the farther to
the right we move.

So the bottom line is that, just as EEs were the entire computer
development staff in the '50s, today EEs are a fraction of the product
development team. With FPGAs becoming common along with HLLs, we will
see that same trend.

During the '70s we saw a lot of old salt EEs and systems types crying
about HLLs and computer performance, which is nearly a moot point
today. Ditto for the EEs and systems types that will be crying about
large commodity FPGAs similarly not being efficiently used by HLLs
generating FPGA designs from abstract language tools.

Mike Treseler

Nov 4, 2005, 10:51:20 AM
to
Robin Bruce wrote:

> I can safely say that DIME-C is to all intents and
> purposes a subset of C, so everything you do in it can be compiled with
> a gcc compiler. You can't have pointers, and you have to go round the
> houses sometimes to avoid breaking your pipelines, but it's definitely
> recognisable as C. If anyone is particularly interested, I could send
> them some examples of the code.

Simple code examples and synthesis results are what's missing from the
web sites and discussions I have seen. If you've got some, consider
posting a link so that all interested can have a look.

-- Mike Treseler

air_...@yahoo.com

Nov 4, 2005, 11:19:27 AM
to

Robin Bruce wrote:
> I'm a research engineer
> based in Nallatech, and I've been working with a tool being developed
> there, DIME-C.

For those that haven't looked at this stuff, it's the next-generation
HLL FPGA environment, two steps above C, with a cute GUI-based
system-level abstraction tool .... very cool :)

http://www.nallatech.com/mediaLibrary/images/english/4063.pdf

Jim Granville

unread,
Nov 4, 2005, 3:50:11 PM11/4/05
to
Robin Bruce wrote:
> Jim,

>
> Apologies for my ignorance, but can I ask you to expand on "alternative
> of HLL -> FPGA Running HLL amd the best tool set". I wasn't sure what
> you meant.

Jim Granville wrote:
>> The HLL -> HDL path, misses the alternative of HLL -> FPGA Running HLL
>> amd the best tool set, will be one that allows a softer migration
>> between Opcodes and Registers.

"FPGA Running HLL" is a terse way of saying a SoftCPU (can be DSP
enhanced) running opcodes (ex HLL), on the FPGA. ie a FPGA CPU

eg NIOS has an interesting Opcode extension scheme - you can
code in C, run in C, and then grab ONLY the tight stuff for
expansion into hardware, and call with an opcode.

The FPGA vendors seem to be favouring the 'sea of DSP' and std
tool flows, over 'sea of programmers' approach :)

-jg

Kolja Sulimma

unread,
Nov 5, 2005, 6:51:11 AM11/5/05
to
air_...@yahoo.com schrieb:

> Kolja Sulimma wrote:
>
>>Yes, but Java and C# have essentially the same syntax without any of the
>>defunct side effect issues that make C so damn hard to synthesize but do
>>not have any real use anyway.
>
>
> The issue is having to subset a language and its expected runtime
> environment. Both Java and C# borrow heavily from the same subset of
> C syntax, and both have similar problems porting arbitrary code from
> a traditional sequential execution environment to FPGAs. The lack of
> "real addressable memory" results in difficulties with dynamic
> allocation, and in runtime architectures that expect pointers to
> create and manage data objects.

You are perfectly right. As I wrote in my post I think that you need
some features like explicit parallelism that none of the mainstream
languages offer, albeit there are languages available that would be
suitable.

But apparently all those highly trained, clever software engineers
cannot be bothered to learn another language. At least this argument
always comes up at that point. (Maybe I can find an engineer in India
that is still capable of learning?)
So if you really need C syntax as many believe - I don't - at least use
a modern C-derived language that is easier to compile.
With Java, essentially only the "new" operator is a problem. With C,
well, look at the SystemC restrictions.

On the other hand: what's so hard about dynamic allocation? Tell the
designer that it will be slow, and if he uses it, simply synthesize to
a MicroBlaze implementation. You will not meet the timing constraint,
but it can be synthesized.
Or even use profiling to find a typical number of allocated objects and
create them in hardware. If more are used, halt execution. That is
exactly what a sequential processor would do. You can't call malloc a
billion times in C, and maybe you cannot call it 16 times in hardware
C. It is the same type of constraint that is not imposed by the
language but by the implementation fabric, and the designer needs to
know the capabilities of his system before implementing.


Kolja Sulimma

air_...@yahoo.com

unread,
Nov 5, 2005, 8:25:14 AM11/5/05
to

Kolja Sulimma wrote:
> You are perfectly right. As I wrote in my post I think that you need
> some features like explicit parallelism that none of the mainstream
> languages offer, albeit there are languages available that would be
> suitable.

The reality is that forms of parallelism emerge when using C as
an HLL for FPGAs. First, the compiler is free to parallelize
statements as much as can be done. This alone is typically enough
to bring the performance of a 300MHz FPGA clock cycle near the
performance of a several-GHz RISC/CISC CPU for code bodies that
have a significant inner loop. Second, explicit parallelism is
available by replicating these inner loops, creating threads with the
same code body and using established MPI code structures and libraries.
Third, the compiler is free to unroll inner loops. Fourth, the compiler
is free to flatten the netlists to gain additional parallelism. All
this and more is obtained without abandoning stable, mature development
tools, without learning a new development environment that might add a
few percent higher performance, and without significant unwarranted
risks for many projects.
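The second point above - replicating an inner loop as threads with the
same code body - can be sketched in plain C with POSIX threads. This is
an illustrative sketch only; the names (run_replicated, worker, Slice)
are hypothetical, not part of FpgaC or any MPI library:

```c
#include <pthread.h>
#include <stddef.h>

#define N_WORKERS 4
#define N_ELEMS   1024

/* Each worker runs the same inner-loop body over its own slice. */
typedef struct { const int *in; int *out; int lo, hi; } Slice;

static void *worker(void *arg)
{
    Slice *s = (Slice *)arg;
    for (int i = s->lo; i < s->hi; i++)
        s->out[i] = s->in[i] * s->in[i];   /* the "inner loop" body */
    return NULL;
}

/* Replicate the loop body across N_WORKERS threads. */
void run_replicated(const int *in, int *out, int n)
{
    pthread_t tid[N_WORKERS];
    Slice sl[N_WORKERS];
    int chunk = n / N_WORKERS;

    for (int w = 0; w < N_WORKERS; w++) {
        sl[w].in = in;
        sl[w].out = out;
        sl[w].lo = w * chunk;
        sl[w].hi = (w == N_WORKERS - 1) ? n : (w + 1) * chunk;
        pthread_create(&tid[w], NULL, worker, &sl[w]);
    }
    for (int w = 0; w < N_WORKERS; w++)
        pthread_join(tid[w], NULL);
}
```

On a CPU this is ordinary threading; the claim in the post is that the
same source structure maps to replicated hardware loop bodies on an
FPGA.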

After several decades of managing large development projects across
multiple facility and platform evolutions we have learned to mitigate
risks and maximize human potential across a large number of projects,
teams, and technologies while repeatedly delivering results with
acceptable tradeoffs judged by our experience.

Many have also advocated radical changes in language and
development styles. We have the gained the experience in
this process after watching radical changes fail for human and
technology elements not considered by the radical technologies
as proposed. We do learn from those that do succeed and incorporate
with reasoned process to mitigate risks.

One critical risk not foreseen by many of these brash proposed changes
comes from observing that individuals have different degrees of ability
to manage state in designs. Some with high natural ability can learn to
safely manage a very large amount of state with concurrency, and many
more lack this ability and, after the best training, can only handle
significantly smaller amounts of concurrency in a design. This is not a
training issue, this is not an experience issue; this is natural
ability that is developed with training and experience, but the maximum
for each individual is independent of training and experience. Managing
these differences in natural ability forces tradeoffs in complexity
that may not be the best for some, but are best for organizations over
time. It's not uncommon to see brilliant designs that are completely
unmaintainable by mortals.

Time, and time alone, judges successes and failures. Not idealism
and insults.

air_...@yahoo.com

unread,
Nov 5, 2005, 9:10:11 AM11/5/05
to

Kolja Sulimma wrote:
> But apparently all those highly trained clever software engineers can
> not be bothered to learn another language. At least this argument
> allways comes up at that point. (Maybe I can find an engineer in india
> that is still capable of learning?)
> So if your really need C-syntax as many believe - I don't - at least use
> a modern C derived language that is easier to compile.
> With java essentially only the "new" operated is a problem. With C, well
> look at the System-C restrictions.

Open source knows no borders, no race, no religion, no ethnicity, no
politics - no barriers to who can contribute to the world.

Since you KNOW the answer, share it. We will be looking for your
work on SourceForge, and your announcement here. Good ideas which
are never realized are always worthless failures.

> On the other hand: What's so hard about dynamic allocation? Tell the
> designer that it will be slow, and if he uses it simply synthesize to a
> microblaze implementation. You will not meet the timing constraint, but
> it can be synthesized.

Good designs have excellent space-time tradeoffs. CPU cores are large,
and take you right back to serial execution with poor parallelism that
in many cases would have been done better with a VLSI CPU, either as a
hard core or an external device. Likewise, pointer-based memory takes
you right back to serial access of that memory as a critical-path
resource. Dynamic allocation is implicitly serial by design.

> Or even use profiling to find a typicall number of allocated objects and
> create them in hardware. If more are used halt execution. That is
> exeactly what a sequential processor would. You can't call malloc a
> billion times in C and maybe you can not call it 16 times in hardware C.
> It is the same type of constraint that is not imposed by the language
> but by the implementation fabric and the designer needs to know the
> capabilities of his system before implementing.

A language designed around dynamic allocation of objects and classes
is implicitly unusable if limited to 16 such allocations, since only
trivial code body invocations can be realized.

The multiplexors needed to emulate a memory pool of statically
allocated objects are both huge and implicitly serial once made
hazard-free for concurrent access. This takes us right back to poor
space-time tradeoffs and a lack of the implicit parallelism that static
objects offer.
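To make the tradeoff being argued here concrete, a minimal software
model of the "profile, then statically allocate" scheme: a fixed pool
of MAX_OBJS objects stands in for malloc(), and allocation fails once
the profile-derived limit is exceeded (the hardware analogue would
halt). The names obj_alloc/obj_free are hypothetical. Note that the
linear scan over in_use is exactly the kind of serialization the
multiplexor argument above points at:

```c
#include <stddef.h>

#define MAX_OBJS 16

typedef struct { int payload; } Obj;

static Obj pool[MAX_OBJS];       /* statically allocated object pool */
static int in_use[MAX_OBJS];

Obj *obj_alloc(void)
{
    /* Serial scan for a free slot: in hardware this becomes a
     * priority encoder plus multiplexors over the whole pool. */
    for (int i = 0; i < MAX_OBJS; i++)
        if (!in_use[i]) {
            in_use[i] = 1;
            return &pool[i];
        }
    return NULL;                 /* pool exhausted: the "halt" case */
}

void obj_free(Obj *p)
{
    in_use[p - pool] = 0;
}
```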

Obviously you see right past these problems, and we are waiting for
your magic to be realized as a much better language offering on
sourceforge.net.

Since the programmers in India are, by your assertion, superior, please
show us by results, and time may prove you right.

Mike Treseler

unread,
Nov 5, 2005, 12:38:21 PM11/5/05
to
air_...@yahoo.com wrote:

> For those that haven't looked at this stuff, it's the next generation
> HLL
> FPGA environment, two steps above C with a cute GUI based system level
> abstraction tool .... very cool :)
>
> http://www.nallatech.com/mediaLibrary/images/english/4063.pdf


Yes. All of the next-gen websites are cute.
Why is a working code example so hard to find?

-- Mike Treseler

air_...@yahoo.com

unread,
Nov 5, 2005, 5:12:05 PM11/5/05
to

Mike Treseler wrote:
> Yes. All of the next-gen websites are cute.
> Why is a working code example so hard to find?

You can always ask the various sites, or some user. Robin seems to
be using and happy with the DIME stuff, email him for some samples.
Have you tried talking with the company?

Impulse C offers a full-featured 30-day trial, and they are pretty cool
to talk with, and have done a good job of productizing Streams-C.

Streams C is free for non-commercial use, and is available from
http://www.streams-c.lanl.gov/

SA-C by the Colorado State team (Wim Bohm) looks like they don't
intend to make it publicly available, except to companies funding
their research projects.

ASH by the CMU guys isn't likely to get an open source release either,
and is likely to end up licensed to someone for a revenue stream,
from what I was told by one person last year ... but I haven't seen
even that yet. Mihai Budiu appears to now be at Microsoft, and is
publishing papers from there on the technology, so maybe Microsoft
will be licensing the technology, or working from Mihai's development
independently or in partnership with CMU. The papers have been very
cool, but until it's publicly available or a product it's hard to judge
how useful it is for others. The ASH team offered training at a
conference earlier this year, and may do more.

Celoxica isn't quite as easy to get a demo copy from, but some Xilinx
reps seem to have a copy, and they were offering training seminars with
Xilinx across the country.

FpgaC has some examples in the download image, is free to run
your own tests with, and has no restrictions against commercial use.

It seems pretty easy to get working examples simply by downloading
or asking the sales guys ... who didn't respond to your asking?

Jeremy Stringer

unread,
Nov 6, 2005, 4:20:18 PM11/6/05
to
air_...@yahoo.com wrote:
> The reality is that forms of parallelism emerge when using C as
> an HLL for FPGAs. The first is that the compiler is free to parallel
> statements as much as can be done. This alone is typically enough
> to bring the performance of a 300MHz fpga clock cycle near the
> performance of a several GHz RISC/CISC CPU for code bodies that
> have a significant inner loop. Second, explicit parallelism is
> available
> by replicating these inner loops by creating threads with the same
> code body and using established MPI code structures and libraries.

An interesting point with this, of course, is that it's just splitting
the work differently - instead of going

C -> object code -> Processor (with out-of-order execution),

this would seem to be a case where the management of out-of-order
execution is done statically at compile time, rather than
dynamically by the processor.

It could be interesting to see how far this could go -
Compile to code+processor, where the processor architecture is
implemented by the compiler subject to the requirements of the design.

My 2c,
Jeremy

Jim Granville

unread,
Nov 6, 2005, 5:10:42 PM11/6/05
to
Jeremy Stringer wrote:
> It could be interesting to see how far this could go -
> Compile to code+processor, where the processor architecture is
> implemented by the compiler subject to the requirements of the design.

I think that is being done already, tho at the simple end of the scale,
it does prove it is possible.
IIRC, it involved compiling the design twice. Once to generate the
Core+Codes, and again to remove unused portions of the core.
It can introduce other problems - if the CPU changes every time, that
complicates things more, and what looks like a few lines of code, might
enable a new block of the CPU, and have an unexpected hit on both %
Usage, and speed.

Cores themselves are not too large these days, the bigger bottleneck
is on chip code memory.

-jg

air_...@yahoo.com

unread,
Nov 6, 2005, 9:01:54 PM11/6/05
to

Jim Granville wrote:
> IIRC, it involved compiling the design twice. Once to generate the
> Core+Codes, and again to remove unused portions of the core.
> It can introduce other problems - if the CPU changes every time, that

That's interesting :) ... whose tools are doing that?

The other extreme is Sarah's HarPE tool set, which even optimizes away
pretty much the whole core into logic.

Phil Tomson

unread,
Nov 7, 2005, 2:55:44 AM11/7/05
to
In article <qhoe52l...@ruckus.brouhaha.com>,
Eric Smith <er...@brouhaha.com> wrote:
>Rene Tschaggelar wrote:
>> Why are those guys so keen on C ? Suggesting compatibility with
>> something while having least readability ?
>
>air_...@yahoo.com writes:
>> The description at the project page pretty much says it all:
>>
>> "C provides an excellent alternative to VHDL/Verilog for algorithmic
>> expression of tasks targeting FPGAs for reconfigurable computing."
>
>That doesn't explain *why* it's an excellent alternative. I can just
>as easily state that "C provides a terrible alternative to VHDL/Verilog
>for algorithmic expression of tasks targetting FPGAs for reconfigurable
>computing". So why is their statement any more accurate than mine?

The main advantage that C has over the HDLs is that many software
engineers know C; not many know VHDL/Verilog. Perhaps the goal of
targeting FPGAs with C is to allow lots of software engineers to
develop algorithms that can be accelerated in an FPGA.

Of course, a lot of software engineers do not prefer C these days....

Phil

Robin Bruce

unread,
Nov 7, 2005, 12:41:06 PM11/7/05
to
OK, here's an example from a few months back. It's a functional block
that can carry out either an FFT, an IFFT or a complex multiply. The
ensemble makes a pulse compressor. The tool has changed a little since
then, so I wouldn't write it quite like this again. For example, there
was a problem with the % operator back then. Plus, now that I know a
little better what I'm doing, I'd work my index variables differently.
I'd also make it all one loop, so as to better exploit the pipelining.
I maybe add about 100*log2(SIZE) cycles by not having it as one loop. I
seem to remember this compiled to around 16000 slices of an XC2V6000,
and I could clock it at 120MHz (slightly more than ISE said, but you
can get away with these sorts of things in the lab).

#define FFT_FORWARD  -1
#define FFT_BACKWARD  1
#define CMPLX_MULT    2
#define SIZE       4096
#define SIZE_X2    8192   /* was "2xSIZE": a macro name can't start with a digit */

void PC4096Opt(IEEE754 realA_result[SIZE],
               IEEE754 imagA_result[SIZE],
               IEEE754 realB[SIZE],
               IEEE754 imagB[SIZE],
               IEEE754 Root_u1[SIZE_X2],
               IEEE754 Root_u2[SIZE_X2],
               int shuffle[SIZE],
               int nn,
               int m,
               char mode,
               IEEE754 scale)
{
    int toggle;

    float c1, c2, scaleLocal, t1, t2, u1, u2, xi1, xi, xIn, yi1, yi, yIn,
          xA[SIZE], yA[SIZE], xB[SIZE], yB[SIZE];

    int w, z;

    int i, j, i1, l, l1, l2, count, index, offset, shuff;

    if ((mode == FFT_FORWARD) || (mode == FFT_BACKWARD)) {

        if (mode == FFT_FORWARD) offset = 0;
        else offset = nn;

        scaleLocal = (float) scale;

        for (i = 0; i < nn; i++) {
            shuff = shuffle[i];
            xIn = (float) realA_result[shuff];
            xA[i] = xIn;
            yIn = (float) imagA_result[shuff];
            yA[i] = yIn;
        }

        // Compute the FFT

        c1 = -1.0;
        c2 = 0.0;
        l2 = 1;
        count = offset;
        toggle = 0;

        for (l = 0; l < m; l++) {
            l1 = l2;
            l2 <<= 1;
            for (j = 0; j < (nn >> 1); j++) {
                // Pipelined inner loop
                index = count + (j >> ((m - l) - 1));
                u1 = (float) Root_u1[index];
                u2 = (float) Root_u2[index];
                w = l2 * j;
                z = w / (nn - 1);
                i = w - (nn - 1) * z;
                // Really, should be: i = (l2*j) % (nn-1)
                i1 = i + l1;

                if (toggle == 0) {
                    xi1 = xA[i1];
                    xi  = xA[i];
                    yi1 = yA[i1];
                    yi  = yA[i];

                    t1 = u1 * xi1 - u2 * yi1;
                    t2 = u1 * yi1 + u2 * xi1;
                    xB[i1] = xi - t1;
                    yB[i1] = yi - t2;
                    xB[i]  = xi + t1;
                    yB[i]  = yi + t2;
                } else {
                    xi1 = xB[i1];
                    xi  = xB[i];
                    yi1 = yB[i1];
                    yi  = yB[i];

                    t1 = u1 * xi1 - u2 * yi1;
                    t2 = u1 * yi1 + u2 * xi1;
                    xA[i1] = xi - t1;
                    yA[i1] = yi - t2;
                    xA[i]  = xi + t1;
                    yA[i]  = yi + t2;
                }
            }
            count = (l2 - 1) + offset;
            toggle = toggle ^ 1;
        }
        // Scaling for forward transform
        for (i = 0; i < nn; i++) {
            if (toggle == 0) {
                realA_result[i] = (IEEE754)(scaleLocal * xA[i]);
                imagA_result[i] = (IEEE754)(scaleLocal * yA[i]);
            } else {
                realA_result[i] = (IEEE754)(scaleLocal * xB[i]);
                imagA_result[i] = (IEEE754)(scaleLocal * yB[i]);
            }
        }
    }

    else if (mode == CMPLX_MULT) {
        for (i = 0; i < nn; i++) {
            xA[i] = (float) realA_result[i];
            yA[i] = (float) imagA_result[i];
            xB[i] = (float) realB[i];
            yB[i] = (float) imagB[i];
        }
        for (i = 0; i < nn; i++) {
            realA_result[i] = (IEEE754)((xA[i] * xB[i]) - (yA[i] * yB[i]));
            imagA_result[i] = (IEEE754)((yA[i] * xB[i]) + (xA[i] * yB[i]));
        }
    }
}
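Robin noted earlier in the thread that DIME-C source compiles with
plain gcc. A minimal standalone check of just the CMPLX_MULT branch
above, shrunk to a tiny size, demonstrates that. Here `IEEE754` is
assumed to map to `float` for software simulation; in DIME-C it is the
tool's floating-point type, so that typedef is an assumption:

```c
/* Standalone software check of the CMPLX_MULT branch. */
typedef float IEEE754;   /* assumption: IEEE754 behaves like float */
#define NN 4

void cmplx_mult(IEEE754 reA[NN], IEEE754 imA[NN],
                const IEEE754 reB[NN], const IEEE754 imB[NN])
{
    float xA[NN], yA[NN], xB[NN], yB[NN];
    int i;

    /* Stage inputs into local arrays, as the original code does. */
    for (i = 0; i < NN; i++) {
        xA[i] = (float) reA[i];  yA[i] = (float) imA[i];
        xB[i] = (float) reB[i];  yB[i] = (float) imB[i];
    }
    /* Complex multiply: (xA + j*yA) * (xB + j*yB), result into A. */
    for (i = 0; i < NN; i++) {
        reA[i] = (IEEE754)(xA[i] * xB[i] - yA[i] * yB[i]);
        imA[i] = (IEEE754)(yA[i] * xB[i] + xA[i] * yB[i]);
    }
}
```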

air_...@yahoo.com

unread,
Nov 8, 2005, 5:01:20 PM11/8/05
to
Phil Tomson wrote:
> The main advantage that C has over the HDLs is that many software
> engineers know C; not many know VHDL/Verilog. Perhaps the goal of
> targeting FPGAs with C is to allow lots of software engineers to
> develop algorithms that can be accelerated in an FPGA.

When we look at FPGAs for reconfigurable computing, that is certainly
the draw. Look at the C-like offerings for FPGAs from firms like
Mitrionics, whose product Mitrion-C directly targets moving
High Performance Computing (HPC) applications to FPGAs:

http://news.taborcommunications.com/msgget.jsp?mid=461789&xsl=story.xsl
http://www.mitrionics.com/index.shtml

With the claim of 20 times faster execution on FPGAs, it's pretty clear
that FPGAs have a new volume market. If you google-search reconfigurable
computing, there are links to hundreds of firms and projects with this
goal.

> Of course a lot of software engineers do not prefer C these days....

There are more different types of software engineers than there are
hardware engineers. Most software engineers have never liked plain
vanilla C, just as most hardware engineers don't like RF and power
supply engineering.

That has always been true. C has always been the systems programming
language of choice for low-level implementation as a direct substitute
for assembly language. This has been true since the early days in the
1970s, when C grew out of the B programming language - a threaded
interpreter - into a fully compiled language usable to replace almost
all the assembly in the UNIX operating system and utilities during the
V5, V6, and V7 migrations.

Higher-level languages, with better database and GUI interfaces and
other application development libraries, have always been the languages
of choice for higher-level applications. These days that means a large
number of higher-level object-oriented or application-specific
languages, including C++ and Java. While C++ and Java resemble C
syntax, that is about where it stops - much like apples, oranges, and
bananas all grow on plants known as trees, and that is where the
similarity ends.

C as a low-level assembly language replacement is primarily used by a
subset of programmers doing systems-level programming and a small
group of applications programmers doing hardware interfacing and
performance-sensitive optimizations. These programmers frequently have
the skill sets to understand interfacing to hardware at a high level,
and are the target of many current C-based reconfigurable computing
development tool projects. While some high-level applications
programmers used to coding in C++, Java, and other production languages
may be trainable to do low-level C work on FPGAs, in general they will
find C about as primitive as a VHDL/Verilog designer finds schematics.

So, like it or not, there are two very different markets for FPGA
hardware and tools: those building hardware, and those building
applications for HPC platforms. And a lot of grey area in between.

air_...@yahoo.com

unread,
Nov 10, 2005, 5:48:40 PM11/10/05
to
Another developer joined the FpgaC project today to clean up some of
the yacc/lex design issues as a C compiler (like variable scope).

This, plus structures, unions, typing, and unsigned variables, are on
the near-term list of things to fix in this compiler so it will take a
cleaner subset of traditional C.

Longer term, I would like to add direct support for bit-serial
distributed arithmetic to be able to support multiply,
multiply-accumulate, divide and mod (%) operations. Plus add direct
support for threads by using some compiler primitives that know about
fork() and exec(), along with library primitives for MPI and POSIX
threads, so that parallelism would remain expressed the same way in
FPGA and RISC/CISC testing environments.
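For readers unfamiliar with the technique, the multiply that bit-serial
arithmetic implements can be modelled in software as a shift-add loop:
one partial product handled per "clock", which is the structure a
bit-serial multiply primitive would realize in hardware. This is a
sketch of the general technique only, not FpgaC's actual design:

```c
#include <stdint.h>

/* Software model of a bit-serial shift-add multiplier: examine one
 * multiplier bit per iteration, conditionally accumulating a shifted
 * copy of the multiplicand - one iteration per hardware cycle. */
uint32_t shift_add_mul(uint16_t a, uint16_t b)
{
    uint32_t acc = 0;
    uint32_t addend = a;
    int bit;

    for (bit = 0; bit < 16; bit++) {
        if (b & 1u)
            acc += addend;       /* conditional add of partial product */
        addend <<= 1;            /* shift for the next bit position */
        b >>= 1;                 /* next multiplier bit */
    }
    return acc;
}
```

In an FPGA the conditional add becomes an AND gate feeding an adder and
the shifts become wiring, which is why the scheme is cheap in area at
the cost of one cycle per bit.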

Anybody else have pet ideas, suggestions, or wish-list items that
a C based HLL/HDL tool should handle?

Martin Thompson

unread,
Nov 11, 2005, 5:22:14 AM11/11/05
to
air_...@yahoo.com writes:


> Anybody else have pet ideas, suggestions, or wish-list items that
> a C based HLL/HDL tool should handle?
>

I had a quick play the other day and I noticed that the counter
example generates its own set of lookup tables for the increment
logic, creating an enormous VHDL file. I imagine the code generator
could be simplified somewhat to take advantage of modern
synthesis... writing a process with x := x + 1 in!

My personal wishes for it would be

a) you can "simulate" the code with a suitable wrapper using a normal
C-compiler

b) you can explicitly parallelise stuff within a given function

c) some form of communication is provided between processes, that also
simulates in boggo-C. Something like Occam channels or FSL ports
would be my preference
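Item (c) above - channels that also simulate in ordinary C - can be
sketched as a small software FIFO. This is a hedged illustration only;
chan_put/chan_get are hypothetical names, not an existing FpgaC or
DIME-C API. On an FPGA the same interface could map to an FSL port or
hardware FIFO, while under a normal C compiler it simulates directly:

```c
#include <stddef.h>

#define CHAN_DEPTH 8

/* An Occam/FSL-style channel modelled as a bounded software FIFO. */
typedef struct {
    int buf[CHAN_DEPTH];
    int head, tail, count;
} Chan;

/* Returns 0 on success, -1 if the channel is full. */
int chan_put(Chan *c, int v)
{
    if (c->count == CHAN_DEPTH) return -1;
    c->buf[c->tail] = v;
    c->tail = (c->tail + 1) % CHAN_DEPTH;
    c->count++;
    return 0;
}

/* Returns 0 on success, -1 if the channel is empty. */
int chan_get(Chan *c, int *v)
{
    if (c->count == 0) return -1;
    *v = c->buf[c->head];
    c->head = (c->head + 1) % CHAN_DEPTH;
    c->count--;
    return 0;
}
```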

Cheers,
Martin


--
martin.j...@trw.com
TRW Conekt - Consultancy in Engineering, Knowledge and Technology
http://www.trw.com/conekt

air_...@yahoo.com

unread,
Nov 11, 2005, 11:26:48 AM11/11/05
to

Martin Thompson wrote:
> I had a quick play the other day and I noticed that the counter
> example generates it's own set of lookup-tables for the increment
> logic, creating an enormous VHDL file. I imagine the code generator
> could be simplified somewhat to take advantage of modern
> synthesis... writing a process with x:=x+1 in!

I had a similar experience and mindset for a while, but I'm not so
sure after spending a few hours hacking on the code to preserve
bit-vector naming last spring. The VHDL netlist output is produced
after all the optimizations have been done in FpgaC, and it more or
less then attempts to build the same circuit structures in VHDL that it
would in XNF.

Internally, TMCC/FpgaC does not save statements while parsing; it
goes straight to building internal netlists for the operations. As a
result, using FpgaC as a C-to-VHDL translator is more than a bit
of work and would radically interfere with the compiler's C-to-netlist
functions. It would effectively require breaking FpgaC into two
parts, which would probably be better done by stripping the backend
off FpgaC and making a second tool for a C-to-VHDL translator
that would preserve statement-level syntax.

Since the long-term goal is to produce good XNF/EDIF netlists
that are tightly integrated with a set of RPMs and cores for
reconfigurable computing, it has not been clear whether VHDL output
is the best way to get there, or a diversion.

>
> My personal wishes for it would be
>
> a) you can "simulate" the code with a suitable wrapper using a normal
> C-compiler

That is already the goal for FpgaC. I've already done some of this as
I've hacked the original TMCC work. Implementing the rest of the C
native types, moving the port definitions to pragmas, and starting the
move to using bit-field syntax is all part of that goal. The current
bit-field size hack isn't precise C, as FpgaC doesn't have structures.
One of the next changes is to introduce structures, unions, and enums
into FpgaC so that bit fields in a structure will be the same as bit
fields in standard C. Adding some pragmas which guide enum, counter,
and state machine generation by suggesting one-hot, grey-coded, and
traditional counter types is somewhere in the long-term road map.

Typedef also needs to be added, but that will probably happen sooner,
as we fix the symbol table scoping rules in the next group of changes
that will overhaul the entire symbol dictionary code.

While other C-to-netlist tools are generally trying to make C an HDL,
and are willing to make radical changes to the language departing from
standard C, I'm looking to target FpgaC as an HLL-to-netlist tool, less
concerned with having it compete directly with VHDL/Verilog. The intent
here is to make FpgaC a very good C-to-netlist tool for reconfigurable
computing, one which preserves standard C execution expectations on a
traditional compiler and CPU.

Thus, it should always simulate correctly on a CPU, unless you work
hard to break it by using some HLL-specific feature (if any remain), or
the word size chosen in the netlist target cannot be represented in the
CPU you are simulating on and there are word size or endian problems
as a result.

> b) you can explicitly parallelise stuff within a given function

This breaks a).

The goal is to do a) in the most parallel way that can be done
transparently, much as multi-issue superscalar CPUs do - i.e., exploit
as much parallelism as exists in the sequential specification. This is
actually quite a bit, and frequently much more than a superscalar
processor can muster.

Currently, every statement block is allowed to become a flattened
combinatorial netlist. So a single-statement loop doesn't have much
parallelism, unless it's a pretty complex statement with a lot of
subexpression operations. There we win, as the entire expression can be
flattened to a single combinatorial netlist.

Later, to improve this, I plan to add loop unrolling guided with a
pragma, as a space-time tradeoff.

This doesn't break a), and preserves the ability to use a traditional
CPU as an effective simulation environment.
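The unrolling tradeoff mentioned above can be written out by hand in
plain C. A sketch of what a pragma-guided unroller would produce, four
accumulations per statement block instead of one - in a netlist that
becomes four adders evaluated in parallel, while on a CPU it still
simulates identically (goal (a)). The function name is illustrative:

```c
/* Sum an array with the inner loop unrolled by 4.
 * Assumes n is divisible by 4 (a real unroller would emit a
 * cleanup loop for the remainder). */
int sum_unrolled4(const int *v, int n)
{
    int s0 = 0, s1 = 0, s2 = 0, s3 = 0;
    int i;

    for (i = 0; i < n; i += 4) {
        /* Four independent accumulators: no serial dependence
         * between them, so hardware can evaluate all four at once. */
        s0 += v[i];
        s1 += v[i + 1];
        s2 += v[i + 2];
        s3 += v[i + 3];
    }
    return s0 + s1 + s2 + s3;    /* final reduction */
}
```

The space cost (four adders and registers instead of one) bought a 4x
reduction in loop iterations - the space-time tradeoff the pragma would
let the designer dial in.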

> c) some form of communication is provided between processes, that also
> simulates in boggo-C. Something like Occam channels or FSL ports
> would be my preference

In a traditional multiprocessor environment, MPI and PVM provide the
communications standards that are widely used. The intent is to create
some intrinsics that the compiler uses to build mailboxes and FIFOs for
IPC needs, and then provide an FpgaC set of libraries and header files
that implement most of std libc, MPI, PVM, and POSIX threads.

This addition to FpgaC is needed to handle PCI-to-host communications,
as well as inter-FPGA communications in multiple-FPGA platforms.

Extending FpgaC in this way preserves a).

air_...@yahoo.com

unread,
Nov 11, 2005, 11:40:19 AM11/11/05
to
I should add that Chirag and I could use help in realizing these goals.
