How about taking a look at SystemC (www.systemc.org)? As far as I know a
synthesizable subset of SystemC exists, but I can't find the document at the
moment.
You can also take a look at the following documents:
http://www.systemc.org/projects/sitedocs/document/SystemC_WP20/en/1
http://www.systemc.org/projects/sitedocs/document/E3/en/1
Hope I could help
Ciao
"DJohn" <deepu...@yahoo.com> wrote in message
news:am6s84$303tn$1...@ID-159866.news.dfncis.de...
mike wrote:
--
--Ray Andraka, P.E.
President, the Andraka Consulting Group, Inc.
401/884-7930 Fax 401/884-7950
email r...@andraka.com
http://www.andraka.com
"They that give up essential liberty to obtain a little
temporary safety deserve neither liberty nor safety."
-Benjamin Franklin, 1759
> I can't claim to be an expert, but what I think you want to do isn't
> going to work. You can't take any arbitrary C/C++ program and convert it
> into VHDL code that will run on an FPGA.
Check out Handel-C ... which can convert most ANSI C programs into
VHDL/EDIF suitable for FPGAs. They also have some great extensions to the
C language that are useful in hardware design.
http://www.celoxica.com/home.htm
Sameer.
--
MTech Student,
Reconfigurable Computing Lab,
KReSIT, IIT-Bombay.
----------------------------------------------------------------------
Your supervisor is thinking about you.
Handel-C is a nice language, but it is not made for developing hardware
interfaces like DDR or PCI-X.
So if you want to interface real hardware, in real time, then I recommend
coding VHDL for those components, even if Celoxica claims they can
do it (but they don't support DDR at all for now).
Once the data is in the chip, you can use the Handel-C
compiler/components to develop the algorithm. Celoxica can translate
Handel-C to a VHDL or EDIF target.
The Celoxica compiler has a Visual Studio feel, and they did an
amazing job translating hardware concepts and terms into software concepts
and terms. You can instantiate a dual-port RAM in block RAM in a few
lines:
#define DEF_BPP 8
#define DEF_NB_PIXEL_PER_LINE 1024

set clock = "myProcessingClock";

typedef mpram myDualPort
{
    wom unsigned int DEF_BPP wPort[DEF_NB_PIXEL_PER_LINE]; // Write-only port
    rom unsigned int DEF_BPP rPort[DEF_NB_PIXEL_PER_LINE]; // Read-only port
} DUAL_PORT;

// Dual-port RAM instantiation, in block RAM
static DUAL_PORT m_DualPort with { block = 1 };
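For readers unfamiliar with the mpram syntax, here is a hypothetical usage sketch (untested; the identifiers pixelIn, pixelOut, wAddr and rAddr are invented for illustration): each port of the dual-port RAM can be accessed from a different branch of a par block in the same clock cycle.

```
// Hypothetical use of the dual-port RAM declared above: one parallel
// branch writes through the write-only port while the other reads
// through the read-only port, in the same clock cycle.
unsigned int DEF_BPP pixelIn, pixelOut;
unsigned int 10 wAddr, rAddr;        // 10 bits address 1024 entries

par {
    m_DualPort.wPort[wAddr] = pixelIn;
    pixelOut = m_DualPort.rPort[rAddr];
}
```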
But still, you have to know the FPGA architecture, and try to understand
what a pointer to an array of RAMs means...
To route the chip, you still need ISE. So to do a complete project,
you need an editor/simulator (Active-HDL), a synthesis tool (Synplicity),
a router (ISE), and Celoxica. This is big $$$.
Good luck
Pierre
"Sameer D. Sahasrabuddhe" <same...@it.iitb.ac.in> wrote in message news:<pan.2002.09.19....@it.iitb.ac.in>...
Uncle Noah
Synthesis from a C/C++ algorithm is absolutely possible and has been in
use for some time. Even "plain vanilla C" algorithms can be used with the
right synthesis product. Obviously it does take some amount of hardware
knowledge to get reasonable hardware out the backend. All that should be
required is to add the hardware interface structure to the C/C++ algorithm.
For that we recommend using SystemC (there is a reference implementation
available under an open source license at www.SystemC.org). SystemC provides
the necessary abstraction in C++ to add concurrency, bit accuracy, and other
hardware-isms to the C/C++ algorithm.
To take the algorithm to hardware (RTL Verilog or VHDL) my company offers
a product called "Cynthesizer" for high-level synthesis from SystemC. We've
had a number of customers take generic algorithms (some even from the web)
such as filters, encryption, multimedia, etc. and convert them directly to
RTL Verilog and VHDL. The resulting RTL can be put into any FPGA or ASIC
synthesis tool as well as any other tool that operates on RTL.
For more information check out our web site at www.ForteDS.com or feel free
to email me directly.
Best regards,
Brett
"Ray Andraka" <r...@andraka.com> wrote in message
news:3D894809...@andraka.com...
Brett Cline wrote:
> Hi All-
>
> Synthesis from a C/C++ algorithm is absolutely possible and has been in
> use for some time. Even "plain vanilla C" algorithms can be used with the
> right synthesis product.
Show me. Nothing I've seen can handle C code that was not specifically written
to create hardware. I think your next sentence probably validates that as
well. Plain vanilla C has nothing in it to support concurrency, and I know of
no product that can infer that concurrency from existing (not written
specifically to map to hardware, usually using special extensions) code.
> Obviously it does take some amount of hardware
> knowledge to get reasonable hardware out the backend. All that should be
> required is to add the hardware interface structure to the C/C++ algorithm.
> For that we recommend using SystemC (there is a reference implementation
> available under an open source license at www.SystemC.org). SystemC provides
> the necessary abstraction in C++ to add concurrency, bit accuracy, and other
> hardware-isms to the C/C++ algorithm.
>
> To take the algorithm to hardware (RTL Verilog or VHDL) my company offers
> a product called "Cynthesizer" for high-level synthesis from SystemC. We've
> had a number of customers take generic algorithms (some even from the web)
> such as filters, encryption, multimedia, etc. and convert them directly to
> RTL Verilog and VHDL. The resulting RTL can be put into any FPGA or ASIC
> synthesis tool as well as any other tool that operates on RTL.
But at what price? Is the performance and density reasonably close to what a
skilled hardware designer can accomplish?
Tim Callahan did his thesis on such a compiler for an environment
(GARP) where you had an FPGA coprocessor. He was able to get speedup
for some loops, but only after heroic effort on the compiler side.
Hand mapping produced far superior results.
--
Nicholas C. Weaver nwe...@cs.berkeley.edu
Yeah, and show me too. I've yet to see any benchmarks that show anywhere
near usable performance and size. From my experience, you are better off
re-writing it in an HDL than trying to fight with these tools in both the
size and speed arena.
Why do things that supposedly make your life "easier" end up making them
harder?
> > To take the algorithm to hardware (RTL Verilog or VHDL) my company offers
> > a product called "Cynthesizer" for high-level synthesis from SystemC. We've
> > had a number of customers take generic algorithms (some even from the web)
> > such as filters, encryption, multimedia, etc. and convert them directly to
> > RTL Verilog and VHDL. The resulting RTL can be put into any FPGA or ASIC
> > synthesis tool as well as any other tool that operates on RTL.
>
> But at what price? Is the performance and density reasonably close to what a
> skilled hardware designer can accomplish?
Of course not, Ray, you know better than to ask that question ;-)
Regards,
Austin
Austin Franklin wrote:
--
> "Ray Andraka" <r...@andraka.com> wrote in message
> news:3D9A76DD...@andraka.com...
> >
> >
> > Brett Cline wrote:
> >
> > > Hi All-
> > >
> > > Synthesis from a C/C++ algorithm is absolutely possible and has been in
> > > use for some time. Even "plain vanilla C" algorithms can be used with the
> > > right synthesis product.
> >
> > Show me.
>
> Yeah, and show me too. I've yet to see any benchmarks that show anywhere
> near usable performance and size. From my experience, you are better off
> re-writing it in an HDL than trying to fight with these tools in both the
> size and speed arena.
Celoxica have some fairly impressive demos. In particular, the
ray-tracing one I saw (can't find a reference for it though). The MP3
decoder which "...took a two man team less than eight weeks to produce
a working silicon prototype, including implementing a CD-ROM
controller to allow management of the input data stream."
http://www.celoxica.com/products/technical_papers/case_studies/cs_001.htm
And that included converting the algorithm from floating point to
fixed point. I don't know much about MP3, or how representative of
'average engineers' the team that did it is, but that sounds
reasonably quick. Anyone else care to comment?
Yes, the device may be bigger than required, but if the development
time is reduced by an order of magnitude, surely that's a gain, in
some circumstances? Especially in the research realm, where some
problems may be intractable because a) software isn't fast enough, b)
the HW solution will take too long to design the traditional way.
Agreed, you are never going to get as good results as a hand-optimised,
laid-out device, but wasn't the argument against C
compilers by the assembler programmers similar, many years ago? And
against Java...
<snip>
> >
> > But at what price? Is the performance and density reasonably
> close to a what a skilled hardware designer can accomplish?
>
> Of course not, Ray, you know better than to ask that question ;-)
>
And indeed it will be a long time before high-end designs can avoid
getting *that* close to the hardware. Still, I imagine that in 20 years'
time the Rays of this world will be pushing the then-current
methodologies to the limits of performance anyway - there'll always be
a demand for clever HW designers!
Just MHO!
Cheers,
Martin
--
martin.j...@trw.com
TRW Conekt, Solihull, UK
http://www.trw.com/conekt
> Celoxica have some fairly impressive demos. In particular, the
> ray-tracing one I saw (can't find a reference for it though).
> Yes, the device may be bigger than required, but if the development
> time is reduced by an order of magnitude, surely that's a gain, in
> some circumstances? Especially in the research realm, where some
> problems may be intractable because a) software isn't fast enough, b)
> the HW solution will take too long to design the traditional way.
Good point, but AFAIK Celoxica's solution is occam, with a C syntax.
Occam is a reasonable solution to the problem. It is probably not
too difficult to fiddle with the VHDL syntax to make it more C-like,
but you would not be moving the problem forwards.
Parallel programming, as in hardware design, is much harder than
sequential programming. It is a little optimistic to hope that
we will be able to write a parallel program by
1. writing a sequential equivalent
2. summoning up a magic parallelizer
Though you can certainly use your parallel hardware as sequentially
as your budget can stand, and you can obviously use any sequential
language to express the core dataflow graph (I think I mean
dataflow :-))
/Tim
Martin Thompson wrote:
--
> I agree that the C to hardware things have their place. If nothing else, it
> lowers the bar for entry into FPGAs. What is missing is the necessary caveat
> explaining that there is a lot of performance/density left on the table...and
> that is much more than is commonly touted. I think an order of magnitude
> wouldn't be far wrong as an average.
>
I wouldn't argue there!
> Martin Thompson wrote:
>
> > Celoxica have some fairly impressive demos. In particular, the
> > ray-tracing one I saw (can't find a reference for it though).
>
> > Yes, the device may be bigger than required, but if the development
> > time is reduced by an order of magnitude, surely that's a gain, in
> > some circumstances? Especially in the research realm, where some
> > problems may be intractable because a) software isn't fast enough, b)
> > the HW solution will take too long to design the traditional way.
>
> Good point, but AFAIK Celoxica's solution is occam, with a C syntax.
> Occam is a reasonable solution to the problem. It is probably not
> too difficult to fiddle with the VHDL syntax to make it more C-like,
> but you would not be moving the problem forwards.
>
As I see it (and I'm not an expert in Handel-C by any stretch of the
imagination!) the big difference is that Handel-C is serial with
explicit parallelism, whereas VHDL is parallel with explicit serialism
(you write your own state machine).
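To illustrate that difference, here is a small Handel-C-style sketch (untested; identifiers invented): statements are sequential by default, one assignment per clock cycle, and a par block makes the parallelism explicit. In VHDL you would instead get the parallelism for free and hand-code a state machine to get the sequencing.

```
// Hypothetical Handel-C fragment. Sequential by default:
unsigned int 8 a, b, sum, diff;

seq {                // three statements, three clock cycles
    a = 1;
    b = 2;
    sum = a + b;
}

par {                // explicit parallelism: both in one clock cycle
    sum  = a + b;
    diff = a - b;
}
```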
> Parallel programming, as in hardware design, is much harder than
> sequential programming. It is a little optimistic to hope that
> we will be able to write a parallel program by
>
> 1. writing a sequential equivalent
> 2. summoning up a magic parallelizer
>
Funny - just this debate has been going on on comp.arch recently, but
I can't find it on google yet.
<snip>
Martin Thompson wrote:
Celoxica have some fairly impressive demos. In particular, the
ray-tracing one I saw (can't find a reference for it though). The MP3
decoder which "...took a two man team less than eight weeks to produce
a working silicon prototype, including implementing a CD-ROM
controller to allow management of the input data stream."
http://www.celoxica.com/products/technical_papers/case_studies/cs_001.htm
--
Healthy skepticism is great. I was actually skeptical when I started
working on this technology about 3 years ago.
Regarding your question on performance. In every case so far, we met or
beat the performance targets for the device. In most cases we will beat area
as well, but not all. The reason is that the compiler will do things (like
share resources) that you simply will not because of the complexity of
trying to manage it.
I don't advocate using Handel-C. Everything I've seen of it brings things
too close to the hardware. For an algorithmic design, I want to leave my
algorithm untouched. And to leave the algorithm untouched you must have, as
someone pointed out, a very sophisticated compiler. We've already done
designs such as image compression, encryption, filters, etc. where we've
been able to create multiple RTL implementations with different
area/performance tradeoffs in a fraction of the time it took to create a
single version of the hardware by hand. And, in most cases, we were able to
get a better end result (area vs. performance) than the customer got by
hand. You'll say the same thing they did: "No way!" -- that's ok, it's up to
us to prove it.
This is simply a progression in hardware design, much as the C compiler was
a progression beyond assembly in software. Yes, there will always be cases
where the HDL has to be so compact to meet the goals that you must write
low-level Verilog (or even gates). As president of your company, I'm sure you
would agree that for certain applications the value of time to market
dramatically overshadows most other costs in the process. Clearly the design
must meet timing - clearly it must be within some tolerance of area -- but if
you can save 3, 6, or more months to market -- what is the savings (or gain)
there?
For the "Show Me" part, let's talk about that - the proof is in real
results.
Thanks for the spirited discussion -
Brett
"Ray Andraka" <r...@andraka.com> wrote in message
news:3D9A76DD...@andraka.com...
> Regarding your question on performance. In every case so far, we met or
> beat the performance targets for the device. In most cases we will beat area
> as well, but not all. The reason is that the compiler will do things (like
> share resources) that you simply will not because of the complexity of
> trying to manage it.
<Set cynical>
Meeting performance targets is easy if you pick low enough performance
targets.
<End cynical>
Designs for high clock speeds usually can't do much resource sharing:
routing and muxing of signals kills the performance. Usually the
reverse is needed. I often find I need to duplicate resources to meet
performance. That is, I'll make two copies of a group of logic so as to
reduce the routing delays. As you seem to be selling Forte, does it
make duplicate copies of logic based on the floorplan it creates? Can
it make duplicate copies if told to? Oh, wait, I should ask this first:
does Forte make or use a floorplan? If it doesn't, then how do you
expect to match the speed of a design with a floorplan? Floorplans
often reduce the clock cycle time of a design by 30%. So maybe I
shouldn't expect a Forte based design to match the speed of a well
crafted VHDL design?
> I don't advocate using Handle-C.
Perhaps you mean Handel-C?
<Assume Handel-C is subject>
> Everything I've seen as well brings it
> too close to hardware. For an algorithmic design, I want to leave my
> algorithm untouched.
Handel-C is a somewhat higher level language than VHDL, however it is
closer to the hardware than SystemC. And that is both a good thing, as
you can build what you want, and a bad thing. I've been thinking about
using Handel-C for the next design as it seemed to me to be getting
close enough to a reasonable tool to be worth using. Not for the whole
design, just for a part that is both complex and should be fairly easy to
meet timing even if the tool doesn't do the best job.
--
Phil Hays
As I said, there is a place for such tools, but for the most part, it is
probably not appropriate for something where cost, power or density is an
issue. What tools like this do is lower the bar to make it more accessible for
those without the expertise. It is not a replacement for the expertise, just a
tool that lowers the bar somewhat. My experience with many of the high-level
tools has been that any time savings afforded are far overshadowed by the
"pushing on a rope" that is often needed to get the tool to do what you know is
right. Synthesis allows you to get around that where needed. Does this tool
also allow it? Some of the other HLL tools do not, so they become very awkward
to make work when you are pushing into the corners of the envelope.
Ray Andraka <r...@andraka.com> wrote:
> beat the target. The real question is whether you are using the device anywhere
> near its capability. As for comparisons, you need to be careful what you are
> comparing it to. There is an astonishing amount of mediocre or worse design
> going on out there. The fact that the average FPGA user only does one FPGA
> every 18-24 months and the tools/devices are changed at a much faster rate than
> that alone says that a large number of the FPGA designs are being done as
> As I said, there is a place for such tools, but for the most part, it is
> probably not appropriate for something where cost, power or density is an
> issue. What tools like this do is lower the bar to make it more accessible for
> those without the expertise. It is not a replacement for the expertise, just a
> tool that lowers the bar somewhat.
I used the toolset from CynApps, which is now Forte, for a system
solving 3-SAT with dedicated HW and SW.
Using the whole set allows you to describe HW in a syntax very close to
VHDL, which I preferred for my part of the job.
The advantage I saw came from better testbench development using C++
and faster simulation speed. Of course that's not how they want to sell
their tools, because testbench development isn't exactly the thing our
management wants to buy *g*.
I seldom find this aspect in the traditional-HDL-vs-C/C++
discussion.
I agree with you that the tools don't replace expertise for problems
you need expertise to solve. I don't believe you will ever be able to
squeeze out the design with high-level synthesis. But more and more
you don't need to; just take the next bigger and faster device
[1]. So work can be done using high-level synthesis without wasting
time on details no one cares about.
> right. Synthesis allows you to get around that where needed. Does this tool
> also allow it? Some of the other HLL tools do not, so they become very awkward
> to make work when you are pushing into the corners of the envelope.
That's an interesting point. The version of the high-level synthesizer
I used was push-and-go. If the result fits, it is the best solution
you can get; if not, you lose. I don't think that's a matter of
high-level vs low-level synthesis but a matter of tool philosophy. I
had to use Synplify recently (sadly not the full version) and missed
all the switches Synopsys dc_shell would offer. On the other hand you
don't need experience to use Synplify, whereas I needed weeks
before I got first results from Synopsys [2].
bye Thomas
[1] A bit ironic, as most of my work consists of squeezing the
smallest FPGA to the max, but in fact that's what I see all around me.
[2] The first time I used Synopsys was hard :). Today I'm experienced
enough to get some results but still lack the knowledge to use the full
potential of the tool.
I wonder if anyone will ever be able to use 100% of dc_shell if
necessary.
You are a tough group. Answering the questions in order starting with Mr.
Hays:
- Setting the bar low would make for an easy way to meet the objectives. You
can imagine that a vendor would like to do that for marketing purposes,
however, none of our customers would pay us if we just went and spit out
some "acceptable" result. They want a product that meets THEIR expectations
and provides some additional benefit over what they are doing today.
Otherwise, why change?
- The product will duplicate resources when needed depending on the
constraints. It is quite possible that having an additional multiplier to
meet the performance goal uses less area than the muxing needed to reuse the
existing ones. The product will figure that out.
- I am obviously selling Forte's products from the standpoint of marketing
them. In reality, I'm also a big proponent of the high-level synthesis
movement in general -- which is really what I'm selling. I obviously believe
we have advantages as well.
- We don't directly have a floor planner nor do we plan to. However, the RTL
code that comes out can go directly to floor planning, logic synthesis,
power estimation, etc. And we can get there in 20% of the time that it takes
you to get there by hand. (I know you don't believe me. Would you believe
the Chairman of the Board of NEC? They did a design using a behavioral
synthesis tool and the time-to-RTL savings was 80% on a design of 3x
complexity -- see: http://videos.dac.com/videos/39th/k1/k1/index.htm -- his
keynote address at DAC 2002.)
- "Perhaps you mean Handel-C?" Please excuse the typo - Neither is ideal in
my mind.
> I've been thinking about using Handel-C for the next design as it seemed
> to me to be getting close enough to a reasonable tool to be worth using.
- What advantage do you believe that you will have using any C-based flow?
Why would you move from VHDL for a language that is not a lot higher-level
than RTL? Why do you consider SystemC not close enough to the hardware? What
tolerances in QoR (performance or area) would you allow if the product
allowed you to save 3+ months in your time-to-RTL? I guess overall, what are
your expectations?
I am going to be in Munich on Wednesday; are you in that area of Germany?
And to Mr. Andraka:
- I agree with your assessment. I assure you we are not picking low-end
designs.
- I'm not convinced that this technology lowers the bar for hardware design.
"What tools like this do is lower the bar to make it more accessible for
those without the expertise."
Would you agree that you are less capable than your predecessors (or perhaps
your earlier self) who created hardware using schematics, just because you use
RTL? Using your argument this would be true, although I'm sure it is not
the case.
Has it really lowered the bar, or has it increased the ability of the same
hardware designers to create more complex designs? Could you do the complex
FPGA you are working on right now using schematics -- yes. Could you do it
in a reasonable amount of time to make money on the device -- no.
Let's keep this going, I think it is a healthy discussion... and it lets you
guys beat up the marketing guy for a while - I'll be traveling out of the
country this week, so I apologize for any delays in responses.
I'd also like to be able to talk to you one on one, please send me contact
information if you think this is appropriate. It is often difficult to get
all of the thoughts out in an email.
Best regards
Brett
noSPAM...@ForteDS.com.NOSPAM
"Thomas Stanka" <tho...@stanka-web.de> wrote in message
news:d92cdee8.02100...@posting.google.com...
You might really want to rethink this design decision.
The synthesis tools have an amazing awareness of the datapath, after
all, they are constructing the datapath itself. The benefits for
placing the datapath can range from 10-30% easily (Say it with me
"SIMULATED ANNEALING SUCKS!!!"). Simple datapath creation strategies
can include placement and create significantly impressive results.
E.g., see Tim Callahan's research for some possibilities along these
lines, http://brass.cs.berkeley.edu/~timothyc/res.html
(Oh, by the way, Tim's on the job market)
Another thing which is amazingly low-hanging but unplucked fruit in
the synthesis world: C-slow retiming and automatic repipelining. It's
a big BIG win, and not hard to do (nearly trivial if you already have
retiming in your flow). You just have to actually have the tools DO
it.
>And to Mr. Andraka:
>Would you agree that you are less capable than your predecessors (or perhaps
>yourself) who were creating hardware using schematics just because you use
>RTL? Using your argument, this would be true although I'm sure this is not
>the case.
Considering that Ray Andraka's design flow is "RTL is textually
represented gates" and the heavy use of generators in his workflow
[1], I'd say otherwise simply because he hasn't given up the detailed
specification abilities present in a schematic flow.
But that's just my take on it.
[1] Generators are very nice when thinking low level, but rather a
weakness in Verilog or schematics (where one needs a separate
program), although considerably better in VHDL.
Generators, on their own, are a bad system for inputting designs, but
are an incredibly useful addition as part of the flow, for building
most of the modules in a design. I think Xilinx really missed the
boat by not making CoreGen have a public API for people to use.
A (very) brave last sentence!
Just suppose, hypothetically of course, that
the product's decision and the designer's choice are NOT identical:
How does the designer then _control_ the product's decision flow?
How do they extract the info on what trade-offs the product made?
>
> - We don't directly have a floor planner nor do we plan to. However, the RTL
> code that comes out can go directly to floor planning, logic synthesis,
> power estimation, etc. And we can get there in 20% of the time that it takes
> you to get there by hand.
So, if someone WANTS to floor plan, how is that done, and what
happens across design-revision boundaries?
- jg
> - The product will duplicate resources when needed depending on the
> constraints. It is quite possible that having an additional multiplier to
> meet the performance goal uses less area than the muxing needed to reuse the
> existing ones. The product will figure that out.
What if it's wrong?
The C++-to-VHDL converter has no idea how long a route is, so how can it
make a wise decision on sharing or duplicating logic? Remember that the
longest path in an FPGA design is often 2/3 or more (66%+) routing
delay...
> And we can get there in 20% of the time that it takes
> you to get there by hand.
Amusing. Would this be for all designs?
> - What advantage do you believe that you will have using any C-based flow?
> Why would you move from VHDL for a language that is not a lot higher-level
> than RTL?
Handel-C might be enough better than RTL VHDL to make it worth
using. Maybe. For some designs, surely not all. And if it's not,
simulation might be enough easier to make up the shortfall. Again,
maybe. Or maybe not.
--
Phil Hays
> In every case so far, we met or
> beat the performance targets for the device. In most cases we will beat
> area as well, but not all.
That's about the most amorphous claim I've ever heard. Of what significance
is this claim? What's important is how it compares with OTHER
technologies, not some arbitrary target!
> This is simply a progression in hardware design much as a C compiler was
> a progression beyond assembly in software.
Er, huh? C is merely a software programming language; how is that a
"progression in hardware design"??? I strongly disagree with that statement,
and see no basis for it. There are already HDLs that work
reasonably well, and this is NOT a progression from instantiating things gate by
gate (assembly language), as that progression was already made long
before this.
> ...the value of time to market
> dramatically overshadows most other costs in the process.
Yes, but unreal expectations set by unrealistic claims for tools, and wasted
time trying to force-fit a tool to do something it simply can't, are not
going to get one to market faster. The thing to do is to set honest
claims and expectations, as well as honest comparisons with other
technologies. Not amorphous claims and marketing fluff.
> Clearly the design
> must meet timing - clearly it must be in some tolerance of area -- but if
> you can save 3, 6, or more months to market -- what is the savings (or gain)
> there?
If, if....if. What if the same design could have been done by someone using
another design methodology that would fit in a part for 1/3rd the cost?
> For the "Show Me" part, let's talk about that - the proof is in real
> results.
Yes, the key word is "real". I know this sounds a bit nasty, but what do
you expect with all the snake oil that has been peddled over the years with
HDLs, much less now with stuff like this. Granted, HDLs are now a viable
design entry methodology, but...they have always been designed TO be
hardware design entry tools, and lots of time has passed for them to become
refined to the point they are today.
Austin
We bang on about QoR, size, area, performance... so why don't we use
schematic capture and layout, which would give us serious efficiency?
Probably because multi-million gate designs, or even multi-hundred
thousand gate designs, would take an eternity to complete, and the
product would be defunct before it got anywhere near the market.
The time-to-market, productivity and cost issue drove the development
of VHDL as a simulation language... the level of design abstraction was
raised. It wasn't designed for implementation (only a subset is
implementable), and that's why RTL was then developed.
HLLs are just another increase in the level of design abstraction and
their 'C' base makes them attractive for system or SoPC design. Let's
get talking the same language here. It's not difficult to implement
parallelism in a software language.
Handel-C and SpecC use the 'par' construct developed through CSP
(Communicating Sequential Processes), a formal notation for describing
concurrent, communicating processes. Check out CSP and Hoare on Google for more
information. (Hoare developed CSP at Oxford University.) I don't
claim to be an expert, but this work and theory has stood the test of
time. It has been discussed and critiqued, has remained a proven
theory, and is widely accepted.
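For flavour, here is a minimal sketch of those CSP-derived primitives as they appear in Handel-C (untested; identifiers invented): par forks the processes, and a chan provides occam-style rendezvous communication between them, with ! to send and ? to receive.

```
// Hypothetical Handel-C sketch of CSP-style communication between
// two parallel processes over a rendezvous channel.
chan unsigned int 8 link;            // 8-bit channel
unsigned int 8 sample, result;

par {
    link ! sample + 1;               // producer: blocks until value taken
    link ? result;                   // consumer: receive into result
}
```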
Why use a C-based language? Taking a spec and re-writing it into an HDL
is time-consuming and can be error-prone. It's easier to migrate a
spec to an HLL, and it certainly helps if you have libraries of legacy C
IP in your company - encryption, compression, protocol stacks - things
that can benefit from being in hardware.
And the more you think about it, when we optimise our design for area,
speed, efficiency, if it is a part of a system how do we know the
partition we are optimising is correct? How did we test and verify
the design partition? It's not good engineering practice to spend
many months of time, effort, dollars and expertise optimising
something that from the outset is not optimal.
HLLs can help at the partitioning stage.
Fundamentally there is no one right or wrong way to design hardware.
Look at your design requirements, what your customer wants and what is
important to them, then choose your methodology, language and tool flow
- be that an HDL, an HLL, block-based design or schematic
capture.
Things are changing; we either influence the direction of that change
or stand by and watch it happen.
Noel
Martin Thompson <martin.j...@trw.com> wrote in message news:<u3crmd...@trw.com>...
> We bang on about QoR, size, area performance...why don't we use
> schematic capture and layout, this will give us serious efficiency?
> Probably because multi-million gate designs, or even multi-hundred
> thousand gate designs would take an eternity to complete and the
> product would be defunct before it got anywhere near the market.
That's simply not true. The Alpha CPUs were designed using schematic
capture, as well as many other multi-million gate designs. It IS entirely
possible to design multi-million gate designs with schematics equally as
fast and obviously far more efficiently with schematics, IF you know how to
use schematics. You don't draw things by the gate; you use a library of
hierarchical symbols, and it's basically drag and drop.
Austin
I was of the same opinion too, but eventually the C model still had to
be rewritten in Verilog to synthesize it. I specifically used an RTL C
style of coding in instances so the Verilog RTL was almost a 1 to 1
map. Problem was the C code looked bloody ugly, hierarchy was very
difficult to manage, & was manually levelised and ended up being a
very poor man's Verilog. Worse still, all the revisions were done to
the Verilog & not all the way back to the C, as it was often quite
difficult to express in poor man's C what is easy in HDL. At the end of
the road I even had the C env auto magically spit out the
corresponding Verilog using lots of ugly macros that did both
simulation & equiv HDL code output. This was maybe ok in the gate
level world, lots of companies did this too!
I got real tired of doing that flow. The cleverer the tool got at being
more like Verilog, the slower the sims became. In the end I junked it
& wrote a direct RTL subset of Verilog to C compiler. NOW I can model
in RTL Verilog AND I have the speed of C in simulation. There is a
penalty, the backend doesn't optimize as well as it could, but it is
only a few times slower than raw C. Expect to fix that one day, gee
might even add x86 raw output since passing the C code to VC6 tends to
really burden it (100k lines in a fn() will kill most C compilers). I
call it Vpp (like the old Cpp from Bell Labs) or sometimes V2C. One
day might put it out under open source (BSD/MIT) if there is enough
interest.
Other than that, I have to agree with Ray & others, there is some room
for HandelC, Occam, StreamsC, CSP etc for Reconfigurable Computing
with a FPGA board to speed up a PC, but to create an actual ASIC or
FPGA product, I am much more skeptical. I would expect to see a
performance penalty of at least a decade or so but the more the C app
looks like a DSP flow, the better the result will close in to hand
RTL. I have seen the Celoxica demos & I was quite impressed with the
Venus de Milo show at the Xilinx VirtexPro fair last spring.
Now I would like to draw your attention to CSP or Occam. 40 yrs ago
Tony Hoare was a real live EE & did design work in RTL (Resistor
Transistor Logic hehehe). He later moved into SW (at Oxford U) & later
became world famous for his work in Par programming.
In v1 of his work, I believe (& so does he now) that he & others got
it all wrong, hence this is why the SW world has such a hard time with
Par programming since most SW people work with spinlocks, P & V,
deadlocks, mutexes, semaphores etc. These have nothing to do with HW
abstraction so they don't help make that Par code any use to HW
design. It is also why Java & even C# with threads are thread-unsafe &
why that kind of Par programming is extremely hard, when HW guys find
Par so easy but Seq much harder.
In v2 of his work only a few years later (1968 IIRC), he must have
sobered up & remembered he was once a HW guy. CSP & esp Occam is an HDL
(or PDL) using processes & messages to describe logic. Channels in
Occam are essentially equiv to wires connecting the outputs of one
process to the inputs of another. Now the Transputer executed Occam
programs in a similar way to how a HDL simulator works. It included a
round robin scheduler to effect the par execution of all the par
expressions, & shuffling the outputs back to inputs (eventually). At
Inmos, Occam was even used to write a simulator & to describe the T800
FPU.
So I thought I might also mention I am working on a modern Transputer
called T2 for FPGA using Vpp etc; lots of work still to do. Perhaps
next yr more news. Now with a Transputer in FPGA, you really could
array them in the bigger FPGAs and benefit from the near linear
increase in performance, since using lots of Transputers was a lot like
partitioning a big HW block into smaller blocks with the interconnect
timeshared over the channels/links.
I have now come to the conclusion that Transputing is the soft side of
RC & FPGAs are the hard side, both complementary and somewhat
interchangeable. Now if you put a Transputer or 2 (or more) in FPGA,
it is not only the ideal embedded cpu, but it makes it easy to divvy up
the project into HW (FPGA) & SW (T/Occam/.. code).
my 2c
Interesting post.
What's the status of Transputer tools ?
Don't STm still have some flavour of the Transputer targeting
the set-top-box market ?
- jg
> > We bang on about QoR, size, area performance...why don't we use
> > schematic capture and layout, this will give us serious efficiency?
> > Probably because multi-million gate designs, or even multi-hundred
> > thousand gate designs would take an eternity to complete and the
> > product would be defunct before it got anywhere near the market.
>
> That's simply not true. The Alpha CPUs were designed using schematic
> capture
... by a large building full of designers.
--
Phil Hays
readable/editable without special tools.
parameterizable, means smaller libraries.
elaborate testbenches. HDL can be treated like a programming language
for simulation only stuff.
ability to simulate parts of design with behavioral models to allow
full simulation before design is done.
portable across tools, at least to some degree.
I do find that it takes longer to grok a design, especially one that
someone else did if it is presented in an HDL than in a well organized
schematic, but that's just because I work better with visuals than with
text. I've seen some really good HDL and schematics, and some gawdawful
examples of both as well. If good discipline is followed, I still believe
that a schematic design can be completed in the same time frame as an HDL
design, especially if you've already amassed a decent library of commonly
used pieces. Heck, we do that with the HDLs now for much of the high
performance stuff...basically using generates to create a textual netlist.
Many times, it is less work to do that than to do the push on a rope trick
to get the synthesizer to produce what you want from an RTL description.
In those cases, the main advantage of the HDL is the parameterization
afforded.
Phil Hays wrote:
--
> elaborate testbenches. HDL can be treated like a programming language
> for simulation only stuff.
Why is this precluded for schematics?
> ability to simulate parts of design with behavioral models to allow
> full simulation before design is done.
Again, why is this precluded for schematics?
> I do find that it takes longer to grok a design, especially one that
> someone else did if it is presented in an HDL than in a well organized
> schematic, but that's just because I work better with visuals than with
> text.
It's because you can't see data flow in text. Most everyone simply draws
out the data path via a block diagram, even if it's your own HDL or someone
else's.
And of course the MAJOR benefit of schematics (or netlisting using HDLs), is
you KNOW what the tools are going to give you for output, and you don't have
to fight with them NEAR as much to make things fit, and make timing.
The ideal methodology, IMO, is mixed schematic (for data path) and HDL for
control logic...but I've seen some very good schematic state machine designs
(and libraries can be used here as well) that no HDL could come close to.
Austin
The toolset was adapted in to the ST20 toolset, but I don't think anyone
at ST supports Occam anymore. The transputers have been end-of-lifed.
>
> Don't STm still have some flavour of the Transputer targeting
> the set-top-box market ?
>
The ST20 (formerly the T450) is in a number of chips serving the STB and
DVD markets. However, it doesn't really have any of the things that made
it a Transputer anymore (i.e. links and channels), although it has a
similar instruction set and register architecture.
At least, that's where things were in 2000 when I left ST.
-- Andrew MacCormack and...@cadence.com
-- Senior Design Engineer Phone: +44 1506 595360
-- Cadence Design Foundry http://www.cadence.com/designfoundry
-- Alba Campus, Livingston EH54 7HH, UK Fax: +44 1506 595959
Ram's site and Wotug have archives of the original Transputer tools
IIRC; I don't know what ST uses for current ST cores. The Transputer
cores are now embedded cores, the ISA is still the same but all the
Inmos technical talk of processes and Occam is all gone. ST only
describes the ISA in the blandest way. Only one link is left, which
almost defeats the idea of multiple cpus; they did add lots of more
generic serial ports instead. It was morphed down into a dumb ass C
style cpu.
IIRC ST does use it in 70% of set top boxes.
No on-chip channels! They must have changed the instruction set.
And do they still have the ROM kernel?
Austin Franklin wrote:
> Hi Ray,
>
> > elaborate testbenches. HDL can be treated like a programming language
> > for simulation only stuff.
>
> Why is this precluded for schematics?
Schematics require you to come up with a circuit for the test bench. If the
testbench is done as a programming language you have considerably more
flexibility. When I was doing schematics, we often did testbenches in VHDL, but
it required additional tools to do it, and was at times awkward.
>
>
> > ability to simulate parts of design with behavioral models to allow
> > full simulation before design is done.
>
> Again, why is this precluded for schematics?
>
> > I do find that it takes longer to grok a design, especially one that
> > someone else did if it is presented in an HDL than in a well organized
> > schematic, but that's just because I work better with visuals than with
> > text.
>
> It's because you can't see data flow in text. Most everyone simply draws
> out the data path via a block diagram, even if it's your own HDL or someone
> else's.
>
> And of course the MAJOR benefit of schematics (or netlisting using HDLs), is
> you KNOW what the tools are going to give you for output, and you don't have
> to fight with them NEAR as much to make things fit, and make timing.
I do this for critical and placed stuff in HDLs using a library of generated
instances. As you know, I had fine tuned my schematic entry to be able to turn
around designs quickly using a rather extensive library. The same common
components written with generate statements encapsulating primitives works fine
in VHDL and gives the same degree of control as I had with schematics. The big
win with VHDL is I have written those components so that they are parameterized
to generate exactly what is needed for each instance from a single library
design. The advantage is if I make a change to the macro, it only gets changed
in one place, which is not necessarily true with schematics (using 2 bit slices
for arithmetic, it is almost true, but you still have the special cases at the
start and end of a carry chain). The parameterization includes options for
layout, assignment to different device families (RLOC format for example),
automatic signed/unsigned extension, automatic selection of reset vector values
with the proper FDRE/FDSE etc. These are things that were a little awkward with
schematics, and are very easy to do with the HDL generates.
>
>
> The ideal methodology, IMO, is mixed schematic (for data path) and HDL for
> control logic...but I've seen some very good schematic state machine designs
> (and libraries can be used here as well) that no HDL could come close to.
Yes, you've probably seen my schematic flow chart state machines too. They are
very readable compared with HDL, and just as easy to edit.
The main reason for going to HDL, however (at least in my mind) is to maintain a
more or less mainstream tools flow, which seems to be pretty important to my
customers. Schematic entry is considered by most to be an archaic design entry
method (not that I agree, but the fact is that is the prevailing attitude). By
moving to HDLs a few years ago, I kept from locking myself out of many
customers.
So will I be seeing you in San Jose tomorrow? If so, we can discuss this in
person.
>
>
> Austin
I don't understand. What tools are you talking about? I simply run my
testbench, written in HDL, same as I would for any design even if the design
is in HDL, and get the same output waveforms...or better yet, it displays
the actual signals on the schematics.
> > And of course the MAJOR benefit of schematics (or netlisting using
> > HDLs), is you KNOW what the tools are going to give you for output, and
> > you don't have to fight with them NEAR as much to make things fit, and
> > make timing.
>
> I do this for critical and placed stuff in HDLs using a library of
> generated instances. As you know, I had fine tuned my schematic entry to
> be able to turn around designs quickly using a rather extensive library.
> The same common components written with generate statements encapsulating
> primitives works fine in VHDL and gives the same degree of control as I
> had with schematics.
What about placement? Problems I've had were the tools didn't allow the use
of consistent names when a change was made to either the
design or the toolset.
> The big win with VHDL is I have written those components so that they
> are parameterized to generate exactly what is needed for each instance
> from a single library design.
And what precludes you from doing that with schematics? Did you ever see
Philip's tool for generating schematic elements?
> The advantage is if I make a change to the macro, it only gets changed
> in one place, which is not necessarily true with schematics (using 2 bit
> slices for arithmetic, it is almost true, but you still have the special
> cases at the start and end of a carry chain). The parameterization
> includes options for layout, assignment to different device families
> (RLOC format for example), automatic signed/unsigned extension, automatic
> selection of reset vector values with the proper FDRE/FDSE etc. These
> are things that were a little awkward with schematics, and are very easy
> to do with the HDL generates.
Hum, I don't find them awkward at all with schematics, but do with HDLs...
> > The ideal methodology, IMO, is mixed schematic (for data path) and HDL
> > for control logic...but I've seen some very good schematic state machine
> > designs (and libraries can be used here as well) that no HDL could come
> > close to.
>
> Yes, you've probably seen my schematic flow chart state machines too.
> They are very readable compared with HDL, and just as easy to edit.
Sorry, haven't seen them.
> The main reason for going to HDL, however (at least in my mind) is to
> maintain a more or less mainstream tools flow, which seems to be pretty
> important to my customers.
Mainstream? Not really. Synplify may be the "tool du jour", but I don't
see that as being any better than schematics, though you are locked to a
single vendor with schematics, no doubt. Also, as you know, every damn
revision of these HDL compilers generates different code...which wreaks
havoc on some designs.
> Schematic entry is considered by most to be an archaic design entry
> method (not that I agree, but the fact is that is the prevailing
> attitude).
It is the prevailing attitude amongst "young" people who know no better,
yes. Unfortunately, this abysmal "ignorance" has prevailed with respect to
programming...not that I'm advocating assembly programming, as current C
compilers are far far far better than current HDL compilers.
> By moving to HDLs a few years ago, I kept from locking myself out of
> many customers.
I agree. Designs I do for my own projects, I do in schematics...simply
because it keeps the parts cost down, ups the speed significantly...and I
don't have to wrestle with the tools. I do mostly HDL work for clients now,
as for misbegotten reasons, they believe it saves them time and money...when
in every instance, it absolutely, unquestionably does not.
> So will I be seeing you in San Jose tomorrow? If so, we can discuss
> this in person.
No, sigh...I am unable to make it, but I was assured by Philip that you
would defend the fort better than either of us would ;-)
Regards,
Austin
Phil Hays wrote:
> Austin Franklin wrote:
>>That's simply not true. The Alpha CPUs were designed using schematic capture
> ... by a large building full of designers.
Who no longer work for Digital Equipment Corp.
-- Mike Treseler
Yea. But fairness requires me to point out that schematic entry wasn't
the reason why DEC failed.
--
Phil Hays
Thanks for the laugh, Phil. I never even thought of that reply in that way
;-)
The story of what happened to "some" of the Alpha developers is they "went"
to Intel, and worked on the subsequent Pentium processors. It was alleged
(and very well founded in my opinion) that technology that was developed
(and patented) by Digital for the Alpha was subsequently "used" in the
Pentium designs. A lawsuit ensued.
The disposition of these allegations was never legally determined, because
the lawsuit was settled out of the courts...in a rather bizarre (in my
opinion, at least) way.
Austin
I didn't the first time as well. Glad to be of service, Austin.
--
Phil Hays
Austin Franklin wrote:
> I don't understand. What tools are you talking about? I simply run my
> testbench, written in HDL, same as I would for any design even if the design
> is in HDL, and get the same output waveforms...or better yet, it displays
> the actual signals on the schematics.
My viewlogic license is only enabled for viewsim and viewdraw. It doesn't
support the viewlogic VHDL, that was extra. Frankly, the Viewlogic VHDL is quite
poor compared to Modelsim or Aldec.
> What about placement? Problems I've had were the tools didn't allow the use
> of consistent names when a change was made to either the
> design or the toolset.
I haven't had many problems with placement, at least not with stuff instantiated
in the code (that is one of the major reasons I do as much structural
instantiation as I do). Only inferred logic changes its names, instantiated
logic generally does not. Inferred flip-flops generally take on the name of the
output net, so there is no problem floorplanning using inferred flip-flops. The
LUTs do tend to get random names, so they are not as easy to deal with in
floorplanning, but then if you do your design with one level of logic, the
mapper packs them with the flip-flops anyway.
> And what preculdes you from doing that with schematics? Did you ever see
> Philip's tool for generating schematic elements?
Yes, I did, and it is a very nice tool too. I never did get my own copy because
just as he came out with it I was in the middle of a transition to HDLs. HDLs
give you that capability without having to obtain an add on tool.
>
>
> > The advantage is if I make a change to the macro, it only gets changed
> > in one place, which is not necessarily true with schematics (using 2 bit
> > slices for arithmetic, it is almost true, but you still have the special
> > cases at the start and end of a carry chain). The parameterization
> > includes options for layout, assignment to different device families
> > (RLOC format for example), automatic signed/unsigned extension,
> > automatic selection of reset vector values with the proper FDRE/FDSE
> > etc. These are things that were a little awkward with schematics, and
> > are very easy to do with the HDL generates.
>
> Hum, I don't find them awkward at all with schematics, but do with HDLs...
For example, say you have (this is from a design that I did with schematics) a
bank of 128 129-bit LFSRs. Each is identical except it has a different reset
value. With an HDL, you can construct one parameterized module that generates
the proper combination of sets and resets without ever having to look inside the
module, then you can instantiate those 128 modules in a generate statement that
indexes a constant array (probably in a package in another file so that you
never have to modify the source even if you change the constants) to
parameterize the initial values of each module. If I want to change the initial
values, I just edit the list of initial values, which by the way can be
expressed as binary, decimal, hex, octal or any mix of those you like. With
schematics, you need to generate each module using FDREs and FDSEs. As I
recall, Philip's tool didn't do this readily, and it certainly didn't read the
init values out of a common file. Similarly, I can set up filter coefficients
for distributed arithmetic filters as a naturally ordered list of coefficients
in a separate package. My VHDL filter is parameterized to read the coefficients
from a file, process them (with a procedure) to create the init values for the
DA LUTs, and then build the filter including placement. The code is
parameterized for the coefficient width, filter add tree width, bits per clock,
length of the filter etc, and I never have to go inside that module to modify
anything. It took a while to build the library to where I was as productive
with VHDL as I was with schematics, but I am now well past there.
> Mainstream? Not really. Synplify may be the "tool de Jour", but I don't
> see that as being any better than schematics, though you are locked to a
> single vendor with schematics, no doubt. Also, as you know, every damn
> revision of these HDL compilers generates different code...which wreaks havoc
> on some designs.
Depends on the definition of mainstream, I suppose. Where I come from,
mainstream means that which most people are doing, not that which is 'best'.
When all of my customers are asking for an HDL design flow, I think that can be
described as the mainstream. Schematic entry for FPGA design, like it or not,
can't really be considered mainstream anymore, at least by the definition of
mainstream I am familiar with. You can get around the variations between
compilers by using structural generation for the critical parts of your design.
We do it in a large percentage of each of our designs. As an indicator, I spend
far more time tweaking things for PAR than I do for getting the synthesis to
turn out what I want.
>
> I agree. Designs I do for my own projects, I do in schematics...simply
> because it keeps the parts cost down, ups the speed significantly...and I
> don't have to wrestle with the tools. I do mostly HDL work for clients now,
> as for misbegotten reasons, they believe it saves them time and money...when
> in every instance, it absolutely, unquestionably does not.
I'm not sure whether an HDL will save money or not. Because my library is far
more parameterized than I was able to achieve with schematics, and because I
have the option of structural construction where it matters or RTL level coding where
things are not as critical, my design capture is perhaps a little bit shorter
than it was with schematics. I have seen tremendous gains in the simulation
however because the sophistication of the testbenches is much higher.
>
>
> > So will I be seeing you in San Jose tomorrow? If so, we can discuss
> > this in person.
>
> No, sigh...I am unable to make it, but I was assured by Philip that you
> would defend the fort better than either of us would ;-)
Depends which fort. I don't defend the schematic fort anymore. I got out of
there right before it burned down around me. As for using a mix of schematics
and HDLs, I find that more awkward than using either...it means maintaining two
libraries, proficiency on additional tools and customers griping louder because
of more tools needed to support a design. You missed a good meeting. They
were very receptive and have been following up this week (which is something
we didn't see before).
It appears you are using your HDL as, more or less, a netlister. I have no
problem with parameterized modules, that's the way this stuff SHOULD be
done, whether HDL or schematic. It's above that level that I really care
about...and personally, I believe schematics (or some graphical interface)
is far more functional for most humans to understand/work with.
If the HDL synthesizers weren't 10-30 times the cost of the schematic
programs, I'd buy part of the argument about tools...and with every new
synthesizer, you have to spend time learning it...though the HDL may be
portable, the tool operation (and results) isn't.
One of the main reasons customers have been asking for HDL designs is
because that's what they are being told to do...by who else, the tool and
FPGA vendors. It's supposedly a selling point, though until recently
it really didn't work well at all. For about 5 years I was getting a call
per week about how a client was told by "someone" (tool vendor or FPGA
vendor) how they had to use synthesis, it would solve all their
problems...and so they did...and their design wouldn't get above 10MHz, and
wouldn't fit in the part...and I was asked to fix the problem. As you and I
have pointed out many times, a tool is only as good as the person using it
can make it, and the point is, though you can do netlist design in HDL, the
typical user of HDL will not be doing so, and as such, is relying on tool
"efficiency" for logic density and speed. Fortunately for the "mainstream"
HDL user, the FPGAs are a lot faster and denser, so it hides the foibles of
HDL, and the tools have gotten a LOT better in the past three years.
To be able to match schematics ability to do critical logic mapping and
placement is relatively new to HDL, and yes, that does make it able to match
schematics in performance and density ability (for the netlisted logic that
is), as you really are not using the synthesizer to do any synthesizing...I
guess that's a good thing.
I also believe this ability is somewhat tool specific? Are the mapping and
placement abilities of HDLs cross tool abilities? Will the highly mapped
and placed HDL code used for Synplify work with FPGA Express? That, of
course, is a major issue in touting portability if it is not, as you ARE
relying on a single vendor for the tool, just like you are with schematics,
though not as entirely...as the code can be massaged to "work", but not
necessarily as well as it would with the tool it was intended for.
Regards,
Austin
"Ray Andraka" <r...@andraka.com> wrote in message
news:3DD2F558...@andraka.com...
> I also believe this ability is somewhat tool specific? Are the mapping and
> placement abilities of HDLs cross tool abilities?
Is there a schematic file format that crosses tool boundaries?
-- Mike Treseler
I believe there are some schematic translators, and possibly EDIF, but not
really seamlessly, as it is an acknowledged deficiency of schematic tools.
That was the point I was making about HDLs, that, depending on how much
"tool specific" things your design contains/relies on, that then becomes the
same issue to some degree.
Austin
Austin Franklin wrote:
> Ray,
>
> It appears you are using your HDL for, more or less, a netlister. I have no
> problem with parameterized modules, that's the way this stuff SHOULD be
> done, whether HDL or schematic. It's above that level that I really care
> about...and personally, I believe schematics (or some graphical interface)
> is far more functional for most humans to understand/work with.
Yes, the point is I have the option to use it as a netlister for stuff I want
to tightly control (typically data path) or to use a higher level of abstraction
(RTL) for the stuff that I am not as fussy about such as my control signals. An
RTL level description can be made to preserve duplicated registers and whatnot
with extra attributes. Typically, these attributes might differ a bit between
tools, but then where they do there is no overlap and you can include attributes
for all tools. The ones not used by a tool are generally ignored.
While an HDL can do the netlisted stuff (and like I said, I do a good amount of
that to force the construction of my data paths), its real value is to those
who for whatever reason don't want to design at the device level. You can get
away with a fairly gross description and get a working circuit, albeit not
usually as dense or as fast as a netlisted design, but certainly functional
without having to get into the details of the part. (Gawd, I never thought I'd
be saying those words).
The netlisting is not tool specific as long as you use primitives out of the
device library (e.g. unisims) and then put them together. Even the placement
does not have to be tool specific provided the tool supports user attributes by
passing user attributes through to the edif netlist (Synplicity, Leo, Mentor
Precision, XST all do that, FPGA express doesn't do it too well last I looked).
Where you get some tool specificity is when you infer logic, particularly LUTs
and then encapsulate them to create a LUT. In many cases, you can get away with
inferring the LUT logic and letting the placer worry about putting it with the
flip-flop. In cases where you can't you can either use the tool specific
constructs (Synplify's is xc_map) to make a LUT out of inferred logic, or you
can write a function that converts a boolean string to an INIT attribute for a
LUT primitive (or you could compute and enter the LUT init string manually, but
I don't recommend it...not too readable and very prone to mistakes). In any
case, the modifications to get a structural design to work under different tools
is very minimal. It is more work to get an RTL level design where you did some
of the pushing on a rope routine to make it do what you wanted to port to
another tool. For portability, it is that middle ground that provides the most
resistance, not the RTL just want it functional designs and not the structurally
instantiated ones.
Austin Franklin wrote:
--
The point was that "thwarting" was highly probable with mapping and placement
when using HDLs; I don't believe I said file formats had anything to do with
the aforementioned "thwarting"?
Austin
"Ray Andraka" <r...@andraka.com> wrote in message
news:3DDABEA8...@andraka.com...
I know my structural code works properly under synplify, precision, leo and XST
with at most an attribute change (one or two lines per file, and you can use the
replace in all files in Aldec to do it in one fell swoop). I can also put the
attributes for all of these tools in, because they are non-interfering. It
isn't the structural code with the associated placement and mapping that causes
a problem, rather it is the RTL code that has been 'rope-pushed' that causes a
problem. That is part of the reason I opt for structural code when things are
critical. For example, this works in any of the above with a renaming of the
syn_translate off/on to the appropriate pragma in that tool. The rest of the
attributes are user attributes, which as long as your synth supports them is
portable. All the majors do now. This code snippet places SRL16's and FDRE's
in an array.
L:for i in 0 to width-1 generate
constant y:integer:= (btoi(row_limit/=0)*(i mod (row_limit+1)) +
btoi(row_limit=0)*(i/(col_limit+1)))*row_pitch;
constant x:integer:= (btoi(col_limit=0)*(i/(row_limit+1)) +
btoi(col_limit>0)*(i mod (col_limit+1)))*col_pitch;
constant xy_str:string:= "x"&itoa(x) & "y" & itoa((y/2)-origin);
constant k: integer:= x - col_pitch + (1-first_slice)*(1-2*btoi(col_pitch<0));
constant rc_str:string:= "R"&itoa(origin-(y/2)) & "C"&itoa(k/2) &".S" &
itoa((k+1) mod 2);
constant rloc_str : string := pickstring(virtex,rc_str,xy_str);
signal ds,qr,qs: STD_LOGIC;
attribute INIT of U0 : label is "AAAA"; -- INIT= attribute to pass to PAR through Synplicity
attribute BEL of U0:label is bel_lut(y mod 2);
attribute BEL of U1:label is bel_ff(y mod 2);
attribute BEL of U2:label is bel_ff(y mod 2);
attribute RLOC of U0 : label is rloc_str;
attribute RLOC of U1 : label is rloc_str;
attribute RLOC of U2 : label is rloc_str;
begin
U0: SRL16E
--synthesis translate_off
generic map ( -- INIT generic is for the simulation model, not seen by Synplicity or PAR
INIT => X"AAAA") -- int2bit_vec(lut_init,16)
--synthesis translate_on
port map (
CLK => clk,
CE => weq,
A0 => dx0(i),
A1 => dx1(i),
A2 => dx2(i),
A3 => dx3(i),
D => wdq,
Q => ds);
U1: FDRE port map (
Q => qx(i),
D => ds,
R => lcl_rst,
CE => lcl_ce,
C => clk );
end generate L;
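The snippet above leans on a few small helper functions (btoi, itoa, pickstring) that were not posted. A minimal sketch of what they presumably look like, assuming the obvious semantics implied by how they are used:

```vhdl
-- Hypothetical reconstructions of the helpers used in the snippet above;
-- the originals were not posted, so these only illustrate the assumed
-- semantics, not Ray's actual library code.

-- btoi: boolean to integer, so a condition can participate in arithmetic
function btoi (b : boolean) return integer is
begin
    if b then return 1; else return 0; end if;
end function;

-- itoa: integer to string, for building RLOC/BEL attribute values
function itoa (n : integer) return string is
begin
    return integer'image(n);
end function;

-- pickstring: select one of two strings on a boolean
-- (e.g. row/column style RLOCs for Virtex vs. x/y style for Virtex-II)
function pickstring (sel : boolean; a, b : string) return string is
begin
    if sel then return a; else return b; end if;
end function;
```

These would typically live in a shared package so the same placement arithmetic can be reused across library elements.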
Austin Franklin wrote:
> Hi Ray,
>
> The point was that "thwarting" was highly probable with mapping and placement
> when using HDLs; I don't believe I said file formats had anything to do with
> the aforementioned "thwarting".
>
> Austin
>
> "Ray Andraka" <r...@andraka.com> wrote in message
> news:3DDABEA8...@andraka.com...
> > But not nearly to the same degree. For the most part, a design will be
> > functional under any tool you compile it with. It is usually the attributes,
> > and in the case of some of the cheaper tools, unsupported language constructs
> > that thwart portability, not file formats.
> >
I don't think you do disagree, much less heartily ;-)
> Structural code, which is required for placement and mapping in the code, is
> probably the most portable way to write code in VHDL.
Agreed, I never said any differently.
> The same would be true with schematics if the tools shared a common file format,
> but they don't. With HDLs the file format is common, so that is not an issue
> here.
Exactly, and I never said any differently.
> ... I can also put the
> attributes for all of these tools in, because they are non-interfering.
Yes, but my point is they are tool specific. That was it, it's the only
point I was making, and you apparently agree with that.
Regards,
Austin
This part of the discussion started with your asking about, and positing that
mapping and placement in a structural design makes the code tool-specific. My
argument is that it does not, and in fact makes the code more portable than
'rope-pushed' RTL. The vendor specific attributes have nothing to do with
place and map for the most part (although some do have vendor equivalents,
xc_map and xc_rloc in synplify, for example, which I don't use because a) they
are not portable, b) they had been broken for a while, c) they are not as
flexible as the user attributes).
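For completeness, user attributes such as RLOC and BEL have to be declared before they can be attached to labels. A minimal sketch of the declarations the earlier snippet assumes (hypothetical, since the original post did not show them; they would typically sit in the architecture declarative region or a shared package):

```vhdl
-- Hypothetical attribute declarations; the original post omitted them.
-- With tools that pass user attributes through (Synplify, Precision,
-- Leonardo, XST), these strings reach the netlist unmodified and are
-- interpreted by the Xilinx place-and-route tools, which is what makes
-- this style tool-independent.
attribute RLOC : string;  -- relative location, e.g. "R0C0.S0" or "x0y0"
attribute BEL  : string;  -- basic element within the slice, e.g. "F" or "G"
attribute INIT : string;  -- LUT/SRL initialization value passed on to PAR
```

Because these are plain user attributes rather than vendor pragmas, every synthesizer that supports user attributes emits them identically, which is the crux of the portability argument above.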
For reference, you started this part of the discussion with:
"I also believe this ability is somewhat tool specific? Are the mapping and
placement abilities of HDLs cross tool abilities? Will the highly mapped
and placed HDL code used for Synplify work with FPGA Express? That, of
course, is a major issue in touting portability if it is not, as you ARE
relying on a single vendor for the tool, just like you are with schematic,
though not as entirely...as the code can be massaged to "work", but not
necessarily as well as it would with the tool it was intended for."
Austin Franklin wrote:
>
>
> > ... I can also put the
> > attributes for all of these tools in, because they are non-interfering.
>
> Yes, but my point is they are tool specific. That was it, it's the only
> point I was making, and you apparently agree with that.
>
> Regards,
>
> Austin
"Ray Andraka" <r...@andraka.com> wrote in message
news:3DDD2E65...@andraka.com...
> But it is not the placement attributes that are tool specific. If you do those
> with user attributes, like my example, those do not change from tool to tool,
> and you get EXACTLY the same result regardless of the tool, provided your tool
> supports user attributes.
> It is the attributes for RTL level stuff, things
> like keep buffers, preserving inferred registers and what not that can
> differ... basically the synthesis directive type attributes.
Understood. It's that the overall issue of tool dependency still exists
(and may always) even with HDLs, to some degree, and of course, depends on
your level of use.
> This part of the discussion started with your asking about, and positing, that
> mapping and placement in a structural design makes the code tool-specific. My
> argument is that it does not, and in fact makes the code more portable than
> 'rope-pushed' RTL. The vendor specific attributes have nothing to do with
> place and map for the most part (although some do have vendor equivalents,
> xc_map and xc_rloc in synplify, for example, which I don't use because a) they
> are not portable, b) they had been broken for a while, c) they are not as
> flexible as the user attributes).
The issue, at least in my book, is repeatability and predictability,
irrespective of the cause. If you don't instantiate/netlist etc. your design
(which almost no one does), you won't get repeatable (or predictable) results
between tool vendors and even tool revisions, or even across changes to your
own code. For some people, that isn't an issue. For some, it is a very big
issue.
Austin
We do a mix of RTL and structural in our designs. The data path is generally
placed structural logic comprised of library elements for common functions. The
control is generally carefully executed RTL (combinatorial terms between ff's
kept simple enough to be mapped into one or two logic layers).
Austin Franklin wrote:
>
>
> The issue, at least in my book, is repeatability and predictability,
> irrespective of the cause. If you don't instantiate/netlist etc. your design
> (which almost no one does), you won't get repeatable (or predictable) results
> between tool vendors and even tool revisions, or even across changes to your
> own code. For some people, that isn't an issue. For some, it is a very big
> issue.
>
> Austin