I'd rather switch than fight!

rickman

Apr 9, 2010, 10:07:46 AM
I think I have about had it with VHDL. I've been using the
numeric_std library and eventually learned how to get around the
issues created by strong typing, although it can be very arcane at
times. I have read about a few suggestions people are making to help
with some aspects of the language, like the selection operator that
Verilog has. But it just seems like I am always fighting some aspect
of the VHDL language.
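
To make the complaint concrete, here is a minimal sketch (hypothetical
signal names) of the conversion ceremony numeric_std imposes on an
addition that Verilog writes as a one-liner:

  library ieee;
  use ieee.std_logic_1164.all;
  use ieee.numeric_std.all;
  ...
  signal a, b, sum : std_logic_vector(7 downto 0);
  ...
  -- cast to unsigned to add, then cast back to assign
  sum <= std_logic_vector(unsigned(a) + unsigned(b));
  -- the Verilog equivalent is simply:  assign sum = a + b;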

I guess part of my frustration is that I have yet to see where strong
typing has made a real difference in my work... at least an
improvement. My customer uses Verilog and has mentioned several times
how he had tried using VHDL and found it too arcane to bother with.
He works on a much more practical level than I often do and it seems
to work well for him.

One of my goals over the summer is to teach myself Verilog so that I
can use it as well as I currently use VHDL. Then I can make a fully
informed decision about which I will continue to use. I'd appreciate
pointers on good references, web or printed.

Without starting a major argument, anyone care to share their feelings
on the differences in the two languages?

Rick

Andy

Apr 9, 2010, 10:50:46 AM
Before the fixed and floating point packages came out, I would have
said there is little difference in RTL capabilities between Verilog
and VHDL. But those two packages revealed a fundamental strength of
VHDL that simply does not exist in Verilog. By simply writing a new
package, a whole new capability was created that would have taken a
substantial language change in Verilog. Yes, the as-released packages
took advantage of features only available in a related change to the
language itself, but the "compatibility" packages ably demonstrate
that the concept is viable even within the confines of the original
language, thus demonstrating the true strength of the basic language
of VHDL.
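
As a rough illustration (a sketch using the VHDL-2008 fixed_pkg;
pre-2008 tools use the ieee_proposed compatibility versions):

  use ieee.fixed_pkg.all;  -- VHDL-2008; ieee_proposed for older tools
  ...
  signal a, b : sfixed(3 downto -4);  -- 4 integer bits, 4 fraction bits
  signal p    : sfixed(7 downto -8);  -- product size: (a'left+b'left+1 downto a'right+b'right)
  ...
  p <= a * b;  -- full-precision product; a wrong-sized target is an error, not a silent trim

All of that is ordinary library code; no change to the core language
was needed to define the types or their sizing rules.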

Not that the fixed/floating point packages are nirvana, but they do
represent a huge step in the right direction. If we only had
assignment operator overloading in VHDL, it would be much closer...
Still, that's a capability much closer to reality in VHDL than in
Verilog. Sure, verilog has many "built-in" tricks, but they are only
applicable to the existing type structure, and cannot be expanded upon
without revising the language itself.

Even before the fixed/floating point packages, integers simply work in
VHDL (within the range limitations), whereas in Verilog they don't
always work, and they don't complain when they fail, either.
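
For example (a minimal sketch, assuming a 0-to-15 range):

  signal count : integer range 0 to 15;
  ...
  count <= count + 1;  -- simulation halts with a range error at the overflow,
                       -- right at the statement that caused it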

In general, strong typing and built-in bounds checking in VHDL catch
more problems, closer to the source of the problems, with no
additional code being written, than is possible in Verilog without
writing A LOT of extra code. It seems that for almost every
weak-typing-enabled shortcut in Verilog, there is also a hidden,
often silent, "gotcha" to go along with it.

Andy

gabor

Apr 9, 2010, 2:25:29 PM

At the end of the day, it really comes down to how you can be more
productive. If you tend to code with many levels of abstraction
you may do better with VHDL. I find that I am more productive
with Verilog, but it could be because I tend to look at hardware
at a fairly detailed level, a bottom-up approach if you will. I
inherited Verilog projects at my current place of employment and
just stuck with the language as it grew on me. At one point I
read Thomas & Moorby's green book from cover to cover. However,
it described Verilog 95, not the more commonly used Verilog 2001,
and was not a particularly good reference book. I keep a copy
of the Doulos Golden Reference handy for the bits I don't use
every day.

Good Luck,
Gabor

glen herrmannsfeldt

Apr 9, 2010, 3:01:57 PM
In comp.arch.fpga rickman <gnu...@gmail.com> wrote:
(snip)


> Without starting a major argument, anyone care to share their feelings
> on the differences in the two languages?

I started with verilog, as that is what others I was working with
were doing, and was also told that it was a better choice for
previous C programmers. (Not that I believe that HDL should be
related to a software language.)

At some point, I learned to read VHDL, at least enough to convert
a module to verilog when needed, or to understand why something
didn't work the way I thought it should. (I had one project with
schematic capture, VHDL, and AHDL, and then I started adding
verilog to it.)

It seems to me that verilog, similar to C, gets the ideas across
without being excessively wordy. In comparison to some other
languages, I find it convenient that C converts between
char and int without the need for any special conversion operation
(such as the Fortran CHAR function).

Well, I write my verilog mostly using continuous assignment,
with a fairly small amount of behavioral verilog. For those who
prefer behavioral coding, the recommendation might be different.

-- glen

Jonathan Bromley

Apr 9, 2010, 3:07:54 PM
On Apr 9, 3:07 pm, rickman <gnu...@gmail.com> wrote:
> I think I have about had it with VHDL.  I've been using the
> numeric_std library and eventually learned how to get around the
> issues created by strong typing although it can be very arcane at
> times.  I have read about a few suggestions people are making to help
> with some aspects of the language, like a selection operator like
> Verilog has.  But it just seems like I am always fighting some aspect
> of the VHDL language.
>
> I guess part of my frustration is that I have yet to see where strong
> typing has made a real difference in my work... at least an
> improvement.

I think Andy has it about right. If you think signed arithmetic
was a tad messy in VHDL, wait until you find how successfully you
can be screwed by Verilog. The really cool thing is that Verilog
is polite, and doesn't tell you when it's screwing you. At least
VHDL is up-front about it. How many times have you created a
design in VHDL that got through compilation, but was broken in
a surprising way that was directly related to a quirk of the
language? Betcha you can count the occurrences on one hand.
Verilog does that to you all the time; it has startlingly
weak compile-time checking, and only slightly stronger
elaboration-time checking.

How comfortable are you with most-significant bits being
silently lost when you copy a wide vector into a narrow
one? How about signed values being silently zero-filled
to the width of a wider target?
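
In VHDL (a sketch with assumed widths), neither of those can happen
silently; the narrowing must be spelled out:

  signal wide   : unsigned(15 downto 0);
  signal narrow : unsigned(7 downto 0);
  ...
  narrow <= wide;                        -- reported as a length mismatch, never silently trimmed
  narrow <= resize(wide, narrow'length); -- legal: the truncation is now explicit
  -- Verilog's  narrow = wide;  just keeps the low 8 bits without comment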

> My customer uses Verilog and has mentioned several times
> how he had tried using VHDL and found it too arcane to bother with.
> He works on a much more practical level than I often do and it seems
> to work well for him.

Is "practical" here a euphemism?

> One of my goals over the summer is to teach myself Verilog so that I
> can use it as well as I currently use VHDL.  Then I can make a fully
> informed decision about which I will continue to use.  I'd appreciate
> pointers on good references, web or printed.

Good luck. As I've pointed out on many occasions, the textbook
situation is much less satisfactory for Verilog than it is
for VHDL. Whatever you do, PLEASE get yourself a copy of
Sutherland's Verilog Gotchas book (much of it is available free
online). You may not understand all of it at first, but
you sure will want to revisit it later. It's just a pity
that it's incomplete and doesn't cover ALL the many ways
in which Verilog can silently mess you up.

To be serious for a moment: a training class from a
reputable independent provider will save you a ton
of money in the long run. Your time is valuable.

> Without starting a major argument, anyone care to share their feelings
> on the differences in the two languages?

Errrrm, I think I just did.
--
Jonathan Bromley

Nico Coesel

Apr 9, 2010, 3:31:20 PM
rickman <gnu...@gmail.com> wrote:

>I think I have about had it with VHDL. I've been using the
>numeric_std library and eventually learned how to get around the
>issues created by strong typing although it can be very arcane at
>times. I have read about a few suggestions people are making to help
>with some aspects of the language, like a selection operator like
>Verilog has. But it just seems like I am always fighting some aspect
>of the VHDL language.
>
>I guess part of my frustration is that I have yet to see where strong
>typing has made a real difference in my work... at least an

I also write a lot of C. Over the past years I've noticed that C
compilers (GCC to be exact) have become much more strict when it comes
to type checking. No more automatic casting. I'm sure this is done for
a good reason!

--
Failure does not prove something is impossible, failure simply
indicates you are not using the right tools...
nico@nctdevpuntnl (punt=.)
--------------------------------------------------------------

David Brown

Apr 12, 2010, 9:56:22 AM
On 09/04/2010 21:31, Nico Coesel wrote:
> rickman<gnu...@gmail.com> wrote:
>
>> I think I have about had it with VHDL. I've been using the
>> numeric_std library and eventually learned how to get around the
>> issues created by strong typing although it can be very arcane at
>> times. I have read about a few suggestions people are making to help
>> with some aspects of the language, like a selection operator like
>> Verilog has. But it just seems like I am always fighting some aspect
>> of the VHDL language.
>>
>> I guess part of my frustration is that I have yet to see where strong
>> typing has made a real difference in my work... at least an
>
> I also write a lot of C. Over the past years I've noticed that C
> compilers (GCC to be exact) have become much more strict when it comes
> to type checking. No more automatic casting. I'm sure this is done for
> a good reason!
>

It's not correct that C compilers have become stricter about type
checking - a standards-conforming C compiler has to support exactly the
same automatic typecasting as always. However, gcc's warning
capabilities have improved enormously over the years (it's better than
any other C compiler I have used), and if you enable those warnings it
will tell you about potential problems with losing bits on casts, mixing
signed and unsigned data, etc. It still has to accept it as valid C,
however (unless you use the option to make warnings act as errors).

Compiling in C++ mode gives you a little bit extra type strength - for
example, enumerations become proper types.

Bernd Paysan

Apr 14, 2010, 5:39:54 PM
Jonathan Bromley wrote:

> How comfortable are you with most-significant bits being
> silently lost when you copy a wide vector into a narrow
> one? How about signed values being silently zero-filled
> to the width of a wider target?

Icarus Verilog (the free simulator) has pretty good checks on this and tells
you about vector size mismatches - IIRC, it is more pedantic than Cadence's
original Verilog simulator (they improved in the meantime, but I haven't
checked if they are as pedantic as Icarus Verilog). This is IMHO not a
language problem, but a quality of implementation issue.

--
Bernd Paysan
"If you want it done right, you have to do it yourself!"
http://www.jwdt.com/~paysan/

rickman

Apr 14, 2010, 9:41:25 PM

I have always used Hardware Description Languages (HDLs) as a way to
describe hardware rather than just a way to code an application. In
the early days this would pay off in smaller implementations. But the
tools are better now and you have to work to get an improvement in the
size of your design. Sometimes the durn language just seems to get in
the way of being able to cleanly express what I want to do.

Rick

rickman

Apr 14, 2010, 9:44:34 PM
On Apr 9, 10:50 am, Andy <jonesa...@comcast.net> wrote:
>
> In general, strong typing and built-in bounds checking in VHDL catch
> more problems, closer to the source of the problems, with no
> additional code being written, than is possible in Verilog without
> having to write A LOT of extra code. It seems for almost every weak-
> typing-enabled shortcut in verilog, there is also a hidden, often
> silent, "gotcha" to go along with it.

People say that strong typing catches bugs, but I've never seen any
real proof of that. There are all sorts of anecdotal evidence, but
nothing concrete. Sure, wearing a seat belt helps to save lives, but
at what point do we draw the line? Should we have four point
harnesses, helmets, fireproof suits...?

Rick

rickman

Apr 14, 2010, 9:54:58 PM
On Apr 9, 3:07 pm, Jonathan Bromley <s...@oxfordbromley.plus.com>
wrote:

> On Apr 9, 3:07 pm, rickman <gnu...@gmail.com> wrote:
>
> How comfortable are you with most-significant bits being
> silently lost when you copy a wide vector into a narrow
> one?  How about signed values being silently zero-filled
> to the width of a wider target?

Isn't this just a matter of knowing the rules and following them? The
same is true with VHDL. There may be fewer ways to forget, but it is
still the same.


> > My customer uses Verilog and has mentioned several times
> > how he had tried using VHDL and found it too arcane to bother with.
> > He works on a much more practical level than I often do and it seems
> > to work well for him.
>
> Is "practical" here a euphemism?

No, I mean it literally. He is always focused on getting the job done
and doesn't spend any time on things he isn't sure will pay off in
tangible ways. When I designed the board, I also designed a test
fixture, since the board goes in his chassis, which costs a few grand.
It took a while and ended up delaying some of the FPGA work a bit, but
in the end has paid off greatly since I can do so much debugging on my
own without involving him. He wouldn't have done that. Oh, it is
also the only way to power the board so the FPGA can be programmed.
He is supposed to develop a download capability in his system, but has
never spent the time to get it working. Again, he'll do that when he
knows it will pay some return.


> > One of my goals over the summer is to teach myself Verilog so that I
> > can use it as well as I currently use VHDL.  Then I can make a fully
> > informed decision about which I will continue to use.  I'd appreciate
> > pointers on good references, web or printed.
>
> Good luck.  As I've pointed out on many occasions, the textbook
> situation is much less satisfactory for Verilog than it is
> for VHDL.  Whatever you do, PLEASE get yourself a copy of
> Sutherland's Verilog Gotchas book (much of it is available free
> online).  You may not understand all of it at first, but
> you sure will want to revisit it later.  It's just a pity
> that it's incomplete and doesn't cover ALL the many ways
> in which Verilog can silently mess you up.

When I started learning VHDL I thought about writing a book from the
"practical" side of the language since so many were more like text
books. But I was overtaken by the market and it was flooded before I
got comfortable enough to do that. However, there may be a good
Verilog for the VHDL Designer book in this. If I find it was a good
thing to switch, I will seriously consider that.


> To be serious for a moment: a training class from a
> reputable independent provider will save you a ton
> of money in the long run.  Your time is valuable.

You are assuming my time is worth something. When I have no work,
there is no point in paying someone big bucks to teach me something I
can learn on my own. I am expecting to not have a lot of work over
the next few months.


> > Without starting a major argument, anyone care to share their feelings
> > on the differences in the two languages?
>
> Errrrm, I think I just did.

You did what, shared your feelings or started a major argument?

Rick

glen herrmannsfeldt

Apr 14, 2010, 10:07:12 PM
In comp.arch.fpga rickman <gnu...@gmail.com> wrote:
(snip)

> People say that strong typing catches bugs, but I've never seen any
> real proof of that. There are all sorts of anecdotal evidence, but
> nothing concrete. Sure, wearing a seat belt helps to save lives, but
> at what point do we draw the line? Should we have four point
> harnesses, helmets, fireproof suits...?

Seatbelts may save lives, but statistically many other safety
improvements don't. When people know that their car has air bags,
they compensate and drive less safely. (Corner a little faster, etc.)
Enough to mostly remove the life saving effect of the air bags.

It does seem likely that people will let down their guard and
code more sloppily knowing that the compiler will catch errors.

One of my least favorite is the Java check on variable initialization.
If the compiler can't be sure that it is initialized then it is
a fatal compilation error. There are just too many cases that
the compiler can't get right.

-- glen

Matthew Hicks

Apr 15, 2010, 1:20:51 AM

Sorry, but I have to call BS on this whole line of "logic". Unless you can
point to some studies that prove this, my experiences are contrary to your
assertions. I don't change the way I code when I code in Verilog vs. VHDL
or C vs. Java; the compiler just does a better job of catching my stupid
mistakes, allowing me to get things done faster.


---Matthew Hicks


David Brown

Apr 15, 2010, 4:32:43 AM
On 15/04/2010 04:07, glen herrmannsfeldt wrote:
> In comp.arch.fpga rickman<gnu...@gmail.com> wrote:
> (snip)
>
>> People say that strong typing catches bugs, but I've never seen any
>> real proof of that. There are all sorts of anecdotal evidence, but
>> nothing concrete. Sure, wearing a seat belt helps to save lives, but
>> at what point do we draw the line? Should we have four point
>> harnesses, helmets, fireproof suits...?
>

It is difficult to be concrete about these things, because it is so
dependent on the programmers involved. If you see strong typing as a
useful tool, and write your code accordingly, then you will write
clearer and neater code and therefore have fewer mistakes - and more of
these will be caught at compile time. If you see strong typing as a
pain that has to be worked around, you will write ugly code that will
have more mistakes than clearly written code in a weakly typed language.

The main thing is to write your code clearly, and to take advantage of
whatever typing mechanisms your tool supports - even if it doesn't check
them (for example, use typedefs and enums in C rather than "int", even
though the compiler treats them the same). You can write good code in
any language, and you certainly can write bad code in any language. But
it is easier to write good code if the language and tools support it.

> Seatbelts may save lives, but statistically many other safety
> improvements don't. When people know that their car has air bags,
> they compensate and drive less safely. (Corner a little faster, etc.)
> Enough to mostly remove the life saving effect of the air bags.
>

It's a matter of understanding your tools. The big trouble with air
bags is that people think they don't need seatbelts because they have
air bags, without realising that they are designed to work together.

> It does seem likely that people will let down their guard and
> code more sloppily knowing that the compiler will catch errors.
>
> One of my least favorite is the Java check on variable initialization.
> If the compiler can't be sure that it is initialized then it is
> a fatal compilation error. There are just too many cases that
> the compiler can't get right.
>

I don't use Java, but my C compilers have similar checks. There are
very few circumstances that the compiler will get this wrong unless you
have written particularly convoluted code. The answer, of course, is to
avoid writing convoluted code.

Any checking mechanism has a risk of false positives and false negatives
- the compiler doesn't know everything the programmer knows. But you
work with the tool to give the compiler as much knowledge as you can,
and let it help check that as much as /it/ can.


Brian Drummond

Apr 15, 2010, 9:03:02 AM
On Thu, 15 Apr 2010 10:32:43 +0200, David Brown
<da...@westcontrol.removethisbit.com> wrote:

>On 15/04/2010 04:07, glen herrmannsfeldt wrote:
>> In comp.arch.fpga rickman<gnu...@gmail.com> wrote:
>> (snip)
>>
>>> People say that strong typing catches bugs, but I've never seen any
>>> real proof of that. There are all sorts of anecdotal evidence, but
>>> nothing concrete. Sure, wearing a seat belt helps to save lives, but
>>> at what point do we draw the line? Should we have four point
>>> harnesses, helmets, fireproof suits...?

>The main thing is to write your code clearly, and to take advantage of

>whatever typing mechanisms your tool supports - even if it doesn't check
>them (for example, use typedefs and enums in C rather than "int", even
>though the compiler treats them the same). You can write good code in
>any language, and you certainly can write bad code in any language. But
>it is easier to write good code if the language and tools make support it.

Then (using enums as a simple example), strong typing can either catch
or eliminate errors that may not immediately be considered as type
errors.

Loop count errors or indexing off the end of an array, for example, are
usually type errors, since they use the range of a discrete subtype. In
C you have to build something complex and error-prone simply to
replicate "for i in my_enum'range loop ...", and remember to maintain it
whenever you add a value to the enum...

>
>Any checking mechanism has a risk of false positives and false negatives
>- the compiler doesn't know everything the programmer knows. But you
>work with the tool to give the compiler as much knowledge as you can,
>and let it help check that as much as /it/ can.
>

Agreed

- Brian

Patrick Maupin

Apr 15, 2010, 10:33:27 AM

Patrick Maupin

Apr 15, 2010, 10:37:54 AM
On Apr 15, 12:20 am, Matthew Hicks <mdhic...@uiuc.edu> wrote:

You can "call BS" all you want, but the fact that you don't change the
way you code in Verilog vs. VHDL or or C vs. Java indicates that your
experiences are antithetical to mine, so I have to discard your
datapoint.

Regards,
Pat

Andy

Apr 15, 2010, 3:23:50 PM
The benefits of a "strongly typed" language, with bounds checks, etc.
are somewhat different between the first time you write/use the code,
and the Nth time you reuse and revise it. Strong typing and bounds
checking let you know quickly the possibly hidden side effects of
making changes in the code, especially when it may have been a few
days/weeks/months since the last time you worked with it.

A long time ago there was a famous contest for designing a simple
circuit in verilog vs. vhdl to see which language was better. The
requirements were provided on paper, and the contestants were given an
hour or two (don't remember how long, but it was certainly not even a
day), and whoever got the fastest and the smallest (two winners)
correct synthesized circuit, their chosen language won. Verilog won
both, and I don't think vhdl even finished.

IMHO, they missed the point. Any design that can be completed in a
couple of hours will necessarily favor the language with the least
overhead. Unfortunately, two-hour-solvable designs are not
representative of real life designs, and neither was the contest's
declared winner.

If you just want to hack out the code and get it working by yourself
for the day, a weakly typed language may be the better choice for you.
If you need to be able to reuse/revise/update/extend the design over
time, more strongly typed languages are preferred.

Andy

David Brown

Apr 15, 2010, 4:12:13 PM
On 15/04/2010 21:23, Andy wrote:
> The benefits of a "strongly typed" language, with bounds checks, etc.

This is perhaps a nit-pick, but bounds checks are not directly related
to type strength. Bound checks are either run-time (and therefore have
a cost), or compile-time (or both). For compile time, this mainly means
that the compiler must see the full declaration of the array or other
type when you are using it - modern gcc will do pretty good compile-time
bounds checking if it has enough information, even with weakly-typed C.

> are somewhat different between the first time you write/use the code,
> and the Nth time reuse and revise it. Strong typeing and bounds
> checking let you know quickly the possibly hidden side effects of
> making changes in the code, especially when it may have been a few
> days/weeks/months since the last time you worked with it.
>

Agreed, although this applies to a lot of good programming techniques
(you can write strongly-typed code even if the language doesn't enforce
it, and typically the compiler's warnings will give you more help than
the language standards). It's faster to write code full of functions
"test" and "foo" than to think of meaningful names, but it makes a big
difference when you go back to the code later.

> A long time ago there was a famous contest for designing a simple
> circuit in verilog vs. vhdl to see which language was better. The
> requirements were provided on paper, and the contestents were given an
> hour or two (don't remember how long, but it was certainly not even a
> day), and whoever got the fastest and the smallest (two winners)
> correct synthesized circuit, their chosen language won. Verilog won
> both, and I don't think vhdl even finished.
>
> IMHO, they missed the point. Any design that can be completed in a
> couple of hours will necessarily favor the language with the least
> overhead. Unfortunately, two-hour-solvable designs are not
> representative of real life designs, and neither was the contest's
> declared winner.
>
> If you just want to hack out the code and get it working by yourself
> for the day, a weakly typed language may be the better choice for you.
> If you need to be able to reuse/revise/update/extend the design over
> time, more strongly typed languages are preferred.
>
> Andy

Another famous contest involved a C and Ada comparison. It took the Ada
team more than twice as long as the C team to write their code, but it
took the C team more than ten times as long to debug their code.

David Brown

Apr 15, 2010, 4:19:00 PM
On 15/04/2010 15:03, Brian Drummond wrote:
> On Thu, 15 Apr 2010 10:32:43 +0200, David Brown
> <da...@westcontrol.removethisbit.com> wrote:
>
>> On 15/04/2010 04:07, glen herrmannsfeldt wrote:
>>> In comp.arch.fpga rickman<gnu...@gmail.com> wrote:
>>> (snip)
>>>
>>>> People say that strong typing catches bugs, but I've never seen any
>>>> real proof of that. There are all sorts of anecdotal evidence, but
>>>> nothing concrete. Sure, wearing a seat belt helps to save lives, but
>>>> at what point do we draw the line? Should we have four point
>>>> harnesses, helmets, fireproof suits...?
>
>> The main thing is to write your code clearly, and to take advantage of
>> whatever typing mechanisms your tool supports - even if it doesn't check
>> them (for example, use typedefs and enums in C rather than "int", even
>> though the compiler treats them the same). You can write good code in
>> any language, and you certainly can write bad code in any language. But
>> it is easier to write good code if the language and tools make support it.
>
> Then (using enums as a simple example), strong typing can either catch
> or eliminate errors that may not immediately be considered as type
> errors.
>

Yes - it's one of the advantages of C++ over C (C++ has plenty of
advantages and plenty of disadvantages) - type checking, including
enums, is much stronger. And with a bit of messing around you can
make types that are effectively enums but even more strongly typed.

I've seen some code to use C++ classes to encapsulate rules about lock
orders for multi-threaded code (such as "you must not take lock A unless
you first have lock B, and you must release them in reverse order").
The result was that violations of these rules ended up as type errors
and were therefore caught at compile time.

It's often possible to add features to the language in this way - some
ugly macros will give you zero-cost static asserts in C, for example.

> Loop count errors or indexing off the end of an array, for example, are
> usually type errors, since they use the range of a discrete subtype. In
> C you have to build something complex and error prone, simply to
> replicate "for i in my_enum'range loop ..." And remember to maintain it
> whenever you add a value to the enum...
>

typedef enum {
    red, blue, green,
    noOfColours
} colours;

for (int i = 0; i < noOfColours; i++) { ... }

But it would be *so* much nicer if it were part of the language as it is
in Ada or Pascal.
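
For comparison, a sketch of the VHDL form Brian alluded to (the type
name is made up); there is no sentinel value, and the loop tracks the
type automatically:

  type colours is (red, blue, green);
  ...
  for c in colours loop
    -- c takes each enumeration value in turn; adding a value to the
    -- type extends the loop with no other edits
  end loop;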

Patrick Maupin

Apr 15, 2010, 5:21:37 PM
On Apr 15, 3:12 pm, David Brown <da...@westcontrol.removethisbit.com>
wrote:

> Another famous contest involved a C and Ada comparison.  It took the Ada
> more than twice as long as the C team to write their code, but it took
> the C team more than ten times as long to debug their code.

Well, this isn't at all the same then. The Verilog teams got working
designs, and the VHDL teams didn't.

Patrick Maupin

Apr 15, 2010, 5:27:08 PM
On Apr 15, 2:23 pm, Andy <jonesa...@comcast.net> wrote:
> The benefits of a "strongly typed" language, with bounds checks, etc.
> are somewhat different between the first time you write/use the code,
> and the Nth time reuse and revise it. Strong typeing and bounds
> checking let you know quickly the possibly hidden side effects of
> making changes in the code, especially when it may have been a few
> days/weeks/months since the last time you worked with it.

For this usage, a good testbench will catch more bugs and make strong
type and bounds checking redundant.

>
> A long time ago there was a famous contest for designing a simple
> circuit in verilog vs. vhdl to see which language was better. The
> requirements were provided on paper, and the contestents were given an
> hour or two (don't remember how long, but it was certainly not even a
> day), and whoever got the fastest and the smallest (two winners)
> correct synthesized circuit, their chosen language won. Verilog won
> both, and I don't think vhdl even finished.

Contest details here:

http://www.see.ed.ac.uk/~gerard/Teach/Verilog/manual/Example/lrgeEx2/cooley.html

> IMHO, they missed the point. Any design that can be completed in a
> couple of hours will necessarily favor the language with the least
> overhead. Unfortunately, two-hour-solvable designs are not
> representative of real life designs, and neither was the contest's
> declared winner.

Well, I think the takeaway is a lot more nuanced than that, but
whatever. Believe what you will.

> If you just want to hack out the code and get it working by yourself
> for the day, a weakly typed language may be the better choice for you.
> If you need to be able to reuse/revise/update/extend the design over
> time, more strongly typed languages are preferred.

Again, IMHO, a really good testbench will more than make up for any
perceived weaknesses in Verilog in this area. But you are free to
continue to believe that the language is really helping you.

Regards,
Pat

Muzaffer Kal

Apr 15, 2010, 5:31:01 PM

There are two issues to consider. One is the relative time spent writing
the code vs. debugging it, i.e. if writing took 5 hours and debugging 10
minutes (unlikely) then C still wins. Which brings up the second issue:
it is very likely that the programming contest involved a "larger"
design to be finished. If I am remembering correctly, the RTL was an
async-reset, synchronously loadable up-down counter, which is a
"smallish" project. If the programming contest involved something more
"involved", it still points to the benefit of strong typing and other
features of Ada/VHDL etc.
--
Muzaffer Kal

DSPIA INC.
ASIC/FPGA Design Services

http://www.dspia.com

Patrick Maupin

Apr 15, 2010, 5:48:31 PM
On Apr 15, 4:31 pm, Muzaffer Kal <k...@dspia.com> wrote:
> On Thu, 15 Apr 2010 14:21:37 -0700 (PDT), Patrick Maupin
>
> <pmau...@gmail.com> wrote:
> >On Apr 15, 3:12 pm, David Brown <da...@westcontrol.removethisbit.com>
> >wrote:
>
> >> Another famous contest involved a C and Ada comparison.  It took the Ada
> >> more than twice as long as the C team to write their code, but it took
> >> the C team more than ten times as long to debug their code.
>
> >Well, this isn't at all the same then.  The Verilog teams got working
> >designs, and the VHDL teams didn't.
>
> There are two issues to consider. One is the relative times of writing
> the codes vs debugging ie if writing took 5 hours and debugging 10
> minutes (unlikely) then C still wins. Which brings the second issue:
> it is very likely that the programming contest involved a "larger"
> design to be finished. If I am remembering correctly RTL was  an async
> reset, synchronously loadable up-down counter which is a "smallish"
> project. If programming contest involved something more "involved" it
> still points to the benefit of strong typing and other features of
> Ada/VHDL etc.

But it's mostly academic and FPGA people who think that VHDL might
have any future at all. See, for example:

http://www.eetimes.com/news/design/columns/industry_gadfly/showArticle.jhtml?articleID=17408302

Regards,
Pat

Andrew FPGA

Apr 15, 2010, 5:57:06 PM
Interestingly, in the discussion on MyHDL/testbenches, no-one raised
SystemVerilog. SystemVerilog raises the level of abstraction (like
MyHDL), but more importantly it introduces constrained random
verification. For writing testbenches, SV is a better choice than
MyHDL/VHDL/Verilog, assuming tool availability.

It would seem that SV does not bring much to the table in terms of RTL
design - it's just a catch-up to get Verilog up to the capabilities that
VHDL already has.

Jonathan Bromley

Apr 15, 2010, 5:58:53 PM
On Apr 15, 10:21 pm, Patrick Maupin <pmau...@gmail.com> wrote:

> Well, this isn't at all the same then.  The Verilog teams got working
> designs, and the VHDL teams didn't.

It's really about time that this old VHDL-Verilog shootout nonsense
was shown up for the garbage it was.

It was set up by John Cooley many years ago. It took place in the
US where Verilog predominated, then as now, so there was a much
smaller pool of skilled VHDL users than Veriloggers at the host
event. At the time, VHDL did indeed lag badly behind in some
respects - the situation with numeric packages was chaotic, and
the design was a (very simple) counter of some kind so anyone
who wasn't fluent with the VHDL numeric packages **as accepted
by the tools in use** was doomed. And, as Andy pointed out, the
scale of the problem was so tiny that Verilog would always come
out on top - Verilog is definitely faster to write for tiny
designs; you don't need a contest to work that out.

All of this means that the "contest" was little more than a
good way to show how feeble were VHDL's **tools** at that
time. It wasn't very long before the tool vendors got their
acts together, but the shootout wasn't re-run; instead, it
became part of the folklore. It was unreliable populist
hokum then, it's out-of-date unreliable hokum now, and I'm
saddened to see it adduced in evidence so many years on.

There's a lot wrong with VHDL, for sure, but a lot right too.
Here's a sampler of a few things that have been in VHDL since
at least 1987:

Added to Verilog in 2001:
- generate
- multi-dimensional arrays
- signed vector arithmetic
- configurations (nasty in both languages!)

Added to SystemVerilog in 2003-2005:
- packages
- structs (a.k.a. records)
- non-trivial data types (e.g. arrays, structs) on ports
- enumeration types
- unresolved signals (compiler errors on multiple drivers)
- non-trivial data types as subprogram arguments
- reference arguments to subprograms
- default values for input arguments
- inquiry functions to determine the dimensionality and
bounds of an array

May possibly be added to SystemVerilog in 2012:
- parameterized subprograms (like VHDL unconstrained arrays)
- subprogram overloading

Of course, there's also a ton of stuff that Verilog can do
but VHDL can't. And it's rarely a good idea to judge any
language by a laundry-list of its feature goodies. But
it is NEVER a good idea to trivialize the discussion.
--
Jonathan Bromley

Patrick Maupin

Apr 15, 2010, 6:30:37 PM
On Apr 15, 4:58 pm, Jonathan Bromley <s...@oxfordbromley.plus.com>
wrote:

> Of course, there's also a ton of stuff that Verilog can do


> but VHDL can't.  And it's rarely a good idea to judge any
> language by a laundry-list of its feature goodies.

Agreed completely.

>  But it is NEVER a good idea to trivialize the discussion.

Well, you have to talk to others about that. Somebody else brought up
the stupid Verilog contest, and David, apparently agreeing with some
sentiment there, said:

> Another famous contest involved a C and Ada comparison. It took the Ada
> more than twice as long as the C team to write their code, but it took
> the C team more than ten times as long to debug their code.

To which the only sane answer was my flippant "it's not the
same" (and, of course the very first thing that's not the same is who
the purported winner was). I don't think either contest is worth a
hoot, but I do find it interesting that you found it necessary to pen
a long response to my flippant response, yet found it acceptable to
ignore the statement about the C vs. Ada contest.

Regards,
Pat

Jonathan Bromley

Apr 16, 2010, 3:47:23 AM
On Apr 15, 11:30 pm, Patrick Maupin <pmau...@gmail.com> wrote:

> I do find it interesting that you found it necessary to pen
> a long response to my flippant response, yet found it acceptable to
> ignore the statement about the C vs. ADA contest.

I try to write on things I know something about :-)

I am painfully familiar with the Cooley Verilog-vs-VHDL nonsense,
but know nothing about that C-Ada contest.

In any case, I wasn't particularly responding to you. I took an
opportunity to say something I've wanted to say for a long time
about an exceedingly faulty part of the HDL mythology.
--
Jonathan Bromley

David Brown

Apr 16, 2010, 5:15:35 AM

The contest in question was a substantial programming project over a
longer period - weeks rather than hours. I don't remember how much time
was actually spent on debugging rather than coding, but it certainly
worked out that the Ada team finished long before the C team.

David Brown

Apr 16, 2010, 5:31:40 AM
On 16/04/2010 00:30, Patrick Maupin wrote:
> On Apr 15, 4:58 pm, Jonathan Bromley<s...@oxfordbromley.plus.com>
> wrote:
>
>> Of course, there's also a ton of stuff that Verilog can do
>> but VHDL can't. And it's rarely a good idea to judge any
>> language by a laundry-list of its feature goodies.
>
> Agreed completely.
>
>> But it is NEVER a good idea to trivialize the discussion.
>
> Well, you have to talk to others about that. Somebody else brought up
> the stupid Verilog contest, and David, apparently agreeing with some
> sentiment there, said:
>

I wasn't agreeing with the validity of the Verilog/VHDL contest,
although I suppose by not saying that, it looked like I agreed with it.
It would have been more useful if I'd given a little more detail. The
Ada / C contest was over a much longer time scale, using a real-world
project - and thus is a much more valid contest (though obviously, like
any test or benchmark, you can't apply it thoughtlessly to other contexts).

It was an indication of where the stronger typing and generally stricter
compiler and language was demonstrated to give a faster development time
in a real case. I can't say whether those results could carry over to a
comparison between VHDL and Verilog, or how much the results are the
effect of strong typing. But since VHDL is often thought of as being a
similar style of language to Ada, and Verilog is similarly compared to
C, it may be of interest.

I couldn't find references to the study I was thinking of, but I found
one in a similar vein:

<http://www.adaic.org/whyada/ada-vs-c/cada_art.html#conclusion>

Of course, I haven't scoured the net looking for enough articles to give
a balanced view here. So if my comments here are of interest or use to
anyone, that's great - if not, I'll not complain if you ignore them!

David Brown

Apr 16, 2010, 5:38:50 AM
On 15/04/2010 23:27, Patrick Maupin wrote:
> On Apr 15, 2:23 pm, Andy<jonesa...@comcast.net> wrote:
>> The benefits of a "strongly typed" language, with bounds checks, etc.
>> are somewhat different between the first time you write/use the code,
>> and the Nth time reuse and revise it. Strong typeing and bounds
>> checking let you know quickly the possibly hidden side effects of
>> making changes in the code, especially when it may have been a few
>> days/weeks/months since the last time you worked with it.
>
> For this usage, a good testbench will catch more bugs and make strong
> type and bounds checking redundant.
>

A testbench does not make checks redundant, for two reasons. First, the
earlier in the process you find the mistakes, the better - it's
easier to find the cause of the mistake, and it's faster to find them,
fix them, and re-compile.

Secondly, a testbench does not check everything. It is only as good as
the work put into it, and can be flawed in the same way as the code
itself. A testbench's scope for non-trivial projects is always limited
- it is not practical to test everything. If you have some code that
has a counter, your testbench may not go through the entire range of the
counter - perhaps doing so would take simulation times of years. Your
testbench will then not do bounds checking on the counter.

The old joke about Ada is that when you get your code to compile, it's
ready to ship. I certainly wouldn't go that far, but testing is
something you do in cooperation with static checking, not as an alternative.


mvh.,

David

Symon

Apr 16, 2010, 6:30:25 AM
Pat,
If your email client was less agile and performed better 'typing
checking', you wouldn't have sent this blank post.
HTH, Syms. ;-)

Brian Drummond

Apr 16, 2010, 7:22:16 AM

Was it John McCormick's model railroad?

http://www.adaic.org/atwork/trains.html

Possibly not - the C students apparently never did deliver.

- Brian

David Brown

Apr 16, 2010, 7:30:39 AM

I suppose there have been many such studies through the ages. The one I
remember was a real project rather than a student project (I think it
was a large commercial company, but it may have been some sort of
government organisation).

Patrick Maupin

Apr 16, 2010, 1:50:08 PM
On Apr 16, 5:30 am, Symon <symon_bre...@hotmail.com> wrote:

> Pat,
> If your email client was less agile and performed better 'typing
> checking' you wouldn't have sent this blank post.
> HTH, Syms. ;-)

Absolutely true!

But it keeps me young trying to keep up with it.

Pat

rickman

Apr 16, 2010, 1:56:57 PM
On Apr 14, 10:07 pm, glen herrmannsfeldt <g...@ugcs.caltech.edu>
wrote:

> In comp.arch.fpga rickman <gnu...@gmail.com> wrote:
> (snip)
>
> > People say that strong typing catches bugs, but I've never seen any
> > real proof of that.  There are all sorts of anecdotal evidence, but
> > nothing concrete.  Sure, wearing a seat belt helps to save lives, but
> > at what point do we draw the line?  Should we have four point
> > harnesses, helmets, fireproof suits...?
>
> Seatbelts may save lives, but statistically many other safety
> improvements don't.  When people know that their car has air bags,
> they compensate and drive less safely.  (Corner a little faster, etc.)
> Enough to mostly remove the life saving effect of the air bags.

Are you making this up? I have never heard that any of the other
added safety features don't save lives overall. I have heard that
driving a sportier car does allow you to drive more aggressively, but
this is likely not actually the result of any real analysis, but just
an urban myth. Where did you hear that air bags don't save lives
once everything is considered?


> It does seem likely that people will let down their guard and
> code more sloppily knowing that the compiler will catch errors.

If you can show me something that shows this, fine, but otherwise this
is just speculation.


> One of my least favorite is the Java check on variable initialization.
> If the compiler can't be sure that it is initialized then it is
> a fatal compilation error.  There are just too many cases that
> the compiler can't get right.

I saw a warning the other day that my VHDL signal initialization "is
not synthesizable". I checked and it was appropriately initialized on
async reset; it was just complaining that I also used an
initialization in the declaration to keep the simulator from giving me
warnings in library functions. You just can't please everyone!
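
Roughly the situation described, as a sketch with made-up names:

  signal q : std_logic := '0';  -- declaration-time init quiets simulator warnings,
                                -- but drew the "not synthesizable" complaint
  ...
  process (clk, rst)
  begin
    if rst = '1' then
      q <= '0';                 -- the async-reset initialization the hardware actually uses
    elsif rising_edge(clk) then
      q <= d;
    end if;
  end process;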

Then again I had to make a second trip to the customer yesterday
because of an output that got disconnected in a change and I didn't
see the warning in the ocean of warnings and notes that the tools
generate. Then I spent half an hour going through all of it in detail
and found a second disconnected signal. Reminds me of the moon
landing where there was a warning about a loss of sync which kept
happening so much it overloaded the guidance computer and they had to
land manually. TMI!

Rick

rickman

Apr 16, 2010, 1:58:31 PM

That is certainly a great way to prove a theory. Toss out every data
point that disagrees with your theory!

Rick

rickman

Apr 16, 2010, 2:05:52 PM
On Apr 15, 3:23 pm, Andy <jonesa...@comcast.net> wrote:
> The benefits of a "strongly typed" language, with bounds checks, etc.
> are somewhat different between the first time you write/use the code,
> and the Nth time reuse and revise it. Strong typeing and bounds
> checking let you know quickly the possibly hidden side effects of
> making changes in the code, especially when it may have been a few
> days/weeks/months since the last time you worked with it.
>
> A long time ago there was a famous contest for designing a simple
> circuit in verilog vs. vhdl to see which language was better. The
> requirements were provided on paper, and the contestents were given an
> hour or two (don't remember how long, but it was certainly not even a
> day), and whoever got the fastest and the smallest (two winners)
> correct synthesized circuit, their chosen language won. Verilog won
> both, and I don't think vhdl even finished.

Maybe this was repeated, but the first time they tried this *NO ONE*
finished in time, which is likely much more realistic compared to real
assignments in the real world. If you think it will take a couple of
hours, allocate a couple of days!

Rick

Bernd Paysan

Apr 16, 2010, 2:24:47 PM
rickman wrote:
> People say that strong typing catches bugs, but I've never seen any
> real proof of that. There are all sorts of anecdotal evidence, but
> nothing concrete.

My practical experience is that strong typing creates another class of bugs,
simply by making things more complicated. I've last seen VHDL in use more
than 10 years ago, but the typical pattern was that a designer wanted a bit
vector, and created a subranged integer instead. Seems to be identical, but
isn't. If you increment the subranged integer, it will stop simulation on
overflow, if you increment the bit vector, it will wrap around. My coworker
who did this subranged integer stuff quite a lot ended up with code like

if foo = 15 then foo <= 0; else foo <= foo + 1; end if;

And certainly, all those lines had started out as

foo <= foo + 1;

and were only "fixed" later when the simulation crashed.

The good news is that the synthesis tool really generates the bitvector
logic for both, so all those simulator crashes were only false alarms.

--
Bernd Paysan
"If you want it done right, you have to do it yourself!"
http://www.jwdt.com/~paysan/

rickman

Apr 16, 2010, 2:25:51 PM
> http://www.eetimes.com/news/design/columns/industry_gadfly/showArticl...
>
> Regards,
> Pat

Hmmm... The date on that article is 04/07/2003 11:28 AM EDT. Seven
years later I still don't see any sign that VHDL is going away... or
did I miss something?

Rick

whygee

Apr 16, 2010, 1:53:44 PM
rickman wrote:
> Hmmm... The date on that article is 04/07/2003 11:28 AM EDT. Seven
> years later I still don't see any sign that VHDL is going away... or
> did I miss something?

I had the same thought.

Furthermore, it was about only one company that wanted to push one
technology. Bold statements followed, and... 7 years later,
VHDL and Verilog are still the Vi vs Emacs of EDA.

> Rick
yg
--
http://ygdes.com / http://yasep.org

glen herrmannsfeldt

Apr 16, 2010, 2:32:56 PM
In comp.arch.fpga rickman <gnu...@gmail.com> wrote:
(snip, I wrote)

>> Seatbelts may save lives, but statistically many other safety
>> improvements don't. ?When people know that their car has air bags,
>> they compensate and drive less safely. ?(Corner a little faster, etc.)

>> Enough to mostly remove the life saving effect of the air bags.

> Are you making this up? I have never heard that any of the other
> added safety features don't save lives overall. I have heard that
> driving a sportier car does allow you to drive more aggressively, but
> this is likely not actually the result of any real analysis, but just
> an urban myth. Where did you hear that air bags don't save lives
> after considering all?

I believe that they still do save lives, but by a smaller
factor than one might expect. I believe the one that I saw
was not quoting air bags, but anti-lock brakes.

The case for air bags was mentioned by someone else -- that some
believe that they don't need seat belts if they have air bags.
Without seat belts, though, you can be too close to the air bag
when it deploys, and get hurt by the air bag itself. For that
reason, they now use slower air bags than they used to.

The action of anti-lock brakes has a more immediate feel while
driving, and it seems likely that many will take that into account
while driving. I believe that there is still a net gain, but
much smaller than would be expected.

-- glen

rickman

Apr 16, 2010, 2:35:54 PM
On Apr 16, 5:38 am, David Brown <da...@westcontrol.removethisbit.com>
wrote:

> Secondly, a testbench does not check everything.  It is only as good as
> the work put into it, and can be flawed in the same way as the code
> itself.  

I was listening to a lecture by a colleague once who indicated that you
don't need to use static timing analysis since you can use a timing
based simulation! I queried him on this a bit and he seemed to think
that you just needed to have a "good enough" test bench. I was
incredulous about this for a long time. Now I realize he was just a
moron^H^H^H^H^H^H^H ill informed!

Rick

rickman

Apr 16, 2010, 2:44:21 PM

I can't say I understand what the point is. If you want a counter to
wrap around you only need to use the mod operator. In fact, I tried
using unsigned and signed types for counters and checked the synthesis
results for size. I found that the coding style greatly influences
the result. I ended up coding with subranged integers using a mod
operator because it always gave me a good synthesis result. I never
did understand some of the things the synthesis did, but it was not
uncommon to see one adder chain for the counter and a second adder
chain for the carry out!
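
Something along these lines (a sketch; the 0-to-15 range is just an
example):

  signal count : integer range 0 to 15;
  ...
  count <= (count + 1) mod 16;  -- wraps like a 4-bit vector instead of
                                -- raising a range error at the top count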

After I run the Verilog gauntlet this summer, I plan to start
verifying a library of common routines to use in designs when the size
is important. My current project is very tight on size and it is such
a PITA to code every line thinking of size.

Rick

glen herrmannsfeldt

Apr 16, 2010, 2:45:03 PM
In comp.arch.fpga rickman <gnu...@gmail.com> wrote:
(snip)


> I was listening to a lecture by a college once who indicated that you
> don't need to use static timing analysis since you can use a timing
> based simulation! I queried him on this a bit and he seemed to think
> that you just needed to have a "good enough" test bench. I was
> incredulous about this for a long time. Now I realize he was just a
> moron^H^H^H^H^H^H^H ill informed!

I suppose so, but consider it the other way around.

If your test bench is good enough then it will catch all static
timing failures (eventually). With static timing analysis, there
are many things that you don't need to check with the test bench.

Also, you can't do static timing analysis on the implemented logic.
(That is, given an actual built circuit and a logic analyzer.)

Now, setup and hold violations are easy to test with static
analysis, but much harder to check in actual logic. Among others,
you would want to check all possible clock skew failures, which is
normally not possible. With the right test bench and logic
implementation (including programmable delays on each FF clock)
it might be possible, though.

-- glen

Bernd Paysan

Apr 16, 2010, 2:56:19 PM
Andy wrote:
> IMHO, they missed the point. Any design that can be completed in a
> couple of hours will necessarily favor the language with the least
> overhead. Unfortunately, two-hour-solvable designs are not
> representative of real life designs, and neither was the contest's
> declared winner.

Well, we pretty much know that the number of errors people make in a
programming language basically depends on how much code they have to
write - a language which has less overhead and is more terse is written
faster and has fewer bugs. And it goes non-linear, i.e. a program with
10k lines of code will have fewer bugs per 1000 lines than a program
with 100k lines of code. So the larger the project, the better the more
terse language fares.

Patrick Maupin

Apr 16, 2010, 5:08:17 PM
On Apr 16, 12:58 pm, rickman <gnu...@gmail.com> wrote:

> That is certainly a great way to prove a theory.  Toss out every data
> point that disagrees with your theory!

Well, I don't really need others to agree with my theory, so if that's
how it's viewed, so be it. Nonetheless, I view it as tossing out data
that was taken under different conditions than the ones I live under.
Although the basics don't change (C, C++, Java, Verilog, VHDL are all
turing-complete, as are my workstation and the embedded systems I
sometimes program on), the details can make things qualitatively
enough different that they actually appear to be quantitatively
different. It's like quantum mechanics vs. Newtonian physics.

For example, on my desktop, if I'm solving an engineering problem, I
might throw Python and numpy, or matlab, and gobs of RAM and CPU at
it. On a 20 MIPS, fixed point, low-precision embedded system with a
total of 128K memory, I don't handle the problem the same way.

I find the same with language differences. I assumed your complaint
when you started this thread was that a particular language was
*forcing* you into a paradigm you felt might be sub-optimal. My
opinion is that, even when languages don't *force* you into a
particular paradigm, there is an impedance match between coding style
and language that you ignore at the peril of your own frustration.

So when somebody says "I don't change the way I code when I code in
Verilog vs. VHDL or C vs. Java, the compiler just does a better job of
catching my stupid mistakes, allowing me to get things done faster," I
just can't even *relate* to that viewpoint. It is that of an alien from
a different universe, so has no bearing on my day to day existence.

Regards,
Pat

Patrick Maupin

Apr 16, 2010, 5:09:20 PM
On Apr 16, 1:25 pm, rickman <gnu...@gmail.com> wrote:

> Hmmm...  The date on that article is 04/07/2003 11:28 AM EDT.  Seven
> years later I still don't see any sign that VHDL is going away... or
> did I miss something?

True, but you also have to remember in the early 90s that all the
industry pundits thought verilog was dead...

Bernd Paysan

Apr 16, 2010, 6:37:49 PM
glen herrmannsfeldt wrote:
> If your test bench is good enough then it will catch all static
> timing failures (eventually). With static timing analysis, there
> are many things that you don't need to check with the test bench.

And then there are some corner cases where neither static timing analysis
nor digital simulation helps - like signals crossing asynchronous clock
boundaries (there *will* be a setup or hold violation, but a robust clock
boundary crossing circuit will work in practice).

Example: We had a counter running on a different clock (actually a VCO,
where the voltage was an analog input), and to sample it robustly in the
normal digital clock domain, I grey-encoded it. There will be one bit which
is either this or that when sampling at a setup or hold violation condition,
but it is only that one bit, and it's either in the state before the
increment or after.
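
Roughly like this (a reconstructed sketch, not the original circuit; all
names invented):

library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity gray_sampler is
  port (
    vco_clk  : in  std_logic;                      -- counter clock (the VCO)
    sys_clk  : in  std_logic;                      -- normal digital clock
    gray_out : out std_logic_vector(7 downto 0));  -- safe to decode in sys_clk domain
end entity gray_sampler;

architecture rtl of gray_sampler is
  signal bin_count  : unsigned(7 downto 0) := (others => '0');
  signal gray_count : unsigned(7 downto 0) := (others => '0');
  signal gray_meta, gray_sync : unsigned(7 downto 0);
begin
  -- Free-running counter in the VCO domain; register the gray encoding
  -- of the incremented value so only one bit changes per clock.
  counter : process (vco_clk)
    variable v : unsigned(7 downto 0);
  begin
    if rising_edge(vco_clk) then
      v := bin_count + 1;
      bin_count  <= v;
      gray_count <= v xor shift_right(v, 1);
    end if;
  end process;

  -- Two-stage synchronizer in the digital domain: a setup/hold violation
  -- can corrupt at most the single changing bit, so the sampled value is
  -- either the count before the increment or after it, never garbage.
  sampler : process (sys_clk)
  begin
    if rising_edge(sys_clk) then
      gray_meta <= gray_count;
      gray_sync <= gray_meta;
    end if;
  end process;

  gray_out <= std_logic_vector(gray_sync);
end architecture rtl;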

rickman

unread,
Apr 17, 2010, 10:16:08 AM4/17/10
to
On Apr 16, 2:45 pm, glen herrmannsfeldt <g...@ugcs.caltech.edu> wrote:
> In comp.arch.fpga rickman <gnu...@gmail.com> wrote:
> (snip)
>
> > I was listening to a lecture by a colleague once who indicated that you
> > don't need to use static timing analysis since you can use a timing
> > based simulation!  I queried him on this a bit and he seemed to think
> > that you just needed to have a "good enough" test bench.  I was
> > incredulous about this for a long time.  Now I realize he was just a
> > moron^H^H^H^H^H^H^H ill informed!
>
> I suppose so, but consider it the other way around.
>
> If your test bench is good enough then it will catch all static
> timing failures (eventually).  With static timing analysis, there
> are many things that you don't need to check with the test bench.

I don't follow what you are saying. This first sentence seems to be
saying that a timing simulation *is* a good place to find timing
problems, or are you talking about real world test benches? The point
is that static timing is enough to catch all timing failures given
that your timing constraints cover the design properly... and I agree
that is a big given. Your second sentence seems to be agreeing with
my previous statement.


> Also, you can't do static timing analysis on the implemented logic.
> (That is, given an actual built circuit and a logic analyzer.)

So?


> Now, setup and hold violations are easy to test with static
> analysis, but much harder to check in actual logic.  Among others,
> you would want to check all possible clock skew failures, which is
> normally not possible.  With the right test bench and logic
> implementation (including programmable delays on each FF clock)
> it might be possible, though.

In twenty years of designing with FPGAs I have never found a clock
skew problem. I always write my code to allow the clock trees to
deliver the clocks and I believe the tools guarantee that there will
not be a skew problem. Static timing actually does cover clock skew,
at least the tools I use.

BTW, how do you design a "right test bench"? Static timing analysis
will at least give you the coverage level, although one of my
complaints is that the tools don't provide any way of analyzing whether
your constraints are correct. But I have no idea how to verify that my
test bench is testing the timing adequately.

Rick

rickman

unread,
Apr 17, 2010, 10:17:51 AM4/17/10
to
On Apr 16, 6:37 pm, Bernd Paysan <bernd.pay...@gmx.de> wrote:
> glen herrmannsfeldt wrote:
> > If your test bench is good enough then it will catch all static
> > timing failures (eventually).  With static timing analysis, there
> > are many things that you don't need to check with the test bench.
>
> And then there are some corner cases where neither static timing analysis
> nor digital simulation helps - like signals crossing asynchronous clock
> boundaries (there *will* be a setup or hold violation, but a robust clock
> boundary crossing circuit will work in practice).
>
> Example: We had a counter running on a different clock (actually a VCO,
> where the voltage was an analog input), and to sample it robustly in the
> normal digital clock domain, I grey-encoded it.  There will be one bit which
> is either this or that when sampling at a setup or hold violation condition,
> but it is only that one bit, and it's either in the state before the
> increment or after.


But this has nothing to do with timing analysis. Clock domain crosses
*always* violate timing and require a logic solution, not a timing
test.

Rick

rickman

unread,
Apr 17, 2010, 10:19:27 AM4/17/10
to
On Apr 16, 2:56 pm, Bernd Paysan <bernd.pay...@gmx.de> wrote:
> Andy wrote:
> > IMHO, they missed the point. Any design that can be completed in a
> > couple of hours will necessarily favor the language with the least
> > overhead. Unfortunately, two-hour-solvable designs are not
> > representative of real life designs, and neither was the contest's
> > declared winner.
>
> Well, we pretty much know that the number of errors people make in
> programming languages basically depends on how much code they have to write
> - a language which has less overhead and is more terse is written
> faster and has fewer bugs.  And it goes non-linear, i.e. a program with 10k
> lines of code will have fewer bugs per 1000 lines than a program with 100k
> lines of code.  So the larger the project, the better the more terse
> language is.


That must be why we are all programming in APL, no?

Rick

glen herrmannsfeldt

unread,
Apr 17, 2010, 7:17:35 PM4/17/10
to
In comp.arch.fpga rickman <gnu...@gmail.com> wrote:
(snip on test benches)

>> I suppose so, but consider it the other way around.
>>
>> If your test bench is good enough then it will catch all static
>> timing failures (eventually).  With static timing analysis, there
>> are many things that you don't need to check with the test bench.

> I don't follow what you are saying. This first sentence seems to be
> saying that a timing simulation *is* a good place to find timing
> problems, or are you talking about real world test benches? The point
> is that static timing is enough to catch all timing failures given
> that your timing constraints cover the design properly... and I agree
> that is a big given. Your second sentence seems to be agreeing with
> my previous statement.

Yes, I was describing real world (hardware) test benches.

Depending on how close you are to a setup/hold violation,
it may take a long time for a failure to actually occur.



>> Also, you can't do static timing analysis on the implemented logic.
>> (That is, given an actual built circuit and a logic analyzer.)

> So?

>> Now, setup and hold violations are easy to test with static
>> analysis, but much harder to check in actual logic.  Among others,
>> you would want to check all possible clock skew failures, which is
>> normally not possible.  With the right test bench and logic
>> implementation (including programmable delays on each FF clock)
>> it might be possible, though.

> In twenty years of designing with FPGAs I have never found a clock
> skew problem. I always write my code to allow the clock trees to
> deliver the clocks and I believe the tools guarantee that there will
> not be a skew problem. Static timing actually does cover clock skew,
> at least the tools I use.

Yes, I was trying to cover the case of not using static timing
analysis but only testing actual hardware. For ASICs, it is
usually necessary to test the actual chips, though they should
have already passed static timing.


> BTW, how do you design a "right test bench"? Static timing analysis
> will at least give you the coverage level although one of my
> complaints is that they don't provide any tools for analyzing if your
> constraints are correct. But I have no idea how to verify that my
> test bench is testing the timing adequately.

If you only have one clock, it isn't so hard. As you add more,
with different frequencies and/or phases, it gets much harder,
I agree. It would be nice to get as much help as possible
from the tools.

-- glen

Martin Thompson

unread,
Apr 19, 2010, 5:36:21 AM4/19/10
to
Bernd Paysan <bernd....@gmx.de> writes:

> rickman wrote:
>> People say that strong typing catches bugs, but I've never seen any
>> real proof of that. There are all sorts of anecdotal evidence, but
>> nothing concrete.
>
> My practical experience is that strong typing creates another class of bugs,
> simply by making things more complicated. I last saw VHDL in use more
> than 10 years ago, but the typical pattern was that a designer wanted a bit
> vector, and created a subranged integer instead.

Surely the designer should've used a bit vector (for example
"unsigned" type) then? That's not the language's fault!

Cheers,
Martin

--
martin.j...@trw.com
TRW Conekt - Consultancy in Engineering, Knowledge and Technology
http://www.conekt.net/electronics.html

Martin Thompson

unread,
Apr 19, 2010, 5:40:32 AM4/19/10
to
Bernd Paysan <bernd....@gmx.de> writes:

> Andy wrote:
>> IMHO, they missed the point. Any design that can be completed in a
>> couple of hours will necessarily favor the language with the least
>> overhead. Unfortunately, two-hour-solvable designs are not
>> representative of real life designs, and neither was the contest's
>> declared winner.
>
> Well, we pretty much know that the number of errors people make in
> programming languages basically depends on how much code they have to write
> - a language which has less overhead and is more terse is written
> faster and has fewer bugs.

Citation needed :) (I happen to agree, but if you can point to good
studies, I'd be interested in reading ...)

> And it goes non-linear, i.e. a program with 10k lines of code will
> have fewer bugs per 1000 lines than a program with 100k lines of
> code. So the larger the project, the better the more terse language
> is.

Is that related to the terseness of the core language, or how many
useful library functions are available, so you don't have to reinvent
the wheel (and end up with a triangular one...)?

Andy

unread,
Apr 19, 2010, 9:27:36 AM4/19/10
to

The cost of bugs in code is not a constant, per-bug figure. The cost
is dominated by how hard it is to find the bug as early as possible.

So, in a verbose language, the number of bugs may go up, but the cost
of fixing the bugs goes down.

Case in point: Would you suggest that using positional notation in
port maps and argument lists is more prone to cause errors? And which
is prone to cost more to find and fix any errors? My point is not that
positional notation is an advantage of one language over another, it
is simply to debunk the "fewer lines of code = better code" myth.
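
To make the contrast concrete (entity and signal names invented for
illustration):

-- Positional association: compiles fine even if two same-sized ports
-- are connected in the wrong order.
u1 : entity work.fifo port map (clk, rst, wr_en, rd_en, din, dout);

-- Named association: more lines, but a swapped connection is visible
-- at a glance, and a forgotten port is a compile-time error.
u2 : entity work.fifo
  port map (
    clk   => clk,
    rst   => rst,
    wr_en => wr_en,
    rd_en => rd_en,
    din   => din,
    dout  => dout);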

Don't kid yourself that cryptic one-liners are more bug free than well
documented (e.g. compiler-enforced comments) code that is more
verbose.


Andy

Paul Uiterlinden

unread,
Apr 19, 2010, 10:12:53 AM4/19/10
to
Andrew FPGA wrote:

> Interesting in the discussion on myHdl/testbenches, no-one raised
> SystemVerilog. SystemVerilog raises the level of abstraction (like
> myHdl), but more importantly it introduces constrained random
> verification. For writing testbenches, SV is a better choice than
> MyHdl/VHDL/Verilog, assuming tool availability.

And assuming availability of financial means to buy licenses.
And assuming availability of time to rewrite existing verification
components, all of course written in VHDL.
And assuming availability of time to learn SV.
And assuming availability of time to learn OVM.
And ...

Oh man, is it because of the seasonal change that I'm feeling so tired, or
is it something else? ;-)

> It would seem that SV does not bring much to the table in terms of RTL
> design - its just a catchup to get verilog up to the capabilities that
> VHDL already has.

Indeed.

I agree that SV seems to give the most room for growth on the verification
side. VHDL is becoming too restrictive when you want to create really
reusable verification parts (reuse verification code from block level at
chip level). More often than not, the language is working against you in
that case, most of the time because it is strongly typed. In general I
prefer strongly typed over weakly typed. But sometimes it just gets in your
way.

For design I too do not see much advantage of SV over VHDL, especially when
you already are using VHDL. So then a mix would be preferable: VHDL for
design, SV/OVM for verification.

--
Paul Uiterlinden
www.aimvalley.nl
e-mail address: remove the not.

Martin Thompson

unread,
Apr 20, 2010, 5:11:54 AM4/20/10
to
Andy <jone...@comcast.net> writes:

> The cost of bugs in code is not a constant, per-bug figure. The cost
> is dominated by how hard it is to find the bug as early as possible.
>
> So, in a verbose language, the number of bugs may go up, but the cost
> of fixing the bugs goes down.
>
> Case in point: Would you suggest that using positional notation in
> port maps and argument lists is more prone to cause errors? And which
> is prone to cost more to find and fix any errors? My point is not that
> positional notation is an advantage of one language over another, it
> is simply to debunk the "fewer lines of code = better code" myth.

Agreed - as with all things, any extreme position is daft...

The problem (as I see it) comes with languages (or past
design-techniques enforced by synthesis tools) which are not
descriptive enough to allow you to express your intent in a
"relatively" small amount of code. Which is why assembly is not as
widely used anymore, and more behavioural descriptions win out over
instantiating LUTs/FFs by hand. It's not about the verboseness of the
language per-se, more about the ability to show (clearly) your intent
relatively concisely.

And much of the "verboseness" in VHDL can be mitigated with tools like
Emacs or Sigasi's product. And much of the other perceived
verboseness can be overcome by writing "modern" code: single process,
using variables, functions, procedures (the sort of thing some of us
do all the time!)

BTW - I write a lot of VHDL and a fair bit of Python, so I see both
sides of the language fence. (I also write a fair amount of Matlab, which
annoys me in many many ways due to the way features have been kluged
on over time, but it's sooo quick to do some things that way!). I
can't see myself moving over to Verilog either - the conciseness
doesn't seem to be the "right sort" of conciseness for me.

>
> Don't kid yourself that cryptic one-liners are more bug free than well
> documented (e.g. compiler-enforced comments) code that is more
> verbose.
>

Indeed, I don't (kid myself that is)! You can write rubbish in any
language :)

BTW, what do you mean by "compiler-enforced comments" - is it "simply"
that code should be as self-documenting as possible? Or something else?

Andy

unread,
Apr 20, 2010, 1:36:53 PM4/20/10
to
On Apr 20, 4:11 am, Martin Thompson <martin.j.thomp...@trw.com> wrote:
> BTW, what do you mean by "compiler-enforced comments" - is it "simply"
> that code should be as self-documenting as possible? Or something else?

Self-documenting and more.

I look at many aspects of a strongly typed language as encouraging or
requiring "verbosity" that would otherwise need comments to explain.
Hence we have different types for vectors of the same base element
type, based on how the contents are to be interpreted, and/or
limitations of the data contained. Integer subtypes allow you to
constrain (and verify) the contents more thoroughly, at least
numerically.

By choosing the appropriate data types, you are telling the tool and
subsequent reviewers/users/maintainers something about your code, and
how it works (and sometimes more importantly, how it doesn't or won't
work.)
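
For instance (a fragment, with invented names):

-- An integer subtype documents and enforces the legal range; any
-- out-of-range write fails at run time in simulation.
subtype sample_index_t is integer range 0 to 1023;
signal  rd_ptr : sample_index_t;

-- Distinct vector types say how the same bits are to be interpreted.
signal raw_bits  : std_logic_vector(11 downto 0);  -- just a bag of bits
signal adc_code  : unsigned(11 downto 0);          -- unsigned magnitude
signal dc_offset : signed(11 downto 0);            -- two's-complement value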

By using assertion statements, you can not only document assumptions
and limitations of your code, but also ensure that those are met.

Some of these features are enforced by the compiler, some are enforced
by standards compliant implementations. But they are enforced, which
is more than we can say about comments. How many times have you seen
comments that were clearly written for a previous version of the code,
but not completely updated in later revisions to that code?

Andy

Jonathan Bromley

unread,
Apr 20, 2010, 3:31:19 PM4/20/10
to
On Apr 20, 6:36 pm, Andy <jonesa...@comcast.net> wrote:
> Some of these features are enforced by the compiler, some are enforced
> by standards compliant implementations. But they are enforced, which
> is more than we can say about comments. How many times have you seen
> comments that were clearly written for a previous version of the code,
> but not completely updated in later revisions to that code?

Hear, hear!

There is, of course, another form of enforced comment - the assertion.
Because assertions are pure observers and can't affect anything else [*],
they have a similar status to comments - they make a statement about what
you THINK is happening in your code - but they are *checked at runtime*.
VHDL has permitted procedural assertions since forever; nowadays, though,
we have come to expect more sophisticated assertions allowing us to
describe (assert) behaviours that span over a period of time.

Some of the benefits of strict data types that Andy and Martin mention
can be replicated by assertions. Others, though, are not so easy;
for example, an integer subtype represents an assertion on _any_
attempt to write to _any_ variable of that subtype, checking that
the written value is within bounds. It would be painful to write
explicit assertions to do that. So the two ideas go hand in hand:
data types (and a few other constructs) can automate certain simple
assertions that would be tiresome to write manually; hand-written
assertions can be very powerful for describing specific behaviours
at points in the code where you fear that bad things might happen.
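
To make that concrete, the two fragments below (invented, and assuming
numeric_std and a clk signal in scope) express the same check, once
automated by a subtype and once written by hand:

-- Automatic: every assignment to 'level' is bounds-checked.
signal level : integer range 0 to 15;

-- Manual: the same intent as an explicit, hand-placed assertion.
signal level_u : unsigned(4 downto 0);

check : process (clk)
begin
  if rising_edge(clk) then
    assert to_integer(level_u) <= 15
      report "level_u out of range" severity error;
  end if;
end process;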

Choosing to use neither of these sanity-checking tools seems to me
to be a rather refined form of masochism, given how readily
available both are.

[*] Before anyone asks: yes, I know that SystemVerilog assertions
can have side effects. And yes, I have made use of that in real
production code, albeit only to build observers in testbenches.
The tools are there to be used, dammit.....
--
Jonathan Bromley

rickman

unread,
Apr 20, 2010, 4:44:34 PM4/20/10
to
On Apr 17, 7:17 pm, glen herrmannsfeldt <g...@ugcs.caltech.edu> wrote:
> In comp.arch.fpga rickman <gnu...@gmail.com> wrote:
> (snip on test benches)
>
> >> I suppose so, but consider it the other way around.
> >> If your test bench is good enough then it will catch all static
> >> timing failures (eventually).  With static timing analysis, there
> >> are many things that you don't need to check with the test bench.
> > I don't follow what you are saying.  This first sentence seems to be
> > saying that a timing simulation *is* a good place to find timing
> > problems, or are you talking about real world test benches?  The point
> > is that static timing is enough to catch all timing failures given
> > that your timing constraints cover the design properly... and I agree
> > that is a big given.  Your second sentence seems to be agreeing with
> > my previous statement.
>
> Yes, I was describing real world (hardware) test benches.
>
> Depending on how close you are to a setup/hold violation,
> it may take a long time for a failure to actually occur.

That is the point. Finding timing violations in a simulation is hard,
finding them in physical hardware is not possible to do with any
certainty. A timing violation depends on the actual delays on a chip
and that will vary with temperature, power supply voltage and process
variations between chips. I had to work on a problem design once
because the timing analyzer did not work or the constraints did not
cover (I firmly believe it was the tools, not the constraints since it
failed on a number of different designs). We tried finding the chip
that failed at the lowest temperature and then used that at an
elevated temperature for our "final" timing verification. Even with
that, I had little confidence that the design would never have a
problem from timing. Of course on top of that the chip was being used
at 90% capacity. This design is the reason I don't work for that
company anymore. The section head knew about all of these problems
before he assigned the task and then expected us to work 70 hour work
weeks. At least we got them to buy us $100 worth of dinner each
evening!

The point is that if you don't do static timing analysis (or have an
analyzer that is broken) timing verification is nearly impossible.


> >> Also, you can't do static timing analysis on the implemented logic.
> >> (That is, given an actual built circuit and a logic analyzer.)
> > So?
> >> Now, setup and hold violations are easy to test with static
> >> analysis, but much harder to check in actual logic.  Among others,
> >> you would want to check all possible clock skew failures, which is
> >> normally not possible.  With the right test bench and logic
> >> implementation (including programmable delays on each FF clock)
> >> it might be possible, though.
> > In twenty years of designing with FPGAs I have never found a clock
> > skew problem.  I always write my code to allow the clock trees to
> > deliver the clocks and I believe the tools guarantee that there will
> > not be a skew problem.  Static timing actually does cover clock skew,
> > at least the tools I use.
>
> Yes, I was trying to cover the case of not using static timing
> analysis but only testing actual hardware.  For ASICs, it is
> usually necessary to test the actual chips, though they should
> have already passed static timing.  

If you find a timing bug in the ASIC chip, isn't that a little too
late? Do you test at elevated temperature? Do you generate special
test vectors? How is this different from just testing the logic?


> > BTW, how do you design a "right test bench"?  Static timing analysis
> > will at least give you the coverage level although one of my
> > complaints is that they don't provide any tools for analyzing if your
> > constraints are correct.  But I have no idea how to verify that my
> > test bench is testing the timing adequately.
>
> If you only have one clock, it isn't so hard.  As you add more,
> with different frequencies and/or phases, it gets much harder,
> I agree.  It would be nice to get as much help as possible
> from the tools.

The number of clocks is irrelevant. I don't consider timing issues of
crossing clock domains to be "timing" problems. There you can only
solve the problem with proper logic design, so it is a logic
problem.

Rick

rickman

unread,
Apr 20, 2010, 5:10:26 PM4/20/10
to
On Apr 20, 5:11 am, Martin Thompson <martin.j.thomp...@trw.com> wrote:

> Andy <jonesa...@comcast.net> writes:
> > The cost of bugs in code is not a constant, per-bug figure. The cost
> > is dominated by how hard it is to find the bug as early as possible.
>
> > So, in a verbose language, the number of bugs may go up, but the cost
> > of fixing the bugs goes down.
>
> > Case in point: Would you suggest that using positional notation in
> > port maps and argument lists is more prone to cause errors? And which
> > is prone to cost more to find and fix any errors? My point is not that
> > positional notation is an advantage of one language over another, it
> > is simply to debunk the "fewer lines of code = better code" myth.
>
> Agreed - as with all things, any extreme position is daft...
>
> The problem (as I see it) comes with languages (or past
> design-techniques enforced by synthesis tools) which are not
> descriptive enough to allow you to express your intent in a
> "relatively" small amount of code.  Which is why assembly is not as
> widely used anymore, and more behavioural descriptions win out over
> instantiating LUTs/FFs by hand.  It's not about the verboseness of the
> language per-se, more about the ability to show (clearly) your intent
> relatively concisely.

Or do you think it has to do with the fact that the tools do a better
job, so that the efficiencies of using assembly language and
instantiating components are greatly reduced?


> And much of the "verboseness" in VHDL can be mitigated with tools like
> Emacs or Sigasi's product.  And much of the other perceived
> verboseness can be overcome by writing "modern" code: single process,
> using variables, functions, procedures (the sort of thing some of us
> do all the time!)

I am going to give Emacs a try this summer when I have more free
time. I don't see the things you mention as being a solution because
they don't address the problem. The verboseness is inherent in VHDL.
Type casting is something that makes it verbose. That can often be
mitigated by using the right types in the right places. I never use
std_logic_vector anymore.


Rick

Patrick Maupin

unread,
Apr 20, 2010, 5:44:02 PM4/20/10
to
On Apr 16, 4:38 am, David Brown <da...@westcontrol.removethisbit.com>
wrote:

> The old joke about Ada is that when you get your code to compile, it's
> ready to ship.  I certainly wouldn't go that far, but testing is
> something you do in cooperation with static checking, not as an alternative.

GOOD static checking tools are great (and IMHO part of a testbench).
I certainly hope you're not trying to imply that the typechecking
built into VHDL is a substitute for a good model checker!

Regards,
Pat

rickman

unread,
Apr 20, 2010, 5:59:37 PM4/20/10
to
On Apr 20, 1:36 pm, Andy <jonesa...@comcast.net> wrote:
> On Apr 20, 4:11 am, Martin Thompson <martin.j.thomp...@trw.com> wrote:
>
> > BTW, what do you mean by "compiler-enforced comments" - is it "simply"
> > that code should be as self-documenting as possible? Or something else?
>
> Self-documenting and more.
>
> I look at many aspects of a strongly typed language as encouraging or
> requiring "verbosity" that would otherwise need comments to explain.
> Hence we have different types for vectors of the same base element
> type, based on how the contents are to be interpreted, and/or
> limitations of the data contained. Integer subtypes allow you to
> constrain (and verify) the contents more thoroughly, at least
> numerically.
>
> By choosing the appropriate data types, you are telling the tool and
> subsequent reviewers/users/maintainers something about your code, and
> how it works (and sometimes more importantly, how it doesn't or won't
> work.)

I used to use boolean for most signals that were used as controls for
ifs and such. But in the simulator these are displayed as values
rather than the 'scope-type trace used for std_logic, which I find much
more readable. So I don't use boolean. I seldom use enumerated
types. I find nearly everything I want to do works very well with
std_logic, unsigned, signed and integers with defined ranges. I rely
on comments to explain what is going on, because when it is not clear
from reading the code, I think there is little that using various
types will add to the picture.


> By using assertion statements, you can not only document assumptions
> and limitations of your code, but also ensure that those are met.

I've never used assertions in my synthesized code.  I hate getting
warnings from the tools so I don't like to provoke them. There are
times I use assignments in declarations of signed or unsigned types to
avoid warnings I get during simulation. But then these produce
warnings in synthesis, so you can't always win.

Can you give an example of an assertion you would use in synthesized
code?


> Some of these features are enforced by the compiler, some are enforced
> by standards compliant implementations. But they are enforced, which
> is more than we can say about comments. How many times have you seen
> comments that were clearly written for a previous version of the code,
> but not completely updated in later revisions to that code?

I only worry about my comments...

Rick

glen herrmannsfeldt

unread,
Apr 20, 2010, 5:56:26 PM4/20/10
to
In comp.arch.fpga rickman <gnu...@gmail.com> wrote:
> On Apr 17, 7:17 pm, glen herrmannsfeldt <g...@ugcs.caltech.edu> wrote:
(snip on test benches)

>> Yes, I was describing real world (hardware) test benches.

>> Depending on how close you are to a setup/hold violation,
>> it may take a long time for a failure to actually occur.

> That is the point. Finding timing violations in a simulation is hard,
> finding them in physical hardware is not possible to do with any
> certainty. A timing violation depends on the actual delays on a chip
> and that will vary with temperature, power supply voltage and process
> variations between chips.

But they have to be done for ASICs, and all other chips, as part of
the fabrication process.  For FPGAs you mostly don't have to do that,
relying on the specifications and on the chips having been tested
appropriately at the factory.

> I had to work on a problem design once
> because the timing analyzer did not work or the constraints did not
> cover (I firmly believe it was the tools, not the constraints since it
> failed on a number of different designs). We tried finding the chip
> that failed at the lowest temperature and then used that at an
> elevated temperature for our "final" timing verification. Even with
> that, I had little confidence that the design would never have a
> problem from timing. Of course on top of that the chip was being used
> at 90% capacity. This design is the reason I don't work for that
> company anymore. The section head knew about all of these problems
> before he assigned the task and then expected us to work 70 hour work
> weeks. At least we got them to buy us $100 worth of dinner each
> evening!

One that I worked with, though not at all at that level, was
a programmable ASIC (for a systolic array processor).  For some
reason that I never knew, the timing was just a little bit off
on writes to the internal RAM.  The solution was to use
two successive writes, which seemed to work. In the usual operation
mode, the RAM was initialized once, so the extra cycle wasn't much
of a problem. There were also some modes where the RAM had to
be written while processing data, such that the extra cycle meant
that the processor ran that much slower.


> The point is that if you don't do static timing analysis (or have an
> analyzer that is broken) timing verification is nearly impossible.

And even if you do, the device might still have timing problems.

(snip)


>> Yes, I was trying to cover the case of not using static timing
>> analysis but only testing actual hardware.  For ASICs, it is
>> usually necessary to test the actual chips, though they should
>> have already passed static timing.

> If you find a timing bug in the ASIC chip, isn't that a little too
> late? Do you test at elevated temperature? Do you generate special
> test vectors? How is this different from just testing the logic?

It might be that it works at a lower clock rate, or other workarounds
can be used. Yes, it is part of testing the logic.

(snip)

>> If you only have one clock, it isn't so hard.  As you add more,
>> with different frequencies and/or phases, it gets much harder,
>> I agree.  It would be nice to get as much help as possible
>> from the tools.

> The number of clocks is irrelevant. I don't consider timing issues of
> crossing clock domains to be "timing" problems. There you can only
> solve the problem with proper logic design, so it is a logic
> problem.

Yes, there is nothing to do about asynchronous clocks. It just has
to work in all cases. But in the case of supposedly related
clocks, you have to verify it. There are designs that have one
clock a multiple of the other clock frequency, or multiple phases
with specified timing relationship. Or even single clocks with
specified duty cycle. (I still remember the 8086 with its 33% duty
cycle clock.)

With one clock you can run combinations of voltage, temperature,
and clock rate, not so hard but still a lot of combinations.
With related clocks, you have to verify that the timing between
the clocks works.

-- glen

Andy

unread,
Apr 20, 2010, 7:55:57 PM4/20/10
to
I don't know of any assertions that are used by the synthesis tool,
but I do use assertions in my RTL to help verify the code in
conjunction with the testbench.

The most common occurrence is with counters. I can use integer
subtypes with built-in bounds checking to make sure a counter never
overflows or rolls over on its own (when the surrounding circuitry
should never allow that to happen, but if it did, I would want to know
about it first hand). Or I can use assertion statements with unsigned
when the allowable range for a counter is not 0 to 2**n-1.
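
The counter case looks something like this (a sketch, with invented
clock and control names):

-- The subtype's bounds check fires in simulation if the surrounding
-- circuitry ever lets the counter overflow.
signal beat_cnt : integer range 0 to 99;

count : process (clk)
begin
  if rising_edge(clk) then
    if sync_pulse = '1' then
      beat_cnt <= 0;
    else
      beat_cnt <= beat_cnt + 1;  -- simulation stops at 100 instead of
                                 -- silently wrapping
    end if;
  end if;
end process;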

If I know that a set of external interface strobe inputs should be
mutually exclusive, and I have optimized the circuit to take advantage
of that, then I use an assertion to verify it. It would be nice if the
synthesis tool recognized the assertion, and optimized the circuit for
me, but I'll take what I can get.
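
And the strobe case (again with invented signal names):

-- Document and enforce the interface assumption the optimization
-- relies on: the two strobes are never active in the same cycle.
check_strobes : process (clk)
begin
  if rising_edge(clk) then
    assert not (rd_stb = '1' and wr_stb = '1')
      report "rd_stb and wr_stb asserted simultaneously" severity error;
  end if;
end process;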

I'm rarely concerned with what waveforms look like, since the vast
majority of my debugging is with the source level debugger, not the
waveform viewer.

I suspect that if your usage is constrained to the data types you
listed (except bounded integer subtypes), you may do well with
Verilog. But given that you may not use a lot of what is available in
VHDL, it would be worthwhile to compare the productivity gains from
using more of the capabilities of the language you already know, to
the gains from changing to a whole new language.

Andy

Cesar

unread,
Apr 21, 2010, 2:18:14 AM4/21/10
to
On Apr 20, 11:11 am, Martin Thompson <martin.j.thomp...@trw.com>
wrote:

> And much of the "verboseness" in VHDL can be mitigated with tools like
> Emacs or Sigasi's product.  And much of the other perceived
> verboseness can be overcome by writing "modern" code: single process,
> using variables, functions, procedures (the sort of thing some of us
> do all the time!)

BTW, for a long time I have been hearing about VHDL "modern" code and the
methods you enumerate. Since typical VHDL books do not deal with coding
style, do you know of any VHDL book explaining this modern coding style?

César

Martin Thompson

unread,
Apr 21, 2010, 5:05:27 AM4/21/10
to
Andy <jone...@comcast.net> writes:

> On Apr 20, 4:11 am, Martin Thompson <martin.j.thomp...@trw.com> wrote:
>> BTW, what do you mean by "compiler-enforced comments" - is it "simply"
>> that code should be as self-documenting as possible? Or something else?
>
> Self-documenting and more.
>
> I look at many aspects of a strongly typed language as encouraging or
> requiring "verbosity" that would otherwise need comments to explain.
> Hence we have different types for vectors of the same base element
> type, based on how the contents are to be interpreted, and/or
> limitations of the data contained. Integer subtypes allow you to
> constrain (and verify) the contents more thoroughly, at least
> numerically.
>
> By choosing the appropriate data types, you are telling the tool and
> subsequent reviewers/users/maintainers something about your code, and
> how it works (and sometimes more importantly, how it doesn't or won't
> work.)

Indeed so - capturing the knowledge that gets lost when you just use a
"bag-of-bits" type for everything.

>
> By using assertion statements, you can not only document assumptions
> and limitations of your code, but also ensure that those are met.

Yes, I sprinkle assertions through both RTL and testbench code - they
usually trigger when I come back to the code after several weeks ;)

>
> Some of these features are enforced by the compiler, some are enforced
> by standards compliant implementations. But they are enforced, which
> is more than we can say about comments. How many times have you seen
> comments that were clearly written for a previous version of the code,
> but not completely updated in later revisions to that code?
>

Well said - I had a feeling that was what you meant by
compiler-enforced comments: it's the whole of the code which should be
used as documentation.

Martin Thompson

unread,
Apr 21, 2010, 5:16:57 AM4/21/10
to
rickman <gnu...@gmail.com> writes:

> On Apr 20, 5:11 am, Martin Thompson <martin.j.thomp...@trw.com> wrote:
>> The problem (as I see it) comes with languages (or past
>> design-techniques enforced by synthesis tools) which are not
>> descriptive enough to allow you to express your intent in a
>> "relatively" small amount of code.  Which is why assembly is not as
>> widely used anymore, and more behavioural descriptions win out over
>> instantiating LUTs/FFs by hand.  It's not about the verboseness of the
>> language per-se, more about the ability to show (clearly) your intent
>> relatively concisely.
>
> Or do you think it has to do with the fact that the tools do a better
> job so that the efficiencies of using assembly language and
> instantiating components is greatly reduced?

Well, that's the reason we *had* to use assembly in the past, the
tools weren't up to the job of *allowing* us to specify the behaviour
in a higher-level way. I know very few people who would choose to
write assembly unless they have a problem they just can't solve in C
for example.

>
>
>> And much of the "verboseness" in VHDL can be mitigated with tools like
>> Emacs or Sigasi's product.  And much of the other perceived
>> verboseness can be overcome by writing "modern" code: single process,
>> using variables, functions, procedures (the sort of thing some of us
>> do all the time!)
>
> I am going to give Emacs a try this summer when I have more free
> time.

Emacs is quite a big investment, but it's a tool for life. I wonder
what looks I'll garner when I'm still using it in 20 years time :)

> I don't see the things you mention as being a solution because
> they don't address the problem.

What is the problem - that there's a lot of typing? If that's it then
they *do* solve the problem. VHDL-mode pretty much makes it so you
only have to type in the bits that really matter, all the boilerplate
is done for you (up to and including testbenches if you want).

> The verboseness is inherent in VHDL.

To some extent it is, but it's there to be used (as Andy pointed out).
VHDL has a lot less overhead when you use a single process inside each
entity. And the instantiation matching ports to wires (which looks
terribly verbose and is a pain to do in a copy/paste style) stops a
lot of silly "positioning" mistakes. (You could always choose to just
do a positional instantiation IIRC).

> Type casting is something that makes it verbose.

Yes, but it forces you to show you know what you're doing, rather than
the compiler just allowing you to do it, where it works for now, but in
the future some corner-case comes up which breaks the implicit
assumptions that you have put in the code.  With strong typing, you
have to be explicit about the assumptions.
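
A typical example of that explicitness (a fragment, with invented
signals):

signal slv_in, slv_out : std_logic_vector(7 downto 0);
signal u_sum           : unsigned(7 downto 0);

-- The conversions spell out the assumption that slv_in carries an
-- unsigned value; nothing gets reinterpreted silently.
u_sum   <= unsigned(slv_in) + 1;
-- ...and convert back at the bag-of-bits boundary:
slv_out <= std_logic_vector(u_sum);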

> That can often be mitigated by using the right types in the right
> places.

I'd say it's almost always mitigated by using the right types in the
right places. The times it causes me pain is when I have to
instantiate a 3rd party entity (usually an FPGA-vendor RAM) which has
bag-of-bits vectors everywhere (even on the address lines, for example).

> I never use std_logic_vector anymore.

I wouldn't go that far, but I use them a lot less than some others do
:)

Martin Thompson

unread,
Apr 21, 2010, 5:17:50 AM4/21/10
to
Cesar <ceste...@gmail.com> writes:

Sorry, no.

Maybe some of us should take some of the sample code from the classic
texts and rewrite it "our way" :)

Nial Stewart

unread,
Apr 21, 2010, 6:02:40 AM4/21/10
to
>> The point is that if you don't do static timing analysis (or have an
>> analyzer that is broken) timing verification is nearly impossible.
>
> And even if you do, the device might still have timing problems.


Can you expand on this Glen?

As I have always understood it, one of the bedrocks of FPGA design is that
when it has passed a properly constrained static timing analysis, an FPGA
design will always work (from a timing point of view).

Nial.


glen herrmannsfeldt

unread,
Apr 21, 2010, 7:36:14 AM4/21/10
to

Well, some of the comments were regarding ASIC design, where
things aren't so sure.  For FPGA designs there is, as you say,
"properly constrained", which isn't true for all design and tool
combinations.  One case that I have heard of, though haven't actually
tried, is a logic block where the delay is greater than
one clock cycle, but less than two.  Maybe some tools can handle that,
but I don't believe that all can.

-- glen


rickman

unread,
Apr 22, 2010, 12:07:09 AM4/22/10
to
glen herrmannsfeldt wrote:
> In comp.arch.fpga rickman <gnu...@gmail.com> wrote:
> > On Apr 17, 7:17 pm, glen herrmannsfeldt <g...@ugcs.caltech.edu> wrote:
> (snip on test benches)
>
> >> Yes, I was describing real world (hardware) test benches.
>
> >> Depending on how close you are to a setup/hold violation,
> >> it may take a long time for a failure to actually occur.
>
> > That is the point. Finding timing violations in a simulation is hard,
> > finding them in physical hardware is not possible to do with any
> > certainty. A timing violation depends on the actual delays on a chip
> > and that will vary with temperature, power supply voltage and process
> > variations between chips.
>
> But they have to be done for ASICs, and all other chips as
> part of the fabrication process. For FPGAs you mostly don't
> have to do such, relying on the specifications and that the chips
> were tested appropriately in the factory.

I don't follow your reasoning. Why is finding timing violations in
ASICs any different from FPGA? If the makers of ASICs can't
characterize their devices well enough for static timing analysis to
find the timing problems then ASIC designers are screwed.

You keep saying that, but you don't explain.

But you can't verify timing by testing. You can never have any level
of certainty that you have tested all the ways the timing can fail.
If the clocks are related, what exactly are you testing, that they
*are* related? Timing is something that has to be correct by
design.

Rick

mike v.

unread,
Apr 22, 2010, 12:30:50 AM4/22/10
to
I also use separate sequential and combinatorial always blocks. At
first I felt that I should be able to have just a single sequential
block but quickly became accustomed to 2 blocks and it now feels
natural and I don't think it limits my ability to express my intent at
all. Most of the experienced designers I work with use this style but
not all of them.

Kim Enkovaara

unread,
Apr 22, 2010, 2:09:40 AM4/22/10
to
glen herrmannsfeldt wrote:
> combinations. One that I have heard of, though haven't actually
> tried, is having a logic block where the delay is greater than
> one clock cycle, but less than two. Maybe some tools can do that,
> but I don't believe that all can.

Just a normal multicycle path; that has been a standard feature in tools
for a long time.  At least Altera, Xilinx, Synplify, Primetime and
Precision support it.
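
For reference, in the generic SDC-style syntax most of those tools
accept, a two-cycle path is declared along these lines (register names
invented; the exact collection commands vary by tool):

# Give the path two clock cycles for setup analysis...
set_multicycle_path -setup -from [get_registers mult_a_reg*] -to [get_registers mult_sum_reg*] 2
# ...and move the hold check back so it is not analyzed a cycle late.
set_multicycle_path -hold  -from [get_registers mult_a_reg*] -to [get_registers mult_sum_reg*] 1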

--Kim

Kim Enkovaara

unread,
Apr 22, 2010, 2:15:24 AM4/22/10
to
rickman wrote:
> But you can't verify timing by testing. You can never have any level
> of certainty that you have tested all the ways the timing can fail.

Especially with an ASIC you can't verify the design by testing.  There are
so many signoff corners and modes in the timing analysis.  The old
worst/best case in normal and test modes is long gone.  Even 6-corner
analysis in 2+ modes is for low-end processes with big extra margins.
With multiple adjustable internal voltage areas, powerdown areas, etc.,
the analysis is hard even with STA.

--Kim

Nial Stewart

unread,
Apr 22, 2010, 5:26:50 AM4/22/10
to
> Well, some of the comments were regarding ASIC design, where
> things aren't so sure. For FPGA designs, there is, as you say,
> "properly constrained" which isn't true for all design and tool
> combinations. One that I have heard of, though haven't actually
> tried, is having a logic block where the delay is greater than
> one clock cycle, but less than two. Maybe some tools can do that,
> but I don't believe that all can.


As Kim says, multi-cycle paths have been 'constrainable' in any FPGA
tool I have used for as long as I can remember.


Nial.


Patrick Maupin

unread,
Apr 22, 2010, 9:58:33 AM4/22/10
to
On Apr 22, 1:15 am, Kim Enkovaara <kim.enkova...@iki.fi> wrote:

> Especially with ASIC you can't verify the design by testing. There are
> so many signoff corners and modes in the timing analysis. The old
> worst/best case in normal and testmode are long gone. Even 6 corner
> analysis in 2+ modes is for low end processes with big extra margins.
> With multiple adjustable internal voltage areas, powerdown areas etc.
> the analysis is hard even with STA.

For the record, I agree that lots of static analysis is necessary
(static timing, model checking, etc.)  The thesis when I started this
sub-thread is that what the *language* gives you (VHDL vs. Verilog) is
such a small subset of the possible checking as to be of little use.  I
will now add that it comes at a huge cost (in coding things just right).

Regards,
Pat

Andy

unread,
Apr 22, 2010, 6:32:18 PM4/22/10
to
Other than twice the declarations, unintentional latches, explicitly
coding clock enables, simulation penalties, etc., using separate
combinatorial and sequential blocks is just fine.

Most designers here use single clocked processes / always blocks.
Those that don't are 'encouraged' to.
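
For anyone who hasn't seen it, the single-process style in its simplest
form (an invented example; clk, enable and count are assumed to be in
scope):

-- One clocked process: next-state logic and registers together, no
-- separate sensitivity list to get wrong, no latches possible, and
-- the clock enable is just an ordinary 'if'.
counter : process (clk)
  variable count_v : unsigned(7 downto 0) := (others => '0');
begin
  if rising_edge(clk) then
    if enable = '1' then
      count_v := count_v + 1;   -- "combinatorial" part, as a variable
    end if;
    count <= count_v;           -- registered output
  end if;
end process;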

Andy

Patrick Maupin

unread,
Apr 22, 2010, 6:36:30 PM4/22/10
to
On Apr 22, 5:32 pm, Andy <jonesa...@comcast.net> wrote:
> Other than twice the declarations, unintentional latches, explicitly
> coding clock enables, simulation penalties, etc., using separate
> combinatorial and sequential blocks is just fine.

Unintentional latches don't happen if you use a consistent naming
style with, e.g. 'next_x' and 'x'.

I don't think simulation penalties happen if the simulator is halfway
decent.

Twice the declarations is a slight issue, but if you do reg [2:0] x,
next_x; it's not too bad at all.
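
In VHDL terms the pairing looks something like this (my transliteration
of the convention, not actual project code; clk and start are invented):

signal x, next_x : unsigned(2 downto 0);

comb : process (x, start)
begin
  next_x <= x;            -- default assignment covers every path...
  if start = '1' then
    next_x <= x + 1;      -- ...so a missed branch can't infer a latch
  end if;
end process;

seq : process (clk)
begin
  if rising_edge(clk) then
    x <= next_x;          -- the only place x is written
  end if;
end process;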

Explicitly coding clock enables -- not sure what you mean here --
that's an if statement no matter how you slice it.

Regards,
Pat

KJ

unread,
Apr 22, 2010, 9:57:41 PM4/22/10
to
On Apr 22, 6:36 pm, Patrick Maupin <pmau...@gmail.com> wrote:
> On Apr 22, 5:32 pm, Andy <jonesa...@comcast.net> wrote:
>
> > Other than twice the declarations, unintentional latches, explicitly
> > coding clock enables, simulation penalties, etc., using separate
> > combinatorial and sequential blocks is just fine.
>
> Unintentional latches don't happen if you use a consistent naming
> style with, e.g. 'next_x' and 'x'.
>

Ha, ha, ha...having a naming convention prevents latches???!!!

Ummmmmm, noooooooo, don't think so, but thanks for the possibly
unintended humor.

Whether you have a naming convention or not, latches will be created
when assignment statements to cover every path are not included in an
unclocked process, totally independent of how they are named...end of
story

I suppose your point is that calling things 'next_x' and 'x' then
makes it easier to do a *manual* inspection and perhaps catch such a
missing assignment but that is a far, far stretch from your actual
statement "Unintentional latches don't happen if...". Andy, on the
other hand, would be on much firmer ground had he said "Unintentional
latches don't happen if you only use clocked processes"...he didn't
explicitly say that, but I'm fairly certain he would agree.

Yes, you can do things to make *manual* inspections better...which is
like saying it hurts my head less if I use a rubber hammer to hit my
head than a steel one...but it is a far better process improvement to
just not hit my head at all with any hammer.

KJ

Patrick Maupin

unread,
Apr 22, 2010, 10:12:33 PM4/22/10
to
On Apr 22, 8:57 pm, KJ <kkjenni...@sbcglobal.net> wrote:

> I suppose your point is that calling things 'next_x' and 'x' then
> makes it easier to do a *manual* inspection and perhaps catch such a
> missing assignment but that is a far, far stretch from your actual
> statement "Unintentional latches don't happen if...".  Andy, on the
> other hand, is on much firmer ground had he said "Unintentional
> latches don't happen if you only use clocked processes"...he didn't
> explicitly say that, but I'm fairly certain he would agree.

Yes, I should have been more clear about that. Any decent synthesizer
or simulation tool will report latches, but sometimes the reports are
hard to interpret. If you use a consistent naming convention like I
have described, it is easy to find the latches, and also easy to write
a script to find them, as well.

And I agree that you won't have latches if all your processes are
clocked, but latches are much easier to detect and rectify than some
other possible logic problems.

Regards,
Pat

KJ

unread,
Apr 23, 2010, 12:38:13 AM4/23/10
to
On Apr 22, 10:12 pm, Patrick Maupin <pmau...@gmail.com> wrote:
> On Apr 22, 8:57 pm, KJ <kkjenni...@sbcglobal.net> wrote:
>

> Yes, I should have been more clear about that.

Agreed, you're not very clear when you have statements like this from
your previous post...

> Unintentional latches don't happen if you use a consistent naming
> style with, e.g. 'next_x' and 'x'.

followed up with statements like this...

> If you use a consistent naming convention like I
> have described, it is easy to find the latches,

So, first you say that the naming convention by itself will prevent
unintentional latches and then follow that up to say that the naming
convention helps you to *find* the unintended latches that couldn't be
created in the first place...hmmm....yes, I agree, not very clear.

Both statements indicating that you may be oblivious to the simple
point that using non-clocked processes opens you up to making it easy
to create your own problems (i.e. the latches) that are easily avoided
in the first place...

> And I agree that you won't have latches if all your processes are
> clocked,

Oops, I guess not because now it seems that you do get the point that
clocked processes for the most part avoid the unintended latch...but
based on the earlier comments I guess you must not practice it or
something for some reason...You admit it avoids a potential design
problem, but don't use it because....hmmmm....well, perhaps you have a
sponge hammer...

Ah well...as long as there are textbooks I guess there will always be
those disciples that think that separating combinatorial logic from
the register description actually is of some value to somebody,
somewhere at some point in time...but inevitably they come up short
when trying to demonstrate that value.

> easy to find the latches, and also easy to write
> a script to find them, as well

Then they will trot out the methods they use to minimize the problem
that others do not even have.

While those disciples are steadfast in their belief, it usually
doesn't get across to them that the value they perceive is actually
negative, it is costing them...and they are left clinging to the only
thing they have left which is always a statement of the form "That's
the way I do it, I'm comfortable with it, I feel I'm productive doing
it this way"

Every now and then, it seems like good sport to challenge those folks
to see if they have anything better to offer, but it never seems to
be.

> but latches are much easier to detect and rectify than some
> other possible logic problems.
>

And much easier to avoid in the first place too...with the correct
methodology (hint: that would be the one that avoids using unclocked
processes)

Just having fun...like I said, every now and then it's good sport to
poke fun at the people who make their own problems.

Kevin Jennings

Patrick Maupin

unread,
Apr 23, 2010, 1:01:38 AM4/23/10
to
On Apr 22, 11:38 pm, KJ <kkjenni...@sbcglobal.net> wrote:
> > but latches are much easier to detect and rectify than some
> > other possible logic problems.
>
> And much easier to avoid in the first place too...with the correct
> methodology (hint:  that would be the one that avoids using unclocked
> processes)
>
> Just having fun...like I said, every now and then it's good sport to
> poke fun at the people who make their own problems.
>

But, the reason I was unclear to start with, is that it's been so long
since I've had an unintended latch (probably several years) that I
really don't think that hard about it at all. So you can think what
you want, but the *reason* I code the way I do isn't really that much
about latches at all (obviously, if I was *worried* about them, I
would code everything in sequential blocks, where they could never
happen, but I could have some other hard to find logic problems, which
I *have* had in the past). However, unintended latches just don't
happen for me with my coding style, so I don't worry about it until
somebody like you comes along to tell me about all the problems that
I'm causing for myself that I never knew I had!

Regards,
Pat

Chris Higgs

unread,
Apr 23, 2010, 4:03:03 AM4/23/10
to
On Apr 23, 5:38 am, KJ <kkjenni...@sbcglobal.net> wrote:

> Ah well...as long as there are textbooks I guess there will always be
> those disciples that think that separating combinatorial logic from
> the register description actually is of some value to somebody,
> somewhere at some point in time...but inevitably they come up short
> when trying to demonstrate that value.

http://www.gaisler.com/doc/structdes.pdf

I'd recommend casting your eye over this presentation. It details some
of the advantages of the "2 process" coding style with a real world
example (LEON SPARC-V8 processor).

Thanks,

Chris

Bernd Paysan

unread,
Apr 23, 2010, 6:01:33 AM4/23/10
to
Andy wrote:

> Other than twice the declarations, unintentional latches, explicitly
> coding clock enables, simulation penalties, etc., using separate
> combinatorial and sequential blocks is just fine.

LoL. Note that there are further difficulties in understanding this
separated code, due to the fact that things which conceptually belong
together are spread apart over the file. This is just too messy.

> Most designers here use single clocked processes / always blocks.
> Those that don't are 'encouraged' to.

I'm fully on your side.

--
Bernd Paysan
"If you want it done right, you have to do it yourself!"
http://www.jwdt.com/~paysan/

Marcus Harnisch

unread,
Apr 23, 2010, 6:14:50 AM4/23/10
to
Chris Higgs <chig...@googlemail.com> writes:

> I'd recommend casting your eye over this presentation. It details some
> of the advantages of the "2 process" coding style with a real world
> example (LEON SPARC-V8 processor).

Actually the presentation seems to compare two entirely different
designs done using different approaches. In this comparison, the
two-process style appears to be only *one* of the aspects.

--
Marcus

note that "property" can also be used as syntactic sugar to reference
a property, breaking the clean design of verilog; [...]

(seen on http://www.veripool.com/verilog-mode_news.html)

Brian Drummond

unread,
Apr 23, 2010, 7:35:05 AM4/23/10
to

Not over the single-process style, which it doesn't even mention.
Other aspects of this document are worthwhile, however.

- Brian

KJ

unread,
Apr 23, 2010, 9:12:08 AM4/23/10
to
On Apr 23, 1:01 am, Patrick Maupin <pmau...@gmail.com> wrote:
> On Apr 22, 11:38 pm, KJ <kkjenni...@sbcglobal.net> wrote:
>
> > > but latches are much easier to detect and rectify than some
> > > other possible logic problems.
>
> > And much easier to avoid in the first place too...with the correct
> > methodology (hint:  that would be the one that avoids using unclocked
> > processes)
>
> > Just having fun...like I said, every now and then it's good sport to
> > poke fun at the people who make their own problems.
>
> But, the reason I was unclear to start with, is that it's been so long
> since I've had an unintended latch (probably several years) that I
> really don't think that hard about it at all.

The two process people generally do fall back on excuses about being
misunderstood.

>  So you can think what
> you want, but the *reason* I code the way I do isn't really that much
> about latches at all (obviously, if I was *worried* about them, I
> would code everything in sequential blocks, where they could never
> happen,

Hmmm...so you prefer to take what you admit as unnecessary
chances....fair enough, that implies though that you expect some
actual benefit from that decision...but are probably unable to come up
with an actual example demonstrating that benefit.

> but I could have some other hard to find logic problems, which
> I *have* had in the past).  

Ahhh....one of those examples...now what sort of 'hard to find' logic
problem would you like to offer up to the group to actually
demonstrate that two processes are better than one?  I'm willing to
listen, but I'll warn you that every time in the past that this
debate pops up, the two process people are unable to coherently
describe anything other than vague generalities as you've done
here...so here is your opportunity to present a clear example
describing a 'hard to find' logic problem that is easier to find when
coded with two processes. The clocked process folks (i.e. the one
process camp) have in the past presented actual examples to back their
claims, Googling for 'two process' in the groups should provide some
good cases.

> However, unintended latches just don't
> happen for me with my coding style, so I don't worry about it until
> somebody like you comes along to tell me about all the problems that
> I'm causing for myself that I never knew I had!

Whether you in particular have this problem, I don't know (apparently
not). You in particular, though, are
- Doing more work (two process will always be more typing and lines of
code than one process),
- Producing less maintainable code (two process will always physically
separate related things based only on whether the logic signal is a
'register' or not)

Kevin Jennings

KJ

unread,
Apr 23, 2010, 10:02:09 AM4/23/10
to

OK, but it doesn't compare it to the 'one process' approach, the
comparison is to a 'traditional method'. The 'traditional method'
though is all about lumping all signals into records (refer to the
'Benefits' area of that document). All of the comparisons are between
'traditional method' which has discrete signals and 'two-process
method' which lumps signals into a record.

There is absolutely nothing comparing 'one process' and 'two
process'. Why the author chose to title his new method the 'two-
process' method is totally unclear...just another example of
sloppy academic thinking: naming your method after something that is
not even relevant to the point of the paper.

The author does mention "No distinction between sequential and comb.
signals" as being a "Problem". Maybe it's a problem for the author,
but it's somewhat irrelevant for anyone skilled in design. The author
presents no reason for why having immediate knowledge of whether a
signal comes out of a flip flop or a gate is relevant...(hint: it's
not). What is relevant is the logic that the signal represents and
whether it is implemented properly or not. Whether 'proper
implementation' means the signal is a flop or not is of very little
concern (one exception being when you're generating gated
clocks...which is a different can of worms).

Even the author's 'State machine' example demonstrates the flaw of
using two processes. Referring to slide #27, partially shown below,
note that the (undeclared) variable v has no default assignment, so v
will result in a latch.

begin
comb : process(...., r)
begin
case r.state is
when first =>
if cond0 then v.state := second; end if;
when second =>
...

This to me demonstrates that the author didn't even take the time to
compile his own code...and so didn't notice the (possibly) unintended
latch that gets generated. Maybe the author is living in an ASIC world
where latches are not a problem, who knows? But in an FPGA, a latch
most certainly is indicative of a design problem more often than not.

You seem to have been caught up by his statement "A synchronous design
can be abstracted into two separate parts; a combinational and a
sequential" and the slide titled "Abstraction of digital logic" and
thought that this was somehow relevant to the point of his
paper...it's not...his point is all about combining signals into
records...totally different discussion.

The only conclusion to draw from this paper is that you shouldn't
believe everything you read...and you shouldn't accept statements that
do not stand up to scrutiny.

Kevin Jennings

Chris Higgs

unread,
Apr 23, 2010, 11:29:50 AM4/23/10
to
On Apr 23, 3:02 pm, KJ <kkjenni...@sbcglobal.net> wrote:

> OK, but it doesn't compare it to the 'one process' approach, the
> comparison is to a 'traditional method'.  The 'traditional method'
> though is all about lumping all signals into records (refer to the
> 'Benefits' area of that document).  All of the comparisons are between
> 'traditional method' which has discrete signals and 'two-process
> method' which lumps signals into a record.

I think one of the points (implicitly) made by the paper is an
admission that the two-process method is a mess unless you use
records. I think it's also implied that 'traditional method' people
are more prone to using discrete signals rather than record types.

> The author does mention "No distinction between sequential and comb.
> signals" as being a "Problem".  Maybe it's a problem for the author,
> but it's somewhat irrelevant for anyone skilled in design.  The author
> presents no reason for why having immediate knowledge of whether a
> signal comes out of a flip flop or a gate is relevant...(hint:  it's
> not).  What is relevant is the logic that the signal represents and
> whether it is implemented properly or not.  Whether 'proper
> implementation' means the signal is a flop or not is of very little
> concern (one exception being when you're generating gated
> clocks...which is a different can of worms).

Sometimes it's necessary to use the combinatorial signal.

>
> Even the author's 'State machine' example demonstrates the flaw of
> using two processes.  Referring to slide #27 partially shown below,
> note that (the undeclared) variable v has no default assignment, v
> will result in a latch.

Yes, that's just sloppy.

> You seem to have been caught up by his statement "A synchronous design
> can be abstracted into two separate parts; a combinational and a
> sequential" and the slide titled "Abstraction of digital logic" and
> thought that this was somehow relevant to the point of his
> paper...it's not...his point is all about combining signals into
> records...totally different discussion.

Well, combining state into records makes a "two-process" technique neat
enough to be feasible. Personally, I use a similar style and find it
very clear and understandable. As an example:

entity myentity is
  generic (
    register_output : boolean := true
  );
  port (
    clk  : in std_ulogic;
    srst : in std_ulogic;

    -- Input
    data : in some_type_t;

    -- Output
    result : out another_type_t
  );
end;

architecture rtl of myentity is

  type state_enum_t is (IDLE, OTHER_STATES);

  type state_t is record
    state  : state_enum_t;
    result : another_type_t;
  end record;

  constant idle_state : state_t := (state  => IDLE,
                                    result => invalid_result);

  signal r, rin : state_t;

begin

  combinatorial : process(r, srst, data)
    variable v : state_t;
  begin

    -- DEFAULTS
    v := r;
    v.result := invalid_result;

    -- STATE MACHINE
    case v.state is
      when IDLE =>
        null;
      when OTHER_STATES =>
        null;
    end case;

    -- RESET
    if srst = '1' then
      v := idle_state;
    end if;

    -- OUTPUTS
    if register_output then
      result <= r.result;
    else
      result <= v.result;
    end if;

    rin <= v;
  end process;

  sequential : process(clk)
  begin
    if rising_edge(clk) then
      r <= rin;
    end if;
  end process;

end;

> The only conclusion to draw from this paper is that you shouldn't
> believe everything you read...and you shouldn't accept statements that
> do not stand up to scrutiny.

You can use only sequential processes, making it impossible to infer
a latch, but lose the ability to use a combinatorially derived signal.
Alternatively, you can use a two-process technique, which allows
intermediate/derived signals to be used, but accept the risk that bad
code will introduce latches. We can argue forever about which method is
more 'correct', but it's unlikely to boil down to anything other than
personal preference.

Thanks,

Chris

Andy

unread,
Apr 23, 2010, 11:34:30 AM4/23/10
to
Coding clock enables in a combinatorial process requires an additional
assignment for the clock-disable case (otherwise you get a latch,
regardless of your naming convention). Only one assignment is required
(the enabled assignment) in a clocked process, and it is the
assignment you had to make anyway.
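
To make that concrete, here is a minimal Verilog sketch of the two
styles (module and signal names are invented for this example, not
taken from anyone's posted code):

module ce_sketch (
    input  wire       clk,
    input  wire       en,
    input  wire [7:0] d,
    output reg  [7:0] q1,   // clocked-block version
    output reg  [7:0] q2    // two-block version
);
    reg [7:0] next_q2;

    // Clocked block: the enabled assignment is the only one needed;
    // the clock enable is inferred from the if.
    always @(posedge clk)
        if (en)
            q1 <= d;

    // Combinatorial block: an additional assignment must cover the
    // clock-disable case, or next_q2 would infer a latch.
    always @* begin
        next_q2 = q2;        // the extra, disable-case assignment
        if (en)
            next_q2 = d;
    end

    always @(posedge clk)
        q2 <= next_q2;
endmodule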

KJ has already well stated the problems with latches (requiring
additional assignments) in combinatorial processes/blocks, regardless
of the naming convention employed.

Any decent simulator (maybe not a half-decent one) will merge
processes or always blocks that share the same sensitivity list. Since
they are usually identical for all synchronous processes clocked by
the same clock, they get merged, thus improving performance by
avoiding duplicative process-related overhead. Since combinatorial
processes rarely share the same sensitivity list, they don't get
merged, and performance suffers.

Andy

Patrick Maupin

unread,
Apr 23, 2010, 12:08:05 PM4/23/10
to
On Apr 23, 10:34 am, Andy <jonesa...@comcast.net> wrote:

> Coding clock enables in a combinatorial process requires an additional
> assignment for the clock-disable case (otherwise you get a latch,
> regardless of your naming convention). Only one assignment is required
> (the enabled assignment) in a clocked process, and it is the
> assignment you had to make anyway.

Oh, I see the point you're trying to make. Two points I tried (and
obviously failed) to make are that (1) I don't mind extra typing,
because it's really all about readability (obviously, from the
discussion, my opinion of what is readable may differ from others);
and (2) With the canonical two process technique, the sequential
process becomes boilerplate (even to the point of being able to be
generated by a script or editor macro, in most cases) that just
assigns a bunch of 'x <= next_x' statements. The top of the
combinatorial process becomes boilerplate as well, with corresponding
'next_x = x' statements (for some variables, it could be other things,
e.g. 'next_x = 0'). But you can just glance at those and not think
about them. So, when reading, you aren't really looking at that, or the
register declarations.

Once you accept that the sequential block, and the top of the
combinatorial block, are both boilerplate that you don't even need to
look at, then it's no more work than anything else. (In fact, if you
can type faster than 20 wpm and/or know how to write scripts or editor
macros, it's less work overall.)
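
For illustration, a sketch of what that boilerplate looks like in
practice (all names are invented for this example):

module boilerplate_sketch (
    input  wire       clk,
    input  wire       start,
    output reg        busy,
    output reg  [3:0] count
);
    reg       next_busy;
    reg [3:0] next_count;

    // Sequential process: pure boilerplate, one 'x <= next_x' per
    // register -- easily generated by a script or editor macro.
    always @(posedge clk) begin
        busy  <= next_busy;
        count <= next_count;
    end

    // Combinatorial process: boilerplate 'next_x = x' defaults up
    // top, then all the real logic for the related variables in one
    // place.
    always @* begin
        next_busy  = busy;
        next_count = count;
        if (!busy && start) begin
            next_busy  = 1'b1;
            next_count = 4'd0;
        end else if (busy) begin
            if (count == 4'd9)
                next_busy = 1'b0;
            else
                next_count = count + 4'd1;
        end
    end
endmodule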

> KJ has already well stated the problems with latches (requiring
> additional assignments) in combinatorial processes/blocks, regardless
> of the naming convention employed.

I understand the issue with latches. I just never see them. The
coding style makes it easy to check and avoid them. It can even be
completely automatic if you have a script write your boilerplate.

> Any decent simulator (maybe not a half-decent one) will merge
> processes or always blocks that share the same sensitivity list. Since
> they are usually identical for all synchronous processes clocked by
> the same clock, they get merged, thus improving performance by
> avoiding duplicative process-related overhead. Since combinatorial
> processes rarely share the same sensitivity list, they don't get
> merged, and performance suffers.

I'm pretty sure that verilator is smart enough to figure all this
out. That's the simulator I use if I care about execution time.

Regards,
Pat

Patrick Maupin

unread,
Apr 23, 2010, 12:25:50 PM4/23/10
to
On Apr 23, 8:12 am, KJ <kkjenni...@sbcglobal.net> wrote:

> The two process people generally do fall back on excuses about being
> misunderstood.

Well, I'm not going to generalize about "one process people" but at
least some of them are supercilious bastards who think that anybody
who doesn't do things their way is an idiot. BTW, this is the last
post I'm going to reply to you on, so feel free to have fun with more
piling on.

> Hmmm...so you prefer to take what you admit as unnecessary
> chances....fair enough, that implies though that you expect some
> actual benefit from that decision...but are probably unable to come up
> with an actual example demonstrating that benefit.

Well, I haven't used the single process style in many years, so no, I
can't point you directly to the issues I had that led me to switch.
But I have helped others to switch over the years, and they have all
been grateful. In any case, I posted elsewhere in this thread a
pointer to Cliff Cummings' paper on blocking vs non-blocking
assignments. I assume you've been studiously avoiding that for
plausible deniability, so here it is: http://www.sunburst-design.com/papers/CummingsSNUG2000SJ_NBA.pdf

If you read this paper carefully, you will come to an understanding
that, while it is often possible to follow its guidelines using a
single process method, in many cases doing so forces you to assign
related variables from *different* sequential processes (one blocking
sequential process, one non-blocking sequential process). With the
sequential/combinatorial process method, the boilerplate sequential
process is easy to write and requires little thinking, and it lets
you do all the real work on related variables within the same
combinatorial process. You can concentrate all your hard thinking on
the problem at hand, in the non-boilerplate code in the combinatorial
process.
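
The canonical illustration, sketched here in the spirit of the
paper's examples (not Cliff's exact code):

module nba_race_sketch (
    input  wire clk,
    output reg  a,
    output reg  b
);
    // With blocking assignments, the simulation result depends on
    // which block the simulator happens to evaluate first -- a race:
    //
    //   always @(posedge clk) a = b;
    //   always @(posedge clk) b = a;
    //
    // With non-blocking assignments, both right-hand sides are
    // sampled before either register updates, so the two values swap
    // cleanly, just like real flip-flops:
    always @(posedge clk) a <= b;
    always @(posedge clk) b <= a;
endmodule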

> > but I could have some other hard to find logic problems, which
> > I *have* had in the past).  
>
> Ahhh....one of those examples...now what sort of 'hard to find' logic
> problem would you like to offer up to to the group to actually
> demonstrate that two processes are better than one?  I'm willing to
> listen, but I'll warn that you that every time in the past that this
> debate pops up, the two process people are unable to coherently
> describe anything other than vague generalities as you've done
> here...so here is your opportunity to present a clear example
> describing a 'hard to find' logic problem that is easier to find when
> coded with two processes.  The clocked process folks (i.e. the one
> process camp) have in the past presented actual examples to back their
> claims, Googling for 'two process' in the groups should provide some
> good cases.

That's because you're *not* really willing to listen. If you were,
you would have heard, from me, anyway, loud and clear, that it's not
really about the *language constructs*, it's about how much people can
hold in their heads at a single time. The two process method reduces
the need for that, so even if I presented an example where it helped
me in my thinking, you would just superciliously explain how any idiot
should have seen the error in the one process method, so it doesn't
prove anything. Since you're the smartest asshole in the world, the
two process method couldn't possibly offer any benefits you would be
interested in, so just forget about it.

> - Producing less maintainable code (two process will always physically
> separate related things based only on whether the logic signal is a
> 'register' or not)

See, this is another place where you haven't listened. What don't you
understand about 'boilerplate'? It's a tiny bit of overhead, not
really part of what you need to worry about in maintenance. It is
easily checked and even automated.

Regards,
Pat

Andy

unread,
Apr 23, 2010, 2:26:45 PM4/23/10
to
You seem to put a lot of stock in the effortlessness of boilerplate,
yet you prefer a language that is said to reduce the need for
boilerplate.

OK, so you mention that you could write a script to automate all of
that, but to work, it would depend on a specific non-standard, non-
enforceable naming convention. Not to mention this script has yet to
be written and offered to the public, for free or fee. Which means
each of those who would follow your advice must write, test, run and
maintain their own scripts (or maybe even sell it to the rest of us,
if they felt there was a market).

Alas, we have no such scripts. So that would put most users back at
typing out all that boilerplate. Once it is typed, there is no
compiler to check it for you (unlike much of the boilerplate often
attributed to vhdl).

You recommend all of this extra work, rather than writing just one
process, with none of your boilerplate (no additional process, no
additional declarations, no additional assignments, no chances for
latches, no simulation-slowing effects).

What's really silly is how the two-process code model even got
started. The original synthesis tools could not infer registers, so
you had to instantiate them separately from your combinatorial code.
Once the tools progressed, and could infer registers, the least impact
to the existing coding style (and tools) was simply to replace the
code that instantiated registers with code that inferred them, still
separating that code from the logic code.

Finally, someone (God bless them!) figured out how to do both logic
and registers from one process with less code, boilerplate or not.

For all your staunch support of this archaic coding style, we still
have not seen any examples of why a single process style did not work
for you. Instead of telling me why the boilerplate's not as bad as I
think it is, tell me why it is better than no boilerplate in the first
place.

Andy

Jonathan Bromley

unread,
Apr 23, 2010, 3:09:45 PM4/23/10
to
On Fri, 23 Apr 2010 09:25:50 -0700 (PDT), Patrick Maupin wrote:

> In any case, I posted elsewhere in this thread a
> pointer to Cliff Cummings' paper on blocking vs non-blocking
> assignments. I assume you've been studiously avoiding that for
> plausible deniability

Very droll.

If you have had even half an eye on comp.lang.verilog
these past few years you will have seen a number of
posts clearly pointing out the very serious flaws in
Cliff's otherwise rather useful paper. In particular,
the "guideline" (=myth) about not using blocking
assignments in a clocked always block was long
ago exposed for the nonsense it is.

Cliff is quite rightly highly respected in the industry,
but he's no more infallible than the rest of us and
he made a serious mistake (which, to his great
discredit, he has never chosen to retract) on
that issue.

You have the freedom to choose your coding
style, as we all do. You do yourself no favours
by citing flawed "authority" as discrediting
a style that you dislike.
--
Jonathan Bromley

Patrick Maupin

unread,
Apr 23, 2010, 4:19:17 PM4/23/10
to
On Apr 23, 2:09 pm, Jonathan Bromley <s...@oxfordbromley.plus.com>
wrote:

> If you have had even half an eye on comp.lang.verilog
> these past few years you will have seen a number of
> posts clearly pointing out the very serious flaws in
> Cliff's otherwise rather useful paper.  In particular,
> the "guideline" (=myth) about not using blocking
> assignments in a clocked always block was long
> ago exposed for the nonsense it is.

Well, I just did a search, and found some mild disagreements, but
nothing that I would consider a "debunking." Perhaps you could point
me to such?

FWIW, I independently learned several of the lessons in Cliff's paper,
so I find it handy to explain these lessons to others. So I really
would be interested in a valid counterpoint, but I honestly didn't see
it.

Regards,
Pat

Patrick Maupin

unread,
Apr 23, 2010, 4:29:27 PM4/23/10
to
On Apr 23, 1:26 pm, Andy <jonesa...@comcast.net> wrote:
> You seem to put a lot of stock in the effortlessness of boilerplate,
> yet you prefer a language that is said to reduce the need for
> boilerplate.

Not all boilerplate is created equal. In particular, some boilerplate
is not only easy to glance at and understand, but also, and more
importantly, easy to code from first principles without knowing arcane
corners of the language you are coding in.

> OK, so you mention that you could write a script to automate all of
> that, but to work, it would depend on a specific non-standard, non-
> enforceable naming convention. Not to mention this script has yet to
> be written and offered to the public, for free or fee. Which means
> each of those who would follow your advice must write, test, run and
> maintain their own scripts (or maybe even sell it to the rest of us,
> if they felt there was a market).

That's a good point. I have some languishing tools for this (because
the boilerplate is never quite bad enough to work on the tools some
more) that I should clean up and publish.

> Alas, we have no such scripts. So that would put most users back at
> typing out all that boilerplate. Once it is typed, there is no
> compiler to check it for you (unlike much of the boilerplate often
> attributed to vhdl).

Well, actually, the stock verilog tools do a pretty darn good job
these days.

> What's really silly is how the two-process code model even got
> started. The original synthesis tools could not infer registers, so
> you had to instantiate them separately from your combinatorial code.
> Once the tools progressed, and could infer registers, the least impact
> to the existing coding style (and tools) was simply to replace the
> code that instantiated registers with code that inferred them, still
> separating that code from the logic code.

That may well be. Nonetheless, many people found the two process
method better, even before 'always @*' or the new systemverilog
'always_comb', to the point where they maintained ungodly long
sensitivity lists. Are you suggesting that none of those people were
reflective enough to try to figure out if that was the best way (for
them) to code?

> Finally, someone (God bless them!) figured out how to do both logic
> and registers from one process with less code, boilerplate or not.

Yes, and the most significant downside to this is that access to
the 'before clock' and 'after clock' versions of the same signal is
implicit, and in fact, in some cases (e.g. if you use blocking
assignments) you have access to more than two different values within
a process, all under the same name. There is no question that in many
cases this is not an issue and the one process model will work fine.
But I think most who do serious coding with the 'one process' model
will, at least occasionally, wind up having two processes (either a
separate combinatorial process, or two interrelated sequential
processes) to cope with not having an explicit delineation of 'before
clock' and 'after clock'.

At the end of the day, it is certainly desirable to have something
that looks more like the 'one process' model, but that gives explicit
access to 'previous state' and 'next state', so that complicated
combinatorial logic with interrelated variables can always be
expressed inside the same process without resorting to weird code
ordering that is done just to make sure that the synthesizer and
simulator will create the structures you want.
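
Something like this sketch (names invented) is what I have in mind --
both the registered value and the derived value are explicitly
nameable:

module explicit_next_sketch (
    input  wire       clk,
    output reg  [7:0] count,
    output wire       wrapping
);
    reg [7:0] next_count;

    // 'count' is the explicit before-clock (registered) value;
    // 'next_count' is the explicit after-logic value.
    always @* next_count = count + 8'd1;

    // A derived, pre-register value can be used combinatorially
    // without duplicating the logic that computed it.
    assign wrapping = (next_count == 8'd0);

    always @(posedge clk)
        count <= next_count;
endmodule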

> For all your staunch support of this archaic coding style, we still
> have not seen any examples of why a single process style did not work
> for you. Instead of telling me why the boilerplate's not as bad as I
> think it is, tell me why it is better than no boilerplate in the first
> place.

A paper I have mentioned in other posts,
http://www.sunburst-design.com/papers/CummingsSNUG2000SJ_NBA.pdf gives
some good examples of why the general rule of never using blocking
assignments in sequential blocks is a good practice. I have seen some
dismiss this paper here, but haven't seen a technical analysis about
why it's wrong. The paper itself speaks to my own prior experience,
and I also feel that related variables should be processed in the same
block. When I put this preference together with the guidelines from
the paper, it turns out that a reliable, general way to achieve good
results without thinking about it too hard is to use the two process
model.

But, if you can tell me that you *always* manage to put *all* related
variables in the same sequential block, and *never* get confused about
it (and never confuse any co-workers), then, like KJ and Bromley
and some others, you have no reason to consider the two process
model. OTOH, if you sometimes get confused, or have easily confused
co-workers, and/or find yourself using multiple sequential processes
that reference each other's variables, then you might want to consider
whether slicing related functionality into processes in this fashion
is really better than keeping all the related functional variables
together in a single combinatorial process and simply extracting the
registers out into a very well understood model.

At the end of the day, I am willing to concede that the two process
model is, at least partly, a mental crutch. Am I a mental cripple?
In some respects, almost certainly. But on the off-chance that I am
not the only one, I tolerate a certain amount of abuse here in order
to explain to others who may also be easily confused that there are
other coding styles than the single process model.

I will also concede that the single process model can be beefed up
with things like tasks or functions (similar to what Mike Treseler has
done) to overcome some of the shortcomings. However, personally, I
don't really find that to be any better than putting combinatorial
stuff in a separate process.

Regards,
Pat

Patrick Maupin

unread,
Apr 23, 2010, 4:32:37 PM4/23/10
to
On Apr 23, 2:09 pm, Jonathan Bromley
> You have the freedom to choose your coding
> style, as we all do.  You do yourself no favours
> by citing flawed "authority" as discrediting
> a style that you dislike.

BTW, I wasn't trying to cite "authority." I was trying to cite a
paper which I've actually read and have no current reason to disagree
with. You claim it's been debunked -- care to present a technical
citation for such a claim?

Regards,
Pat
