
The horrible truth about the Verilog standard


Jan Decaluwe

Sep 11, 2000, 7:49:37 AM
Hi - I'd like to share my latest horror story with you.

Back in 1990-1991, I used Verilog at Alcatel. There was only
one Verilog simulator (Verilog XL) with no competitors in
sight. Nonblocking assignments were not part of the
language. We therefore designed all RTL logic (combinatorial
and sequential) using blocking assignments - not because we
liked them so much, but because there was no choice.

Between RTL modules running on the same clock, there was a
sound, deterministic concurrency model. This was supported
perfectly by Synopsys Verilog/Design Compiler - making the
magic of RTL synthesis work.

Since 1992, I have been using VHDL for RTL design, but now
in 2000 I'm briefly back to Verilog. I have learned to be
conservative, and so I decided to use the tried and tested
methodology I knew from 10 years ago. I knew that Verilog's
zero-delay mechanisms were quite weak compared to VHDL and
didn't want to take any risk. In that way, I could be sure
nothing bad was going to happen.

Or so I thought.

Through discussions in another thread (see blocking /
nonblocking), a horrible truth gradually became apparent
to me: Verilog's zero-delay handling had actually been
*relaxed* instead of strengthened by the Verilog
standardization process. The old model of things (using
blocking assignments for all RTL logic) is not guaranteed to
work! At least that's what many people who seem to know the
standard well are asserting - I'm still finding it hard to
believe.

By doing this, the Verilog standard has violated a basic
principle in language design: backwards compatibility. Good
heavens, what can be more crucial in an HDL than its
concurrency model?

I am astonished that the Verilog standard has been able to
get away with this. My guess is that many people are not
aware of it - especially those that still use "good"
simulators (see further) and don't read standards all the
time.

What can be done? My position is that bad standards should be
ignored until they are fixed. I believe that Verilog designers
should switch back to the "gold" standard, that is, Verilog XL
compatibility.

I have been assured that NC Verilog is compatible with
Verilog XL. It uses a mixed language simulation kernel,
so scheduling semantics match VHDL semantics. We are
using Modelsim Verilog, and it seems to be fine also.
(I'd be grateful if someone could confirm this.)

It would be really useful to me, and I hope to others, to find
out which simulators are compatible with the "gold" standard
and which are not.

Regards, Jan

--
Jan Decaluwe Easics
Design Manager System-on-Chip design services
+32-16-395 600 Interleuvenlaan 86, B-3001 Leuven, Belgium
mailto:ja...@easics.be http://www.easics.com

Lars Rzymianowicz

Sep 14, 2000, 3:00:00 AM
Jan Decaluwe wrote:
> Back in 1990-1991, I used Verilog at Alcatel. [...]

> Since 1992, I have been using VHDL for RTL design, but now
> in 2000 I'm briefly back to Verilog.

Geee! Jan, you were almost 10 years out of Verilog business
and are complaining that things have changed since then?
C'mon, this is EDA!

> By doing this, the Verilog standard has violated a basic
> principle in language design: backwards compatibility.

No, the standard has defined an overall accepted semantics,
and removed the proprietary quasi-standard semantics of
a simulator monopoly at the start of Verilog (XL from CAD).

> What can be done? My position is that bad standards should be
> ignored until they are fixed. I believe that Verilog designers
> should switch back to the "gold" standard, that is, Verilog XL
> compatibility.

Serious? "Naa, we don't want a public standard, we want a
single company to control the language." You're kidding, right?

Is it really that difficult to drop old habits and adopt the
new style? I don't think so, since most have ;-)

Lars
--
Address: University of Mannheim; B6, 26; 68159 Mannheim, Germany
Tel: +(49) 621 181-2716, Fax: -2713
email: larsrzy@{ti.uni-mannheim.de, atoll-net.de, computer.org}
Homepage: http://mufasa.informatik.uni-mannheim.de/lsra/persons/lars/

Stephen Williams

Sep 17, 2000, 3:00:00 AM

> Through discussions in another thread (see blocking /
> nonblocking), a horrible truth gradually became apparent
> to me: Verilog's zero-delay handling had actually been
> *relaxed* instead of strengthened by the Verilog
> standardization process. The old model of things (using
> blocking assignments for all RTL logic) is not guaranteed to
> work!

Perhaps you can say specifically what you think changed? A non-blocking
assignment within a thread is pretty clear-cut and I would be curious to
know how they confound old designs.
--
Steve Williams "The woods are lovely, dark and deep.
st...@icarus.com But I have promises to keep,
st...@picturel.com and lines to code before I sleep,
http://www.picturel.com And lines to code before I sleep."

Jan Decaluwe

Sep 18, 2000, 3:00:00 AM
Lars Rzymianowicz wrote:
>
> Jan Decaluwe wrote:
> > Back in 1990-1991, I used Verilog at Alcatel. [...]

> > Since 1992, I have been using VHDL for RTL design, but now
> > in 2000 I'm briefly back to Verilog.
>
> Geee! Jan, you were almost 10 years out of Verilog business
> and are complaining that things have changed since then?
> C'mon, this is EDA!

I believe I'm sufficiently long in this business to tell the
difference between changes (very common) and progress (rare).
In this case, I'm seeing decline. Of course I complain.



> > By doing this, the Verilog standard has violated a basic
> > principle in language design: backwards compatibility.
>

> No, the standard has defined an overall accepted semantics,
> and removed the proprietary quasi-standard semantics of
> a simulator monopol at the start of Verilog (XL from CAD).
>

> > What can be done? My position is that bad standards should be
> > ignored until they are fixed. I believe that Verilog designers
> > should switch back to the "gold" standard, that is, Verilog XL
> > compatibility.
>

> Serious? "Naa, we don't want a public standard, we want a
> single company to control the language." You're kidding, right?

A flawed standard is worse than anything else. Backwards
compatibility is key - otherwise no progress is possible.



> Is it really that difficult to drop old habits

Is it difficult to learn to represent years by 4 digits instead
of 2? Of course not. That's not the issue.

The issue is legacy code. Companies with major investments
in Verilog code have reasons to be very worried.
Their existing designs might stop working (in simulation)
if they switch to a new simulator that takes the standard
literally.

> and adopt the
> new style? I don't think so, since most have ;-)

What is the new style? If you mean Cliff Cummings' guidelines,
you can't use blocking assignments for sequential logic
anymore. That's not progress.

Jan Decaluwe

Sep 18, 2000, 3:00:00 AM
Stephen Williams wrote:
>
> > Through discussions in another thread (see blocking /
> > nonblocking), a horrible truth gradually became apparent
> > to me: Verilog's zero-delay handling had actually been
> > *relaxed* instead of strengthened by the Verilog
> > standardization process. The old model of things (using
> > blocking assignments for all RTL logic) is not guaranteed to
> > work!
>
> Perhaps you can say specifically what you think changed? A non-blocking
> assignment within a thread is pretty clear-cut and I would be curious to
> know how they confound old designs.

The problem is not with non-blocking assignment, but with blocking
assignment - that used to be the only kind available.

Verilog has always had areas of indeterministic behavior. However,
communication between modules was deterministic. In particular,
when modules running on the same clock were communicating through
ports driven by blocking assignments (without delay specification),
behavior was deterministic and race-free. In other words,
there was no difference between blocking and nonblocking assignments
as far as inter-module communication is concerned.

The standard however doesn't require that blocking assignments
still work like that. A design (using blocking assignments) that
used to work fine can exhibit races and indeterminism on a different
simulator that would still be compliant with the standard.

In other words, legacy code is not guaranteed to work as
before by the standard.

Kevin Cameron x3251

Sep 18, 2000, 3:00:00 AM
Jan Decaluwe wrote:

>
> Stephen Williams wrote:
> >
> > > Through discussions in another thread (see blocking /
> > > nonblocking), a horrible truth gradually became apparent
> > > to me: Verilog's zero-delay handling had actually been
> > > *relaxed* instead of strengthened by the Verilog
> > > standardization process. The old model of things (using
> > > blocking assignments for all RTL logic) is not guaranteed to
> > > work!
> >
> > Perhaps you can say specifically what you think changed? ...

>
> The problem is not with non-blocking assignment, but with blocking
> assignment - that used to be the only kind available.
>
> Verilog has always had areas of indeterministic behavior....

>
> In other words, legacy code is not guaranteed to work as
> before by the standard.

The problem is really that zero-delay things don't really exist
in the world of hardware, and that Verilog was originally designed
to verify hardware.

Simulators will always generate non-deterministic (simulator to
simulator) output for zero-delays as the LRM does not sufficiently
define scheduling algorithms. Multi-thread parallel processing
simulator kernels produce different results run to run with zero
delay events (and non zero delay events in the same slot) - one
reason why there aren't many such beasts.

The only way to fix the problem is to sacrifice performance in
scheduling and use a stricter algorithm.
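The one place where the LRM does impose such a stricter ordering is the nonblocking assignment: reads happen in the active region, updates in a later region. A tiny self-contained sketch (the module and signal names are mine, not from the thread) of why that makes a cross-coupled swap deterministic on any compliant simulator, where the blocking version would be a race:

```verilog
// Hypothetical demo module -- not from the thread.
// Both nonblocking assignments sample a and b before either
// is updated, so the swap result is the same on every
// compliant simulator, whatever order the blocks run in.
module swap_demo;
  reg clk = 0;
  reg a = 0, b = 1;

  always @(posedge clk) a <= b;   // reads old b
  always @(posedge clk) b <= a;   // reads old a

  initial begin
    #5 clk = 1;                         // one clock edge
    #1 $display("a=%b b=%b", a, b);     // a=1 b=0, independent of block order
  end
endmodule
```

With `a = b;` and `b = a;` instead, the result depends on which block the scheduler happens to run first.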

[There are worse problems with the Verilog standard that I'd fix
first :-) ]

Kev.

--
mailto:Kevin....@nsc.com
http://www-galaxy.nsc.com/~dkc/

Paul Campbell

Sep 18, 2000, 3:00:00 AM

Kevin Cameron x3251 wrote:

> The problem is really that zero-delay things don't really exist
> in the world of hardware, and that Verilog was originally designed
> to verify hardware.
>
> Simulators will always generate non-deterministic (simulator to
> simulator) output for zero-delays as the LRM does not sufficiently
> define scheduling algorithms.

actually I'd beg to disagree - 0-delay-like non-deterministic
things happen in the real world all the time - they're just not
very nice to deal with so we avoid them like the plague.

Of course what I'm talking about is what happens when we miss
setup/hold times in our (common) synchronous design methodologies
(the brave people doing async self-timed stuff of course
relish this :-).

I think there's a direct parallel between the verilog 0-time
race issues we've been discussing the past week or so and
the normal issues of synchronous design.

The thing that makes all this hard to understand without a
detailed knowledge of the underlying simulator implementation
is that in Verilog we're talking about setup and hold windows
that are 0 time units wide and coupled with clk->Q times of
0 units - you can see why we're having problems.

Now we use non-blocking transactions to create a
still-0-but-slightly-larger clk->Q time in order to meet
the hold times in our simulated designs. Of course that's
even more confusing to a beginner.

I'm convinced the thing that makes it most confusing is that
starting out it's relatively easy to get something that will
work most of the time without trying hard, or really understanding
what's going on underneath ..... then you get sloppy
about which assignment you use where and you get bitten.

Look at a simulation on a waves display and you can't tell
whether something gets sampled or not - on a real-world 'scope
you can tell if you're meeting setup.

I'm actually personally in favor of using unit-delays so I can
look at the waves and actually see what was sampled when.
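The unit-delay style Paul favors can be sketched like this (a hypothetical flip-flop, not code from the thread): the intra-assignment `#1` pushes the output change one time unit past the clock edge, so a waveform viewer shows unambiguously which value was sampled:

```verilog
// Hypothetical unit-delay flip-flop sketch.
// The #1 delays only the update of q, so q changes one time
// unit after the edge and the sampled (old) value is clearly
// visible at the clock edge on a waveform display.
module dff_unit_delay (input clk, input d, output reg q);
  always @(posedge clk)
    q <= #1 d;
endmodule
```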

However as I've said before there are a number of verilog
timing methodologies that work - choose one that works for you,
and get to understand its strengths and weaknesses

Paul Campbell
pa...@verifarm.com

Stephen Williams

Sep 18, 2000, 10:01:31 PM
Jan Decaluwe wrote:
>
> Stephen Williams wrote:
> >
> > Perhaps you can say specifically what you think changed? A non-blocking
> > assignment within a thread is pretty clear-cut and I would be curious to
> > know how they confound old designs.
>
> The problem is not with non-blocking assignment, but with blocking
> assignment - that used to be the only kind available.
>
> Verilog has always had areas of indeterministic behavior. However,
> communication between modules was deterministic. In particular,
> when modules running on the same clock were communicating through
> ports driven by blocking assignments (without delay specification),
> behavior was deterministic and race-free. In other words,
> there was no difference between blocking and nonblocking assignments
> as far as inter-module communication is concerned.

I'm sorry, I'm slow and I need help understanding this problem. I think
you are saying that yes, this is and always was a race:

module foo;
  reg a, b;
  always @(posedge clk) a = b;
  always @(posedge clk) b = a;
endmodule

but if the two always blocks are placed in different modules it was not
a race, because the assignment acted sorta like what we now call non-
blocking assignments?

So this contrived example...

module foo_a(clk, a, b);
  input clk;
  output a;
  input b;
  reg a;
  always @(posedge clk) a = b;
endmodule

module foo_b(clk, a, b);
  input clk;
  input a;
  output b;
  reg b;
  always @(posedge clk) b = a;
endmodule

module foo;
  [...]
  foo_a fa(clk, a, b);
  foo_b fb(clk, a, b);
endmodule

... does *not* have a race with the "pre-standard" semantics? (It does
with the current semantics.)

Shalom Bresticker

Sep 19, 2000, 3:00:00 AM
Actually, the Verilog-XL Reference Manual never guaranteed that such a model would work.

On the contrary, they go out of their way to state that +caxl and +turbo, for instance,
change event ordering, and zero-delay event ordering may "work" in one mode and
"not work" in another.

I have an old Verilog course from the time Verilog-XL was at Gateway Design Automation,
before it was bought by Cadence, and there the FF model is written with a delay on the assignment.

Similarly, the Reference Manual never guarantees you that always constructs in separate modules
will have a certain behavior.

It may have worked for Jan, but the Reference Manual never guaranteed it.

Similarly, what works in Verilog-XL may not work in VCS, and vice-versa.

Shalom


--

************************************************************************
Shalom Bresticker email: sha...@msil.sps.mot.com
Motorola Semiconductor Israel, Ltd. Tel #: +972 9 9522268
P.O.B. 2208, Herzlia 46120, ISRAEL Fax #: +972 9 9522890
http://www.motorola-semi.co.il/
************************************************************************


Jan Decaluwe

Sep 19, 2000, 3:00:00 AM
Stephen Williams wrote:
>

> I'm sorry, I'm slow and I need help understanding this problem. I think
> you are saying that yes, this is and always was a race:
>
> module foo;
> reg a, b;
> always @(posedge clk) a = b;
> always @(posedge clk) b = a;
> endmodule
>
> but if the two always blocks are placed in different modules it was not
> a race, because the assignment acted sorta like what we now call non-
> blocking assignments?

Yes, that's what I am saying.

Paul Campbell

Sep 19, 2000, 3:00:00 AM

Shalom Bresticker wrote:
>
> Actually, the Verilog-XL Reference Manual never guaranteed that such a model will work.

...

> It may have worked for Jan, but the Reference Manual never guaranteed it.

pulls out the black'n'blue manual "Gateway Verilog Version 1.2 March 1987" .... whoooph
blow all the dust off it .... cough cough .... an interesting manual ... almost
all of the pages are actually marked "1.1a" except for a few marked "1.0a".
It's printed in a tacky fixed-size font that makes it look like an old
type-written manuscript - by modern standards it's really hard to read
(my guess is it was NROFF'd :-)

This was the manual I learned Verilog from - and my only documentation
until the IEEE standard was released .... I used to know this book like
the back of my hand :-)

Looking through it there is no mention of behavioural event ordering - it doesn't
define a particular one or caution about coding practices for managing them
(at least to my cursory skim ... there's nothing about the event queues
descending into modules etc etc although that may well have been what they did)


There is however the interesting exception of chapter 18 which
describes the wonders of "accelerated events" - what the 'X' in XL is for ....
here it cautions:

"the accelerated algorithms can process events in a different order from
the normal algorithm ..."

"because the order of simultaneous events can be processed differently
it is possible for zero delay oscillations to occur ..."

these are of course ways of saying "we monkey with the event ordering for
performance reasons so sometimes things will work differently from
what you expect".

Anyway I'd take this as evidence that arbitrary event ordering (at least
in some circumstances) has been a part of the Verilog landscape from
almost the beginning.

The real problem of course is that this sort of thing is hard to explain,
I think it's easy to keep it out of a manual, or even to not realize
how important it is until people have used something like Verilog for a
long period of time.

Even then different people come away with very different views
of the world - I know from day one I've coded with no assumptions
of event ordering (but then I never went to a Gateway/Cadence
training course - just read the above manual and wrote code) - Jan on the
other hand picked up a different world view ... it doesn't necessarily make
either right or wrong at the time .... however I suspect that 'reality' has
shifted over time - simulations that depended on those undocumented
event ordering assumptions (I'm assuming here that they didn't
appear in an intervening document between my gateway manual and the LRM)
started to break as the assumptions broke down (and it
does rather explain why I didn't suffer any problems
switching to VCS while other people did .... maybe it's
because I didn't go to any of those training classes :-)

Paul

B. Joshua Rosen

Sep 19, 2000, 3:00:00 AM
Hasn't your doctor ever given you the advice "if it hurts when you do
that, don't do that"?
Using blocking assignments for anything except pure combinatorial logic
is bad practice and should be avoided. You should never rely on a side
effect that isn't in a standard; side effects change from release to
release of the same tool, and certainly can't be counted on to be
consistent from vendor to vendor. If the original Verilog-XL lacked
non-blocking assignments, that was a bug which has been corrected now.
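The practice being advocated here can be sketched in a few lines (a hypothetical module of my own, not code from the thread): blocking assignments only inside purely combinational blocks, nonblocking for the sequential elements:

```verilog
// Hypothetical sketch of the recommended split.
module style_demo (input clk, input a, input b, output reg q);
  reg sum;

  always @(a or b)
    sum = a ^ b;    // blocking: pure combinational logic

  always @(posedge clk)
    q <= sum;       // nonblocking: the sequential element
endmodule
```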

Jan Decaluwe wrote:
>
> Stephen Williams wrote:
> >
>
> > I'm sorry, I'm slow and I need help understanding this problem. I think
> > you are saying that yes, this is and always was a race:
> >
> > module foo;
> > reg a, b;
> > always @(posedge clk) a = b;
> > always @(posedge clk) b = a;
> > endmodule
> >
> > but if the two always blocks are placed in different modules it was not
> > a race, because the assignment acted sorta like what we now call non-
> > blocking assignments?
>

> Yes, that's what I am saying.
>

Jan Decaluwe

Sep 20, 2000, 3:00:00 AM
Paul Campbell wrote:
>
> The real problem of course is that this sort of thing is hard to explain,
> I think it's easy to keep it out of a manual, or even to not realize
> how important it is until people have used something like Verilog for a
> long period of time.
>
> Even then different people come away with very different views
> of the world - I know from day one I've coded with no assumptions
> of event ordering (but then I never went to a Gateway/Cadence
> training course - just read the above manual and wrote code) - Jan on the
> other hand picked up a different world view ... it doesn't necessarily make
> either right or wrong at the time .... however I suspect that 'reality' has
> shifted over time - simulations that depended on those undocumented
> event ordering assumptions (I'm assuming here that they didn't
> appear in an intervening document between my gateway manual and the LRM)
> started to break as the assumptions broke down (and it
> does rather explain why I didn't suffer any problems
> switching to VCS while other people did .... maybe it's
> because I didn't go to any of those training classes :-)

As we are reconstructing mental processes here, I'd like to offer you
mine to explain where my model comes from.

I learned my first Verilog from Synopsys' Verilog (or HDL) Compiler
manual - all very pragmatic and with a strong emphasis on synthesis.
I believe that register inference was introduced somewhere
early 1990 (version 1.3a?) and it was explained using blocking
assignments. I don't know when nonblocking assignments started to be
supported by Synopsys synthesis but it must have been much later.

I quickly noticed that you could have race conditions in simulation
between always blocks running on the same clock in the same module.
This of course caused a little uneasiness - how could the examples
in the manuals then work? But I realized that regs were really like
shared variables and therefore not really suited for concurrent,
deterministic communication. Probably I would need to use a
"hardware" concept such as a port for this purpose.

And indeed, by encapsulating each clocked always block in a module,
the races were gone as I had expected. I have never seen it
otherwise. My mental model definitely is that an output port
inherently has the "right" hardware semantics - whether it is
driven from a reg with a blocking assignment or not.

Steven Sharp from Cadence has confirmed to me that Verilog-XL was
indeed consistent with this model, and also that NC-Verilog is.

I checked recent versions of the synthesis manuals from Synopsys
(1999.05) and Exemplar (1999.01). I invite everyone to have a
look. You will see that *still today*, register inference and
state machine descriptions are explained with blocking
assignments only.

Well, all of this tells me that I might not be the only one
with this model. A few other people may be in for a big surprise
sooner or later.

e...@riverside-machines.com.nospam

Sep 20, 2000, 3:00:00 AM
I think Jan's been getting rather a bad press out of this. As to how
Verilog-XL actually did, or didn't work, we've had a few opinions.
Cliff asked Cadence, and they more-or-less said that module
encapsulation worked, but wasn't guaranteed. Shalom and Paul have
found the original documentation, and the small print says that there
might be a problem with ordering in some circumstances. I can't
believe that everybody, or even most people, who wrote code between
'87 and '92 (or whenever) put in delays on assignments for synchronous
elements. In short, the correct coding style was massively unclear,
and there's lots of legacy code which may not now work, but did
previously work. The IEEE standard did nothing for this code base, and
introduced another mechanism to sort out races. It's a fact that the
standard is not backwards-compatible with what Verilog-XL actually did
at the time, whether or not Gateway actually wrote any guarantees into
the manual. End of story.

Evan

Shalom Bresticker

Sep 20, 2000, 3:00:00 AM
Yes, I noticed this a long time ago, and intended to write to Synopsys about it.
Probably I never did, due to the large number of tasks I am involved in.

Anyway, the Synopsys HDL Compiler for Verilog manual you refer to is
certainly not a justification because:

(1) The original manual was written before there were non-blocking assignments.
They simply never updated the manual.

(2) The manual there describes a single flip-flop, not a flip-flop chain.
More specifically, the manual tells you that if you write in a certain way, it will infer a flip-flop.

(3) The same manual, on pages 5-11 to 5-13 (at least in 1999.10 version), describes the difference
between blocking assignments and non-blocking assignments, and shows how, in certain cases at least,
blocking assignments may result in a non-serial register implementation.

(4) I have a Synopsys document which states that all storage elements, both flip-flops and latches,
should be written with non-blocking assignments.

Shalom


Jan Decaluwe wrote:

> I checked recent version of the synthesis manuals from Synopsys
> (1999.05) and Exemplar (1999.01). I invite everyone to have a
> look. You will see that *still today*, register inference and
> state machine descriptions are explained with blocking
> assignments only.


Lars Rzymianowicz

Sep 20, 2000, 3:00:00 AM
Shalom Bresticker wrote:
> Anyway, the Synopsys HDL Compiler for Verilog manual you refer to is
> certainly not a justification because:
> (1) The original manual was written before there were non-blocking assignments.
> They simply never updated the manual.

The "Register Inference" chapter of the 2000.05 HDL Compiler Manual uses
nonblocking assignments throughout the chapter...

Shalom Bresticker

Sep 21, 2000, 3:00:00 AM
Ah, so they did finally fix it. Great.

Shalom



Jan Decaluwe

Sep 21, 2000, 3:00:00 AM
Shalom Bresticker wrote:
>
>
> Anyway, the Synopsys HDL Compiler for Verilog manual you refer to is
> certainly not a justification because:

I wasn't justifying. I was describing a mental process.

> (1) The original manual was written before there were non-blocking assignments.

Of course! That's my whole point. Back at the time, there was also time-to-market
pressure; work had to be done, methodologies had to be built. We really
didn't have the time to wait until nonblocking assignments were introduced!
Not necessary either, as the flow worked.

> (2) The manual there describes a single flip-flop, not a flip-flop chain.
> More specifically, the manual tells you that if you write in a certain way, it will infer a flip-flop.

What do you mean? That even when the manual shows how to infer a flip-flop
from an RTL model, I shouldn't necessarily assume that that model is
actually usable in a larger RTL design ?? Sorry, I did.

> (3) The same manual, on pages 5-11 to 5-13 (at least in 1999.10 version), describes the difference
> between blocking assignments and non-blocking assignments, and shows how, in certain cases at least,
> blocking assignments may result in a non-serial register implementation.

You're mixing different issues. What we are discussing here is whether or not
blocking assignments are safe for *communication* between always blocks or
modules. Even if that is now deemed unsafe, it *still* makes sense to understand
and use register inference from blocking assignments, for regs that are only
used within a single always block. (In VHDL terms, this would be variables
against signals - register inference from variables is more complex but
also enables powerful coding techniques.)
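The single-block technique alluded to here can be sketched as follows (a hypothetical module of mine, not from the thread): the blocking assignment is a purely local intermediate, read nowhere outside this block, while the nonblocking assignment carries the value other blocks see:

```verilog
// Hypothetical sketch: blocking assignment as a local
// "variable" (VHDL-style), nonblocking at the boundary.
module accum (input clk, input [7:0] d, output reg [8:0] q);
  reg [8:0] tmp;
  always @(posedge clk) begin
    tmp = {1'b0, d} + q[7:0];  // blocking: local intermediate only
    q   <= tmp;                // nonblocking: what other modules observe
  end
endmodule
```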

Shalom Bresticker

Sep 21, 2000, 3:00:00 AM
My point is simply that there is, to the best of my knowledge, no authoritative source which states or shows that

always @(posedge clk) out1 = in1 ;
always @(posedge clk) out2 = out1 ;

is safe if the two always constructs are in different modules.

I am still waiting to see such a source.
