
Q: Cycle based simulation: What, how, etc


Charlie Burns

Apr 9, 1996
I would like to see a thread about cycle based simulation. Perhaps
we could discuss...

a) what it is
b) how it works (the more detailed the better)
c) what makes it fast?
d) does a cycle based simulator have an event queue? a timewheel?
e) so there are no delays... so what? Was that much time
in event driven simulation spent on maintaining a timewheel?
f) is it an old idea under a new name?
g) what is the relation to "levelized compiled code simulation"?
h) does it imply the combining / optimization of logic between
the flops?
i) given a fully synchronous gate level design with no delays,
is cycle based simulation faster than event driven simulation?
Why?
j) could a cycle based simulator support more than two levels
of logic?

Or, could someone point me to some papers I could read?

Thanks,

Charlie

Steven Bird

Apr 13, 1996
In article <4kfbh5$g...@crl7.crl.com>, Charlie Burns <cbu...@crl.com>
writes

>I would like to see a thread about cycle based simulation. Perhaps
>we could discuss...
>
> a) what it is

Cycle based simulation is a new (to the open market) technology for fast
logic verification of ASIC designs. The figures generally accepted are 10 to
100 times faster than a leading event simulator (no names!).
In general, the logical functionality of the design is simulated with no
timing (much like RTL).

>
> b) how it works (the more detailed the better)

This is a very tool-dependent question. Better answered by others.

> c) what makes it fast?

There are many reasons why CBS is faster than Event sim. Here's a
probably incomplete list.

1. No timing - logic only
2. No strengths - 1 and 0 only
3. Better fit for multi-processor workstations.
4. Boolean logic typically represented as a boolean equation, not as gates
   (a rough sketch of this appears further down).
5. Typically requires less workstation memory, which means more of the
   design will fit into the workstation's cache, resulting in faster
   access.

The benefits these give:-

1. Increased throughput.

a) You can run longer tests in the same time. Note: You can *never*
do enough simulation.

b) Allows you to devise a more realistic test harness. If you design
a graphics chip you can run many frames of real data through the
chip. I've seen examples of a new generation graphics chip taking
8 days of sim per frame on event simulators, but dropping to < 1
day on CBS.
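
To make point 4 of the list above a little more concrete, here is a
purely illustrative sketch in C - invented for this post, not any
vendor's actual code - of what the logic between two flops might look
like once it has been compiled down to a boolean equation over
two-state (0/1) values:

/* Illustrative only: a 2:1 mux between flops, compiled to one
 * boolean expression. Names (clock_cycle, next_q) are made up.      */
#include <stdio.h>

static unsigned a, b, sel, q;      /* two-state signals: 0 or 1 */

static void clock_cycle(void)
{
    /* Evaluated once per cycle with plain machine AND/OR/NOT -
     * no gate cells, no strengths, no delays, no event queue.       */
    unsigned next_q = (a & ~sel) | (b & sel);
    q = next_q & 1u;               /* the flop updates at the clock edge */
}

int main(void)
{
    a = 1; b = 0;
    for (sel = 0; sel < 2; sel++) {
        clock_cycle();
        printf("sel=%u q=%u\n", sel, q);
    }
    return 0;
}

One evaluation per clock, a handful of machine instructions, and
nothing for an event queue to do.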

> d) does a cycle based simulator have an event queue? a timewheel?

I don't develop these tools, but I would expect that they have some
queue mechanism to store events over time BUT I suspect that it would be
one time unit deep.

> e) so there are no delays... so what? Was that much time
> in event driven simulation spent on maintaining a timewheel?

A *HUGE* amount of time is spent in discrete event simulation evaluating events
across all devices in the netlist, including things like setup and hold,
minimum pulse width validation, and inertial and transport delay
calculation. These calculations need to be evaluated for every device,
including boolean gates. Also, event simulators tend to have many states
used to resolve the final logic level for a net, and this all consumes time.

An event simulator I worked on saw a 10:1 speed up when all checks were
removed (other than delay propagation).

A more esoteric problem with event simulation is that device
output events cannot be truly determined until *all* events pending
for a block of logic have been processed. Put another way, an event which
gets scheduled for an output may be cancelled before the time for this
new output event is reached, due to other events on other inputs. This
happens (frequently?) in boolean logic. This is effectively wasted
processing, but unavoidable...

> f) is it an old idea under a new name?

No, it is a new form of design verification, using a core methodology
which is not based on event based simulation. In other words the new
cycle based simulators are not evolutions of event simulators. If any
are then I would strongly suspect their performance.

> g) what is the relation to "levelized compiled code simulation"?

None that I know of. This is still an event simulation, I think.

> h) does it imply the combining / optimization of logic between
> the flops?

Typically all logic between flops is represented as something other than
pure gates. During this transformation optimization may occur, but you
would need to talk to the vendors to find out what.

> i) given a fully synchronous gate level design with no delays,
> is cycle based simulation faster than event driven simulation?
> Why?

Yes, yes, yes. An event driven simulator needs to manage events
propagating through *ALL* devices in the network, gate by gate. The
equivalent netlist for CBS effectively has nodes only for the flops. If
you look at one of your designs, add up the number of flops and
subtract that from the total number of nets, you'll get a feel for the
reduction in 'events' which need to be managed.
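
(As a purely made-up illustration of the scale: in a design with, say,
50,000 nets and 5,000 flops, the event simulator may have to schedule
and manage activity on any of the 50,000 nets, while the cycle
simulator's visible state is essentially just the 5,000 flops - the
other 45,000 or so nets are folded into the compiled equations between
the registers.)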

> j) could a cycle based simulator support more than two levels
> of logic?
>

I'm not sure what benefit this would bring. The only area I can think of
is internal tri-state nets (buses) where contention occurs. The CBS
systems I am familiar with report conflicts but don't simulate them.
This is typically sufficient (I think :) )

>Or, could someone point me to some papers I could read?
>

Last year's DAC conference proceedings.

>Thanks,
>
>Charlie

New Questions
-------------

1. Library support - Gate Level

I can only talk in general terms here. As there is no timing required,
the gate level library is typically simple. If you look at most vendor
libraries the complexity is all related to timing. The logic of an AND
gate is trivial; its full timing is more challenging. More often than
not you can take an existing fab library and 'strip' out the timing
stuff; what's left is usually OK.

Obviously for RTL this is not an issue.

2. Async Logic

The system must be able to handle async logic. I have not yet seen a
sync design which does not have some element (however small) of async
logic in it. Frequently what is sync to one is async to another. You need to
verify with your CBS vendor a common definition for async. Sounds silly,
probably is, but it saves on future pain...

3. Multi-phase Clock Domains

Support needs to be checked with the CBS vendor. Again this is a very
loose term which can have different interpretations as it is typically
related to Q2 above.

4. Design Flow using CBS

Timing verification must be carried out using a different tool. The
options are:-

a. Use a discrete event simulator with a sub-set of the verification
vectors. This only needs to be carried out once per design change
at the gate level.

b. Static timing analysis

c. Formal verification.

d. Ask your fab to do it? (I didn't say this!!!)

5. Poor Apps for CBS or What it is not

System level design, i.e. where you want to mix hardware models, other
proprietary soft models, etc. in a system/PCB style simulation. In this
environment the overall simulation throughput is usually gated by the
slowest sub-system.

It is not a plug and play replacement for discrete event simulators,
IMHO.

6. Technology Risks

These tools are new to the open market place. CBS technology has been
around for years but mostly as in-house systems. Choose a system which
has a *proven* pedigree.

7. Am I Involved with a CBS Vendor?

Ummmm, yes, but I'm trying to be unbiased, flame me if you like, I have
a bucket of sand always at my side...


Just my 2 Euro's worth!!!

------------------------------------------------
Steven Bird

VIZEF Limited
Tel: 44 (0)1628 481571
Fax: 44 (0)1628 483902
------------------------------------------------

Rick Sullivan

Apr 15, 1996

cbu...@crl.com (Charlie Burns) posed the following questions
to start a thread on cycle simulation. Let me take a shot
at these, with the caveat that I am involved in developing
a cycle simulator for Viewlogic Systems, Inc, and therefore
may be biased in my opinions. However, I speak with an
engineering tongue and wear no marketing hat.

a) what it is

Cycle simulation is a technique (i.e. an algorithm) for
digital circuit simulation. It does not simulate detailed
circuit timing, but instead computes the steady state
response of a circuit at each clock cycle. The
user cannot see the glitching behaviour of signals
between clock cycles. Instead the user observes circuit
signals once per clock cycle.

b) how it works (the more detailed the better)

There is a lot to cover here.

A good seminal paper on the topic is "HSS - A High Speed
Simulator" by Barzilai, Carter, Rosen, Rutledge, IEEE
Trans. Computer Aided Design, vol. CAD-6, July 87.

It is relatively old, but it covers the basic concepts
(maybe in more detail than you would like).

c) what makes it fast?

- Usually, only 2-states {0, 1} are simulated. No
unknowns, high-impedance states, etc. So, logical
evaluations don't have to deal with these extra
states, and can often map directly into single
machine instructions (AND, OR, etc.). You would
be amazed at how much time it can take to accurately
process unknown states.

- In an event driven simulator, logic is sometimes evaluated
many times during a cycle, depending on how often
signals "glitch" before settling to a steady
state. Cycle simulators can avoid these multiple
evaluations by levelizing (topologically sorting)
gate evaluations so that they occur at most once
per clock cycle.

- Typically, gate and operator evaluations map directly
onto machine instructions (a rough sketch follows below).
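
Not any vendor's implementation, just a toy sketch in C of the
levelized idea described above: the gates are held in topological
order, so one linear pass evaluates each of them exactly once per
cycle (the structure and names here are invented for this post):

/* Toy levelized evaluation - purely illustrative.                    */
#include <stddef.h>
#include <stdio.h>

enum op { OP_AND, OP_OR, OP_NOT };

struct gate { enum op op; int in0, in1, out; };

/* Nets 0..2 are primary inputs; the gates are stored so that every
 * gate's inputs are computed before the gate itself is reached.      */
static const struct gate netlist[] = {
    { OP_AND, 0, 1, 3 },   /* n3 = a & b   */
    { OP_NOT, 2, 2, 4 },   /* n4 = ~c      */
    { OP_OR,  3, 4, 5 },   /* n5 = n3 | n4 */
};

static unsigned net[6];

static void eval_cycle(void)
{
    /* One pass, each gate at most once - no event queue, and no
     * re-evaluation when inputs glitch (there are no glitches).      */
    for (size_t i = 0; i < sizeof netlist / sizeof netlist[0]; i++) {
        const struct gate g = netlist[i];
        switch (g.op) {
        case OP_AND: net[g.out] = net[g.in0] & net[g.in1]; break;
        case OP_OR:  net[g.out] = net[g.in0] | net[g.in1]; break;
        case OP_NOT: net[g.out] = ~net[g.in0] & 1u;        break;
        }
    }
}

int main(void)
{
    net[0] = 1; net[1] = 0; net[2] = 1;   /* a, b, c */
    eval_cycle();
    printf("n5 = %u\n", net[5]);          /* (1 & 0) | ~1 = 0 */
    return 0;
}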

d) does a cycle based simulator have an event queue? a timewheel?

Not usually. The elimination of event queues and timewheels is one
of the techniques used to obtain performance. However, it is
possible to come up with (contrived?) circuits that would
benefit from selective trace queues, even in a cycle simulation
paradigm where results need only be presented on cycle
boundaries. Therefore, cycle simulators CAN and probably
WILL have event queues (or something like them) as the
technology matures.

e) so there are no delays... so what? Was that much time
in event driven simulation spent on maintaining a timewheel?

Again, it depends on the circuit/testbench combination. If
a large percentage of the signals are switching every clock
cycle in an event driven simulation, then the ratio of event
management to circuit evaluation is pretty high.

f) is it an old idea under a new name?

Definitely. I know that IBM has been doing cycle simulation
for at least 20 years. I suspect that cycle simulation
was invented before event driven simulation, which
came along later as a refinement to support more accurate
delay modeling.

g) what is the relation to "levelized compiled code simulation"?

I view "levelized compiled code" as a technique
used to achieve higher performance in a cycle simulator.
But I don't think there are any industry standard definitions
here. If cycle simulation is defined as "simulation that
produces a steady state result once per clock cycle", then
any number of algorithms and techniques can be employed
to determine the steady state result. "Levelized compiled
code" is one.



h) does it imply the combining / optimization of logic between
the flops?

Not necessarily, but this is a technique that might be
employed to speed things up.

If logic between the flops is optimized away, then
simulation will run faster, but simulation users won't
be able to observe or control values on signals that
disappeared as a result of optimization.

i) given a fully synchronous gate level design with no delays,
is cycle based simulation faster than event driven simulation?
Why?

Not always. Assuming typical cycle simulation algorithms,
if the circuit has a low percentage of
switching signals per clock cycle, cycle simulation can
be slower than event driven. If the circuit has a
high percentage of switching signals, then cycle
simulation will probably be faster.

The crossover point is determined by the algorithms
employed by the cycle driven and event driven simulators.
If the circuit is fully synchronous, and all delays are
zero (so that very little event scheduling overhead
is incurred), the crossover point at which cycle driven
becomes faster is toward a higher percentage of
switching signals.



j) could a cycle based simulator support more than two levels
of logic?

Yes. If cycle simulation is defined as "simulation that
produces a steady state result once per clock cycle", then
any number of logic states can be used. However, the
more states, the slower the simulation. And since
performance is the name of the game for cycle simulation,
most cycle simulators support two states {0,1}.
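
For illustration only - this follows common HDL conventions, not any
particular cycle simulator - a table-driven 4-value AND could look like
the sketch below. Note that every gate evaluation becomes a table
lookup instead of the single machine AND of the 2-state case, which is
exactly why extra states cost speed:

/* Illustrative 4-value {0,1,X,Z} AND, table driven.                  */
#include <stdio.h>

enum val { V0, V1, VX, VZ };
static const char *name[] = { "0", "1", "X", "Z" };

/* and4[a][b]: 0 dominates; a Z input behaves like an unknown.        */
static const enum val and4[4][4] = {
    /*        0   1   X   Z  */
    /* 0 */ { V0, V0, V0, V0 },
    /* 1 */ { V0, V1, VX, VX },
    /* X */ { V0, VX, VX, VX },
    /* Z */ { V0, VX, VX, VX },
};

int main(void)
{
    for (int a = 0; a < 4; a++)
        for (int b = 0; b < 4; b++)
            printf("%s & %s = %s\n", name[a], name[b], name[and4[a][b]]);
    return 0;
}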

So that's my take on it. I'd be very interested in hearing
other viewpoints.

Rick Sullivan
Viewlogic Systems, Inc.
PH: (508) 480 0881 x305
Fax: (508) 480 0882
rsul...@viewlogic.com
ri...@sage.ultranet.com


Steven Leung

Apr 15, 1996

Steven Bird <st...@vizef.demon.co.uk> wrote:
> > c) what makes it fast?
>
> There are many reasons why CBS is faster than Event sim. Here's a
> probably incomplete list.
>
> 1. No timing - logic only

I would say no detail timing. I think the timing is there, just that
the timing unit is always a clock cycle.

> 2. No strengths - 1 and 0 only
> 3. Better fit for multi-processor workstations.

Is it true? I wonder what's the reason.

> New Questions
> -------------


>
> 2. Async Logic
>
> The system must be able to handle async logic. I have not yet seen a
> sync design which does not have some element (however small) of async
> logic in it. Frequently Sync to one is Async to another. You need to
> verify with you CBS vendor a common definition for async. Sounds silly,
> probably is, but saves on future pain...

I think the case of asynchrony between 2 completely sync clock
domains is different from general async logic. The former case is readily
handled by the support of multi-phase clock domains. But I'm not sure
support of general async simulation is necessary; at least I think we can
live without it.

> 4. Design Flow using CBS
>
> Timing verification must be carried out using a different tool. The
> options are:-
>
> a. Use descrete event simulator with a sub-set of the verification
> vectors. This only needs to be carried out once per design change
> at the gate level.
>
> b. Static timing analysis

I agree, this will become a requirement.

> c. Formal verification.

I don't agree with that. For non-trivial designs, formal analysis tools
(Chrysalis comes to mind) currently can only do validation, not verification.
(I think the term formal verification is unfortunate: it carries too much
hype and the result can only be confusion. When both verification (of
functionality of the first executable spec) and validation (of the
correctness of subsequent implementations) are done by simulation, the
distinction is not important. But with introduction of formal analysis
tools, it becomes critical to make a distinction between the two. The
market acceptance of formal analysis tools has been slow. In addition
to the high price tag and tool maturity, I have the feeling that the
failure to make that distinction in marketing may be a factor - once
you make that distinction, you can clearly see the difference between a
simulation based validation methodology and a [formal] analysis based
validation methodology.) Anyway, with the distinction of verification
and validation, I think the trend is to use CBS for verification, and
completely replace simulation with static timing analysis and formal
analysis for validation.

My own question: Anyone has a vendor list of CBS available _now_?

Steven
--
"Life is a series of problems. ... Yet it is in this whole process of
meeting and solving problems that life has its meaning."
From "The Road Less Traveled" by M. Scott Peck

John Williams

Apr 16, 1996
In article <4ku7c8$9...@gazette.loc3.tandem.com> le...@DSG.Tandem.COM (Steven Leung) writes:

> Steven Bird <st...@vizef.demon.co.uk> wrote:
> > c) what makes it fast?
>
> There are many reasons why CBS is faster than Event sim. Here's a
> probably incomplete list.
>
> 1. No timing - logic only

> I would say no detail timing. I think the timing is there, just that
> the timing unit is always a clock cycle.

More than that, there are no events. Every clock tick, all logic is
recomputed, even if none of the signals change. The advantage of cycle
based simulation comes when you have lots of events: you avoid the overhead
of managing them. Cycle based simulators don't manage events; they simply
assume events every clock cycle.

> 2. No strengths - 1 and 0 only

Depends on the simulator. I personally like the binary logic variation with
random initialization.

> 3. Better fit for multi-processor workstations.

> Is it true? I wonder what's the reason.

If the simulator is equipped to handle it, yes. The problem is certainly much easier
to solve: the amount of work to be performed is known a priori to the
analysis.

> New Questions
> -------------


>
> 2. Async Logic
>
> The system must be able to handle async logic. I have not yet seen a
> sync design which does not have some element (however small) of async
> logic in it. Frequently Sync to one is Async to another. You need to
> verify with you CBS vendor a common definition for async. Sounds silly,
> probably is, but saves on future pain...

> I think the case of asynchrony between 2 completely sync clock
> domains is different from general async logic. The former case is readily
> handled by the support of multi-phase clock domains. But I'm not sure
> support of general async simulation is necessary; at least I think we can
> live without it.

No current commercial simulator lets you verify metastability ( except
perhaps for spice ). This level of detail is abstracted out of the analysis.
Having multiple asynchronous clocks is not a tremendously difficult problem.

> 4. Design Flow using CBS
>
> Timing verification must be carried out using a different tool. The
> options are:-
>
> a. Use descrete event simulator with a sub-set of the verification
> vectors. This only needs to be carried out once per design change
> at the gate level.
>
> b. Static timing analysis

> I agree, this will become a requirement.

PLUS, it is already seeing a great deal of use. Standard discrete event
simulation already benefits from removing timing accuracy. Cycle based
simulation simply follows this trend to its conclusion.

> c. Formal verification.

> I don't agree with that. For non-trivial designs, formal analysis tools
> (Chrysalis comes to mind) currently can only do validation, not verification.
> (I think the term formal verification is unfortunate: it carries too much
> hype and the result can only be confusion. When both verification (of
> functionality of the first executable spec) and validation (of the
> correctness of subsequent implementations) are done by simulation, the
> distinction is not important. But with introduction of formal analysis
> tools, it becomes critical to make a distinction between the two. The
> market acceptance of formal analysis tools has been slow. In addition
> to the high price tag and tool maturity, I have the feeling that the
> failure to make that distinction in marketing may be a factor - once
> you make that distinction, you can clearly see the difference between a
> simulation based validation methodology and a [formal] analysis based
> validation methodology.) Anyway, with the distinction of verification
> and validation, I think the trend is to use CBS for verification, and
> completely replace simulation with static timing analysis and formal
> analysis for validation.

Formal verification is being hyped by quite a few people who are not involved
in the industry. Quite a lot of University research goes into it, and the
results are less than impressive. As far as I can see formal verification
will always trail Moore's law. Some low end tools, such as Chrysalis, may
prove to be useful, but don't expect to do in formal verification what you
can't already do in synthesis. For formal verification at more abstract
levels ( higher order logic theorem proving ), the essential problem, that
is, does this design do what I think it should, becomes more obscure,
cryptic, and fractured.

Random testing will easily dominate verification strategies, at least until
Moore's law expires.

> My own question: Anyone has a vendor list of CBS available _now_?

Speedsim has one now. Many others ( and Frontline's interests me ) are
going to be available " very soon now ".

John Williams

Steven Bird

Apr 16, 1996
In article <4ktda3$j...@decius.ultra.net>, Rick Sullivan
<ri...@sage.ultranet.com> writes

> f) is it an old idea under a new name?
>
> Definitely. I know that IBM has been doing cycle simulation
> for at least 20 years. I suspect that cycle simulation
> was invented before event driven simulation, which
> came along later as a refinement to support more accurate
> delay modeling.
>

MaxSim, I believe. Any hint of async and you're hosed. Didn't Synopsys
get access to this with their IBM deal?


> h) does it imply the combining / optimization of logic between
> the flops?
>
> Not necessarily, but this is a technique that might be
> employed to speed things up.
>
> If logic between the flops is optimized away, then
> simulation will run faster, but simulation users won't
> be able to observe or control values on signals that
> disappeared as a result of optimization.
>

Optimisation is no bad thing (done correctly), as CBS
technology is generally best used for black box verification, i.e. to
answer the question "Does my design work as expected?" as fast as
possible. If the answer is no then you will probably be better off moving
back to your event simulator to find out why; the reason for this is the
better debug environment. Another way of stating this is that CBS
technology today is not a plug and play replacement for your existing
simulation tools, but rather an additional tool in the armoury.

> i) given a fully synchronous gate level design with no delays,
> is cycle based simulation faster than event driven simulation?
> Why?
>
> Not always. Assuming typical cycle simulation algorithms,
> if the circuit has a low percentage of
> switching signals per clock cycle, cycle simulation can
> be slower than event driven. If the circuit has a
> high percentage of switching signals, then cycle
> simulation will probably be faster.
>
> The crossover point is determined by the algorithms
> employed by the cycle driven and event driven simulators.
> If the circuit is fully synchronous, and all delays are
> zero (so that very little event scheduling overhead
> is incurred), the crossover point at which cycle driven
> becomes faster is toward a higher percentage of
> switching signals.
>
>

I think this would depend *very* much on the technique used by the CBS
vendor. The key to performance is to 'remove' (re-represent) the logic
between registers, as this is usually where the event simulators spend
most of their time. Propagating events through clouds of combinational
logic typically accounts for a significant proportion of the simulation
time. The only design topologies I can think of which might level the
field, so to speak, are one which has virtually no combinational logic
(very rare?) or a very deeply pipelined design where the internal fan-out
is almost non-existent.


Another 2 Euros, no wait..... they just changed the name again, 2
EuroCents!!!

Steven Bird

Apr 16, 1996
In article <4ku7c8$9...@gazette.loc3.tandem.com>, Steven Leung
<le...@DSG.Tandem.COM> writes

>
>Steven Bird <st...@vizef.demon.co.uk> wrote:
>> > c) what makes it fast?
>>
>> There are many reasons why CBS is faster than Event sim. Here's a
>> probably incomplete list.
>>
>> 1. No timing - logic only
>
>I would say no detail timing. I think the timing is there, just that
>the timing unit is always a clock cycle.
>

True(ish). Because all events are assumed to complete within the current
clock cycle, the 'time queue' only needs to be 1 deep. With event
simulation this queue is 'n' deep.


>> 2. No strengths - 1 and 0 only
>> 3. Better fit for multi-processor workstations.
>

>Is it true? I wonder what's the reason.
>

I assume you mean q3. I think this really only applies to those vendors
which designed their system for SMP (symmetric multiprocessing). The
older event algorithms were conceived before SMP was generally available. I
don't know for sure, but I suspect most of the 'smarts' is in the way the
netlist is partitioned during compilation, and the fact that in CBS the
design can be considered as 'n' individual circuits connected at the
register boundaries. Each sub-circuit can be simulated independently of
any other, with the results being 'merged' at each clock cycle.
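
Here is a very rough sketch in C of that idea (structure and names are
invented here, not taken from any vendor): each partition is a cone of
combinational logic that reads only the previous cycle's register
values, so the partitions are independent of each other, can run on
separate processors, and are 'merged' at the clock edge:

/* Illustrative SMP-style partitioned evaluation - not real code.     */
#include <pthread.h>
#include <stdio.h>

#define NPART 2

static unsigned regs[4]      = { 1, 0, 1, 1 };  /* current register state */
static unsigned next_regs[4];                   /* computed this cycle    */

static void *eval_partition(void *arg)
{
    int p = *(int *)arg;
    if (p == 0) {                   /* cone of logic feeding regs 0 and 1 */
        next_regs[0] = regs[2] & regs[3];
        next_regs[1] = regs[0] | regs[1];
    } else {                        /* cone of logic feeding regs 2 and 3 */
        next_regs[2] = regs[0] ^ regs[1];
        next_regs[3] = ~regs[2] & 1u;
    }
    return NULL;
}

int main(void)
{
    pthread_t tid[NPART];
    int id[NPART] = { 0, 1 };

    for (int cycle = 0; cycle < 3; cycle++) {
        for (int p = 0; p < NPART; p++)
            pthread_create(&tid[p], NULL, eval_partition, &id[p]);
        for (int p = 0; p < NPART; p++)
            pthread_join(tid[p], NULL);
        for (int r = 0; r < 4; r++)          /* 'merge' at the clock edge */
            regs[r] = next_regs[r];
        printf("cycle %d: %u%u%u%u\n", cycle,
               regs[0], regs[1], regs[2], regs[3]);
    }
    return 0;
}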


>> 4. Design Flow using CBS
>>
>> Timing verification must be carried out using a different tool. The
>> options are:-
>>
>> a. Use descrete event simulator with a sub-set of the verification
>> vectors. This only needs to be carried out once per design change
>> at the gate level.
>>
>> b. Static timing analysis
>

>I agree, this will become a requirement.
>

>> c. Formal verification.
>
>I don't agree with that. For non-trivial designs, formal analysis tools
>(Chrysalis comes to mind) currently can only do validation, not verification.
>(I think the term formal verification is unfortunate: it carries too much
>hype and the result can only be confusion. When both verification (of
>functionality of the first executable spec) and validation (of the
>correctness of subsequent implementations) are done by simulation, the
>distinction is not important. But with introduction of formal analysis
>tools, it becomes critical to make a distinction between the two. The
>market acceptance of formal analysis tools has been slow. In addition
>to the high price tag and tool maturity, I have the feeling that the
>failure to make that distinction in marketing may be a factor - once
>you make that distinction, you can clearly see the difference between a
>simulation based validation methodology and a [formal] analysis based
>validation methodology.) Anyway, with the distinction of verification
>and validation, I think the trend is to use CBS for verification, and
>completely replace simulation with static timing analysis and formal
>analysis for validation.
>

I agree. I only added it for completeness(?). From the validation angle,
all you seem to achieve is to prove that your synthesis tool mapped the
logic of your design correctly. Don't they already do that?

IBM's internal (and once external) design methodology which, as I
understand it, was used to great effect at the gate level, was to use
boolean equivalence and static timing analysis. They, again as I
understand it, used little to no simulation at the gate level.

The tool which interests me (I haven't seen it yet) is the technology
Viewlogic acquired (from Musgrave?). Anybody got any experience with
this? I don't remember what it is called.


>My own question: Anyone has a vendor list of CBS available _now_?
>

SpeedSim - available since 1994? 1995 at the latest. 3rd generation.

IBM MaxSim - I don't think this is generally available. It was, but then
their marketing wing Altium was closed down in '94. Maybe Synopsys got
it. nth generation.

Synopsys - Name unknown, released yet? 1st generation?
Cadence - Name unknown, released yet? 1st generation?
ViewLogic - Name unknown, released yet? 1st generation?

Charlie Burns

Apr 17, 1996
Charlie Burns (cbu...@crl.com) wrote:
: I would like to see a thread about cycle based simulation. Perhaps
: we could discuss...

Thanks to all those who have responded to my original posting.
It has been interesting reading so far. Here's a short list
of papers I've found pertaining to cycle based simulation (and
variants thereof).

Charlie Burns
cbu...@crl.com

Z. Barzilai, J. Carter, B. Rosen, and J. Rutledge
"HSS - A High Speed Simulator"
IEEE Transactions on Computer Aided Design, vol. 6, pp. 601-616, July 1987

M. Chiang and R. Palkovic
"LCC Simulators Speed Development of Synchronous Hardware"
Computer Design, March 1986, pp. 87-92

L. Wang, N. Hoover, E. Porter, and J. Zasio
"SSIM: A Software Levelized Compiled Code Simulator"
Proceedings of the 24th IEEE/ACM DAC, 1987, pp. 2-8

C. Hansen
"Hardware Logic Simulation by Compilation"
Proceedings of the 25th IEEE/ACM DAC, 1988, pp.712-715

E. Shriver and K. Sakallah
"Ravel: Assigned-Delay Compiled-Code Logic Simulation"
Proceedings of the 1992 IEEE International Conference on Computer Aided Design
pp. 364-368

Z. Wang and P. Maurer
"LECSIM: A Levelized Event Driven Compiled Logic Simulator"
Proceedings of the 27th IEEE/ACM DAC, 1990, pp.491-496

Also, two recent, very interesting papers about where the technology
may be going. Both of these discuss evaluation methods to speed up
cycle based simulation:

P. Ashar and S. Malik
"Fast Functional Simulation Using Branching Programs"
Proceedings of the 1995 IEEE International Conference on Computer Aided Design
pp. 408-413

P. McGeer, K. McMillan, A. Saldanha, A. Sangiovanni-Vincentelli and P. Scaglia
"Fast Discrete Function Evaluation using Decision Diagrams"
Proceedings of the 1995 IEEE International Conference on Computer Aided Design
pp. 402-407


Steven Leung

Apr 18, 1996

Steven Bird <st...@vizef.demon.co.uk> wrote:
> I agree. I only added it for completeness(?). From the validation angle,
> all you seem to acheive is to prove that your synthesis tool mapped the
> logic of your design correctly. Don't they already do that?

In my mind, validation covers all transformation steps after the very
first executable spec has been created. The synthesis step is no doubt
the first step in the entire validation process. Some people may argue
whether validation of synthesis is necessary at all, since by definition
synthesis is correct by construction. But we did experience differences in
simulation results between RTL and gates in the past. Regardless of whether
you want to validate the synthesis results or not, after the synthesis/test
insertion there are still many minor yet sometimes significant
modifications in the physical design process. On top of that, as
verification is done concurrently, it's not uncommon to make some
last minute changes for either a timing or a functional fix. And the
temptation to make these changes at the gate-level circuit proves to
be just irresistible. It is exactly in this late design stage that one
can see the clear advantage of an analysis-based validation approach
over the traditional simulation-based validation approach.

> IBM's internal (and once external) design methodology which, as I
> understand it, was used to great effect at the gate level, was to use
> boolean equivalence and static timing analysis. They, again as I
> understand it, used little to no simulation at the gate level.

I think the issue here is whether simulation-based signoff (simulation
vectors are part of the signoff requirements) has any advantage over static
timing analysis based signoff. From the circuit functionality viewpoint,
I think the most important factor is probably the amount of physical
design work to be done on the vendor's side. If there is significant
physical design work involved, it is to both the user's and the vendor's
benefit to adopt the formal analysis approach to _validate_ (again, not
verify) the final netlist. (At the other extreme, if you do all the
physical design work, then the handoff netlist is already the final one
and there is nothing to compare it with.) Hence, the final netlist, not the
simulation vectors, is really where the functionality of the
design is embodied, as no amount of simulation vectors can cover all possible
I/O + internal state situations. True, simulation vectors can help to
screen out defects introduced in fabrication/manufacturing, but this
type of defect can be addressed more effectively by other means. As to
the timing, no matter which approach is taken, the vendors must have
the technology to back up whatever timing they put in their gate/wire-
load libraries. That is the real bottom line. With that in mind, as
static timing analysis is exhaustive and simulation is not, the former
is clearly more advantageous.

Looking back, it is clear that the role of simulation in validation
(not verification) is in decline. First, static timing analysis
tools already take away the timing part of the validation. Now formal
analysis tools, I believe, are in a position to take away the
functional part. So, I think the future is completely analysis-based
validation.

Chong Guan Tan

Apr 18, 1996
cbu...@crl.com (Charlie Burns) wrote:
>
> I would like to see a thread about cycle based simulation. Perhaps
> we could discuss...
>
> a) what it is
> b) how it works (the more detailed the better)
> c) what makes it fast?
> d) does a cycle based simulator have an event queue? a timewheel?
> e) so there are no delays... so what? Was that much time
> in event driven simulation spent on maintaining a timewheel?
> f) is it an old idea under a new name?
> g) what is the relation to "levelized compiled code simulation"?
> h) does it imply the combining / optimization of logic between
> the flops?
> i) given a fully synchronous gate level design with no delays,
> is cycle based simulation faster than event driven simulation?
> Why?
> j) could a cycle based simulator support more than two levels
> of logic?
>
> Or, could someone point me to some papers I could read?
>
> Thanks,
>
> Charlie

Since people have answered most of the questions, I will answer only those
I believe were answered wrong:

f) Cycle based simulation is nothing new. Very old, in fact. Back in the
good old days of the '80s, we at AIDA had already shipped our cycle based
logic and fault simulators.
g) Levelized compiled code is one of the techniques used to build a cycle
based simulator. Levelization is always something you do in a CBS,
or you will not be able to take full advantage of CBS; compiled code is
the instrument that realizes the simulation.
h) This is optional, just like a CD player is optional when you buy a car.
i) In theory, CBS should be faster, due to the levelization and to not
queueing the events.
j) There is no reason CBS is limited to 2 levels of logic. At AIDA,
we only did 4 levels of logic.

There is one paper, in the proceedings of DAC '87, by Nathan Hoover and
company, which you may want to take a look at.


Bernd Paysan

Apr 18, 1996
John Williams wrote:
> Formal verification is being hyped by quite a few people who are not involved
> in the industry. Quite a lot of University research goes into it, and the
> results are less than impressive. As far as I can see formal verification
> will always trail Moore's law. Some low end tools, such as Chrysalis, may
> prove to be useful, but don't expect to do in formal verification what you
> can't already do in synthesis. For formal verification at more abstract
> levels ( higher order logic theorem proving ), the essential problem, that
> is, does this design do what I think it should, becomes more obscure,
> cryptic, and fractured.

I can back this up. I discussed formal verification (his special point of
interest) with a professor (er, his assistant ;-) here at TU München. IMHO the
most problematic point of formal verification is that you have to write things
twice (once to implement and once to specify). How would you write
the specification of a Verilog module? In VHDL? And why not take the
specification and compile it to the implementation language? Have a look at the
history of both Verilog and VHDL: they started as specification languages, not
as actual implementation languages. The problem of "non-synthesizable code"
results from this approach (especially in VHDL).

One student of this professor (who did his diploma degree there) worked on
formal test pattern generation. He said his test pattern generator can test a
GAL with 20 input bits in less than a second. I answered that a logic analyzer
can do this with a brute force approach, because these things run at 2 MHz or
more; since 2^20 is about a million, it takes about half a second to test it.
He said that he would then have to correct the optimistic assumptions about
test speed improvement in his diploma thesis - I hope his work was a success :-)...

> Random testing will easily dominate verification strategies, at least until
> Moore's law expires.

My favourite verification strategy is "intelligent testing": generate
input patterns with an unusually high probability of hitting edge conditions.
How long would it take if you randomly tested an FPU add unit until it
produced infinity or denormals?
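
As a purely illustrative sketch (made-up weights and values, not a real
DV tool), a biased operand generator for an FPU adder could look like
this - roughly a third of the operands are forced to be the edge cases
that purely random data would almost never produce:

/* Illustrative biased stimulus generation for an FPU add unit.       */
#include <float.h>
#include <math.h>
#include <stdio.h>
#include <stdlib.h>

static double biased_operand(void)
{
    const double edge[] = {
        0.0, -0.0, INFINITY, -INFINITY, NAN,
        DBL_MIN, DBL_MIN / 4.0,            /* a denormal */
        DBL_MAX, -DBL_MAX
    };
    if (rand() % 3 == 0)                   /* bias toward edge cases */
        return edge[rand() % (sizeof edge / sizeof edge[0])];
    /* otherwise an 'ordinary' random value across a wide range */
    return ((double)rand() / RAND_MAX * 2.0 - 1.0) * 1.0e308;
}

int main(void)
{
    for (int i = 0; i < 10; i++) {
        double a = biased_operand(), b = biased_operand();
        printf("%g + %g = %g\n", a, b, a + b);   /* stimulus for the DUT */
    }
    return 0;
}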

--
Bernd Paysan
"Late answers are wrong answers!"
http://www.informatik.tu-muenchen.de/~paysan/

Stan Bailes

Apr 19, 1996
In article <JWILL.96A...@netcom22.netcom.com>,

John Williams <jw...@netcom22.netcom.com> wrote:
>Formal verification is being hyped by quite a few people who are not involved
>in the industry. Quite a lot of University research goes into it, and the
>results are less than impressive. As far as I can see formal verification
>will always trail Moore's law. Some low end tools, such as Chrysalis, may
>prove to be useful, but don't expect to do in formal verification what you
>can't already do in synthesis. For formal verification at more abstract
>levels ( higher order logic theorem proving ), the essential problem, that
>is, does this design do what I think it should, becomes more obscure,
>cryptic, and fractured.
>
>Random testing will easily dominate verification strategies, at least until
>Moore's law expires.

Yes and no. There *is* a lot of hype surrounding formal verification,
and there is an astounding amount of academic work with little
apparent applicability to real-world problems. But there are some
outstanding success stories "in the industry". SGI has used formal
verification techniques (in particular, symbolic model checking using
smv) to verify an extremely complicated cache-coherence protocol --
some of the bugs found would *never* have been found by simulation.
We've also used smv to verify multi-ported FIFOs and I/O protocols.

Formal verification is not for everybody, and will not replace
simulation in the foreseeable future. BDD-based methods do not, in
general, work well when used by non-expert users. For FV to succeed,
it requires substantial support/involvement from the designers, a
rigorous specification, insights into the key abstractions of the
design, and a fairly deep understanding of the limitations of BDDs.
But there is a class of problems that can be solved no other way.
--
Stan Bailes e-mail: st...@sgi.com
Silicon Graphics, Mountain View, CA voice: (415) 933-1995
When cryptography is outlawed, bayl bhgynjf jvyy unir cevinpl

John Williams

Apr 24, 1996
In article <4l8tn4$j...@murrow.corp.sgi.com> st...@stan.asd.sgi.com (Stan Bailes) writes:

> Formal verification is not for everybody, and will not replace
> simulation in the forseeable future. BDD-based methods do not, in
> general, work well when used by non-expert users. For FV to succeed,
> it requires substantial support/involvement from the designers, a
> rigorous specificaion, insights into the key abstractions of the
> design, and a fairly deep understanding of the limitations of BDDs.
> But there is a class of problems that can be solved no other way.

If you can't see a bug in simulation, then you can't see it in the real
hardware, and therefore, it does not exist. When I was working at X, they
used formal verification to verify the cache coherency and memory ordering
of the bus protocol. Hey, it worked. Could they have done the same thing
with stochastic simulation? Yes, with time to spare. The simplifications
and the amount of the architecture they abstracted out of the model were
by themselves a tremendous effort, only so that the analysis would
terminate within a reasonable amount of time! Therein lies the rub, so to
speak: The class of problems that can be solved through formal verification
are the very small, simple problems. Once you start discussing real world
complexities and the kind of functional interference you get when you share
resources ( like gates on a budget ), FV grows intractable.

As far as rigorous specification goes, I find nearly all the time I am forced
to work with not only vague specifications, but incomplete ones as well,
where decisions are put off for some later time nearer the end of the
project. Some of these can be quite profound!

FV seems to work best in a small, static environment. There may be room for it
in the low end analysis, ALU data paths, gate comparisons, etc. At the
architectural level, however, FV is simply not competitive. It imposes
constraints on the design that are simply not feasible in a competitive
environment.

Hey, I think it's neat technology, but good luck finding a competitive
application for it!

John Williams

John Williams

Apr 24, 1996

> My favourite verification strategy is "intelligent testing", thus generate
> input patterns with a unlikely high probability to spot edge conditions. How
> long will it take if you randomly test a FPU add unit until it produces
> infinity or denormals?

Don't get me started on the AI hype!!!!!

Seriously, test compression is a very interesting problem. The intelligence
you speak of exists in the mind of the DV engineer, who ultimately makes
a judgement: Tape out, or not Tape out, that is the question!

John Williams


Steven Bird

Apr 26, 1996

In article <JWILL.96A...@netcom17.netcom.com>, John Williams
<jw...@netcom17.netcom.com> writes

>
>If you can't see a bug in simulation, then you can't see it in the real
>hardware, and therefore, it does not exist.

I hope I'm not taking this out of context..., but the above statement
makes a number of *gross* assumptions, IMHO, these being that your
testbench exercised the design completely with respect to its
intended (and unintended) use, and that your design does not exhibit any
cumulative problems. You can never do enough functional verification.
I'm beginning to feel that functional verification is complete when the
design team reach a level of confidence, not when every possible
combination, both good and bad, has been applied to the design.

Robert Wood

Apr 26, 1996

In article <g76y$CA9qM...@vizef.demon.co.uk>, Steven Bird <st...@vizef.demon.co.uk> writes:
|> I'm beginning to feel that functional verification is complete when the
|> design team reach a level of confidence not when every possible
|> combination, both good and bad, has been applied to the design.

In my experience, functional verification is complete when the boss says so.
That's often before the design team has reached a significant level of
confidence.

--
Bob Wood ascom Nexion
phone: 508-266-2350
wo...@nexen.com http://arthur.nexen.com:8000/~wood

Krutibas Biswal

Apr 29, 1996

Hello,
Is there a free SDF 2.1 parser available on the internet? If yes,
could anyone please point me to it. Thanks in advance,

Regards,
Krutibas.
=========================================================================
India Software Development Systems, MSGID : [SIVA]
Texas Instruments (India) Pvt Ltd, M/S : 4232 PH : [91-80-5269451]
Bangalore. email : bis...@india.ti.com
=========================================================================


Tsu-Hua Wang

Apr 29, 1996

For the confidence issue, I'd like to suggest the paper
"Practical Code Coverage for Verilog", IVC'95, written by myself
and C.G. Tan.

We try to address the issue of getting both the boss and the engineer to
agree on the semantics of "confidence". We try to quantify
"confidence" in terms of numbers (engineering) instead of
feelings (art).

The tool has been successfully used by a few projects. Many of my
colleagues felt the tool is quite useful.

-----------------------------
Tsu-Hua Wang
Simulation Technologies
47100 Bayside Parkway
Fremont, CA 94538-9942

email: tsu...@simtech.com
Voice: (510) 226-0286
Fax: (510) 623-4575

John Williams

May 1, 1996

In article <g76y$CA9qM...@vizef.demon.co.uk> Steven Bird <st...@vizef.demon.co.uk> writes:

> I hope I'm not taking this out of context..., but the above statement
> makes a number of *gross* assumptions, IMHO, these being that your
> testbench exercised the design completely with respect to its
> intended (and unintended) use, and that your design does not exhibit any
> cumulative problems. You can never do enough functional verification.
>
> I'm beginning to feel that functional verification is complete when the
> design team reach a level of confidence, not when every possible
> combination, both good and bad, has been applied to the design.

You raise a good point. Maybe I can explain a little better.

First of all, although stochastic ( random ) methods are not 100% air tight,
they terminate. You can build any size model you want, run a test, and it
will terminate. If you take into account Moore's Law, what this translates
into is that the current generation of machines can be effectively used to
simulate the next generation of machines. Formal verification methods require
fragmenting the problem into tiny bits if you want the analysis to terminate.

Second, there are many sources of bugs and coverage holes. The primary
source of coverage holes is oversight due to complexity. Even with formal
verification, when trying to verify an architecture as opposed to comparing
gates or some such, you have to compose a set of theorems which you then
set out to prove. Safety theorems tend to be much simpler and automatic
than liveness theorems, but if you leave one or more out when "proving" the
design, then you can forget about working silicon. Because it's at a more
abstract level, you tend to make a smaller number of bigger mistakes ( than
with simulation ).

Don't let randomness bother you. The air you breathe is a collection of
randomly distributed molecules. If something escapes the random net, it is
almost certainly a test left out of the distribution, rather than some
bizarre statistical aberration. The whole business enterprise is a gamble
to begin with. Very few things about logic design are perfect. Even if
you had the perfect computer, what will prevent it from being crushed by
a giant asteroid?

Excuse my philosophical digression . . .

Ultimately, someone bears the responsibility for judging when the odds are
in favor of tape out. Every day you wait costs money and market share.
Ultimately you have to present the market with not a safe bet, but a good
bet. Corporations win by playing the odds. What are the odds that some other
flaw will affect the silicon and that by delaying tapeout you are delaying
the discovery of this other problem?

The "real" world is far too messy to insist that verification not be. The
time constant is already set, and formal methods are orders of magnitude
too slow for most applications.

John Williams

Michael Woodacre

May 1, 1996

In article <JWILL.96M...@netcom7.netcom.com>, jw...@netcom7.netcom.com (John Williams) writes:

I think this is going a bit too far. I think anyone who has worked on verifying
large complex systems knows that currently there is no 'perfect' way to verify
the design. By attacking the verification from a number of different angles, you
hope to cover most cases, and by using a number of different tools/methods to
achieve your goal, you decrease the likelihood of any one method having some
'blindspot' and so missing some bugs.

At Silicon Graphics we have successfully added formal verification to our design
verification strategy and have found bugs that would have been hard to find using
more traditional simulation methods. Don't get me wrong - we are not about to
throw out our simulation, but we are happy to have another tool to improve the
quality of the designs we tape out, given the tight time frames required to stay
competitive in our market.

Cheers,
Mike


--
Michael S. Woodacre | Phone: (415) 933 4175
UUCP: wood...@sgi.com
USPS: Silicon Graphics, M/S 7U
2011 N. Shoreline Blvd, Mountain View, CA 94039

