
A spectre is haunting this newsgroup, the spectre of metastability


Peter Alfke
Oct 27, 2006, 5:25:09 PM

To paraphrase Karl Marx:
A spectre is haunting this newsgroup, the spectre of metastability.
Whenever something works unreliably, metastability gets the blame. But
the problem is usually elsewhere.

Metastability causes a non-deterministic extra output delay, when a
flip-flop's D input changes asynchronously, and happens to change
within an extremely narrow capture window (a tiny fraction of a
femtosecond !). This capture window is located at an unknown (and
unstable) point somewhere within the set-up time window specified in
the data sheet. The capture window is millions of times smaller than
the specified set-up time window. The likelihood of a flip-flop going
metastable is thus extremely small. The likelihood of a metastable
delay longer than 3 ns is even smaller.
As an example, a 1 MHz asynchronous signal, synchronized by a 100 MHz
clock, causes an extra 3 ns delay statistically once every billion
years. If the asynchronous event is at 10 MHz, the 3 ns delay occurs
ten times more often, once every 100 million years.
But a 2.5 ns delay happens a million times more often!
See the Xilinx application note XAPP094.
You should worry about metastability only when the clock frequency is
so high that a few ns of extra delay out of the synchronizer flip-flop
might cause failure. The recommended standard procedure,
double-synchronizing in two close-by flip-flops, solves those cases.
Try to avoid clocking synchronizers at 500 MHz or more...
So much about metastability.
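The arithmetic behind these numbers follows the standard exponential MTBF model for a synchronizer. A minimal sketch in Python, with the settling time constant tau and the window constant T0 chosen as purely illustrative round numbers (the real values are device-specific and come from measurements such as those behind XAPP094):

```python
import math

def mtbf_seconds(slack_ns, f_clk_hz, f_data_hz, tau_ns=0.036, t0_s=1e-9):
    """Standard synchronizer MTBF model:
    MTBF = exp(slack / tau) / (T0 * f_clk * f_data).
    slack_ns is the settling time available beyond the normal clock-to-out.
    tau_ns and t0_s are device-dependent; the defaults here are
    illustrative only, not taken from any datasheet."""
    return math.exp(slack_ns / tau_ns) / (t0_s * f_clk_hz * f_data_hz)

# 1 MHz async data, 100 MHz clock, 3 ns of extra settling allowed:
base = mtbf_seconds(3.0, 100e6, 1e6)
# Ten times the async event rate -> exactly ten times shorter MTBF:
faster_data = mtbf_seconds(3.0, 100e6, 10e6)
# Allowing only 2.5 ns instead of 3 ns shrinks the MTBF exponentially:
less_slack = mtbf_seconds(2.5, 100e6, 1e6)
```

Note the asymmetry the post describes: the data rate scales the MTBF linearly, while the allowed settling time sits in the exponent.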

The real cause of problems is often the typical mistake, when a
designer feeds an asynchronous input signal to more than one
synchronizer flip-flop in parallel, (or an asynchronous byte to a
register, without an additional handshake) in the mistaken belief that
all these flip-flops will synchronize the input on the same identical
clock edge.
This might work occasionally, but sooner or later subtle differences in
routing delay or set-up times will make one flip-flop use one clock
edge, and another flip-flop use the next clock edge. Depending on the
specific design, this might cause a severe malfunction.
Rule #1: Never feed an asynchronous input into more than one
synchronizer flip-flop. Never ever.
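The failure mode behind Rule #1 can be sketched with a toy timing model (not a circuit simulation; the clock period and the 0.15 ns routing skew below are made-up numbers): two flip-flops fed the same asynchronous edge through slightly different delays disagree about the capture edge whenever the edge lands in a window about as wide as the skew.

```python
import math

CLK_PERIOD_NS = 10.0                  # 100 MHz clock (illustrative)
DELAY_A_NS, DELAY_B_NS = 0.0, 0.15    # hypothetical routing skew to each FF

def capture_cycle(edge_ns, route_delay_ns):
    """Index of the first clock edge at or after the delayed input change."""
    return math.ceil((edge_ns + route_delay_ns) / CLK_PERIOD_NS)

# Sweep async arrival times across one clock period in 0.01 ns steps and
# record the arrivals where the two flip-flops capture on different edges:
splits = [round(k * 0.01, 2) for k in range(1, 1000)
          if capture_cycle(k * 0.01, DELAY_A_NS)
          != capture_cycle(k * 0.01, DELAY_B_NS)]
```

For most arrival times both flip-flops agree, but arrivals just before the clock edge at 10 ns are captured one cycle apart; hence one synchronizer flip-flop per signal, with everything else fed from its output.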

Peter Alfke

Ben Twijnstra
Oct 27, 2006, 6:04:30 PM
Applause!!!

PeteS
Oct 27, 2006, 6:09:00 PM

First, I fully agree that metastability is a very rare issue, and if it
_does_ occur, it is because the designer has not guarded against it. Yes
- the designer has a responsibility to meet the setup and hold times
(which will help prevent such issues).

Second, metastability, as a potential problem, _does_ exist in any FF
based system, by definition. See point number 1.

So I agree with Peter Alfke on this. I understand why the thread was
named that way too :)

I've noticed the large number of recent threads 'blaming' purported
metastability issues for problems, but at (for instance) 10 MHz, a
metastability issue would show up at most once in a few million or so
transactions, and even then only a handful of times at that rate.

So if there's a problem with your design throwing bad data at a rate of
1 in a million or so, check your timing. It's highly unlikely to be
metastability.

I have had *true* metastable problems (where an output would float,
hover, oscillate and eventually settle after some 10s of
*milliseconds*), but those I have seen recently don't qualify :)

Cheers

PeteS


John Kortink
Oct 27, 2006, 6:24:48 PM

On 27 Oct 2006 14:25:09 -0700, "Peter Alfke" <pe...@xilinx.com> wrote:

>To paraphrase Karl Marx:
>A spectre is haunting this newsgroup, the spectre of metastability.
>Whenever something works unreliably, metastability gets the blame. But
>the problem is usually elsewhere.

In my experience, ground bounce is a bigger problem.
Especially in a device that is nearly 'full', it is
wise to invest in a few fabricated grounds (dedicate
a pin at a strategic location, i.e. as far away as
possible from other ground pins, drive it to ground,
and tie it to ground externally).

When you find that moving cells around alleviates or
intensifies observed instabilities, you may want to
look into ground bounce problems.

>Metastability causes a non-deterministic extra output delay, when a
>flip-flop's D input changes asynchronously, and happens to change
>within an extremely narrow capture window (a tiny fraction of a
>femtosecond !). This capture window is located at an unknown (and
>unstable) point somewhere within the set-up time window specified in
>the data sheet. The capture window is billions of times smaller than
>the specified set-up time window. The likelihood of a flip-flop going
>metastable is thus extremely small. The likelihood of a metastable
>delay longer than 3 ns is even less.
>As an example, a 1 MHz asynchronous signal, synchronized by a 100 MHz
>clock, causes an extra 3 ns delay statistically once every billion
>years. If the asynchronous event is at 10 MHz, the 3 ns delay occurs
>ten times more often, once every 100 million years.
>But a 2.5 ns delay happens a million times more often !
>See the Xilinx application note XAPP094
>You should worry about metastability only when the clock frequency is
>so high that a few ns of extra delay out of the synchronizer flip-flop
>might causes failure. The recommended standard procedure,
>double-synchronizing in two close-by flip-flops, solves those cases.

I've found that one synchronizing flip-flop was not enough
in one particular case (from a 4-ish to 50-ish MHz domain).
Two was. Does one ever work reliably? Or has the 'window'
become smaller in the past few years?
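One rough way to see why the second flip-flop helps so dramatically: it grants the first flip-flop nearly a full clock period of settling time, and settling time sits in the exponent of the MTBF model. The sketch below uses hypothetical constants (tau, T0 and both slack figures are invented for illustration, not measured for any device):

```python
import math

TAU_NS, T0_S = 0.05, 1e-9        # hypothetical device constants
F_CLK, F_DATA = 50e6, 4e6        # roughly a 50 MHz clock, 4 MHz async data

def mtbf_s(slack_ns):
    """Exponential MTBF model: more settling slack, exponentially rarer failure."""
    return math.exp(slack_ns / TAU_NS) / (T0_S * F_CLK * F_DATA)

# One FF feeding logic directly: downstream delays eat most of the period.
one_ff = mtbf_s(0.5)
# Two FFs: the second stage leaves the first most of the 20 ns period to settle.
two_ff = mtbf_s(18.0)
```

With these (invented) numbers the single flip-flop fails essentially continuously while the double synchronizer is effectively immortal, which matches the experience that one stage sometimes isn't enough but two is.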


John Kortink
Windfall Engineering

--

Email : kor...@inter.nl.net
Homepage : http://www.windfall.nl

Your hardware/software designs realised !


Jim Granville
Oct 27, 2006, 6:48:40 PM
PeteS wrote:

>
> I have had *true* metastable problems (where an output would float,
> hover, oscillate and eventually settle after some 10s of
> *milliseconds*), but those I have seen recently don't qualify :)

Can you clarify the device/process/circumstances ?

-jg

PeteS
Oct 27, 2006, 6:52:58 PM

This was a discrete design with FETs that I was asked to test (at a
customer site). The feedback loop was not particularly well done, so
when metastability did occur, it was spectacular.

Cheers

PeteS

Peter Alfke
Oct 27, 2006, 7:22:18 PM
Our server did strange things, like filing my (long) posting twice...

The numbers I quoted are from experiments with VirtexIIPro, a few years
ago. (XAPP094)
So it's modern, but not very modern.
Old horror stories of millisecond oscillations must have been based on
TTL circuits, where there are multiple slow stages in the master-latch
feedback loop (the only participant in metastability delays). Modern
CMOS flip-flops are much simpler, and I have never heard of them
oscillating due to metastability.
There is a human tendency to blame every evil on the mystery du jour.
Metastability is not mysterious, but it is rare and non-deterministic,
which bothers some people...
I agree that ground bounce, a.k.a. simultaneous switching outputs
(SSOs), is a far nastier problem.
Peter Alfke

Jim Granville
Oct 27, 2006, 7:53:25 PM

Do you mean they built a D-FF, using discrete FETs?!

I have seen transition oscillations (slow edges) cause very strange
effects in digital devices, but I'd not call that effect metastability.

-jg

Peter Alfke
Oct 27, 2006, 8:32:55 PM
Well, in the beginning of my professional life, I built flip-flops out
of two Ge transistors, 8 resistors, two diodes and two capacitors.
Remember, the term J-K flip-flop comes from a standardized single-FF
pc-board where the connector pins were labeled A-Z, and the set and
reset inputs were on the adjacent central pins J and K.
Not a joke...
Peter Alfke

On Oct 27, 4:53 pm, Jim Granville <no.s...@designtools.maps.co.nz>
wrote:


> PeteS wrote:
> > Jim Granville wrote:
>
> >> PeteS wrote:
>
> >>> I have had *true* metastable problems (where an output would float,
> >>> hover, oscillate and eventually settle after some 10s of
> >>> *milliseconds*), but those I have seen recently don't qualify :)
>
> >> Can you clarify the device/process/circumstances ?
>
> >> -jg
>
> > This was a discrete design with FETs that I was asked to test (at a
> > customer site). The feedback loop was not particularly well done, so

> > when metastability did occur, it was spectacular.
>
> Do you mean they built a D-FF, using discrete FETs?!

PeteS
Oct 28, 2006, 3:02:26 PM

Amusing

I too have made flip-flops from discrete parts in the distant past. The
metastable problem I encountered was due to slow-rising inputs on pure
CMOS (a well-known issue) and was indeed part of the feedback path.

I remember making a D FF using discrete parts only a few years ago
because it had to operate at up to 30VDC. I had to put all the usual
warnings on the schematic page about setup/hold times etc.

There are times when the knowledge of just what a FF (be it JK, D or
M/S) is comes in _real_ handy.

Cheers

PeteS

Peter Alfke
Oct 28, 2006, 5:03:23 PM
We had a discussion at lunch about the future, when we dinosaurs are
gone.
Who will then understand those subtleties, only the tiny cadre of IC
designers?
Many new college graduates' eyes glaze over when I ask them about the
way a flip-flop works, and how it avoids a race condition in a shift
register. And clock skew and hold-time issues.
Hard-earned "wisdom"...
Peter Alfke

PeteS
Oct 28, 2006, 5:50:05 PM

Well, I am not an IC designer (well, not regularly). Perhaps the answer
is education - real education. The new crowd doesn't seem to understand
the fundamentals that are key to successful design of any type, be it
IC, board-level or any other.

Like the other dinosaurs, I've seen and done things most youngsters
don't even consider, but the youngsters that have been around when *I
did them* were awed, and they wanted to learn, so I think there's hope.


I am sure the youngsters that were around when *you* did astounding
things (to them) were awed too. Perhaps it's a matter of
making sure they understand the limitations of their current knowledge :)

It's different in a way - we were *figuring out* what made things work;
nowadays it's taken for granted. We need to make sure the kids
understand that this knowledge is key to successful design.

Cheers

PeteS

PeteS
Oct 28, 2006, 6:24:03 PM

There was a TV show perhaps 20 years ago whose name I do not
remember. In it, the computer that ran the spacecraft (named Mentor
because the female member of the group had thought of the name) refused
to give information about using the transporter system.

It said 'Wisdom is earned, not given'

Cheers

PeteS

Peter Alfke
Oct 29, 2006, 12:16:26 AM
There is a difference: 60 years ago, a curious kid could at least try
to understand the world around him/her.
Clocks, carburetors, telephones, radios, typewriters, etc.
Nowadays, these functions are black boxes that few people really
understand, let alone are able to repair.
Youngsters today can breathe life into a pc by hitting buttons in
mysterious sequences...
Do they really understand what they are doing or what's going on?
"If the engine stalls, roll down the window" :-)

Here is a simple test, flunked by many engineers:
How can everybody smoothly adjust the heat of an electric stove, or a
steam iron?
Hint: It is super-cheap, no Variac, no electronics. Smoke and mirrors?
Answer: it's slow pulse-width modulation, controlled by a self-heating
bimetal strip.
Cost: pennies...
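The stove answer as arithmetic: with slow on/off switching, the average power is just the duty cycle times the full-on power; only the fraction of time the bimetal contact is closed matters. The mains voltage and element resistance below are illustrative, not from any appliance.

```python
# Slow PWM by a self-heating bimetal contact: average power of a
# resistive heater switched fully on or fully off.
V_RMS = 230.0        # mains volts (illustrative)
R_ELEMENT = 26.5     # ohms, roughly a 2 kW heating element (illustrative)

def avg_power_w(duty_cycle):
    """Average power = duty cycle * V^2 / R for an on/off switched resistor."""
    return duty_cycle * V_RMS ** 2 / R_ELEMENT

full = avg_power_w(1.0)    # knob fully up
half = avg_power_w(0.5)    # contact closed half of the time
```

The control is perfectly smooth to the user because the element's thermal mass averages the slow pulses, and the mechanism costs pennies.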

Well, the older generation has bemoaned the superficiality of the
younger generation, ever since Socrates did so, a hundred generations
ago. Maybe there is
hope...
Peter Alfke


Guru
Oct 29, 2006, 12:07:43 PM
Peter,

you're right that youngsters (my friends, for example) don't know how
the computer really works, despite the fact that they use it almost
every day. This secret is revealed only to the most curious youngsters
who devote their time to it. I am sure that there are still youngsters
who are willing to understand the secrets of a silicon brain. Now, in
the age of the internet, information is easily available to anyone
connected, be it a 6-year-old kid or a 100-year-old grandpa (no offence,
Peter). The people with knowledge should be willing to give it to the
masses (e.g. publish it on the net); there will always be someone who
will accept it.
The problem is that a life is too short to explore all the interesting
things. When you explore things you usually begin at the surface of the
problem. Then you remove it layer by layer, like peeling an onion. But
what to do when there are too many layers? Computer technology gains
many new layers every year (an exponential number, according to Moore's
law). I think that nobody can keep pace with these layers. So at some
point you give up and study only the things you prefer. Youngsters are
familiar with games, so some learn how to make one. Others prefer the
secrets of operating systems - they build an OS. Some of them are
interested in HW - like most of us in this newsgroup. I, for example, am
a very curious mechanical engineer. When I got bored in mechanical
engineering, where the pace of development is nothing in comparison to
the electronics industry, I also studied electronics. Now I prefer
electronics for a simple reason - it is far more complex, hence gives me
much more satisfaction when learning.
Sometimes I realise that I am very weak in fundamentals, because I
missed the lectures on the fundamentals of electronics. To be honest, I
don't have a clue what "FF metastability" is or what the cause of it is.
BTW: What is an FF? I imagine it as something like a memory cell.
Despite my poor knowledge of fundamentals I am able to build very
complex computer systems and write software for them... How can that be?
Well, some people are devoted to fundamentals, some to the layer above
that, others to the layer above that layer, and so on. At the end there
are "normal" computer users who do not want to know how the computer
works; they just want to use it for Word, games, watching movies...
I wouldn't worry about passing the knowledge on to youngsters. If there
is a need for that knowledge, they will learn it. So specific knowledge
is learned by a small group of people, but Word usage and email sending
is learned by almost every youngster (in the Western world!). That's
evolution.

Cheers,

Guru

Frank Buss
Oct 29, 2006, 1:19:50 PM
Guru wrote:

> Then you remove it layer by layer, like peeling an onion. What
> to do when there are too many layers. Computer technology gains many
> new layers every year (an exponential number according to moore's law).

Moore's Law says that the transistor density of integrated circuits
doubles every 24 months:

http://en.wikipedia.org/wiki/Moore%27s_Law

I think layers increase more linearly, maybe one every 5 years, like
when upgrading from DOS with direct hardware access to Windows, with an
intermediate layer for abstracting the hardware access, or from the
Intel 8086 to the Intel 386, with many virtual 8086s. So you can keep
pace with the layers. You don't need to be an expert for every layer,
but it is easy to learn the basics about which layers exist, what they
are doing and how they interact with other layers.

It is more difficult to keep pace with all the new components, like
PCIe, new WiFi standards etc., but usually they don't change the layers
or introduce new concepts. If PCs were built with FPGAs instead of CPUs,
and if you started a game that reconfigured part of an FPGA at runtime
to implement special 3D shading algorithms in hardware, this would
change many concepts, because then you wouldn't need to buy a new
graphics card; you could install a new IP core to enhance the
functionality and speed of your graphics subsystem. If it were too slow,
just plug in some more FPGAs and the power would be available for
higher-performance graphics; but when you needed OCR, the same FPGAs
could be reconfigured with neural-net algorithms to do this at high
speed.

There are already some serious applications that use the computational
power of the shader engines of graphics cards, but most of the time
these engines are idle when you are not playing games. Implementing a
CPU optimized for the current task in FPGAs would be much better.

--
Frank Buss, f...@frank-buss.de
http://www.frank-buss.de, http://www.it4-systems.de

Peter Alfke
Oct 29, 2006, 2:20:43 PM
Frank, I was addressing a different issue:
That the knowledge base of the fundamental technology inevitably is
supported by fewer and fewer engineers, so that soon (now!) people will
manipulate and use technology that they really do not understand.
And that is a drastic change from 40 years ago.
I think you understand German, so you can appreciate Goethe's words:

Was du ererbt von deinen Vätern hast,
Erwirb es, um es zu besitzen!
(What you have inherited from your fathers, earn it, in order to possess it.)

Cheers
Peter

On Oct 29, 10:19 am, Frank Buss <f...@frank-buss.de> wrote:
> Guru wrote:
> > Then you remove it layer by layer, like peeling an onion. What
> > to do when there are too many layers. Computer technology gains many

> > new layers every year (an exponential number according to moore's law).

Frank Buss
Oct 29, 2006, 2:38:42 PM
Peter Alfke wrote:

> That the knowledge base of the fundamental technology inevitably is
> supported by fewer and fewer engineers, so that soon (now!) people will
> manipulate and use technology that they really do not understand.

The good news: the fewer people who know the basics, the more you can
earn when a customer needs them :-)

KJ
Oct 29, 2006, 3:15:58 PM

"Peter Alfke" <al...@sbcglobal.net> wrote in message
news:1162149643.1...@i42g2000cwa.googlegroups.com...

> Frank, I was addressing a different issue:
> That the knowledge base of the fundamental technology inevitably is
> supported by fewer and fewer engineers,
But with a capitalistic economy one would expect that it will be supported
by the proper number of engineers, and if that number is 'small' then those
few will earn more money than if there were a 'lot' of those engineers. In
any case, the appropriate amount of money will be spent on those
functions....but of course nowhere is there a true capitalistic economy.
Still, I expect that the knowledge/skill will in fact be transferred if
there still exists a market for it.

> so that soon (now!) people will
> manipulate and use technology that they really do not understand.
> And that is a drastic change from 40 years ago.

Ummm.....the true 'fundamental' knowledge underpinning all electronics as we
understand it today is contained in Maxwell's equations and quantum
mechanics. I'd hazard a guess that engineers have been designing without
true knowledge of both for far longer than 40 years.

What you're considering as fundamental seems to be the things that you
started your career with and that were thought to be 'fundamental' back
then, like how flip-flops are constructed from transistors, why delays are
important, bandwidths of transistors, etc. But even those things are
abstractions of Maxwell and quantum....is that a 'bad' thing? History would
indicate that it's not. The electronics industry has done quite well in
designing many things without having to hark back to Maxwell and quantum.
There is the quote about seeing farther by standing on the shoulders of
giants that comes to mind.

How far away one can get without knowledge of what is 'fundamental',
though, is where catastrophes can happen. But productivity improvements
over time are driven by having this knowledge somehow 'encoded' and moving
away from direct application of that fundamental knowledge, so that the
designers of the future do not need to understand it all....as stated
earlier, there are many layers to the onion, too many to be fully grasped
by someone who also needs to be economically productive to society
(translation: employable).

There is the real danger of not passing along the new 'fundamentals' to the
next generation, so that lack of knowledge of the old does not result in
failures of the future. What exactly the new 'fundamental' things are is
subjective....but in any case they won't truly be 'fundamental' unless they
are a replacement for Maxwell's equations and the theory of quantum
mechanics.

KJ


Nicolas Matringe
Oct 29, 2006, 3:20:27 PM
Peter Alfke wrote:

> To paraphrase Karl Marx:
> A spectre is haunting this newsgroup, the spectre of metastability.
> Whenever something works unreliably, metastability gets the blame. But
> the problem is usually elsewhere.

Strangely enough, this newsgroup is the only place where I have seen the
aforementioned ghost in 10 years of digital design.
(yeah, I know Peter, these 10 years make me look like a baby compared to
your experience ;o)

[...]

> Rule #1: Never feed an asynchronous input into more than one
> synchronizer flip-flop. Never ever.

Got bitten once there. Never twice.

Nicolas

PeteS
Oct 29, 2006, 4:17:10 PM

Well, there's fundamentals and there's fundamentals :)

One I see missing is an intuitive feel for transmission lines. For
years, new engineers were churned out with the mantra of 'everything's
going digital and we don't need that analog crap', but when edge rates
are significantly sub-microsecond everything's a transmission line.

Certainly it has enhanced my employability that I learned those things
both in theory and hard earned practice, but far more people need to
learn these things in a world of ultra-highspeed interconnects. One
cannot always trust software simulations[1], quite apart from the issue
of setting up a layout.[2]

This is a fundamental, at least imo, and it doesn't seem to be getting
the attention it deserves.[3]

Other things could be cited, of course. Using a technology one does not
understand is all well and good while it works. When it doesn't, the
person is stumped because they don't understand the underlying principles.

[1] The most amusing software bug I ever had was an updated release of
Altera's HDL tools where it synthesised a SCSI controller to a single
wire. That was the predecessor to Quartus and happened in '98.

[2] Really highspeed systems have the layout defined for the
interconnect [for best signal integrity and EMC issues], which then
determines part placement, which is almost 180 degrees out from standard
layouts.

[3] This is a huge growth industry for those with the requisite
knowledge; see e.g. Howard Johnson et al. They'll give a seminar at a
company for a few $10k or so for a day, plus printed materials.


Cheers

PeteS

Will Dean
Oct 30, 2006, 6:48:22 PM
"KJ" <kkjen...@sbcglobal.net> wrote in message
news:2_71h.17705$TV3....@newssvr21.news.prodigy.com...

>
> What you're considering as fundamental seem to be the things things that
> you started your career with and were thought to be 'fundamental' back
> then,

This was EXACTLY the point I wanted to make...

There are two issues here - one is a valid point about it being useful to
understand one or two layers below the abstraction you're working at, and a
tangential one which is merely a placeholder for the vague regret we all
feel that very much younger people are capable of doing something like our
jobs, using a changing set of skills.

Technological progression, in pushing down what's fundamental and up
what's possible, RELIES on people being able to concentrate on only a
limited number of layers in the stack.

The guys at CERN can't spend their time worrying about how you would use a
boson to make a better automobile door handle any more than people
programming desktop computers should worry about electronics.

One can endlessly and enjoyably debate which particular things are
'fundamental' to solving a particular task, but one shouldn't fool oneself
that there's a right answer.

Will

Peter Alfke
Oct 30, 2006, 7:51:31 PM
I agree, and I was not making a moral statement.
Just that the ranks of engineers that can debug low-level (fundamental)
problems are shrinking.
Soon only IC designers will understand these things (because they are
still their livelihood), since everybody else has "moved up". ( I have
a son who works in software R&D, and we have very limited common ground
in electronic things).
I was, however, bemoaning the fact that so many things in our lives
have become black mystery boxes that defy "healthy curiosity". And that
phenomenon is new, within the last 50 years, a short time in the
evolution of technology.
Peter Alfke


On Oct 30, 3:48 pm, "Will Dean" <w...@nospam.demon.co.uk> wrote:
> "KJ" <kkjenni...@sbcglobal.net> wrote in message news:2_71h.17705$TV3....@newssvr21.news.prodigy.com...


>
>
>
> > What you're considering as fundamental seem to be the things things that
> > you started your career with and were thought to be 'fundamental' back

> > then,

gallen
Oct 30, 2006, 10:21:14 PM
I consider myself to still be a youngster. I'm only 24 years old and
I'm relatively recently out of college, but I find nothing you mention
here foreign. This stuff is still being taught in schools (though I
might argue my school didn't do a great job of it). The reality of it
all is that low level electronics remains useful. I have never once
regretted understanding how a transistor works. I have recently been
looking at flip-flop designs, since my company was having a hard time
meeting timing. While I'm not an expert, others are, and I've yet to
meet a person who knows this kind of stuff and doesn't want to
share that knowledge.

The kinds of things I deal with on perhaps a monthly basis are:
* What are the costs of a transmission-gate input flip-flop versus a
CMOS input?
* Can Astro synthesize a 4 GHz clock tree?
* How much drive would it take to overpower the drive of another cell
(multiple outputs tied together)?
* What are the possible resolve states when you have a race on an async
set/reset flop?

People still have to solve these problems. They aren't going away.
The younger engineers still face these.

Now I admit that I do work as an IC designer, but ICs are here to stay.
They may become fewer, but as long as they exist and get more
complicated, plenty of people will be employed in that industry.

My point to add to this is that many older engineers have difficulty
grasping new ways of operating. Convincing experienced engineers that
synthesis tools actually work can be like pulling teeth sometimes.
Just the other day, some engineers were ranting about some code that a
contractor wrote that was very very behavioral. They were complaining
about how that was killing timing and adding 10s to 100s of levels of
logic. They hadn't tried it out. I ran it through the synthesizer and
it was *faster* than the low level code.

I don't see knowledge of the really low level stuff going away. In
fact I see it increasing. Things like quantum physics and Maxwell's
equations are getting used more and more to make electronics work.
TCAD engineers live in this realm and TCAD is getting used more and
more for things like process definition and modeling. What I see
happening is the rift between the low level process/cell designers and
the logic designers growing as the logic designers get more high level
and the process/cell designers have to get closer to the real physics
of the system. Not all of the knowledge is necessary for all parties.
The fact is that if a good library is present (and nothing super funky
is in the design), a logic designer doesn't need to know electronics.
They simply need to know how to work with the models that are
employed by the tools.

-Arlen

Peter Alfke
Oct 30, 2006, 11:35:49 PM
Good for you, Arlen.
Over the past 35 years I have interviewed many hundreds of new college
grads. Among others I always asked a very simple question:
Show me how you analyze the max clock frequency of a 2-bit shift
register (what data-sheet parameters do you need, where do they apply,
and what is the math between them?). Most can do that, after some
prodding. Then: What happens to the max frequency when there is clock
skew between the two flip-flops? About half gave the wrong answer to
this slightly tricky question. So they did not get hired... In a new
grad I do not look for factual knowledge, but for the ability to think
clearly.
Some passed this test with flying colors, sometimes amazed at the
implication of their own answer.
I was looking for nuts-and-bolts applications engineers, and other
interviewers tested their systems and software skills. We were (and
are) pretty selective...
Peter Alfke
========================
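The interview question can be worked as a quick sketch with hypothetical flip-flop parameters (the numbers below are invented, not from any datasheet). Moderate skew toward the capturing flip-flop actually raises the maximum frequency; the trap is the hold check, because a hold violation cannot be fixed by slowing the clock down.

```python
# 2-bit shift register, FF1 feeding FF2, with FF2's clock arriving
# t_skew later than FF1's. All timing numbers are hypothetical.
T_CO, T_SU, T_HOLD = 1.0, 0.6, 0.3    # ns: clock-to-out, setup, hold

def max_freq_mhz(t_skew_ns):
    """Setup limit: period >= t_co + t_su - t_skew, so skew toward the
    capturing FF raises the max frequency. Hold limit: t_co must exceed
    t_hold + t_skew; if it doesn't, the register fails at *any*
    frequency, so return None."""
    if T_CO <= T_HOLD + t_skew_ns:
        return None                      # hold violation, frequency-independent
    return 1e3 / (T_CO + T_SU - t_skew_ns)

no_skew = max_freq_mhz(0.0)     # 1000 / 1.6 = 625 MHz
a_little = max_freq_mhz(0.4)    # faster: 1000 / 1.2, about 833 MHz
too_much = max_freq_mhz(0.8)    # hold fails outright
```

The counter-intuitive part is that a little skew makes the shift register appear faster on paper while silently eroding the hold margin, which is presumably why about half the candidates stumbled.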

KJ
Oct 31, 2006, 6:15:04 AM

"Peter Alfke" <al...@sbcglobal.net> wrote in message
news:1162255891....@m7g2000cwm.googlegroups.com...

>I agree, and I was not making a moral statement.
> Just that the ranks of engineers that can debug low-level (fundamental)
> problems are shrinking.
But you haven't established that this is a 'problem'. Shrinking to fit the
number of slots that the world needs is economically sound. Shrinking below
that will cause a shortage which will cause the price of people who have
those skills to go up.

> Soon only IC designers will understand these things (because they are
> still their livelihood), since everybody else has "moved up".

But maybe they will be the only ones who use this on a day-to-day basis and
have an actual need to know it. Can you say that you understand the
operation of flip-flops and can demonstrate this using equations at the
quantum-mechanical level, or can you even compute the fields that will be
produced by that switching flip-flop using Maxwell's equations? Maybe you
can, but I'll hazard to say that if forced to do this in front of someone
who is skilled in either or both of these theories, then probably not.

> ( I have
> a son who works in software R&D, and we have very limited common ground
> in electronic things).

But maybe he is very skilled in other areas....keep in mind Adam Smith and
the division of labor in economic theory.

> I was, however, bemoaning the fact that so many things in our lives
> have become black mystery boxes that defy "healthy curiosity". And that
> phenomenon is new, within the last 50 years, a short time in the
> evolution of technology.

My point on the earlier post was that it has gone on for much longer. The
true fundamentals of electronics haven't changed in roughly a century
(Maxwell and quantum) and yet I would hazard to say that the number of
engineers designing electrical or electronic equipment that directly use
these theories is pretty close to 0....and would speculate that that is also
roughly the number of engineers who directly used it 10, 20, 30, etc......
years ago as well.

I don't think many things defy healthy curiosity. But as designers get
more and more productive, there is more and more knowledge that one must
accumulate to satisfy that curiosity completely, not because the
fundamentals are changing but because the approximations and shortcuts
used above those fundamentals to realize those productivity improvements
grow each year. One can still do it; one can still specialize in any of
those areas if you choose to, and there will still generally be a market
for people who have accumulated more of that specialized skill... if it
is still relevant to the world at large.

KJ


KJ

unread,
Oct 31, 2006, 6:27:44 AM10/31/06
to

"Peter Alfke" <al...@sbcglobal.net> wrote in message
news:1162269349....@k70g2000cwa.googlegroups.com...

> Good for you, Arlen.
> Over the past 35 years I have interviewed many hundreds of new college
> grads. Among others I always asked a very simple question:
> Show me how you analyze the max clock frequency of a 2-bit shift
> register (what data-sheet parameters do you need, where do they apply,
> and what is the math between them?).
> Most can do that, after some
> prodding. Then: What happens to the max frequency when there is clock
> skew between the two flip-flops. About half gave the wrong answer to
> this slightly tricky question. So they did not get hired...In a new
> grad I do not look for factual knowledge, but for the ability to think
> clearly.
Kind of missed how you factored in the stress associated with the interview
process. It's easy to sit on the side of the questioner in that situation,
not so easy the other way around. Sometimes the weak answers have nothing
to do with the skills of that person but reflect how that person tenses up
in stressful situations. If they do, then maybe they're not appropriate for
a client-facing position, but maybe you're not interviewing someone for that
type of position either.

> Some passed this test with flying colors, sometimes amazed at the
> implication of their own answer.

Being able to think quickly on your feet is a skill that can help land that
job offer. It can also come in handy when on the job...but along with other
skills too.

> I was looking for nuts-and-bolts applications engineers, and other
> interviewers tested their systems and software skills.

What about them 'people skills'? The arrogant ones who know the nuts and
bolts and flew through your test might be rather disruptive in the work
place. Generally they become sidelined because of their arrogance...or work
their way up the management chain to become CEO.

> We were (and are) pretty selective...

This made me ponder why most of the posts about problems with tools are
centered around brand X. 'Most' here meaning that the percentage of brand X
questions/complaints appears to be far above brand X's market share. It
could just be my perception of the posts though.

KJ


KJ

unread,
Oct 31, 2006, 8:37:59 AM10/31/06
to

KJ wrote:
Please ignore my previous post in the section after....

> > We were (and are) pretty selective...

and accept my apologies for the inference.

KJ

Tom

unread,
Oct 31, 2006, 9:23:25 AM10/31/06
to

Nicolas Matringe wrote:
> Peter Alfke wrote:

> > Rule #1: Never feed an asynchronous input into more than one
> > synchronizer flip-flop. Never ever.
>
> Got bitten once there. Never twice.

Is it possible that this situation could occur if register duplication
is enabled (to improve timing) in the tools (eg XST)?

If so, is there a method to mark the synchronizer in HDL to ensure it
is never automatically duplicated?

Tom
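The failure mode behind Rule #1 quoted above (never feed an async input into more than one synchronizer flip-flop) can be seen in a toy Monte Carlo model; all timing numbers here are arbitrary illustration:

```python
import random

def sample(edge_time, sample_time):
    """Ideal flip-flop model: captures 1 if the async input rose
    before this flop's effective sampling instant, else 0."""
    return 1 if edge_time < sample_time else 0

def duplicated_sync_disagreements(trials=100_000, skew=0.001):
    """Monte Carlo: an async edge lands uniformly within a clock period
    of 1.0 time units; two 'duplicated' synchronizer flops have
    sampling instants that differ by a tiny skew (here 0.1% of the
    period). Count cycles where the two copies capture different values."""
    random.seed(0)  # deterministic for the example
    disagree = 0
    for _ in range(trials):
        edge = random.random()        # async edge position in the period
        a = sample(edge, 0.5)         # flop A samples at t = 0.5
        b = sample(edge, 0.5 + skew)  # flop B samples slightly later
        if a != b:
            disagree += 1
    return disagree

# Even a 0.1%-of-a-period skew makes the duplicates disagree in roughly
# 0.1% of cycles -- and every disagreement is a cycle in which downstream
# logic sees two different values of what it thinks is one input.
print(duplicated_sync_disagreements())
```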

Andy

unread,
Oct 31, 2006, 9:25:45 AM10/31/06
to
The first synchronizing flip flop on an async input will experience
metastability, sooner or later. Whether that metastability lasts long
enough to cause a functional problem depends on how the output is used.
If it becomes a causal input (i.e. clk or async rst) to something else,
it can become a problem very quickly (read: "don't do that"). If there
is very little timing margin to the next non-causal (i.e. D, CE, or
sync rst) input(s), then it can also cause problems fairly quickly. The
admonishment to "add a second flop" is usually an attempt to create a
high slack/margin path to the next clocked element, but may not be
sufficient. Ideally, that path (or any path out of the first
synchronizing flop) should be constrained to be faster than the clock
period would indicate, to force the synthesis/P&R process to provide
extra timing margin (slack), in case MS should delay the output a bit.
The more slack/margin, the more immunity to MS a design has. Also, the
first synchronizing flop on an input should have a no-replicate
constraint on it, just in case the synth/P&R tool wants to replicate it
to solve fanout problems from that first flop.

Also recognize that even async rst/prst inputs to flops must be
properly synchronized with respect to the deasserting edge, since that
edge is effectively a "synchronous" input, subject to setup/hold
requirements too.

Whether a problem is caused by metastability or by improper
synchronization, it is solved by the same proper synchronization
techniques. It is true that MS has been reduced significantly by the
newer, faster FPGA devices, but it is not totally eliminated, and the
higher speeds & tighter timing margins of designs implemented in these
FPGAs at least partially offset the improvements in MS in the flops
themselves.
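Andy's point that more slack means more immunity to MS is usually quantified with the classic metastability MTBF model; the constants below are illustrative placeholders, not figures from any data sheet:

```python
import math

def mtbf_seconds(slack_ns, f_clk_hz, f_data_hz, tau_ns=0.05, t0_s=1e-9):
    """Classic metastability MTBF model:

        MTBF = exp(slack / tau) / (T0 * f_clk * f_data)

    slack_ns : timing margin from the synchronizer flop to the next flop
    tau_ns   : metastability resolution time constant (process-dependent)
    t0_s     : effective metastability capture-window constant
    The tau and T0 values here are invented for illustration."""
    return math.exp(slack_ns / tau_ns) / (t0_s * f_clk_hz * f_data_hz)

# Each extra nanosecond of slack multiplies MTBF by exp(1 ns / tau):
low  = mtbf_seconds(1.0, 100e6, 1e6)
high = mtbf_seconds(2.0, 100e6, 1e6)
print(high / low)   # e**(1.0/0.05) = e**20, a factor of ~4.85e8
```

This exponential dependence on slack is why constraining the synchronizer's output path faster than the clock period would require pays off so dramatically.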

Follow the guidelines in the app notes for simultaneous switching
outputs, and properly ground/bypass the on-board PDS, and ground bounce
will not be an issue. Once it becomes an issue, there are numerous
"creative" solutions to the problem, but they are best avoided up
front.

Andy

John Kortink wrote:


> On 27 Oct 2006 14:25:09 -0700, "Peter Alfke" <pe...@xilinx.com> wrote:
>
> >To paraphrase Karl Marx:
> >A spectre is haunting this newsgroup, the spectre of metastability.
> >Whenever something works unreliably, metastability gets the blame. But
> >the problem is usually elsewhere.
>

> In my experience, ground bounce is a bigger problem.
> Especially in a device that is nearly 'full' it is
> wise to invest in a few fabricated grounds (dedicate
> a pin at a strategic location, i.e. as far away as
> possible from other ground pins, drive it to ground,
> and tie it to ground externally).
>
> When you find that moving cells around alleviates or
> intensifies observed instabilities, you may want to
> look into ground bounce problems.


>
> >Metastability causes a non-deterministic extra output delay, when a
> >flip-flop's D input changes asynchronously, and happens to change
> >within an extremely narrow capture window (a tiny fraction of a
> >femtosecond !). This capture window is located at an unknown (and
> >unstable) point somewhere within the set-up time window specified in
> >the data sheet. The capture window is billions of times smaller than
> >the specified set-up time window. The likelihood of a flip-flop going
> >metastable is thus extremely small. The likelihood of a metastable
> >delay longer than 3 ns is even less.
> >As an example, a 1 MHz asynchronous signal, synchronized by a 100 MHz
> >clock, causes an extra 3 ns delay statistically once every billion
> >years. If the asynchronous event is at 10 MHz, the 3 ns delay occurs
> >ten times more often, once every 100 million years.
> >But a 2.5 ns delay happens a million times more often !
> >See the Xilinx application note XAPP094
> >You should worry about metastability only when the clock frequency is
> >so high that a few ns of extra delay out of the synchronizer flip-flop
> >might cause failure. The recommended standard procedure,
> >double-synchronizing in two close-by flip-flops, solves those cases.
>

> I've found that one synchronizing flip-flop was not enough
> in one particular case (from a 4-ish to 50-ish MHz domain).
> Two was. Does one ever work reliably ? Or has the 'window'
> become smaller in the past few years ?
>
>
> John Kortink
> Windfall Engineering
>
> --
>
> Email : kor...@inter.nl.net
> Homepage : http://www.windfall.nl
>
> Your hardware/software designs realised !

Nicolas Matringe

unread,
Oct 31, 2006, 10:28:32 AM10/31/06
to
Tom wrote:

> Nicolas Matringe wrote:
>> Peter Alfke wrote:
>>> Rule #1: Never feed an asynchronous input into more than one
>>> synchronizer flip-flop. Never ever.
>> Got bitten once there. Never twice.
>
> Is it possible that this situation could occur if register duplication
> is enabled (to improve timing) in the tools (eg XST)?

In theory it could.


> If so. is there a method to mark the synchronizer in HDL to ensure it
> is never automatically duplicated?


I am sure there is but I haven't used a Xilinx part for quite a long
time. Austin or Peter (or many other readers) could give you a more
accurate answer.

Nicolas

Ben Jones

unread,
Oct 31, 2006, 10:57:37 AM10/31/06
to

"Nicolas Matringe" <nicolas....@fre.fre> wrote in message
news:45476ba0$0$3879$426a...@news.free.fr...

> Tom wrote:
>> Nicolas Matringe wrote:
>>> Peter Alfke wrote:
>>>> Rule #1: Never feed an asynchronous input into more than one
>>>> synchronizer flip-flop. Never ever.
>>> Got bitten once there. Never twice.
>> Is it possible that this situation could occur if register duplication
>> is enabled (to improve timing) in the tools (eg XST)?
> In theory it could.

You may be right, but I think it's unlikely. If you're re-synchronizing
properly then there are two levels of FFs, and the front-end FFs have a
fanout of exactly one net each. So there is nothing to be gained by
duplicating them, even if the back-end stage has a high fanout (the second
stage would be duplicated instead).

In fact, register duplication rarely makes timing better; in many
high-performance pipelined designs it can make it much worse (explanation
available on demand).

>> If so. is there a method to mark the synchronizer in HDL to ensure it
>> is never automatically duplicated?

Yes. You can use the REGISTER_DUPLICATION constraint in your source code or
XCF file to specifically turn this feature on or off for a specific entity
or module.
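For reference, the attribute form usually looks something like the sketch below; the exact attribute spelling and legal values vary with the XST version, and the signal name is invented, so check both against the current constraints guide:

```vhdl
-- Sketch only: verify the attribute name/values against your XST manual.
attribute register_duplication : string;
-- Forbid duplication of the first synchronizer flip-flop:
attribute register_duplication of async_meta_ff : signal is "no";
```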

Cheers,

-Ben-


PeteS

unread,
Oct 31, 2006, 2:51:09 PM10/31/06
to

I have no problem using synthesis tools, although I have a healthy
skepticism of *any* software-based tool (which does not mean I won't use
it or its conclusions, merely that all non-trivial software has bugs).

As to a logic designer not needing to know the electronics; that's only
true if said designer is only designing for where synthesis (or models
that will be used) happens to be available. I've yet to see an LSI or
larger device where the IO pins could be directly attached to 48V, (and
be cheaper than the discrete alternative) yet that is a pretty standard
logic design issue in some industries.

A POR indicator circuit for a 24V vehicle, for instance, could be
constructed from standard cells, but at some point we meet the (very
nasty) 24V system (which can go up to 80V during load dump and droop
regularly during engine cranking). Of course, when designing power
supplies (which I also do quite regularly) I expect those sorts of
challenges.

Incidentally, the 'logic designer' syndrome you mention is precisely
what I was railing against in an earlier post; it's shortsighted and
foolish. A logic designer that can do logic but not electronics is _not_
an electrical/electronics engineer - they are either a software engineer
or a mathematician.

Kudos to you for learning the low level parts.

As with Peter Alfke, I too am very particular about who we hire.
Generally I would prefer not to hire anyone than hire someone who
doesn't have the urge to seek out answers and think for themselves. I am
fully aware that such people _will_ make mistakes (it's an occupational
hazard) but I would prefer that to hand-holding.

I worry that too few people who call themselves electrical/electronic
engineers actually know enough about physical-layer engineering.

Cheers

PeteS

Andreas Ehliar

unread,
Oct 31, 2006, 4:08:42 PM10/31/06
to
On 2006-10-31, Ben Jones <ben....@xilinx.com> wrote:
> In fact, register duplication rarely makes timing better; in fact in many
> high-performance pipelined designs, it can make it much worse (explanation
> available on demand).

I guess I'll bite and see if my understanding is close to what you have
in mind:

My feeling is that register duplication could worsen a design with
combinatorial logic followed by a flip-flop. Either the combinatorial
logic has to be duplicated (which would enlarge the design and perhaps
slow down the circuit due to extra routing), or only the flip-flop is
duplicated, which will certainly demand extra routing, since it is
normally possible to place a FF directly after a LUT using only
high-speed dedicated routing.

On the other hand, I can't really see that register duplication will make
the performance much worse (unless the synthesizer makes very bad choices of
course) so you might have something else in mind.


/Andreas

Peter Alfke

unread,
Oct 31, 2006, 4:26:35 PM10/31/06
to
The original subject was metastability, and the second subject was
unreliable operation when an asynchronous signal is, in parallel,
synchronized in more than one flip-flop, where even the most minute
delay/set-up-time difference can cause severe problems.
Peter Alfke

On Oct 31, 1:08 pm, Andreas Ehliar <ehl...@lysator.liu.se> wrote:


> On 2006-10-31, Ben Jones <ben.jo...@xilinx.com> wrote:
>
> > In fact, register duplication rarely makes timing better; in fact in many
> > high-performance pipelined designs, it can make it much worse (explanation
> > available on demand).
>
> I guess I'll bite and see if my understanding is close to what you have

Ben Jones

unread,
Nov 1, 2006, 5:32:30 AM11/1/06
to

"Andreas Ehliar" <ehl...@lysator.liu.se> wrote in message
news:ei8e0q$avt$1...@news.lysator.liu.se...

> On 2006-10-31, Ben Jones <ben....@xilinx.com> wrote:
>> In fact, register duplication rarely makes timing better; in fact in many
>> high-performance pipelined designs, it can make it much worse
>> (explanation
>> available on demand).
>
> I guess I'll bite and see if my understanding is close to what you have
> in mind:
>
> My feeling is that register duplication could worsen a design with
> combinatorial logic followed by a flip flop. This means either that
> the combinatorial logic has to be duplicated (which would enlarge the
> design and perhaps slow down the circuit due to extra routing, or
> by only duplicating the flip flop which will certainly demand extra
> routing since it is normally possible to place a FF directly after a LUT
> using only high speed dedicated routing.

Got it in one. The "enlargement" problem isn't much of a problem, since in
FPGA technology if you need to allocate a new register then you basically
get the preceding LUT for free. However, it's the "extra routing" problem
that's the killer.

> On the other hand, I can't really see that register duplication will make
> the performance much worse (unless the synthesizer makes very bad choices)

Say your design is supposed to run at 400MHz (2.5ns clock period). The extra
route from the combinatorial output of the LUT to the input of the "extra"
register added by the replication process may be 500ps. That's 20% of your
cycle budget! Often, it's more like 800ps... of course if your clock speed
is only 100MHz, this is much less of an issue.
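The budget arithmetic from the example above, as a tiny sketch:

```python
def route_cost_fraction(route_ps, f_clk_mhz):
    """Fraction of one clock period consumed by a single extra route."""
    period_ps = 1e6 / f_clk_mhz   # MHz -> period in picoseconds
    return route_ps / period_ps

# A 500 ps replication route against the two example clocks:
print(route_cost_fraction(500, 400))   # 0.2  -> 20% of a 2.5 ns period
print(route_cost_fraction(500, 100))   # 0.05 -> only 5% at 100 MHz
```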

There may be a few scenarios in which register duplication really is a good
thing, but in my experience synthesis tools don't always find them. So I
tend to just leave this "feature" turned off.

Cheers,

-Ben-

(Whoops, off topic...)


Peter Alfke

unread,
Nov 1, 2006, 11:55:22 AM11/1/06
to
Just make sure you NEVER duplicate the one flip-flop that is supposed
to synchronize the asynchronous input. Otherwise even a 1 ps input delay
or set-up-time difference between the duplicates can spell disaster.
Peter Alfke

On Nov 1, 2:32 am, "Ben Jones" <ben.jo...@xilinx.com> wrote:
> "Andreas Ehliar" <ehl...@lysator.liu.se> wrote in message news:ei8e0q$avt$1...@news.lysator.liu.se...


>
>
>
> > On 2006-10-31, Ben Jones <ben.jo...@xilinx.com> wrote:
> >> In fact, register duplication rarely makes timing better; in fact in many
> >> high-performance pipelined designs, it can make it much worse
> >> (explanation
> >> available on demand).
>
> > I guess I'll bite and see if my understanding is close to what you have
> > in mind:
>
> > My feeling is that register duplication could worsen a design with
> > combinatorial logic followed by a flip flop. This means either that
> > the combinatorial logic has to be duplicated (which would enlarge the
> > design and perhaps slow down the circuit due to extra routing, or
> > by only duplicating the flip flop which will certainly demand extra
> > routing since it is normally possible to place a FF directly after a LUT

> > using only high speed dedicated routing.
>
> Got it in one. The "enlargement" problem isn't much of a problem, since in
> FPGA technology if you need to allocate a new register then you basically
> get the preceding LUT for free. However, it's the "extra routing" problem
> that's the killer.
>
> > On the other hand, I can't really see that register duplication will make

> > the performance much worse (unless the synthesizer makes very bad choices)
>
> Say your design is supposed to run at 400MHz (2.5ns clock period). The extra

Ron N.

unread,
Nov 2, 2006, 2:24:54 AM11/2/06
to
Peter Alfke wrote:
> There is a difference, 60 years ago, a curious kid could at least try
> to understand the world around him/her.
> Clocks, carburators, telephones, radios, typewriters, etc.

In a talk at the Computer History Museum that Gordon Moore
gave on the history of Moore's Law, he was asked about what
influenced his interest in science. His answer included playing
with chemistry sets and blowing things up.

He was later asked about some of the reasons for the decline
in US science education. His answer was that nowadays kids
couldn't buy a "real" chemistry set and blow things up.


IMHO. YMMV.
--
rhn A.T nicholson d.0.t C-o-M

Evan Lavelle

unread,
Nov 2, 2006, 11:50:48 AM11/2/06
to
On Wed, 1 Nov 2006 10:32:30 -0000, "Ben Jones" <ben....@xilinx.com>
wrote:

>There may be a few scenarios in which register duplication really is a good
>thing, but in my experience synthesis tools don't always find them. So I
>tend to just leave this "feature" turned off.

I think you'd normally use duplication to reduce routing congestion.
On a chip I was on recently, the vendor wouldn't take a netlist that
had any fanout cones with more than 2500 endpoints, and register
duplication was the only practical fix. I used Teraform (deceased?) to
measure the cones.

Evan

>(Whoops, off topic...)
>

Ben Jones

unread,
Nov 2, 2006, 12:26:14 PM11/2/06
to

"Evan Lavelle" <e...@nospam.uk> wrote in message
news:b77kk2hrrc8eks4lm...@4ax.com...

>
> I think you'd normally use duplication to reduce routing congestion.
> On a chip I was on recently, the vendor wouldn't take a netlist that
> had any fanout cones with more than 2500 endpoints, and register
> duplication was the only practical fix. I used Teraform (deceased?) to
> measure the cones.

Register duplication leads to more nets in the final design, not fewer, so
it's not usually going to do much for congestion. However, for these
high-fanout signals, it does make placement easier (because there are fewer
rubber-bands pulling the driving element around the die).

If I had a net with a fanout greater than a few hundred, and it wasn't a
clock or a reset, I'd probably do a bit of redesign at a higher level before
resorting to replication. :)

Cheers,

-Ben-


Evan Lavelle

unread,
Nov 2, 2006, 2:03:35 PM11/2/06
to
On 30 Oct 2006 19:21:14 -0800, "gallen" <arle...@gmail.com> wrote:

>My point to add to this is that many older engineers have difficulty
>grasping new ways of operating. Convincing experienced engineers that
>synthesis tools actually work can be like pulling teeth sometimes.
>Just the other day, some engineers were ranting about some code that a
>contractor wrote that was very very behavioral. They were complaining
>about how that was killing timing and adding 10s to 100s of levels of
>logic. They hadn't tried it out. I ran it through the synthesizer and
>it was *faster* than the low level code.

I think I should make the point, for the benefit of those of us who are
long past 24, that (in my experience at least) older engineers are
just as fast as younger ones at picking up new ideas. In fact, given
Natural Selection, they may well be faster.

My (just constructed, contentious, and probably wrong) rule-of-thumb
on logic levels is:

1 - softies who write synthesisable behavioural code are likely to end
up with over 50, and maybe 100, logic levels. These chips exist; I've
worked on one (85 levels).

2 - experienced logic designers who can use a synthesiser can get 10 -
15 levels without thinking about it, and their chips can run at maybe
3 times the frequency of (1) above. But, of course, it takes 3 times
as long to write the code.

3 - experienced logic designers who can write behavioural code can
also get 10 - 15 levels without thinking too hard about it, because
they understand the language and the tools.

4 - if you want to go fast, you need to do maybe 4 - 6 logic levels.
This is hard work when writing RTL code, and you'll need to put in
lots of fixes. There will probably be lots of places where you're
effectively drawing schematics in your RTL code.

5 - there's another way to go very fast. Write your code at as high a
level as you want, use a synthesiser which can do good register
balancing, and give it lots of pipeline levels to work with. This
works well, and is invaluable for complex algorithms.
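As a back-of-envelope companion to the list above, a linear depth model shows the flavor of the trade; both delay constants are invented for illustration, and real designs scale less than linearly with depth (closer to the ~3x ratio observed above than to this model's larger one):

```python
def est_fmax_mhz(levels, t_level_ns=0.7, t_overhead_ns=1.0):
    """Crude clock estimate from logic depth.

    Assumes ~0.7 ns per LUT-plus-route level and ~1 ns of fixed
    clock-to-out / set-up / clock-path overhead. Both numbers are
    invented for illustration; real values depend on the device,
    speed grade, and routing congestion."""
    return 1000.0 / (levels * t_level_ns + t_overhead_ns)

# An 85-level design (case 1) vs. a 10-15 level design (cases 2/3):
print(est_fmax_mhz(85))   # ~16.5 MHz
print(est_fmax_mhz(12))   # ~106 MHz
# A purely linear model overstates the frequency gap; measured designs
# tend to close part of it through routing and tool effects.
```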

It would be interesting to know where your contractor fitted in - (1)
who got lucky, (3), or (5)?

Evan

Evan Lavelle

unread,
Nov 2, 2006, 2:13:19 PM11/2/06
to
On Thu, 2 Nov 2006 17:26:14 -0000, "Ben Jones" <ben....@xilinx.com>
wrote:

>If I had a net with a fanout greater than a few hundred, and it wasn't a
>clock or a reset, I'd probably do a bit of redesign at a higher level before
>resorting to replication. :)

It wasn't possible in this case, unfortunately, since I didn't
understand the design or have enough time. One cone actually had
10,000 endpoints.

Yes, the total number of nets increased, but not by a great deal - the
ideal solution would be to turn one source register into 4 registers,
each driving a cone of 2,500 endpoints.

Evan
