does "posedge" samples the transition from "x" to "1" or only from "0"
to "1" ?
Thanks
-Amir
>does "posedge" samples the transition from "x" to "1" or only from "0"
>to "1" ?
Anything in a vaguely upward direction. 0->X, X->1, 0->1 all count
as a posedge.
I fell foul of this once, very early in my Verilog apprenticeship.
I was trying to model a clock signal with some jitter in its timing.
So I created a signal that did 0->X followed by X->1 a few ns later.
I was mighty surprised when I found it triggered my flipflops twice
per clock cycle!
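Something like this little sketch (names invented here, not my original
code) shows the effect:

reg clk = 0;
always begin
  #9  clk = 1'bx;   // clock starts rising somewhere in this window...
  #1  clk = 1'b1;   // ...and is definitely high 1 ns later
  #10 clk = 1'b0;
end

always @(posedge clk)
  $display("%0t: posedge seen", $time);   // fires twice per clock period

Both the 0->x change and the later x->1 change satisfy posedge.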
Note that posedge should normally be given a single-bit expression to
consider. If you ask for posedge on a vector, it should officially
give you posedge of the least significant bit - but some tools have,
at least at some time in the past, triggered on any rising change
of the whole vector's value. Not a smart thing to try.
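For example (a hypothetical fragment):

reg [3:0] bus;
always @(posedge bus)   // officially equivalent to @(posedge bus[0])
  $display("bus[0] rose at %0t", $time);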
--
Jonathan Bromley.
>Hi,
>
>does "posedge" samples the transition from "x" to "1" or only from "0"
>to "1" ?
Both 'x' to '1' and 'z' to '1' should be detected as posedge in
addition to '0' to 'x' and '0' to 'z'.
--
Muzaffer Kal
DSPIA INC.
ASIC/FPGA Design Services
>>does "posedge" samples the transition from "x" to "1" or only from "0"
>>to "1" ?
> Both 'x' to '1' and 'z' to '1' should be detected as posedge in
> addition to '0' to 'x' and '0' to 'z'.
To be an accurate simulation, it should randomly trigger
on changes to/from 'x' or 'z'...
If the input has enough pull-up, though, it will trigger on
an output going to 'z'.
-- glen
Jonathan -
You get a bonus point for your wording "Anything in a vaguely upward
direction."
I've never thought about "vaguely" in connection with digital logic!
John Providenza
X does not mean "stable, but unknown value", even though, as long as the
signal stays X, no more events will be created from it. Instead, X
means "unknown value and unknown stability": if the clock has a value of X
for 1 ns, there could be from zero to many rising and/or falling edges
in that timespan. Randomly choosing a single transition at the
beginning and/or end of that span is not any closer to reality than
always or never triggering.
Andy
>Muzaffer Kal <k...@dspia.com> wrote:
>> Both 'x' to '1' and 'z' to '1' should be detected as posedge in
>> addition to '0' to 'x' and '0' to 'z'.
>
>To be an accurate simulation, it should randomly trigger
>on changes to/from 'x' or 'z'...
>
>If the input has enough pull-up, though, it will trigger on
>an output going to 'z'.
I don't see what this has to do with Verilog.
No digital simulator can ever possibly claim to do
"accurate simulation". The simulator implements a
programming language, following the defined rules
of that language. Muzaffer Kal (and I, less
thoroughly) outlined those rules. For sure,
they do not represent "accurate simulation" (see
my little anecdote about jittery clocks). But
they're what any Verilog simulator is required
to do, by the language rules. Of course, the
language is designed to provide a useful
approximation to "accurate simulation"; but
if you want accuracy you need Spice :-)
A fine example of this is given by the behaviour
of the Verilog if() procedural statement. The
expression tested by if() can give three possible
outcomes: definitely zero, definitely nonzero, and
unknown (i.e. it has one or more X/Z bits, and
possibly some zero bits). But unfortunately
there's no "maybe" branch on if(). So the
if() statement is defined to execute its "else"
branch if the test condition is unknown. This
often gives rise to behaviour that is not
"accurate simulation" of typical logic hardware.
Note that VHDL has a different set of compromises
for this problem, giving different "inaccuracy".
As far as pullup is concerned, it might be worth
mentioning again how Z works in Verilog. If you
provide a pullup on a net (for example by connecting
a pullup() primitive to it), and then drive the net
with a Z value from some other driver, the net's
value will become 1 and a posedge operator on the
net would have no need to deal with a Z at all.
But if there is no pullup on the net, and you
drive the net with Z, then the net's value really
will be Z.
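A small sketch of the difference (net and signal names invented):

wire net_a, net_b;
reg  drv = 1'b0;

pullup (net_a);                       // net_a has a pullup, net_b does not
assign net_a = drv ? 1'bz : 1'b0;
assign net_b = drv ? 1'bz : 1'b0;

initial begin
  #5 drv = 1'b1;                      // both drivers release to Z
  #1 $display("net_a=%b net_b=%b", net_a, net_b);   // net_a=1, net_b=z
end

A posedge watching net_a just sees an ordinary 0->1.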
--
Jonathan Bromley
>>Muzaffer Kal <k...@dspia.com> wrote:
>>> Both 'x' to '1' and 'z' to '1' should be detected as posedge in
>>> addition to '0' to 'x' and '0' to 'z'.
(then I wrote)
>>To be an accurate simulation, it should randomly trigger
>>on changes to/from 'x' or 'z'...
>>If the input has enough pull-up, though, it will trigger on
>>an output going to 'z'.
> I don't see what this has to do with Verilog.
> No digital simulator can ever possibly claim to do
> "accurate simulation". The simulator implements a
> programming language, following the defined rules
> of that language.
I completely agree. If one has a design that triggers
on 'x' during simulation, though, it is likely not to
work with real hardware.
> Muzaffer Kal (and I, less
> thoroughly) outlined those rules. For sure,
> they do not represent "accurate simulation" (see
> my little anecdote about jittery clocks). But
> they're what any Verilog simulator is required
> to do, by the language rules. Of course, the
> language is designed to provide a useful
> approximation to "accurate simulation"; but
> if you want accuracy you need Spice :-)
Yes. I was trying to point out differences between
simulation and real hardware.
> As far as pullup is concerned, it might be worth
> mentioning again how Z works in Verilog. If you
> provide a pullup on a net (for example by connecting
> a pullup() primitive to it), and then drive the net
> with a Z value from some other driver, the net's
> value will become 1 and a posedge operator on the
> net would have no need to deal with a Z at all.
> But if there is no pullup on the net, and you
> drive the net with Z, then the net's value really
> will be Z.
TTL inputs usually have enough pull-up to go high,
though maybe not as fast as when driven high.
Yes, you could model that with a Verilog pullup.
-- glen
I wouldn't agree with that. I would like to see any transition that
doesn't have a defined logic level at both ends cause the output of a
FF to assume an 'x' or 'u' value. I'm not sure Verilog has a 'u'
value, but I assume it does. I'm more familiar with VHDL.
I don't see any useful application of getting a valid clock edge from
a transition with a "vaguely" defined state at either end.
Rick
>On Mar 15, 1:57 am, Muzaffer Kal <k...@dspia.com> wrote:
>> On Mon, 14 Mar 2011 02:29:30 -0700 (PDT), Amir <sting...@gmail.com>
>> wrote:
>>
>> >Hi,
>>
>> >does "posedge" samples the transition from "x" to "1" or only from "0"
>> >to "1" ?
>>
>> Both 'x' to '1' and 'z' to '1' should be detected as posedge in
>> addition to '0' to 'x' and '0' to 'z'.
>
>I wouldn't agree with that.
What I stated was not a personal opinion but a statement of fact as
far as 1364 is concerned.
>I would like to see any transition that
>doesn't have a defined logic level at both ends cause the output of a
>FF to assume an 'x' or 'u' value.
Various IEEE P1800 (aka SystemVerilog, aka Verilog as we know it
today) subcommittees are in session right now; you might want to take
it up with them but I think it's unlikely that there'll be a change in
this regard.
> On Mar 15, 1:57 am, Muzaffer Kal <k...@dspia.com> wrote:
>> On Mon, 14 Mar 2011 02:29:30 -0700 (PDT), Amir <sting...@gmail.com>
>> wrote:
>>
>> >Hi,
>>
>> >does "posedge" samples the transition from "x" to "1" or only from "0"
>> >to "1" ?
>>
>> Both 'x' to '1' and 'z' to '1' should be detected as posedge in
>> addition to '0' to 'x' and '0' to 'z'.
>> --
>> Muzaffer Kal
>>
>> DSPIA INC.
>> ASIC/FPGA Design Services
>>
>> http://www.dspia.com
>
> I wouldn't agree with that. I would like to see any transition that
> doesn't have a defined logic level at both ends cause the output of a
> FF to assume an 'x' or 'u' value. I'm not sure Verilog has a 'u'
> value, but I assume it does. I'm more familiar with VHDL.
That's how it's defined in the Verilog specification. posedge is
similar to (name'event and name = '1'), unlike rising_edge(name), which
only detects '0' to '1'.
There's no 'u' value in Verilog.
//Petter
--
.sig removed by request.
[Muzaffer Kal]
> Various IEEE P1800 (aka SystemVerilog, aka Verilog as we know it
> today) subcommittees are in session right now; you might want to take
> it up with them but I think it's unlikely that there'll be a change in
> this regard.
There certainly won't be a 'u' value, ever!
Cliff Cummings proposed a new set of special keywords -
something like "always_x", "always_ff_x" etc, but I'm
not sure of the details - to provide built-in modelling
for this kind of X-management. Many of us felt that it
was putting too much application-specific stuff into
what should be a general-purpose language; after all,
you can already model such things perfectly well with
UDPs, and people who write gate-level cell models
certainly do that. Anyway, for better or worse it
didn't make the cut for things to consider in the
2012 revision of IEEE-1800.
Note that the usual VHDL flipflop modelling style
does nothing at all if there's an X or U on the
clock - it certainly doesn't drive the FF's
output to X. And, once again, it's easy to provide
such modelling explicitly if you can be bothered
to do so.
The usual RTL abstraction, in either VHDL or Verilog,
is intended for the modelling, simulation and
synthesis of functional behaviour at the clock
cycle level. It has many well-known limitations,
of which this is only one. For most of us,
though, the benefits massively outweigh those
limitations, and we have tools (both automated
and mental) for coping with the limitations.
--
Jonathan Bromley
> Many of us felt that it was putting too much application-specific
> stuff into what should be a general-purpose language; after all, you
> can already model such things [FFs] perfectly well with UDPs, and
> people who write gate-level cell models certainly do that.
I would say you can adequately model a FF with a well crafted UDP, but
it's amazing how many cell libraries only provide the basic
functionality and hardly any pessimism. I have developed what I consider
the definitive UDP that models a FF with asynchronous set/reset and even
it is not as good as I would like. It's fundamentally limited by the
functionality available in a UDP. I think with a second helper UDP I can
get all the functionality I desire, but I've been busy with other things
so I have not finished this.
For the interested, here's the remaining problem.
Given a FF with a defined D input that is opposite the current Q value, a
0->x on the clock should produce an X on the output, but a subsequent
x->1 should correctly latch the value, because at this point in time you
know an edge has occurred. You could actually code this in the UDP as
"x->1 latches the D input", but that doesn't work, since a 1->x followed
by an x->1 would then latch as well when the result should stay undefined.
I believe with a second UDP I can record the previous transition type and
then restrict the x->1 latching to the case where it was preceded by a
0->x. Even better would be a C model linked into the simulator, but I want
to get this straight in basic Verilog (which is portable) first.
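To make that concrete, here is a rough, deliberately simplified sketch of
the single-UDP version (primitive and port names invented); the x->1 rows
are exactly the over-optimistic part described above:

primitive dff_x_sketch (q, clk, d);
  output q;  reg q;
  input  clk, d;
  table
  //  clk    d  : q : q+
      (01)   0  : ? : 0 ;   // clean rising edge: latch D
      (01)   1  : ? : 1 ;
      (x1)   0  : ? : 0 ;   // x->1 also latches (too optimistic after 1->x)
      (x1)   1  : ? : 1 ;
      (0x)   0  : 0 : 0 ;   // clock goes unknown, D == Q: hold
      (0x)   1  : 1 : 1 ;
      (0x)   0  : 1 : x ;   // clock goes unknown, D != Q: corrupt Q
      (0x)   1  : 0 : x ;
      (0x)   x  : ? : x ;   // unknown D: corrupt Q
      (0x)   ?  : x : x ;   // a Q that is already unknown stays unknown
      (1x)   ?  : ? : x ;   // 1->x: the next x->1 cannot be trusted
      (?0)   ?  : ? : - ;   // falling or settling-low changes: no change
      ?    (??) : ? : - ;   // D activity alone never changes Q
  endtable
endprimitive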
In reality you shouldn't have an X in your clock tree, so this is really
a personal experiment to see how far you can push things.
Cary
> For the interested, here's the remaining problem.
>
> Given a FF with a defined D input that is opposite the current Q value, a
> 0->x on the clock should produce an X on the output, but a subsequent
> x->1 should correctly latch the value, because at this point in time you
> know an edge has occurred. You could actually code this in the UDP as
> "x->1 latches the D input", but that doesn't work, since a 1->x followed
> by an x->1 would then latch as well when the result should stay undefined.
> I believe with a second UDP I can record the previous transition type and
> then restrict the x->1 latching to the case where it was preceded by a
> 0->x. Even better would be a C model linked into the simulator, but I want
> to get this straight in basic Verilog (which is portable) first.
Sounds cool. But I'm not sure... I think you
also need to model the effect of changes on
D during (clock===X), because X->1 on the
clock after that D change does not necessarily
mean that there's been an active clock during
the time when D had its new value.
Anyways, I could offer something like this.
event clock_went_bad;
event clock_definitely_happened;
reg   last_valid_clock;
reg   previous_clock;

always @clock begin
  case ({previous_clock, clock})
    2'b0x, 2'b0z, 2'b1x, 2'b1z:        // clock has gone unknown
      -> clock_went_bad;
    2'bx1, 2'bz1, 2'b01:               // rising change...
      if (last_valid_clock === 1'b0)   // ...that genuinely came from 0
        -> clock_definitely_happened;
  endcase
  previous_clock = clock;
  if ((~clock) !== 1'bx)               // remember the last clean 0/1 value
    last_valid_clock = clock;
end
Now you can use the two events to trigger
activity in your FF model. I'm sure you can
do it even more neatly with UDPs but, as you
well know, I'm not a cell modelling guy!
cheers
--
Jonathan Bromley
Yes, this also needs to be considered. All the complicating factors are
why I think this really should be embedded inside the simulator where
all the extra checking, etc. can be processed efficiently. I'm only
using UDPs to stretch my knowledge of the UDP corner cases. As you well
know, I could easily write a behavioral version of this. FYI, I expect
this case to be handled in the helper UDP, where a data transition while
the clock is undefined invalidates the previous edge event, thus keeping
the output undefined at the x->1 transition.
The one thing I had not been considering is the fact that X could
represent multiple transitions instead of a single transition to an
unknown state. Depending on the circuit either could be the correct
representation. The only time I have seen an X in the clock path is when
someone had two drivers in parallel driving the clock signal. Then,
because of slight delay differences, they got a small window of X at every
edge. For this case, interpreting X as a single edge transition is valid.
> Anyways, I could offer something like this.
Thanks. If I were implementing this with behavioral code, I'm sure I'd
have something similar somewhere in the code.
Regards,
Cary
When you say "should be" that sounds like an opinion to me. But I get
your point.
> >I would like to see any transition that
> >doesn't have a defined logic level at both ends cause the output of a
> >FF to assume an 'x' or 'u' value.
>
> Various IEEE P1800 (aka SystemVerilog, aka Verilog as we know it
> today) subcommittees are in session right now; you might want to take
> it up with them but I think it's unlikely that there'll be a change in
> this regard.
It wouldn't be their first mistake! It is just obvious to me that you
can't treat a transition involving a state that has no defined logical
value (in the Boolean sense) as producing a valid clock edge. I have no
idea why it would seem useful to have a simulation behave in a
way that is so different from the real world and at the same time be
so non-useful.
Rick
"application specific"??? Verilog is for logic, no? The whole x
thing is outside of the domain of logic really. When has your logic
ever been in the 'x' state? Mine never has. To suggest that a
transition between a non-Boolean state and a valid level not be treated
as a rising edge is hardly "application specific".
> what should be a general-purpose language; after all,
> you can already model such things perfectly well with
> UDPs, and people who write gate-level cell models
> certainly do that. Anyway, for better or worse it
> didn't make the cut for things to consider in the
> 2012 revision of IEEE-1800.
Yes, that is just what I want to do. Rather than have the language
properly represent all logic as I know it, we should allow the default
behavior to be inconsistent with the real world and then let the user
figure out how to get around that.
> Note that the usual VHDL flipflop modelling style
> does nothing at all if there's an X or U on the
> clock - it certainly doesn't drive the FF's
> output to X. And, once again, it's easy to provide
> such modelling explicitly if you can be bothered
> to do so.
There is a far cry from treating a transition between a boolean
undefined state and a 1 as a rising clock edge and ignoring the
transition altogether.
> The usual RTL abstraction, in either VHDL or Verilog,
> is intended for the modelling, simulation and
> synthesis of functional behaviour at the clock
> cycle level. It has many well-known limitations,
> of which this is only one. For most of us,
> though, the benefits massively outweigh those
> limitations, and we have tools (both automated
> and mental) for coping with the limitations.
Sure there can always be issues that are hard to fix. This isn't one
of them.
Rick
I don't know that there is much value in providing this sort of
behavior and I don't know that it matches the real world in any useful
way. The fact that there should have been a clock edge somewhere
within the intermediate X region of a 0 to 1 transition is not
usefully modeled by making the output transition concurrent with the
final transition to 1. Within the X region there may have been many
transitions, and these may not meet specs for proper clocking of the
device and may even cause the FF to go metastable. So why treat the x
to 1 transition as a valid clock?
Rick
[Rickman]
> There is a far cry from treating a transition between a boolean
> undefined state and a 1 as a rising clock edge and ignoring the
> transition altogether.
No, there isn't. Some people still write their VHDL flops
like this, giving an active clock for X->1:
if clock'event and clock = '1' then ...
It's just modelling, using the bare language's features.
Choose your model to suit your needs and convenience
(and, less happily, to suit the templates mandated by
synthesis tools).
A simulation language's X value, in any of its various
flavours, is a trick to make Boolean algebra work even
when you have certain unknowable conditions. It doesn't,
and can't, directly mean anything in real circuits -
it merely means that we don't know enough about a bit's
simulated value to be sure it's 1 or 0. As soon as you
have these meta-values, you get all kinds of fallout in
any programming language: what should happen when you
test if(x)? what does a 0->x transition mean, when
your functional behaviour only makes sense for 0->1
transitions? For every one of these questions, any
language must necessarily make a decision to mandate
the language's behaviour. Since people can combine
language constructs in all manner of interesting ways,
there is no one right answer and some work must be
left to the user. That way, the user gets to choose
how much trouble they should go to in attempting to
model reality.
Of course, we have conventional patterns of code that
work well enough that we're happy with them most of
the time. The standard RTL flipflop templates fall
into that category; they're not part of the language
itself. As Cary and I pointed out in different ways,
you *can* (both in VHDL and Verilog) build quite
accurate FF models that trash their Q value when
bad things happen on clocks, resets and so on.
Well-written library cell models should do exactly
that, to provide the best possible checking that all
is well at gate level. But when we're doing RTL
simulation, we care primarily about 0/1 functional
behaviour and we (or, at least, I) should be happy
to accept that all bets are off if we let an X
creep on to our simulated clock signal. A simple
assertion on the clock's value will soon alert us
if that requirement is violated, at far lower cost
than futzing around with complicated X modelling
at each flop (whether built-in or hand-written).
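Something as small as this is all I mean by that (a sketch, in
SystemVerilog, with an invented label):

always @(clk) begin
  ClkIsKnown: assert (clk === 1'b0 || clk === 1'b1)
    else $error("unknown value %b on clock at time %0t", clk, $time);
end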
> Sure there can always be issues that are hard
>to fix. This isn't one of them.
I disagree, but I'm fully aware that many people
would prefer a language that's much more tightly
coupled to the specifics of real flops and other
components.
--
Jonathan Bromley
I don't want to beat a dead horse, but when you say "it's just
modeling" you mean you may not care that the model has any given
inaccuracy. I agree with that. But I don't think it is reasonable to
suggest that the current HDL models are in any sense optimal. There
is always room for improvement.
I think there is little value to looking at HDLs as general
programming languages or even as "programming languages" at all. They
really aren't intended to be programming languages. They are
"Hardware Desciption Languages". If you want to ignore the hardware
aspect of them then I feel you are tossing the baby out with the bath
water.
> A simulation language's X value, in any of its various
> flavours, is a trick to make Boolean algebra work even
> when you have certain unknowable conditions. It doesn't,
> and can't, directly mean anything in real circuits -
> it merely means that we don't know enough about a bit's
> simulated value to be sure it's 1 or 0.
Of course these meta values can have meaning in a real circuit, but
you are right, they are not intended to map directly to illegal
voltages or something specific. They are intended to indicate
something that is either unknown or improper. But this is likely a
minor point.
> As soon as you
> have these meta-values, you get all kinds of fallout in
> any programming language: what should happen when you
> test if(x)? what does a 0->x transition mean, when
> your functional behaviour only makes sense for 0->1
> transitions?
That is my point. If you don't know about the input to a function,
the output of that function should be no more known. Treating an X->1
transition as a positive clock edge not only makes the output knowable
when it is not, it hides the fact that there is a problem in the
design or simulation. That is what I want to know about. It is not
frequent, but I have seen problems in a simulation where an internal
point has a meta value far beyond the point I would have expected, but
it was not seen on the outside because the simulation did not properly
transmit that meta value. I had to trace a wrong but valid state
down through the design, unwinding the logic cause and effect, to find
the point where the meta value originated. This is typically an error in
initialization or even in the simulation, but I feel it took longer to
find than it should have because the simulation did not properly
handle these meta values.
On the other hand, a FF feeding back on itself to divide by two will
always assume some value in the real world and so generally will
work. But in simulation the meta value will never resolve. That
seems to be too stringent. I guess I can't have it both ways...
> For every one of these questions, any
> language must necessarily make a decision to mandate
> the language's behaviour. Since people can combine
> language constructs in all manner of interesting ways,
> there is no one right answer and some work must be
> left to the user. That way, the user gets to choose
> how much trouble they should go to in attempting to
> model reality.
I can't construct a FF to properly handle meta values on the clock
input and also have that construct synthesizable which is my main
goal. At least I don't think I can get that to work. It would
certainly be a lot more work and would make the simulations run much
slower. If a logic function can be made to properly handle meta
values, I don't see why the code for a FF can't be defined in a way to
do the same thing. As you say, it is just how you define your
models... or how "they" define the models.
> Of course, we have conventional patterns of code that
> work well enough that we're happy with them most of
> the time. The standard RTL flipflop templates fall
> into that category; they're not part of the language
> itself. As Cary and I pointed out in different ways,
> you *can* (both in VHDL and Verilog) build quite
> accurate FF models that trash their Q value when
> bad things happen on clocks, resets and so on.
Yup, and I can write my own HDL tools and even the language itself.
But I want to get work done, the paying kind. Issues with tools
prevent that, and saying it is all in how I want to define my models
doesn't help the issue.
> Well-written library cell models should do exactly
> that, to provide the best possible checking that all
> is well at gate level. But when we're doing RTL
> simulation, we care primarily about 0/1 functional
> behaviour and we (or, at least, I) should be happy
> to accept that all bets are off if we let an X
> creep on to our simulated clock signal. A simple
> assertion on the clock's value will soon alert us
> if that requirement is violated, at far lower cost
> than futzing around with complicated X modelling
> at each flop (whether built-in or hand-written).
When I am doing RTL simulation I want to verify that my design is
correct. (FULL STOP)
I would like as much capability in the HDL as possible that
facilitates my work. This is not a theoretical issue. This is
pragmatic. Unless it becomes a heavy burden on simulation or
otherwise causes a problem, why not make the simulations more
realistic and practical? You give reasons why I shouldn't want what I
want, but you haven't given any reasons why it shouldn't be done.
> > Sure there can always be issues that are hard
> >to fix. This isn't one of them.
>
> I disagree, but I'm fully aware that many people
> would prefer a language that's much more tightly
> coupled to the specifics of real flops and other
> components.
You mean a language that is more hardware oriented? Yes! I want a
hardware description language that describes hardware as well as
possible. If I wanted to program I would use Forth (or when the
customer demands it C).
Rick
> A simulation language's X value, in any of its various
> flavours, is a trick to make Boolean algebra work even
> when you have certain unknowable conditions.
Last time I checked out gate level simulation (which is
admittedly a long time ago), that wasn't really the
case. For example, gate level simulation wasn't aware
that 'not X' is actually the boolean opposite of 'X'.
Synthesis on the other hand understands this very
well. So the result was perfectly valid circuits
that "couldn't get out of reset" because of incorrectly
pessimistic gate level simulation.
Therefore, I am rather unconvinced of the value of
this 'X' concept, even at the gate level.
Jan
--
Jan Decaluwe - Resources bvba - http://www.jandecaluwe.com
Python as a HDL: http://www.myhdl.org
VHDL development, the modern way: http://www.sigasi.com
Analog design automation: http://www.mephisto-da.com
World-class digital design: http://www.easics.com
I too have no desire to beat this to death; we both
have work to do :-) I think I understand where
you're coming from, and I certainly would never
suggest that what we have today is ideal. But
there's also my nagging sense that ultimately you
can get more done with a truly general-purpose
toolkit than you can with something that was aimed
at a specific problem. C beat Cobol and a zillion
others precisely because its generality allowed
users to layer almost any necessary domain-specific
stuff on top of the language (but, interestingly,
not enough to make C much good for hardware
design/verification.... hmmm, maybe it's not so
obvious after all.)
Anyhow, the position you take is very interesting
to me. I too have to get real work done, but I
don't feel a need for the kind of thing you request;
instead I hanker after more general tools such as
assertions. I know plenty of engineers whose
position is more closely aligned with yours than
with mine, but I'm not alone either.
I'll try to look out some of the material Cliff
Cummings produced on proposals for built-in X
handling in the Verilog language. It would be
very interesting to hear your take on it.
Thanks and regards
--
Jonathan Bromley
If you look at one of my later posts you'll see that I'm doing this
mostly to understand the corner cases of UDPs and how far you can push
things if you were so inclined. In the quest for knowledge I was so
inclined. It also describes the specific case that I had seen where this
particular solution is the correct behavior (someone had connected two
buffers that had slightly different delay in parallel in the clock
tree). You are 100% correct, if the source of the X could create many
dynamic transitions during the X period then you need different
functionality. I also said in that post that I would need to consider
this (dynamic X behavior) when I had some free time to look at this
model again. My guess is that you need two models and which one you use
depends on the type of X you expect, statically unknown or dynamically
unknown.
The quick summary is this is a personal journey of education, not
something I'm building to include in a cell library.
Cary
> [Rickman]
>> There is a far cry from treating a transition between a boolean
>> undefined state and a 1 as a rising clock edge and ignoring the
>> transition altogether.
> No, there isn't. Some people still write their VHDL flops
> like this, giving an active clock for X->1:
> if clock'event and clock = '1' then ...
(snip)
> A simulation language's X value, in any of its various
> flavours, is a trick to make Boolean algebra work even
> when you have certain unknowable conditions. It doesn't,
> and can't, directly mean anything in real circuits -
> it merely means that we don't know enough about a bit's
> simulated value to be sure it's 1 or 0.
Well, it is also nice to initialize a state machine and be
sure that it can start up with any (unknown) initial state.
Though that doesn't always work. I had one once that the
simulator couldn't figure out. Instead of 'X' I set the
start values to large numbers, like 12345, and then watched
as they got to the right value.
Note, for example, that (with unsigned arithmetic) min(16'bx, 0)
is not zero in simulation, but it is with any actual value
for x.
-- glen
> That is my point. If you don't know about the input to a function,
> the output of that function should be no more known.
But that is the problem. In enough cases, the output of a function
is known with some inputs unknown. If you multiply by zero (and
don't allow for infinity) then the product is zero. If you subtract
a number from itself, the difference should be zero. If you
add one, and then subtract the original, it should be one.
The simulator likely gets those wrong with X.
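For example (a throwaway variable, just to show the point):

reg [7:0] a;
initial begin
  a = 8'bx;
  $display("%b", a * 8'd0);         // all x, although anything times 0 is 0
  $display("%b", a - a);            // all x, although a - a is always 0
  $display("%b", (a + 8'd1) - a);   // all x, although the real answer is 1
end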
> Treating an X->1
> transition as a positive clock edge not only makes the output knowable
> when it is not, it hides the fact that there is a problem in the
> design or simulation. That is what I want to know about. It is not
> frequent, but I have seen problems in a simulation where an internal
> point has a meta value far beyond the point I would have expected, but
> it was not seen on the outside because the simulation did not properly
> transmit that meta value. I had to trace a wrong but valid state
> down through the design, unwinding the logic cause and effect, to find
> the point where the meta value originated. This is typically an error in
> initialization or even in the simulation, but I feel it took longer to
> find than it should have because the simulation did not properly
> handle these meta values.
(snip)
> I can't construct a FF to properly handle meta values on the clock
> input and also have that construct synthesizable which is my main
> goal. At least I don't think I can get that to work. It would
> certainly be a lot more work and would make the simulations run much
> slower. If a logic function can be made to properly handle meta
> values, I don't see why the code for a FF can't be defined in a way to
> do the same thing. As you say, it is just how you define your
> models... or how "they" define the models.
X on a clock is strange. It is a little more interesting on
a clock enable, though. It would seem that there are some state
machines that could reliably start with an X on the clock enable
in real life, but maybe not in simulation.
-- glen
...
>I think there is little value to looking at HDLs as general
>programming languages or even as "programming languages" at all. They
>really aren't intended to be programming languages. They are
>"Hardware Desciption Languages".
...
>I would like as much capability in the HDL as possible that
>facilitates my work.
...
> When I am doing RTL simulation I want to verify that my design is
> correct. (FULL STOP)
In my humble opinion these statements are not self-consistent in the
sense that you're not only using HDL to develop hardware but you're
using it to verify your design.
A language which only/strictly allows you to describe hardware is nowhere
near good enough as a verification language. To verify well,
quickly and with high coverage you need a much more capable language
than an HDL of the kind you describe. You need a complete programming
language to describe all the verification tasks you need to accomplish.
Remember that you'll spend a lot more time in that verification language
than in the description language, so make sure that verification language
is as sophisticated as possible, to save you time and sometimes even to
make it possible to say what you want for verification.
The context here is not logic gates where you can easily define a
table of outputs vs. inputs for each of the meta-values. VHDL does
that and I don't have any issues with how it is done. But I don't
agree that if you subtract a number from itself the result should be
zero if meta values are involved. Subtraction uses logic elements. I
expect that a subtraction results in meta values on the outputs
because of how the logic operates once you have defined how meta
values propagate through the logic. The real world does funny things
when you violate the input specs. That is part of what meta values
represent and the outputs should reflect that. Otherwise, what is the
point of simulation?
> > I can't construct a FF to properly handle meta values on the clock
> > input and also have that construct synthesizable which is my main
> > goal. At least I don't think I can get that to work. It would
> > certainly be a lot more work and would make the simulations run much
> > slower. If a logic function can be made to properly handle meta
> > values, I don't see why the code for a FF can't be defined in a way to
> > do the same thing. As you say, it is just how you define your
> > models... or how "they" define the models.
>
> X on a clock is strange. It is a little more interesting on
> a clock enable, though. It would seem that there are some state
> machines that could reliably start with an X on the clock enable
> in real life, but maybe not in simulation.
Yes, the issue of starting a circuit with meta values is common. If a
FF has a meta value on the enable and the input is different from the
output, the result should be a meta value. I suppose part of the
problem is that while gates are primitive elements in the math, FFs
are not elements in the language at all. They are inferred through
the constructs of the language, but not in the language itself. I
have always had an issue with that. I simply don't agree that an HDL
has to be a programming language first and describe hardware second.
It should be more tightly tied to hardware in my opinion.
Rick
I don't agree that the language can't do both. It is doing both now,
just not a great job of the hardware description part. There is
nothing wrong with a language having programming capabilities. I'm
trying to point out that some suggest that by being as flexible in the
language as possible, you don't need the language to deal directly
with aspects of hardware. But the two are not incompatible. We
shouldn't make excuses for limitations in the hardware description
aspects by saying you can program around these limitations.
I think this is an issue that comes from the software side of
development where the mindset is that ultimately no one can understand
all the issues involved in a large design, so let the machine figure
it out for you. This creates problems that we turn back to the
machine to fix, and the complexity of the tools gets ever larger. I
think simpler tools with more predictable results is a better way to
go. Complexity in the tools puts a barrier between the designer and
the design. I want to be able to get closer to my design and have
less filtering between.
Rick
>
> Yes, the issue of starting a circuit with meta values is common. If a
> FF has a meta value on the enable and the input is different from the
> output, the result should be a meta value. I suppose part of the
> problem is that while gates are primitive elements in the math, FFs
> are not elements in the language at all. They are inferred through
> the constructs of the language, but not in the language itself.
In the early days of synthesis (Synopsys before 1990), FF inference
wasn't supported. Therefore, designers instantiated generic FFs and
described the combinational logic around it. Primitive, but it worked.
Now, if that is your preference, why isn't it entirely trivial
to you that you can just do it like that? Why would one need a
new language if you can simply use Verilog at an even lower
level than is commonly the case?
I simply don't see the point.
I agree. You don't see the point.
Rick
(snip, I wrote)
>> But that is the problem. In enough cases, the output of a function
>> is known with some inputs unknown. If you multiply by zero (and
>> don't allow for infinity) then the product is zero. If you subtract
>> a number from itself, the difference should be zero. If you
>> add one, and then subtract the original, it should be one.
> The context here is not logic gates where you can easily define a
> table of outputs vs. inputs for each of the meta-values. VHDL does
> that and I don't have any issues with how it is done. But I don't
> agree that if you subtract a number from itself the result should be
> zero if meta values are involved. Subtraction uses logic elements. I
> expect that a subtraction results in meta values on the outputs
> because of how the logic operates once you have defined how meta
> values propagate through the logic.
I might agree, but the problem is that state machines that start
up just fine in real life won't start up properly if X propagates
in all cases.
> The real world does funny things
> when you violate the input specs. That is part of what meta values
> represent and the outputs should reflect that. Otherwise, what is the
> point of simulation?
Well, violating the input spec is different. If I have logic that
is either 0 or 1, but I don't know which one, then subtract will
give zero. If it is somewhere in between, then that is different.
(snip)
>> X on a clock is strange. It is a little more interesting on
>> a clock enable, though. It would seem that there are some state
>> machines that could reliably start with an X on the clock enable
>> in real life, but maybe not in simulation.
> Yes, the issue of starting a circuit with meta values is common. If a
> FF has a meta value on the enable and the input is different from the
> output, the result should be a meta value.
Again, the problem is state machines that initialize with real
data, but not with X. So, even though I agree with you mostly,
it would be nice to write systems that can verify the design,
and yet start up in any initial state.
> I suppose part of the
> problem is that while gates are primitive elements in the math, FFs
> are not elements in the language at all. They are inferred through
> the constructs of the language, but not in the language itself. I
> have always had an issue with that.
Except for FF's, (and some state machines), I mostly write
structural verilog. So, yes, it does seem that FF's are not
part of the language, at least not from structural verilog.
> I simply don't agree that an HDL
> has to be a programming language first and describe hardware second.
> It should be more tightly tied to hardware in my opinion.
-- glen
Sure, we can't all be HDL language design geniuses.
Still, there is something that puzzles me. An HDL
like AHDL seems to be exactly the closer-to-hardware
HDL that you want. Of course, I believe that it is moving
into the ranks of forgotten HDLs for the same reason.
But still, one would expect that you would mention it
as an example to follow. At least this would get
the discussion real instead of vague and open-ended.
So if you ever did historical research about HDLs
(= googling for "HDL"), or even better, if you have
experience with an HDL like AHDL, you did a very good
job at hiding it.
Otherwise: those who don't know history are bound
to repeat it. Mistakes included.
Yes, but if you consider what 'X' means, it includes your case, but
does not only mean that. It means that the state is not known and can
be changing in an unknown way. So in reality, your simulation is not
the same as reality, because the states are not specified well
enough.
I remember finding this back when I started working with HDLs and a
tech support person who knew something about HDLs (this was back when
you could actually speak with someone knowledgeable on a hot line)
told me that this was a well known issue. I guess it has not been a
big enough problem to do anything about it. In the case of a FF with
feedback, never getting out of a meta value is the same as the
subtraction case. But in the FF case the solution would be to test
with the input in each state. For the subtraction case you would need
to test with all possible combinations which would be an unrealistic
task. This would require telling the simulator that the two inputs
are not known, but stable, valid and equal. That doesn't sound too
realistic either.
> >> X on a clock is strange. It is a little more interesting on
> >> a clock enable, though. It would seem that there are some state
> >> machines that could reliably start with an X on the clock enable
> >> in real life, but maybe not in simulation.
> > Yes, the issue of starting a circuit with meta values is common. If a
> > FF has a meta value on the enable and the input is different from the
> > output, the result should be a meta value.
>
> Again, the problem is state machines that initialize with real
> data, but not with X. So, even though I agree with you mostly,
> it would be nice to write systems that can verify the design,
> and yet start up in any initial state.
I'm not sure how that relates to FSMs that start up with unknown
inputs. If you don't know the value of a clock enable, how can you
know when or if it will capture the input signal? With FSMs it is
particularly difficult because they will eventually arrive at a known
state, but the process of getting there will not necessarily be the same
in all cases. So how could that be accommodated?
> > I suppose part of the
> > problem is that while gates are primitive elements in the math, FFs
> > are not elements in the language at all. They are inferred through
> > the constructs of the language, but not in the language itself. I
> > have always had an issue with that.
>
> Except for FF's, (and some state machines), I mostly write
> structural verilog. So, yes, it does seem that FF's are not
> part of the language, at least not from structural verilog.
Structural coding in VHDL is a real PITA because of the verbosity.
I'm a little unclear on what you say you do. I would instantiate FFs
and use RTL for the logic. Why would you instantiate the logic and
infer the FFs? FFs can be instantiated, no? I'm more familiar with
VHDL, but I don't use instantiation for low level objects.
Rick
(snip, I wrote)
>> Again, the problem is state machines that initialize with real
>> data, but not with X. So, even though I agree with you mostly,
>> it would be nice to write systems that can verify the design,
>> and yet start up in any initial state.
> I'm not sure how that relates to FSMs that start up with unknown
> inputs. If you don't know the value of a clock enable, how can you
> know when or if it will capture the input signal? With FSMs it is
> particularly difficult because they will eventually arrive at a known
> state, but the process of getting there will not necessarily be the same
> in all cases. So how could that be accommodated?
OK, say you have a state machine that uses clock enable as
part of its feedback, and also is well designed such that it
works no matter the initial state. If it starts in X, then
it won't work.
The simplest I can think of is a FF whose clock enable is the XNOR
of its output and its output delayed by one clock cycle.
If its clock enable is low, the output won't change, the XNOR
will go high, and so will the clock enable.
I haven't thought about why one would want to do that, but
it doesn't seem so strange.
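In code, roughly (a sketch; the names, and the clk and d signals,
are invented):

reg q, q_d;
wire en = ~(q ^ q_d);            // XNOR of the output and its delayed copy

always @(posedge clk) begin
  q_d <= q;                      // q_d trails q by one cycle
  if (en) q <= d;
end

With real 0/1 values the enable always comes back to 1 within a cycle,
but if q powers up as X the XNOR stays X and the simulated enable never
recovers.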
-- glen
>>> Again, the problem is state machines that initialize with real
>>> data, but not with X. So, even though I agree with you mostly,
>>> it would be nice to write systems that can verify the design,
>>> and yet start up in any initial state.
>
>> I'm not sure how that relates to FSMs that start up with unknown
>> inputs. If you don't know the value of a clock enable, how can you
>> know when or if it will capture the input signal? With FSMs it is
>> particularly difficult because they will eventually arrive at a known
>> state, but the process of getting there will not necessarily the same
>> in all cases. So how could that be accommodated?
>
>OK, say you have a state machine that uses clock enable as
>part of its feedback, and also is well designed such that it
>works no matter the initial state. If it starts in X, then
>it won't work.
The problem is that X is only an abstraction, and
it doesn't (and doesn't claim to) model all the
real possibilities.
In Verilog, the hardware meaning of X is pretty much
"it's either a 0 or a 1 but, for some reason, I can't
decide which it is". This idea gives some obvious
inconsistencies, as we've seen. Suppose, for example,
I have an XOR gate whose inputs are both X. In effect,
that means we have four possible values for the XOR's
inputs: 00, 01, 10, 11 - either input can be 0 or 1.
So we don't know the output and we must write the
truth table as X^X=X.
But suppose, for a moment, that both inputs are wired
to the same signal. Now we have only two possible
inputs: 00, 11. The output is reliably 0. How can
we capture this? Obviously we can't describe it in
the truth table of XOR, because that can't know about
correlations between its two inputs.
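A two-line check (variable name invented):

reg a;
initial begin
  a = 1'bx;  $display("a^a = %b", a ^ a);   // prints x
  a = 1'b1;  $display("a^a = %b", a ^ a);   // prints 0
end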
This is precisely why I believe it is futile to try
to make the language imitate hardware reality to the
extent that Rick seems to want. It will always be
possible to find cases where the modelling doesn't
match reality (is too pessimistic or too optimistic)
because X values carry no information about their
relationship to other X values. What we have right
now may not be the perfect compromise, but it's well
defined and we know how to live with it. (Did I
mention assertions?)
Symbolic simulation, and formal analysis, are able
to deal with these questions because they understand
the functionality of the whole circuit, not just a
set of uncorrelated truth tables. Conventional
functional simulation can't be that smart.
Another way to handle it, which has been used on
real projects, is to interfere with the simulation
at time zero so that all register-like signals are
initialized to a random 0 or 1 value instead of X
(it's tricky, but not impossible, to do this using
the Verilog VPI). By running many such simulations
with different random seeds you can learn a lot about
the real start-up and "X" behaviour. There has been
a serious proposal to add such a facility to the
Verilog standard, and I think there's one commercial
simulator that already supports it, but it's not
happening in the current standards effort. There
are some papers on this in the public domain that
I'll try to hunt down and link to.
cheers
--
Jonathan Bromley
> Structural coding in VHDL is a real PITA because of the verbosity.
> I'm a little unclear on what you say you do. I would instantiate FFs
> and use RTL for the logic. Why would you instantiate the logic and
> infer the FFs? FFs can be instantiated, no? I'm more familiar with
> VHDL, but I don't use instantiation for low level objects.
That may explain why you were a little hasty in dismissing
my suggestion to use Verilog instantiation as a cheap way to
implement a closer-to-hardware HDL. Your VHDL past is the problem.
I suggest to reconsider. First, let me point out that any
closer-to-hardware HDL will necessarily involve the use of
more "low level objects" in some form.
More importantly, as is commonly known and as I believe you
experienced yourself, Verilog is a neat and concise HDL as
opposed to super-verbose VHDL. Certainly for structural
coding. Therefore, you will probably find Verilog instantiations
a joy to work with.
And of course, this applies to any functionality, not just FFs.
For example, you could use it for the module power-of-2 counters
that, as I recall, you couldn't get to synthesize properly.
I suppose you now see how this proposal addresses the issues
of your concern.
But if you assume the starting state is unknown, how do you prove that
the FSM will work? You can't. You have to consider all possible
starting states and test them all or something similar. By specifying
an 'X' state, you have not provided enough information for a
simulation to know this will work. Just saying that the machine will
work no matter the starting state is not enough to resolve an 'X'.
Tell me what transition sequence you would expect to see as the FSM is
clocked with 'X' on the inputs?
Rick
The issue is not that 'X' can't model all possibilities, but that 'X'
is saying that you lack information about the state. To model the
real world in more detail you need more information than 'X'
represents.
> In Verilog, the hardware meaning of X is pretty much
> "it's either a 0 or a 1 but, for some reason, I can't
> decide which it is".
I don't agree with this really. It says you don't know the state, not
that it is at a valid and constant, but unknown value. It does not
represent having any information about the state of the signal. So
you can't make many of the assumptions you indicate below.
> This idea gives some obvious
> inconsistencies, as we've seen. Suppose, for example,
> I have an XOR gate whose inputs are both X. In effect,
> that means we have four possible values for the XOR's
> inputs: 00, 01, 10, 11 - either input can be 0 or 1.
> So we don't know the output and we must write the
> truth table as X^X=X.
So far, so good.
> But suppose, for a moment, that both inputs are wired
> to the same signal. Now we have only two possible
> inputs: 00, 11. The output is reliably 0. How can
> we capture this? Obviously we can't describe it in
> the truth table of XOR, because that can't know about
> correlations between its two inputs.
I don't agree that knowing the same signal is on the two inputs of an
XOR gate is enough information to know the output. In a real world
circuit this can easily generate glitches on every state change of the
input. So you don't know the output and the output of 'x' is valid.
> This is precisely why I believe it is futile to try
> to make the language imitate hardware reality to the
> extent that Rick seems to want. It will always be
> possible to find cases where the modelling doesn't
> match reality (is too pessimistic or too optimistic)
> because X values carry no information about their
> relationship to other X values. What we have right
> now may not be the perfect compromise, but it's well
> defined and we know how to live with it. (Did I
> mention assertions?)
I think you are overstating what I have asked for. I am not saying
you have to model logic perfectly. My original thought was that FFs
are not modeled well by assuming a transition from 'x' to '1' is a
rising clock edge. I still think this is a very poor model. Either
ignore invalid edges or generate an invalid output from the FF. The
former makes perfect sense to me while the latter would likely be hard
to do given the current languages.
> Symbolic simulation, and formal analysis, are able
> to deal with these questions because they understand
> the functionality of the whole circuit, not just a
> set of uncorrelated truth tables. Conventional
> functional simulation can't be that smart.
>
> Another way to handle it, which has been used on
> real projects, is to interfere with the simulation
> at time zero so that all register-like signals are
> initialized to a random 0 or 1 value instead of X
> (it's tricky, but not impossible, to do this using
> the Verilog VPI). By running many such simulations
> with different random seeds you can learn a lot about
> the real start-up and "X" behaviour. There has been
> a serious proposal to add such a facility to the
> Verilog standard, and I think there's one commercial
> simulator that already supports it, but it's not
> happening in the current standards effort. There
> are some papers on this in the public domain that
> I'll try to hunt down and link to.
Yes, randomization of start up states can help with the simulation,
but it doesn't seem to be the right way to deal with the issue of FSM
startup. It only takes one missed case to spoil a design. I'm not
clear though on why a part of a design that needs to start up
correctly would not use an initialization through reset or similar.
When would this randomization be needed?
Rick
> The issue is not that 'X' can't model all possibilities, but that 'X'
> is saying that you lack information about the state. To model the
> real world in more detail you need more information than 'X'
> represents.
(snip)
> I don't agree that knowing the same signal is on the two inputs of an
> XOR gate is enough information to know the output. In a real world
> circuit this can easily generate glitches on every state change of the
> input. So you don't know the output and the output of 'x' is valid.
Well, that would be true if there was a different delay between
the paths. As Verilog does model delay (though rarely used), it
would seem fair for it to include the delay.
(snip)
> Yes, randomization of start up states can help with the simulation,
> but it doesn't seem to be the right way to deal with the issue of FSM
> startup. It only takes one missed case to spoil a design. I'm not
> clear though on why a part of a design that needs to start up
> correctly would not use an initialization through reset or similar.
> When would this randomization be needed?
I like the randomization idea but, yes, that wouldn't be the
final answer for state-machine startup. I have used big constants
(such as 12345) when it made sense.
There have been suggestions to supply random bits for floating
point post-normalization, to simulate the uncertainty in the result.
-- glen
>> The problem is that X is only an abstraction, and
>> it doesn't (and doesn't claim to) model all the
>> real possibilities.
>
>The issue is not that 'X' can't model all possibilities, but that 'X'
>is saying that you lack information about the state. To model the
>real world in more detail you need more information than 'X'
>represents.
Yes, I think that's pretty much what I said. You need to
know about relationships that hold between different signals,
not just the values on individual signals.
>> In Verilog, the hardware meaning of X is pretty much
>> "it's either a 0 or a 1 but, for some reason, I can't
>> decide which it is".
>
>I don't agree with this really.
You're welcome to disagree all you like, but that's how it
works for all the Verilog built-in operators. I agree that
posedge is somewhat anomalous, and if() is inevitably
unsatisfactory, but the basic behaviour is as I stated.
Check out the behaviour of the ?: conditional operator
when the selector is X, if you want further confirmation:
wire [3:0] Y = 1'bX ? 4'b1010 : 4'b0110;
gives Y=4'bXX10.
> It says you don't know the state, not
>that it is at a valid and constant, but unknown value.
I didn't say "constant", and certainly didn't intend to
imply it. "Valid", though, I did mean. If you want
X to mean "some voltage on the wire that makes it
uncertain whether I have 0 or 1" then you must go to
analog modelling of some kind.
>you can't make many of the assumptions you indicate below.
I'm not sure what you mean by that. I posited some
example situations, and didn't aim to "make assumptions".
>> But suppose, for a moment, that both inputs are wired
>> to the same signal. Now we have only two possible
>> inputs: 00, 11. The output is reliably 0. How can
>> we capture this? Obviously we can't describe it in
>> the truth table of XOR, because that can't know about
>> correlations between its two inputs.
>
>I don't agree that knowing the same signal is on the two inputs of an
>XOR gate is enough information to know the output. In a real world
>circuit this can easily generate glitches on every state change of the
>input. So you don't know the output and the output of 'x' is valid.
That's a different issue, easily modelled by adding timing
(a specify block, in Verilog-speak) to your XOR model. The
Verilog calculation "y = a ^ a" will ALWAYS yield zero
if 'a' is either 0 or 1; you won't see a glitch when 'a'
makes a transition. Of course you may get a glitch in the
real hardware; it you really want to see that in simulation,
you'll need to add timing to your model. That's irrelevant
to my point that the (stable) value of a^a is zero
regardless of the (stable) value of 'a', but ordinary
functional simulation can't produce that correct result
when a=X. Similarly, if you do provide the necessary
timing model so that flipping the XOR's inputs from 00
to 11 gives a visible output glitch, you won't - and
could not, by any stretch of the imagination - get
such a glitch when the inputs "transition" from XX
to XX, because there's no transition. The X value
allows propagation of unknown-ness through the design
in some cases, but doesn't allow you to represent
all the possibilities that you might care about.
>I still think this
[posedge responds to 0X and X1 transitions]
>is a very poor model. Either
>ignore invalid edges or generate an invalid output from the FF. The
>former makes perfect sense to me while the latter would likely be hard
>to do given the current languages.
I don't disagree that the definition of posedge is odd, and
probably somewhat inappropriate, but that's the way it is
and we need to live with it. Did I mention assertions?
Note that a suitable SystemVerilog assertion could
allow you to trash the Q value of a conventional
synthesisable FF model if there's an X on the clock,
without compromising the ability to synthesise the
code, if you think that's the right thing to do.
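One possible shape for that, purely as a sketch (names invented, and
only one of several ways to arrange it):

reg q;
// synthesis translate_off
reg clk_was_0;                      // was clk a clean 0 just before this change?
always @(clk) clk_was_0 <= (clk === 1'b0);
// synthesis translate_on

always @(posedge clk) begin
  q <= d;                           // ordinary synthesisable flop
  // synthesis translate_off
  EdgeFromZero: assert (clk_was_0)  // the rising edge must come from a clean 0
    else q <= 1'bx;                 // otherwise trash Q
  // synthesis translate_on
end

Synthesis sees only the plain flop; in simulation any rising change that
did not start from a clean 0 corrupts Q (pessimistically including the
very first edge, before a clean 0 has been seen).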
>randomization of start up states can help with the simulation,
>but it doesn't seem to be the right way to deal with the issue of FSM
>startup. It only takes one missed case to spoil a design.
I hope that no-one should equate "randomized" with "scattergun"
for functional verification. I can easily collect coverage
on those randomly generated initial states to check that
any specific critical case has been covered exhaustively,
while tolerating just a selection of starting values
for less critical things such as the counter in a
divide-by-N block.
If I'm really worried about such a thing, I can deploy
a formal assertion checker (or manual analysis) to prove
that all possible start states work properly. Pencil
and paper still has its place.
> I'm not
>clear though on why a part of a design that needs to start up
>correctly would not use an initialization through reset or similar.
>When would this randomization be needed?
Some of the systems I'm working on right now have
an audio timebase, typically 44kHz but could be even
as low as 8kHz. That's generated by dividing down
the system clock, probably 50MHz. No-one cares
about the absolute phase of this timebase relative
to reset. I can't necessarily reset the counter
at power-up (the chip may have multiple power
domains, some of which come out of power-down at
times when there's no reset happening). The
counter powers-up at X; oops, no simulation.
If I force it to zero at power-up, I'm testing
only one of thousands of possibilities. Why
not randomize its state at power-up? If I allow
that random startup value to vary across the
hundreds of simulations I do for other reasons,
I'll get pretty good confidence that all is well.
--
Jonathan Bromley
> I think there is little value to looking at HDLs as general
> programming languages or even as "programming languages" at all. They
> really aren't intended to be programming languages. They are
> "Hardware Desciption Languages". If you want to ignore the hardware
> aspect of them then I feel you are tossing the baby out with the bath
> water.
This is just a bunch of clichés, as misleading and boring
as clichés can be.
Many HDLs have been proposed. Most were in the "close to hardware"
camp. They are mostly forgotten. The market clearly prefers
the "outsiders" that look more like "programming languages".
Of course, the market may be (temporarily) wrong, but to back that
up you better come up with very strong arguments and fresh ideas.
Programming is of course about languages, compilers and methodologies.
However, a point frequently missed by hardware designers is that
there is a wide variety of sub-disciplines and application domains,
with very different characteristics and constraints. The best
programmers know their application domain very well, they master
the coding patterns that work well for their purpose, and they
understand what they can expect from their compilers.
A good HDL designer should of course do exactly the same. He
should understand hardware design very well. But he should also
understand what kind of coding patterns are efficiently
handled by his compiler (synthesis tool). Focussing on
"describing hardware" is counter-productive, because it tends
to make designers blind to useful coding patterns.
Interpreting HDL design as a specific software discipline has
a tremendous advantage: you benefit from all advances in tools
and methodologies from the overall software world. The best
talent that gives us real breakthroughs is there - that is
just a matter of numbers.
We hardware designers are much less special than we often
like to think. By realizing that, we can reduce our development
costs and increase our productivity.
No need to be humble here - that is exactly right and well put.
Then finally come up with something concrete about how that should
look. Some technical starting point, no matter how small,
that we could analyze and criticize in a meaningful way.
Even though I have argued, with arguments open to anyone's
criticism, that the concept goes in the wrong direction,
I have made two concrete suggestions: Verilog instantiations
and AHDL. A good start would be to tell us what is wrong
or right with those approaches for your purposes.
Just stop misrepresenting synthesis, and stop the cheap talk.