
vhdl code review


alb

Dec 8, 2014, 4:01:11 AM
Hi everyone,

Inspired by a nice read (The Art of Designing Embedded Systems - Jack
Ganssle), I've started wondering what's the best way to perform code
reviews for VHDL.

Our projects are on the scale of ~100 KLOC (including testbenches), and
if the number of lines reviewed per hour presented in the book (150) is
correct, we will need to spend ~85 days in code reviews (assuming
8-hour working days)!
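
(Back of the envelope: 100,000 / 150 ≈ 667 hours of reviewing, and
667 / 8 ≈ 83 full working days.)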

Considering that best practices mandate 4 people per review (moderator,
reader, recorder, author), the review alone would cost one and a half
man-years!

Am I missing something? How can we organize reviews to make them
effective?

I've also seen a presentation on klabs.org which talked about reviewing
the netlist as well, since that is what ultimately goes into the
hardware. This would complicate the process enormously, to the point
where no single manager would be on board.

Any insight on this subject? If you happen to know any material
(articles/books/presentations) worth reading, I'll be happy to have a
look.

Thanks a lot,

Al

--
A: Because it messes up the order in which people normally read text.
Q: Why is top-posting such a bad thing?
A: Top-posting.
Q: What is the most annoying thing on usenet and in e-mail?

HT-Lab

Dec 8, 2014, 4:46:02 AM
Hi Al,

On 08/12/2014 09:01, alb wrote:
> Hi everyone,
>
> Inspired by a nice read (The Art of Designing Embedded Systems - Jack
> Ganssle), I've started wondering what's the best way to perform code
> reviews for VHDL.
>
> Our projects are on the scale of ~100 KLOC (including testbenches), and
> if the number of lines reviewed per hour presented in the book (150) is
> correct, we will need to spend ~85 days in code reviews (assuming
> 8-hour working days)!
>
> Considering that best practices mandate 4 people per review (moderator,
> reader, recorder, author), the review alone would cost one and a half
> man-years!
>
> Am I missing something? How can we organize reviews to make them
> effective?
>
> I've also seen a presentation on klabs.org which talked about reviewing
> the netlist as well, since that is what ultimately goes into the
> hardware. This would complicate the process enormously, to the point
> where no single manager would be on board.
>
> Any insight on this subject? If you happen to know any material
> (articles/books/presentations) worth reading, I'll be happy to have a
> look.
>
> Thanks a lot,
>
> Al

I would suggest you look into linting tools, as this is the only way to
handle large amounts of code. There are some really nice ones like
Mentor's DesignChecker, Atrenta's SpyGlass, Aldec's ALINT, etc.
Unfortunately these tools are not cheap, but they are quite powerful,
and compared to the "old days" they do both static and dynamic linting
(running synthesis in the background).
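
To make the static side concrete (my own contrived example, not output
from any of these tools): a classic issue a linter flags is an
incomplete sensitivity list, which causes a simulation/synthesis
mismatch:

  -- 'b' is missing from the sensitivity list: in simulation 'y' will
  -- not update when only 'b' changes, but synthesis builds the OR gate
  -- regardless, so the netlist and the RTL simulation disagree
  process (a)  -- should be: process (a, b)
  begin
    y <= a or b;
  end process;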

Regards,
Hans.
www.ht-lab.com





Chris Higgs

Dec 8, 2014, 8:15:12 AM
On Monday, December 8, 2014 9:01:11 AM UTC, alb wrote:
> Inspired by a nice read (The Art of Designing Embedded Systems - Jack
> Ganssle), I've started wondering what's the best way to perform code
> reviews for VHDL.

Great! It seems to have taken a while for code review to catch on for RTL
development, but done correctly you should find it beneficial.


> Our projects are on the scale of ~100 KLOC (including testbenches), and
> if the number of lines reviewed per hour presented in the book (150) is
> correct, we will need to spend ~85 days in code reviews (assuming
> 8-hour working days)!

If you view code review as a pure cost then it will never become ingrained
in your process. Code review done correctly should *save* you time;
though it's always difficult to quantify, the benefit should be noticeable.

If you're trying to review 100 KLOC then you're very unlikely to succeed.
Ideally code review should be done in parallel with development, reviewing
small chunks at a time. It is probably better to focus on introducing a
process for new development rather than trying to retrospectively review
historical code.


> Considering that best practices mandate 4 people per review (moderator,
> reader, recorder, author), the review alone would cost one and a half
> man-years!
>
> Am I missing something? How can we organize reviews to make them
> effective?

I think these best practices sound like they are from the 80s!

It's generally accepted that review meetings don't work nearly as well
as individual reviews. Group meetings are more suitable for
architecture review, where there is likely to be more discussion, but you
should really try to separate the agreed architecture from the
implementation review. Those reviewing the code should already be
familiar with the agreed architecture.

If you try putting 4 people in a room to perform a code review, it will
not be very efficient (and most likely unproductive).

Current software industry best-practice is for individuals to perform
reviews at their own desks, using a software tool to track any discussion
and actions.


You asked for advice so here is mine:

1. Make code review part of the development process

This is really important. Unless code review is "designed in", it will
always fall by the wayside because everybody is always too busy. The
best way to achieve this is to engineer your development process. I would
suggest the following:

Use branches in your source control to enforce code review. Development
happens on a branch, and changes are reviewed before they can be merged
back onto stable. You should also integrate this flow with your continuous
integration, such that your regression tests are run on the branch before
it is merged, too.

Effectively you have a stable branch where, before anything is committed
to it, a review must have taken place and tests must have run. This can
(and should) be enforced rather than optional.


2. Review small chunks of code

If the cost of rewriting the entire chunk of code being reviewed is
perceived as too high, then it's too late to be doing the review. Ideally
you want to keep reviews under ~500 LOC.


3. Use decent tools to assist

There are plenty of free and non-free tools available (Review Board,
Phabricator, Crucible, even GitLab). These will track comments and
discussions, report test results, etc. At a bare minimum you need something
that ties into your source control, but there are significant benefits
from integrating with your bug-tracking system and continuous integration
/ regression tests too.


4. Don't underestimate the cultural adjustment required

Introducing code reviews can often cause difficulties. Some people don't
like having flaws in their code pointed out, some people will become
defensive, some people will be aggressively anal in the review. In my
experience hardware engineers are more prone to rejecting the idea of code
reviews than pure software engineers.

The key thing to remember is that code review serves multiple purposes.
It should improve the quality of your code-base. It should improve the
team's familiarity with the code-base. It should improve the skill levels
of all your engineers as they learn from each other. To achieve this,
however, the code review needs to be viewed as beneficial by the engineers
as well as the management!

For further reading this paper has some real data and interesting
commentary: http://smartbear.com/SmartBear/media/pdfs/best-kept-secrets-of-peer-code-review.pdf


Pretty much everything written about software code-review will be relevant
to you because at the end of the day RTL development is identical to
software development... we just have much longer compile times ;)


Thanks,

Chris

alb

Dec 8, 2014, 8:29:11 AM
Hi Hans,

In article <srehw.14363$O%4.1...@fx39.am4> you wrote:
[]
> I would suggest you look into linting tools as this is the only way to
> handle large amount of code. There are some really nice ones like
> Mentor's Design Checker, Atrenta's Spyclass, Aldec's Alint etc.

linting tools are essential in order to verify coding rules, but they
won't catch lack of synchronization, asynchronous logic complexity, race
conditions, wrong initialization, and all the assumptions the coder made
when writing the code.
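
A contrived example of what I mean (signal names made up): the following
is syntactically clean and would pass a coding-rules linter, yet it
silently samples a signal generated in another clock domain:

  -- flag_a toggles in the clk_a domain; this process runs on clk_b
  process (clk_b)
  begin
    if rising_edge(clk_b) then
      if flag_a = '1' then  -- hidden CDC: flag_a is never resynchronized
        start <= '1';
      end if;
    end if;
  end process;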

Moreover, comments are seldom analyzed, and they are an important part
of code quality.

> Unfortunately these tools are not cheap but they are quit powerful and
> compared to the "old days" they do both static and dynamic linting
> (running synthesis in the background).

I'll have a look at those, since they can certainly provide a lot of
useful checking that can be taken off the review table (essentially,
the build should be clean: no errors, no warnings unless justified).
But there are other aspects that can't be covered by a tool, and on top
of my small list above I suspect there are even more.

Al

alb

Dec 8, 2014, 6:32:54 PM
Hi Chris,

Chris Higgs <chig...@gmail.com> wrote:
[]
> Great! It seems to have taken a while for code review to catch on for RTL
> development but done correctly you should find it beneficial.

good to know that I'm not alone in my madness!

>> Our projects are on the scale of ~100 KLOC (including testbenches), and
>> if the number of lines reviewed per hour presented in the book (150) is
>> correct, we will need to spend ~85 days in code reviews (assuming
>> 8-hour working days)!
>
> If you view code review as a pure cost then it will never become ingrained
> in your process. Code review done correctly should *save* you time;
> though it's always difficult to quantify, the benefit should be noticeable.

to be ingrained in the process, management needs to be on board;
otherwise, by definition, it won't be ingrained. I certainly don't want
to reduce code review to a mere *cost*, but this is how some
not-so-enlightened managers may perceive it.

Moreover, the technical staff should be the first driving force, since
they should see the immediate benefit of it rather than just another
burden on their shoulders.

>
> If you're trying to review 100 KLOC then you're very unlikely to succeed.
> Ideally code review should be done in parallel with development, reviewing
> small chunks at a time. It is probably better to focus on introducing a
> process for new development rather than trying to retrospectively review
> historical code.

absolutely. The 100 KLOC is the size of a medium-sized project here in
house, and regardless of how we integrate code review into our process,
at the end of the day we will still need to review it all, i.e. an extra
amount of hours to budget in.

>> Considering that best practices mandate 4 people per review (moderator,
>> reader. recorder, author), only review would cost one and a half
>> man/year!
[]
> It's generally accepted that review meetings don't work nearly as well
> as individual reviews. Group meetings are more suitable for
> architecture review, where there is likely to be more discussion, but you
> should really try to separate the agreed architecture from the
> implementation review. Those reviewing the code should already be
> familiar with the agreed architecture.

Agreed; indeed it is not uncommon that people tend to discuss
architectural choices instead of the implementation details, which
invalidates the whole review and makes it ineffective.

A rundown of the architecture should be preparation for all the
participants. We are trying to separate the verification effort
completely from the development one, involving at least two people,
plus the technical coordinator, who supervises the whole development
but rarely writes any code.

Both the verification engineer and the designer know the specifications
as well as the architecture. The verification engineer could be one of
the reviewers, but unfortunately his/her experience is shaped more
around verification techniques. The same could be said for the RTL
designer reviewing the testbench code.

Asking people who are not part of the project to be educated on the
architecture and the specs is a major drawback; it would essentially
require the whole FPGA team to know everything about everything, which
is equivalent to knowing nothing about anything... :-/

[]
> Current software industry best-practice is for individuals to perform
> reviews at their own desks, using a software tool to track any discussion
> and actions.

I understand your point, and I wasn't thinking of having people sitting
around a table discussing for hours why a signal name has not been
written in 'camel case'. We have formal round-table reviews for
hardware, with a checklist and QA noting down any deviation, but the
complexity you can get with a 10M-gate component is not comparable (at
least in our designs the hardware is never driving the schedule).

> 1. Make code review part of the development process
>
> This is really important. Unless code review is "designed in" then it
> will always fall by the wayside because everybody is always too busy. The
> best way to achieve this is to engineer your development process. I would
> suggest the following:
>
> Use branches in your source control to enforce code review. Development
> happens on a branch, and changes are reviewed before they can be merged
> back onto stable.

By definition, a feature that is ready to be integrated should be
verified before the merge takes place. If the code is already verified,
the code review may not be so attractive (the manager: 'the code is
tested, why should we spend more time on it?').

IMO, instead, code review should take place before testing (but after
elaboration *and* synthesis). IIRC, the effectiveness of code reviews is
essentially due to the fact that a bug costs less if found early, and
there's nothing earlier than 'as soon as the code is ready'. Sometimes
it costs more to run simulations that fail because of simple bugs than
to have a pair of eyes read the code.

> You should also integrate this flow with your continuous
> integration, such that your regression tests are run on the branch before
> it is merged, too.

Ok, this sentence by itself could start a whole new thread (on
continuous integration), so I'll probably launch that discussion
separately in the coming days. I think we've briefly talked about this
in the not-so-distant past, but I'm still stuck where I was then
(priorities are changing at a fast pace!).

> Effectively you have a stable branch where, before anything is committed
> to it, a review must have taken place and tests must have run. This can
> (and should) be enforced rather than optional.

It's easy to enforce test runs with pre-commit hooks before merging
onto the mainline; it's less obvious how to force the review into the
process. As I said earlier, the verification phase should not even start
before the code review is performed, but I'm asking myself whether the
review should be an iterative process not so different from the
verification effort (test, modify, test).

And when do we consider the code review over? All lines of code have
been reviewed, some were tagged as needing modification, and some others
will likely show bugs during simulation. Should the code be re-reviewed
after all the mods, or along with the mods?

> 2. Review small chunks of code
>
> If the cost of rewriting the entire chunk of code being reviewed is
> perceived as too high, then it's too late to be doing the review. Ideally
> you want to keep reviews under ~500 LOC.

Divide and conquer. The architectural phase is critical, since that is
where we divide functions and establish their
interconnections/interactions. Every functional block should also be
divided into relatively manageable units which loosely interact
(micro-architecture).

Maybe we should envision multiple branches for each unit, which all
merge to a feature branch (that would eventually merge to the stable
branch). It would be rather difficult, though, to partition
functionality using the number of lines of code as a delimiter; still, I
understand your idea, and in practice most of our files (one entity per
file) are around that number (with the exception of testbenches, which
are usually longer and, I believe, wrongly structured!).

> 3. Use decent tools to assist
>
> There are plenty of free and non-free tools available (Review Board,
> Phabricator, Crucible, even GitLab). These will track comments and
> discussions, report test results, etc. At a bare minimum you need something
> that ties into your source control, but there are significant benefits
> from integrating with your bug-tracking system and continuous integration
> / regression tests too.

IIRC, Review Board should be easy to integrate with Bugzilla, while I
don't really see how to integrate it with the CI. Comments from Review
Board should translate into bugs which must be dealt with before any
merging activity.

This approach calls for somebody responsible for merging features into
the trunk, verifying that no pending bugs are being swept under the
carpet!

> 4. Don't underestimate the cultural adjustment required
>
> Introducing code reviews can often cause difficulties. Some people don't
> like having flaws in their code pointed out, some people will become
> defensive, some people will be aggressively anal in the review. In my
> experience hardware engineers are more prone to rejecting the idea of code
> reviews than pure software engineers.

<rant mode>
Agreed. I see it frequently: everyone considers his/her own domain the
ultimate answer to human struggles. I see lots of subjects which are
common to both worlds, but even talented people often consider software
practices like 'abstraction' something to be rejected 'because I care
about how many gates my code produces'.
</rant mode>

> The key thing to remember is that code review serves multiple purposes.

Yet I haven't found what we should focus on when reviewing the code.
As far as coding rules are concerned, maybe a fine-tuned linting tool
would solve the issue, but then what? Here's my very small list off the
top of my head (in no specific order):

- entity interface types
- wrongly initialized signals/variables
- conditions for signal/state changes
- clock domain transitions (see the sketch below)
- resynchronization of asynchronous logic (if and when needed)
- clarity of comments
- use of 'standard' functions/procedures
- rewriting of reused functional blocks into functions/procedures
- do not discuss architectural choices!
- reasonable number of states per FSM; otherwise break it into
  hierarchical FSMs.

Anything to add or to remove?
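
As a concrete instance of the clock-domain items, this is the pattern
I'd expect a reviewer to look for wherever a signal crosses into another
clock domain (a minimal sketch; entity and signal names are made up):

  library ieee;
  use ieee.std_logic_1164.all;

  entity sync2 is
    port (clk_b  : in  std_logic;   -- destination clock
          flag_a : in  std_logic;   -- level signal from the clk_a domain
          flag_b : out std_logic);  -- safe to use in the clk_b domain
  end entity;

  architecture rtl of sync2 is
    signal meta, stable : std_logic := '0';
  begin
    process (clk_b)
    begin
      if rising_edge(clk_b) then
        meta   <= flag_a;  -- first flop: may go metastable
        stable <= meta;    -- second flop: filters the metastability out
      end if;
    end process;
    flag_b <= stable;
  end architecture;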

> It should improve the quality of your code-base. It should improve the
> team's familiarity with the code-base. It should improve the skill levels
> of all your engineers as they learn from each other. To achieve this,
> however, the code review needs to be viewed as beneficial by the engineers
> as well as the management!

Unfortunately, too often management has rejected the idea of having
better-quality code because 'it won't work better' (or at least that's
hard to prove). On top of that, they often turn a blind eye to upfront
efforts and prefer a quick-and-dirty solution that ships on time. The
least important argument is skill-level improvement; the answer would
simply be 'not on my project'.

No rational argument is valid against such management. Still, something
needs to be done about our inability to tackle complexity at a larger
scale.

> For further reading this paper has some real data and interesting
> commentary: http://smartbear.com/SmartBear/media/pdfs/best-kept-secrets-of-peer-code-review.pdf

thanks for the link; someday I should start practicing 'speed reading'
to keep up with the amount of stuff I *have to read* (it's piling up!).

> Pretty much everything written about software code-review will be relevant
> to you because at the end of the day RTL development is identical to
> software development... we just have much longer compile times ;)

Agreed, and anyone rejecting the similarities because 'every flop
counts' will not keep up with the increase in complexity. Even in a very
conservative market like 'space', today's devices are monsters that
allow complex SoCs with multiple processors running and peripherals
offloading the software stack. Insisting on keeping control over every
register is IMO not affordable.

<rant mode>
In our house we are crazy about optimization in the very early phases,
believing that if we can spare some gates on a functional block we have
made our day. This approach is rather shortsighted, since optimization
should be looked at from a different perspective, often at the system
level: maybe moving functions from hardware to software, maybe
rearchitecting the coupling between functions. On top of that, why
optimize a function whose impact is only 5% of the overall resource
needs? Donald Knuth wrote about 'premature optimization' as the 'root
of all evil', and I could not agree more.
</rant mode>

Al

Andy

Dec 8, 2014, 9:53:56 PM
A couple of things we have found helpful for PL code reviews:

1. As already mentioned, a good (set of) linting tools with customized rules. The results from running these tools should be reviewed. This speeds up verification of compliance to local design and coding standards.

2. A good IDE is immensely helpful, especially when reviewing code I am not familiar with (as an independent reviewer). It makes it easy to find object definitions, etc., and provides extensive syntax highlighting (especially Sigasi for VHDL). These tools often provide graphical and outline views, where-used info, etc.

3. Code complexity analysis tools like SciTools Understand can help target effective reviews commensurate with complexity/size of code. These tools also provide helpful graphical views of the code base.

4. Above all, invite younger, less experienced developers to reviews. This is an excellent way for them to learn, and pays big dividends down the road.

5. Don't forget to review constraints files, simulation/synthesis/place/route logs, code/functional coverage results, etc.

6. Unless you have specific tools or items for which you wish to review net lists, I don't recommend it. Reviewing log files is more cost effective.

7. Review issue tracking tool reports (types of issues found, how/when found, how fixed, etc.). This information is very useful for process improvement.

As mentioned earlier, these are all pretty normal for experienced SW personnel, but if your organization has not embraced the SW aspects of PL/ASIC development, much of this will seem foreign to participants and to management. Additional training may be in order (how to prepare for and conduct code reviews).

Andy

alb

Dec 9, 2014, 4:55:39 PM
Hi Andy,

Andy <jone...@comcast.net> wrote:
[]
> 1. As already mentioned, a good (set of) linting tools with customized
> rules. The results from running these tools should be reviewed. This
> speeds up verification of compliance to local design and coding
> standards.

Having coding rules would be the first step toward better code quality.
I have always had a hard time understanding why such well-recognized
practices are simply neglected in the name of 'lack of time'.

> 2. A good IDE is immensely helpful, especially when reviewing code I am
> not familiar with (as an independent reviewer). It makes it easy to
> find object definitions, etc., and provides extensive syntax
> highlighting (especially Sigasi for VHDL). These tools often provide
> graphical and outline views, where-used info, etc.

I am pretty much accustomed to emacs and etags, which help me browse the
code base rather quickly. Amongst the 8 people we have there are, I
guess, 6 editors (moreover, I'm not one of them), and each is
religiously addicted to his preferred one (http://xkcd.com/378/).

> 3. Code complexity analysis tools like SciTools Understand can help
> target effective reviews commensurate with complexity/size of code.
> These tools also provide helpful graphical views of the code base.

Thanks for the hint, I'll give it a try on a large (and unstructured)
code base to see how it can help. I've heard several times that
subcontracted code is a nightmare to maintain because of the lack of
detailed knowledge of it. Tools like these may assist in the code
analysis and in the familiarization process. The same happens when, due
to workload and/or priorities, designers are brought in from nowhere and
asked to *contribute* without even knowing where to start.

> 4. Above all, invite younger, less experienced developers to reviews.
> This is an excellent way for them to learn, and pays big dividends
> down the road.

This is what's called investment... depending on the historical or
emotional phase, management may interfere a lot on this point.

> 5. Don't forget to review constraints files,
> simulation/synthesis/place/route logs, code/functional coverage
> results, etc.

To my understanding, reports are already way down the road w.r.t. code
review. They certainly need to be reviewed, and in our case they are
also contractually required to appear in our documentation set
(verification report, validation report, detailed design report...).
While they are crucial to shipping a sound project, IMO they are not
part of the code itself. OTOH, I do agree that constraint files need to
be reviewed as code. I'd like to separate reviewing what goes in from
what goes out.

> 6. Unless you have specific tools or items for which you wish to
> review net lists, I don't recommend it. Reviewing log files is more
> cost effective.

I've personally found netlist analysis a big PITA, and I prefer to spend
time describing behavior at a higher level and *verifying* that it works
correctly. We have post-synthesis and post-layout sims [1] to make sure
the generated netlist behaves as expected. How could you possibly review
a netlist on a 2M-gate device with 92% occupancy?

> 7. Review issue tracking tool reports (types of issues found, how/when
> found, how fixed, etc. This information is very useful for process
> improvement.

Unfortunately our issue tracking tool is very loosely configured, and
this type of information is extremely difficult to extract. I believe
there's a whole other area of improvement there. What we tend to do,
though, is review each bug fix ASAP in our own corners, and if somebody
is not satisfied with the bug closure they will reopen it, asking for
clarifications or additional mods.

> As mentioned earlier, these are all pretty normal for experienced SW
> personnel, but if your organization has not embraced the SW aspects of
> PL/ASIC development, much of this will seem foreign to participants
> and to management. Additional training may be in order (how to prepare
> for and conduct code reviews.)

Cultural inertia certainly doesn't ease the introduction of new
philosophies and approaches. Training might be done in house, and it
should not take too much effort to bring people on board in small steps.

If you have any extra suggestions directly related to aspects of the
code, beyond the ones I listed in a previous message in this thread, I'd
be happy to hear them.

Al

[1] ECSS-Q-60-02C requires full coverage of post-layout sims according
to the verification plan, which specifies which test covers which
requirement and how.

John Speth

Dec 9, 2014, 9:31:58 PM
> Inspired by a nice read (The Art of Designing Embedded Systems - Jack
> Ganssle), I've started wondering what's the best way to perform code
> reviews for VHDL.

Among the other suggestions this thread presents to you, I suggest you
look up Fagan inspections. It's a formalized process for reviews. If
your management supports allocating the resources (people) to
participate, you're in a good position. We've used Fagan inspections
with great results for circuit designs and C code reviews. It should
work just fine for HDL inspections.

JJS

Andy

Dec 15, 2014, 12:36:44 AM
Al,

I don't recommend that everyone use the same editor/IDE, but regardless of the one(s) used by the developer(s), a good IDE used by the reviewer is a huge boost. I especially like Sigasi, since it does not need a list of files in compile order. Just point it at the directory(ies) containing the source files, and it will figure it all out. Sigasi can also uniquely fontify different parts of speech (ports, signals, variables, constants, functions, types/subtypes, etc.), making code that the reviewer did not write much easier to understand.

IMHO, reviewing code that has not been compiled, or at least linted, is a waste of the reviewers' time. Even if the code is only unit-level RTL, the developer should still synthesize it, at the unit level if necessary. It should also be compiled into the simulator(s) to be used. The logs from these provide useful information that can be quickly reviewed. Packages cannot be synthesized at the unit level, but maybe a simple test entity that uses the package can be synthesized and the log files reviewed.
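
For example (a hypothetical sketch; the package and function names are
made up), a throwaway wrapper like this lets synthesis exercise a
package so its logs can be reviewed:

  library ieee;
  use ieee.std_logic_1164.all;
  use work.my_pkg.all;  -- the package under review

  entity my_pkg_synth_check is
    port (din  : in  std_logic_vector(7 downto 0);
          dout : out std_logic_vector(7 downto 0));
  end entity;

  architecture rtl of my_pkg_synth_check is
  begin
    dout <= swap_nibbles(din);  -- exercise a function from my_pkg
  end architecture;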

Andy

lars.ande...@gmail.com

Dec 16, 2014, 1:22:01 AM
I'm using unit testing when developing VHDL, and a nice side effect is that it makes you review your own code in a way that you wouldn't otherwise. I suspect that I find as many bugs writing these tests as I do running them. We have released our unit testing framework for VHDL as open source on GitHub (https://github.com/LarsAsplund/vunit), and there are also other solutions for Verilog.
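
For a flavor of what a minimal testbench looks like in our framework (a
sketch following VUnit's standard pattern; the test name is made up):

  library vunit_lib;
  context vunit_lib.vunit_context;

  entity tb_minimal is
    generic (runner_cfg : string);
  end entity;

  architecture tb of tb_minimal is
  begin
    main : process
    begin
      test_runner_setup(runner, runner_cfg);
      while test_suite loop
        if run("two_plus_two") then
          check_equal(2 + 2, 4);  -- trivial check; real tests drive a DUT
        end if;
      end loop;
      test_runner_cleanup(runner);  -- reports results back to the runner
    end process;
  end architecture;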

Lars