Hi Chris,
Chris Higgs <chig...@gmail.com> wrote:
[]
> Great! It seems to have taken a while for code review to catch on for RTL
> development but done correctly you should find it beneficial.
good to know that I'm not alone in my madness!
>> Our projects are on the scale of ~100 KLOC (including testbenches) and
>> if the number of lines reviewed per hour presented in the book is
>> correct (150) it will mean that we will need to spend ~85 days in code
>> reviews (considering a day made out of 8 working hours)!
>
> If you view code review as a pure cost then it will never become ingrained
> in your process. Code review done correctly should *save* you time,
> though it's always difficult to quantify it should be noticeable.
For code review to be ingrained in the process, management needs to be
on board; otherwise, by definition, it won't be. I'm certainly not
trying to reduce code review to a mere *cost*, but this is how some less
enlightened managers may perceive it.
Moreover, the technical staff should be the first driving force, since
they should see the immediate benefit of it rather than just another
burden on their shoulders.
>
> If you're trying to review 100 KLOC then you're very unlikely to succeed.
> Ideally code review should be done in parallel with development, reviewing
> small chunks at a time. It is probably better to focus on introducing a
> process for new development rather than trying to retrospectively review
> historical code.
Absolutely. 100 KLOC is the size of a medium project here in house, and
regardless of how we integrate code review into our process, at the end
of the day we will still need to review it all, i.e. an extra amount of
hours to budget for.
>> Considering that best practices mandate 4 people per review (moderator,
>> reader, recorder, author), reviews alone would cost one and a half
>> man-years!
[]
> It's generally accepted that review meetings generally don't work nearly
> so well as individual reviews. Group meetings are more suitable for
> architecture review where there is likely to be more discussion, but you
> should really try and separate the agreed architecture from the
> implementation review. Those reviewing the code should already be
> familiar with the agreed architecture.
Agreed, it is indeed not uncommon for people to discuss architectural
choices instead of implementation details, which invalidates the whole
review and makes it ineffective.
A run-down on the architecture should be preparatory work for all the
participants. We are trying to separate the verification effort
completely from the development one, involving at least two people,
plus the technical coordinator, who supervises the whole development
but rarely writes any code.
Both the verification engineer and the designer know the specifications
as well as the architecture. The verification engineer could be part of
the reviewers, but unfortunately his/her experience is more shaped
around verification techniques. The same could be said for the RTL
designer reviewing the testbench code.
Asking people who are not part of the project to be educated on the
architecture and the specs is a major drawback: it would essentially
require the whole FPGA team to know everything about everything, which
in the end is equivalent to knowing nothing about anything... :-/
[]
> Current software industry best-practice is for individuals to perform
> reviews at their own desks, using a software tool to track any discussion
> and actions.
I understand your point, and I wasn't thinking of having people sit
around a table discussing for hours why a signal name has not been
written in 'camel case'. We have formal round-table reviews for
hardware, with a checklist and QA noting down any deviation, but the
complexity you get with a 10M-gate component is not comparable (at
least in our designs the hardware is never driving the schedule).
> 1. Make code review part of the development process
>
> This is really important. Unless code review is "designed in" then it
> will always fall by the wayside because everybody is always too busy. The
> best way to achieve this is to engineer your development process. I would
> suggest the following:
>
> Use branches in your source control to enforce code review. Development
> happens on a branch, merges back onto stable are reviewed before they can
> be merged.
By definition, a feature that is ready to be integrated should be
verified before the merge takes place. If the code is already verified,
the code review may not be so attractive (the manager: the code is
tested, why should we spend more time on it?).
IMO code review should instead take place before testing (but after
elaboration *and* synthesis). IIRC the effectiveness of code reviews is
essentially due to the fact that a bug costs less when found early, and
there's nothing earlier than 'as soon as the code is ready'. Sometimes
it costs more to run simulations that fail because of simple bugs than
to have a pair of eyes read the code.
> You should also integrate this flow with your continuous
> integration such that your regression tests are run on the branch before
> it is merged too.
Ok, this sentence by itself could start a whole new thread (on
continuous integration), so I'll probably launch that discussion
separately in the coming days. I think we've briefly talked about this
in the not-so-distant past, but I'm still stuck where I was then
(priorities are changing at a fast pace!).
> Effectively you have a stable branch where to commit to it a review must
> have taken place and tests run. This can (and should) be enforced rather
> than optional.
It's easy to enforce test runs with pre-commit hooks before merging
onto the mainline; it's less obvious how to enforce the review itself
in the process. As I said earlier, the verification phase should not
even start before the code review is performed, but I'm asking myself
whether the review should be an iterative process, not so different
from the verification effort (test, modify, test).
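The review gate could at least be made mechanical rather than optional. A minimal sketch in Python, assuming a hypothetical review log (the `review_log` dict and commit ids are invented for illustration; in practice a server-side hook would query the review tool's records instead):

```python
# Toy merge gate: refuse a merge unless every commit on the branch has a
# recorded review approval. The review_log dict and commit ids are made
# up for illustration; a real hook would query the review tool instead.

def merge_allowed(branch_commits, review_log):
    """Return (ok, unreviewed) for a list of commit ids."""
    unreviewed = [c for c in branch_commits
                  if review_log.get(c) != "approved"]
    return (not unreviewed, unreviewed)

review_log = {"a1f3": "approved", "b2c4": "approved", "c3d5": "pending"}

ok, missing = merge_allowed(["a1f3", "b2c4", "c3d5"], review_log)
print(ok, missing)  # False ['c3d5']
```

Wired into a pre-receive hook, this would answer the "how to force the review" question the same way the test-run hook does: the merge simply bounces until the sign-off exists.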
And when do we consider a code review over? All lines of code have been
reviewed, some were tagged to be modified, some others will likely show
bugs during simulation; should the code be reviewed again after all the
modifications, or along with them?
> 2. Review small chunks of code
>
> If the cost of rewriting the entire chunk of code being reviewed is
> perceived as too high then it's too late to be doing the review. Ideally
> you want to keep reviews under ~500 LOC.
Divide and conquer. The architectural phase is critical since it is
where we divide functions and establish their
interconnections/interactions. Every functional block should also be
divided into relatively manageable units which loosely interact
(micro-architecture).
Maybe we should envision multiple branches, one per unit, which all
merge into a feature branch (that would eventually merge into the
stable branch).
It would be rather difficult, though, to break up functionality with
the number of lines of code as a delimiter; still, I understand your
idea, and in practice most of our files (one entity per file) are
around that number (with the exception of testbenches, which are
usually longer and, I believe, wrongly structured!).
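The ~500 LOC guideline could at least be checked automatically before a review is opened. A rough sketch in Python counting the changed lines in a unified diff (the diff text is invented for illustration):

```python
# Rough size check for a review: count added/removed lines in a
# unified diff and warn when the chunk exceeds the ~500 LOC guideline.

def diff_size(diff_text):
    changed = 0
    for line in diff_text.splitlines():
        # '+++' / '---' are file headers, not changes
        if line.startswith(("+++", "---")):
            continue
        if line.startswith(("+", "-")):
            changed += 1
    return changed

REVIEW_LIMIT = 500

diff = """--- a/fifo.vhd
+++ b/fifo.vhd
@@ -10,2 +10,3 @@
-  signal full : std_logic;
+  signal full  : std_logic := '0';
+  signal empty : std_logic := '1';
"""

n = diff_size(diff)
print(n, "changed lines;", "ok" if n <= REVIEW_LIMIT else "split the review")
```

In practice the diff would come from something like `git diff stable...feature`, and the check could live in the same hook that triggers the regression run.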
> 3. Use decent tools to assist
>
> There are plenty of free and non-free tools available (reviewboard,
> phabricator, crucible, even gitlab). These will track comments and
> discussions, report test results etc. At bare minimum you need something
> that ties into your source control, but there are significant benefits
> from integrating into your bug-tracking system and continuous integration
> / regression tests too.
IIRC reviewboard should be easy to integrate with bugzilla, while I
don't really see how to integrate it with the CI. Comments from
reviewboard should translate into bugs which must be dealt with before
any merging activity.
This approach calls for somebody responsible for merging features into
the trunk, verifying that no pending bugs are being swept under the
carpet!
> 4. Don't underestimate the cultural adjustment required
>
> Often introducing code reviews can cause difficulties. Some people don't
> like having flaws in their code pointed out, some people will become
> defensive, some people will be aggressively anal in the review. In my
> experience hardware engineers are more prone to rejecting the idea of code
> reviews than pure software engineers.
<rant mode>
Agreed. I see it frequently: everyone considers his/her domain the
ultimate answer to human struggles. There are lots of subjects which
are common to both worlds, but even talented people often consider
software practices like 'abstraction' as something to be rejected
'because I care about how many gates my code produces'.
</rant mode>
> The key thing to remember is that code review serves multiple purposes.
Yet I haven't figured out what we should focus on when reviewing the
code. As far as coding rules are concerned, maybe a fine-tuned linting
tool would solve the issue, but then what? Here's my very small list
off the top of my head (in no specific order):
- entity interface types
- wrongly initialized signals/variables
- conditions for signal/state changes
- clock domain crossings
- resynchronization of asynchronous logic (if and when needed)
- clarity of comments
- use of 'standard' functions/procedures
- rewriting of reused functional blocks into functions/procedures
- no discussion of architectural choices!
- reasonable number of states per FSM, otherwise break it into hierarchical FSMs.
Anything to add or to remove?
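Some of the mechanical items above could indeed be delegated to a linting pass, so the review can concentrate on the rest. A toy sketch in Python for just one of them, flagging VHDL signal declarations without an explicit initial value (the VHDL snippet is made up, and a real linter would need a proper parser rather than a regex):

```python
import re

def uninitialized_signals(vhdl_text):
    """Return (line number, signal name) for declarations lacking ':='."""
    hits = []
    for lineno, line in enumerate(vhdl_text.splitlines(), start=1):
        # Naive match: a line starting with 'signal <name>' (VHDL is
        # case-insensitive) and carrying no ':=' initializer anywhere.
        m = re.match(r"\s*signal\s+(\w+)", line, re.IGNORECASE)
        if m and ":=" not in line:
            hits.append((lineno, m.group(1)))
    return hits

code = """signal count : unsigned(7 downto 0);
signal valid : std_logic := '0';
signal state : state_t;
"""
print(uninitialized_signals(code))  # [(1, 'count'), (3, 'state')]
```

Whether an initializer is actually wanted depends on the target (and on the reset strategy), so a tool like this should only raise questions for the reviewer, not verdicts.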
> It should improve the quality of your code-base. It should improve the
> teams familiarity with the code-base. It should improve the skill levels
> of all your engineers as they learn from each other. To achieve this
> however the code review needs to be viewed as beneficial by the engineers
> as well as the management!
Unfortunately, too often management has rejected the idea of having
better quality code because 'it won't work better' (or at least that is
hard to prove). On top of that, they often turn a blind eye to upfront
efforts and prefer a quick and dirty solution to ship on time. The
least persuasive argument is the one about skill improvement; the
answer would simply be 'not on my project'.
No rational argument holds against such management. Still, something
needs to be done to overcome our inability to tackle complexity at a
larger scale.
Thanks for the link; someday I should start practicing 'speed reading'
to be more effective with the amount of stuff I *have to read* (it's
piling up!).
> Pretty much everything written about software code-review will be relevant
> to you because at the end of the day RTL development is identical to
> software development... we just have much longer compile times ;)
Agreed, and anyone rejecting the similarities because 'every flop
counts' will not keep up with the increase in complexity. Even in a
very conservative market like 'space', today's devices are monsters
that allow complex SoCs with multiple processors running and
peripherals offloading the software stack. Wanting to keep control over
every single register is IMO no longer affordable.
<rant mode>
In our house we are crazy about optimization in the very early phases,
believing that if we can spare some gates on a functional block we have
made our day. This approach is rather shortsighted, since optimization
should be looked at from a different perspective, often at system
level: maybe moving functions from hardware to software, maybe
rearchitecting the coupling between functions. On top of that, why
optimize a function whose impact is only 5% of the overall resource
needs? Donald Knuth described 'premature optimization' as the 'root of
all evil', and I could not agree more.
</rant mode>
Al