On 4/16/2021 1:26 PM, Dave Nadler wrote:
> On 4/16/2021 3:56 PM, Don Y wrote:
>> On 4/16/2021 12:24 PM, Dave Nadler wrote:
>>> Anybody able to recommend a tool they've used successfully?
>>
>> Coverity will require deep pockets/"high visibility" (they're out
>> to make money).
>
> Presumably they'd like a recommendation in a presentation that will be seen by
> ~1k people. But at the current pace, more likely they will get a
> dis-recommendation. The salesperson just emailed me an incorrect summary of my
> requirements even though I repeated them at least 3 times. Yikes!
Ahhh, gwasshoppa... your mistake is assuming competence!
IIRC, NetBSD (or maybe FreeBSD?) is using Coverity to analyze their codebase
(perhaps just the core system -- kernel + userland).
>> Eclipse includes some tools. Lint/PCLint are old standbys.
>
> Haven't found anything that works with current Eclipse.
> For this one I'm actually looking for stand-alone tool.
>
>> There are a few IDEs that include support for MISRA compliance
>> checking. PVS-Studio under Windows.
>
> These bugs would probably pass any MISRA checker.
> As do hundreds of bugs I've seen in the past few years.
> But hey, MISRA is a religion.
... and, as with all religions... <frown>
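To Dave's point -- a contrived sketch of my own (NOT from any real
codebase): a function can be squeaky-clean against every MISRA rule
and still be flat-out wrong. Rule checkers audit *style*, not *intent*:

#include <stdint.h>

/* Intended to sum the first n elements of buf.  Fixed-width
   types, single exit, braces everywhere -- a MISRA checker
   is happy.  But the loop bound is an off-by-one: it reads
   one element PAST the requested range.                     */
static uint32_t sum_first_n(const uint32_t buf[], uint32_t n)
{
    uint32_t total = 0U;
    uint32_t i;

    for (i = 0U; i <= n; i++)    /* BUG: should be i < n */
    {
        total += buf[i];
    }
    return total;
}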
>> Note that what some folks would consider a bug might really
>> just be a coding style preference (e.g., multiple returns
>> from a function)
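A trivial sketch of what I mean (my own contrivance): these two behave
identically, yet a single-exit rule (MISRA has one) flags the second.
That's dogma, not a defect:

/* Functionally identical; single-exit shops flag the second. */
static int clamp_single_return(int x)
{
    int result = x;

    if (x < 0) {
        result = 0;
    } else if (x > 100) {
        result = 100;
    }
    return result;
}

static int clamp_early_return(int x)
{
    if (x < 0)   { return 0;   }
    if (x > 100) { return 100; }
    return x;
}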
>>
>> My approach has mimicked that implicit in code reviews: let lots
>> of eyes (in this case, tools) look at the code and then interpret
>> their reports. The more you veer from plain vanilla C, the more
>> you'll have to hand-hold the tool.
>
> The presentation emphasizes actual human code reviews, but one of the early
> reviewers suggested static analysis, so I thought I'd give it a try...
You might suggest/pitch the use of whatever tools are available,
RUN ON THE CODEBASE BEFORE THE CODE REVIEW. The point is not
to find all of the problems but, rather, to "bias" (bad choice of
word) the reviewers as they undertake their active review of the code.
I.e., the amount of low-hanging fruit can prime folks to
step up (or down!) their game. A guy walking into a review with
a boatload of *compiler* warnings is just wasting people's time!
If developers have access to those same tools, then due diligence would
suggest they run them on their code BEFORE "embarrassing themselves".
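E.g., a hypothetical fragment -- gcc -Wall -Wextra flags all three
issues below, and none of them should survive to a review:

#include <stdio.h>

int main(void)
{
    unsigned int count = 10U;
    int i;
    int total;                    /* never used: -Wunused-variable   */

    for (i = 0; i < count; i++)   /* int vs unsigned: -Wsign-compare */
    {
        printf("%ld\n", i);       /* %ld given an int: -Wformat      */
    }
    return 0;
}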
I think the takeaway has to be that there is no "perfect" tool.
And, when you factor in coding styles, local culture, etc., you
really should come away thinking this is NOT a "checkoff item".
I suspect it may be "beyond your charter", but an interesting
exercise might be to show an "initial implementation", note
the number of faults found (manually or with tools) in contrast
with a refactored implementation (though refactored BEFORE the
analysis was done). The point being to show how coding styles
(designs?) can impact the quality of the code.
"Here's a huge piece of spaghetti code. Note the number of
errors... Now, the same (functionally) code written in a
better style..."
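Something like this, shrunk to absurdity (a made-up before/after;
the real demo would be a few hundred lines):

#include <stddef.h>

/* "Before": goto, no argument checking, everything mutable.
   Faults have places to hide; analyzers drown you in noise. */
int avg1(int *p, int n, int *out)
{
    int s, i, ok; s = 0; ok = 0; i = 0;
top:
    if (i < n) { s = s + p[i]; i = i + 1; goto top; }
    if (n != 0) { *out = s / n; ok = 1; }
    return ok;
}

/* "After": same (functional) code, refactored.  Nothing
   clever -- but reviewers AND tools can actually see it.    */
int avg2(const int *p, int n, int *out)
{
    int sum = 0;
    int i;

    if ((p == NULL) || (out == NULL) || (n <= 0)) {
        return 0;    /* failure */
    }
    for (i = 0; i < n; i++) {
        sum += p[i];
    }
    *out = sum / n;
    return 1;        /* success */
}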