Static analysis tool?

Dave Nadler

Apr 16, 2021, 3:24:22 PM
Perhaps someone here can help...

I'm doing a presentation on techniques for embedded, especially removing
and keeping out bugs ;-) Using an example project from last year. A
reviewer of my first draft suggested that many of the bugs that surfaced
in the project would have been caught by static analysis - but I haven't had
such great luck in the past.

Tried CPPcheck, and while it found some less-than-optimal stuff it only
found one of the real bugs discussed.

Tried to get an evaluation copy of Coverity, but got a wildly annoying
and clueless sales person who promises a member of the right team will
contact me shortly (Real Soon Now).

Bugs I had to fix that were amenable to static analysis included (two of
them are sketched just below):
- uninitialized variable (only one found by CPPcheck)
- use of magic 0xff index value as subscript off end of array
- C macro with unguarded arguments getting wrong answer
- use of int8 to index 1kb buffer (so only 256 bytes got used)
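For illustration only (made-up code, nothing from the actual project), the
sentinel-index and macro bugs look roughly like this:

    #include <stdint.h>

    #define NOT_FOUND  0xFF           /* sentinel that doubles as an index       */
    #define SCALE(x)   x * 10         /* unguarded macro argument                */

    static int16_t table[16];

    int16_t lookup(uint8_t slot)
    {
        return table[slot];           /* slot == NOT_FOUND reads past the end    */
    }

    int scaled(int a, int b)
    {
        return SCALE(a + b);          /* expands to a + b * 10, not (a + b) * 10 */
    }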

Anybody able to recommend a tool they've used successfully?
Thanks in advance,
Best Regards, Dave

Helmut Giese

Apr 16, 2021, 3:44:19 PM
Dave Nadler <d...@nadler.com> wrote:
Hi Dave,
in the (long gone) past I used a commercial tool called 'Lint' by a
company called 'Gimpel'. Its main problem is its extremely sharp eyes:
it will tell you about anything that might possibly be wrong - and you will
be surprised how much of your code falls into this category.
However, by way of a so-called 'Lint file' you can configure Lint to
suppress all kinds of warnings: if you say 'This is my coding style,
and I know what I am doing', you can suppress many of the (for you)
useless messages.
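For example, a project-wide options file might look roughly like this (the
message numbers and exact syntax here are from memory and purely
illustrative):

    // project.lnt -- handed to Lint on the command line
    -e534    // "ignoring return value of function": our style discards some returns
    -e737    // "loss of sign in promotion": accepted in this codebase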
I think I have heard of open source Lint programs but know nothing of
the quality of their results.
HTH
Helmut

Don Y

Apr 16, 2021, 3:56:33 PM
On 4/16/2021 12:24 PM, Dave Nadler wrote:
> Perhaps someone here can help...
>
> I'm doing a presentation on techniques for embedded, especially removing and
> keeping out bugs ;-) Using an example project from last year. A reviewer of my
> first draft suggested many of the bugs surfaced in the project would have been
> caught by static analysis - but I haven't had such great luck in the past.

Presumably, C.

> Tried CPPcheck, and while it found some less-than-optimal stuff it only found
> one of the real bugs discussed.
>
> Tried to get an evaluation copy of Coverity, but got a wildly annoying and
> clueless sales person who promises a member of the right team will contact me
> shortly (Real Soon Now).
>
> Bugs I had to fix and amenable to static analysis included:
> - uninitialized variable (only one found by CPPcheck)
> - use of magic 0xff index value as subscript off end of array
> - C macro with unguarded arguments getting wrong answer
> - use of int8 to index 1kb buffer (so only 256 bytes got used)
>
> Anybody able to recommend a tool they've used successfully?

Coverity will require deep pockets/"high visibility" (they're out
to make money).

Eclipse includes some tools. Lint/PCLint are old standbys.
There are a few IDEs that include support for MISRA compliance
checking. PVS-Studio under Windows.

Note that what some folks would consider a bug might really
just be a coding style preference (e.g., multiple returns
from a function).

My approach has mimicked that implicit in code reviews: let lots
of eyes (in this case, tools) look at the code and then interpret
their reports. The more you veer from plain vanilla C, the more
you'll have to hand-hold the tool.

Dave Nadler

Apr 16, 2021, 4:26:38 PM
On 4/16/2021 3:56 PM, Don Y wrote:
> On 4/16/2021 12:24 PM, Dave Nadler wrote:
>> Perhaps someone here can help...
>>
>> I'm doing a presentation on techniques for embedded, especially
>> removing and keeping out bugs ;-) Using an example project from last
>> year. A reviewer of my first draft suggested many of the bugs surfaced
>> in the project would have been caught by static analysis - but I
>> haven't had such great luck in the past.
>
> Presumably, C.

Sorry, yes, C (and later C++).

>> Tried CPPcheck, and while it found some less-than-optimal stuff it
>> only found one of the real bugs discussed.
>>
>> Tried to get an evaluation copy of Coverity, but got a wildly annoying
>> and clueless sales person who promises a member of the right team will
>> contact me shortly (Real Soon Now).
>>
>> Bugs I had to fix and amenable to static analysis included:
>> - uninitialized variable (only one found by CPPcheck)
>> - use of magic 0xff index value as subscript off end of array
>> - C macro with unguarded arguments getting wrong answer
>> - use of int8 to index 1kb buffer (so only 256 bytes got used)
>>
>> Anybody able to recommend a tool they've used successfully?
>
> Coverity will require deep pockets/"high visibility" (they're out
> to make money).

Presumably they'd like a recommendation in a presentation that will be
seen by ~1k people. But at the current pace it's more likely they will get a
dis-recommendation. The sales person just emailed me an incorrect summary of
my requirements even though I repeated them at least 3 times. Yikes.

> Eclipse includes some tools.  Lint/PCLint are old standbys.

Haven't found anything that works with current Eclipse.
For this one I'm actually looking for a stand-alone tool.

> There are a few IDEs that include support for MISRA compliance
> checking.  PVS-Studio under Windows.

These bugs would probably pass any MISRA checker.
As do hundreds of bugs I've seen in the past few years.
But hey, MISRA is a religion.

> Note that what some folks would consider a bug might really
> just be a coding style preference (e.g., multiple returns
> from a function)
>
> My approach has mimicked that implicit in code reviews:  let lots
> of eyes (in this case, tools) look at the code and then interpret
> their reports.  The more you veer from plain vanilla C, the more
> you'll have to hand-hold the tool.

The presentation emphasizes actual human code reviews, but one of the
early reviewers suggested static analysis, so I thought I'd give it a try...

Thanks Don!

Don Y

Apr 16, 2021, 4:47:54 PM
On 4/16/2021 1:26 PM, Dave Nadler wrote:
> On 4/16/2021 3:56 PM, Don Y wrote:
>> On 4/16/2021 12:24 PM, Dave Nadler wrote:
>>> Anybody able to recommend a tool they've used successfully?
>>
>> Coverity will require deep pockets/"high visibility" (they're out
>> to make money).
>
> Presumably they'd like a recommendation in a presentation that will be seen by
> ~1k people. But at the current pace more likely they will get a
> dis-recommendation. Sales person just emailed me an incorrect summary of my
> requirements though I repeated them at least 3 times, Yikes.

Ahhh, gwasshoppa... your mistake is assuming competence!

IIRC, NetBSD (or maybe FreeBSD?) is using Coverity to analyze their codebase
(perhaps just the core system -- kernel + userland).

>> Eclipse includes some tools. Lint/PCLint are old standbys.
>
> Haven't found anything that works with current Eclipse.
> For this one I'm actually looking for stand-alone tool.
>
>> There are a few IDEs that include support for MISRA compliance
>> checking. PVS-Studio under Windows.
>
> These bugs would probably pass any MISRA checker.
> As do hundreds of bugs I've seen in the past few years.
> But hey, MISRA is a religion.

... and, as with all religions... <frown>

>> Note that what some folks would consider a bug might really
>> just be a coding style preference (e.g., multiple returns
>> from a function)
>>
>> My approach has mimicked that implicit in code reviews: let lots
>> of eyes (in this case, tools) look at the code and then interpret
>> their reports. The more you veer from plain vanilla C, the more
> you'll have to hand-hold the tool.
>
> The presentation emphasizes actual human code reviews, but one of the early
> reviewers suggested static analysis, so I thought I'd give it a try...

You might suggest/pitch the use of whatever tools are available,
RUN ON THE CODEBASE BEFORE THE CODE REVIEW. The point is not
to find all of the problems but, rather, to "bias" (bad choice of
word) the reviewers as they undertake their active review of the code.

I.e., the amount of low-hanging fruit can prime folks to
step up (or down!) their game. A guy walking into a review with
a boatload of *compiler* warnings is just wasting people's time!

If developers have access to those same tools, then due diligence would
suggest they run them on their code BEFORE "embarrassing themselves".

I think the takeaway has to be that there is no "perfect" tool.
And, when you factor in coding styles, local culture, etc. you
really should come away thinking this is NOT a "checkoff item".

I suspect it may be "beyond your charter" but an interesting
exercise might be to show an "initial implementation", note
the number of faults found (manually or with tools) in contrast
with a refactored implementation (though refactored BEFORE the
analysis was done). The point being to show how coding styles
(designs?) can impact the quality of the code.

"Here's a huge piece of spaghetti code. Note the number of
errors... Now, the same (functionally) code written in a
better style..."

Don Y

Apr 16, 2021, 4:54:34 PM
On 4/16/2021 1:47 PM, Don Y wrote:

> I suspect it may be "beyond your charter" but an interesting
> exercise might be to show an "initial implementation", note
> the number of faults found (manually or with tools) in contrast
> with a refactored implementation (though refactored BEFORE the
> analysis was done). The point being to show how coding styles
> (designs?) can impact the quality of the code.
>
> "Here's a huge piece of spaghetti code. Note the number of
> errors... Now, the same (functionally) code written in a
> better style..."

You can also suggest mechanizing testing -- and the consequences it
has on how you structure the code so that it can be tested in that
way. This is particularly helpful in embedded design where it's
too easy to write code that *requires* hardware to run properly
(yet, chances are, many of the algorithms could be tested without
that hardware if the hardware dependencies were isolated).

I write most of my OSs and drivers so that I can run them in an
interpreter or simulator. Catch the obvious bugs where it's easy
to do so (running at DC).
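A minimal sketch of that kind of isolation (hypothetical names, nothing
product-specific): the logic only ever sees a small interface, so on target
it is wired to the real driver and on a host it links against a fake.

    #include <stdint.h>

    /* The control logic depends on this interface, never on the hardware itself. */
    typedef struct {
        uint16_t (*read_adc)(void *ctx);
        void     (*set_pwm)(void *ctx, uint16_t duty);
        void     *ctx;
    } plant_io;

    /* Pure logic: compiles and runs on a host against a fake plant_io. */
    void control_step(const plant_io *io, uint16_t setpoint)
    {
        uint16_t sample = io->read_adc(io->ctx);
        uint16_t duty   = (sample < setpoint) ? 0xFFFFu : 0u;   /* trivial bang-bang */
        io->set_pwm(io->ctx, duty);
    }

In a host test, read_adc can replay canned samples and set_pwm can record
its argument for assertions; on target the same struct points at the real
driver functions.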

Gerhard Hoffmann

Apr 16, 2021, 4:54:48 PM
On 16.04.21 at 21:56, Don Y wrote:

>> Anybody able to recommend a tool they've used successfully?
>
> Coverity will require deep pockets/"high visibility" (they're out
> to make money).

Friends of mine were quite happy with it for verifying our 10 M lines
of wafer-tester control software.
IIRC it is free to use on open source software.

cheers, Gerhard

Don Y

Apr 16, 2021, 5:20:31 PM
On 4/16/2021 1:54 PM, Gerhard Hoffmann wrote:
> Am 16.04.21 um 21:56 schrieb Don Y:
>
>>> Anybody able to recommend a tool they've used successfully?
>>
>> Coverity will require deep pockets/"high visibility" (they're out
>> to make money).
>
> Friends of mine were quite happy with it verifying our 10 M lines
> of code wafertester control software.

Oh, it's a wonderful tool! But, for most small organizations,
it's likely seen as an "unnecessary expense".

> IIRC it is free to use on open source software.

They have a cloud-based service that will analyze your codebase.
I'm not sure how willing most firms will be to upload the "family
jewels"...

If you're an *established* open-source project, you'll get
more attention (hence NetBSD). But, you'll still play on their
terms.

Paul Rubin

Apr 17, 2021, 4:04:51 AM
Dave Nadler <d...@nadler.com> writes:
> Anybody able to recommend a tool they've used successfully?

I haven't personally used it, but Frama-C is well regarded for this.

This isn't specifically about static analysis, but you might find
it interesting:

https://dwheeler.com/essays/high-assurance-floss.html

Also, Dawson Engler's site http://web.stanford.edu/~engler has lots of
stuff about static analysis (he is the founder or a co-founder of
Coverity).

Finally, ask yourself why you are still using C in this day and age at
all, if correctness is critical. Ada, Rust, or even C++ can keep you
out of a lot of trouble.

Stefan Reuther

Apr 17, 2021, 4:32:14 AM
On 16.04.2021 at 21:24, Dave Nadler wrote:
> Tried CPPcheck, and while it found some less-than-optimal stuff it only
> found one of the real bugs discussed.
>
> Tried to get an evaluation copy of Coverity, but got a wildly annoying
> and clueless sales person who promises a member of the right team will
> contact me shortly (Real Soon Now).
>
> Bugs I had to fix and amenable to static analysis included:
> - uninitialized variable (only one found by CPPcheck)
> - use of magic 0xff index value as subscript off end of array
> - C macro with unguarded arguments getting wrong answer
> - use of int8 to index 1kb buffer (so only 256 bytes got used)
>
> Anybody able to recommend a tool they've used successfully?

First and foremost: if you've been sloppy for decades, throwing ANY tool
at the existing codebase will bury you in "findings" and therefore not
be useful. So, you WILL have to dumb down the tool to make it usable. If
you're paying big money for the tool, it'll be hard to predict
whether you'll end up using 90% or 10% of it.

That aside, I consider the free tools good enough. The bang-for-the-buck
ratio is hard to beat:

Recent gcc and clang versions have learned to detect a lot of things that
previously only specialized tools would find (e.g. switch/case
fallthrough). Just turn on the bulk of them for a first test (-O2 -Wall
-Wextra), and turn on even more stuff later. For example, we're using
-Wconversion, which can be really annoying but has found (or would have
found) a few real bugs in our codebase. Of course this will bury you in
warnings if you haven't tried it before.
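A tiny made-up fragment of the kind of thing these flags catch; it happens
to match the int8-index bug mentioned earlier in the thread:

    /* gcc -O2 -Wall -Wextra -Wconversion narrow.c */
    #include <stdint.h>

    static uint8_t buf[1024];

    void fill(int start)
    {
        uint8_t i = start;              /* -Wconversion: int may not fit in uint8_t */
        for (; i < sizeof buf; i++) {   /* i can never reach 1024, so this loop      */
            buf[i] = 0u;                /* never terminates; -Wextra warns about the */
        }                               /* always-true comparison                    */
    }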

cppcheck has the advantage of knowing some APIs, i.e. it will find some
file descriptor leaks. On the downside, it has some annoying false
positives (e.g. in C++11, it will flag every other method of a local
class if one method is using std::move on a member).
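A contrived example of that API awareness: cppcheck knows fopen and fclose
pair up, so it reports a resource leak on the early return here (the
function itself is made up):

    #include <stdio.h>

    int first_byte(const char *path)
    {
        FILE *f = fopen(path, "r");
        if (f == NULL)
            return -1;
        int c = fgetc(f);
        if (c == EOF)
            return -1;          /* leak: f is never closed on this path */
        fclose(f);
        return c;
    }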

Now I've also used Klocwork (which would be in the same league as
Coverity), but consider its gain minimal. In our codebase, it produces
lots of false positives, some of which lead to a sore forehead due to
excessive facepalming. One annoying example is its attempt to implement
the MISRA pseudo-type system, where '1 << 10' is an error because '1'
has type uint8_t for MISRA. The problem is, it considers 'UINT32_C(1)' to
have type uint8_t as well. Another annoying example is that whenever I
call a class method 'open', it expects a call to 'close' somewhere or
flags a resource leak. And sometimes it gets lost inside the STL and tells
me that a standard class has a leak; at least you won't have that problem
when checking only C :) On the plus side, it has a database with which you
can silence a given finding for the future.
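Roughly the pattern, as a made-up fragment; the comments paraphrase what the
tool reportedly says, not its exact output:

    #include <stdint.h>

    #define FLAG      (1 << 10)            /* flagged: '1' counts as an 8-bit constant
                                              in the MISRA type model, and the shift
                                              exceeds that width                      */
    #define FLAG_FIX  (UINT32_C(1) << 10)  /* the intended fix, but the tool still
                                              treats the constant as 8-bit            */

    static uint32_t status = FLAG_FIX;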


Stefan

Michael Kellett

Apr 17, 2021, 5:45:45 AM
I use PC-Lint Version 9, not the latest, but Gimpel now only sells site
licenses, at rather high cost if you only need one. It's still a cheapish
solution.
Ristan Case is nice but has not been updated for years and is not likely
to be.
Lint + MISRA is a bit like doing a code review with a colleague who
objects to pretty much everything you do. For me, its main virtue is in
forcing you to think about stuff a little more.
On the current project (where I'm using these tools) it has certainly
caught a few bugs, but far more instances of stuff that can (and
should) be expressed more clearly or simply.
The problem with working in a "MISRA compliant" environment is that
slavish obedience is required, which is often daft. The documentation
burden of dealing with exceptions can become large.

On balance I think it improves my code. (And since no code reviews
ever happen on the current project, it's all I've got :-( )

All the above applies to C only.

MK


David Brown

Apr 17, 2021, 6:51:14 AM
Your first step should always be the compiler - a good compiler, with
optimisation enabled (that's essential) and lots of warnings, will pick
up many such things. It won't do everything, but it is certainly a good
start. gcc in particular has got better and better at this over time -
I have found bugs in code (other people's code, of course :-) ) after
switching to a newer gcc and enabling more warnings.

Actually, that is perhaps the /second/ step. The first step is to adopt
good coding practices (and perhaps a formal standard) that make it
easier to avoid writing the bugs in the first place, and to spot them
when they are made accidentally. For example, a strong emphasis on
static inline functions rather than macros means your risk of macro
problems drops dramatically - and a coding standard that insists on
always guarding the arguments means you don't get the macro bug you
mentioned above.
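A sketch of the difference (a made-up macro, not the one from the original
project):

    /* Unguarded: SQUARE_BAD(a + 1) expands to a + 1 * a + 1, the wrong answer. */
    #define SQUARE_BAD(x)  x * x

    /* Guarded: every use of the argument, and the whole body, is parenthesized. */
    #define SQUARE(x)      ((x) * (x))

    /* Usually better still: a static inline function. The argument is evaluated
       exactly once and the compiler type-checks it. */
    static inline int square(int x) { return x * x; }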


Dave Nadler

Apr 17, 2021, 10:34:58 AM
Thanks all for the comments. I should have explained that this project came
from elsewhere; it landed in my lap to add a minor feature, which resulted
in needing to do lots of debugging of existing problems. I even rewrote part
of it in C++ ;-) The project is proprietary, so Coverity Scan is not
applicable as that's only for FOSS. Only 5 (maybe 6?) of the top dozen
bugs COULD have been found by static analysis, but certainly that would
have been helpful.

If anybody has an hour and would be interested in reviewing the first
draft video of the presentation, PM me - I can always use some
constructive comments and suggestions!

Thanks again,
Best Regards, Dave

Dave Nadler

May 2, 2021, 11:41:20 AM
Further follow-up: Never heard back from Coverity (as expected).
Tried Perforce Klocwork and got a very perky and slightly less annoying
sales person who promised prompt follow-up, and as usual none was
forthcoming.

Any other static analysis tools you folks can suggest?

Don Y

May 2, 2021, 11:49:36 AM
You're not going to find anything of the same caliber as Klocwork/Coverity
in the "discount/FOSS" aisle.

But, as I said elsewhere, with enough (machine) "eyes" looking at your code,
you may eke out some insights that would evade a normal review.

Look at PVS-Studio. ConQAT won't necessarily give you the sorts of flags
that you're likely expecting from a static analysis tool, but it can help with
things like clone detection (more "smells" than actual "problems").

[Of course, there are other tools that do similar things]
