Besides Coverity, we run a clang plugin for style violations, and you
can run the clang analyzer yourself:
http://code.google.com/p/chromium/wiki/ClangStaticAnalyzer (I don't
know why we don't run it regularly, but I'd assume it's the
false-positive management. Also, on the Mac I haven't had luck getting
it to run yet - it fails to build a local clang, for some reason. If
you know why, or can figure out how to make it go, please update the
wiki page ;)
The folks who make PVS-Studio occasionally run it over Chromium and
share some of their findings, but that's all the analysis I'm aware
of.
Rachel
> --
> Chromium Developers mailing list: chromi...@chromium.org
> View archives, change email options, or unsubscribe:
> http://groups.google.com/a/chromium.org/group/chromium-dev
/analyze is a great tool - if you happen to have an X360 compiler.
AFAIK, on the PC you need to buy the high-end version of Visual
Studio, which ends up costing several thousand dollars per seat...
In VS, you'd have to annotate the function, i.e.:

#include <CodeAnalysis\SourceAnnotations.h>
using namespace vc_attributes;

bool Time::FromString([Pre(Null=No)] const char* time_string,
                      [Pre(Null=No)] Time* parsed_time) {
  ...
}
Ideally, we could annotate functions in a portable way, so that we
don't litter the code with annotations for every analysis system we
might be using :) I assume we could come up with a NOT_NULL macro that
expresses a not-null precondition - that would work portably.
Rachel
We have done that for Coverity. It might be possible for /analyze,
too. Somewhere in DCHECK, you need to add an __analysis_assume.
void DCHECK(cond) {
  if (!cond)
    LogAndExplode();
  __analysis_assume(cond);
}
I have no idea if doing that deep down in the guts of DCHECK will
actually bubble up to the actual function. For Coverity runs, we're
using a simplified DCHECK macro that does nothing but say "assume this
is true from here on".
Rachel
-Alexei
http://code.google.com/p/chromium/issues/list?can=1&q=Stability%3DValgrind+OR+Stability%3DAddressSanitizer+OR+Stability%3DThreadSanitizer+OR+Stability%3DDrMemory+OR+Stability%3DHeapChecker
-> gives 2k bugs out of ~100k bugs total :)
And that doesn't count the bugs filed before the Stability-X labels
were introduced...
Timur Iskhodzhanov,
Google Russia
That is the assumption. The docs actually _specifically_ say to not
handle conditions spelled out by DCHECK.
What we do want to do is probably specify a pre-condition for DCHECK,
saying the argument MUST be true. And possibly a post condition saying
that even if it wasn't true before, it WILL be true now. (To avoid a
slew of dependent errors if the pre-condition is violated. Not sure if
/analyze is smart enough to do that on its own)
> This DCHECK issue is just one example of the larger trade-off between
> annotations and analysis precision. To get the full benefit out of /analyze
> we'd probably have to add a lot of SAL annotations.
Again, if we head down that route, we should figure out how to best
annotate in both a cross-platform and cross-analysis tool way. I
_really_ would prefer not spelling out things for every single tool
separately :)
Rachel
> Since I'm part of the folk who look at Coverity - no, that wasn't
> found. (Off to file more Coverity bugs. They must hate me by now :)
I'm surprised Coverity didn't find it, because GCC can find it with -Wall:
$ c++ -o t -Wall -Werror t.cc
cc1plus: warnings being treated as errors
t.cc: In function ‘int main()’:
t.cc:5: error: suggest parentheses around comparison in operand of ‘&’
$ cat t.cc
int main(void) {
  int x = 0;
  int y = 1;
  return x & y != 0;
}
Don't we use -Werror on GCC(-like) platforms? How did this ever build?
Rachel
> It builds by not using gcc :) It's Windows code, so the compiler is
> VS.NET. And I assume we don't crank the warnings up all the way
> because we're using ATL/Windows headers - Win folks, can you confirm?
I had assumed that since GCC can find it, a commercial compiler could,
and that since a commercial compiler could, surely an expensive and
specialized static analysis tool could too. But I didn't know that we
have to turn warnings down on VS — do ATL and/or Windows headers
really not cleanly pass VS' own warnings? That is sad...
My general feeling about static analysis is: I read the Cousots as
cautioning us not to hope for too much, not as cheerleading for SA.
Although I'm a firm believer in compiler warnings and I turn as many
of them on as I can, that's kind of the best we can expect on a
day-to-day, practical basis.
Since we care about quality at runtime, runtime checks, dynamic
analysis (ASAN!), and heavy unit testing are more effective and a
better use of time IME. I'd much rather write unit tests to assert the
method's contract (as in Carlos' CL) than sift through 5,000 pages of
SA output. (A couple jobs ago, we used SA output to haze interns but
for no other purpose. Although one time the source code revealed
crashers in the SA product...)
Based on old experience. That's why I asked the folk who work on the
Win build to confirm. So take with a grain of salt.
> My general feeling about static analysis is: I read the Cousots as
> cautioning us not to hope for too much, not as cheerleading for SA.
Have you read Carmack's article? He found _significant_ bugs via SA
that had escaped all their prior analysis and testing.
> Although I'm a firm believer in compiler warnings and I turn as many
> of them on as I can, that's kind of the best we can expect on a
> day-to-day, practical basis.
Why? What's wrong with fixing SA bugs as they come up? We're currently
getting about 15 a day over the entire Chromium code base, including
boatloads of third-party stuff. If folk could fix them the moment they
came up, it wouldn't be that hard to at least stay at our current
level. Most are trivial fixes, but about 1 in 75 is a serious bug -
resource leak, memory overwrite, NULL deref.
> Since we care about quality at runtime, runtime checks, dynamic
> analysis (ASAN!), and heavy unit testing is more effective and a
> better use of time IME.
"Quality at runtime" is obviously determined by a static product, the
source. So running SA doesn't seem like a bad plan to me :) (Note - I
agree that we should keep runtime checks, ASAN, unit testing, etc.
too. Obviously. They find plenty of issues)
> I'd much rather write unit tests to assert the
> method's contract (as in Carlos' CL)
Unit tests can only assert postconditions, not preconditions (for the
function under test).
> than sift through 5,000 pages of
> SA output.
You're exaggerating :) There are significantly fewer than 5,000 issues
being detected by SA right now.
> (A couple jobs ago, we used SA output to haze interns but
> for no other purpose. Although one time the source code revealed
> crashers in the SA product...)
SA has improved a bit since then, I think :)
Rachel