Thanks for asking. The short answer is that we used programs to do much of the matching. We are happy to share them, but they are custom programs written for internal use, not general tools.
Both the classic and the Ockham tracks used Juliet, but they took different approaches, as shown in Sec. 2.2, Figure 2 (page 12). First, the classic track: for production software, we manually checked a randomly selected set of warnings (Sec. 3.1.2). For CVE-selected software, we used manually tuned programs to match (Sec. 3.2.2). For the synthetic (Juliet) cases, we also used programs (Sec. 3.3.2, especially Figure 4). Because of (reasonable) differences in tools' use of CWEs and reported locations, the programs relied on heuristics and approximate matching.
For the Ockham track, we also used custom programs (Secs. 5.2.3 and 5.2.4). Since there was only one tool (Frama-C) and one test set (Juliet), we could be more stringent about matching. This meant the programs had to be much more sophisticated.
The Juliet test cases in particular have a manifest file that (should) list the location and type of each error. (I say "should" because we know of mistakes and limitations in the manifest.) In theory, it is straightforward to extract that information from a tool's results and match it against the manifest.
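To give a flavor of what such matching looks like, here is a minimal sketch in Python. It is not the program we used; the record shape, field names, and line tolerance are all illustrative assumptions, but the idea (same file, same CWE, reported line close to the manifest line) is the kind of approximate matching described above.

```python
# Hypothetical sketch of matching tool warnings against a Juliet-style
# manifest. Field names and the line tolerance are illustrative only.
from dataclasses import dataclass

@dataclass(frozen=True)
class Finding:
    path: str   # source file
    line: int   # reported line number
    cwe: int    # CWE identifier

def matches(expected: Finding, reported: Finding, line_tol: int = 2) -> bool:
    """Approximate match: same file, same CWE, line within a tolerance."""
    return (expected.path == reported.path
            and expected.cwe == reported.cwe
            and abs(expected.line - reported.line) <= line_tol)

def score(manifest: list[Finding], tool: list[Finding]) -> tuple[int, int]:
    """Count true positives and misses, pairing each expected flaw with
    at most one tool warning (greedy, first match wins)."""
    unused = list(tool)
    true_pos = 0
    for expected in manifest:
        hit = next((w for w in unused if matches(expected, w)), None)
        if hit is not None:
            unused.remove(hit)
            true_pos += 1
    return true_pos, len(manifest) - true_pos
```

A real matcher has to handle much more, e.g. tools that report a related CWE rather than the exact one, or that flag the sink instead of the source of a flaw.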
Making matching easier is one reason we are moving to using SARIF.
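SARIF helps because every tool's results arrive in one JSON schema, so the extraction step becomes uniform. As a rough illustration (not our code), pulling (rule, file, line) triples out of a SARIF 2.1.0 log can be as simple as:

```python
import json

def sarif_findings(text: str):
    """Yield (ruleId, file URI, start line) from a SARIF 2.1.0 log.
    Only the common fields are read; optional fields yield None."""
    log = json.loads(text)
    for run in log.get("runs", []):
        for result in run.get("results", []):
            rule = result.get("ruleId")
            for loc in result.get("locations", []):
                phys = loc.get("physicalLocation", {})
                uri = phys.get("artifactLocation", {}).get("uri")
                line = phys.get("region", {}).get("startLine")
                yield rule, uri, line
```

With every tool emitting the same structure, the per-tool parsing that made our matchers custom largely goes away.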
We would be happy to talk with you further.
By the way, the SATE V report is at https://doi.org/10.6028/NIST.SP.500-326
-paul-
Paul E. Black 100 Bureau Drive, Stop 8970
paul....@nist.gov Gaithersburg, Maryland 20899-8970
voice: +1 301 975-4794 fax: +1 301 975-6097
http://hissa.nist.gov/~black/
________________________________________
From: samate-...@list.nist.gov <samate-...@list.nist.gov> on behalf of Thomas J Barnes <thomas....@intel.com>
Sent: Tuesday, January 5, 2021 2:07 PM
To: samate-discuss
Subject: [samate-discuss] Tools used to compare static analysis results
Hello. I've been reading the SATE V report, and I'm interested in understanding how the analysis is performed. For instance, when running analysis tools on Juliet, how do you analyze the results to determine that each tool has found the correct issue in the correct file on the correct line? Are you doing this using manual inspection, or is there some sort of tool available that allows you to automate the analysis process?
Thanks,
-Tom