Tools used to compare static analysis results


Thomas J Barnes

Jan 5, 2021, 2:07:20 PM
to samate-discuss
Hello. I've been reading the SATE V report, and I'd like to understand how the analysis was performed. For instance, when running analysis tools on Juliet, how do you analyze the results to determine that each tool has found the correct issue in the correct file on the correct line? Do you do this by manual inspection, or is there some sort of tool that automates the analysis process?

Thanks,

-Tom

Black, Paul E. (Fed)

Jan 5, 2021, 4:19:53 PM
to Thomas J Barnes, samate-discuss
Tom,

Thanks for asking. The short answer is that we used programs to do much of the matching. We're happy to share them, but they are custom programs written for internal use, not general-purpose tools.

Both the classic and the Ockham tracks used Juliet, but with different approaches, as shown in Sec. 2.2, Figure 2 (page 12). First, the classic track: for production software, we manually checked a randomly selected set of warnings (Sec. 3.1.2); for CVE-selected software, we used manually tuned programs to match (Sec. 3.2.2); for the synthetic (Juliet) cases, we also used programs (Sec. 3.3.2, especially Figure 4). Because tools (reasonably) differ in which CWEs they report and in the exact locations they point to, the programs used heuristics and approximate matching.
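As a rough illustration (this is not our actual code), the heart of such a matcher might look like the Python sketch below; the CWE groupings and the line tolerance here are made-up values for illustration:

    # Sketch of heuristic matching between a tool warning and a known flaw.
    # The CWE groups and line tolerance are illustrative, not SATE's values.

    CWE_GROUPS = [            # CWEs that tools often report interchangeably
        {121, 122, 787},      # stack/heap buffer overflow, out-of-bounds write
        {190, 191},           # integer overflow / underflow
    ]
    LINE_TOLERANCE = 3        # allow small differences in reported location

    def cwes_compatible(a, b):
        """True if two CWE ids are identical or fall in the same group."""
        return a == b or any(a in g and b in g for g in CWE_GROUPS)

    def warning_matches_flaw(warning, flaw):
        """Approximate match: same file, compatible CWE, nearby line."""
        return (warning["file"] == flaw["file"]
                and cwes_compatible(warning["cwe"], flaw["cwe"])
                and abs(warning["line"] - flaw["line"]) <= LINE_TOLERANCE)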

For the Ockham track, we also used custom programs (Secs. 5.2.3 and 5.2.4). Since there was only one tool (Frama-C) and one test suite (Juliet), we were more stringent about matching, which meant the programs had to be much more sophisticated.

The Juliet test cases in particular have a manifest file that (should) list the location and type of each error. (I say "should" because we know of mistakes and limitations in the manifest.) In theory, it is straightforward to extract that information from a tool's results and match it against the manifest.
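For a sense of that extraction step, here is a minimal Python sketch of reading a Juliet-style manifest.xml; the element and attribute names reflect the layout shipped with the suite as I recall it, so verify them against your copy:

    # Sketch: pull expected flaws out of a Juliet-style manifest.xml.
    # Verify the element/attribute names against your copy of the suite.
    import xml.etree.ElementTree as ET

    def load_manifest(path):
        """Return {file, line, cwe} records for every flaw in the manifest."""
        flaws = []
        for testcase in ET.parse(path).getroot().iter("testcase"):
            for f in testcase.iter("file"):
                for flaw in f.iter("flaw"):
                    flaws.append({
                        "file": f.get("path"),
                        "line": int(flaw.get("line")),
                        # names look like "CWE-121: Stack Based Buffer Overflow"
                        "cwe": int(flaw.get("name").split(":")[0].split("-")[1]),
                    })
        return flaws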
Making matching easier is one reason we are moving to SARIF.
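With SARIF, the same three fields come out of a documented JSON structure. A sketch (it assumes the tool encodes its CWE or rule in ruleId, which varies from tool to tool):

    # Sketch: extract (file, line, rule) findings from a SARIF 2.1.0 log.
    # Assumes the CWE or rule is encoded in ruleId, which varies by tool.
    import json

    def load_sarif(path):
        findings = []
        with open(path) as fp:
            log = json.load(fp)
        for run in log.get("runs", []):
            for result in run.get("results", []):
                for loc in result.get("locations", []):
                    phys = loc.get("physicalLocation", {})
                    findings.append({
                        "file": phys.get("artifactLocation", {}).get("uri"),
                        "line": phys.get("region", {}).get("startLine"),
                        "rule": result.get("ruleId"),
                    })
        return findings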

We'd be happy to talk with you further.

By the way, the SATE V report is at https://doi.org/10.6028/NIST.SP.500-326

-paul-
Paul E. Black
paul....@nist.gov
voice: +1 301 975-4794   fax: +1 301 975-6097
100 Bureau Drive, Stop 8970
Gaithersburg, Maryland 20899-8970
http://hissa.nist.gov/~black/


Delaitre, Aurelien M. (Assoc)

Jan 5, 2021, 4:33:48 PM
to Thomas J Barnes, samate-discuss
Hi Thomas,

For the Juliet test suite in SATE V, we used an iterative process to ensure accuracy. First, a script matched the tool reports to the vulnerabilities in the test cases, based on CWE types and line numbers. Then we took a sample of the script's results and reviewed it manually to identify errors. Based on these findings, we modified the script and/or the test case metadata to fix the errors, and started the entire process again. In total, it took six cycles to reach an accuracy above 98%. Sections 3.3.2 and 3.3.3 of the SATE V report explain this in a bit more detail.
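A minimal sketch of the sampling step in each cycle (the record format and sample size here are illustrative assumptions, not what we actually used):

    # Sketch of one review cycle: draw a random sample of the script's
    # match/no-match verdicts and measure agreement with manual review.
    # The record format and sample size are illustrative assumptions.
    import random

    def draw_review_sample(script_verdicts, sample_size=100):
        """Pick warning ids for manual review from {warning_id: bool}."""
        return random.sample(sorted(script_verdicts), sample_size)

    def agreement_rate(sample, script_verdicts, manual_verdicts):
        """Fraction of sampled ids where the script agrees with the reviewer."""
        agree = sum(script_verdicts[w] == manual_verdicts[w] for w in sample)
        return agree / len(sample)

    # Fix the script or the test case metadata, re-run, and re-sample
    # until the agreement rate exceeds the target (above 0.98 in our case).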

Best regards,

  Aurelien Delaitre
  Software and Systems Division
  National Institute of Standards and Technology
  +1 (301) 975-3296
