[Owasp-topten] Thoughts on OWASP Top Ten and CVE, CWE Top 25, etc.

OWASP Korea Chapter

Mar 5, 2013, 9:57:06 PM
to owa...@googlegroups.com
Hi all,

Feel free to forward this message elsewhere.

I like all the conversations going on, but unfortunately I'm a bit too busy to participate as much as I'd like to.  Hopefully these comments will be informative.

Data gathering and publication - while I believe that mo' data = mo' better, I also appreciate that not everybody will want to publish.  I don't have any specific opinion on this, but there will be some important technical challenges.  Even if many organizations want to publish, there will be major problems trying to unify the data because of differences in how weaknesses (or attacks) are categorized.  For example, WHID, Veracode, and White Hat all classify things slightly differently.  There will also be a disconnect between attack-oriented stats and vuln-oriented stats, not to mention the bias of each individual contributor based on their techniques.  I believe it will be very difficult to synthesize this data at this point in time.  Just something for people to be aware of.  If people want to move forward on this, it might be worthwhile to investigate "meta-analysis" techniques used in the medical community, in which a researcher combines the research results of many different studies.  (The "meta-analysis" concept was mentioned by a Metricon participant last week, who has a background in evidence-based medicine and saw some potential applications in infosec.)
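
To make the meta-analysis idea a bit more concrete, here is a rough Python sketch of the simplest version (fixed-effect, inverse-variance pooling) applied to prevalence numbers.  The sources, sample sizes, and figures are all made up, and real data would first need each contributor's taxonomy normalized to a common one (e.g., CWE) before any pooling makes sense:

# Minimal sketch of fixed-effect (inverse-variance) meta-analysis pooling.
# All sources, prevalence figures, and sample sizes below are hypothetical.
from math import sqrt

# Hypothetical per-source estimates: proportion of tested apps with a given
# weakness, plus the sample size each estimate is based on.
estimates = [
    ("source_a", 0.32, 1200),   # (name, prevalence, sample size)
    ("source_b", 0.24,  450),
    ("source_c", 0.41,  300),
]

weighted_sum = 0.0
weight_total = 0.0
for name, p, n in estimates:
    var = p * (1 - p) / n       # variance of a proportion estimate
    w = 1.0 / var               # inverse-variance weight
    weighted_sum += w * p
    weight_total += w

pooled = weighted_sum / weight_total
stderr = sqrt(1.0 / weight_total)
print(f"pooled prevalence: {pooled:.3f} +/- {1.96 * stderr:.3f} (95% CI)")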

CVE - I agree that CVE is effectively a "bug parade," but CVE is much broader than web applications, which still seem to be the bulk of the OWASP focus.  I think Dave Wichers already said this, but just to confirm - I'm not comfortable submitting CVE data for this round, because we are not as complete as we were in previous years.  In addition, CVE has typically lagged behind what the webappsec consultants/thought-leaders/etc. have done, CSRF being the classic example of an issue that we all knew would be big, but really took a couple years to become widely reported.  For a more current example, in CVE, we are only just starting to see an increase in deserialization/mass-assignment and SSL certificate validation problems, even though these issues have been known for years.
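
For readers less familiar with the certificate validation issue mentioned above, here is a small illustrative Python sketch (not drawn from any particular CVE) contrasting a client that skips certificate and hostname checks with one that uses the verified defaults; the host name is just a placeholder.

# Illustrative sketch of the SSL certificate validation problem class.
# The endpoint is a placeholder, not a real target.
import socket
import ssl

HOST = "example.org"   # placeholder endpoint

# Vulnerable pattern: an unverified context performs no certificate or hostname
# checks, so a man-in-the-middle presenting any certificate is accepted.
insecure_ctx = ssl._create_unverified_context()

# Safer pattern: the default context verifies the chain and the hostname.
secure_ctx = ssl.create_default_context()

with socket.create_connection((HOST, 443)) as sock:
    with secure_ctx.wrap_socket(sock, server_hostname=HOST) as tls:
        print("negotiated", tls.version(), "with a verified certificate")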

Top 25 - For the last Top 25 we did in 2011, I used a simplified version of the Common Weakness Scoring System (CWSS) to capture votes from all participants.  At the time, few participants had solid quantitative data, so we had to make it survey-based.  That is, we held a "vote" in which each voter scored a set of CWEs based on their prevalence, technical impact, and likelihood of exploit.  This voting was a challenge in itself - even a single Top-25 voter could have been representing an ISV with many different development teams and product lines, and therefore different problems.  I believe that this diversity - within a single voter's organization, and across many organizations - helped to increase confidence in the Top 25 as a general-purpose list, although there were a couple of items that ranked higher than most would have expected.  Whether this was a bias in the votes, or a limitation of the CWSS scoring, is not entirely clear.  I would caution against trying too hard to make a quantitatively based, general-purpose Top-N list.  If people want to go in this direction, I suggest finding a statistician who can ferret out any major problems in the data.  (Betsy Nichols helped out a bit with the Top 25.)  For an alternative to this style of voting, I like how Jeremiah Grossman has been conducting the web hacking techniques voting in the past couple of years - in the final round, 15 items are presented, and each voter ranks them from 1 to 15.
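
In case it helps, here is a rough Python sketch of what the survey-style scoring looked like in spirit - this is not the actual CWSS formula, and the factor weights, scales, and ballots below are all hypothetical:

# Simplified sketch of survey-based Top-N scoring (not the actual CWSS formula).
# Each voter rates each CWE on the three factors named above; weights and
# ballots here are hypothetical.
from collections import defaultdict

WEIGHTS = {"prevalence": 1.0, "impact": 1.0, "likelihood": 1.0}  # equal weights assumed

# votes[voter][cwe] = factor ratings on a 1-3 scale
votes = {
    "voter_a": {"CWE-89": {"prevalence": 3, "impact": 3, "likelihood": 3},
                "CWE-79": {"prevalence": 3, "impact": 2, "likelihood": 3}},
    "voter_b": {"CWE-89": {"prevalence": 2, "impact": 3, "likelihood": 3},
                "CWE-79": {"prevalence": 3, "impact": 2, "likelihood": 2}},
}

scores = defaultdict(list)
for ballot in votes.values():
    for cwe, factors in ballot.items():
        scores[cwe].append(sum(WEIGHTS[f] * v for f, v in factors.items()))

def average(cwe):
    return sum(scores[cwe]) / len(scores[cwe])

# Average each CWE's per-voter scores and rank descending.
for rank, cwe in enumerate(sorted(scores, key=average, reverse=True), start=1):
    print(rank, cwe, round(average(cwe), 2))

The 1-to-15 ranking style mentioned above maps onto standard rank-aggregation methods (e.g., a Borda-style count), which sidesteps the question of how to weight the individual factors.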

Note that for the Top 25, we aren't planning on releasing a new, general-purpose list - at least for a while.  The CWE team is working on mechanisms for allowing an enterprise to create its own enterprise-specific Top-N list (specifically, CWRAF and CWSS).  Support for custom Top-N lists might be overkill for the OWASP Top Ten, but it's something to keep in mind.
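
To illustrate the general idea of an enterprise-specific list (this is only a toy sketch of the concept, not the actual CWRAF/CWSS mechanics, and all names and numbers are made up): a general-purpose score gets re-weighted by how much a particular enterprise cares about each technical impact, and the re-weighted scores produce the custom Top-N ordering.

# Toy sketch of a custom Top-N list: re-rank general-purpose CWE scores by
# enterprise-specific importance of each technical impact. Hypothetical data.
general_scores = {"CWE-89": 9.3, "CWE-79": 7.7, "CWE-352": 6.8}
primary_impact = {"CWE-89": "read-data", "CWE-79": "execute-code", "CWE-352": "modify-data"}

# How much this particular enterprise cares about each impact (0.0 - 1.0).
enterprise_weights = {"read-data": 1.0, "modify-data": 0.8, "execute-code": 0.4}

def enterprise_score(cwe):
    return general_scores[cwe] * enterprise_weights[primary_impact[cwe]]

for cwe in sorted(general_scores, key=enterprise_score, reverse=True):
    print(cwe, round(enterprise_score(cwe), 2))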

Top Ten Controls - I'd love to see this happen and am willing to pitch in a little bit here and there as time allows.  (I have an interest in classifying mitigations through the CWE work.)   I'll save my radical commentary for that forum ;-)

And last but not least, it seems to me that at least some of the main questions can be resolved by deciding, once and for all, what the real goals of the Top Ten are.  I can't say I've managed to do that perfectly well in any of the community-driven projects I support, because many different consumers have to be served at the same time, but that kind of discussion is often informative and simplifies at least some decisions.

- Steve

_______________________________________________
Owasp-topten mailing list
Owasp-...@lists.owasp.org
https://lists.owasp.org/mailman/listinfo/owasp-topten