Another thing to consider is the Venn-diagram nature of how people/projects/etc. group vulnerabilities. I don't just mean a cluster of CVEs that varies a bit by vendor: some vendors mint a unique identifier for each instance, some take a more CVE-style approach of "same vuln type, same versions, same fix, into the bucket you go!", and some just glue all sorts of random stuff together. The consumption side isn't uniform either, e.g. one project may consume it as an update, another might backport the specific fix, and another may simply slap a compensating control on it.
Having the ability to arbitrarily slice, dice, glue and otherwise mangle IDs into various subsets is very powerful. Most importantly, I think, it's way easier to join stuff together than it is to take it apart, which goes to the "many smaller files or one big one" discussion (rough sketch of what I mean below).
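To make that concrete, here's a quick sketch in Python, with a made-up file layout and field names ("id", "aliases"), not any real schema, of why joining is the cheap direction: combining lots of small per-ID files is a mechanical union, while splitting one big blob back apart needs someone to decide where one issue ends and the next begins.

import json
from pathlib import Path

def merge_records(record_dir: Path) -> dict:
    """Glue many small per-ID files into one combined view.

    Joining is mechanical: read each file, key it by its ID, and pull
    in any aliases so overlapping IDs land in the same bucket.
    """
    merged = {}
    for path in record_dir.glob("*.json"):
        record = json.loads(path.read_text())
        merged[record["id"]] = record               # hypothetical "id" field
        for alias in record.get("aliases", []):     # hypothetical "aliases" field
            merged.setdefault(alias, record)
    return merged

# e.g. combined = merge_records(Path("advisories/"))
# The reverse (splitting one big blob into per-issue records) is where
# the human judgment and the pain live.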
Also, it's easiest right now to simply decompose the advisories to what vendors provide (e.g. unique CVEs); as we build better systems that can parse the advisories, look at the commits and so on, we can decompose them further (e.g. to the commit level) for finer granularity. I can't imagine that, as time goes on and we get better at this, people will want "larger" blobs of data that they then have to decompose themselves if needed.
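As a rough illustration of that progression (placeholder IDs and field names again, not a real schema): a record that starts life as a coarse per-CVE entry can later grow commit-level detail, and consumers who only want the coarse view never have to take anything apart.

# Coarse record: one entry per vendor-provided ID (what we can do today).
coarse = {
    "id": "CVE-2021-0000",                 # placeholder ID
    "affected": ["pkg >= 1.0, < 1.4"],
}

# Finer-grained record: same ID, decomposed to the commit level once the
# tooling can parse advisories and walk repository history.
fine = {
    "id": "CVE-2021-0000",
    "affected": ["pkg >= 1.0, < 1.4"],
    "introduced_by": "abc123",             # hypothetical commit-level fields
    "fixed_by": "def456",
}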
-Kurt