Sonar dashboard improvements


Lonzak

Oct 25, 2017, 12:29:46 PM
to SonarQube
Hi,

As Ann Campbell suggested, we should continue the discussion here, since SO is not the place to actually discuss things...
My original question / remark was the following:

Looking at the Sonar dashboard, I thought everything was fine, because there were only (A) ratings for Bugs, Vulnerabilities and Code Smells/Debt.



Then I was surprised to find lots of high-priority Code Smell issues in the issue view:



I would have expected the following view - including a rating on Code Smells:



As a user, the current dashboard is a bit confusing, since for Bugs and Vulnerabilities the rating reflects the issue situation 1:1: a single Blocker makes the rating for either category "jump" to E (red) in the dashboard. For the Code Smells category this is different.
Part of the confusion could be cleared up by explaining that Debt is a ratio of Code Smell issues to the size of the code base (which is nice to have). However, I am then still missing the information that there are lots of high-priority issues in the Code Smell section.
I understand that bugs and vulnerabilities are a different category (higher urgency)...


Two issues with the current dashboard view:

  1. There are over 4,314 Code Smell issues (B, C, D, E) which are not represented in the dashboard (since Code Smells/Debt gets an (A)). The number 5.5K does not reflect that, since a user will think those are all "Info"-level code smells (like the 308 vulnerabilities).
  2. Our project has a huge number of DTO/Entity/Java Bean/Enum etc. classes which, due to their triviality, contain fewer issues; they dilute and distort the result of the debt rating.
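To make the dilution concrete, here is a small sketch of the effect (the line and minute counts are invented for illustration, not taken from the real project). It assumes SonarQube's documented definition of the technical debt ratio, remediation cost divided by estimated development cost, with the default development cost of 30 minutes per line of code:

```python
# Sketch of how trivial classes dilute the technical debt ratio.
# All numbers below are hypothetical; SonarQube estimates development
# cost from lines of code (by default, 30 minutes per line).

COST_PER_LINE_MIN = 30  # SonarQube's default "development cost" per LoC

def debt_ratio(remediation_min: float, lines_of_code: int) -> float:
    """Technical debt ratio = remediation cost / estimated development cost."""
    return remediation_min / (lines_of_code * COST_PER_LINE_MIN)

# Core code alone: 20,000 LoC carrying 40,000 min of debt -> ratio ~6.7%
core = debt_ratio(40_000, 20_000)

# Add 80,000 LoC of trivial DTOs/enums contributing almost no debt:
# nearly the same absolute debt now looks like ~1.4%
diluted = debt_ratio(40_000 + 1_000, 20_000 + 80_000)

print(f"core only: {core:.1%}, with trivial classes: {diluted:.1%}")
```

The absolute number of serious findings is unchanged, but the rating computed from the ratio improves, which is exactly the distortion described above.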

So, to make a long story short: it would be nice to reflect the higher-rated Code Smell issues somewhere in the dashboard, too...

Thanks!


Colin Mueller

Oct 25, 2017, 10:07:32 PM
to SonarQube
I would concur that it would be useful for the issue severity of Code Smells to have a greater impact on, or presence in, maintainability-related metrics.

G. Ann Campbell

Oct 26, 2017, 9:20:22 AM
to SonarQube
Hi Guys,

Technical debt and Code Smell count both belong to the Maintainability domain, and I can tell you right now there will not be two ratings for that domain. The fact that Reliability and Security share a card on the project homepage is a UI design flaw we just haven't gotten to yet. So you shouldn't draw any conclusions from the fact that there are two ratings in the top card and only one in the second. That doesn't mean there's "room" for a second rating in the Maintainability domain.

So... that shifts the question to how the Maintainability rating should be calculated. By putting it on the same scale as the Reliability rating (1 Blocker = E), you're saying that a bad unit test (one of the Blocker-level Code Smell rules) deserves the same urgent treatment as a resource leak or a runtime error that will take down the application. I don't think that's true.

But that doesn't make your concerns invalid, so let's look at what you could do to address them:

Concern) From my project homepage, I'd never know that there are Blocker Code Smells
Action) If you add an Error condition to your quality gate on Blocker Issues > 0 (sorry, you can't narrow this to Blocker Code Smells) then the presence of any blockers will mean your project fails the quality gate with a prominent notification as to why
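For anyone wanting to script that quality gate condition rather than clicking through the UI, a sketch against SonarQube's web API follows. The endpoint name and parameters match the 6.x-era `api/qualitygates/create_condition` service, but the gate id, server URL and credentials are placeholders; check `api/webservices` on your own instance before relying on them.

```python
# Sketch: add an Error condition "Blocker Issues > 0" to a quality gate
# via SonarQube's web API. The gate id (1) and credentials below are
# placeholders, not values from this thread.

def blocker_condition(gate_id: int) -> dict:
    """Parameters for POST api/qualitygates/create_condition."""
    return {
        "gateId": gate_id,
        "metric": "blocker_violations",  # count of Blocker issues, all types
        "op": "GT",                      # fail when the value is greater than...
        "error": "0",                    # ...zero
    }

# Usage (requires the `requests` package and admin rights):
#   import requests
#   requests.post("https://sonar.example.com/api/qualitygates/create_condition",
#                 params=blocker_condition(1), auth=("admin", "admin"))
print(blocker_condition(1))
```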

Concern) We have a lot of trivial classes under analysis, and their large aggregate LoC count throws off the Maintainability Rating calculation.
Action) If these classes are generated, I'd exclude them completely from analysis. If they're hand-coded (no matter how trivial) I would retain them.
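If the classes do turn out to be generated, the exclusion is a one-line analysis property. The path patterns below are only examples and need to be adjusted to wherever the generator actually writes its output:

```properties
# sonar-project.properties -- exclude generated sources from analysis
# (example patterns; adjust to your project's layout)
sonar.exclusions=**/generated/**/*.java,**/target/generated-sources/**
```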


HTH,
Ann

Colin Mueller

Oct 27, 2017, 10:59:10 PM
to SonarQube
> So... that shifts the question to how the Maintainability rating should be calculated. By putting it on the same scale as the Reliability rating (1 Blocker = E), you're saying that a bad unit test (one of the Blocker-level Code Smell rules) deserves the same urgent treatment as a resource leak or a runtime error that will take down the application. I don't think that's true.

I'm inclined to think that the probability that a code smell may lead a developer to introduce a bug can be very important to some development teams and organizations, particularly those undergoing periods of rapid development, change, or heightened awareness (as is common in my organization). By default that information is buried, and certainly less consumable by the higher-level individuals who consume letter-grade-based reports from SonarQube.

Large projects may have a number of issues that make it very likely for bugs to be introduced, weakening the general stability of the project's development; but if the ratio of remediation effort to overall development time isn't high enough, that never gets surfaced.

Quality Gate conditions are certainly one answer to this, but I still feel that code smells are being made second-class issues with regard to grading the likelihood that negative impacts will come from not remediating them.

G. Ann Campbell

Nov 1, 2017, 12:10:21 PM
to Colin Mueller, SonarQube
Hi Colin,

Thanks for this thoughtful and thought-provoking response. I (obviously!) don't have a ready answer but rest assured that we're taking it on board, as the British say.


Ann



---
G. Ann Campbell | SonarSource
Product Manager
@GAnnCampbell


Colin Mueller

Nov 2, 2017, 11:16:31 AM
to SonarQube
Ann,

Your consideration is appreciated! I was trying to think of a good word to describe exactly what I'm trying to convey. "Stability" started to sound correct, but that already has a meaning in software development independent of the stability of a codebase; then "Solidity" sounded good, but perhaps interferes with the SOLID mnemonic. Words are hard.


Colin

Lonzak

Jan 10, 2018, 5:03:03 AM
to SonarQube
> So... that shifts the question to how the Maintainability rating should be calculated. By putting it on the same scale as the Reliability rating (1 Blocker = E), you're saying that a bad unit test (one of the Blocker-level Code Smell rules) deserves the same urgent treatment as a resource leak or a runtime error that will take down the application. I don't think that's true.

I disagree - that is not what I am saying. Reliability and security as categories by themselves indicate a higher urgency; Maintainability as a category by itself already indicates a different priority. So of course it makes sense to have different categories... However, if there are high-priority issues (Blockers, Criticals, etc.) in the Maintainability category (which were partly prioritized by us), I expect this to be reflected in the overall dashboard. That is why some of those rules have been defined as Blockers: the priority may be lower, but the severity is still high. Expressed with some exaggeration: your debt rating completely ignores severity (and yes, I know it is calculated as a ratio). But a Blocker is a Blocker. If we think it is less important, then we lower the severity.

 
> Concern) From my project homepage, I'd never know that there are Blocker Code Smells
> Action) If you add an Error condition to your quality gate on Blocker Issues > 0 (sorry, you can't narrow this to Blocker Code Smells) then the presence of any blockers will mean your project fails the quality gate with a prominent notification as to why

Quality gates are only for the developers in our case: they break the build so that something gets fixed. The dashboard is more of a management view...
 
> Concern) We have a lot of trivial classes under analysis, and their large aggregate LoC count throws off the Maintainability Rating calculation.
> Action) If these classes are generated, I'd exclude them completely from analysis. If they're hand-coded (no matter how trivial) I would retain them.

You're missing my point - I don't want to exclude any classes; due to their simplicity they (simply put) contain no issues, but they dilute the ratio so that the important findings are no longer visible...

kysi...@gmail.com

Jan 10, 2018, 7:07:28 AM
to SonarQube
I guess you are after this from a reporting perspective and want severity to somehow affect the maintainability rating.

The only option for now is to tweak the technical debt settings in SonarQube from <5% (A), <10% (B), <20% (C), <50% (D), >=50% (E) down to much lower percentages. But that still isn't affected by severity, so you can only use the quality gate as suggested above.
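For reference, here is a small sketch of how those thresholds map a debt ratio to a rating letter, using exactly the cut-offs quoted above (these cut-offs are what the setting would lower; severity never enters the calculation):

```python
# Sketch: map a technical debt ratio to the Maintainability rating,
# using the default thresholds quoted above (<5% A, <10% B, <20% C,
# <50% D, >=50% E). Note that issue severity plays no role here.

def maintainability_rating(debt_ratio: float) -> str:
    """debt_ratio = remediation cost / development cost, e.g. 0.07 = 7%."""
    for rating, upper_bound in (("A", 0.05), ("B", 0.10),
                                ("C", 0.20), ("D", 0.50)):
        if debt_ratio < upper_bound:
            return rating
    return "E"

print(maintainability_rating(0.03))  # a 3% ratio rates A, Blockers or not
```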
