Code Churn metric


Günter Wirth

Jan 5, 2016, 9:20:58 AM
to SonarQube
Hi,

I remember that there was a request to support a code churn metric in SonarQube (SONAR-3580).
Is there any progress, or an alternative?

Regards,
Günter

Günter Wirth

Oct 26, 2017, 11:55:47 AM
to SonarQube
Hi,

I'm happy to see that there will be some kind of Code Churn in SQ 6.7: using "Measures / Size / New Lines" creates a list of the most recently touched files and the number of changed lines. Thanks for providing this.
To complete this it would be nice to add a heatmap / treemap view to visualize the changes.

Best regards,
Günter

G. Ann Campbell

Oct 26, 2017, 12:32:34 PM
to SonarQube
Hi Guenter,

You're quite welcome. :-)

Since we only treemap ratings and percentages, how do you propose this would work? 


Ann

alix....@gmail.com

Oct 26, 2017, 3:46:25 PM
to SonarQube
Hi Günter and Ann,
Not trying to hijack this thread, but maybe visualizing code churn is most interesting in combination with other data?
For example, I know of a tool, http://www.empear.com/, that tries to predict where bugs are likely to be found based on VCS history and code churn. The idea is that high code churn combined with several authors committing to the same file, plus other factors (such as low code coverage and/or high complexity), indicates a higher risk in that part of the code base, which can then be visualized as hot spots where extra attention is needed.
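The combination described here is easy to prototype. The sketch below is purely illustrative (the field names and weights are hypothetical, not Empear's actual model): churn is amplified by author count and complexity, and dampened by unit-test coverage.

```python
from dataclasses import dataclass

@dataclass
class FileStats:
    churned_lines: int   # lines added + deleted in the period
    authors: int         # distinct committers touching the file
    coverage: float      # unit-test line coverage, 0.0..1.0
    complexity: int      # e.g. summed cyclomatic complexity

def risk_score(s: FileStats) -> float:
    """Naive hotspot score: churn, amplified by extra authors and
    complexity, dampened by test coverage. Weights are arbitrary."""
    churn_factor = s.churned_lines * (1 + 0.1 * (s.authors - 1))
    return churn_factor * (1 + s.complexity / 100) * (1 - 0.5 * s.coverage)

stats = {
    "billing.py": FileStats(400, 5, 0.10, 80),  # many authors, low coverage
    "utils.py":   FileStats(400, 1, 0.90, 10),  # same churn, but well tested
}
# Rank files so the riskiest hotspot comes first.
ranked = sorted(stats, key=lambda f: risk_score(stats[f]), reverse=True)
```

With equal churn, the poorly tested, complex, multi-author file lands on top; that is the whole point of correlating churn with the other signals.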

G. Ann Campbell

Oct 26, 2017, 4:47:02 PM
to Alix Warnke, SonarQube
Hi Alix,

In fact, I was reminded this spring that code churn metrics had been in our backlog for a long time, so I spent some time looking closely at them. I came away distinctly skeptical. First, in the main paper I read (sorry, I can't find a non-paywall version now) I couldn't find any reference to unit testing. It seems to me that a well unit-tested system could withstand a higher level of churn. But I have no way of knowing whether unit testing was used at all, much less heavily, in the project under examination (Windows Server 2003), or what the numbers looked like for unit-tested versus non-unit-tested sections of the code.

Beyond that, the basis of the measurement seems suspect to me in today's world of feature branches (with PR analysis and peer review) and merging with squashed commits. Okay, sure, if you're still on CVS this might still be relevant, but for modern shops... well, you'd have to convince me.

Beyond that, how are these metrics actionable? If I have a section of code with a high churn rate, I should... sit on my hands and stop committing? Do a peer review (already part of our workflow, at least)? If the peer review does find something, checking in a change is just going to make the numbers worse! (Okay, now I'm being a PITA, but you get the point. :-D)

In fact, that paper I referred to was a retrospective statistical analysis. IMO, it didn't provide any guidance, or even hints about what to do with code churn numbers in an ongoing project. What if churn is high because the requirements keep changing, not because (as was supposed) the coders are having a hard time getting the logic right?

In short... I'm deeply skeptical.


Ann

---
G. Ann Campbell | SonarSource
Product Manager
@GAnnCampbell

--
You received this message because you are subscribed to a topic in the Google Groups "SonarQube" group.
To unsubscribe from this topic, visit https://groups.google.com/d/topic/sonarqube/GHlCnsIjIPQ/unsubscribe.
To unsubscribe from this group and all its topics, send an email to sonarqube+unsubscribe@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/sonarqube/93f3a905-27f7-4105-9cdb-ef949cc5313f%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.

alix....@gmail.com

Oct 26, 2017, 5:38:12 PM
to SonarQube
Hi Ann,
You make a good point about modern practices and the applicability of this type of prediction.
Indeed, peer review of pull/merge requests, casual refactoring while implementing a new feature (to keep the code tidy), and cleaning up violations highlighted by tools like SonarQube will all affect code churn without typically increasing risk (otherwise you are doing it wrong). On the other hand, such tidy code will be simpler and covered by unit tests, and therefore less likely to be highlighted as a risk even though its churn is high (if the data is correlated).

I may sound convinced of the usefulness of such predictions but I'm not. So let's say I'm mildly skeptical :-)

/Alix

Günter Wirth

Oct 27, 2017, 1:56:18 AM
to SonarQube
Hi Ann,

"Measures / Size / New Lines" shows the number of New Lines in the current Leak Period. The current implementation is already helpful for finding changes and focusing manual code reviews on these parts of the code. Studies have shown that absolute measures like LOC are poor predictors of pre- and post-release faults in software systems.
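As an aside: SonarQube derives New Lines from SCM data, but outside SonarQube a rough per-file churn count can be approximated from `git log --numstat` output, where each changed file appears as an `added<TAB>deleted<TAB>path` line (binary files report `-`). A minimal parsing sketch, with an illustrative sample:

```python
from collections import Counter

def churn_from_numstat(numstat_output: str) -> Counter:
    """Sum added + deleted lines per file from `git log --numstat` text.
    Commit headers and blank lines are skipped; binary files show '-'."""
    churn = Counter()
    for line in numstat_output.splitlines():
        parts = line.split("\t")
        if len(parts) != 3:
            continue  # not a numstat line
        added, deleted, path = parts
        if added == "-":
            continue  # binary file, no line counts
        churn[path] += int(added) + int(deleted)
    return churn

# Illustrative sample of numstat output spanning two commits:
sample = (
    "10\t2\tsrc/Foo.java\n"
    "3\t0\tsrc/Bar.java\n"
    "10\t1\tsrc/Foo.java\n"
    "-\t-\tlogo.png\n"
)
churn = churn_from_numstat(sample)
```

Running it over a real log scoped to the leak period (e.g. `git log --numstat --since=...`) would give the same kind of per-file ranking the New Lines measure provides.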


> Since we only treemap ratings and percentages, how do you propose this would work?

I think the easiest approach would be to add a Treemap view (alongside List and Tree) and show the changes as percentages: 100% is the total New Lines within the Leak Period, and the percentages (colors) are calculated relative to this 100%. The size of the boxes is the size of the source files. This would visualize which parts contain the most changes (no more, no less).

To get more value, the size of the boxes could also be one of these (as indicators for poor new code):
- changed lines of code / number of new issues
- changed lines of code / coverage on new code
- changed lines of code / duplications on new code
- ...

This then leads to the point that a general new page where two metrics can be chosen to build a Treemap would be helpful (NDepend's Color Metric is a good example: https://www.ndepend.com/docs/treemap-visualization-of-code-metrics):
- metric 1: color of the boxes
- metric 2: size of the boxes

Regards,
Günter

Günter Wirth

Oct 27, 2017, 2:11:13 AM
to SonarQube


Hi Ann,

After discussing this again with colleagues: extending your page with a Treemap view driven by two metrics would have the biggest benefit.

Regards,
Günter

G. Ann Campbell

Oct 27, 2017, 9:12:58 AM
to SonarQube
Hi Guenter,

Thanks for the input. After some internal discussion, we created MMF-1093 to track this. Feel free to follow/vote. Note that it's an "idea", meaning we have no firm plans.


Ann