[ANN] SonarQube 5.5 RC1 is now available


Fabrice Bellingard

Apr 8, 2016, 12:06:35 PM
to SonarQube
Hi SonarQube Community,

Here is the first public release candidate of SonarQube 5.5.

The main features and noteworthy changes in this 5.5 version are:
  • New SonarQube Quality Model with 3 characteristics, based on new issue types
    • "Reliability" characteristic, based on "bugs"
    • "Security" characteristic, based on "vulnerabilities"
    • "Maintainability" characteristic, based on "code smells"
    • Purpose is to highlight operational risk while remaining committed to managing the technical debt of the code (the default quality gate highlights …)
  • The "old" measure drilldown page is replaced by a brand new "Measures" project page
    • More usable and reactive
    • Offers various visualizations (Treemaps, Bubble charts, Timelines) on top of drilldown capability
  • Increased vertical scalability, performance and stability of the platform
    • Report processing by the Compute Engine (i.e. background tasks) is now done in a dedicated process
    • This processing can be multi-threaded to increase the throughput of the Compute Engine (see the sketch below)
And obviously as usual, this version comes with lots of bug fixes and other small improvements.
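To picture that last change: the Compute Engine is essentially a pool of workers draining a queue of analysis reports. Below is a minimal Java sketch of that pattern; it illustrates the concept only, not SonarQube's actual implementation (the class, queue and worker count are all invented for the example):

import java.util.concurrent.*;

public class ComputeEngineSketch {
    public static void main(String[] args) throws InterruptedException {
        int workerCount = 4; // hypothetical, stands in for a configurable CE worker count
        BlockingQueue<String> reports = new LinkedBlockingQueue<>();
        for (int i = 1; i <= 10; i++) {
            reports.add("report-" + i);
        }

        // Several workers drain the same queue, so reports are processed in parallel.
        ExecutorService pool = Executors.newFixedThreadPool(workerCount);
        for (int i = 0; i < workerCount; i++) {
            pool.submit(() -> {
                String report;
                while ((report = reports.poll()) != null) {
                    System.out.println(Thread.currentThread().getName() + " processed " + report);
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
    }
}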


To help us test this new version, here's all you need to know:

Any feedback is highly appreciated.
Enjoy!

Best regards,

Fabrice BELLINGARD | SonarSource
SonarQube Platform Product Manager
http://sonarsource.com

Francis Galiegue

Apr 8, 2016, 12:26:36 PM
to SonarQube
Hello,

On Friday, April 8, 2016 at 6:06:35 PM UTC+2, Fabrice Bellingard wrote:
[...]
Checking that the database is in UTF-8 is nice; however, there is a small problem with MySQL as far as I can see (well, I use PostgreSQL, so I'm not affected): from the logs I have seen, the charset used is utf8, but this will fail to store characters outside of the BMP correctly; for that you need utf8mb4.
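To illustrate the difference (a quick Java check; the emoji is just one example of a character outside the BMP):

import java.nio.charset.StandardCharsets;

public class BmpCheck {
    public static void main(String[] args) {
        String insideBmp = "é";   // U+00E9, 2 bytes in UTF-8 -> fits in MySQL's 3-byte utf8
        String outsideBmp = "😀"; // U+1F600, 4 bytes in UTF-8 -> needs utf8mb4

        System.out.println(insideBmp.getBytes(StandardCharsets.UTF_8).length);  // 2
        System.out.println(outsideBmp.getBytes(StandardCharsets.UTF_8).length); // 4
        System.out.println(outsideBmp.codePointAt(0) > 0xFFFF);                 // true: outside the BMP
    }
}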

I take it you can't upgrade the charset of a MySQL database like that, can you?

Regards,

Michel Pawlak

Apr 8, 2016, 12:37:32 PM
to SonarQube


Quick feedback:


1. With the new lines and new small squares added to the main dashboard, it has become quite difficult to read. For example, where should my eyes focus on this page:




2. I don't get why you're using "your own" quality model instead of using ISO 25010. http://www.iso.org/iso/iso_catalogue/catalogue_tc/catalogue_detail.htm?csnumber=35733 I would say that reinventing the wheel is a bad practice ;-)


3. Plenty of issues that you put in the "code smells" category aren't code smells, but a lack of respect for conventions... (using a tab instead of a space isn't one, not complying with a naming convention isn't one either.)


According to Martin Fowler, "code smell is a surface indication that usually corresponds to a deeper problem in the system".


4. Measures part: nice (I'll be able to remove custom dashboards with the same charts); however, from a usability perspective the "General" tab should come first, right after the "all" tab, and in each section the "rating" line should come first.


5. Measures -> issues: the table is difficult to read; it would be better to have a 1st table with two columns, "total" and "new", for each type, and a 2nd with "open", "reopened", ...


6. Measures -> all "tabs": a visual glitch such as the following (you should round the number to two digits)




I'll try to find some time to provide more feedback.

Michel



Michel Pawlak

Apr 8, 2016, 12:42:45 PM
to SonarQube
To be more precise, on the first screenshot there is *nothing* in the middle of the page (at least on a big screen), and this is really disturbing. While the middle is empty, there are a lot of numbers spread across the page, and the section labels are almost "invisible". IMHO you should think about running an eye-tracking test on this page.

Michel

David Racodon

Apr 8, 2016, 2:48:04 PM
to Michel Pawlak, SonarQube
Hi,

Good job again!
I don't like the big gap (even on my laptop) between metric and measure on the Measures page, though. It makes it very hard to read.

Regards,

David RACODON
Freelance QA Consultant


Simon Brandhof

Apr 11, 2016, 4:31:05 AM
to SonarQube
Hi,
 
Checking that the database is in UTF-8 is nice; however, there is a small problem with MySQL as far as I can see (well, I use PostgreSQL, so I'm not affected): from the logs I have seen, the charset used is utf8, but this will fail to store characters outside of the BMP correctly; for that you need utf8mb4.

Both utf8 and utf8mb4 are supported, so upgrading to SQ 5.5 does not change anything. If I'm correct, most Chinese/Japanese/Korean characters are correctly encoded with utf8. Only some special characters like emoticons require utf8mb4. The probability of having emoticons in source code is quite low, but still... in that case utf8mb4 should indeed be used.

 

I take it you can't upgrade the charset of a MySQL database like that, can you?

Upgrading seems to be possible, but it does not seem to be straightforward:
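Roughly, the conversion involves statements like the following. This is only a sketch through JDBC: the schema and table names are examples rather than the full SonarQube schema, and in practice InnoDB's 767-byte index-key limit can make indexed utf8mb4 VARCHAR(255) columns fail, which is part of what makes it non-trivial:

import java.sql.*;

public class Utf8mb4Migration {
    public static void main(String[] args) throws SQLException {
        // Example connection settings; adapt to your own instance.
        try (Connection c = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/sonar", "sonar", "secret");
             Statement s = c.createStatement()) {
            // Change the default charset for new tables...
            s.execute("ALTER DATABASE sonar CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci");
            // ...then convert every existing table; CONVERT TO rewrites the stored data too.
            s.execute("ALTER TABLE issues CONVERT TO CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci");
        }
    }
}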

Regards,

mjdet...@gmail.com

Apr 11, 2016, 1:19:05 PM
to SonarQube
Hello,

Thank you for the announcement.

I tend to agree with some of the other sentiments on this thread.

The idea of "bugs" and "vulnerabilities" is a sound one, but "code smells" is not properly named for the maintainability characteristic, and not all "other issues" are maintainability issues. I can't stress this enough. For example, if I enable issue reporting for cyclomatic complexity of methods, this is not a code smell but still an "issue" that affects maintainability and testability. In addition, only some convention-oriented rules are code smells, but not all of them. What about performance issues? Performance is an efficiency problem and not a reliability, security, or maintainability problem. I think the 9 classifications in the technical debt pyramid are a much better way of categorizing issues.
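To illustrate the cyclomatic complexity point, here is a hypothetical method with a complexity of 4: nothing about it "smells", yet it already needs at least 4 test paths to cover:

public class ShippingCost {
    // 3 decision points + 1 = cyclomatic complexity of 4
    static double cost(double weight, boolean express, boolean international) {
        if (weight <= 0) {
            throw new IllegalArgumentException("weight must be positive");
        }
        double base = weight * 1.5;
        if (express) {
            base *= 2;
        }
        if (international) {
            base += 10.0;
        }
        return base;
    }
}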

Eye tracking on the measures page is an issue even for me, despite row highlighting with mouse-over. The drilldown views of the measures are a good improvement. Overall, I find the initial Measures page to be of little value. I much preferred the dashboard breakdowns seen in 5.4 because they offer a better way of visualizing problem areas. I don't like them just because the graphs are pretty, but because the graphs are effective. The current view just throws a bunch of numbers in your face and does a horrible job of showing me whether there's a problem at all. It's more like a stat sheet that you'd send to a manager or copy into Excel.

You mention bubble charts from the measure page, but I could only find the tree map and timeline for a subset of the measures.  I have received good feedback from developers on the visualization of the bubble charts, especially for Issues vs LoC sized by Debt per file.

Multi-threaded CE tasks are a welcome change.

Regards,
Matt

Freddy Mallet

Apr 14, 2016, 8:02:08 AM
to Michel Pawlak, SonarQube
Hi Michel

Thanks for your feedback. See my comments below:

1. With the new lines and new small squares added to the main dashboard, it has become quite difficult to read. For example, where should my eyes focus on this page:

The most important piece of information is the one not displayed in the provided screenshot: the quality gate status at the top of the page :)

[attachment: SonarLint_for_Visual_Studio.png]
I don't think the new "Bugs & Vulnerabilities" section by itself greatly impacts the readability of this page, but indeed the new small squares might. We did several trials and unfortunately we don't have a better design for the time being.

2. I don't get why you're using "your own" quality model instead of using ISO 25010. http://www.iso.org/iso/iso_catalogue/catalogue_tc/catalogue_detail.htm?csnumber=35733 I would say that reinventing the wheel is a bad practice ;-)

I would not say that we've reinvented the wheel. We've just improved the old quality model by fixing its known limitations:
  • Before 5.5: all issues were taken into account to compute the SQALE Rating -> but this rating was mainly designed to estimate the maintainability level of an application/piece of code (technical debt metaphor). What's the point of taking into account, in this SQALE Rating, the cost to fix some bugs or security vulnerabilities? Moreover, it was possible to have an "A" SQALE rating with a ton of blocker bugs -> Houston, we have a problem!
  • After 5.5: issues that are considered operational risks are either Bugs or Vulnerabilities, and they are no longer taken into account to compute the SQALE rating.
  • Before 5.5: it was possible to split the technical debt by characteristics and sub-characteristics. This feature was not so widely used, so we've decided to drop it -> Less is More. The tagging mechanism is far more flexible to classify/group issues.
  • If you manage to find the pages in the ISO 25010 standard explaining how to implement this quality model and make it very actionable for developers, I'll invite you to the restaurant, Michel :)
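To make the rating concrete: the SQALE rating boils down to comparing the estimated remediation cost with the estimated development cost of the code. A sketch (the 5%/10%/20%/50% grid below mirrors commonly cited defaults but is illustrative here, not the exact product specification):

public class MaintainabilityRating {
    // Rating derived from the technical-debt ratio: remediation cost / development cost.
    static char rating(double remediationMinutes, double developmentMinutes) {
        double ratio = remediationMinutes / developmentMinutes;
        if (ratio <= 0.05) return 'A';
        if (ratio <= 0.10) return 'B';
        if (ratio <= 0.20) return 'C';
        if (ratio <= 0.50) return 'D';
        return 'E';
    }

    public static void main(String[] args) {
        // 2 days of debt on code that took 100 days to write -> ratio 0.02 -> 'A'
        System.out.println(rating(2 * 8 * 60, 100 * 8 * 60));
    }
}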

3. Plenty of issues that you put in the "code smells" category aren't code smells, but a lack of respect for conventions... (using a tab instead of a space isn't one, not complying with a naming convention isn't one either.)

According to Martin Fowler, "code smell is a surface indication that usually corresponds to a deeper problem in the system".


If you had said "Some of the issues that you put in the 'code smells' category aren't code smells", I would have agreed, but with "plenty" I disagree. We needed a name to group/classify all the "maintainability" issues: the issues that impact the ability to inject changes into a piece of code. And "Code Smells" is the most suitable term. If you have a look at https://en.wikipedia.org/wiki/Code_smell, naming conventions are also listed.
 

4. Measures part : nice (I'll be able to remove custom dashboards with the same charts),

Cool, that was indeed one of the main motivations!

however, from a usability perspective the "General" tab should be first, after the "all" tab, and in each section the "rating" line should come first.

@Stas, this feedback is for you!
 

5. Measures -> issues: the table is difficult to read; it would be better to have a 1st table with two columns, "total" and "new", for each type, and a 2nd with "open", "reopened", ...

@Stas, for you!

6. Measures -> all "tabs": a visual glitch such as the following (you should round the number to two digits)

That's going to be fixed!

Thanks for your feedback, Michel!
Freddy
--
Freddy MALLET | SonarSource
Product Director & Co-Founder
http://sonarsource.com

Freddy Mallet

Apr 14, 2016, 8:47:42 AM
to mjdet...@gmail.com, SonarQube
Thanks for your feedback, Matt!

See my comments below:

The idea of "bugs" and "vulnerabilities" is a sound one, but "code smells" is not properly named for the maintainability characteristic, and not all "other issues" are maintainability issues. I can't stress this enough. For example, if I enable issue reporting for cyclomatic complexity of methods, this is not a code smell but still an "issue" that affects maintainability and testability. In addition, only some convention-oriented rules are code smells, but not all of them.

For this first point, I know that this is controversial because there is absolutely no official definition of "Code Smell", but for me any issue that impacts the maintainability level of a piece of code can be considered a Code Smell.
 
What about performance issues?  Performance is an efficiency problem and not a reliability, security, or maintainability problem.

You're right, and in my previous email, when I answered Michel that "some issues currently considered as code smells are not really code smells", I was thinking about those performance hotspots. During the last sprint we thought about creating another dedicated "Performance Hotspot" issue type along with the other "Bug", "Vulnerability" and "Code Smell" types, but we've decided to postpone this decision. Indeed, with an automatic code review you can identify some performance hotspots, but very few compared to what you can do with a profiler. So detecting performance hotspots is not the main purpose of an automatic code review, and we didn't want to make this issue type as important as the three other ones. But on this subject, and based on the community feedback, our position might evolve over time.
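For instance, here is the kind of performance hotspot an automatic code review can realistically flag (a hypothetical example), whereas most real hotspots only show up under a profiler:

import java.util.List;

public class HotspotExample {
    // Flaggable statically: string concatenation in a loop is O(n^2),
    // because the whole string is re-copied on each iteration.
    static String joinSlow(List<String> parts) {
        String out = "";
        for (String p : parts) {
            out += p;
        }
        return out;
    }

    // The usual fix: a StringBuilder keeps the overall cost linear.
    static String joinFast(List<String> parts) {
        StringBuilder out = new StringBuilder();
        for (String p : parts) {
            out.append(p);
        }
        return out.toString();
    }
}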

 I think the 9 classifications in the technical debt pyramid are a much better way of categorizing issues.

I don't know a lot of developers who were using the technical debt pyramid, and with those new issue types we're really targeting developers. I think they will feel far more comfortable playing with those three issue types instead of a tree of 9 characteristics and X sub-characteristics. At least that's the bet :)

Eye tracking on the measures page is an issue even for me, despite row highlighting with mouse-over.  

@Stas, this feedback is for you!
 
The drilldown views of the measures are a good improvement. Overall, I find the initial Measures page to be of little value. I much preferred the dashboard breakdowns seen in 5.4 because they offer a better way of visualizing problem areas. I don't like them just because the graphs are pretty, but because the graphs are effective. The current view just throws a bunch of numbers in your face and does a horrible job of showing me whether there's a problem at all. It's more like a stat sheet that you'd send to a manager or copy into Excel.

I think you're giving this feedback, @Matt, because in RC1 the links from the section names on the project overview page were missing. Now (in the upcoming RC2) you can click on a section name (like Coverage, Duplications, ...) and you're redirected to the related metric domain page in the "Measures" space -> and there you can find some useful and actionable preconfigured bubble charts.

And again thanks for your feedback !
Freddy
 

Eric

Apr 15, 2016, 2:13:59 PM
to SonarQube
Hi all,

Good job on 5.5 and the fast release pace!

We get HTTP 400 errors when we try to add metrics that start with "New ..." to a quality gate. The other types of metrics work fine.

Other possible quality gate improvements:
- It would be nice to have the option to restore the built-in quality gate, like it is possible to do with quality profiles. Someone deleted the default one on our server.
- For a large enterprise, it is not ideal to have to give global permissions to administer gates and profiles. Most teams want their own profile and we have 100k employees, i.e., a lot of teams. One possible improvement would be to give users on-demand "time-limited" permission; that permission could expire after 20 minutes or 1 hour. I think Jira does something like that. SonarQube could also keep an audit trail of who requested permission.
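To sketch the idea (every name and duration below is invented for illustration):

import java.time.Duration;
import java.time.Instant;

public class TemporaryGrant {
    private final String user;
    private final String permission;
    private final Instant expiresAt;

    TemporaryGrant(String user, String permission, Duration ttl) {
        this.user = user;
        this.permission = permission;
        this.expiresAt = Instant.now().plus(ttl);
        // The audit trail mentioned above: record who was granted what, and until when.
        System.out.printf("AUDIT: %s granted %s until %s%n", user, permission, expiresAt);
    }

    boolean isActive() {
        return Instant.now().isBefore(expiresAt);
    }

    public static void main(String[] args) {
        TemporaryGrant grant = new TemporaryGrant("alice", "profileadmin", Duration.ofMinutes(20));
        System.out.println(grant.isActive()); // true until the 20 minutes elapse
    }
}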

Creating, importing or changing the parent of quality profiles are still extremely slow operations in 5.5. I've seen simple cases take up to 30 minutes. Anything we could do there? Elasticsearch seems to timeout a lot. Is there a way to increase its timeout? Our storage might not be the fastest but our data/es index is only 1GB.

On another note, we like the new bugs and vulnerability metrics. We also have an issue with the new "Code smell" name though. Would something like "Technical issues" be a better name? It could fit well with the related "Technical debt".

Regards,
Eric

Michel Pawlak

Apr 17, 2016, 9:22:42 AM
to SonarQube, michel...@gmail.com
Hi Freddy,

Thanks for your answer; see my comments below.


The most important piece of information is the one not displayed in the provided screenshot: the quality gate status at the top of the page :)


[attachment: SonarLint_for_Visual_Studio.png]

That's what I'm explaining to our users, but it's not obvious to them, as this information is written in a smaller font than the rest, and as the big blank zone in the middle of the screen catches their attention first.
 
I don't think the new "Bugs & Vulnerabilities" section by itself greatly impacts the readability of this page, but indeed the new small squares might. We did several trials and unfortunately we don't have a better design for the time being.

That's indeed not an easy task. Having too many lines with many small squares may lead to a less readable dashboard. Mixing bugs and vulnerabilities on the same line is difficult to understand. Have you thought about running a Kano analysis in order to select what should be part of this dashboard page?

By the way, how can I quickly see the debt related to bugs and vulnerabilities (I can only see it for "code smells")? Yes, we have to fix them, but management will ask us for an estimated remediation cost.
 

2. I don't get why you're using "your own" quality model instead of using ISO 25010. http://www.iso.org/iso/iso_catalogue/catalogue_tc/catalogue_detail.htm?csnumber=35733 I would say that reinventing the wheel is a bad practice ;-)

I would not say that we've reinvented the wheel. We've just improved the old quality model by fixing its known limitations:
  • Before 5.5: all issues were taken into account to compute the SQALE Rating -> but this rating was mainly designed to estimate the maintainability level of an application/piece of code (technical debt metaphor). What's the point of taking into account, in this SQALE Rating, the cost to fix some bugs or security vulnerabilities? Moreover, it was possible to have an "A" SQALE rating with a ton of blocker bugs -> Houston, we have a problem!
Indeed, having an "A" rating while having many blockers was annoying, so it is an improvement if that is no longer the case. However, if I'm not mistaken:

- An "A" rating is still possible if there is 0% coverage
- An "A" rating is still possible if no rule is activated in the profile
  • After 5.5: issues that are considered operational risks are either Bugs or Vulnerabilities, and they are no longer taken into account to compute the SQALE rating.
Hmm. In fact you removed the SQALE rating and created three separate ratings (btw the link to the SQALE rating now leads to a maintainability rating).
  • Before 5.5: it was possible to split the technical debt by characteristics and sub-characteristics. This feature was not so widely used, so we've decided to drop it -> Less is More. The tagging mechanism is far more flexible to classify/group issues.
Well, we were using it, but I have to admit that the (sub-)characteristics were not clear enough (though that was mostly due to ISO 9126).
  • If you manage to find the pages in the ISO 25010 standard explaining how to implement this quality model and make it very actionable for developers, I'll invite you to the restaurant, Michel :)
Challenge accepted: "pages 3-4, then section 4.2 page 10 + Annex A", but you'll have to offer me a job as well if you want me to tell you how to integrate it efficiently in SQ for both developers and top management ;-)
 

3. Plenty of issues that you put in the "code smells" category aren't code smells, but a lack of respect for conventions... (using a tab instead of a space isn't one, not complying with a naming convention isn't one either.)

According to Martin Fowler, "code smell is a surface indication that usually corresponds to a deeper problem in the system".


If you had said "Some of the issues that you put in the 'code smells' category aren't code smells", I would have agreed, but with "plenty" I disagree. We needed a name to group/classify all the "maintainability" issues: the issues that impact the ability to inject changes into a piece of code. And "Code Smells" is the most suitable term. If you have a look at https://en.wikipedia.org/wiki/Code_smell, naming conventions are also listed.

Sorry if my comment sounded a bit harsh, but the reason why I wrote "plenty" is the following: for the Java language, 40 out of 250 "Code Smells" are related to "conventions", 26 are tagged "clumsy", and 21 "brain overload". These categories may be considered "code smells", but I'm not sure what the "deeper problem in the system" can be then, except that team members should be trained and should realise that they are not working alone.

I'm not comfortable with the relation you make between "code smells" and "maintainability". If I refer to ISO 25010, you kept 2 characteristics out of 8 and merged the remaining 6 into a single one you named "code smells". Knowing that maintainability is only one of those remaining 6 characteristics, it's odd to consider all these smells as maintainability-related (you named the related rating "maintainability rating").

Another decision I'm not comfortable with is that, now that you use code smells as a characteristic, you exclude de facto reliability and security from possible code smells. However, smells can be related to all characteristics (you can have smells related to security and reliability as well). You have a "smell" if you "suspect" but are not "sure" that there is a problem and want people to investigate (example: during the last 6 months, every time you fixed a given security-related class you also modified another file in your SCM, but the last time you didn't. Statistically speaking, you have a code smell).
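To make the SCM example concrete, here is a rough sketch of that statistical check (the commit history is hard-coded and the file names invented; a real implementation would read the SCM log):

import java.util.List;
import java.util.Set;

public class ChangeCoupling {
    // Fraction of the commits touching file a that also touched file b.
    static double coChangeRatio(List<Set<String>> commits, String a, String b) {
        long touchesA = commits.stream().filter(c -> c.contains(a)).count();
        long touchesBoth = commits.stream().filter(c -> c.contains(a) && c.contains(b)).count();
        return touchesA == 0 ? 0.0 : (double) touchesBoth / touchesA;
    }

    public static void main(String[] args) {
        List<Set<String>> history = List.of(
            Set.of("SecurityFilter.java", "FilterConfig.xml"),
            Set.of("SecurityFilter.java", "FilterConfig.xml"),
            Set.of("SecurityFilter.java")); // the suspicious lone change
        double ratio = coChangeRatio(history, "SecurityFilter.java", "FilterConfig.xml");
        if (ratio > 0.6) {
            System.out.println("Smell: these files usually change together (ratio " + ratio + ")");
        }
    }
}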

Last but not least, while smells are "possible" issues, some issues that you put in this category are certain issues that need to be fixed if you're interested in the underlying characteristic (example: ResultSet.isLast() should not be used; if you care about performance, you should fix such issues, there is nothing special to investigate).
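For the ResultSet example, the point is that the fix is mechanical, with nothing left to investigate:

import java.sql.ResultSet;
import java.sql.SQLException;

public class ResultSetUsage {
    // Noncompliant: some JDBC drivers implement isLast() with a look-ahead fetch
    // on every call, and this loop also misbehaves on an empty result set.
    static void printAllBad(ResultSet rs) throws SQLException {
        while (!rs.isLast()) {
            rs.next();
            System.out.println(rs.getString(1));
        }
    }

    // Compliant: next() already tells you when the rows are exhausted.
    static void printAllGood(ResultSet rs) throws SQLException {
        while (rs.next()) {
            System.out.println(rs.getString(1));
        }
    }
}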
 
I don't know if you had a look at the list of code smells I gathered for my Code Smells plugin: https://github.com/QualInsight/qualinsight-plugins-sonarqube-smell . If you have a look at it, you'll see that they are not only related to maintainability, and most of them are difficult, not to say impossible, to detect automatically.

For all these reasons I don't think "code smell" is the most suitable term to use. I would even prefer "other issues" and "other debt" if you consider merging them a must (by the way, I would keep a separate rating for all remaining characteristics, or find a better name than "maintainability rating").
 
Thanks for your feedback, Michel!
Freddy

You're welcome,

Michel 

mjdet...@gmail.com

Apr 19, 2016, 2:56:22 PM
to SonarQube, mjdet...@gmail.com

Freddy,


Thanks for the reply.


While there is no official definition of what constitutes a code smell, I have a hard time seeing how Javadoc or comment violations could be considered code smells. They aren't code.




On a separate note, I got to thinking a bit more about the eye tracking that Michel has talked about. It seems like I notice it more every time I load a project dashboard in SQ. It's more of a problem dating back to previous SQ versions, but I can't help noticing it now.

To demonstrate a comparison on a 1080p monitor, probably the most common desktop resolution today, I chose GitHub's project page. While it is not perfect, I certainly don't have problems with eye tracking there. I understand that the layouts differ quite a bit, since SQ is variable width and GitHub is centered fixed width, so naturally SQ could fall victim to a problem like eye tracking. At the same time, considering other websites, I cannot think of a major content-driven site that does not use some type of centered fixed-width layout on the desktop. Doing some exploring today, it seems the LA Times (http://www.latimes.com/) has an excellent layout with a good balance between text, visuals, and whitespace.

Personally, I find myself looking at the horizontal center and vertical mid-top level any time I load a new browser page, then generally move slightly to the left to find the start of some kind of sentence or title.

Here's what happens when I look at the SonarQube project page:


And on GitHub:



Regards,

Matt




stas.v...@sonarsource.com

Apr 21, 2016, 3:35:37 AM
to SonarQube
Hello Michel,

About the measures page: we have fixed the bug with number formatting and reworked the page UI to improve the overall readability.

Please find below a screenshot of the next version (which will be available in RC2):

stas.v...@sonarsource.com

Apr 21, 2016, 3:39:42 AM
to SonarQube
Hello Eric,

The bug with quality gates will be fixed in RC2.

Thanks!

stas.v...@sonarsource.com

Apr 21, 2016, 3:41:20 AM
to SonarQube, mjdet...@gmail.com
Hello Matt,

You're right to point out the problem with eye-catching numbers and the page width. This is work in progress, and hopefully we'll improve it soon.

Thanks.

Freddy Mallet

Apr 22, 2016, 11:48:26 AM
to Eric, SonarQube
Thanks @Eric for your feedback!

See my comments below:

We get HTTP 400 errors when we try to add metrics that start with "New ..." to a quality gate. The other types of metrics work fine.

As answered by @Stas: fixed in the upcoming 5.5 RC2.
 
Other possible quality gate improvements:
- It would be nice to have the option to restore the built-in quality gate, like it is possible to do with quality profiles. Someone deleted the default one on our server.

Based on this feedback, we have started an internal discussion, because perhaps we should go even further by automatically updating the default quality profiles and default quality gates (and, as a side effect, making them immutable for end users).
 
- For a large enterprise, it is not ideal to have to give global permissions to administer gates and profiles. Most teams want their own profile and we have 100k employees, i.e., a lot of teams. One possible improvement would be to give users on-demand "time-limited" permission; that permission could expire after 20 minutes or 1 hour. I think Jira does something like that. SonarQube could also keep an audit trail of who requested permission.

Indeed, and feel free to vote for https://jira.sonarsource.com/browse/SONAR-1330

 
Creating, importing or changing the parent of quality profiles are still extremely slow operations in 5.5. I've seen simple cases take up to 30 minutes. Anything we could do there? Elasticsearch seems to timeout a lot. Is there a way to increase its timeout? Our storage might not be the fastest but our data/es index is only 1GB.

This performance issue relates to https://jira.sonarsource.com/browse/SONAR-6315, which will be fixed in the 5.6 LTS version. But 30 minutes, that's really huge. How many rules are you activating in your quality profile, and which DB are you using?
 
On another note, we like the new bugs and vulnerability metrics.

Cool!
 
We also have an issue with the new "Code smell" name though. Would something like "Technical issues" be a better name? It could fit well with the related "Technical debt".

I don't really buy the name "Technical issue": does that mean that the other ones are not technical? ;)

Thanks again
Freddy

 
