Transparent metrics


Corey Haines

Jun 14, 2009, 7:43:41 PM
to software_craftsmanship
Paul,

I agree that metrics alone cannot be grounds for someone to say 'they are a craftsman.' However, my goal here stems from a growing belief of mine that a solid craftsman developer will not allow his code to rot to the point that I'm sure we have all seen at some point in our careers. It is not an absolute measure, like 'A is better than B because they have 2 more points' worth of DRYness,' but I do think you can take a look and go 'wow, C has a really horrible codebase. What are they doing to fix it?'

I'm growing to believe that transparency, in as much as possible, is one key in our quest to bring more professionalism into our industry. One analogy I've been using lately is that the current state of our field is a lot like allowing a barber to buy a jar of leeches and then hang a physician's sign on his door. That isn't to say that everyone is horrible; on the contrary, there are some great people out there, plus a lot of people who don't realize the potential harm they are doing. By creating a system of transparency for codebases, much like the attempted system of transparency for practices (www.railsmaturitymodels.com), we can help people understand where they may be deficient.

We all pretty much agree that certifications are not the way to go. As an alternative, we talk about community pressure, helping each other learn and grow. I'm proposing one small part of that as a way for people to learn more about where their deficiencies might lie. If a person looks at the site, sees people's metrics, runs the metrics on their own codebase, and sees a dramatic difference, then hopefully it will be a flag that there is something missing that they need to practice and learn. Plus, by engaging in some sort of client education about the metrics, people who do not put their metrics up might face a small market-/peer-pressure to clean up their code and learn how to keep it clean.

One necessary part of the system would be a 'keep my metrics private for now' option, where people could view their own in relation to others', then work on figuring out how to get to a certain level they wish to attain.

I'm not completely set on what metrics to use, how to present them, what they mean over time, etc. So, I'm hoping to continue this conversation to flesh out my own thoughts. I do believe there is something worthwhile here, and I'm hoping to use everyone's thoughts and feedback in evolving the idea. Then, my goal is to try to secure a couple months of funding to develop the system in the October/November timeframe.

-Corey

2009/6/12 Paul Pagel <paulw...@gmail.com>
Corey,

I think it depends on your metrics, but I don't think predefined metrics can determine craftsmanship or professionalism.  Can a metric understand the times when you are intentionally not DRY for expressiveness purposes?  Can a metric tell that you care about a customer's problems as much as they do?  I think the only way to know professionalism is through experiencing the same problem as another craftsman.  Corey, I have heard you talk about professionalism being defined in part as the discipline to stick with your techniques when the pressure is on.  How can you tell from metrics what happened when the pressure was on?

For me, I know a craftsman when I can see them writing high quality code day in and day out.  When the pressure is on, they don't lose it.  They are constantly learning.  They care about their customers, etc...  I just don't think craftsmanship is measurable in the slightest except through getting to know craftsmen personally and making a somewhat subjective decision.

Having said that, even if it isn't too telling, it would be interesting to publicly see different shops' metrics.

Paul

On Jun 12, 2009, at 11:06 AM, Corey Haines wrote:

Hi, all,

I've been thinking on just this topic a considerable amount over the past month or so, and I've had some ideas come about through discussions with people. I'm going to write a blog post, as well as look for funding to work on a backbone project for it.

The basic premise is that we all agree that certification isn't that great, so we need a way for the community to self-regulate. There are certain levels of transparency (practices, code metrics, customer satisfaction, peer-vouching) that could establish the 'community of professionals.' I want to build a system that would allow people and development shops to publish metrics about their codebases for public consumption. I'm going to start talking more with companies and individuals about what summary statistics and code metrics could realistically be made publicly available without compromising client confidentiality or proprietary information. My goal is to secure about 2 months of funding to develop a pretty complete, turnkey solution for establishing a third-party, as-objective-as-possible system with two goals: display and consolidation; and public education about the meaning and uses of the metrics.

More information to come.

Thanks.
-Corey

On Fri, Jun 12, 2009 at 10:55 AM, Paul Pagel <paulw...@gmail.com> wrote:

Matt,

I am going to have to second Enrique's post on a community of
professionals.  For me, being a craftsman is also about knowing other
craftsmen.  For example, I have never worked directly with Enrique,
Corey Haines, or Jake Scruggs for a client.  However, I have written
code with each of them and would stake my reputation on vouching for
them (Brian started wevouchfor.org to try and streamline that
concept).  Have you tried to recommend a software craftsman to your
client?  Someone you can trust to get the job done and who will reflect
positively back on you.

I don't like the idea of the singular authority that certificates,
classes, and bar exams must create.  I trust when Dave Hoover, a craftsman
I respect very much, says to me "Craftsman XX is a model craftsman,
you should refer them to your client," that I won't need to worry
about XX.  For me, it is very important to have a network of trusted
craftsmen who have their own network of trusted craftsmen.  One way to
make the software industry better is to refer craftsmen who will do a
good job and raise the expectations of clients.

Paul


On Jun 12, 2009, at 8:48 AM, Matt Wilson wrote:

> On Jun 11, 11:10 pm, Matthew Wilson <stls...@gmail.com> wrote:
>> I have had the misfortune to bear witness to similar situations
>> multiple times.
>>
>> Matt [the other Matt Wilson ;-)]
>
> Are you the Matt Wilson that wrote Pantheios?  I ran into that guy on
> artima.com.






--
http://www.coreyhaines.com
The Internet's Premiere source of information about Corey Haines










Corey Haines

Jun 26, 2009, 9:36:25 AM
to software_craftsmanship
I've recorded a road thoughts video about this, as well as a bit more
explanation of the project that I want to do:

http://vurl.me/ACO

-Corey

DocOnDev

Jun 26, 2009, 6:24:37 PM
to software_craftsmanship
I like the idea of sharing metrics among the community. I am uncertain
about candid versus anonymous metric sharing. I don't have confidence
that companies or individuals will all have the integrity required to
achieve the desired end. Regardless, I want to see forward progress in
this area.

AlexBolboaca

Jun 28, 2009, 8:01:09 AM
to software_craftsmanship
Corey,

From my experience, I see a few problems with your idea:

1. There will always be customers that don't understand how software
is made, and therefore it will be rather difficult for them to
understand the meaning of the metrics.
2. Companies are not necessarily trying to improve themselves when
they reach their commercial targets.
3. One measure for all doesn't work very well. Companies that don't
want to show their metrics will just say that it doesn't apply to
them, and they will be believable because, frankly, that's what usually
happens. I know of various types of contracts for development services
that are measured quite differently: maintenance, offshore development
center, etc. Sure, they are just ways to share risks, but the point is
that you can get away with it if you want.

I'm not saying this to discourage you. I believe that metrics should
be shared, but I also believe that once we start sharing them it may
take a long time to reach a standard, if such a thing is possible.

I think metrics will be really useful for the craftsmen who want to
see where their organization is and what they need to learn and teach
others. I could really use something like this for my activity of
helping teams become more proficient.

Hope to hear more soon,
Alexandru Bolboaca
www.alexbolboaca.ro
www.agileworks.ro - Romanian agile user group
www.mozaicworks.com

Corey Haines

Jun 28, 2009, 8:39:29 AM
to software_cr...@googlegroups.com
Alex,

Thanks for the comments. They are all valid; I'm putting responses inline. My response, in summary, is that the fact that this is not a 100% solution does not mean that it shouldn't happen. No one thing can solve all problems, but, together, a bunch of individual things can help improve the situation.

-Corey

On Sun, Jun 28, 2009 at 8:01 AM, AlexBolboaca <alex...@gmail.com> wrote:

1. There will always be customers that don't understand how software
is made, and therefore it will be rather difficult for them to
understand the meaning of the metrics.

That is very true. With a bit of education, a lot of customers will be able to get a basic understanding of some differentiating factors between 'good' and 'bad' solutions for their situation. Some won't, but that's life.


 
2. Companies are not necessarily trying to improve themselves when
they reach their commercial targets.

There definitely are those companies. That's okay, though; they just don't have to take part.
 

3. One measure for all doesn't work very well. Companies that don't
want to show their metrics will just say that it doesn't apply to
them, and they will be believable because, frankly, that's what usually
happens. I know of various types of contracts for development services
that are measured quite differently: maintenance, offshore development
center, etc. Sure, they are just ways to share risks, but the point is
that you can get away with it if you want.

Those companies are free to not take part; companies that decide to pull the "doesn't work here" card are free to do so.

Remember, it isn't about 'one measure;' this project is intended to be just one arm of many to come from the community. It is one of the first steps toward the idea of some sort of 'community approval.'

I do believe that there are certain metrics that, when obviously bad, can be almost universally accepted as detrimental to the codebase.

 

I'm not saying this to discourage you. I believe that metrics should
be shared, but I also believe that once we start sharing them it may
take a long time to reach a standard, if such a thing is possible.

I don't think there can be such a thing as a 'standard,' although I think general agreement on 'bad' is possible.

And, yes, it will take a long time. If we do this later, it will take just as long. I've talked to many people about the length of time, and I'm really taking the 10-year view: I'd like to see changes 10 years from now. We need to start now, though. Too often, we aren't willing to start on things that are going to need a long time to have a significant effect.

 

I think metrics will be really useful for the craftsmen who want to
see where their organization is and what they need to learn and teach
others. I could really use something like this for my activity of
helping teams become more proficient.

And this is one of the fundamental goals of the project.

 

Corey Haines

Jun 28, 2009, 8:40:59 AM
to software_cr...@googlegroups.com
I don't have confidence that all companies will be willing to do it. However, there are a lot that will. Those who aren't could find themselves in the interesting situation of having to explain why. If we can find a way to effectively display the metrics without exposing proprietary information, then these companies will have to come up with some creative excuses.

-Corey

AlexBolboaca

Jun 28, 2009, 4:38:11 PM
to software_craftsmanship
Corey,

Since I am now sure that we know exactly how things stand with the
adoption of transparent metrics, I have only one question:

How can we help?

Alex

PS: I find it ironic that you are trying to work on something that's
philosophically very close to the open source movement. While they are
trying to "free" the software, you are trying to "free" software
quality. It is also equally difficult. You will need help ;).

Olof Bjarnason

Jun 28, 2009, 5:24:46 PM
to software_cr...@googlegroups.com


2009/6/28 AlexBolboaca <alex...@gmail.com>


Corey,

Since I am now sure that we know exactly how things stand with the
adoption of transparent metrics, I have only one question:

How can we help?

One idea: remote pair programming, developing the OSS metrics software and the site together. We need off-line tools because we're dealing with proprietary source code.

Count me in ..
 


Alex

PS: I find it ironic that you are trying to work on something that's
philosophically very close to the open source movement. While they are
trying to "free" the software, you are trying to "free" software
quality. It is also equally difficult. You will need help ;).




--
twitter.com/olofb
olofb.wordpress.com
olofb.wordpress.com/tag/english

Corey Haines

Jun 28, 2009, 5:38:55 PM
to software_cr...@googlegroups.com
:) I figured we were on the same page.

Well, I'll be looking for help soon in a couple of different ways. I'm going to be talking to some companies about funding the project itself for a couple months, but I'm going to be looking to the community for a bit of help over the next couple months to get there.

And, of course, once the project is underway, coding help is always appreciated. As a starter, look at your language of choice/expertise and start analyzing the available metrics to see what can be tracked effectively. I'll be looking to people with more experience in other languages to help out with ideas on what their platform supports.

-Corey

Corey Haines

Jun 28, 2009, 6:17:00 PM
to software_cr...@googlegroups.com
We definitely need offline tools to gather the metrics; the SNAFU service will be there to upload the collected data. Good thinking.

Michael Norton

Jun 28, 2009, 7:11:29 PM
to software_cr...@googlegroups.com
Count me in. 

Michael Norton

Olof Bjarnason

Jun 29, 2009, 2:34:57 AM
to software_cr...@googlegroups.com
OK, maybe we should create a discussion group for this specific subject..? The transparent metrics site (TMS?) discussion will clutter the SC mailing list, since there is so much to discuss.

What do you say Corey?





Corey Haines

Jun 29, 2009, 8:27:01 AM
to software_cr...@googlegroups.com
Give me a little bit to set something up. I'm going to be spending this week in PEI putting some information together around it, trying to get some funding for the project, etc. I'll send out a link to everything for people to join by early next week.

Thanks for all the interest. One of the reasons I'm putting the idea out there while it's still just forming is to get all the feedback from everyone, and it has definitely been great. "Community of Professionals" indeed!

Thanks.
-Corey

AlexBolboaca

Jun 29, 2009, 8:53:47 AM
to software_craftsmanship
I used FxCop and SourceMonitor to inspect C# code. There's also
StyleCop for .NET. SourceMonitor is useful because it shows things
like cyclomatic complexity and coupling. FxCop performs automatic code
review based on some rules; it's OK but could be much better, and the
only indicators you can get from it are the number of issues of a
specific type, which is not much.
There's also NDepend, which shows dependencies, but I find the graphs
confusing.

I found some of the SourceMonitor metrics very useful, along with
FxCop's indications of unused stuff. (By the way, it would be great if
someone wrote a tool for finding dead .NET code.)

- Alex

Olof Bjarnason

Jun 29, 2009, 9:25:09 AM
to software_cr...@googlegroups.com


2009/6/29 AlexBolboaca <alex...@gmail.com>


I used FxCop and SourceMonitor to inspect C# code. There's also
StyleCop for .NET. SourceMonitor is useful because it shows things
like cyclomatic complexity and coupling. FxCop performs automatic code
review based on some rules; it's OK but could be much better, and the
only indicators you can get from it are the number of issues of a
specific type, which is not much.
There's also NDepend, which shows dependencies, but I find the graphs
confusing.

I found some of the SourceMonitor metrics very useful, along with
FxCop's indications of unused stuff. (By the way, it would be great if
someone wrote a tool for finding dead .NET code.)

Cool, I might check out that SourceMonitor app.

Problems with such tools include (a) they might prove unusable in an OSS setting, and (b) it's often easier to write your own custom tools than to get a general tool (like FxCop) to work exactly the way you want (especially automatically), at least for simpler problems like method length distribution / comment density and such. Code coverage is of course more intricate, as well as cyclomatic complexity.
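As a rough illustration of how small such a custom tool can be, here is a sketch (Python rather than .NET, purely for brevity; the function names are made up for this example) computing method length distribution and comment density:

```python
# Sketch of two of the "simpler problems" mentioned above: method length
# distribution and comment density. Python-only and illustrative; a real
# tool would walk a whole source tree, not a single string.
import ast

def method_lengths(source: str) -> list[int]:
    """Line count of every function/method in a Python source string."""
    tree = ast.parse(source)
    return [node.end_lineno - node.lineno + 1
            for node in ast.walk(tree)
            if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef))]

def comment_density(source: str) -> float:
    """Fraction of non-blank lines that are comments."""
    lines = [ln.strip() for ln in source.splitlines() if ln.strip()]
    return sum(ln.startswith("#") for ln in lines) / len(lines) if lines else 0.0
```

Feeding these the text of each file and binning the results gives the distribution; the same idea ports to any language you can parse, or even tokenize crudely.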





AlexBolboaca

Jun 29, 2009, 12:28:21 PM
to software_craftsmanship
> Problems with such tools include (a) they might prove unusable in an OSS
> setting (b) it's often easier to write your own custom tools than getting a
> general tool (as FxCop) to work exactly like you want it to work (especially
> automatically), at least for simpler problems like method length
> distribution / comment density and such. Code coverage is of course more
> intricate, aswell as cyclomatic complexity.

(a) True, some of them are unusable with OSS.
(b) Not sure about that, although I've never tried.

I just realized that a good indicator would be the amount of code
duplication. I know of some tools that try to find this - Simian,
TeamCity, CCFinder - but I've never had the chance to use them. Of
course, they can only be limited in scope.

Alex

Olof Bjarnason

Jun 29, 2009, 3:27:34 PM
to software_cr...@googlegroups.com


2009/6/29 AlexBolboaca <alex...@gmail.com>

Yes, code duplication is a prime indicator of bad code, maybe the most useful one.

I've tried Simian, and it is good (it supports many languages, for example). Maybe it is based on some OSS library to find duplication..? Then we would have full flexibility when it comes to integrating it into a tool which we can customize to our liking.


Alex





AlexBolboaca

Jun 29, 2009, 4:34:33 PM
to software_craftsmanship
I've just read about the CPD (copy-paste detector) plugin for Eclipse
- details here http://www.ibm.com/developerworks/java/library/j-ap01117/
- so there is at least one open source tool for this kind of thing.
Actually, finding exact copies of code shouldn't be that hard. It will
be a little bit more complicated to find similar code (not taking into
account comments, variable names, small variations like for instead of
foreach), especially if such a tool would need to work for various
languages.

I will see if I can find more tools.
Alex


Olof Bjarnason

Jun 29, 2009, 5:09:18 PM
to software_cr...@googlegroups.com


2009/6/29 AlexBolboaca <alex...@gmail.com>


I've just read about the CPD (copy-paste detector) plugin for Eclipse
- details here http://www.ibm.com/developerworks/java/library/j-ap01117/
- so there is at least one open source tool for this kind of thing.
Actually, finding exact copies of code shouldn't be that hard. It will
be a little bit more complicated to find similar code (not taking into
account comments, variable names, small variations like for instead of
foreach), especially if such a tool would need to work for various
languages.

I guess it must take some quite advanced algorithmics to do the actual search..? I mean, it is a combinatorial explosion of possibilities; some dynamic programming, maybe? I'm not into advanced string algorithms.

About similar code: I have given that problem a bit of thought. If the algorithm can detect identifiers and other not-so-important stuff in the source code (whitespace!), there could be a "preprocessing step" that just removes such things, and then does the duplication check.
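That preprocessing idea can be sketched in a few lines (Python, purely illustrative; the normalization rules, keyword list, and three-line window are arbitrary choices for this example, not any real tool's algorithm):

```python
# Normalize away comments, whitespace and identifier names, then report any
# window of `window` consecutive normalized lines that occurs more than once.
import re
from collections import defaultdict

KEYWORDS = {"if", "else", "for", "foreach", "while", "return", "def", "class"}

def normalize(line: str) -> str:
    line = re.sub(r"#.*|//.*", "", line)              # strip comments
    line = re.sub(r"\b[A-Za-z_]\w*\b",                # identifiers -> ID
                  lambda m: m.group(0) if m.group(0) in KEYWORDS else "ID",
                  line)
    return re.sub(r"\s+", "", line)                   # strip whitespace

def find_duplicates(source: str, window: int = 3) -> list[tuple[int, int]]:
    """Pairs of 1-based start lines whose normalized windows match."""
    lines = [normalize(ln) for ln in source.splitlines()]
    seen = defaultdict(list)
    for i in range(len(lines) - window + 1):
        chunk = tuple(lines[i:i + window])
        if any(chunk):                                # skip all-blank windows
            seen[chunk].append(i + 1)
    return [(starts[0], s)
            for starts in seen.values() if len(starts) > 1
            for s in starts[1:]]
```

Exact-copy detection falls out of the same code by skipping the identifier replacement; tools like Simian and CPD are far more sophisticated, but the preprocessing step has this same shape.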





Ryan Roberts

Jun 29, 2009, 7:25:36 PM
to software_cr...@googlegroups.com
Are coverage metrics still doable with an open source stack in .NET? That's going to be a pretty big stumbling block without sponsorship of some sort.

Comment density measurements are probably counterproductive. There's no way of automatically determining whether comments are actually useful; 'undocumentation' written to comply with policy, or stream-of-consciousness narration, is not a good indicator of quality.

Kim Gräsman

Jun 30, 2009, 12:05:03 AM
to software_cr...@googlegroups.com
Hi Ryan,

On Tue, Jun 30, 2009 at 01:25, Ryan Roberts<ryansr...@gmail.com> wrote:
>
> Are coverage metrics still doable with an open source stack in with .net?
> That's going to be a pretty big stumbling block without sponsorship of some
> sort.

You mean if your project uses one or more open source libraries, its
coverage will drop because you don't write tests for the external
libraries? Most coverage tools allow you to exclude assemblies, e.g.
unit tests, third-party libraries, etc.

- Kim

Christian Horsdal

Jun 30, 2009, 2:24:08 AM
to software_craftsmanship
At my company we routinely use FxCop, SourceMonitor and NCover on .NET
code bases. I find that combination to be very useful.

/Christian
http://horsdal.blogspot.com

Corey Haines

Jun 30, 2009, 7:34:12 AM
to software_cr...@googlegroups.com
Christian,

Could you elaborate a bit on what you get from that combination? What sort of things do you stress in FxCop? I'm not really familiar with SourceMonitor, either.

-Corey

Olof Bjarnason

Jun 30, 2009, 8:04:58 AM
to software_cr...@googlegroups.com


2009/6/30 Corey Haines <corey...@gmail.com>

Christian,

Could you elaborate a bit on what you get from that combination? What sort of things do you stress in FxCop? I'm not really familiar with SourceMonitor, either.

I tried SourceMonitor some. Its complexity measure for methods is nice. It does generate some report XML:
http://www.campwoodsw.com/sourcemonitor.html

But we need tools that are built on OSS libraries, so we can build a "metrics report tool" from those libraries. FxCop is not OSS; I don't know about the others.

One alternative is to contact the authors of these different tools and discuss this transparent metrics idea - maybe they are willing to help us.
 




Torbjörn Gyllebring

Jun 30, 2009, 8:21:01 AM
to software_cr...@googlegroups.com
On Tue, Jun 30, 2009 at 2:04 PM, Olof Bjarnason<olof.bj...@gmail.com> wrote:
>
>
> 2009/6/30 Corey Haines <corey...@gmail.com>
>>
>> Christian,
>> Could you elaborate a bit on what you get from that combination? What sort
>> of things do you stress in FxCop? I'm not really familiar with
>> SourceMonitor, either.
>
> I tried SourceMonitor some. It's complexity measure, regarding methods, is
> nice. It does generate some report XML:
> http://www.campwoodsw.com/sourcemonitor.html
>
> But we need tools that are built on OSS libraries, so we can build a
> "metrics report tool" from those libraries. FxCop is not OSS; I don't know
> about the others.

Gendarme and Smokey are two open and free alternatives.

Olof Bjarnason

Jun 30, 2009, 8:56:40 AM
to software_cr...@googlegroups.com


2009/6/30 Torbjörn Gyllebring <torbjorn....@gmail.com>


On Tue, Jun 30, 2009 at 2:04 PM, Olof Bjarnason<olof.bj...@gmail.com> wrote:
>
>
> 2009/6/30 Corey Haines <corey...@gmail.com>
>>
>> Christian,
>> Could you elaborate a bit on what you get from that combination? What sort
>> of things do you stress in FxCop? I'm not really familiar with
>> SourceMonitor, either.
>
> I tried SourceMonitor some. It's complexity measure, regarding methods, is
> nice. It does generate some report XML:
> http://www.campwoodsw.com/sourcemonitor.html
>
> But we need tools that are built on OSS libraries, so we can build a
> "metrics report tool" from those libraries. FxCop is not OSS; I don't know
> about the others.

Gendarme and Smokey are two open and free alternatives.

Great! I took a quick look at Gendarme; the "Smells" rules look especially interesting.
 




Ryan Roberts

Jun 30, 2009, 9:13:39 AM
to software_cr...@googlegroups.com
No, I was referring to the fact that NCover went closed-source, and I am not sure of the viability of the closed-source version, especially with the upcoming release of a new CLR.

Corey Haines

Jun 30, 2009, 4:36:10 PM
to software_craftsmanship


2009/6/30 Olof Bjarnason <olof.bj...@gmail.com>



But we need tools that are built on OSS libraries, so we can build a "metrics report tool" from those libraries. FxCop is not OSS; I don't know about the others.


I don't understand this statement. All the metrics tools I know of output their results in some sort of format. It is this format which should be used when consolidating and reporting, not changes to the underlying analysis programs themselves. Over time, if a format for the submission service can be settled on, then other tool vendors can begin to incorporate this format as a default option, but I would strongly push back against attempting to alter the tools themselves.

-Corey
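To make that consolidation idea concrete, here is a hedged sketch of such an adapter (Python, illustrative only): it reads an invented XML report shape and maps it onto one common JSON submission payload. None of the element or field names come from a real tool or from the actual service.

```python
# Sketch of a consolidation adapter: parse whatever XML a metrics tool
# already emits and map it onto one common submission payload. The element
# and attribute names ("metric", "name", "value") are invented for this
# example; a real adapter would follow each tool's actual report schema.
import json
import xml.etree.ElementTree as ET

SAMPLE_REPORT = """\
<report tool="some-metrics-tool">
  <metric name="cyclomatic_complexity_avg" value="4.2"/>
  <metric name="method_length_avg" value="11.5"/>
</report>
"""

def to_submission(xml_text: str, project: str) -> str:
    """Convert a tool's XML report into the JSON body to POST upstream."""
    root = ET.fromstring(xml_text)
    payload = {
        "project": project,
        "source_tool": root.get("tool"),
        "metrics": {m.get("name"): float(m.get("value"))
                    for m in root.iter("metric")},
    }
    return json.dumps(payload)
```

One such small adapter per tool, all emitting the same payload shape, is all the submission service would need; the tools themselves stay untouched.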

 

Olof Bjarnason

Jun 30, 2009, 4:54:32 PM
to software_cr...@googlegroups.com


2009/6/30 Corey Haines <corey...@gmail.com>



2009/6/30 Olof Bjarnason <olof.bj...@gmail.com>



But we need tools that are built on OSS libraries, so we can build a "metrics report tool" from those libraries. FxCop is not OSS; I don't know about the others.


I don't understand this statement. All the metrics tools I know of output their results in some sort of format. It is this format which should be used when consolidating and reporting, not changes to the underlying analysis programs themselves. Over time, if a format for the submission service can be settled on, then other tool vendors can begin to incorporate this format as a default option, but I would strongly push back against attempting to alter the tools themselves.

But that means the metric "tool" would consist of:

1) downloading and installing 3-4 different tools
2) configuring those to suit the TMS-site needs
3) running them & creating output files
4) interpreting the result files with a custom tool (right? since the TMS-site cannot mention class/method names, given that it's proprietary software we're dealing with)
5) uploading the resulting file(s) to the TMS-site

Steps 1-2 are cumbersome and error-prone, especially (2). Replacing them with "download and run the TMS tool" would make this a whole lot more useful, don't you think?

Maybe you should explain your idea for this whole project? Our visions might be quite different.








AlexBolboaca

Jun 30, 2009, 4:55:43 PM
to software_craftsmanship
I've tried Gendarme once, but it ran very slowly. Also, the rules
seemed a little strange to me.

I was thinking a little bit about the key requirements for these
tools. I think this would make a starting list, in no particular order:

1. OSS, with a permissive license - we would like everybody to be able
to use them
2. Command-line interface; I find a GUI less important, since you can
plot graphs with other tools, and it annoys the heck out of me to
wait for graphs when data should be enough (.NET profilers are a very
bad example)
3. Easy to embed in development (IDE, build tasks) and build
environments
4. Consistent interface (don't make me think about whether it's -a or
-A, or what -a means in tool X)
5. Works on all main OSes
6. Allows for progress in the definition of done - FxCop rules are all
enabled by default, and you have to choose which ones you need and
disable the rest (i.e. after scrolling through the 700 issues it found
in your project). You should be able to use a tool effectively on your
project immediately after install, and add rules as you advance.
7. Works as fast as possible. SourceMonitor is a good example of
performance.
8. Connectivity: makes it easy to publish the results to various
destinations: as reports, in a database, in other tools.

That's about all I was thinking of. Too much? Too little? Let the
community decide ;-).
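Purely as an illustration of points 2, 4 and 8 above (the tool name and flag names are invented, not any existing tool's interface), a consistent command-line front end with machine-readable output might look like this sketch:

```python
# Sketch of a consistent CLI: one shared flag set across the whole suite,
# plus a machine-readable JSON output mode for connectivity. The flags and
# the "tms-metric" name are hypothetical, invented for this illustration.
import argparse
import json

def build_parser() -> argparse.ArgumentParser:
    p = argparse.ArgumentParser(prog="tms-metric")
    p.add_argument("path", help="source tree to analyze")
    p.add_argument("--format", choices=["text", "json"], default="text")
    p.add_argument("--rules", default="starter",
                   help="rule set to apply (start small, add as you advance)")
    return p

def run(argv: list[str]) -> str:
    args = build_parser().parse_args(argv)
    # A real tool would analyze args.path here; this stub reports no issues.
    results = {"path": args.path, "rules": args.rules, "issues": []}
    if args.format == "json":
        return json.dumps(results)
    return f"{args.path}: 0 issues ({args.rules} rules)"
```

The point is the shape, not the stub: every tool answering to the same flags, defaulting to a small rule set, and able to emit something a database or dashboard can ingest.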

- Alex

AlexBolboaca

Jun 30, 2009, 4:57:38 PM
to software_craftsmanship
For the record, I've only now seen the parallel conversation about
almost the same topic.
-Alex

Corey Haines

Jun 30, 2009, 5:29:16 PM
to software_cr...@googlegroups.com
Ah, I see where the disconnect is.

My project is much more focused on the service, accepting submissions of already-created metrics from existing tools in the different spaces. The first space I'll be pushing toward is Ruby-based metrics. The problem with having a "tool" that people download is that the number of different spaces is large. I could see that being a phase-2 portion, but, at least for the initial space, the tools are already well-defined and easily installable. Also, different spaces could conceivably have differing metrics themselves. There are some, I'm sure, that are relevant in dynamic languages that aren't relevant in others. Functional languages might have differing things that are worth paying attention to, as well.

What space do you generally work in?

I'm working on a longer post about the project, explaining the goals and starting the feature/scenario lists, to prepare for settling down in November to start coding.


-Corey

AlexBolboaca

unread,
Jul 1, 2009, 4:06:03 AM7/1/09
to software_craftsmanship
I was talking about a tool because of one big problem that I have in
the .NET space: the existing tools are only marginally useful, for
various reasons. Even using a set of 3-4 tools, we only get a vague
idea of the code quality.

Let me give you some examples. SourceMonitor is useful for its
cyclomatic complexity and coupling indicators. FxCop is useful for
finding unused code (although it only does this at the library level),
but without providing an indicator - it only reports an issue. NCover,
along with NUnit, MbUnit or another unit testing tool, shows code
coverage. NDepend tries to show dependencies, but it does so in a
very complicated manner. And that's about everything you can get that
relates to the core clean code principles.

To get to the point where you can use those tools, you need a fair
amount of configuration and fiddling around to publish their reports
in one place, and you still need more tools - like one for finding
duplicated code. I've used CruiseControl.NET to do this and it was
painful. Plus, I realize just now that I missed one core point, which
is to aggregate the interesting results into one dashboard report. We
should probably publish this experience somewhere so that other people
can use it (even if at first that means downloading 5 applications and
some configuration files).
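The "one dashboard" step could be as simple as one small adapter per tool that reduces its report to a name/value hash, plus a merge. A hedged sketch of that idea; the tool names and metric keys below are made up for illustration:

```ruby
# Each tool's report, however it is produced, gets boiled down to a
# flat metric-name => value hash by a small adapter; the dashboard is
# then just the merge of those hashes. All names here are illustrative.
def aggregate(*tool_reports)
  tool_reports.reduce({}) { |dashboard, report| dashboard.merge(report) }
end

source_monitor = { "max_complexity"    => 14, "avg_complexity" => 3.2 }
coverage_tool  = { "coverage_pct"      => 78.4 }
duplication    = { "duplicated_blocks" => 9 }

dashboard = aggregate(source_monitor, coverage_tool, duplication)
dashboard.each { |metric, value| puts "#{metric}: #{value}" }
```

The painful part Alex describes (parsing five different report formats) lives entirely in the adapters; the aggregation itself stays trivial.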

As for the metrics, as soon as you describe what you need, I can send
you everything I know about the .NET specifics.

- Alex


Olof Bjarnason

unread,
Jul 1, 2009, 7:00:11 AM7/1/09
to software_cr...@googlegroups.com


2009/6/30 Corey Haines <corey...@gmail.com>

> What space do you generally work in?

Daily C#. Every now and then Python and C++.

I think Alexander summed up my understanding of, and worries about, the "use existing tools and configure them appropriately" approach nicely in his latest posting: it is just too cumbersome and uneconomic for the end user. That leads to confusion, and thus to low confidence in the numbers programmers gather from their code bases.

I still believe working with existing code analysis OSS libraries is a more fruitful long-term road to walk.

The situation in the Ruby community might be different, I guess?
 



--
twitter.com/olofb
olofb.wordpress.com
olofb.wordpress.com/tag/english

Esko Luontola

unread,
Jul 2, 2009, 4:36:52 AM7/2/09
to software_craftsmanship
On Jul 1, 12:29 am, Corey Haines <coreyhai...@gmail.com> wrote:
> What space do you generally work in?

Java and in the future more and more in Scala.

It would be important to have a standardized set of metrics tools,
because different tools can produce slightly different numbers even
when they are measuring the same thing. For example, code coverage can
vary by 1-5% depending on which tool you use to measure it. That would
make it hard to compare metrics coming from different tools.
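One way a submission service could keep such comparisons honest is to record the measuring tool alongside each value, and only ever compare like with like. A small illustrative sketch; the record shape is my assumption, not part of any proposal above (Emma and Cobertura are just two real Java coverage tools used as example names):

```ruby
# Record the tool next to the value, so a coverage number produced by
# one tool is never compared directly against one from another tool.
Metric = Struct.new(:name, :tool, :value)

def comparable?(a, b)
  a.name == b.name && a.tool == b.tool
end

a = Metric.new("coverage_pct", "emma",      87.5)
b = Metric.new("coverage_pct", "cobertura", 89.1)
puts comparable?(a, b)  # same metric, different tools => false
```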