Date: Thu, 12 Feb 2009 23:28:18 -0600
Subject: Re: [rails-business] Re: Introducing RMM, the Rails Maturity Model
From: Rick Bradley <r...@rickbradley.com>
On Thu, Feb 12, 2009 at 11:08 PM, Obie Fernandez wrote:
>> The reason that CMM is so reviled amongst programmers is because the
>> system is inherently corrupt. The CMM evaluators are paid by the
>> organizations they evaluate.
> Totally. What if you had to pay a non-refundable fee to RMM upfront in
> order to get evaluated? We're not such a huge "industry" that we
> couldn't use that money to compensate a neutral and properly-qualified
> auditor -- one that wouldn't be subject to coercion or bribing.
We've discussed the potential of a somewhat objective audit before.
One of the things I'm personally still unclear on is whether doing
these audits is something that works well in a distributed (aka
"meatcloud") fashion, or whether it works better in a centralized
fashion. Does the emergent intelligence of crowds (the same
intelligence that drives large-scale open source to success) sift out
the identity of the high-quality software teams, or can such quality
only really be discerned by other high-quality teams? If the latter
is the case, there's a clear bootstrapping problem.
Regardless, even if it turns out that the builders of good software
can only be recognized by those rare quality practitioners, there's
the unfortunate ballast of experience: every certification or
large-scale auditing process in software that I can recall has, for
all practical purposes, failed -- due to widespread skepticism,
outright corruption, or inherent bias.
It's been my limited experience, in auditing software and working
with (and after) other developers, that there are a number of
objective metrics one can put into play. Any one of them can be
"gamed", as has been hinted earlier in this discussion. In aggregate,
however, it is very difficult to game the majority of them.
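
To make that concrete, here's a back-of-the-envelope Ruby sketch. The
metric names, weights, and numbers are all invented for illustration
(not a proposal for actual RMM axes); the point is only that a
composite of several independent checks is harder to game than any
single one:

    # Each metric maps a team to a 0.0..1.0 score. Individually every
    # one of these is gameable; the composite is what matters.
    METRICS = {
      test_coverage:    ->(team) { team[:coverage] },
      deploy_frequency: ->(team) { [team[:deploys_per_month] / 20.0, 1.0].min },
      defect_escapes:   ->(team) { 1.0 - team[:escaped_defect_ratio] },
      rework:           ->(team) { 1.0 - team[:rework_ratio] },
    }

    def composite_score(team)
      scores = METRICS.values.map { |metric| metric.call(team) }
      scores.inject(:+) / scores.size  # a median resists single-axis gaming even more
    end

    team = { coverage: 0.85, deploys_per_month: 12,
             escaped_defect_ratio: 0.1, rework_ratio: 0.2 }
    puts composite_score(team)  # => 0.7875
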
Furthermore, in reality, where we happen to live, there seems to be
a power law at work: the vast majority of software and teams plying
our trade are so bad that they fail nearly every metric we could
reasonably apply, and only a vanishing minority of products and
practitioners pass even a single gameable metric. Yet we ultimately
have little means of discerning for ourselves, for our clients (or
for differentiating for our future clients) the difference between
the one and the other.
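
A quick simulation of that intuition (the distribution and the
threshold here are invented, purely to illustrate the shape of the
problem):

    # If "team quality" were Pareto-distributed (alpha = 2), almost
    # nobody would clear even a modest bar.
    alpha = 2.0
    scores = Array.new(1000) { 1.0 / (1.0 - rand) ** (1.0 / alpha) }
    passing = scores.count { |s| s > 5.0 }
    puts "#{passing} of 1000 clear the bar"  # typically around 40
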
Being able to deploy a standard of comparison (whether distributed
or centrally administered and computed) that forces products or teams
either to score in the vast gutter of mediocrity, or to work
diligently at gaming their performance on a number of axes in order
to separate from the pack (when such gaming is, realistically, often
more difficult than just working to better oneself) would be a vast
improvement on the current and long-standing state of affairs.
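
Continuing the invented example above, gaming a single axis barely
moves a composite taken across several:

    # Inflating coverage from 0.85 to a perfect 1.0 shifts the
    # four-axis mean by only 0.15 / 4 -- under four points in a hundred.
    gamed = team.merge(coverage: 1.0)
    puts composite_score(gamed) - composite_score(team)  # => ~0.0375
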
In my limited experience the difference is usually striking and
obvious: "this is trash" vs. "wow, this is pretty good". In the
Rails community the latter is really only elicited by the products of
about 10-20 shops (those shops Obie was trying to catalog in one of
his solicitations this evening). It would be great if we could get
that number up to 50, 100, or even 1000. I'd like to believe that
putting a workable set of metrics out there would eventually bias
progress in that direction.