The name and concept are somewhat influenced by CMM, which I suppose is
anathema to most Agilistas. I'm the first to admit that I don't know
much about CMM other than that it provides a scoring system for
organizations.
In a Rails context, you would establish a scorecard that categorizes
your shop on a scale of 0 to 3 (or whatever, just thinking out loud
here). A rough sketch in code follows the level descriptions below.
RMM0 "aka Cowboy level"
- No formal development process
- No test coverage
- No standardized business practices
- Static analysis failures
RMM1 and RMM2 would have to be something in between: RMM1 is
considered negative, RMM2 positive.
RMM3 "aka Master level" (Purposely exclusive territory here, I can
imagine that only a handful of shops in the world could achieve this
level!)
- Agile software development practices
- 100% test coverage WITH THE APPROPRIATE TYPES OF TESTS (For
instance, at Hashrocket lately we are doing much less unit testing at
the MVC level because automated acceptance and integration testing
with Cucumber is so powerful and effective.)
- 100% pair-programming (muahaha)
- Formal and standardized business practices
- Institutionalized continuous learning and process improvement
- Positive customer testimonials
- Successful deployment of Rails application(s) with substantial scaling demands
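To make the strawman concrete, here's a minimal Ruby sketch of how
such a scorecard might hang together. Every level name, criterion
string, and scoring rule in it is hypothetical -- something to poke
at, not a proposed standard:

  # Hypothetical RMM scorecard -- the levels and criteria below are
  # illustrative only, echoing the bullets above.
  RMM_LEVELS = {
    0 => ["No formal development process",
          "No test coverage",
          "No standardized business practices",
          "Static analysis failures"],
    3 => ["Agile software development practices",
          "100% test coverage with appropriate types of tests",
          "100% pair-programming",
          "Formal and standardized business practices",
          "Institutionalized continuous learning and process improvement",
          "Positive customer testimonials",
          "Successful deployment at substantial scale"]
  }

  # A shop matching any RMM0 criterion scores 0; a shop satisfying every
  # RMM3 criterion scores 3; anything else lands in the undefined middle.
  def rmm_level(criteria_true_of_shop)
    return 0 if RMM_LEVELS[0].any? { |c| criteria_true_of_shop.include?(c) }
    return 3 if (RMM_LEVELS[3] - criteria_true_of_shop).empty?
    nil # RMM1/RMM2 -- the in-between levels still need defining
  end

The in-between levels are exactly the part that needs fleshing out,
which is why the sketch punts on them.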
Incidentally, I'm bringing this up for discussion, but I might be
interested in a joint venture along these lines with an out-of-work
senior Rails developer who feels like taking the idea and running
with it. (Rick?) In fact, I can envision the idea expanding to the
point where being an RMM auditor could be a profitable little
part-time gig for Rails freelancers with the right personality type
and enthusiasm.
Obie Fernandez
CEO & Founder | Hashrocket
904.435.1671 office
404.934.9201 mobile
Hashrocket, Inc.
320 N 1st Street
Suite 712
Jacksonville Beach, FL 32250
> - 100% test coverage WITH THE APPROPRIATE TYPES OF TESTS
Who decides what's appropriate? According to what metrics? Is this
testing all possible branches? And if you're doing integration
testing, how do you cover all possible datasets? This just seems way
too hard to quantify and too easy to game.
> - 100% pair-programming (muahaha)
Again, this is way too easy to game; you could hire a seasoned expert
and a "net-loss" programmer and stick them together, and you'd have
"pair programming." And if this means that XP is the One True Agile
Process, that cuts Crystal Clear, Scrum, DSDM and others completely
out of the loop, and that can't be right...
> - Formal and standardized business practices
> - Institutionalized continuous learning and process improvement
> - Positive customer testimonials
Again, this may have little or nothing to do with the quality of the
work done. There have been plenty of technical snow jobs that the
client didn't discover until well after the fact.
Honestly, I think the only thing that counts is the positive
testimonials of your peers; by which I mean that the next engineer
coming in after you've left, taking over your code, should be happy
with it and can stand up and defend it as if it were his own.
There have been places I've come into where I've seen good quality
code holding the system together, and I didn't have to ask who had
written it; it was obvious just from looking at it that these guys
were from a pro consultant shop.
Ironically, what really made it obvious to me was not that they'd used
DSLs, metaprogramming or self-modifying code; it was that they clearly
knew when it was appropriate to use those tools and when not to. In
some cases they specifically went with a lower tech solution because
it was more maintainable, even though it wasn't quite as cool as what
they wishfully described in the comments. They'd clearly considered
their audience, and were writing with the next guy in mind.
> - Successful deployment of Rails application(s) with substantial scaling demands
See, this is so nebulous, and so hard. Scaling is not something you
can do in the context of a six month project, and the nature of a
software application is to have bottlenecks and features that pop up
in the oddest places in the strangest ways. You can quote the CAP
theorem and wave memcached around, but I don't see how you can say
"substantial scaling" without quantifying it further.
The reason that CMM is so reviled amongst programmers is because the
system is inherently corrupt. The CMM evaluators are paid by the
people who are being evaluated, and they're the ones who make an
industry out of giving people high marks for having a process. Having
a high CMM level doesn't mean you actually produce software more
efficiently (although there's a Construx presentation that says
differently, I believe they're taking the evaluators' reports and
repackaging them) -- it just means that you have a well-defined
process, and (at the highest level) can predict the number of bugs
you get out of that process.
Will.
The auditor, based on guidelines established by a group like this one?
Believe me, I realize the challenges in making something like this work
in practice. The auditors would have to have "know it when I see it"
capabilities for judging code quality.
>> - 100% pair-programming (muahaha)
>
> Again, this is way too easy to game; you could hire a seasoned expert
> and a "net-loss" programmer and stick them together, and you'd have
I run one of the only shops that I know of that does proper and
effective pair-programming ALWAYS, even onsite with clients, etc.
Hence the evil laugh... the only other shop that I know for a fact
operates similarly is Pivotal. I admit that it might not be vital to
attach this criterion to RMM -- on the other hand, good Agile
practices are indispensable nowadays.
> "pair programming." And if this means that XP is the One True Agile
> Process, that cuts Crystal Clear, Scrum, DSDM and others completely
> out of the loop, and that can't be right...
Flawed reasoning. Pair-programming is included in XP but is really its
own thing, apart from the particular Agile implementation you choose.
Besides, XP is the only Agile methodology that I'm aware of which
"manages down" to the developer level and dictates how developers
operate. Scrum and the others manage up, in terms of releases and
stakeholders. Right? We pick and choose whatever works from XP,
Scrum, etc.
>> - Formal and standardized business practices
>> - Institutionalized continuous learning and process improvement
>> - Positive customer testimonials
>
> Again, may have little or nothing to do with the quality of the work
> done. There have been plenty of technical snowjobs done that the
> client didn't discover until well after.
Dunno about that. You might be able to snow your clients, but you
shouldn't be able to snow the auditor.
>
> Honestly, I think the only thing that counts is the positive
> testimonials of your peers; by which I mean that the next engineer
> coming in after you've left, taking over your code, should be happy
> with it and can stand up and defend it as if it were his own.
Agreed in principle. In practice, the client devs taking over are
usually not as good as we are and don't have a basis to judge, nor do
I see a good system for tallying that data. Actually, we've known
client devs to fuck up our code almost immediately. One of the worst
feelings in this business... :-/
> There have been places I've come into where I've seen good quality
> code holding the system together, and I didn't have to ask who had
> written it; it was obvious just from looking at it that these guys
> were from a pro consultant shop.
That's what I would want the auditor to do.
>
> Ironically, what really made it obvious to me was not that they'd used
> DSLs, metaprogramming or self-modifying code; it was that they clearly
> knew when it was appropriate to use those tools and when not to. In
> some cases they specifically went with a lower tech solution because
> it was more maintainable, even though it wasn't quite as cool as what
> they wishfully described in the comments. They'd clearly considered
> their audience, and were writing with the next guy in mind.
True.
>> - Successful deployment of Rails application(s) with substantial scaling demands
>
> See, this is so nebulous, and so hard. Scaling is not something you
> can do in the context of a six month project, and the nature of a
> software application is to have bottlenecks and features that pop up
> in the oddest places in the strangest ways. You can quote the CAP
> theorem and wave memcached around, but I don't see how you can say
> "substantial scaling" without quantifying it further.
I agree. Needs a lot more thought, but in theory there must be some
way of establishing whether a shop has delivered real, scalable
solutions rather than just toy apps.
> The reason that CMM is so reviled amongst programmers is because the
> system is inherently corrupt. The CMM evaluators are paid by the
> snip...
Totally. What if you had to pay a non-refundable fee to RMM upfront in
order to get evaluated? We're not such a huge "industry" that we
couldn't use that money to compensate a neutral and properly qualified
auditor -- one who wouldn't be subject to coercion or bribery.
We've discussed the potential of a somewhat objective audit before.
One of the things I'm personally still unclear on is whether doing
these audits is something that works well in a distributed (aka
"meatcloud") fashion, or whether it works better in a centralized
fashion. Does the emergent intelligence of crowds (the same emergent
intelligence that drives large-scale open source to success) sift out
the identity of the high-quality software teams, or can such high
quality only really be discerned by other high-quality software teams?
If the latter is the case there's a clear bootstrapping issue.
Regardless, because it may nonetheless be the case that the builders
of good software can only be recognized by rare quality practitioners,
there's the unfortunate ballast of experience: every certification or
large-scale auditing process in software that I can recall has, for
all practical purposes, failed due to either widespread skepticism,
outright corruption, or inherent bias.
It's been my limited experience in auditing software and working with
(and after) other developers, that there are a number of objective
metrics one can put into play. Any of them can be "gamed", as has
been hinted at earlier in this discussion. In aggregate, however, it is
very difficult to game the majority of them.
Furthermore, in reality, where we happen to live, there seems to
actually be a power law at work: the vast majority of software and
teams plying our trade are so bad that they fail nearly every metric
we could reasonably apply. Only the vanishing minority of products
and practitioners pass even a single gameable metric. Yet we
ultimately have little means of discerning for ourselves, for our
clients (or for differentiating for our future clients) the difference
between one and the other.
To be able to deploy a standard of comparison (whether distributed or
centrally administered and computed) that forces products or teams to
either score in the vast gutter of mediocrity, or work diligently to
at least game performance on a number of axes (when such gaming is,
realistically, often more difficult than just working to better
oneself) to separate from the pack -- would imply a vast improvement
in the current and long-standing state of affairs.
In my limited experience the difference is usually striking and
obvious: "this is trash" vs. "wow, this is pretty good". In the
Rails community the latter is really only elicited by the products of
about 10-20 shops (those shops Obie was trying to catalog in one of
his solicitations this evening). It would be great if we could get
that number up to 50, 100, or even 1000. I'd like to believe that
putting a workable set of metrics out there would eventually bias
progress in that direction.
Best,
Rick
> We've discussed the potential of a somewhat objective audit before.
It will not work on a distributed basis because of issues related to
privacy and confidentiality. I simply can't publish 90% of the work we
do in any significant way because my clients would freak out. On the
other hand, getting an auditor in here with a signed NDA for a day or
two to review our code and processes seems doable.
>
> If the latter is the case there's a clear bootstrapping issue.
Why? Because it would take an objective auditor to step up to the
plate? Because he would need to be vetted by some initial group, which
would itself have nobody to audit it? I don't think that challenge
is insurmountable.
> Regardless, because it may nonetheless be the case that the builders
> of good software can only be recognized by rare quality practitioners,
> there's the unfortunate ballast of experience: every certification or
> large-scale auditing process in software that I can recall has, for
> all practical purposes, failed due to either widespread skepticism,
> outright corruption, or inherent bias.
I wouldn't consider that a reason not to try.
> It's been my limited experience in auditing software and working with
> (and after) other developers, that there are a number of objective
> metrics one can put into play. Any of them can be "gamed", as has
> been hinted at earlier in this discussion. In aggregate, however, it is
> very difficult to game the majority of them.
That's right.
> Furthermore, in reality, where we happen to live, there seems to
> actually be a power law at work: the vast majority of software and
> teams plying our trade are so bad that they fail nearly every metric
> we could reasonably apply. Only the vanishing minority of products
I suspect that's everywhere, not just where you live. We are asked to
rescue atrocities on a regular basis.
> and practitioners pass even a single gameable metric. Yet we
> ultimately have little means of discerning for ourselves, for our
> clients (or for differentiating for our future clients) the difference
> between one and the other.
Clearly published standards would go a long way towards starting to
establish criteria for judgment. It will be a challenge to write the
criteria descriptions in such a way that they educate interested
parties about what they should be seeking. If the project succeeded
at that, it would be an educational resource and a net positive for
the overall community. Wouldn't it?
> To be able to deploy a standard of comparison (whether distributed or
> centrally administered and computed) that forces products or teams to
> either score in the vast gutter of mediocrity, or work diligently to
> at least game performance on a number of axes (when such gaming is,
> realistically, often more difficult than just working to better
> oneself) to separate from the pack -- would imply a vast improvement
> in the current and long-standing state of affairs.
Right! I fully expect that if such a system were put into place, only
a handful of companies would qualify for the top-level RMM
certification. Maybe a slightly larger group would qualify for the
level right below it, and consider that a selling point. To the
extent that the rest of our "industry" admired the people at the top,
they might work towards qualifying. Idealistic, yes...
> In my limited experience the difference is usually striking and
> obvious: "this is trash" vs. "wow, this is pretty good". In the
> Rails community the latter is really only elicited by the products of
> about 10-20 shops (those shops Obie was trying to catalog in one of
> his solicitations this evening). It would be great if we could get
> that number up to 50, 100, or even 1000. I'd like to believe that
> putting a workable set of metrics out there would eventually bias
> progress in that direction.
Yes, yes, yes! Amen, brother! Thanks for clearly expressing what I was
suggesting!
Cheers,
Obie
:-)
My suggestion, then, would be that the process be two-tiered. The
first tier is a suite of basically automated metrics that most
software in the field will probably fail spectacularly: test
coverage, cyclomatic complexity, lack of revision control :-),
revision control code churn, class/module/function coupling, etc.
There's a maintained, publicly available package of tools along with a
set of accompanying standards documents. Anyone can pull the suite of
tools, run them over their software, and derive an aggregate metric
that's gameable, but that tends to indicate that at least something is
probably going right if the scores are high. The tools needed to do
this are in large part already available and open source.
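As a strawman for the aggregation step, the scoring could be as dumb
as rating each reading against a target and averaging. This is a
minimal Ruby sketch under invented assumptions: the metric names,
targets, and sample readings are all made up, and the real inputs
would come from whatever open source tools the suite standardizes on
(rcov-style coverage, flog-style complexity scores, churn from the
repository, and so on):

  # Hypothetical tier-one aggregation. Every metric name and target
  # here is invented for illustration, not a proposed standard.
  METRIC_TARGETS = {
    :test_coverage_pct   => { :target => 95.0, :higher_is_better => true  },
    :avg_flog_per_method => { :target => 10.0, :higher_is_better => false },
    :churn_per_week      => { :target => 50.0, :higher_is_better => false }
  }

  # Rate each reading 0.0..1.0 against its target, then average.
  def tier_one_score(readings)
    scores = readings.map do |metric, value|
      t = METRIC_TARGETS.fetch(metric)
      ratio = t[:higher_is_better] ? value / t[:target] : t[:target] / value
      [ratio, 1.0].min
    end
    scores.inject(:+) / scores.size
  end

  tier_one_score(:test_coverage_pct   => 87.0,
                 :avg_flog_per_method => 14.2,
                 :churn_per_week      => 120.0)
  # => ~0.68 on this made-up scale

The exact formula matters much less than the fact that anything this
mechanical can be self-administered and published cheaply -- which is
the whole point of tier one.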
The second tier would be a rigorous professional audit of code and/or
practices -- one that can be conducted under NDA -- by experts in
developing and assessing software and practices. I'm presuming here
that we dodge the bootstrapping problem by saying (a) those who care
can probably identify the proper auditors, and (b) if we make the
audit process open enough, then a bad bootstrapping could be
supplanted by better experts following the same recipe.
The gateway to the second tier is a reasonable score on the first
tier, as well as presumably some sort of cost barrier -- to both
defray costs and to squelch overwhelming demand to use the auditing
resource. In theory one could get a limited per-application expert
audit or a more comprehensive team-and-process audit.
The net result, presuming the two tiers are evolved over time to try
to promote and identify the best techniques and methodologies, would
be to provide a way for developers to tune their processes cheaply, to
really hone their craft (more expensively), and to be able to
differentiate quality from hackery.
Best,
Rick
> No client has ever come to us and asked about certification,
> insurance, etc. They look at what we've done, talk to our clients,
> and have candid conversations with me.
Agreed.
Clients do ask about insurance in contracts and whatnot. But no one's
ever demanded certification. To be fair, none exists for Rails, so
there's nothing to ask for. But even when we were an all-Microsoft
shop, we were never Microsoft Certified. But no one cared. We did
great work and that's what mattered.
I say, let the markets decide.
Colin
Colin A. Bartlett
Kinetic Web Solutions
http://blog.kineticweb.com
Sorry, that was only meant to say that you can't *buy* a top-level RMM
grading. That's all.
Mike
+1
Jonathan
--
Jonathan Weiss
http://blog.innerewut.de
http://twitter.com/jweiss
>
> On Feb 13, 7:26 am, Colin Bartlett <colinbartl...@gmail.com> wrote:
>> But even when we were an all-Microsoft
>> shop, we were never Microsoft Certified. But no one cared. We did
>> great work and that's what mattered.
>
> I agree with Colin. Great results is what matters and a free market
> can sort that out.
It's a nice thought, but there isn't a well-functioning market here.
Transaction costs are non-trivial, the products aren't commodities,
and the information asymmetry is huge.
This situation isn't unique to software. I had a horrible time trying
to find an accountant and an attorney because I don't know enough
about either field to judge the worthiness of my potential providers.
Our clients are in the same position. After several false starts, I
ended up finding good people via recommendations from people I trust.
That's the same way we get most of our business. It allows us to go
into an engagement with a little trust. Of course, there is a
self-selection problem: most people don't give out references of
clients that weren't happy. (We do, but I think we're in the minority
there.)
I mention other fields because this problem is universal and isn't
well solved in any field. I got my first attorney from a trade
organization similar to the one proposed here. He wasted a ton of my
money and never delivered the goods. The trade organization is only
helpful if the people who run it are trustworthy. That leads us back
to the same information asymmetry.
I wish I had a brilliant solution to this problem. I would just
recommend that before an organization is created, you look at what
problem you're trying to solve and think closely about whether or not
the organization solves the problem.
Mike
--
Mike Mangino
http://www.elevatedrails.com
+1
Roland
--
Moriz GmbH
Hedwig-Dransfeld-Allee 14
80637 München
Tel: 089/78795079 (answering machine)
Authorized managing director: Roland Moriz
Court of registration: Amtsgericht München
Registration number: HRB 174 294
VAT ID: DE260422784
The ultimate argument against certification is decertification.
Do you really want to give so much power to some Ubergeek that they can
_de_certify you, without appeal? For the certification to gain industry
acceptance, it must have teeth. And if clients discover that the best works of
art are hung in the "Salon des Refusés" (Gallery of Rejects), then they will
start shopping there, instead!
As I mentioned in the other thread, the concept is interesting. The
responses have been very interesting as well. However, with any
process/change, it's important for us to assess the cost that would
result.
Certification (or whatever you want to call it) would cost teams time
and money. This then gets passed on to their clients, which raises
their costs. Now, you could argue that clients who work with shitty
development teams incur a big cost as well, but we don't have
enough real data to suggest that a certification process would save
clients money in the long run.
Given the situation we face with the economy, we need to be looking
at ways to remove unnecessary costs (time + money) from our processes,
and my initial response to the concept is that this would have the
opposite result. As others have said, there is no *right* way
(globally) to design and develop web applications with Ruby on
Rails. I suspect most of our teams work in very different ways, and
whether that is a good thing or not is up for debate.
What is the problem that this idea is trying to solve? I don't think
that's been clearly established and would be more interested in
learning more about that before we jump to possible solutions.
Robby
--
Robby Russell
Chief Evangelist, Partner
PLANET ARGON, LLC
design // development // hosting w/Ruby on Rails
http://planetargon.com/
http://robbyonrails.com/
http://twitter.com/planetargon
aim: planetargon
+1 503 445 2457
+1 877 55 ARGON [toll free]
+1 815 642 4068 [fax]
There's a great book, _Measuring and Managing Performance in
Organizations_, which makes this point exactly: if you grade people
against a metric, they will optimize for the metric rather than for
the intangible quality that the metric is intended to measure.
Still, according to Tom Gilb, "anything you need to quantify can be
measured in some way that is superior to not measuring it at all."
Will.
Hysterical.
> Seeing any sort of praise for the MS certifications is pretty funny,
> given that you can just buy the MS books, memorize them cover to
> cover, and pay MS the exam fees to work with MS products for MS clients.
> It's just another money maker for them, and they care very little for
> how qualified people are.
You can also work for an MS certified software vendor, with the right number of
real MS system administrators and software engineers, and watch in horror as it
leaves a very large crater in the ground, too!
--
Phlip
One question to ask about the motivation behind a "certification body"
of some sort is whether the intent is to
establish a minimum standard of operational efficiency (so that clients
who hire based on it can be sure they aren't
dealing with complete idiots), or whether to establish some objective
measurement of quality (good/better/best/awesome).
It gets me thinking about whether this body would be more of an OSHA
(minimum safety standards), an FDA (reasonable
standards for drugs), or <fill-in-your-government-agency-here>.
I think I see some of the positive intent behind the idea, and I
probably agree with some of those intentions, but given that
no one worth their salt seems to take existing certifications very
seriously, I have a hard time thinking that "we" (the Rails community)
will somehow come up with a certification system that really means
something. Best case, it will establish some very baseline, minimum
standard that cannot be used to distinguish between anything more than
complete crap and not-crap. I suspect that the people who came up with
CMM, and CNEs and MCSEs and what-have-yous, probably attempted to
inject lots of "meaning" into their certifications, and may have
mostly failed.
Also, if the body that provides the certifications is private, there
will be some level of corruption involved in the giving out of
certifications, I am pretty certain.
Wes Gamble
* About "Ruby/Rails Certification":
I completely agree with Jeremy: imho this is the more constructive
path we could take, because it keeps us moving as a community that
promotes activity, diversity and innovation, and not as a certified
elite group, a bunch of wannabes, and a big list of talented
developers disappointed with the separatist direction their community
is taking.
I also agree with DrNic and Bryan Liles: I've seen different ways to
develop high quality software. Even in the relatively small Rails
community we have great developers who don't pair all the time and
others who don't entirely buy the "always TDD" approach. So yeah,
your teams are doing great stuff, I'm sure, but please don't call your
practices "The Rails Way" ;) because that is a way that works for
*your teams*, with *your projects* and *at this precise moment*. I'm
pretty sure that, even with your well-proven and high quality
development processes, you are all experimenting with new techniques.
And if you don't, I would consider that a bad smell ;) If you find a
great new technique, should we all upgrade our certificates?
Ruby people are well known for their innovation and creativity,
applied not only to their libraries and frameworks but also to their
approaches and techniques. Even the opinionated Rails codebase is
changing to become more flexible, so why would we want to restrict
ourselves to a strict set of practices and assume it is the One Way
to Go?
* About "Agile Certification":
Some experienced people have criticized the certification concept. I
think that all certifications - even the ones which started as an
attempt to improve and spread best practices through standardization -
become the same thing in the long term: a business for certifiers, a
memory-exercise-to-win-a-badge for candidates, and a poor selection
criterion.
Take a look at one of your certification materials, any of them. It's
obsolete now, isn't it?
In my humble opinion, agileness is not only a group of techniques but,
also and more importantly, a commitment to a state of mind which
implies activity, reaction, evolution and adaptability. And to me,
certification looks like the opposite.
Best regards,
--
Raul Murciano - Freelance Web Developer
http://raul.murciano.net
To that extent, an apprenticeship program is really your only solid bet. You need to test-infect these
developers, along with getting them hooked on a bunch of other best practices, to the point where
they can't conceive of living without them. But, to do that, they need to be in a position where
they're inundated with those practices and the social culture is appropriately corrective of bad
practices.
An apprenticeship style of system is inevitably going to be a drag on existing, successful programs
-- at least in terms of productivity, if not in strict terms of income. It's an investment in the
community. So if you're interested in investing in the community that way, jump on in. The Rails
community will be ecstatic.
Note that these issues aren't unique to Rails. The entire development world is riddled with poor
developers. As Obie and I were talking about on Twitter, what's going on here is that Rails is
hitting the mainstream, so knowing Rails is no longer enough to validate a developer's skill.
Without that informal check, the tendency is to want to define a formal check (a "maturity model").
Unfortunately, none of those really work.
BTW, it would seem a lot less enterprisey if it wasn't presented as a collection of standards which
teams have to check off. And it was doubly harsh when presented as "I know the right way to write
software, and so if you're not doing it this way, you're not as good as me, and I'd like to
formalize just how not as good as me you are". (See Obie's explanation of his "mwahahaha" for
evidence of how that's working.)
My co-blogger, Brian Hurt, pointed out something while we were talking about this yesterday.
Certifications are about *getting* a job, whereas what's important is how you do *in the job*.
Certifications are a way for people who don't have experience in software to prove they know
software to other people who don't know software.
For job-seekers who have experience, they don't need certifications: their experience is more
meaningful than any certification. For those position-fillers who know software, they can ask to
look at code, do a technical interview, or a bit of pair programming.
As far as I'm concerned -- as someone who knows software -- open source can play the role of
certification for me. Show me an interesting/useful project you started, or point me to bugs that
you submitted patches for. *Now* you've got my attention, because it shows you 1) are engaged in
the wider development community, and 2) code well enough to not be afraid of putting your code out
there. And I can take a look at what you did and see both the kind of code you write and the way
you carry yourself in a semi-professional context.
Exactly.
One of the uses of a language is to serve as a barrier to entry. It
used to be that Java was only known and used by hotshot
programmers. Now it's Ruby. Next it's probably Erlang.
In each case, you start off with some very highly motivated people,
have some initial successes, get a nice chorus going. And then as the
community matures, you start taking on people who know the words but
not the music, or just sort of hum along... and as more people join
in, you end up with a spectacular cacophony that prompts the early
adopters to leave and set up shop somewhere else. Ultimately Ruby
will go the same way.
I don't think that a Rails Maturity Model, or indeed any kind of
application specific Maturity Model will work, because ultimately
you're not looking for a specific language or an application; you're
looking for a solution. The only solution you're going to get is from
good programmers working as a team.
If there were some easy way to tell the quality of a team or
individual programmer, we'd take it. There are indicators you can
use; if a programmer or company has open source contributions then you
can do reviews of their work. You can take customer testimonials, you
can call competitors or non-local teams (they'll be much more plugged
in to who's doing what) and you can do interviews, or even ask for
prototypes or sample "starter" projects to see how the team operates.
But ultimately, the quality of the review you do is dependent on the
work that you put into it. If you're relying on a piece of paper to
do your review, you're probably not doing enough due diligence in the
first place.
Will.
And there are those who say that this has already happened. It's getting
too bloody easy to find two dozen plugins that all say they do what you
want, but all of which are utter shite.
> One of the uses of a language is to serve as a barrier to entry. It
> used to be that knowing Java was only known and used by hotshot
> programmers. Now it's Ruby. Next it's probably Erlang.
Gawd, please don't put the mojo on Erlang. I've moved into it to try and
find something that isn't being flooded with the great unwashed. I think
it'd break me if it went the same way as Rails is going.
- Matt
--
Logan Shaw's Zen of ASR Computing: free yourself of the desire to have
computers work properly for this is the root of all suffering.
Totally backing off the idea of certification, but still think a
defined RMM is a good idea.
Cheers,
Obie
Great feedback for sure. As I mentioned in the Rubiverse podcast with
Mike, you can take the data that will be freely available on RMM and
use it as the basis of a curriculum grounded firmly in the real
practices of successful Rails shops.
Cheers,
Obie
PS: The link to that podcast is
http://rubiverse.com/podcasts/6-obie-fernandez-on-rails-maturity-model
As I wrote on the blog:
--- 8< ----
Better, but still not quite right. The trouble with such broad labels
is: a "Master" is defined by what?
Mastery in terms of individual skill I can believe. However, even
then, mastery is at best temporary and at worst a fallacy, as
techniques in this community continuously evolve.
Mastery of Rails? Whoa! Sign me up!
Oh wait....
I may be awesome at designing and writing model classes, but master
of everything in the stack? The front end as well, including
JavaScript, CSS, design, et al.? I doubt that there are more than
perhaps two dozen people who can claim that level of expertise and
have the chops to back it up.
(What I left out of the blog comment but I'll say here: I more than
suspect that there are plenty of people with egos big enough to make
such ridiculous claims anyway.)
Do you remember Hampton Catlin's Ruby survey, perchance? See http://hamptoncatlin.com/2008/hampton-s-ruby-survey-2008-results
Unfortunately, he doesn't have the statistical results posted, but he
has the source YAML. The outcome was laughable! Some ridiculous
number, on the order of 20% IIRC, consider themselves Ruby Masters.
I sincerely doubt that 1 in 5 Rails developers are masters.
Color me skeptical.
Evan