>Frankly, free scares me.
And for-profit scares me.
Yet a lot of user groups and conferences seem to make things happen relatively smoothly, with in-kind contributions, volunteer resources, and practically no money. (http://www.sdtconf.org, www.agilephilly.com, www.agiletour.org)
I think that a low barrier to entry, like a $10-100/year subscription to a web site that allows both self-assessments and peer assessments, could end up providing enough information about folks that it's useful to employers, clients, and employees. We could encourage low-cost assessments as well as high-confidence assessments--that is, score a "reviewed and rated somebody's self-assessment" quest lower than a "day of pairing at another company". As Laurent Bossavit mentions at WeVouchFor.org, there could be ways to game the system, but with enough currency being exchanged, the level of trust for people who are actually good would distinguish them from the fakes.
I think the ultimate solution has to be a blend of low tech and high tech, both for cost savings and to keep it open to people who don't live in North America or speak English.
>Will the volunteer model be sustainable?
I'm beginning to believe no. A bartering system? Maybe. As you mentioned with the iPhone apps model, I think there's a lot of flexibility to let the "market" hash it out. Let's design something that keeps the doors open for both free and paid versions, and then we'll see what happens. I think that the free option will be a corruption-proofing feature that will keep the commercial trainers honest (not to say that any are dishonest, but I do wonder about the prices they charge sometimes).
>Quests could be contributed by people and companies.
The way I see it, the open nature of defining quests will keep this assessment system from becoming obsolete, conservative, or anti-innovation. Anyone can define a quest; then they need to get a significant number of qualified individuals (20?) to endorse it and agree to its worth in points, and voilà--it goes live. Since an individual hoping to get points has intrinsic motivation, they'll be responsible for doing the data entry for their own quest--but for most quests this doesn't redeem points until some other game-player endorses it. This endorsement process bestows a very small number of points on the reviewer--and in fact this is a good thing too, because reviewers end up learning about what other people are doing in the community. Other quests could be defined as a trainer/student relationship, in which case money as well as points changes hands, but that's worked out between the interested parties. I think that point exchange would go best with a written retrospective-like review--rating 0-9, features to keep for next time, and things to change.
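To make that concrete, here's a minimal sketch of the endorsement flow in Python. The threshold, point values, and reviewer bonus are hypothetical numbers lifted from the discussion above, not a proposal:

```python
from dataclasses import dataclass, field

ENDORSEMENTS_TO_GO_LIVE = 20   # hypothetical threshold (the "20?" above)
REVIEWER_BONUS = 1             # small reward for endorsing someone else's completion

@dataclass
class Quest:
    title: str
    points: int                 # agreed worth, e.g. low for a reviewed self-assessment,
                                # high for a day of pairing at another company
    endorsers: set = field(default_factory=set)

    def is_live(self) -> bool:
        # A quest definition goes live once enough qualified people endorse it.
        return len(self.endorsers) >= ENDORSEMENTS_TO_GO_LIVE

def redeem(quest: Quest, quester: dict, reviewer: dict) -> None:
    """The quester enters their own completion, but points are only credited
    once some other game-player reviews and endorses it."""
    if not quest.is_live():
        raise ValueError("quest definition not yet endorsed by enough people")
    quester["points"] = quester.get("points", 0) + quest.points
    reviewer["points"] = reviewer.get("points", 0) + REVIEWER_BONUS
```

The only structural point the sketch tries to capture is that nothing pays out without a second pair of eyes, and the reviewer gets a little something for looking.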
>Do we worry about people gathering up points from whatever source they can?
How does WoW deal with this? I don't think a person should accumulate points repeatedly for the same quest, but multiple quests of increasing difficulty should earn more and more points. Another way to identify people gaming the system would be to see how well-rounded they are across multiple skill sets--skills that correspond to the seven pillars that Ron Jeffries, Chet Hendrickson, et al. have described.
On the other hand, as long as points decay after a reasonable period (barring sick/parental leave?), even people who repeat the same quest year after year should still merit at least some points, because they're still trying to learn along this axis.
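Here's a rough sketch of what decay plus the no-repeat rule might look like; the half-life, the repeat discount, and the leave exemption are made-up numbers purely for illustration:

```python
HALF_LIFE_YEARS = 2.0     # hypothetical: a quest's points lose half their value every two years
REPEAT_DISCOUNT = 0.25    # repeating the same quest still earns something, just much less

def effective_points(base_points: float, years_ago: float,
                     times_repeated: int = 0, on_leave_years: float = 0.0) -> float:
    """Current value of a past quest completion.

    Decay pauses for documented leave (sick/parental), and a repeated quest
    earns a fraction of the original award rather than nothing, since the
    person is still exercising that skill.
    """
    age = max(0.0, years_ago - on_leave_years)
    decayed = base_points * 0.5 ** (age / HALF_LIFE_YEARS)
    return decayed * REPEAT_DISCOUNT if times_repeated > 0 else decayed

# e.g. a 50-point quest completed two years ago is worth ~25 points today;
# repeating it this year would add only 50 * 0.25 = 12.5 points.
```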
Hello Joshua,
I vote for hybrid, and I vote for trying it out on a couple of non-trivial "take home" codebase challenges that don't involve any of the compelling WoW mechanics. If any of you want more details off-line, I have some specific challenges in mind.
My personal want is for enough of us in the current agile programming "community of trust" to achieve enough consensus about lightweight, self-sustaining, profitable-enough assessment mechanisms, coupled with "open source" codebase challenges and a couple of good books (I'm working on one now), that together mean we are helping more people learn the skills. A rising tide floats all skill levels.
Think of it as really lightweight, agile curriculum for vocational programmers who want to step up and become software craftsmen. Think of it as part revenue stream and part pay-it-forward consciousness raising around software craft.
With how little exercise design and instruction could we try this out? How soon could we iterate? How few basic agile programming practices, practiced solo, could we challenge with learning exercises and then assess, at first?
If we try a hybrid programmatic/static-analysis whatever plus manual (remote pairing) mechanism on this at first, and barely break even on it for the first few students, so what? We can learn and adjust.
I want us to start small, get started, build a self-building community, retrospect, adapt to our mistakes, rinse repeat.
But honestly I don't want to boil a new WoW ocean in order to get started. I want to get started small.
And I want to focus on a small handful of things to learn, and a few daunting coding challenges for them, before I focus so much on the assessment mechanics. It really is the learning community that sets my hair on fire. I betcha dollars to donuts that a requisite, affordable assessment biz model can be emergent, as we iterate.
--Patrick
From: assessin...@googlegroups.com [mailto:assessin...@googlegroups.com] On Behalf Of D. André Dhondt
Sent: Thursday, September 24, 2009 11:41 PM
To: assessin...@googlegroups.com
Subject: [Assessing-Agility] Re: Practical & Sustainable Assessment
Bonjour Laurent (and ADS and assessing-agility),
Regardless, this is probably too much to discuss in an e-mail, but I am curious about what happened with the WeVouchFor project...
Do you want to be part of this next wave of an attempt-to-create-a-certification-scheme that is agile?
Do you have time to meet and chat about what you learned the last time around?
There is still a lot of talk on both the ADS list and the assessment list, with people proposing things that may become solid proposals. Is the WeVouchFor site in a state where we could direct a bunch of new users to try it out? Would you be willing to open up the site and code for us to make modifications--and if so, who are the stakeholders that would need to be considered before making changes?
Last of all, as a member of the board of the Agile Alliance, do you think the Alliance would consider endorsing a recurring assessment scheme (while still maintaining their distance from certification)?
Hi All:
For the record, I am (perhaps pig-headedly) interested, for now, only in the problem of getting a semi-standard set of open-source coding exercises out into the world for people to try, and to learn from.
The whole WeVouchFor thing does not help me help more programmers learn more about Agile Programming faster.
I'm only interested in determining how to assess whether or not someone passes/fails such an exercise, and therefore has demonstrated that they understand some pre-determined subset of techniques, practices, principles, and patterns.
I think devising assessment systems larger than that is speculative.
The specific set of practices I am interested in asking students to attempt to learn, in large, challenging exercises, is these and only these, for now (a small sketch after the list illustrates a couple of them):
-- Expressive naming
-- SRP / small modules / cyclomatic complexity
-- Compose Method
-- Extract Method / Extract Class / Move Method / Pull-Up & Pull-Down
-- "basic" test-driving
-- "basic" mocking (in greenfield contexts, and in Legacy/pathalogical decoupling contexts)
-- isolation testing
-- (?) integration/end-to-end testing (because of its ease of learning, not its Total Cost of Ownership)
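(As referenced above, here is a tiny illustrative sketch of a couple of these practices working together--Compose Method via Extract Method, expressive naming, and an isolation test with a basic mock. The invoice domain and every name in it are invented for illustration; none of this comes from the actual exercises.)

```python
import unittest
from unittest.mock import Mock

# After applying Extract Method, the top-level function reads at a single
# level of abstraction (Compose Method); the pricing rules live in small,
# expressively named helpers.
def invoice_total(line_items, tax_rate, notifier):
    subtotal = _subtotal(line_items)
    total = _apply_tax(subtotal, tax_rate)
    notifier.send(f"Invoice total: {total:.2f}")   # collaborator injected so tests can isolate it
    return total

def _subtotal(line_items):
    return sum(qty * unit_price for qty, unit_price in line_items)

def _apply_tax(subtotal, tax_rate):
    return subtotal * (1 + tax_rate)

class InvoiceTotalTest(unittest.TestCase):
    def test_totals_and_notifies_without_a_real_notifier(self):
        notifier = Mock()   # "basic" mocking / isolation testing
        total = invoice_total([(2, 10.0), (1, 5.0)], tax_rate=0.10, notifier=notifier)
        self.assertAlmostEqual(total, 27.5)
        notifier.send.assert_called_once()

if __name__ == "__main__":
    unittest.main()
```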
So I ask you folks, as I am asking everyone, if we ask programmers who are brand new to agile programming to spend a substantial number of hours (hundreds, I am thinking) completing an exercise that "strongly encourages" them to learn mainly the above things ...
...well, how useful is that as an Agile 101 sort of learning experience? How important are those as first few lessons?
Could we start with just the above set of things to learn, and the simplest (but very challenging and demanding) "take-home" coding exercise, and see how it works as a learning system?
Could we then devise just enough hybrid programmatic/manual assessment to pass/fail people who are willing to try something this daunting, in order to drill just a few key concepts and practices into their heads?
What am I missing here?
Cheers,
--Patrick
Frankly, free scares me. I don't know how free and sustainable go together. We've all seen the dead open source projects (no updates since 2003). Meanwhile, World of Warcraft is run by a company that charges money and uses that money to keep funding development. I'm not saying that assessment has to be paid. I simply wonder how to make a sustainable model. If a bunch of us want to make something happen, we have to put real time and energy into it, and I wonder how far the free volunteer model can take us.
I also wonder about how to do the assessments. There are high-tech solutions (we make these at my company), low-tech solutions, and hybrids. High tech would be programmatically assessing a skill level. Low tech would be manually assessing (likely in person) a skill level. Hybrid would be some mix--perhaps some automation and some manual verification or checking.
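As one entirely hypothetical example of the high-tech half: a programmatic check could flag submissions whose functions exceed some cyclomatic-complexity threshold, and only the flagged ones go to a human reviewer (the manual half). A rough sketch:

```python
import ast

COMPLEXITY_LIMIT = 7   # hypothetical threshold; the mechanism matters, not the number

# Node types that add an extra path through the code (a rough approximation).
_BRANCH_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler,
                 ast.BoolOp, ast.IfExp, ast.comprehension)

def flag_complex_functions(source: str):
    """Return (function name, approximate cyclomatic complexity) pairs
    exceeding the limit, so a human only reviews what gets flagged."""
    flagged = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            complexity = 1 + sum(isinstance(child, _BRANCH_NODES)
                                 for child in ast.walk(node))
            if complexity > COMPLEXITY_LIMIT:
                flagged.append((node.name, complexity))
    return flagged
```

A check like this only catches the mechanical side; naming, design, and test quality would still need the manual (pairing or review) half.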
>very targeted training advertising for instance
+1, source of revenue for sustaining this thing. Who wants to set up a corporation? Then again, it'd be better coming from the Agile Alliance--some umbrella organization that has legitimate authority to run this.
>I also think it's crucial to have a charter and that comes before the baby steps.
>Make a prototype. See if people are interested. See if it has merit. See if it's something anyone can cheat on by asking their friend to do it for them.
Frankly, I don't really think that if I build something, anyone will be interested. Maybe if we all build it together, people will sign up.
Then again, we've got two celebrities in our field who already built something--and almost nobody is using it or maintaining it, because it ends up being too expensive to maintain.
Hello Joshua,
My notion for evolving this system is this; admittedly it is a "meta-game":
We first design a code quest that may, in fact, consist of a few large exercises and a few smaller exercises. I have some ideas here, and some code I have been playing with; we can probably cobble together something pretty quickly.
Then iterate in the following way (which, BTW, we can use to build community and mindshare as well):
-- divide this list and all comers into three camps:
-- The Good Faith Questers (who just try to do the quest in good faith, as best they can, by the rules)
-- The Quest Cheaters (who try to "cheat" the system somehow, optimizing for performance over learning)
-- The Assessors, who assess results of both groups above, and find patterns. This group tries to assess pass/fail performance as programmatically/efficiently as possible, and emergently determines how much pairing to do with the students.
We then gather data on the whole system as best we can, objectively and subjectively.
We then find what gamable holes we think we have in the system, and determine whether/how to fill them.
We then iterate again. Thus we continuously improve the system, both in how efficiently it drives learning, and in how efficiently we can assess performance.
Thoughts?
--Patrick