Practical & Sustainable Assessment

Joshua Kerievsky

24 Sep 2009, 4:41:06 PM
to assessin...@googlegroups.com
I'm curious what folks think is a practical, sustainable model we can follow that will not be a proof-of-concept but something that has an impact and is around for a while.  

Frankly, free scares me.  I don't know how free and sustainable go together.  We've all seen the dead open source projects (no updates since 2003).  Meanwhile, World of Warcraft is run by a company that charges money and uses that money to keep funding development. 

I'm not saying that assessment has to be paid.  I simply wonder how to make a sustainable model.  If a bunch of us want to make something happen, we have to put real time and energy into it and I wonder how far the free volunteer model can take us.  

I also wonder about how to do the assessments.  There are high tech solutions (we make these at my company), low tech and hybrids.  High tech would be programmatically assessing a skill level.  Low tech would be manually assessing (likely in person) a skill level.  Hybrid would be some mix -- perhaps some automation and some manual verification or checking.    

Thoughts on any or all of the above?  

--
best regards,
jk

Industrial Logic, Inc.
Joshua Kerievsky
Founder, Extreme Programmer & Coach
http://industriallogic.com
866-540-8336 (toll free)
510-540-8336 (phone)
Berkeley, California

Learn Code Smells, Refactoring and TDD at http://industriallogic.com/elearning

D. André Dhondt

24 Sep 2009, 4:52:15 PM
to assessin...@googlegroups.com
>Frankly, free scares me.  

And for-profit scares me.  Yet a lot of user groups and conferences seem to make things happen relatively smoothly, with in-kind contributions, volunteer resources, and practically no money.  (http://www.sdtconf.org, www.agilephilly.com, www.agiletour.org)

I think that a low barrier to entry, like a $10-100/year subscription to a web site that allows both self-assessments and peer assessments, could end up providing enough information about folks that it's useful to employers, clients, and employees alike.  We could encourage low-cost assessments as well as high-confidence assessments--that is, score a "reviewed and rated somebody's self assessment" quest lower than a "day of pairing at another company".  As Laurent Bossavit mentions at WeVouchFor.org, there could be ways to game the system, but with enough currency being exchanged, the level of trust for people who are actually good would distinguish them from the fakes.
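
To make the low-cost vs. high-confidence idea concrete, here's a minimal sketch of weighted scoring -- every name and number below is illustrative, not a proposal:

    # Sketch only: scale a quest's points by how much confidence the
    # assessment method carries.  Weights are placeholders.
    ASSESSMENT_WEIGHTS = {
        "self_assessment": 1,          # cheap, low confidence
        "peer_review_of_self": 2,      # someone reviewed and rated a self-assessment
        "remote_pairing_session": 5,
        "day_of_pairing_onsite": 10,   # expensive, high confidence
    }

    def score(assessment_type, base_points):
        """Scale base points by the confidence of how they were assessed."""
        return base_points * ASSESSMENT_WEIGHTS[assessment_type]

    print(score("self_assessment", 10))        # 10
    print(score("day_of_pairing_onsite", 10))  # 100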

I think the ultimate solution has to be a blend of low tech and high tech, for cost-saving purposes and to keep it open to people who don't live in North America or don't speak English.


--
D.André Dhondt
mobile: 001 33 671 034 984
http://dhondtsayitsagile.blogspot.com/

Support low-cost conferences -- http://agiletour.org/
If you're in the area, join Agile Philly http://www.AgilePhilly.com

Joshua Kerievsky

24 Sep 2009, 5:23:46 PM
to assessin...@googlegroups.com
On Thu, Sep 24, 2009 at 1:52 PM, D. André Dhondt <d.andre...@gmail.com> wrote:
>>Frankly, free scares me.

>And for-profit scares me.  Yet a lot of user groups and conferences seem to make things happen relatively smoothly, with in-kind contributions, volunteer resources, and practically no money.  (http://www.sdtconf.org, www.agilephilly.com, www.agiletour.org)

True, and those are conferences, not assessment sites.  To create a deep, detailed quest takes time.  To create many -- in many areas of Agile -- will take lots of time.  Finding ways to assess people accurately will take time.  Will the volunteer model be sustainable?  My interest is not in making money from this, yet I also don't want to get involved in something that is doomed from the start to not have enough horsepower to make a real difference.
 
>I think that a low barrier to entry, like a $10-100/year subscription to a web site that allows both self-assessments and peer assessments, could end up providing enough information about folks that it's useful to employers, clients, and employees alike.  We could encourage low-cost assessments as well as high-confidence assessments--that is, score a "reviewed and rated somebody's self assessment" quest lower than a "day of pairing at another company".  As Laurent Bossavit mentions at WeVouchFor.org, there could be ways to game the system, but with enough currency being exchanged, the level of trust for people who are actually good would distinguish them from the fakes.

>I think the ultimate solution has to be a blend of low tech and high tech, for cost-saving purposes and to keep it open to people who don't live in North America or don't speak English.

Distributing the work appeals to me.  Quests could be contributed by people and companies.  They could be reviewed, accepted/rejected and categorized.  It could be like the market for apps on the iPhone -- free and paid versions, with rating systems.  So if you are gonna pay some fee for a quest, you'll hopefully have a community of ratings and comments about the quest.

The above is me simply brainstorming.  Please contribute or comment -- esp. you lurkers. 

best
jk


D. André Dhondt

24 Sep 2009, 5:44:12 PM
to assessin...@googlegroups.com
>True, and those are conferences, not assessment sites.
Point taken... but minor correction--AgilePhilly has a strong history of monthly events going back 3 years, with various volunteers (including Naresh Jain) taking the lead for making it happen at different times: http://www.agilephilly.com/page/events-1#_History__1


>Will the volunteer model be sustainable?
I'm beginning to believe no.  A bartering system?  Maybe.  As you mentioned with the iPhone apps model, I think there's a lot of flexibility to let the "market" hash it out.  Let's design something that keeps the doors open for both free and paid versions, and then we'll see what happens.  I think that the free option will be a corruption-proofing feature that will keep the commercial trainers honest (not to say that any are dishonest, but I do wonder about the prices they charge sometimes).

>Quests could be contributed by people and companies.
The way I see it, the open nature of defining quests will keep this assessment system from going obsolete, conservative, or counter-innovative.  Anyone can define a quest; then they need to get a significant number of qualified individuals (20?) to endorse it, agree to its worth in points, and voilà--it goes live.  Since an individual hoping to get points has intrinsic motivation to do so, they'll be responsible for doing the data entry for their own quest--but for most quests this doesn't redeem points until some other game-player endorses it.  This endorsement process bestows a very small number of points on the reviewer--which is in fact a good thing too, because reviewers end up learning about what other people are doing in the community.  Other quests could be defined as a trainer/student relationship, in which case money as well as points changes hands, but that's worked out between the interested parties.  I think that point exchange would go best with a written retrospective-like review -- rating 0-9, features to keep for next time, and things to change.
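
To make that lifecycle concrete, here's a rough sketch -- the 20-endorser rule and the small reviewer reward come from the paragraph above; everything else (names, numbers) is assumed:

    # Sketch of the quest lifecycle described above; values are placeholders.
    ENDORSEMENTS_TO_GO_LIVE = 20
    REVIEWER_REWARD = 1  # small incentive for reviewing a completion

    class Player:
        def __init__(self, name):
            self.name, self.points = name, 0

    class Quest:
        def __init__(self, name, points):
            self.name, self.points = name, points
            self.endorsers = set()

        def is_live(self):
            return len(self.endorsers) >= ENDORSEMENTS_TO_GO_LIVE

    class Completion:
        """A player's claim to have done a quest; worth nothing until reviewed."""
        def __init__(self, quest, player):
            self.quest, self.player = quest, player
            self.reviewed_by = None

        def review(self, reviewer):
            self.reviewed_by = reviewer
            self.player.points += self.quest.points  # points redeem only on review
            reviewer.points += REVIEWER_REWARD       # reviewer learns and earns a little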



Joshua Kerievsky

24 Sep 2009, 6:18:31 PM
to assessin...@googlegroups.com
On Thu, Sep 24, 2009 at 2:44 PM, D. André Dhondt <d.andre...@gmail.com> wrote:
>Will the volunteer model be sustainable?
>I'm beginning to believe no.  A bartering system?  Maybe.  As you mentioned with the iPhone apps model, I think there's a lot of flexibility to let the "market" hash it out.  Let's design something that keeps the doors open for both free and paid versions, and then we'll see what happens.  I think that the free option will be a corruption-proofing feature that will keep the commercial trainers honest (not to say that any are dishonest, but I do wonder about the prices they charge sometimes).

I've recently been trying out free apps and then buying the versions I really like -- such as TightWire -- to get the extra functionality.  I also notice commercials in the free versions.  That may be a model to consider -- e.g. someone doing a C# quest and not using ReSharper could automatically be shown a ReSharper advert paid for by JetBrains.

It's iTunes come to the Agile Assessment world.

>Quests could be contributed by people and companies.
>The way I see it, the open nature of defining quests will keep this assessment system from going obsolete, conservative, or counter-innovative.  Anyone can define a quest; then they need to get a significant number of qualified individuals (20?) to endorse it, agree to its worth in points, and voilà--it goes live.  Since an individual hoping to get points has intrinsic motivation to do so, they'll be responsible for doing the data entry for their own quest--but for most quests this doesn't redeem points until some other game-player endorses it.  This endorsement process bestows a very small number of points on the reviewer--which is in fact a good thing too, because reviewers end up learning about what other people are doing in the community.  Other quests could be defined as a trainer/student relationship, in which case money as well as points changes hands, but that's worked out between the interested parties.  I think that point exchange would go best with a written retrospective-like review -- rating 0-9, features to keep for next time, and things to change.


Most of this sounds good.  Do we worry about people gathering up points from whatever source they can?  Accumulating a lot of points for a lot of easy quests ought to be different from getting points from some hard quests. 

best
jk

D. André Dhondt

24 Sep 2009, 7:26:21 PM
to assessin...@googlegroups.com
>Do we worry about people gathering up points from whatever source they can?

How does WoW deal with this?  I don't think a person should accumulate points for repeating the same quest, but multiple quests of increasing difficulty should earn more and more points.  Another way to identify people gaming the system would be to see how well-rounded they are across multiple skill sets--skills that correspond to the 7 pillars that Ron Jeffries, Chet Hendrickson, et al. have described.

On the other hand, as long as points decay after a reasonable period (barring sick/parental leave?), then even people who repeat the same quest year after year should still merit at least some points, because they're still trying to learn along this axis.
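
One possible decay rule, sketched -- the half-life and grace period here are arbitrary knobs, not suggestions:

    # Sketch: points lose value over time unless re-earned, with a grace
    # window (e.g. leave) during which no decay applies.  Numbers are arbitrary.
    def current_value(base_points, years_since_earned,
                      half_life_years=2.0, grace_years=0.0):
        effective = max(0.0, years_since_earned - grace_years)
        return base_points * 0.5 ** (effective / half_life_years)

    print(round(current_value(100, 0)))  # 100 -- freshly earned
    print(round(current_value(100, 2)))  # 50  -- one half-life later
    print(round(current_value(100, 4)))  # 25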


--
D.André Dhondt

Joshua Kerievsky

24 Sep 2009, 8:00:30 PM
to assessin...@googlegroups.com
On Thu, Sep 24, 2009 at 4:26 PM, D. André Dhondt <d.andre...@gmail.com> wrote:
>Do we worry about people gathering up points from whatever source they can?

>How does WoW deal with this?  I don't think a person should accumulate points for repeating the same quest, but multiple quests of increasing difficulty should earn more and more points.  Another way to identify people gaming the system would be to see how well-rounded they are across multiple skill sets--skills that correspond to the 7 pillars that Ron Jeffries, Chet Hendrickson, et al. have described.

There are plenty of things to assess and little need to make a nice neat number like 7, IMHO.  

>On the other hand, as long as points decay after a reasonable period (barring sick/parental leave?), then even people who repeat the same quest year after year should still merit at least some points, because they're still trying to learn along this axis.

Points don't decay on my Wii.  I got over 2000 in Tennis and only recently came back to the Wii (after a many-month absence) to discover that my points were the same.   So I'm not sure about point decay.  On the other hand, it would be good to know that someone had deep knowledge of TDD but little knowledge of the latest BDD stuff, for example.  Not sure how that could work.

best
jk

Patrick Wilson-Welsh

24 Sep 2009, 10:08:54 PM
to Joshua Kerievsky, assessin...@googlegroups.com

Hello Joshua,


I vote for hybrid, and I vote for trying it out on a couple of non-trivial "take home" codebase challenges that don't involve any of the compelling WoW mechanics. If any of you want more details off-line, I have some specific challenges in mind. 


My personal want is for enough of us in the current agile programming "community of trust" to achieve enough consensus about lightweight, self-sustaining, profitable-enough assessment mechanisms, coupled with "open source" codebase challenges, coupled with a couple of good books (I'm working on one now), that together mean we are helping more people learn the skills.  A rising tide floats all skill levels. 


Think of it as really lightweight, agile curriculum for vocational programmers who want to step up and become software craftsmen. Think of it as part revenue stream and part pay-it-forward consciousness raising around software craft. 


With how little exercise design and instruction could we try this out?  How soon could we iterate?  How few basic agile programming practices, practiced solo, could we challenge with learning exercises and then assess, at first? 


If we try a hybrid programmatic/static-analysis whatever plus manual (remote pairing) mechanism on this at first, and barely break even on it for the first few students, so what?  We can learn and adjust. 
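
The gate could look something like this -- a sketch where every check is a stand-in for real tooling:

    # Sketch of a hybrid gate: cheap automated checks first, a human pairing
    # review only for submissions that clear them.  All checks are stand-ins.
    def tests_pass(sub):
        return sub.get("tests_green", False), "unit tests green"

    def coverage_ok(sub):
        return sub.get("coverage", 0) >= 80, "coverage >= 80%"

    def complexity_ok(sub):
        return sub.get("max_cc", 99) <= 10, "cyclomatic complexity <= 10"

    def assess(submission):
        results = [(check.__name__, *check(submission))
                   for check in (tests_pass, coverage_ok, complexity_ok)]
        if all(passed for _, passed, _ in results):
            return "queue for manual pairing review", results
        return "needs work", results

    print(assess({"tests_green": True, "coverage": 92, "max_cc": 6})[0])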


I want us to start small, get started, build a self-building community, retrospect, adapt to our mistakes, rinse repeat. 


But honestly I don't want to boil a new WoW ocean in order to get started. I want to get started small. 


And I want to focus on a small handful of things to learn, and a few daunting coding challenges for them, before I focus so much on the assessment mechanics. It really is the learning community that gets my hair on fire. I betcha dollars to donuts that a requisite, affordable assessment biz model can be emergent, as we iterate. 


--Patrick

-------

patrick welsh

248 565 6130

twitter: patrickwelsh

blog: patrickwilsonwelsh.com

Rob Park

24 Sep 2009, 11:58:01 PM
to assessin...@googlegroups.com
So I've been lurking around a bunch of this, and I do like the idea of WoW-like quests (as I understand them, given I'm not a WoW aficionado).

I agree with Patrick, that I'd prefer it to start small and evolve.  (Aside: isn't that what so many of us preach day after day already?)  

And it sounds like we need an initial platform with 1 or 2 initial "example" quests.  Perhaps supported through an interface that allows for others to submit quests to be plugged in.  Once this "trusted community" reviews/enjoys a new quest, then it should be published for others as well.  I'd suppose the author should get some points, too.

I'm most curious what those quests would look like and do.  How might the interaction go?  Josh, I imagine you must have thoughts (perhaps already posted that I've missed), given the IL eLearning content would have to have some overlap, no?  What does a quest need?  A language choice?  A customer?  What would the "app" do?  Could the app play the role of the customer?  Would the app perhaps check coverage on the TDD quest?  Perhaps require you to "check in" your test first?
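
On the "check in your test first" idea: one way an app could verify that mechanically is to walk the repo history.  A sketch, assuming git and known file paths (both hypothetical here):

    # Sketch: verify the earliest commit touching a test file precedes the
    # earliest commit touching the implementation.  Paths are hypothetical.
    import subprocess

    def first_commit_time(repo, path):
        """Unix timestamp of the earliest commit touching `path`, or None."""
        out = subprocess.run(
            ["git", "-C", repo, "log", "--reverse", "--format=%ct", "--", path],
            capture_output=True, text=True, check=True).stdout.split()
        return int(out[0]) if out else None

    def test_first(repo, test_path, impl_path):
        t = first_commit_time(repo, test_path)
        i = first_commit_time(repo, impl_path)
        return t is not None and (i is None or t <= i)

    # e.g. test_first(".", "test_fizzbuzz.py", "fizzbuzz.py")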

And perhaps too specific, but we should absolutely be able to pair on a quest and both receive equal points. 

-- 
Rob
--
http://agileintention.blogspot.com
http://twitter.com/robpark

D. André Dhondt

25 Sep 2009, 2:40:51 AM
to assessin...@googlegroups.com
+1 on baby steps.  I've heard of a few online tools that exist now, that we could try enough to find out if it's worthwhile to use them, grow them, or build our own.  I'm going to write to Laurent Bossavit right now to see if he's OK with us joining www.WeVouchFor.org -- I know the site is not under active development.
--
D.André Dhondt
mobile: 001 33 671 034 984

Charlie Poole

25 Sep 2009, 9:52:18 AM
to assessin...@googlegroups.com
Anyone can join.  I joined a while back.  It's a good starting point for thinking about this, but I think it's missing a few things.  I wrote Laurent asking how I could help but haven't had an answer.
 
Charlie

From: assessin...@googlegroups.com [mailto:assessin...@googlegroups.com] On Behalf Of D. André Dhondt
Sent: Thursday, September 24, 2009 11:41 PM
To: assessin...@googlegroups.com
Subject: [Assessing-Agility] Re: Practical & Sustainable Assessment

Nayan Hajratwala

25 Sep 2009, 9:57:22 AM
to assessin...@googlegroups.com
Well, in the interest of "doing something" & starting small, I've created a GitHub project http://github.com/nhajratw/WoA ...

Let's start putting in some stories (via the "issues" link) and start building something!

D. André Dhondt

25 Sep 2009, 9:58:31 AM
to assessin...@googlegroups.com
On Fri, Sep 25, 2009 at 3:52 PM, Charlie Poole <cha...@nunit.com> wrote:
>It's a good starting point for thinking about this but I think it's missing a few things.
Agreed--but I want to start taking baby steps to see what exactly we'd change.  He intentionally left it as unfinished work, hoping that we would improve it.


>I wrote Laurent asking how I could help but haven't had an answer.
See his response below.


---------- Forwarded message ----------
From: Laurent Bossavit <lau...@bossavit.com>
Date: Fri, Sep 25, 2009 at 10:26 AM
Subject: Re: ADS, Kerievsky's assessment, and Agile Alliance
To: "D. André Dhondt" <d.andre...@gmail.com>
Cc: agile-devel...@googlegroups.com, assessin...@googlegroups.com


>Bonjour Laurent (and ADS and assessing-agility),

Hi André, in case my reply doesn't go to both the lists you CC'd (it might bounce because I'm not sure what address I'm subscribed to those lists under), please forward it...


>Regardless, this is probably too much to discuss in an e-mail, but I am curious about what happened with the WeVouchFor project...

The answer is "not much". In 2007 I was ranting about what it would take for me to make a certification scheme palatable, and Brian Marick happened to be interested in taking on a fun coding project. And so it happened that we whipped up a first version of WeVouchFor, with the intention of seeing how people used it before taking it further.

Because the point was to leave everything as open as possible, there were only three parameters to a "certification" (all things considered, perhaps "endorsement" is a better term): the level of skill (picked from a fixed set), the name of the skill (unconstrained), and the evidence behind an endorsement.
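
Modeled as data, those three parameters are roughly the following (a sketch only; the level names here are invented, not WeVouchFor's actual set):

    # Sketch of the three-parameter endorsement described above.
    from dataclasses import dataclass

    LEVELS = ("novice", "practitioner", "journeyman", "master")  # placeholder fixed set

    @dataclass
    class Endorsement:
        skill: str     # unconstrained free text
        level: str     # picked from the fixed set
        evidence: str  # why the endorser vouches

        def __post_init__(self):
            if self.level not in LEVELS:
                raise ValueError("level must be one of %s" % (LEVELS,))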

This "proof of concept" revealed some of the potential for misuse in the WVF concept. Notably:
- people vouched for others on the basis of too little evidence
- people have a fuzzy-set, rather than sharp, idea of what skills are
- people crafted insults disguised as endorsements
- there was no way to characterize the relationship between endorser and endorsee

So, not even counting the several ideas for future features we'd already identified before launching (such as allowing for people to display "portfolio" work and have that evaluated), there was a lot of work to do just to tweak the system to avoid the issues above. Also, we'd picked Rails for implementing the site, and neither of us was familiar with or felt quite at home within that framework.

Bottom line, my assessment is that to do properly what WVF set out to do is a bigger job than the pair of us, given the other demands on our time, plus geographical separation and all that, could handle.


>Do you want to be part of this next wave of an attempt-to-create-a-certification-scheme that is agile?

Depends on what the attempt is.

I've taken my best shot at showing, with actual running code rather than just words, what properties I think an agile certification scheme should have. It should be decentralized, owned by nobody in particular, and emerge from small contributions by everyone. It should "immunize" our community against further attempts to install something like CSM or WAQB-type certs.

If someone proposes a scheme that has these properties, I'll get behind it in whatever way is appropriate. If it's not clear that a scheme has these properties, then I'm not interested in discussing it.


>Do you have time to meet and chat about what you learned the last time around?

Oh, always.


>There is still a lot of talk on both the ADS list and the assessment list, people proposing things that may become solid proposals.  Is the WeVouchFor site in a state where we could direct a bunch of new users to try it out?  Would you be willing to open up the site and code for us to make modifications--and if so, who are the stakeholders that would need to be considered before making changes?

Well, the site works, such as it is.

I don't think Brian would mind, and I would favor, opening up what little code there is if there are enough people out there with the time and energy to work out the kinks. I'm totally willing to function as a Product Owner, or team up with someone who'd like to do that.


>Last of all, as a member of the board of the Agile Alliance, do you think the Alliance would consider endorsing a recurring assessment scheme (while still maintaining their distance from certification)?


I can't speak for the board, but I can relay the question. We're meeting in London next week.

Cheers,
Laurent Bossavit
lau...@bossavit.com


D. André Dhondt

25 Sep 2009, 10:02:40 AM
to assessin...@googlegroups.com
>Well, in the interest of "doing something" & starting small...

I'm tempted, but would rather follow/build on work that's already been done by Marick and Bossavit, until maybe we conclude that it's not compatible with what we're doing here.  Or maybe we need to wait even a bit longer and find out what Joshua, Ron, Chet, Marick, and others have to say about it all.

D. André Dhondt

25 Sep 2009, 10:04:33 AM
to assessin...@googlegroups.com
>in the interest of "doing something" & starting small

One other thing--I think it's one thing to start building; it's another to do more research or trial runs of existing products.  Are there any other products we should be evaluating, other than WeVouchFor.org?  I know a couple of people mentioned prototypes on this list or on ADS, but I can't find the notes.

Patrick Wilson-Welsh

25 Sep 2009, 10:46:41 AM
to D. André Dhondt, assessin...@googlegroups.com

Hi All:


For the record, I am (perhaps pig-headedly) interested, for now, only in the problem of getting a semi-standard set of open-source coding exercises out into the world for people to try, and to learn from. 


The whole WeVouchFor thing does not help me help more programmers learn more about Agile Programming faster.


I'm only interested in determining how to assess whether or not someone passes/fails such an exercise, and therefore has demonstrated that they understand some pre-determined subset of techniques, practices, principles, and patterns. 


I think devising assessment systems larger than that is speculative. 


The specific practices I am interested in asking students to attempt to learn, in large, challenging exercises, are these and only these, for now:


-- Expressive naming

-- SRP / small modules / cyclomatic complexity (one automated check for this is sketched just after this list)

-- Compose Method

-- Extract Method / Extract Class / Move Method /Pull-Up & Pull-Down

-- "basic" test-driving

-- "basic" mocking (in greenfield contexts, and in Legacy/pathalogical decoupling contexts)

-- isolation testing

-- (?) integration/end-to-end testing (because of its ease of learning, not its Total Cost of Ownership)
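
Of the list above, cyclomatic complexity is the easiest to check mechanically.  A sketch using the radon library (pip install radon); the threshold is an arbitrary placeholder:

    # Sketch: flag any function whose cyclomatic complexity exceeds a threshold.
    import textwrap
    from radon.complexity import cc_visit

    def too_complex(source, threshold=10):
        return [(block.name, block.complexity)
                for block in cc_visit(source)
                if block.complexity > threshold]

    SAMPLE = textwrap.dedent("""
        def tangled(x):
            if x > 0:
                for i in range(x):
                    if i % 2:
                        x += i
                    elif i % 3:
                        x -= i
            return x
    """)
    print(too_complex(SAMPLE, threshold=3))  # [('tangled', 5)]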


So I ask you folks, as I am asking everyone: if we ask programmers who are brand new to agile programming to spend a substantial number of hours (hundreds, I am thinking) completing an exercise that "strongly encourages" them to learn mainly the above things ... 


...well, how useful is that as an Agile 101 sort of learning experience?  How important are those as first few lessons? 


Could we start with just the above set of things to learn, and the simplest (but very challenging and demanding) "take-home" coding exercise, and see how it works as a learning system?


Could we then devise just enough hybrid programmatic/manual assessment to pass/fail people who are willing to try something this daunting, in order to drill just a few key concepts and practices into their heads?


What am I missing here? 


Cheers, 


--Patrick

Rob Park

25 Sep 2009, 10:51:29 AM
to assessin...@googlegroups.com
I agree with you there Patrick.

Especially seeing Laurent's "potential for misuse" list... 

.rob.

Geoffrey Wiseman

25 Sep 2009, 10:52:41 AM
to assessin...@googlegroups.com
On Thu, Sep 24, 2009 at 4:41 PM, Joshua Kerievsky <jos...@industriallogic.com> wrote:
>Frankly, free scares me.  I don't know how free and sustainable go together.  We've all seen the dead open source projects (no updates since 2003).  Meanwhile, World of Warcraft is run by a company that charges money and uses that money to keep funding development.

>I'm not saying that assessment has to be paid.  I simply wonder how to make a sustainable model.  If a bunch of us want to make something happen, we have to put real time and energy into it and I wonder how far the free volunteer model can take us.

Free gets the kind of adoption you'd want; without free, it'll likely remain niche, IMO.  That said, making free and sustainable work together is not without challenges, I agree.  I think it's doable, but I also think it's putting the cart before the horse.
 
>I also wonder about how to do the assessments.  There are high tech solutions (we make these at my company), low tech and hybrids.  High tech would be programmatically assessing a skill level.  Low tech would be manually assessing (likely in person) a skill level.  Hybrid would be some mix -- perhaps some automation and some manual verification or checking.

I'd focus on this -- until you have a viable assessment model that people buy into and are (ideally) excited by, the rest is meaningless.  Get this part right, and the rest will follow.
 
  - Geoffrey 
--
Geoffrey Wiseman
http://www.geoffreywiseman.ca/

Geoffrey Wiseman

25 Sep 2009, 10:54:53 AM
to assessin...@googlegroups.com
On Thu, Sep 24, 2009 at 4:52 PM, D. André Dhondt <d.andre...@gmail.com> wrote:
>I think that a low barrier to entry, like a $10-100/year subscription to a web site that allows both self-assessments and peer assessments, could end up providing enough information about folks that it's useful to employers, clients, and employees alike.  We could encourage low-cost assessments as well as high-confidence assessments--that is, score a "reviewed and rated somebody's self assessment" quest lower than a "day of pairing at another company".  As Laurent Bossavit mentions at WeVouchFor.org, there could be ways to game the system, but with enough currency being exchanged, the level of trust for people who are actually good would distinguish them from the fakes.

Still cart before the horse, but it's not hard to imagine that a site that assesses your skill level, knows where you fall short, and knows where you live would be a good place to do very targeted training advertising, for instance.

  - Geoffrey

D. André Dhondt

25 Sep 2009, 11:00:30 AM
to assessin...@googlegroups.com
>very targeted training advertising for instance
 +1, source of revenue for sustaining this thing.  Who wants to set up a corporation?  Then again, it'd be better coming from the Agile Alliance--some umbrella organization that has legitimate authority to run this.

--
D.André Dhondt

Geoffrey Wiseman

25 Sep 2009, 11:04:01 AM
to assessin...@googlegroups.com
On Fri, Sep 25, 2009 at 11:00 AM, D. André Dhondt <d.andre...@gmail.com> wrote:
>>very targeted training advertising for instance

>+1, source of revenue for sustaining this thing.  Who wants to set up a corporation?  Then again, it'd be better coming from the Agile Alliance--some umbrella organization that has legitimate authority to run this.

Still cart and horse.  ;)  Make a prototype.  See if people are interested.  See if it has merit.  See if it's something anyone can cheat on by asking their friend to do it for them.  Figure out the challenges and the direction.  Once there's real momentum, start thinking about sponsors, umbrella organizations, revenue streams.

  - Geoffrey 

D. André Dhondt

25 Sep 2009, 11:11:03 AM
to assessin...@googlegroups.com
>Make a prototype.  See if people are interested.  See if it has merit.  See if it's something anyone can cheat on by asking their friend to do it for them.

Frankly, I don't really think that if I build something anyone will be interested.  Maybe if we all build it together, people will sign up. 

Then again, we've got two celebrities in our field who already built something--and almost nobody's using it or maintaining it, because it ends up being too expensive to maintain.

So I don't see how to go from this interesting discussion to a real, working model.


--
D.André Dhondt
mobile: 001 33 671 034 984

Nayan Hajratwala

25 Sep 2009, 11:11:46 AM
to assessin...@googlegroups.com
From what I'm hearing, it seems that we need:

1) An initial set of exercises:
* do we have any in mind? I have a couple very simple TDD exercises that I give out for interviews and such.
* would people be willing to start "donating" exercises they already have as open source?
* logistically, where should we collect/collaborate on their development?  Seems like a wiki on the Google Group or GitHub would be best.

2) A framework in which people can:
* register
* get the exercises
* submit their solutions
* have their solutions evaluated
* some sort of "points" allocated upon completion

If I read Patrick's mail correctly, it seems that he's primarily interested in #1.  I'm personally interested in #1, but would quite enjoy working on #2 -- a minimal skeleton of which is sketched below.

Is this a reasonable breakdown of the problem?
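
To make #2 concrete, here's a minimal skeleton -- every name is hypothetical, just to show the shape:

    # Sketch of the framework: register, publish exercises with an evaluator,
    # submit solutions, allocate points.  All names are hypothetical.
    class Framework:
        def __init__(self):
            self.players, self.exercises, self.log = {}, {}, []

        def register(self, name):
            self.players[name] = {"points": 0}

        def publish(self, exercise_id, evaluator, points):
            """evaluator: any callable(solution) -> bool."""
            self.exercises[exercise_id] = (evaluator, points)

        def submit(self, name, exercise_id, solution):
            evaluator, points = self.exercises[exercise_id]
            passed = evaluator(solution)
            if passed:
                self.players[name]["points"] += points
            self.log.append((name, exercise_id, passed))
            return passed

    f = Framework()
    f.register("andre")
    f.publish("tdd-101", lambda s: "test" in s, points=10)
    print(f.submit("andre", "tdd-101", "test_fizzbuzz.py ..."))  # True
    print(f.players["andre"])  # {'points': 10}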

Rob Park

25 Sep 2009, 11:17:34 AM
to assessin...@googlegroups.com
+1

D. André Dhondt

25 Sep 2009, 11:28:59 AM
to assessin...@googlegroups.com
I like this breakdown.  I'd add some way to include assessments beyond technical/programming skills, which are only a subset of what's required to be a good agile developer (e.g., good communication skills, good understanding of the business need, etc.).

Rob Park

25 Sep 2009, 11:36:40 AM
to assessin...@googlegroups.com
And I'm assuming, since we're talking about WoW, that these quests should try not to be bland, boring multiple choice, right... we need to put some creativity in there too.

Joshua Kerievsky

25 Sep 2009, 11:44:37 AM
to assessin...@googlegroups.com
On Fri, Sep 25, 2009 at 7:46 AM, Patrick Wilson-Welsh <patrickwi...@gmail.com> wrote:

>For the record, I am (perhaps pig-headedly) interested, for now, only in the problem of getting a semi-standard set of open-source coding exercises out into the world for people to try, and to learn from.

Don't those already exist?  For example, someone's been maintaining a whole bunch of open source TDD exercises on some wiki.  Has that helped?  I would think so.  Does it challenge the "certified" crowd?  I think not.  

I understand baby steps inside out and backwards.  I also think it's crucial to have a charter and that comes before the baby steps.  We have to know what we are going after before we go after it, don't we?

best
jk

D. André Dhondt

25 Sep 2009, 11:54:17 AM
to assessin...@googlegroups.com
On Fri, Sep 25, 2009 at 5:44 PM, Joshua Kerievsky <jos...@industriallogic.com> wrote:
>I also think it's crucial to have a charter and that comes before the baby steps.

+1  -- there's something that's been bothering me all day... on my way home I realized that we probably won't have a problem building another prototype--it's the politics of getting people to use it.

So what do we need to flesh out the politics?  A charter!  Thanks, Joshua!


---

Geoffrey Wiseman

25 Sep 2009, 2:52:54 PM
to assessin...@googlegroups.com
On Fri, Sep 25, 2009 at 11:11 AM, D. André Dhondt <d.andre...@gmail.com> wrote:
>>Make a prototype.  See if people are interested.  See if it has merit.  See if it's something anyone can cheat on by asking their friend to do it for them.

>Frankly, I don't really think that if I build something anyone will be interested.  Maybe if we all build it together, people will sign up.

Ah, sorry, I didn't mean to say that you should go off and do that on your own.  I meant that if there's going to be a collective effort, I'd like to see it focused on finding a viable assessment model and seeing if the wider agile/lean community is interested in such a thing before figuring out how to sustain it.  I think just getting to a model and getting people interested in it and excited about it is challenge enough. 
 
>Then again, we've got two celebrities in our field who already built something--and almost nobody's using it or maintaining it, because it ends up being too expensive to maintain.

I think the maintenance side could be sorted out if everyone were using and engaged in WeVouchFor, and the same is probably true for this.  But until you get to a point where it's really interesting and a lot of people are engaged ... the other stuff seems kinda moot.

  - Geoffrey

Joshua Kerievsky

26 Sep 2009, 12:56:53 PM
to assessin...@googlegroups.com
Patrick,

I think you have some great ideas outlined in this email.  I'm not sure whether a sequence of smaller quests will be more appealing or user-friendly than large quests.

How do we know that someone did a quest?

Best
Jk 

Sent from my iPhone

Patrick Wilson-Welsh

26 Sep 2009, 2:55:13 PM
to Joshua Kerievsky, assessin...@googlegroups.com

Hello Joshua,


My notion for evolving this system is this (admittedly it is a "meta-game"): 


We first design a code quest that may, in fact, consist of a few large exercises and a few smaller ones.  I have some ideas here, and some code I have been playing with; we can prolly cobble together something pretty quickly. 


Then iterate in the following way (which, BTW, we can use to build community and mindshare as well):  


-- divide this list and all comers into three camps:  

   -- The Good Faith Questers (who just try to do the quest in good faith, as best they can, by the rules)

   -- The Quest Cheaters (who try to "cheat" the system somehow, optimizing for performance over learning)

   -- The Assessors, who assess results of both groups above, and find patterns. This group tries to assess pass/fail performance as programmatically/efficiently as possible, and emergently determines how much pairing to do with the students. 


We then gather data on the whole system as best we can, objectively and subjectively. 


We then find what gameable holes we think we have in the system, and determine whether/how to fill them. 


We then iterate again. Thus we continuously improve the system, both in how efficiently it drives learning, and in how efficiently we can assess performance. 


Thoughts?


--Patrick
