Hi,
On 29/06/15 20:15, Tom Leese wrote:
> Work has been slowly progressing on this in the past few months and it's now
> pretty much complete except for the incident chart at the bottom which
> needs some thought. I would appreciate anyone taking a look over it and
> letting me know what they think.
This is looking really good, thanks for putting it together. I have a
couple of opinions to vend, mostly related to what the aim/purpose of
mentoring is. There's no real truth behind these opinions, only my own
experience.
IMO, mentoring is about encouraging the competitors to follow
engineering processes, and the handbook covers that (prototype things,
reduce problems to smaller ones, etc). I'd suggest extending this by
encouraging competitors to try things out, even if they're going to
fail: finding out what doesn't work, and being able to understand why,
is an extremely important skill for people to learn. Not being afraid of
things breaking is a great way of escaping analysis paralysis.
(Obviously, there are limits: when a team is about to purchase £100 of
obviously incorrect motors, or set fire to our equipment, the cost of
experimentation outweighs the subsequent enlightenment.)
On a more organizational front, I'd say mentoring is an extremely
important part of shaping the mentality of competitors. If someone
external is visiting weekly, or at least frequently, it makes Student
Robotics much "real-er" than if we're only connected to teams
electronically. There's (IMO) a psychological pressure to deliver
progress in return for someone's help, and giving competitors that
motivation to make things happen is another facet of learning to get
things done.
~
Two tidbits: one team this year told me they'd spent months writing
their code in the simulator to make their robot move forwards, prod a
token, and move back; but at the competition their code didn't make
their real-world robot do the same things! While it might be obvious,
IMO the simulator needs a health warning that it's a prototyping tool.
(The volunteer handbook might not be the best place to put it, but I
don't know where else to say this.)
For the debugging process (which is good), IMO an additional step is
needed: competitors should clearly state what the expected behaviour is,
so that they can then compare it with what actually happens.
~
[Javascript on my lawn, gripe gripe, etc.]
--
Thanks,
Jeremy