The task description is here:
My code is here:
Taking last year's contest as a template, it could have been handled by a
few loosely coupled teams:
- Build and submission -- The entry had to be capable of building
and running on a VM run by the competition organisers. This meant
building a build system as part of the submission and making sure that
the builds would work on the target environment.
- Telemetry and rover control -- Not a huge task, but absolutely essential.
- AI -- Where most of the work and the ideas were needed. The rover
controller would need to provide a good API for this. My AI was the
simplest possible, as I didn't have time to put any real work into it.
- Map generation -- The organisers supplied some simple test maps,
but to really exercise the AI and control systems, more maps would have
been useful. I didn't make any.
- A/B AI testing -- Something I didn't even have time to think
about last year, but we'd want some sort of system or procedure to test
the AIs against each other on the maps to make a decision about which
should be submitted.
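To make the rover control and AI split concrete, here's a minimal sketch of what the API between the two might look like. All the names here are hypothetical -- the real competition interface would dictate the details -- and the "simplest possible" AI shown just turns toward a fixed target:

```cpp
#include <cmath>

// Hypothetical telemetry record delivered by the rover controller.
// Field names are illustrative, not from any competition API.
struct Telemetry {
    double x, y;      // rover position
    double heading;   // radians, 0 = facing along +x
};

// The steering command the AI hands back to the controller.
enum class Steer { Left, Right, Straight };

// About the simplest possible AI: steer toward a target point,
// correcting whenever the heading error exceeds a small dead band.
Steer simplest_ai(const Telemetry& t, double target_x, double target_y) {
    const double two_pi = 6.283185307179586;
    double desired = std::atan2(target_y - t.y, target_x - t.x);
    // Normalise the error into (-pi, pi] so we turn the short way round.
    double error = std::remainder(desired - t.heading, two_pi);
    if (error > 0.1) return Steer::Left;
    if (error < -0.1) return Steer::Right;
    return Steer::Straight;
}
```

The point of keeping the interface this narrow is that the AI team and the control team can then work almost independently, with the map generation and A/B testing work plugging in against the same boundary.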
This could all be handled by a small team, or broken out to a few
larger teams, each concentrating on one area.
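The A/B testing procedure could start out very simple: run every candidate AI over every available map, total the scores, and keep the winner. A minimal sketch, with the scoring function left as a stand-in for a real simulation run (everything here is hypothetical):

```cpp
#include <functional>
#include <string>
#include <vector>

// A candidate AI, reduced for the harness to "run on this map, give me
// a score". In practice this would wrap a full simulation run.
using Ai = std::function<double(const std::string& map)>;

// Return the index of the AI with the best total score across all maps.
std::size_t pick_best(const std::vector<Ai>& ais,
                      const std::vector<std::string>& maps) {
    std::size_t best = 0;
    double best_total = -1e300;
    for (std::size_t i = 0; i < ais.size(); ++i) {
        double total = 0;
        for (const auto& m : maps) total += ais[i](m);
        if (total > best_total) {
            best_total = total;
            best = i;
        }
    }
    return best;
}
```

Even something this crude would have been an improvement on last year, where the choice of AI never got tested at all.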
For infrastructure, Felspar can provide a Subversion server and an
issue tracker/wiki (on our Support site).
If it was just me, I'd probably mainly use C++ and Python -- Python for
speed of development, and C++ for execution speed -- and I'd use
Boost.Python to plug them together. For a larger team, though, I think
it'd be important that people could use their preferred languages.
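The Boost.Python glue is only a few lines per exposed function. A sketch of what it might look like -- the module and function names here are made up for illustration, and this would need Boost installed and building as a shared library:

```cpp
#include <boost/python.hpp>

// A hypothetical hot-path routine kept in C++ for execution speed;
// squared distance is just a stand-in for a real cost metric.
double path_cost(double dx, double dy) {
    return dx * dx + dy * dy;
}

// Expose it to Python as a module named rover_native (name invented
// here), so the Python side can call rover_native.path_cost(dx, dy).
BOOST_PYTHON_MODULE(rover_native) {
    boost::python::def("path_cost", path_cost);
}
```

Compiled into `rover_native.so`, the Python side just does `import rover_native`, which keeps the fast-to-develop and fast-to-execute halves cleanly separated.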
It's almost certainly an early optimisation to try to work out what the
correct team structure and the right tools would be until we've seen
the problem. Last year the submission was source code that the
organisers had to build and run; in most previous years it has been the
answer to what amounts to a number of complex riddles. Which of these
is used this year will make a huge difference to how we structure
ourselves and what the important tasks are.