So, it seems the interface to the Learning Track will be changing, which sounds positive, so I'll post some ideas about the Learning Track in general.
First, I'll talk about the interface. The ideal solution gives people control over their evaluations so they can write whatever algorithms they want. You could just use setNumberOfTrials() and manually check everyone's code to make sure they don't violate the limit, but that's bad: it doesn't lend itself to automation and can possibly be circumvented.
I think the best solution is to have the task itself keep track of the number of trials. You pass in a task with setTask(), and the algorithm can use it however it wants by calling Task.evaluate() for each trial. Meanwhile, the task keeps an internal counter of how many times evaluate() has been called, so once the magic number has been reached it simply refuses to run any more trials.
The task would probably also need query functions so people can check the status of their trial quota, something like Task.canEvaluate() and Task.getRemainingTrialCount(). (Obviously these functions wouldn't live inside the actual Task class itself, but in some wrapper class that enforces the quota.)
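
Here's a rough sketch of what I have in mind. All the names here (LearningTask, Agent, the float score, the constructor arguments) are made up for illustration; they're not the actual competition classes:

    interface Agent { }  // stands in for the competitor's controller

    interface Task {
        // One full trial; returns a score for the run.
        float evaluate(Agent agent);
    }

    // Wrapper handed to the learning algorithm instead of the raw Task.
    // It enforces the trial quota with an internal counter.
    class LearningTask {
        private final Task task;
        private final int maxTrials;  // the "magic number" of allowed trials
        private int trialsUsed = 0;

        LearningTask(Task task, int maxTrials) {
            this.task = task;
            this.maxTrials = maxTrials;
        }

        // Runs one trial and counts it; refuses once the quota is used up.
        float evaluate(Agent agent) {
            if (!canEvaluate()) {
                throw new IllegalStateException("trial quota exhausted");
            }
            trialsUsed++;
            return task.evaluate(agent);
        }

        // Query functions so the algorithm can budget its trials.
        boolean canEvaluate() {
            return trialsUsed < maxTrials;
        }

        int getRemainingTrialCount() {
            return maxTrials - trialsUsed;
        }
    }

Whether evaluate() should throw or just return some sentinel score once the quota runs out is a design choice; throwing at least makes violations loud.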
Second, I want to talk about the general execution of the track. I think it was pretty good, but I still have some suggestions.
1) Run each 'level' multiple times and average the scores. These algorithms are stochastic, so any single run could be terrible or awesome; running each level 5 or 10 times and taking the average would be more consistent. It would be bad if the winner of the competition changed every time you re-ran it. On the other hand, if people's algorithms are very slow it may not be feasible, unless the evaluation happens well before the deadline or there's a cluster of computers to run it on. :) (There's a small sketch of this averaging at the end of the post.)
2) I don't know about the levels in the last competition, but it seems the top 3 competitors completed all of them. First, it'd be cool if the final runs were shown (the recorder will make that easy). Moreover, it'd be good if the final level were hard enough that no AI was expected to beat it. There are all sorts of interesting features that could make this possible; for instance, dead ends with hidden blocks at difficulties greater than 20 are pretty hard, especially if the dead ends are nested several deep. Underground levels are also hard, but they easily end up being impossible, which isn't so interesting. (The new level generation might correct for this.)
Finally, it would be interesting if some levels were custom-made to be tricky for the AIs. That's a lot of work, but people could submit level designs, though there might not be enough interest. (A level editor would be an interesting addition to the code.)
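
And here's the averaging idea from suggestion 1 as a tiny sketch. runLevel() is just a placeholder for launching the actual simulator (a seeded random score stands in for a stochastic run); none of these names are from the real framework:

    import java.util.Random;

    public class Scoring {
        // Placeholder for one competition run; the real thing would
        // launch the simulator with the given level and run seeds.
        static double runLevel(int levelSeed, long runSeed) {
            return new Random(levelSeed * 31L + runSeed).nextDouble() * 100.0;
        }

        // Average over several runs so one lucky or unlucky run
        // doesn't decide the ranking.
        static double averageScore(int levelSeed, int runs) {
            double total = 0.0;
            for (int r = 0; r < runs; r++) {
                total += runLevel(levelSeed, r);
            }
            return total / runs;
        }

        public static void main(String[] args) {
            System.out.printf("level 12, average of 10 runs: %.2f%n",
                    averageScore(12, 10));
        }
    }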
That's a lot of text, but thanks for reading or skimming or whatever.