The Regressor
Mar 2, 2012, 6:25:14 PM
to Machine March Madness
Nice work on that starter code Danny! Theano is really neat in that it lets you tweak your objective very straightforwardly and automatically works out the gradients for you. So in no time I managed to try a whole bunch of small tweaks on the standard PMF objective. The only caveat is that as you change the objective, its scale necessarily changes, and thus you also have to tweak the learning rate. This can be quite a pain since it's not at all clear how the learning rate should change, so it becomes trial and error over loads of values between 0.1 and 0.00001.
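To make that caveat concrete, here's a minimal NumPy sketch (hypothetical sizes and names; in Theano the gradient below would come out of T.grad rather than being written by hand): rescaling the objective by a constant rescales the gradient by that same constant, so a fixed learning rate suddenly takes steps that many times larger.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny PMF-style setup (hypothetical sizes): approximate a score matrix R by U V^T.
n_teams, n_factors = 5, 2
U = rng.normal(size=(n_teams, n_factors))
V = rng.normal(size=(n_teams, n_factors))
R = rng.normal(loc=70.0, scale=10.0, size=(n_teams, n_teams))

def grad_U(U, V, R, c=1.0):
    # Gradient of the objective c * sum((R - U V^T)^2) with respect to U.
    # (With Theano, T.grad derives this automatically from the cost expression.)
    err = R - U @ V.T
    return -2.0 * c * err @ V

# Rescaling the objective by c rescales the gradient by c, so a fixed
# learning rate takes steps c times larger -- hence the need to retune it.
g1 = grad_U(U, V, R, c=1.0)
g10 = grad_U(U, V, R, c=10.0)
print(np.allclose(g10, 10.0 * g1))  # the gradient scales linearly with c
```

For a pure rescaling like this you could just divide the learning rate by c, but tweaks that change the shape of the objective (not just its scale) won't give you such a clean rule, which is why the trial and error creeps in.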
One thing I was thinking about is that the standard squared error may highly favor high-scoring games. For example, estimating 85 for a score of 100 gives an error of 225, whereas 8.5 for a score of 10 gives an error of 2.25. Is this the behavior we want, or should the model get the same error when it is proportionally just as wrong? I tried modifying the objective to minimize absolute error and rescaling the squared error in a number of ways. Normalizing the squared error of each score by its magnitude seems to help.
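A quick arithmetic check of the rescalings (plain Python, using the two cases above): dividing the squared error by the score's magnitude shrinks the 100x gap between the two cases down to 10x, while dividing by the magnitude squared (i.e. squared relative error) makes two proportionally-equal misses score identically.

```python
# The two cases from the post: off by 15 on a 100-point score vs off by 1.5
# on a 10-point score -- the same 15% proportional miss in both.
rows = []
for true, pred in [(100.0, 85.0), (10.0, 8.5)]:
    sq = (true - pred) ** 2   # plain squared error: 225.0 vs 2.25 (100x apart)
    norm = sq / true          # normalized by magnitude: 2.25 vs 0.225 (10x apart)
    rel = sq / true ** 2      # squared relative error: 0.0225 vs 0.0225 (equal)
    rows.append((true, sq, norm, rel))
    print(true, sq, norm, rel)
```

So normalizing by magnitude sits partway between the plain and the fully proportional behavior, which may be why it helps without going all the way to relative error.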
Anyway, that's just a tiny tweak but there's loads to be investigated
here.