I sent you a mail off-list regarding this, but I thought it would be a
good idea if you could post your reply here for the benefit of all.
Could you please elaborate on the basic principles behind the
Simulated Annealing algorithm you are using, and what it intends to
achieve? Do you have a doc lying around somewhere which I can refer
to? What do you think are the strong points of the algorithm you are
using as opposed to the default one that was used in NTU?
Deepak
The main idea behind these approaches is that you define your
criteria for measuring an allocation, and then try to maximise that
measure. Obviously testing all possible allocations is not feasible,
so we use a heuristic search approach to find something as near as
possible to the optimum (and quite possibly the true optimum) in a
reasonable time. Such approaches include simulated annealing and
genetic algorithms.
For simulated annealing one needs to define an evaluation function
for the allocation, also referred to as the energy function.
In a genetic algorithm all you need is a function comparing two
allocations (this can be done by comparing an energy function or
otherwise).
In both cases you need to be able to generate an allocation similar
to a given one.
For a genetic approach you also need to define a way to splice two
allocations together to get a new allocation similar to both parents.
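The simulated-annealing side of the above can be sketched as a short loop. This is a minimal illustration, not the actual implementation being discussed; the energy and neighbour functions passed in would be the allocation-specific ones described above, and all names and parameter values here are mine:

```python
import math
import random

def simulated_annealing(initial, energy, neighbour,
                        t_start=10.0, t_end=0.01, cooling=0.995):
    """Minimise `energy` by repeated local moves, occasionally
    accepting worse moves so the search can escape local optima."""
    current = initial
    current_e = energy(current)
    best, best_e = current, current_e
    t = t_start
    while t > t_end:
        candidate = neighbour(current)
        candidate_e = energy(candidate)
        delta = candidate_e - current_e
        # Always accept improvements; accept a worse move with
        # probability exp(-delta / t), which shrinks as t cools.
        if delta <= 0 or random.random() < math.exp(-delta / t):
            current, current_e = candidate, candidate_e
            if current_e < best_e:
                best, best_e = current, current_e
        t *= cooling
    return best, best_e
```

For an allocation problem, `neighbour` would typically swap two adjudicators between panels, and `energy` would be the weighted penalty function discussed below.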
I myself have a lot of experience with tuning genetic algorithms for
allocation problems, though from my academic reading simulated
annealing should work well when the algorithm parameters are tuned
properly.
As for adjudicator allocations, one needs to decide what we wish to
achieve. We obviously want good adjudicators to be chairs, and we may
want stronger panels in the top rooms; this needs to be formalised.
In some cases we would like panels to rotate, so that adjudicators
meet different teams and different adjudicators.
In the late rounds it is common to give young adjudicators showing
promise a chance at chairing in the weaker rooms; this is easily done
by strengthening the bias towards stronger panels in the top rooms in
later rounds.
We also usually prevent adjudicators, and especially chairs, from
judging teams from their own university. We may want to build panels
made up of adjudicators from many universities and countries.
We should not fear adding multiple criteria, but we should weight
them carefully.
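To make "weight them carefully" concrete, here is a hedged sketch of a weighted energy function combining two of the criteria above. The field names, the weight values, and the convention that the first listed adjudicator chairs are all assumptions made for illustration, not the actual implementation:

```python
# Illustrative weights; real values would be tuned per tournament.
WEIGHTS = {
    "uni_conflict": 100.0,  # adjudicator shares a university with a team
    "chair_quality": 2.0,   # per point the chair falls short of 100
}

def allocation_energy(debates, weights=WEIGHTS):
    """Sum weighted penalties over all debates; lower is better."""
    energy = 0.0
    for debate in debates:
        panel = debate["panel"]
        # University conflicts: one penalty per offending adjudicator.
        for adj in panel:
            if adj["university"] in debate["team_unis"]:
                energy += weights["uni_conflict"]
        # Chair quality: penalise the chair's shortfall from 100.
        chair = panel[0]  # assumed convention: first adjudicator chairs
        energy += weights["chair_quality"] * (100 - chair["score"])
    return energy
```

Adding a further criterion (panel rotation, regional mix, and so on) is then just another weighted term in the sum, which is what makes this formulation easy to extend.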
Me
university_conflict: Penalty for each adjudicator judging a team from
that adjudicator's own university.
  High values: Uni conflicts will occur less.
  Low values (down to 0): Uni conflicts matter less.
  University conflict and team conflict are at a high value, meaning
  the algorithm will try hard to steer away from conflicts between
  adjudicators and their unis, and from any scratches that have been
  manually entered.

chair_not_perfect: Penalty for each chair of quality less than 100.
Total penalty = penalty * (100 - real value).
  High values: The best adjudicators will all be chairs.
  Low values: Having the best people in the chair is not as important.

panel_steepness: Value between 0 and 1, reflecting the relation
between panel strength and debate strength.
  High values (up to 1): Debate strength strictly relates to panel
  strength.
  Low values: All debates are considered equal.
  Further remarks: Slowly increasing this value during the tournament
  is expected to have a positive effect on the tournament.

panel_strength_not_perfect: Penalty for distance to this 'ideal
average'.
  High values: Emphasis on getting panels to the 'right strength'.
  Low values: Not so much emphasis on getting panels to the 'right
  strength'.

adjudicator_met_adjudicator: Penalty for adjudicators meeting each
other again. This penalty is multiplied by the number of times these
two have already been in one panel together.
  High values: Adjudicators are not put in panels with previous
  co-panellists.
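The two penalty formulas stated explicitly above (the chair penalty scaled by the shortfall from 100, and the meeting penalty multiplied by the number of previous shared panels) can be written out directly. The function names are mine, for illustration only:

```python
def chair_not_perfect_penalty(weight, chair_score):
    # Total penalty = penalty * (100 - real value), as stated above.
    return weight * (100 - chair_score)

def adjudicator_met_penalty(weight, times_met):
    # The base penalty is multiplied by how many times the two
    # adjudicators have already been in one panel together.
    return weight * times_met
```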