Hi Neil,
Thank you for your comment.
Just to clarify: the current evaluation does reward methods that achieve lower classifier accuracy (i.e. accuracy closer to random chance). The ranking is in inverse order of accuracy; therefore, the lower the accuracy, the higher the ranking.
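To illustrate (with made-up accuracy values rather than actual challenge results), a minimal sketch of this ranking rule in Python:

```python
# Illustrative only: hypothetical classifier accuracies for three submissions.
accuracies = {"method_A": 0.41, "method_B": 0.19, "method_C": 0.27}

# Rank in ascending order of accuracy: the lowest accuracy ranks first.
ranking = sorted(accuracies, key=accuracies.get)
print(ranking)  # ['method_B', 'method_C', 'method_A']
```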
In answer to your other point, namely that "this could potentially lead to strategies that disguise data across datasets rather than genuinely mitigating cross-dataset differences": this will be addressed by the Stage 1 validation, which will determine whether the HRTF is still realistic and has not been destroyed by the harmonisation process.
On a side note, if you are suggesting a ranking based on the distance from the chance accuracy level rather than from 0%, it would not make a difference. That is to say, adding, subtracting, or multiplying by a constant such as 12.5% does not change the ordering. The classifier's chance level is 12.5% in any case, as it must always select one of the eight datasets as its output, no matter the input. It would therefore not be possible to modify an HRTF so that the classifier performs below that accuracy, even if the HRTF were completely destroyed.
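As a quick sanity check of that point (again with hypothetical accuracies, all at or above the 12.5% chance level of an eight-way classifier), ranking by distance from chance gives exactly the same order as ranking by raw accuracy:

```python
# Hypothetical accuracies, all >= 1/8 (the chance level for eight datasets).
chance = 1 / 8  # 12.5%
accuracies = {"method_A": 0.41, "method_B": 0.19, "method_C": 0.27}

# Ranking by raw accuracy (ascending) ...
by_accuracy = sorted(accuracies, key=accuracies.get)

# ... is identical to ranking by distance from the chance level,
# because subtracting a constant does not change the ordering
# when no accuracy can fall below chance.
by_distance = sorted(accuracies, key=lambda m: abs(accuracies[m] - chance))

assert by_accuracy == by_distance
print(by_accuracy)  # ['method_B', 'method_C', 'method_A']
```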
I hope this helps. Please let us know if anything is still unclear.
Cheers,
Aidan
---------------------------------------------------------
Dr Aidan Hogg
Lecturer at Queen Mary University of London
Honorary Research Associate at Imperial College London
Centre for Digital Music
Electronic Engineering and Computer Science
Queen Mary University of London
327 Mile End Road, London
E1 4NS, U.K.