compute multi-class accuracy from binary comparisons


Andrea Ivan Costantino

Mar 14, 2025, 7:17:28 AM
to CoSMoMVPA

Hi everyone,

I am running a classification analysis on a multi-class dataset. Each observation can take different labels depending on the categorization task (e.g., visual dimensions, semantic dimensions). The problem is that the chance level varies across tasks because the number of unique labels differs: with 2 labels in the visual task chance is 50%, whereas with 3 labels in the semantic task it is about 33%. This makes decoding accuracies hard to compare across tasks, since the baseline is not the same.

To address this, I was thinking of running a separate binary classification for each pair of labels and then averaging the classifier accuracy across all pairwise comparisons. This would yield a single, averaged classification accuracy with a consistent chance level (50%) for every task, regardless of how many labels it has.
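
Concretely, this is the kind of averaging I have in mind, sketched outside CoSMoMVPA in plain Python/scikit-learn just to make the idea explicit (X, y, runs, and the LDA classifier below are placeholder choices of mine, not anything CoSMoMVPA-specific): for each pair of labels, keep only the observations with those two labels, run cross-validated decoding, and average the per-pair accuracies.

# Illustrative sketch only (plain scikit-learn, not CoSMoMVPA).
# X: n_samples x n_features data; y: condition labels; runs: chunk/run labels
# used for leave-one-run-out cross-validation. All names are placeholders.
from itertools import combinations

import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score


def pairwise_mean_accuracy(X, y, runs):
    """Average cross-validated accuracy over all label pairs (chance = 50%)."""
    pair_accuracies = []
    for a, b in combinations(np.unique(y), 2):
        mask = np.isin(y, [a, b])  # keep only this pair of labels
        scores = cross_val_score(
            LinearDiscriminantAnalysis(), X[mask], y[mask],
            groups=runs[mask], cv=LeaveOneGroupOut(), scoring="accuracy")
        pair_accuracies.append(scores.mean())
    return float(np.mean(pair_accuracies))

Running this once per task would give a single number per task, each against the same 50% baseline.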

  1. Does this approach seem reasonable? Is there anything specific I should be cautious about?
  2. Is this implemented in CoSMoMVPA, or would I need to manually slice the dataset and implement it myself?
  3. Are there better alternatives, such as using different metrics (e.g., macro average, F1-score)? If so, are any of these already implemented in CoSMoMVPA?
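
For point 3, this is roughly what I mean by macro-averaged metrics, again just a plain scikit-learn sketch with toy placeholder labels (in practice y_true / y_pred would be the true and cross-validated predicted labels from the multi-class decoder):

# Illustrative sketch of macro-averaged metrics (plain scikit-learn, not CoSMoMVPA).
from sklearn.metrics import f1_score, recall_score

# Toy labels just to make the snippet runnable.
y_true = [0, 0, 1, 1, 2, 2]
y_pred = [0, 1, 1, 1, 2, 0]

macro_f1 = f1_score(y_true, y_pred, average="macro")          # unweighted mean of per-class F1
macro_recall = recall_score(y_true, y_pred, average="macro")  # mean per-class recall (balanced accuracy)
print(macro_f1, macro_recall)

Though, as far as I can tell, the chance level of these metrics still depends on the number of classes, which is why I'm not sure they solve the comparison problem on their own.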

Thanks in advance for any help you can provide!

Best,
Andrea
