Some questions about the fairness of the competition


400 bug

Jul 20, 2023, 2:30:51 AM
to unlearning-challenge
I want to join this competition, but I still have some concerns/questions:
1. The same method may perform differently on different networks. For example, method A may perform well on lightweight networks but poorly on large models, while method B may be the opposite. How do you judge the relative quality of methods A and B?

2. Since there is no unified training set for participants, some participants may use a smaller training set to train a simpler network, which could make unlearning easier for them. How can you avoid this problem?

3. I noticed that you mentioned including the prediction accuracy of the original model in the final score calculation. How do you address the unfairness caused by differences in model parameter counts, training-set quality, and training-set size among participants? For example, a participant may have a good unlearning method but use a harder-to-train dataset, resulting in a lower final score.

Bhargav Kowshik

Jul 28, 2023, 12:52:26 PM
to unlearning-challenge
> 2. Since there is no unified training set for participants, some participants may use a smaller training set to train a simpler network, which could make unlearning easier for them. How can you avoid this problem?

I think the following line from the FAQ on the competition page answers this question:
"Our metric will also capture the desire to perform well not only in terms of forgetting quality, but also to not sacrifice the “utility” of the model (performance on retained and held-out examples)."
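As a rough illustration of what "not sacrificing utility" could mean, here is a minimal sketch. This is not the competition's actual metric; the function names and the simple averaging of the two accuracies are assumptions for illustration only (the real evaluation also measures forgetting quality):

```python
import numpy as np

def accuracy(preds, labels):
    """Fraction of predictions that match the labels."""
    return float(np.mean(np.asarray(preds) == np.asarray(labels)))

def utility_score(retain_preds, retain_labels, test_preds, test_labels):
    """Toy utility: average of accuracy on retained and held-out examples.
    A hypothetical stand-in -- the competition's real metric differs."""
    retain_acc = accuracy(retain_preds, retain_labels)
    test_acc = accuracy(test_preds, test_labels)
    return 0.5 * (retain_acc + test_acc)

# Example: perfect accuracy on retained data, 75% on held-out data.
score = utility_score([1, 0, 1, 1], [1, 0, 1, 1],
                      [0, 1, 0, 0], [0, 1, 1, 0])
print(score)  # 0.875
```

Under a metric of this shape, shrinking the training set to make forgetting easier would hurt the utility term, which is the point the FAQ line is making.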

Eleni Triantafillou

Sep 12, 2023, 7:24:50 AM
to Bhargav Kowshik, gzwf...@gmail.com, unlearning-challenge
Hi,

Thank you for your questions!

re: different networks: all submissions will use the same network architecture. We agree that understanding the interplay of model architectures and unlearning algorithms is an interesting topic for further research, but this is out of the scope of (this iteration of) our competition. 

re: training sets: I'm not sure I understand exactly what you mean here. To clarify, all participants' code will have the same inputs available.

re: accuracy: since, as I mentioned above, all participants will use the same architecture, I think this is no longer a concern.

By the way, our competition is now live on Kaggle where you can find more info on the setup used and how to make a submission (see our starting notebook too and the document we wrote that explains our evaluation procedure and gives more details about our setup).

Hope this helps! Please let me know if you have any other questions.

Thanks!
