Dear organizers,
Our paper proposes a benchmark of six datasets to evaluate methods originally designed for scenarios where few labeled examples are available. Indeed, all datasets contain fewer than 100 samples per class.
Among them, we also include a sub-sampled version of the ImageNet training set comprising only 50 images per class. This variant is definitely challenging for modern image classifiers (maximum accuracy 46.36%). We therefore believe our work could foster a useful discussion around training on ImageNet with few samples, a setting that is rarely explored. However, before submitting our paper to Track 3 as it stands, we were wondering whether our article is nevertheless eligible, given that its experiments cover other datasets in addition to ImageNet.
Best regards,
Lorenzo Brigato