Ranking Final (Test) Phase


alessandro.raganato

Feb 6, 2023, 2:54:55 PM
to vwsd
Dear all,

Thank you all for participating!
Please find below the link to the results of the Final (Test) Phase:

https://docs.google.com/spreadsheets/d/1JJ2wezNN56bPOKge3fJ5Icvh5cB7GD9HojR7S1ly934/edit?usp=sharing

If you find any errors or missing information in the results spreadsheet, please let us know as soon as possible.
We look forward to learning more about all your systems!

You can find more information on how to submit the paper on the official SemEval 2023 webpage:
https://semeval.github.io/SemEval2023/

Paper submission due 28 February 2023


Best,
the organizers


Yow-Ting Shiue

Feb 17, 2023, 9:11:02 PM
to vwsd
Dear organizers,

Would it be possible to get more information about the "baseline organizers" approach?
For our system paper, we believe a comparison between our approaches and your baseline would be a useful component of the analysis and discussion.

Thanks!

Yow-Ting Shiue
PhD Student
Department of Computer Science
University of Maryland, College Park

asahi ushio

Feb 21, 2023, 6:43:23 AM
to vwsd
Hi Yow-Ting,

Thanks for the question! Given a single instance consisting of a query phrase (e.g. `andromeda tree`) and multiple candidate images, the baseline first uses CLIP (https://huggingface.co/openai/clip-vit-large-patch14-336 for English and https://huggingface.co/sentence-transformers/clip-ViT-B-32-multilingual-v1 for non-English) to obtain embeddings of the query and of all the candidate images. It then computes the cosine similarity between the query embedding and each image embedding, and the image with the highest similarity is taken as the model's prediction.
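The ranking step described above can be sketched as follows. This is a minimal numpy sketch of the similarity-and-argmax logic only; it assumes the query and image embeddings have already been produced by the CLIP models linked above (the toy vectors below are illustrative placeholders, not real CLIP outputs):

```python
import numpy as np

def rank_candidates(query_emb, image_embs):
    """Rank candidate images by cosine similarity to the query embedding.

    query_emb:  1-D array (e.g. the CLIP text embedding of the query phrase)
    image_embs: 2-D array, one row per candidate image embedding
    Returns candidate indices sorted from most to least similar.
    """
    q = query_emb / np.linalg.norm(query_emb)
    imgs = image_embs / np.linalg.norm(image_embs, axis=1, keepdims=True)
    sims = imgs @ q            # cosine similarity of the query to each image
    return np.argsort(-sims)   # highest similarity first

# Toy embeddings standing in for CLIP outputs (hypothetical values).
query = np.array([1.0, 0.0, 0.0])
images = np.array([
    [0.0, 1.0, 0.0],   # orthogonal to the query
    [0.9, 0.1, 0.0],   # closest to the query -> predicted image
    [0.5, 0.5, 0.0],
])
ranking = rank_candidates(query, images)
print(ranking[0])  # index of the predicted image -> 1
```

The prediction is simply `ranking[0]`; keeping the full ranking also lets you compute rank-based metrics such as MRR over the candidate list.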

Please check the following repository, where we have put the full implementation of our baseline. You can run the baseline on the dataset yourself and evaluate it on the test set.
Asahi