Camera-ready submission required

yifan jiang

Mar 31, 2024, 11:13:13 PM
to BrainTeaser
Hey guys, 

Just a note that the camera-ready submission deadline is April 1st. We are still waiting for the camera-ready submission from some teams. Please update your final submission in the console.

Here are some notes for your reference while preparing the camera-ready submission (we cannot accept your paper if these issues are not resolved, per the SemEval instructions):
1. Make sure you cite both the original dataset paper (https://aclanthology.org/2023.emnlp-main.885/) and the SemEval task description paper. This is important because our SemEval task uses 20% of the original dataset as test data and allows any approach, rather than only the zero-shot evaluation reported in the original paper's tables. Since the task description paper will also cite your paper, we need to tell a consistent story to avoid further confusion. The BibTeX entries are provided below (see the short citation sketch after this list for an example of citing both):
Original Dataset Paper:
@inproceedings{jiang-etal-2023-brainteaser,
    title = "{BRAINTEASER}: Lateral Thinking Puzzles for Large Language Models",
    author = "Jiang, Yifan and Ilievski, Filip and Ma, Kaixin and Sourati, Zhivar",
    editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika",
    booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
    month = dec,
    year = "2023",
    address = "Singapore",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2023.emnlp-main.885",
    doi = "10.18653/v1/2023.emnlp-main.885",
    pages = "14317--14332",
    abstract = "The success of language models has inspired the NLP community to attend to tasks that require implicit and complex reasoning, relying on human-like commonsense mechanisms. While such vertical thinking tasks have been relatively popular, lateral thinking puzzles have received little attention. To bridge this gap, we devise BrainTeaser: a multiple-choice Question Answering task designed to test the model{'}s ability to exhibit lateral thinking and defy default commonsense associations. We design a three-step procedure for creating the first lateral thinking benchmark, consisting of data collection, distractor generation, and generation of adversarial examples, leading to 1,100 puzzles with high-quality annotations. To assess the consistency of lateral reasoning by models, we enrich BrainTeaser based on a semantic and contextual reconstruction of its questions. Our experiments with state-of-the-art instruction- and commonsense language models reveal a significant gap between human and model performance, which is further widened when consistency across adversarial formats is considered. We make all of our code and data available to stimulate work on developing and evaluating lateral thinking models."
}

SemEval Task Description Paper (Placeholder):
@inproceedings{jiang-semeval-2024-brainteaser,
    title = "SemEval-2024 Task 9: BRAINTEASER: A Novel Task Defying Common Sense",
    author = "Jiang, Yifan and Ilievski, Filip and Ma, Kaixin",
    booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation",
    year = "2024",
    publisher = "Association for Computational Linguistics"
}
2. Revise the paper based on the reviewers' suggestions.
3. Make sure you sign the copyright agreement form in the console.
4. The camera-ready version of the paper can be up to 6 pages.
5. Papers should not have page numbers.
6. Papers should fit within margin requirements.
7. After receiving the camera-ready submissions, we will check the papers and notify authors of any necessary fixes.
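
In case it helps, here is a minimal sketch of how both entries above could be cited in your paper. It assumes the usual *ACL LaTeX template (which loads natbib, so \citep is available) and a bibliography file named custom.bib; adjust the key names, file name, and bibliography style to match your own setup:

% Add both BibTeX entries above to your .bib file (e.g. custom.bib),
% then cite them in the paper body:
Our system is evaluated on SemEval-2024 Task 9~\citep{jiang-semeval-2024-brainteaser},
which builds on the BRAINTEASER dataset~\citep{jiang-etal-2023-brainteaser}.

% Make sure the bibliography is actually emitted at the end of the paper:
\bibliographystyle{acl_natbib}
\bibliography{custom}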

Best,
Yifan Jiang