Fwd: Updating your SemEval 2017 camera-ready bibliography

Daniel Cer

May 10, 2017, 1:33:00 PM
to STS SemEval
Hi STS participants!

If you submitted a paper to the 2017 SemEval workshop, you should have already received an e-mail similar to the one below.

As recommended by the SemEval workshop organizers, we're going to make sure the task description paper cites each of the 2017 task 1 system description papers.

In turn, each system description paper should be updated to cite the 2017 STS task description paper using the BibTeX below. Please try to make the change as soon as possible. The workshop organizers would like all of the updates to be in by Monday, May 15th:
@InProceedings{cer-EtAl:2017:SemEval,
  author    = {Cer, Daniel  and  Diab, Mona  and  Agirre, Eneko  and  Lopez-Gazpio, Inigo  and  Specia, Lucia},
  title     = {SemEval-2017 Task 1: Semantic Textual Similarity Multilingual and Crosslingual Focused Evaluation},
  booktitle = {Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017)},
  month     = {August},
  year      = {2017},
  address   = {Vancouver, Canada},
  publisher = {Association for Computational Linguistics},
  pages     = {1--14},
  abstract  = {Semantic Textual Similarity (STS) measures the meaning similarity of sentences.
	Pairwise scores are on an ordinal scale, conveying both a degree of similarity
	and a categorical interpretation. Applications include machine translation
	(MT), summarization, generation, question answering (QA), short answer grading,
	semantic search, dialog and conversational systems. While prior years
	emphasized English, the 2017 task focuses on the multilingual and cross-lingual
	setting. We find performance lags in less well studied STS languages and
	language pairings (e.g., Arabic). MT quality estimation (MTQE) data used for
	one of the tracks highlights the importance and difficulty of making fine
	grained distinctions. The task obtained strong participation from 31 teams,
	with 17 participating in {\em all of the language tracks} for 2017. Research
	on sentence level similarity and semantic representations widely makes use of
	STS data. We introduce a new shared training and evaluation set, {\em STS
	Benchmark}, a multi-year selection of English STS pairs (2012-2017), to
	facilitate and encourage consistent and interpretable model assessments.},
  url       = {http://www.aclweb.org/anthology/S17-2001}
}
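
If you're unsure where the citation goes, here is a minimal sketch of the relevant LaTeX, assuming the entry above is saved to a file such as semeval2017.bib (the filename is just an example) and that your paper uses the ACL 2017 templates, which ship with the acl_natbib bibliography style:

% Somewhere in the body of your system description paper:
We participated in SemEval-2017 Task 1 \cite{cer-EtAl:2017:SemEval}.

% At the end of the document:
\bibliographystyle{acl_natbib}
\bibliography{semeval2017}  % semeval2017.bib contains the entry above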
Dan
--------------------------------
STS 2017 Co-Organizer

---------- Forwarded message ---------
From: <semeval-o...@googlegroups.com>
Date: Tue, May 9, 2017 at 5:15 PM
Subject: Updating your SemEval 2017 camera-ready bibliography
To: <dani...@acm.org>


Dear Daniel Cer,

We have updated the official SemEval 2017 BibTeX based on all feedback that we have received. You can find the updated version here:

http://nlp.arizona.edu/SemEval-2017.bib

If you would like to update the references section of your camera-ready draft, please do so and submit your revised camera-ready no later than Mon 15 May 2017.

https://www.softconf.com/acl2017/semeval/user/scmd.cgi?scmd=aLogin&passcode=193X-F6G4B9E5D4

If you are a task participant, we recommend that you cite the task description paper. If you are a task organizer, we recommend that you cite each of your participants.

Regards,
The SemEval 2017 Organizers