In order to use your parser in the entailment task, I recommend
parsing both the text and the hypothesis sentences, and writing a
program that decides on the entailment by comparing these parses. We
will not make such a program available. However, you may want to check
out systems that have participated in RTE, the sample
conll-entailments.pl code on the task website, and the PETE Guide that
explains the entailment generation process. In particular, note that
entailments usually focus on the syntactic relation between two
content words, and that sometimes dummy words like "somebody" and
"something" have been added to make the hypothesis sentence
grammatical. I am forwarding this message to the semeval-pete
discussion group in case people have other suggestions.
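To make the suggested approach concrete, here is a minimal sketch of such a decision program. It assumes both sentences have been reduced to dependency triples of the form (head, relation, dependent); the triple format, the wildcard treatment of the dummy words, and the function names are illustrative assumptions, not the official PETE evaluation procedure:

```python
# Sketch: decide entailment by checking whether each dependency relation
# in the hypothesis parse is also present in the text parse.
# Assumption: parses are given as (head, relation, dependent) triples.

DUMMY_WORDS = {"somebody", "something"}  # placeholders added to hypotheses

def relation_matches(hyp_rel, text_rel):
    """A hypothesis triple matches a text triple if the relation label
    agrees and each word agrees, with dummy words matching any word."""
    h_head, h_label, h_dep = hyp_rel
    t_head, t_label, t_dep = text_rel
    if h_label != t_label:
        return False
    head_ok = h_head in DUMMY_WORDS or h_head == t_head
    dep_ok = h_dep in DUMMY_WORDS or h_dep == t_dep
    return head_ok and dep_ok

def entails(text_rels, hyp_rels):
    """Return True if every hypothesis relation is supported by some
    relation in the text parse."""
    return all(
        any(relation_matches(h, t) for t in text_rels)
        for h in hyp_rels
    )

# Example: "John kissed Mary." entails "Somebody kissed Mary."
text = [("kissed", "nsubj", "John"), ("kissed", "dobj", "Mary")]
hyp = [("kissed", "nsubj", "somebody"), ("kissed", "dobj", "Mary")]
print(entails(text, hyp))  # True
```

A real system would of course need to normalize word forms and handle relations that hold only indirectly (e.g. through empty categories or coordination), but this shows the core idea of focusing on the relation between two content words.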
On Mon, Mar 15, 2010 at 1:25 PM, Tejaswini Deoskar <t.de...@uva.nl> wrote:
> Dear Dr. Deniz Yuret,
> I am interested in the Parser Training and Evaluation using Textual
> Entailment task that you have set up, and hope to be able to submit a
> system.
> I am new to the task of entailment (I have not participated in previous
> RTE tasks) and have a few basic questions, if you will be kind enough to
> answer them.
> I have a parser that outputs parses in the Penn Treebank style (including
> empty categories). I would like to test how it does on entailments in the
> development set that you have provided. What do I have to do in order to do
> this? The PETE task description says somewhere that a program for
> evaluating entailments on both constituency and dependency parses will be
> made available; is this correct? (I don't see it on the website.)
> If I have to write my own program that will take a parse and test an
> entailment against it, are there any resources that I should know about?
> Thanks a lot in advance,
> Tejaswini Deoskar
> Institute for Logic, Language and Computation
> University of Amsterdam
> Tel: +31-20-525-8251
> Email: t.de...@uva.nl