
Article: Learning Content Selection Rules for Generating Object Descriptions in Dialogue

From: jai...@ptolemy.arc.nasa.gov
Date: Jul 29, 2005, 9:09:20 PM
JAIR is pleased to announce the publication of the following article:

Jordan, P.W. and Walker, M.A. (2005)
"Learning Content Selection Rules for Generating Object Descriptions in Dialogue",
Volume 24, pages 157-194.

For quick access via your WWW browser, use this URL:
http://www.jair.org/abstracts/jordan05a.html

Abstract:
A fundamental requirement of any task-oriented dialogue system is the
ability to generate object descriptions that refer to objects in the
task domain. The subproblem of content selection for object
descriptions in task-oriented dialogue has been the focus of much
previous work, and a large number of models have been proposed. In this
paper, we use the annotated COCONUT corpus of task-oriented design
dialogues to develop feature sets based on Dale and Reiter's (1995)
incremental model, Brennan and Clark's (1996) conceptual pact model,
and Jordan's (2000b) intentional influences model, and use these
feature sets in a machine learning experiment to automatically learn a
model of content selection for object descriptions. Since Dale and
Reiter's model requires a representation of discourse structure, the
corpus annotations are used to derive a representation based on Grosz
and Sidner's (1986) theory of the intentional structure of discourse,
as well as two very simple representations of discourse structure
based purely on recency. We then apply the rule-induction program
RIPPER to train and test the content selection component of an object
description generator on a set of 393 object descriptions from the
corpus. To our knowledge, this is the first reported experiment with a
trainable content selection component for object description
generation in dialogue. Three separate content selection models, each
based on one of the three theoretical models, independently achieve
accuracies significantly above the majority-class baseline (17%) on
unseen test data, with the intentional influences model (42.4%)
performing significantly better than either the incremental model
(30.4%) or the conceptual pact model (28.9%). However, the
best-performing models combine all the feature sets, achieving
accuracies near 60%.
Surprisingly, a simple recency-based representation of discourse
structure does as well as one based on intentional structure. To our
knowledge, this is also the first empirical comparison of a
representation of Grosz and Sidner's model of discourse structure with
a simpler model for any generation task.
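
As a rough illustration of the evaluation described above, the Python
sketch below trains a classifier on per-description feature vectors and
compares its held-out accuracy with a majority-class baseline. RIPPER
itself is not used here; a scikit-learn decision tree stands in for the
rule learner, and the feature vectors and labels are randomly generated
placeholders rather than the COCONUT annotations.

# Illustrative sketch only, not code from the paper: train a learner on
# per-description feature vectors and compare its test-set accuracy with
# a majority-class baseline. A decision tree stands in for RIPPER, and
# the data below are random placeholders, not the COCONUT corpus.
import numpy as np
from sklearn.dummy import DummyClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(393, 8))   # toy binary discourse/task features
y = rng.integers(0, 6, size=393)        # toy content-selection classes

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

baseline = DummyClassifier(strategy="most_frequent").fit(X_train, y_train)
learner = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)

print("majority-class baseline:", accuracy_score(y_test, baseline.predict(X_test)))
print("learned model:          ", accuracy_score(y_test, learner.predict(X_test)))

In the paper's setting, the rows would correspond to the 393 annotated
object descriptions and the columns to features derived from the
incremental, conceptual pact, and intentional influences models.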

The article is available via:

-- comp.ai.jair.papers (also see comp.ai.jair.announce)

-- World Wide Web: The URL for our World Wide Web server is
http://www.jair.org/
For direct access to this article and related files try:
http://www.jair.org/abstracts/jordan05a.html

-- Anonymous FTP from Carnegie-Mellon University (USA):
ftp://ftp.cs.cmu.edu/project/jair/volume24/jordan05a.ps
The compressed PostScript file is named jordan05a.ps.Z

For more information about JAIR, visit our WWW or FTP sites, or
contact jai...@isi.edu

--
Steven Minton
JAIR Managing Editor
