[cfp] The Uncertainty in Natural Intelligence Workshop at UAI 2012


Joseph Austerweil

Apr 30, 2012, 3:51:28 PM
to ml-...@googlegroups.com
The Uncertainty in Natural Intelligence Workshop at UAI 2012

Call for participation
Workshop date: August 18, 2012
Location: Catalina Island, CA, USA (near Los Angeles)
Submission deadline: July 1, 2012 at 11:59pm PST
 
Organizers:
Noah Goodman (Stanford University)
Thomas Griffiths (University of California, Berkeley)
Josh Tenenbaum (Massachusetts Institute of Technology)
Joseph Austerweil (University of California, Berkeley)
 
Invited speakers:
Mark Steyvers (University of California, Irvine)
Xiaojin (Jerry) Zhu (University of Wisconsin-Madison)
Steve Piantadosi (University of Rochester)
Keith Holyoak (University of California, Los Angeles)
 
Workshop format:
The full-day workshop consists of talks from the invited speakers and organizers, poster
spotlights (very brief talks), a session of contributed posters, and a discussion among
the audience, speakers, and organizers.
 
Submission instructions:
Please email one-page abstracts to uaihumanle...@gmail.com by July 1, 2012 at
11:59pm PST. All abstracts will be reviewed by the organizing committee, and notifications
will be sent out by July 15, 2012.
 
Important dates:
Deadline for poster submissions: July 1, 2012 at 11:59pm PST
Notification: July 15, 2012
Workshop date: August 18, 2012
 
Workshop description:
Some of the hardest problems in artificial intelligence, such as feature and concept
learning, are solved seemingly effortlessly by people. These are problems of inductive
inference, which are difficult because many solutions are consistent with the information
explicitly given in the problem (e.g., solving ab = 2 for the value of a without any
additional information: any nonzero a is consistent, provided b = 2/a).
People solve problems of inductive inference by favoring solutions that are consistent
with their prior knowledge and penalizing those that are not. Bayesian inference provides
a formal calculus for how people should update their prior belief in each solution in
light of their observations, with prior beliefs formulated as a probability distribution
over the unobserved solutions. This methodology has provided a successful paradigm for
exploring formal accounts of how people solve inductive problems.
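
As a toy illustration of this approach (a minimal sketch, not workshop material; the
hypothesis space, prior, and noise model below are all illustrative assumptions), here
is a short Python snippet that places a prior over candidate values of a in the ab = 2
example above and updates it with Bayes' rule after observing a noisy measurement of b:

# Toy Bayesian update for the under-determined problem a*b = 2.
# The hypotheses, prior, and likelihood here are illustrative assumptions.
import math

# A few candidate solutions for a; a*b = 2 alone cannot distinguish them.
hypotheses = [0.5, 1.0, 2.0, 4.0]

# Prior knowledge: favor "simpler" (integer-valued) hypotheses.
prior = {a: (2.0 if a == int(a) else 1.0) for a in hypotheses}
total = sum(prior.values())
prior = {a: p / total for a, p in prior.items()}

def likelihood(b_obs, a, sigma=0.1):
    # Gaussian likelihood of observing b_obs when the true b is 2/a.
    b = 2.0 / a
    return math.exp(-((b_obs - b) ** 2) / (2 * sigma ** 2))

def posterior(b_obs):
    # Bayes' rule: posterior(a) is proportional to prior(a) * likelihood(b_obs | a).
    unnorm = {a: prior[a] * likelihood(b_obs, a) for a in hypotheses}
    z = sum(unnorm.values())
    return {a: w / z for a, w in unnorm.items()}

# Observing b near 1 shifts nearly all belief onto a = 2.
print(posterior(1.0))

A single observation near b = 1 concentrates the posterior on a = 2, showing how prior
beliefs plus evidence resolve a problem that the explicit information leaves open.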
Using Bayesian inference to formally represent human solutions to inductive problems not
only provides a computational explanation of human behavior, but also offers novel
methods for solving difficult problems in artificial intelligence. In this workshop, we
present recent computational successes in human learning as a source of new artificial
intelligence algorithms by exploiting the common computational language of the two
communities: probability theory. This workshop is a forum for researchers in artificial
intelligence, machine learning, and human learning, all interested in the same inductive
problems, to discuss computational methodologies, insights, and research questions. We
hope to foster a dialogue that leads to a greater understanding of human learning and
further unites these two areas of research.