CfP: ICML Workshop on Reliable Machine Learning in the Wild

Jacob Steinhardt

Apr 5, 2016, 2:21:41 AM
Call for Papers
Reliable Machine Learning in the Wild
Date: June 23, 2016
Location: New York City (part of the ICML 2016 workshops)
Deadline: May 1st, 2016
Website: https://sites.google.com/site/wildml2016/

How can we be confident that a system that performed well in the past will continue to do so
in the future, in the presence of novel and potentially adversarial input distributions?
Answering this question is critical for high-stakes applications such as autonomous
driving and automated surgical assistants, as well as for building reliable large-scale
machine learning systems. This workshop explores approaches that are principled or
can provide performance guarantees, ensuring AI systems are robust and beneficial in
the long run. We will focus on three aspects -- robustness, adaptation, and monitoring --
that can aid us in designing and deploying reliable machine learning systems.

Some possible questions touching on each of these categories are given below,
though we also welcome submissions that do not directly fit into these categories.
We are particularly interested in situations where systems must deal with very
novel inputs or where the cost of failure is high.

Robustness: how can we make a system robust to novel or potentially adversarial
inputs? In situations where failure has a high cost, how can we successfully identify
bad inputs and take appropriately conservative actions? What can be done if the
training data itself is potentially a function of system behavior or of other agents in
the environment?

Adaptation: how can machine learning systems detect and adapt to changes in
their environment, especially large changes (e.g. zero distributional overlap between
train and test, mis-specified models, shifts in the underlying prediction function)?
How should an autonomous agent act when confronting radically new contexts,
especially given uncertainty about the loss function?

Monitoring: how can humans meaningfully monitor large-scale machine learning
systems, such that they can judge for themselves whether the system continues
to perform well? How can they be given tools to meaningfully intervene when problems
arise?

Organizers: Jacob Steinhardt (Stanford), Tom Dietterich (OSU), Percy Liang (Stanford), Andrew Critch (MIRI), Jessica Taylor (MIRI), Adrian Weller (Cambridge)