1st CFP: NIPS 2017 "Transparent and Interpretable Machine Learning in Safety Critical Environments" Workshop

alfredo vellido
Sep 19, 2017, 1:01:39 PM
to Machine Learning News
*** apologies for cross-posting ***

FIRST CALL FOR PAPERS
=====================

NIPS 2017 Workshop on Transparent and Interpretable Machine Learning in Safety Critical Environments
https://sites.google.com/view/timl-nips2017

Friday, December 8, 8:00am-6:30pm
Long Beach Convention Center, Long Beach, CA, USA
=================================================

IMPORTANT DATES
Submission deadline: 29th of October, 2017
Acceptance notification: 10th of November, 2017
Camera ready due: 26th of November, 2017
NOTE: registration is limited. The main conference is already sold out, although workshop registrations are still available.

SUBMISSION
Through the CMT system: see the workshop site above
=================================================

OVERVIEW
The use of machine learning has become pervasive in our society, from specialized scientific data analysis to industry intelligence and practical applications with a direct impact on the public domain. This impact raises social issues, including privacy, ethics, liability and accountability.
By way of example, the European Union's General Data Protection Regulation, a trans-national law adopted in April 2016, will go into effect in May 2018. It includes an article on "Automated individual decision making, including profiling" that establishes the right of citizens to receive an explanation for algorithmic decisions that may affect them. This could jeopardize the use of any machine learning method that is not comprehensible and interpretable, at least in applications that affect the individual.
This situation may affect safety critical environments in particular and puts model interpretability at the forefront as a key concern for the machine learning community. In this context, the workshop aims to discuss the use of machine learning in safety critical environments, with special emphasis on three main application domains:
- Healthcare
Decision making (diagnosis, prognosis) in life-threatening conditions
Integration of medical experts' knowledge in machine learning-based medical decision support systems
Critical care and intensive care units
- Autonomous systems
Mobile robots, including autonomous vehicles, in human-crowded environments
Human safety when collaborating with industrial robots
Ethics in robotics and responsible robotics
- Compliance and liability in data-driven industries
Preventing unintended and harmful behaviour in machine learning systems
Machine learning and the right to an explanation in algorithmic decisions
Privacy and anonymity vs. interpretability

We encourage submissions of papers on machine learning applications in safety critical domains, with a focus on healthcare and biomedicine. Research topics of interest include, but are not restricted to, the following:
- Feature extraction/selection for more interpretable models
- Reinforcement learning and safety in AI
- Interpretability of neural network architectures
- Learning from adversarial examples
- Transparency and its impact
- Trust in decision making
- Integration of medical experts' knowledge in machine learning-based medical decision support systems
- Decision making in critical care and intensive care units
- Human safety in machine learning systems
- Ethics in robotics
- Privacy and anonymity vs. interpretability in automated individual decision making
- Interactive visualisation and model interpretability

ORGANIZERS
Alessandra Tosi, Mind Foundry (UK)
Alfredo Vellido, Universitat Politècnica de Catalunya, UPC BarcelonaTech (Spain)
Mauricio Álvarez, University of Sheffield (UK)

SPEAKERS AND PANELLISTS
FINALE DOSHI-VELEZ - Assistant Professor of Computer Science, Harvard
BARBARA HAMMER - Professor at CITEC Centre of Excellence, Bielefeld University
SUCHI SARIA - Assistant Professor, Johns Hopkins University
DARIO AMODEI - Research Scientist, OpenAI
ADRIAN WELLER - Computational and Biological Learning Lab, University of Cambridge and the Alan Turing Institute