CFP - NeurIPS2022 5th Robot Learning Workshop


Markus Wulfmeier

Aug 15, 2022, 3:01:34 AM
to ml-...@googlegroups.com
Dear friends and colleagues,

I'm excited to share that we'll be organising the 5th iteration of the Robot Learning Workshop at NeurIPS 2022. It should be a great opportunity to learn about recent advances and to discuss your latest ideas and work. We hope this opportunity is useful to you and look forward to receiving innovative and impactful contributions.

One more thing: thanks to our sponsors, we're providing funding for members of underrepresented groups at NeurIPS, aimed in particular at attending and contributing to the workshop. Please feel free to share this with anyone who'd benefit from additional support.

Best wishes,
Markus, on behalf of the organising committee


## Call for Papers ##

NeurIPS'22 5th Robot Learning Workshop: Trustworthy Robotics 

http://www.robot-learning.ml/

Submission deadline: 22 September 2022 (AoE)

## Overview ##

Machine learning (ML) has been one of the premier drivers of recent advances in robotics research and has made its way into impacting several real-world robotic applications in unstructured and human-centric environments, such as transportation, healthcare, and manufacturing. At the same time, robotics is a key motivation for numerous research problems in artificial intelligence research, from data-efficient algorithms to robust generalization of decision models. However, there are still considerable obstacles to fully leveraging state-of-the-art ML in real-world robotics. For capable ML-equipped robots, guarantees on the robustness and analysis of the social implications of these tools are required for their utilization in human-facing robotic domains (e.g. autonomous vehicles, and tele-operated or assistive robots).

To support the development of robots that are safely deployable among humans, the field must consider trustworthiness as a central aspect in the development of robot learning systems. Unlike many other applications of ML, the combined complexity of physical robotic platforms and learning-based perception-action loops presents unique technical challenges. These challenges include concrete problems such as very high performance requirements, explainability, predictability, verification, uncertainty quantification, and robust operation in dynamically distributed, open-set domains. Since robots are developed for use in human environments, in addition to these technical challenges, we must also consider the social aspects of robotics such as privacy, transparency, fairness, and algorithmic bias. Both technical and social challenges also present opportunities for robotics and ML researchers alike. Contributing to advances in the aforementioned sub-fields promises to have an important impact on real-world robot deployment in human environments, building towards robots that use human feedback, indicate when their model is uncertain, and are safe to operate autonomously in safety-critical settings such as healthcare and transportation.

This year’s robot learning workshop aims at discussing unique research challenges from the lens of trustworthy robotics. We adopt a broad definition of trustworthiness that highlights different application domains and the responsibility of the robotics and ML research communities to develop “robots for social good.” Bringing together experts with diverse backgrounds from the ML and robotics communities, the workshop will discuss new perspectives on trust in the context of ML-driven robot systems.

## Topics and Objectives ##

Topics of interest include but are not limited to:

  • uncertainty estimation in robotics;

  • explainable robot learning;

  • domain adaptation and distribution shift in robot learning;

  • multi-modal trustworthy sensing and sensor fusion;

  • safe deployment for applications such as agriculture, space, science, and healthcare;

  • privacy-aware robotic perception;

  • information system security in robot learning;

  • learning from offline data and safe online learning;

  • simulation-to-reality transfer for safe deployment;

  • robustness and safety evaluation;

  • certifiability and performance guarantees;

  • safe robot learning with humans in the loop;

  • algorithmic bias in robot learning;

  • quantification and adherence to social norms;

  • robotics for social good;

  • ethical robotics.

## Submissions ##

Submission website: https://cmt3.research.microsoft.com/RLW2022/

Email: neurips...@robot-learning.ml 

Submissions should use the NeurIPS template and be 4 pages long (plus as many pages as necessary for references). The reviewing process will be double-blind, following the same standards as the main conference.

Accepted papers and any supplementary material will be made available on the workshop website. However, this does not constitute an archival publication, and no formal workshop proceedings will be produced; contributors are therefore free to publish their work in archival journals or conferences. In the spirit of providing useful feedback, we will not accept submissions that have already been accepted to other conference or journal proceedings at the time of submission.

## Awards and Funding ##

We will likely be able to award prizes for the best papers. In addition, we hope to sponsor registration fees for presenting authors and some attendees, focusing on participants from underrepresented minorities in the field.

Apply for funding: https://forms.gle/G8zm6mtSY4r85fsf7

## Deadlines and Dates ##

  • Submission deadline: 22 September 2022 (Anywhere on Earth)

  • Notification: 14 October 2022 (Anywhere on Earth)

  • Funding Request Deadline: 21 October 2022 (Anywhere on Earth)

  • Workshop (virtual): 9 December 2022

## Organizers ##

Alex Bewley (Google Research, Zurich), Anca Dragan (UC Berkeley), Igor Gilitschenski (University of Toronto), Emily Hannigan (Columbia University), Masha Itkina (Stanford University, Toyota Research Institute (TRI)), Hamidreza Kasaei (University of Groningen, Netherlands), Nathan Lambert (HuggingFace), Julien Perez (Naver Labs Europe), Ransalu Senanayake (Stanford University), Jonathan Tompson (Google Research, Mountain View), Markus Wulfmeier (Google DeepMind, London)

## Advisory Board ##

Roberto Calandra (Facebook AI Research), Jens Kober (TU Delft, Netherlands), Danica Kragic (KTH), Fabio Ramos (NVIDIA, University of Sydney), Vincent Vanhoucke (Google Research, Mountain View)
