Summary: Full automation in robotics is currently out of reach in many application domains (e.g. healthcare and assisted mobility, extreme environments) due to the complexity and variety of tasks, and the unpredictable nature of the world and of the humans sharing the space. As a result, robots still require human help in human environments (think of a robot vacuum cleaner, which continuously needs assistance while trying to complete a relatively simple cleaning task). To operate in this setting, a robot needs to be equipped with effective mechanisms for switching between autonomy levels, which are ideally defined on a continuous scale, as well as good ways to communicate its plans to nearby humans.
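As a minimal illustration of what a continuous autonomy scale could look like, the sketch below blends human and robot commands by a scalar autonomy level and adjusts that level from perception confidence. All names and thresholds here are hypothetical, for illustration only, and do not describe the project's actual method.

```python
# Hypothetical sketch: shared control on a continuous autonomy scale
# alpha in [0, 1], where 0 = full teleoperation and 1 = full autonomy.
# The function names, the confidence threshold, and the update rate are
# illustrative assumptions, not part of any existing system.

def arbitrate(human_cmd, robot_cmd, alpha):
    """Linearly blend two velocity commands by the autonomy level alpha."""
    alpha = max(0.0, min(1.0, alpha))  # clamp to the valid range [0, 1]
    return tuple((1.0 - alpha) * h + alpha * r
                 for h, r in zip(human_cmd, robot_cmd))

def update_autonomy(alpha, confidence, rate=0.1):
    """Drift autonomy up when perception confidence is high, down otherwise."""
    target = 1.0 if confidence > 0.8 else 0.0
    return alpha + rate * (target - alpha)

# Example: at mid-scale autonomy the two commands are blended equally.
print(arbitrate((1.0, 0.0), (0.0, 1.0), 0.5))  # -> (0.5, 0.5)
```

Because alpha is continuous rather than a discrete mode switch, handovers between operator and robot can be made gradual, which is one way to realise the "fluent arbitration" this project targets.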
The aim of this PhD project is to 1) learn how to perform tasks efficiently with and around humans through task and motion planning, 2) equip robots with multimodal communication tools to convey their states and decisions, so that their operation is explainable, and 3) develop fluent arbitration mechanisms for adopting appropriate levels of autonomy through perception and decision making. Complex real-life application scenarios will be studied in this project, such as interacting with health personnel or patients in a hospital ward, or inspection in complex workplace environments.