[jobs] PhD position in Affective communication in human robot interaction: behavioral and neural perspective @ Italian Institute of Technology (IIT)


netsp...@gmail.com

Jun 24, 2024, 10:12:55 AM
to Machine Learning News

PhD Position in Affective communication in human robot interaction: behavioral and neural perspectives @ Italian Institute of Technology (IIT)

Within the Doctoral School on Bioengineering and Robotics (https://biorob.phd.unige.it/how-to-apply), the PhD Program for the curriculum “Cognitive Robotics, Interaction and Rehabilitation Technologies” provides interdisciplinary training at the interface between technology and the life sciences. The CONTACT Research Line is coordinated by Alessandra Sciutti, who has extensive experience in cognitive architectures for human-robot interaction.

Description: During social interactions, observing actions allows us to understand the attitudes of others. Humans perform actions with different forms that express their positive or negative mood or internal state: for example, watching someone greet us, we can often tell whether that person is happy and feels at ease. Perceiving and generating these forms of communication could be a valuable capability for future robots, allowing them to adopt the right attitude in different scenarios, such as an authoritative role in security contexts or polite behavior in clinical ones, and thereby influence human behavior.

The aim of this project is to study the kinematic features characterizing human actions performed with different forms (i.e., gentle, enthusiastic, annoyed, rude) and to enable the iCub humanoid robot both to express these forms in its own behaviour and to detect them from visual observation of human actions. To quantitatively evaluate the impact on humans from behavioral and neural points of view, the project will use real-time functional Magnetic Resonance Imaging (fMRI): several robotic actions will be presented to healthy participants in order to study, in real time, the neural activity involved in processing these robotic actions. The research project will be carried out in collaboration with the University of Parma, which is equipped with an advanced 3 Tesla MR scanner with real-time fMRI capability. The work will build on, and potentially improve, an existing software module available on the iCub robot that supports the generation and detection of actions with different properties.

The successful candidate will: 1) participate in the generation of iCub robot actions characterized by different kinematic features and forms; 2) participate in the development of algorithms to detect action forms; 3) develop and test cognitive paradigms coupled with cortical and subcortical real-time fMRI recordings; and 4) compute brain activity maps from fMRI data.

Requirements: A degree in Bioengineering, Computer Science, Computer Engineering, Robotics, or a related discipline; an aptitude for problem solving; and C++ programming skills. We expect the candidate to develop skills in signal processing and computational modelling. Excellent analytical skills (MATLAB) are also required.

Contacts: Applicants are strongly encouraged to contact the prospective tutors before submitting their application: giuseppe...@iit.it, radoslaw.n...@dibris.unige.it, alessandr...@iit.it

Application deadline: the 2024 Doctorate First Call will close on July 9th, 2024, at 12 noon (CET).

Wenwu Wang

Jun 25, 2024, 11:44:56 AM
to Machine Learning News
Two fully funded PhD studentships are available at the Centre for Vision, Speech and Signal Processing (CVSSP), University of Surrey.
Application deadline: 1 July 2024.

Deep learning for audio-visual scene analysis:
https://www.surrey.ac.uk/fees-and-funding/studentships/deep-learning-audio-visual-scene-analysis

Audio/acoustics machine learning for intelligent sound reproduction:
https://www.surrey.ac.uk/fees-and-funding/studentships/audioacoustics-machine-learning-intelligent-sound-reproduction

Please feel free to circulate the adverts to those who might be interested. Many thanks.

Best wishes,
 
Wenwu
--
Wenwu Wang
Professor of Signal Processing and Machine Learning

Centre for Vision, Speech and Signal Processing (CVSSP)
& Surrey Institute for People Centred AI

University of Surrey
Guildford, GU2 7XH
United Kingdom
Phone: +44 (0) 1483 686039
Fax: +44 (0) 1483 686031
Email: w.w...@surrey.ac.uk

