From: Stefan Buijsman

Dear colleagues,
Artificial Intelligence is used more and more in society, from healthcare to government decisions and recruitment. Along with the rapid increase in AI adoption come growing concerns about the inherent shortcomings of these technologies (e.g., lack of robustness) and their social and ethical implications. To create AI systems that properly serve humans, it is crucial to put humans at the centre of the development process, so that the resulting system behaves in a way that fits people's values and needs. This poses new challenges both for philosophical work on values and for (responsible) technological development. What (ethical) requirements should these systems adhere to? What does such 'adherence' mean, and how can we demonstrate and validate claims about it? How can we build AI systems that humans can understand and that align their behaviour with human values? Tackling these challenges requires bridging the gap between philosophy and computer science within AI Ethics.
TU Delft has world-leading expertise in the operationalization of AI Ethics and now looks to strengthen this by hiring two three-year post-docs, one in philosophy and one in computer science, who will collaborate closely on topics of their choice within AI Ethics.
For the philosophy position, please contact Dr. Stefan Buijsman: s.n.r.b...@tudelft.nl