Research Topics
The Trustworthy and Collaborative Artificial Intelligence workshop aims to explore the dynamic interplay between humans and AI systems, emphasizing the principles and practices that foster trustworthy and effective human-AI collaboration. As AI systems increasingly permeate many aspects of our lives, their design and deployment must align with human values to ensure they are ethical, trustworthy, and effective.
We seek contributions that bridge the gap between machine intelligence and human understanding, e.g., through explainable AI techniques, and that show how machine learning paradigms such as selective prediction, active learning, and learning to defer can optimize shared decision-making. We also welcome solutions integrating human-AI monitoring protocols and interactive machine learning. Finally, we encourage insights from user studies and the design of collaborative frameworks that enhance trustworthiness and robustness in human-AI interaction. In brief, our goal is to promote the discussion and development of hybrid systems that adapt to evolving contexts while maintaining transparency and trust, augmenting human capabilities, and respecting human agency.
Topics of Interest
- Cognitive aspects of human-AI interaction
- Ethical aspects of human-AI interaction
- Legal aspects of human-AI interaction
- Human-AI interaction through explanations
- Learning to defer
- Learning to reject
- Selective prediction
- Active learning
- Human-centered methods for machine learning
- AI alignment
- Human-AI monitoring protocols
- Hybrid decision-making systems
- Trustworthy AI
- Collaborative and cooperative AI
- Interactive machine learning
- User studies
- Robustness of human-AI interaction
Submission Guidelines