Workshop Code: W1-TEACHING
Home page: https://www.ieeesmc.org/cai-2026/w1-teaching/
(Please use this code when submitting your paper to this workshop.)
As AI tools such as ChatGPT, Copilot, and education-specific agents proliferate, institutions are grappling with how to integrate them constructively, rather than reactively. These systems aim to offer students personalised, on-demand learning support, such as answering questions, recommending resources, and simulating dialogue with expert tutors.
This workshop focuses on how AI can augment teaching, assessment, and feedback practices in higher education, while ensuring pedagogical soundness, academic integrity, and student engagement. It also covers the design, development, and evaluation of AI-driven tutoring systems and learning companions in this setting.
The workshop aims to:
Showcase emerging AI applications that support lecturers and assessors (e.g., feedback generation, rubric alignment, formative assessment tools).
Explore the pedagogical impact and evidence base for AI in teaching and learning.
Address academic policy concerns such as plagiarism, AI authorship, and fairness in assessment.
Facilitate collaboration between AI researchers, educators, EdTech developers, and instructional designers.
Advance research on adaptive learning agents powered by LLMs, knowledge graphs, and dialogue systems.
Evaluate the pedagogical effectiveness and ethical considerations of deploying AI tutors at scale.
Create guidelines and toolkits for designing transparent, reliable, and culturally sensitive AI companions.
Significance: AI challenges traditional teaching models and opens opportunities for rethinking feedback loops, assessment design, and teaching support at scale. Higher education increasingly serves diverse, global cohorts with varied learning needs. AI-powered companions can address gaps in access to support, but must be aligned with curriculum goals, sensitive to student needs, and free from bias.
Topics of interest include:
AI for automated feedback, marking support, and formative assessment (see the illustrative sketch after this list)
AI-supported scaffolding and personalised tutoring agents
LLMs for learning analytics and student support
Assessment design for AI-augmented contexts (e.g., 'open-AI' assessments where students are permitted to use AI tools)
Institutional policy, ethics, and detection of inappropriate AI use
Case studies on teacher workload reduction through AI
Design principles for intelligent tutoring systems (ITS) using AI and NLP
AI-generated explanations, feedback, and Socratic dialogue
Curriculum alignment with learning outcomes
Multilingual and inclusive AI tutor systems
Risks: misinformation, hallucinations, and dependency on AI
Evaluation: learning gain, engagement, and trust
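To make the automated-feedback and rubric-alignment topics concrete, the following minimal sketch shows one possible shape for a rubric-aligned feedback tool. It is illustrative only: the rubric criteria, the prompt wording, and the call_llm placeholder are hypothetical and do not refer to any particular product or API.

    # Minimal sketch of rubric-aligned formative feedback generation (Python).
    # All names (RubricCriterion, build_feedback_prompt, call_llm) are
    # illustrative placeholders, not part of any specific tool or API.
    from dataclasses import dataclass
    from typing import List

    @dataclass
    class RubricCriterion:
        name: str        # e.g. "Argument structure"
        descriptor: str  # what good work looks like for this criterion
        weight: float    # relative weight in the overall mark

    def build_feedback_prompt(submission: str, rubric: List[RubricCriterion]) -> str:
        """Compose a prompt asking an LLM for criterion-by-criterion formative feedback."""
        criteria_text = "\n".join(
            f"- {c.name} (weight {c.weight:.0%}): {c.descriptor}" for c in rubric
        )
        return (
            "You are a teaching assistant giving formative (not graded) feedback.\n"
            f"Rubric:\n{criteria_text}\n\n"
            f"Student submission:\n{submission}\n\n"
            "For each criterion, note one strength and one concrete improvement."
        )

    def call_llm(prompt: str) -> str:
        """Placeholder for a call to whichever language model an institution adopts."""
        raise NotImplementedError("Connect an approved LLM service here.")

    if __name__ == "__main__":
        rubric = [
            RubricCriterion("Argument structure", "Claims are ordered logically and supported.", 0.6),
            RubricCriterion("Use of evidence", "Sources are relevant, recent, and cited.", 0.4),
        ]
        print(build_feedback_prompt("<student essay text>", rubric))
        # In practice: feedback = call_llm(build_feedback_prompt(...))

One design point worth debating at the workshop: keeping rubric criteria as explicit, weighted data rather than free text buried in a prompt makes the alignment between generated feedback and stated learning outcomes easier to audit.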
Expected outcomes include:
A whitepaper/report synthesising AI assessment challenges and strategies
Draft AI-in-Assessment policy templates for universities
A network of AI-in-education researchers and practitioner partners
A prototyping roadmap for responsible AI tutor development
An annotated benchmark dataset for tutor-agent dialogues in education (a sketch of one possible record format follows this list)
A cross-institution research agenda on evaluation and pedagogy for AI tutoring
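As one concrete illustration of the proposed benchmark dataset of tutor-agent dialogues, the sketch below shows what a single annotated dialogue record might look like. The field names and annotation labels are hypothetical choices for illustration; any actual schema would need to be agreed by the participating institutions.

    # Hypothetical record format for an annotated tutor-agent dialogue (Python).
    # Field names and labels are illustrative only; no schema has been agreed.
    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class Turn:
        speaker: str   # "student" or "tutor_agent"
        text: str      # the utterance itself
        pedagogy_labels: List[str] = field(default_factory=list)  # e.g. ["socratic_question"]
        factuality: str = "unverified"  # "correct", "hallucination", or "unverified"

    @dataclass
    class DialogueRecord:
        dialogue_id: str
        course_topic: str       # supports curriculum alignment, e.g. "intro statistics"
        learning_outcome: str   # the outcome the exchange targets
        turns: List[Turn]
        learning_gain: Optional[float] = None  # optional pre/post measure, if collected

    example = DialogueRecord(
        dialogue_id="demo-0001",
        course_topic="intro statistics",
        learning_outcome="Interpret a confidence interval correctly",
        turns=[
            Turn("student", "Does a 95% CI mean there is a 95% chance the true mean lies inside it?"),
            Turn("tutor_agent",
                 "Good question: is the true mean random, or is it the interval that varies from sample to sample?",
                 pedagogy_labels=["socratic_question"], factuality="correct"),
        ],
    )
    print(example.turns[1].pedagogy_labels)  # ['socratic_question']

Structuring each turn with explicit pedagogy and factuality annotations is one way to support the evaluation topics above (learning gain, engagement, and trust) as well as the analysis of risks such as hallucinations.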