The recent Digital Experiences in Mathematics Education (DEME) Special Issue on ChatGPT was an important milestone, offering first insights into this emerging field (Pepin et al., 2025a). It mapped early uses, attitudes, and initial design prototypes, laying essential groundwork. This new Special Issue represents the crucial next step, capturing the shift from early experimentation to systematic, evidence-based integration of Generative AI (GenAI) into daily mathematics teaching and learning.
GenAI refers to machine-learning models capable of producing novel outputs, such as text, images, mathematical explanations, tasks, proofs, and code, in response to natural language prompts. In the context of mathematics education, GenAI’s potential lies in its ability to act as a dynamic partner in the learning process. It can, for instance, generate personalized problem sets, engage students in inquiry-oriented dialogue about mathematical concepts, provide step-by-step explanations, and assist in translating problems between symbolic, graphical, and verbal representations. However, GenAI also raises challenges such as risks of mathematical inaccuracies (Urban et al., 2025), concerns about over-reliance, opacity of underlying models, and issues of accessibility and equity. These affordances and limitations jointly define the emerging research agenda for meaningful integration of GenAI into mathematics classrooms.
GenAI introduces a paradigm shift in mathematics education, going beyond the computation and visualization enhancements of previous technologies (like dynamic geometry, computer algebra systems, graphing calculators, and interactive simulation tools). Its unique significance lies in its ability to engage in natural language dialogue, generate mathematical tasks, produce and critique proofs, and translate across representations (text, diagrams, and code). These capabilities position GenAI not merely as a tool but as an interactive partner in mathematical learning and teaching.
This Special Issue seeks to provide educators, researchers, and policymakers with validated approaches, reproducible resources, and evidence-based insights that go far beyond speculative or exploratory work.
Aims and Scope: A Theory-Driven, Mathematics-Specific Agenda
We now see an urgent need to extend beyond initial explorations toward practice-focused, empirically anchored, and theoretically informed studies. This Special Issue welcomes a diverse range of research methodologies that can provide evidence on the integration of GenAI in mathematics education. These include qualitative approaches (e.g., case studies, interaction analysis), quantitative and experimental designs, survey-based work (Pepin et al., 2025b), Design-Based Research (DBR), mixed-methods studies, and theoretical or analytical contributions (Davis, 2025; Li, 2025). Authors are encouraged to clearly articulate how their methodological choices align with their research aims.
To guarantee disciplinary relevance, all submissions must explicitly address issues unique to mathematics teaching and learning rather than general pedagogy. This requires attention to core mathematical activities such as reasoning, proving, modeling, problem-solving, abstraction, and working with multiple representations (symbolic, graphical, and visual). Authors will be required to specify (a) the exact mathematical content or competence their paper targets and (b) why their work is specific to mathematics education.
We invite contributions that clearly articulate theoretical rationales and design principles, while reporting empirical or pilot classroom work explicitly tied to these frameworks.
We welcome submissions that explore the following themes:
• Empirical Classroom Trials and DBR: Document and evaluate how GenAI tools are embedded in authentic classroom practices.
• Theory-Driven Intervention Designs: Integrate established mathematics education theories with AI-supported pedagogies.
• Mathematics-Specific Assessment: Develop assessment approaches that leverage GenAI to evaluate mathematical competencies such as reasoning, proof, and argumentation. For example, this could include designing tasks that assess students’ ability to critique an AI-generated proof, to formulate a precise prompt that elicits a useful solution from a GenAI tool, or to explain the reasoning behind a corrected AI output. Other possibilities include dynamically generated proof tasks or tools that analyze the structure of students’ reasoning rather than only final answers.
• Critical Engagement and Trustworthiness: Investigate how students and teachers critically evaluate, explain, and verify GenAI outputs in the context of mathematical tasks (Busuttil & Calleja, 2025). This research should contribute to mathematics education by developing frameworks and strategies for fostering mathematical critical literacy, helping learners distinguish between valid and flawed AI-generated arguments, and understanding how such engagement influences their own mathematical reasoning and sense-making.
• Longitudinal Practice Change: While acknowledging that GenAI is a recent development, we encourage studies that document the evolution of teaching practices and student learning over meaningful timeframes, even if these are shorter-term longitudinal studies (e.g., across a semester or academic year).
• Equity and Ethical Studies: This theme calls for research that critically examines the socio-political dimensions of GenAI in mathematics education. Contributions should connect issues of equity, accessibility, inclusion, and ethics to mathematical learning. For example, studies could investigate how GenAI tools can be designed to support multilingual learners in understanding word problems, analyze the differential impacts of AI-assisted learning on students from varying socioeconomic backgrounds, expose algorithmic biases in how GenAI handles diverse cultural contexts in mathematical modeling, or explore ethical frameworks for student use of GenAI in problem-solving and assessment.
• Community Infrastructure: This theme recognizes that theoretical advances and practical implementation are mutually reinforcing. Contributions of benchmark datasets, validated task banks, and reproducible toolkits are crucial for grounding theoretical claims in shared, testable resources. For instance, a theoretically-driven framework for assessing proof comprehension can be operationalized through a public dataset of AI-generated proofs with common error types. Such infrastructures allow the community to systematically test hypotheses, replicate studies, and build cumulative knowledge, thereby bridging the gap between abstract theory and concrete classroom practice in mathematics education.
Submission is via an open call for papers. Extended abstracts are limited to 1,000 words and full papers to 10,000 words.
Tentative Special Issue Timetable
Call for Extended Abstracts Opens: December 17, 2025
Deadline for Submission of Extended Abstracts (max. 1000 words): March 31, 2026
Deadline for Submission of Full Papers: September 30, 2026
First Round of Reviews Due: December 31, 2026
Authors Informed of First Decision: January 15, 2027
Submission of 1st Revision: March 15, 2027
Second Round of Reviews Due (if necessary): May 15, 2027
Final Submission of Papers: July 31, 2027
Copy-editing/Proofing and Target Publication: Late 2027 or Early 2028
Submissions should be written according to the journal's submission guidelines, available here. Online submission: please use the journal's Online Manuscript Submission System (SNAPP) accessible here. Please note that paper submissions via email are not accepted. All papers will undergo the journal's standard review procedure (single-blind peer review), according to the journal's Peer Review Policy, Process, and Guidance. Reviewers will be selected according to the Peer Reviewer Selection policies. This journal offers the option to publish Open Access through Open Choice. Please explore the OA options available through your institution by referring to our list of OA Transformative Agreements. For any questions, please contact the Lead Guest Editor, Osama Swidan (osa...@bgu.ac.il).
More: https://link.springer.com/collections/egbbbafadc
References
Busuttil, L., & Calleja, J. (2025). Teachers’ beliefs and practices about the potential of ChatGPT in teaching mathematics in secondary schools. Digital Experiences in Mathematics Education, 11, 140–166. https://doi.org/10.1007/s40751-024-00168-3
Davis, J. D. (2025). Evolving techniques and emerging schemes: Prospective teachers’ transformation of ChatGPT. International Journal of Science and Mathematics Education. https://doi.org/10.1007/s10763-024-10533-8
Li, M. (2025). Integrating artificial intelligence in primary mathematics education: Investigating internal and external influences on teacher adoption. International Journal of Science and Mathematics Education, 23, 1283–1308. https://doi.org/10.1007/s10763-024-10515-w
Meng, X., Yang, B., Yang, L., & others. (2025). A novel AI-empowered, student-centered teaching strategy for large classes in higher education. International Journal of Science and Mathematics Education. https://doi.org/10.1007/s10763-025-10573-8
Pepin, B., Buchholtz, N., & Salinas-Hernández, U. (2025a). Mathematics education in the era of ChatGPT: Investigating its meaning and use for school and university education—Editorial to special issue. Digital Experiences in Mathematics Education, 11, 1–8. https://doi.org/10.1007/s40751-025-00173-0
Pepin, B., Buchholtz, N., & Salinas-Hernández, U. (2025b). A scoping survey of ChatGPT in mathematics education. Digital Experiences in Mathematics Education, 11, 9–41. https://doi.org/10.1007/s40751-025-00172-1
Urban, M., Brom, C., Lukavský, J., Děchtěrenko, F., Hein, V., Svacha, F., Kmoníčková, P., & Urban, K. (2025). “ChatGPT can make mistakes. Check important info.” Epistemic beliefs and metacognitive accuracy in students' integration of ChatGPT content into academic writing. British Journal of Educational Technology. https://doi.org/10.1111/bjet.13591