Research at the Chair of Digital Education

A central goal of our research is to identify design principles for digital tools and materials that optimally support learners’ cognition, motivation, and affect throughout the educational process. We are also interested in how digital tools can create new opportunities to reduce teachers’ workload while systematically enhancing professional pedagogical practice in the classroom.

A key focus of our research is formative assessment for learning: an ongoing diagnosis of learning processes that aims to provide students with helpful feedback and to give teachers precise information about learners’ current knowledge. This enables teachers to better address individual student needs and offer targeted support.

We are equally interested in how students interact with digital and AI-based learning systems and how these tools can cognitively stimulate and motivate learners. Particular attention is devoted to how AI-supported systems can track learning processes, analyze them intelligently, and translate the results into adaptive support mechanisms. Such digital tools can substantially assist teachers in everyday decision-making and facilitate differentiated learning opportunities.

We also explore what competencies teachers need for successful teaching in a digital future and incorporate this perspective into our regular teaching at the EUF.

With our applied basic research, which is primarily quantitative and rooted in experimental psychology, we aim to build a bridge between theory and practice.

Against this background, our empirical data collection takes place both in the lab and in real educational settings (e.g., schools). In these holistic investigations, we examine not only the learning effectiveness of systems and manipulations but also their effects on learners’ affective and motivational states. We use process data (e.g., timestamps, eye tracking, video data) to reconstruct learning trajectories and interactions with high temporal resolution and to gain a deeper understanding of how learning unfolds.

In addition, we conduct systematic analyses of the scientific literature (e.g., systematic reviews, meta-analyses, meta-meta-analyses) to aggregate key findings from existing research and make them more applicable for both research and practice.

Adaptive Teaching and Learning with Digital Media

• Technology-based adaptive teaching and learning

• Intelligent tutoring systems and AI in the classroom

• Learning spaces of the future

Feedback for Learners

• Computer-based feedback

• AI-generated feedback and support

• Digitally mediated peer feedback

Innovations in Formative Assessment and Diagnostics

• AI-supported item creation (Automated Item Generation)

• Conversation-based assessment

• Learning analytics in the classroom

• Design of teacher dashboards

AI-Supported Evaluation in the Classroom

• Feedback on student texts using AI support

• Teacher professionalization for AI-supported evaluation

Learning and Testing with Multimedia

• Effects of multimedia design on digital test materials

• Digital blackboards and smartboards in teaching

• Teacher training for the selection and design of appropriate teaching/learning materials

Exams in Higher Education and the Impact of Artificial Intelligence

• Potentials and limitations of AI use in higher education teaching

• Competency-based examination formats of the future


Current Publications

Schewior, L., & Lindner, M. A. (in press). Multimedia effects in testing: A meta-analysis on cognitive, metacognitive and affective effects of pictures in test items. Journal of Educational Psychology.

Kuklick, L., Eder, T., Zhao, F., Mayer, R. E., & Lindner, M. A. (in press). Emotional reactions to performance feedback as measured by self-reports and automatic facial recognition. Journal of Educational Psychology. https://doi.org/10.1037/edu0001029

Holtmann, M., Kennedy, A. I., & Strietholt, R. (2026). What students learn depends on what we teach: Curriculum content coverage and student achievement in mathematics and science. Learning and Individual Differences, 128, Article 102907.

Daniels, L. M., Wells, K., Lindner, M. A., Beeby, A. M., & Daniels, V. J. (2026). Satisfaction and frustration of basic psychological needs in classroom assessment. Trends in Higher Education, 5(1), 1–15. https://doi.org/10.3390/higheredu5010015

Holtmann, M., Meinck, S., Hernandez, A. S., & Isac, M. M. (2026). From intention to action: Understanding youth electoral participation across countries through civic education. Developmental Science, 29(2), e70127.

Schult, J., Fauth, B., Schneider, R., & Lindner, M. A. (2026). How schools rebound from pandemic learning loss: Longitudinal findings from mandatory large-scale assessments. Learning and Instruction, 101, Article 102263. https://doi.org/10.1016/j.learninstruc.2025.102263

Alemdag, E., & Gorgun, G. (2026). AI scoring of peer feedback quality under different prompting strategies: How close is it to expert scoring? In Proceedings of the 16th International Conference on Learning Analytics & Knowledge (LAK 2026), Bergen, Norway.

Alemdag, E. (2026). Differences in students' processing of peer feedback: An exploratory epistemic network analysis of response messages. In Proceedings of the 16th International Conference on Learning Analytics & Knowledge (LAK 2026), Bergen, Norway.

Mertens, U., & Lindner, M. A. (2025). Computer-based answer-until-correct and elaborated feedback: Effects on affective-motivational and performance outcomes. Journal of Computer Assisted Learning, 41(2), e13112. https://doi.org/10.1111/jcal.13112

Alemdag, E., Eichelmann, A., & Narciss, S. (2025). A framework for learning from erroneous examples and meta-analysis of empirical research. Review of Educational Research. https://doi.org/10.3102/00346543251390901

Kuklick, L., & Lindner, M. A. (2025). How to enhance elaborated feedback in computer-based assessment: The role of multimedia and emotional design factors. Contemporary Educational Psychology, 72, Article 102396. https://doi.org/10.1016/j.cedpsych.2025.102396

Rožman, M., Holtmann, M., & Meinck, S. (2025). Students’ engagement with information and communications technologies. In An International Perspective on Digital Literacy: Results from ICILS 2023 (pp. 139–180). Cham: Springer Nature Switzerland.

Lindner, M. A., & Weßels, D. (2025). Zur Ausgestaltung von Richtlinien zur Nutzung generativer künstlicher Intelligenz an Hochschulen [On the design of guidelines for the use of generative artificial intelligence at universities]. Forschung & Lehre, 2025(2), 32–35. https://www.forschung-und-lehre.de/heftarchiv/ausgabe-2/25

Alemdag, E., & Narciss, S. (2025). Promoting formative self-assessment through peer assessment: Peer work quality matters for writing performance and internal feedback generation. International Journal of Educational Technology in Higher Education, 22(1), Article 22. https://doi.org/10.1186/s41239-025-00522-4

Gorgun, G., & Bulut, O. (2025). Instruction-tuned large-language models for quality control in automatic item generation: A feasibility study. Educational Measurement: Issues and Practice, 44(1), 96–107. https://doi.org/10.1111/emip.12663

Bulut, O., Gorgun, G., & Yildirim-Erbasli, S. N. (2025). The impact of frequency and stakes of formative assessment on student achievement in higher education: A learning analytics study. Journal of Computer Assisted Learning, 41(1), Article e13087. https://doi.org/10.1111/jcal.13087

Bardach, L., Emslander, V., Kasneci, E., Eitel, A., Lindner, M. A., & Bailey, D. H. (2024). Research syntheses on AI in education offer limited educational insights. OSF Preprint. https://doi.org/10.31219/osf.io/dx6kt_v1

Yildirim-Erbasli, S. N., & Gorgun, G. (2024). Disentangling the relationship between ability and test-taking effort: To what extent the ability levels can be predicted from response behavior? Technology, Knowledge and Learning, 1–23. https://doi.org/10.1007/s10758-024-09810-w

Gorgun, G., & Yildirim-Erbasli, S. N. (2024). Algorithmic bias in BERT for response accuracy prediction: A case study for investigating population validity. Journal of Educational Measurement. https://doi.org/10.1111/jedm.12420

Buder, J., Lindner, M. A., Oestermeier, U., Huff, M., Gerjets, P., Utz, S., & Cress, U. (2024). Generative künstliche Intelligenz – Mögliche Auswirkungen auf die psychologische Forschung [Generative artificial intelligence: Possible implications for psychological research]. Psychologische Rundschau. https://doi.org/10.1026/0033-3042/a000699

Schewior, L., & Lindner, M. A. (2024). Revisiting picture functions in multimedia testing: A systematic narrative review and taxonomy extension. Educational Psychology Review, 36(2), 49. https://doi.org/10.1007/s10648-024-09883-0

Ehrhart, T., Höffler, T. N., Grund, S., & Lindner, M. A. (2024). Static versus dynamic representational and decorative pictures in mathematical word problems: Less might be more. Journal of Educational Psychology, 116(4), 532. https://doi.org/10.1037/edu0000821

Wise, S. L., Kuhfeld, M. R., & Lindner, M. A. (2024). Don’t test after lunch: The relationship between disengagement and the time of day that low-stakes testing occurs. Applied Measurement in Education, 37(1), 14–28. https://doi.org/10.1080/08957347.2024.2311925