AI support for teaching and learning at scale
Abstract. The proliferation of artificial intelligence (AI) tools and large language models (LLMs) has sparked dramatic changes in the landscape of post-secondary education, creating new opportunities, and new obligations, to re-evaluate norms for teaching and learning. This presentation begins with a brief overview and perspective on rethinking assessment practices, i.e., how student learning is evaluated, during a period of rapidly evolving technology. The session then shares greater detail about ongoing research sponsored by the National Science Foundation, Penn State's Center for Socially Responsible Artificial Intelligence, and a strategic partnership between Penn State and the University of Auckland in New Zealand. This research seeks to develop LLM- and AI-based tools that amplify instructor efforts to provide timely, personalized feedback on open-ended questions during class, especially in large classes (hundreds of students) at scales for which the logistics of doing so would be untenable or impossible without a teacher-AI partnership. To this end, Beckman will also discuss how his team has approached evaluating the performance of the tools they develop in order to build trust and confidence that the tools make a responsible contribution to the teaching team.
Resources
- Slides (PDF)
- arXiv Preprint (link): Beckman, Burke, Fiochetta, Fry, Lloyd, Patterson, & Tang (in review). Developing Consistency Among Undergraduate Graders Scoring Open-Ended Statistics Tasks. Preprint URL: https://arxiv.org/abs/2410.18062
- EMNLP Paper (PDF): Li, Z., Lloyd, S., Beckman, M. D., & Passonneau, R. J. (2023). Answer-state Recurrent Relational Network (AsRRN) for Constructed Response Assessment and Feedback Grouping. Findings of the Association for Computational Linguistics: EMNLP 2023. https://doi.org/10.18653/v1/2023.findings-emnlp.254
- Pilot Study (PDF): Lloyd, S. E., Beckman, M., Pearl, D., Passonneau, R., Li, Z., & Wang, Z. (2022). Foundations for AI-Assisted Formative Assessment Feedback for Short-Answer Tasks in Large-Enrollment Classes. In Proceedings of the Eleventh International Conference on Teaching Statistics. Rosario, Argentina.
- arXiv Preprint (link): Wei, Y., Pearl, D., Beckman, M., & Passonneau, R. (2025). Concept-based Rubrics Improve LLM Formative Assessment and Data Synthesis. Preprint URL: https://arxiv.org/pdf/2504.03877
- 2026 Ashtekar Frontiers of Science Lecture Series (link): https://science.psu.edu/frontiers
Acknowledgments
- US National Science Foundation (NSF DUE-2236150; DUE-2417294)
- Penn State Center for Socially Responsible Artificial Intelligence (CSRAI Seed Grant #025243)
- Penn State Social Science Research Institute AI Seed Grant
- Strategic partnership between University of Auckland and Penn State University
Matthew Beckman
Associate Research Professor | Penn State University
Director | CAUSE
email: mdb268 [at] psu [dot] edu
personal webpage: https://mdbeckman.github.io/
CAUSE webpage: https://www.causeweb.org