
Using large language models to explain programming error messages

For this research project, we explored secondary educators’ perspectives on using large language models (LLMs) to explain programming error messages in the classroom. 

Two research questions guided our investigation, focusing on how LLMs could help explain programming errors to secondary school students and teachers, and on what support students and teachers would need to use LLM explanations successfully. We used feedback literacy theory to guide the investigation.


In July and August 2023, eight secondary computer science educators in England took part in the research. The four male and four female educators were recruited for their high level of expertise.

An adapted prototype of the publicly available Raspberry Pi Code Editor was used for the study. The prototype called the OpenAI GPT-3.5 LLM to generate explanations of the error messages produced in the editor; these explanations were accessible via a question mark shown next to the standard error message. The prototype was built solely for the research study and is not available for public use at this time.
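To illustrate the kind of integration described above, the sketch below shows how an editor might request an LLM explanation of an error message. It is a minimal sketch, not the project's prototype code: it assumes the `openai` Python client, and the prompt wording, model parameters, and the `explain_error` helper name are illustrative.

```python
# Minimal sketch: asking GPT-3.5 to explain a Python error message for a learner.
# Assumes the `openai` package is installed and OPENAI_API_KEY is set in the
# environment; prompt wording and settings are illustrative, not the prototype's.
from openai import OpenAI

client = OpenAI()

def explain_error(error_message: str, code_snippet: str) -> str:
    """Return a learner-friendly explanation of a programming error message."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {
                "role": "system",
                "content": (
                    "You explain Python error messages to secondary school "
                    "students in plain language, without giving away the full fix."
                ),
            },
            {
                "role": "user",
                "content": f"Code:\n{code_snippet}\n\nError message:\n{error_message}",
            },
        ],
        temperature=0.2,
    )
    return response.choices[0].message.content

# Example: in an editor, the returned explanation could be shown when the
# learner clicks the question mark next to the standard error message.
if __name__ == "__main__":
    print(explain_error("NameError: name 'prnt' is not defined", "prnt('hello')"))
```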

The educators took part in individual, hour-long, activity-based interviews. The interview schedule, the project information sheet, the programming activity adapted for the interviews, the selected error messages shown to educators, and the anonymised transcripts are all available in the project folder below.

Publications

Veronica Cucuiat and Jane Waite. 2024. Feedback literacy: Holistic analysis of secondary educators’ views of LLM explanations of program error messages in the classroom. In Proceedings of the 2024 Conference on Innovation and Technology in Computer Science Education (ITiCSE 2024). Association for Computing Machinery, New York, NY, USA. To be published in July 2024. (Open-access author copy)