arXiv (Cornell University)
Generating Planning Feedback for Open-Ended Programming Exercises with LLMs
April 2025 • Mehmet Arif Demirtaş, Claire Zheng, Max Fowler, Kathryn Cunningham
To complete an open-ended programming exercise, students need to both plan a high-level solution and implement it using the appropriate syntax. However, these problems are often autograded solely on the correctness of the final submission through test cases, so students receive no feedback on their planning process. Large language models (LLMs) may be able to generate this feedback by detecting the overall code structure even for submissions with syntax errors. To this end, we propose an approach that detects which high-…