Understanding LLM Responses in Programming Tasks: A Study on Prompt Quality, Personalization, and RAG Limitations
DOI: https://doi.org/10.32664/icobits.v1.107

Keywords: Generative AI, Personalized Learning, Adaptive Prompting, Retrieval-Augmented Generation

Abstract
Programming is a fundamental skill in technology and education in the digital era, yet students' understanding of programming varies significantly between senior and junior learners. Senior students tend to have a stronger grasp of code structure and programming logic, while junior students often struggle to identify well-structured code. Current learning processes remain generic and are not sufficiently tailored to individual student needs, and existing personalization approaches are still shallow, not yet fully aligned with students' abilities and learning contexts. The emergence of Generative Artificial Intelligence (GenAI), particularly Large Language Models (LLMs) such as GPT, Claude, and Gemini, offers new opportunities to support more adaptive and personalized programming education. LLMs can act as interactive learning assistants that explain concepts, generate code examples, and provide immediate feedback. However, their use also presents challenges, including the risk of hallucination, which can lead to conceptual misunderstandings, and a strong dependence on the quality of user-crafted prompts. In addition, prior studies show that Retrieval-Augmented Generation (RAG) has not performed optimally in debugging contexts, as the model does not always use retrieved external documents effectively. These issues highlight the need to explore more effective approaches for leveraging GenAI to deliver programming instruction that is adaptive, contextual, and safeguarded against misinformation.
License
Copyright (c) 2025 ICoBITS

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.