Understanding LLM Responses in Programming Tasks: A Study on Prompt Quality, Personalization, and RAG Limitations

Authors

  • Meilyna Hutajulu, Del Institute of Technology
  • Ioka Purba
  • Mulyadi Siahaan
  • Samuel Situmeang
  • Mario Simaremare

DOI:

https://doi.org/10.32664/icobits.v1.107

Keywords:

Generative AI, Personalized Learning, Adaptive Prompting, Retrieval-Augmented Generation

Abstract

Programming is a fundamental skill in technology and education in the digital era. However, students’ understanding of programming varies significantly between senior and junior learners. Senior students tend to have a stronger grasp of code structure and programming logic, while junior students often struggle to identify well-structured code. Current learning processes remain generic and are not sufficiently tailored to individual student needs. Existing personalization approaches are still shallow and have not been fully integrated with students’ abilities and learning contexts. The emergence of Generative Artificial Intelligence (GenAI), particularly Large Language Models (LLMs) such as GPT, Claude, and Gemini, offers new opportunities to support more adaptive and personalized programming education. LLMs can act as interactive learning assistants capable of explaining concepts, generating code examples, and providing direct feedback. However, their use also presents challenges, including hallucination risks that may lead to conceptual misunderstandings and a strong dependency on the quality of user-crafted prompts. In addition, prior studies show that Retrieval-Augmented Generation (RAG) has not performed optimally in debugging contexts, as external documents are not always effectively utilized by the model. These issues highlight the need to explore more effective approaches for leveraging GenAI to deliver programming instruction that is adaptive, contextual, and safe from misinformation.

Downloads

Download data is not yet available.

Published

19-01-2026