Rationale:
Since 2020, the fast spread of Generative AI (GenAI) has changed how people learn to code. However, we still do not fully understand how these tools affect a student’s ability to think for themselves (Zhai, Wibowo and Li, 2024). While tools like ChatGPT make coding faster, they create an “AI Paradox”: students might get the right answer but lose the ability to solve problems independently (Darvishi et al., 2023). There is currently a gap in research about how to make AI “more human” and supportive, especially for learners in the Global South who face different challenges like high data costs (UNESCO, 2025). This project is important because it looks at whether AI acts as a helpful mentor or just a “cold machine” that does the work for the student.
Method:
The study will review 40 to 50 secondary sources published between 2020 and 2025.
Research Design:
This study uses a Systematic Literature Review (SLR) of secondary data published following the PRISMA 2020 rules to ensure the research is clear and can be repeated by others (Page et al., 2021). To keep the study modern, it also uses the new PRISMA-trAIce extension (2025), which requires researchers to be honest about how they used AI to help find and analyze papers (Holst et al., 2025).
Philosophy:
The study uses “Critical Realism” to look beneath the surface at the hidden rules of AI, and “Critical Digital Pedagogy” to check if the technology is giving power back to the student or keeping it with the algorithm (Hinck et al., 2024).
Expected Results:
It is expected that the research will support the use of a “Socratic” AI, one that asks questions instead of just giving code, and show that it helps students become better at debugging and deep thinking (Akram et al., 2025). It is also anticipated that students in West Africa face barriers to accessing AI due to poor internet or limited resources (Senyo et al., 2023). Finally, the results will likely show that if students rely too much on AI, they become “mentally lazy” unless the teacher rewards the way they thought about the code rather than just the final result (Fan et al., 2025).
Conclusions:
The study is expected to show that AI is only helpful when the focus is on the learning process, not just the finished product. This means schools should stop focusing only on “catching cheaters” and start using a “Pedagogy of Care” that supports students as they struggle to learn (Selwyn, 2024). Ultimately, these findings will help create rules that ensure technology supports social justice and “digital sovereignty” for learners everywhere (Gwagwa et al., 2020).
Reference List:
Akram, S.A., et al. (2025) ‘Generative AI for project-based assessment: Socratic scaffolding’, International Journal of Artificial Intelligence in Education.
Darvishi, A., et al. (2023) ‘Impact of AI assistance on student agency’, Computers & Education, 210, 104967.
Fan, S., et al. (2025) ‘The role of over-reliance on AI in the negative consequences of student learning’, Cogent Education.
Gwagwa, A., et al. (2020) ‘Artificial Intelligence (AI) in Africa: Ethical considerations, benefits and challenges’, African Journal of Information and Communication, (26), pp. 1–28.
Hinck, A., et al. (2024) ‘Digital critical pedagogy: a collaborative narrative literature review’, Mid-Western Educational Researcher, 36(1).
Holst, D., et al. (2025) ‘Transparent Reporting of AI in Systematic Literature Reviews: Development of the PRISMA-trAIce Checklist’, JMIR AI, 1, e80247.
Page, M.J., et al. (2021) ‘The PRISMA 2020 statement: an updated guideline for reporting systematic reviews’, BMJ, 372.
Selwyn, N. (2024) ‘Digital degrowth: toward radically sustainable education technology’, Learning, Media and Technology, 49(2), pp. 186–199.
Senyo, P.K., et al. (2023) ‘Digitalizing with bricolage: how Ghanaian microenterprises overcome structural barriers’, Information Systems Research.
UNESCO (2025) The Algorithmic Divide: A Report on AI Inequity in the Global South. Paris: UNESCO.
Zhai, C., Wibowo, S. and Li, L.D. (2024) ‘The effects of over-reliance on AI dialogue systems on students’ cognitive abilities’, Smart Learning Environments, 11, 28.
One response to “Systematic Literature Review of AI-Enabled Coding Assistants (2020-2025) ”
I read about your work with interest as a secondary computer science teacher. I have recently faced questions from parents, mostly about the relevance of their child learning programming given what GenAI can do. I have tried to respond by highlighting the importance of understanding outputs, knowing how the machine works, and retaining a workforce knowledgeable in maintaining systems when they go wrong. I was wondering whether the shift towards the pedagogy of care you mention might also serve to influence views around the relevance of learning to code?
I feel that the assertion that learning programming is not relevant because GenAI can do it for us is rooted in a firmly human capital conception of the purpose of education. Perhaps such a pedagogical shift would have to be part of a wider change in how people see the purpose of education?