Systematic Literature Review of AI-Enabled Coding Assistants (2020-2025) 


Rationale:
Since 2020, the rapid spread of Generative AI (GenAI) has changed how people learn to code. However, we still do not fully understand how these tools affect a student’s ability to think for themselves (Zhai, Wibowo and Li, 2024). While tools like ChatGPT make coding faster, they create an “AI Paradox”: students might get the right answer but lose the ability to solve problems independently (Darvishi et al., 2023). There is currently a gap in research about how to make AI “more human” and supportive, especially for learners in the Global South who face different challenges such as high data costs (UNESCO, 2025). This project is important because it examines whether AI acts as a helpful mentor or just a “cold machine” that does the work for the student.
Method:
The study will review 40 to 50 secondary sources published between 2020 and 2025.
Research Design:
This study uses a Systematic Literature Review (SLR) of secondary data, conducted following the PRISMA 2020 guidelines to ensure the research is transparent and can be repeated by others (Page et al., 2021). To keep the study current, it also applies the new PRISMA-trAIce extension (2025), which requires researchers to report honestly how they used AI to help find and analyse papers (Holst et al., 2025).
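The PRISMA 2020 screening stages described above can be sketched as a simple tally. This is a minimal illustration only: the function name and all the counts below are hypothetical placeholders, not the study’s actual numbers, chosen so the final pool lands inside the proposal’s 40–50 source target.

```python
# Hypothetical PRISMA 2020 flow tally (illustrative numbers only).

def prisma_flow(identified, duplicates, title_abstract_excluded, full_text_excluded):
    """Return the record count at each PRISMA 2020 screening stage."""
    screened = identified - duplicates            # after de-duplication
    full_text = screened - title_abstract_excluded  # after title/abstract screening
    included = full_text - full_text_excluded       # final included pool
    return {
        "identified": identified,
        "screened": screened,
        "assessed_full_text": full_text,
        "included": included,
    }

# Placeholder counts: 320 records found, 45 duplicates removed,
# 190 excluded at title/abstract, 40 excluded at full text.
flow = prisma_flow(identified=320, duplicates=45,
                   title_abstract_excluded=190, full_text_excluded=40)
print(flow["included"])  # 45 — within the 40-50 target
```

In a real review, each exclusion count would be justified against pre-registered eligibility criteria and reported in the PRISMA flow diagram.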
Philosophy:
The study uses “Critical Realism” to look beneath the surface at the hidden rules of AI, and “Critical Digital Pedagogy” to check if the technology is giving power back to the student or keeping it with the algorithm (Hinck et al., 2024).
Expected Results:
It is expected that the research may support the use of a “Socratic” AI—one that asks questions instead of just giving code—because it helps students become better at debugging and deep thinking (Akram et al., 2025). It is also anticipated that students in West Africa face barriers to accessing AI due to poor internet or limited resources (Senyo et al., 2023). Finally, the results will likely show that if students rely too much on AI, they become “mentally lazy” unless the teacher rewards the way they thought about the code rather than just the final result (Fan et al., 2025).
Conclusions:
The study may find that AI is only helpful when we focus on the learning process, not just the finished product. This means schools should stop focusing only on “catching cheaters” and start using a “Pedagogy of Care” that supports students as they struggle to learn (Selwyn, 2024). Ultimately, these findings will help create rules that ensure technology supports social justice and “digital sovereignty” for learners everywhere (Gwagwa et al., 2020).
Reference List:
Akram, S.A., et al. (2025) ‘Generative AI for project-based assessment: Socratic scaffolding’, International Journal of Artificial Intelligence in Education.
Darvishi, A., et al. (2023) ‘Impact of AI assistance on student agency’, Computers & Education, 210, 104967.
Fan, S., et al. (2025) ‘The role of over-reliance on AI in the negative consequences of student learning’, Cogent Education.
Gwagwa, A., et al. (2020) ‘Artificial Intelligence (AI) in Africa: Ethical considerations, benefits and challenges’, African Journal of Information and Communication, (26), pp. 1–28.
Hinck, A., et al. (2024) ‘Digital critical pedagogy: a collaborative narrative literature review’, Mid-Western Educational Researcher, 36(1).
Holst, D., et al. (2025) ‘Transparent Reporting of AI in Systematic Literature Reviews: Development of the PRISMA-trAIce Checklist’, JMIR AI, 1, e80247.
Page, M.J., et al. (2021) ‘The PRISMA 2020 statement: an updated guideline for reporting systematic reviews’, BMJ, 372, n71.
Selwyn, N. (2024) ‘Digital degrowth: toward radically sustainable education technology’, Learning, Media and Technology, 49(2), pp. 186–199.
Senyo, P.K., et al. (2023) ‘Digitalizing with bricolage: how Ghanaian microenterprises overcome structural barriers’, Information Systems Research.
UNESCO (2025) The Algorithmic Divide: A Report on AI Inequity in the Global South. Paris: UNESCO.
Zhai, C., Wibowo, S. and Li, L.D. (2024) ‘The effects of over-reliance on AI dialogue systems on students’ cognitive abilities’, Smart Learning Environments, 11, 28.

4 responses to “Systematic Literature Review of AI-Enabled Coding Assistants (2020-2025) ”

  1. I read about your work with interest as a secondary computer science teacher. I have recently faced questions from parents, mostly about the relevance of their child learning programming given what GenAI can do. I have tried to respond by highlighting the importance of understanding outputs, knowing how the machine works, and retaining a workforce knowledgeable in maintaining systems when they go wrong. I was wondering whether the shift towards the pedagogy of care you mention might also serve to influence views around the relevance of learning to code?
    I feel that the assertion that learning programming is not relevant because GenAI can do it for us is rooted in a firmly human capital conception of the purpose of education. Perhaps such a pedagogical shift would have to be part of a wider change in how people see the purpose of education?

    • Thanks for your response!
      A pedagogy of care reframes coding from a purely “vocational output” to a process of human agency. By prioritizing student well-being and critical literacy over rapid code generation, education shifts from producing human capital to fostering thinkers who can ethically govern, debug, and humanize the algorithms that GenAI produces.

  2. Hi Janix
    Thank you for a superb presentation. Some of the questions from the chat pane are below. Some you answered on the day, some not – it’s up to you how you respond here.
    Best wishes
    Simon

    Does the shift towards the pedagogy of care you mention possibly also serve to influence views around the relevance of learning to code?

    Why is a systematic lit review the best methodology to explore this Research Question? What might you miss from collecting data only from publications?

    How have you operationalised (made measurable) the conception of “humanised”?

  3. Hello Simon, thank you for passing these along! It was a pleasure presenting. Here are some reflections on those excellent questions from the chat:

    1. Pedagogy of Care & the Relevance of Coding
    A pedagogy of care fundamentally shifts the narrative from “coding as a tool for labor” to “coding as a medium for agency.” When we view programming through a lens of care—prioritizing the learner’s identity, emotional resilience, and ethical grounding—the relevance of learning to code isn’t about competing with a machine’s speed. Instead, it becomes about literacy and oversight. It moves us away from a “human capital” model toward one where understanding code is a prerequisite for participating in a society governed by algorithms.

    2. Systematic Literature Review (SLR) & Its Blind Spots
    An SLR is ideal for this stage because it establishes a rigorous baseline of what the global academic community currently validates as effective instruction. However, relying solely on publications can lead to missing:
    The “Grey” Gap: Innovative, “on-the-ground” practices from educators that haven’t been formalised into papers.
    Implementation Friction: Research often highlights success; we may miss the nuanced, messy failures of AI integration in real, under-resourced classrooms.
    Temporal Lag: AI moves faster than the peer-review cycle; we may miss the most recent shifts in LLM capabilities.
    3. Operationalising “Humanised”
    To make “humanised” pedagogy measurable, I’ve broken it down into three observable indicators:
    Relational Presence: The frequency and quality of “Presence Cues” (Alotaibi, 2026) where the AI acknowledges the learner’s specific context or struggle.
    Cognitive Autonomy: Measuring the “Chain-of-hints” (Hardman, 2026) to see if the AI allows the student to struggle productively rather than immediately providing the solution.
    Affective Feedback: Categorizing AI responses that validate the student’s emotional state (e.g., “I see this bug is frustrating, let’s look at it together”) vs. purely clinical error reporting.
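As a rough illustration of how those three indicators could be operationalised, here is a minimal keyword-based scoring sketch. Everything in it is an assumption: the cue lists, the function name, and the idea of counting surface cues at all — a validated instrument would need human coding and reliability checks, not string matching.

```python
# Illustrative rubric for scoring one AI tutor response on the three
# "humanised" indicators; the cue phrases below are invented examples.

PRESENCE_CUES = ("i see", "your code", "you mentioned")   # relational presence
HINT_CUES = ("hint", "what do you think", "try")          # cognitive autonomy
AFFECT_CUES = ("frustrating", "together", "well done")    # affective feedback

def score_response(text):
    """Count cue-phrase hits per indicator in a lower-cased AI response."""
    lower = text.lower()
    return {
        "relational_presence": sum(cue in lower for cue in PRESENCE_CUES),
        "cognitive_autonomy": sum(cue in lower for cue in HINT_CUES),
        "affective_feedback": sum(cue in lower for cue in AFFECT_CUES),
    }

reply = ("I see this bug is frustrating - let's look at it together. "
         "Hint: what do you think the loop index does?")
scores = score_response(reply)
```

A response scoring zero on all three dimensions would correspond to the “purely clinical error reporting” end of the spectrum.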
