Thursday 19th June
Ethical Use of Generative AI in Higher Education: Challenges and Opportunities
Syeda Rakhshanda Kaukab (Ziauddin University)
Abstract: The integration of Generative Artificial Intelligence (GenAI) tools such as ChatGPT, Bard, and Claude into higher education has opened up new pedagogical possibilities while simultaneously raising pressing ethical concerns. These AI-powered platforms can assist with academic tasks ranging from essay writing and content creation to formative evaluation and research synthesis, making them valuable resources for both students and teachers (Cotton et al., 2023). However, the rapid adoption of GenAI also raises complex issues of authorship, data privacy, bias, academic integrity, and unequal access.
One of the main concerns is academic dishonesty, in which students misuse GenAI to bypass original work and critical thinking. Educators, in turn, struggle to identify AI-generated content and to uphold equitable assessment practices (Selwyn, 2023). A further issue is the opacity of these models, trained on enormous and frequently untraceable datasets, which raises questions about ethical use and intellectual property rights. Additionally, bias in AI-generated outputs and unequal access to these tools could exacerbate existing educational gaps, particularly in institutions and regions with limited resources (Zawacki-Richter et al., 2023).
This study examines these challenges alongside the potential GenAI offers for more efficient, individualized, and inclusive learning. It argues for a proactive, values-based approach to integrating GenAI, with particular emphasis on transparent institutional policies, AI literacy training, and ethical frameworks. By raising awareness and encouraging critical engagement, higher education institutions can capitalize on GenAI’s promise while preserving academic integrity, equity, and trust.
References
Cotton, D. R. E., Cotton, P. A., & Shipway, J. R. (2023). Chatting and cheating: Ensuring academic integrity in the era of ChatGPT. Innovations in Education and Teaching International, 60(2), 117–127. https://doi.org/10.1080/14703297.2023.2190148
Selwyn, N. (2023). Education and artificial intelligence: Navigating the politics of ed-tech. Learning, Media and Technology, 48(1), 1–15. https://doi.org/10.1080/17439884.2022.2162258
Zawacki-Richter, O., Jung, I., & Bond, M. (2023). The ethics of artificial intelligence in education: Promises and pitfalls. AI & Society, 38, 1153–1166. https://doi.org/10.1007/s00146-022-01452-7
Navigating the Ethical Frontiers of AI in Higher Education: Insights from a Delphi Study on Academic Integrity
Ayşegül Liman Kaban (Mary Immaculate College; University of Limerick) and Aysun Gunes (Anadolu University)
Abstract: Drawing on a two-round Delphi method and engaging 12 international experts from academia, AI policy, and research ethics committees, this study investigates how the rapid proliferation of generative AI technologies is reshaping academic integrity frameworks in higher education. The study identifies the most pressing ethical challenges arising from the use of AI in research—including authorship ambiguity, algorithmic bias, data privacy, and the inadequacy of current institutional oversight. This paper will present evidence on how these challenges vary across global contexts and disciplines, and will demonstrate the degree of expert consensus on the ethical governance of AI in academic settings. The analysis was structured around consensus-building on twenty key ethical statements, which were quantitatively assessed using a five-point Likert scale and interpreted through interquartile range (IQR) analysis to determine alignment or divergence among experts. Findings indicate that while there is broad agreement on the importance of transparency, privacy protection, and accountability mechanisms in AI governance, significant variability remains around ethical data sharing, anonymization practices, and the independence of industry-funded academic research. These findings will be used to unpack the seven most critical institutional responsibilities for ethical AI integration in higher education: (1) clarity of authorship and attribution, (2) enforceable data privacy policies, (3) bias mitigation protocols, (4) transparent use of AI in student assessments, (5) inclusive policy formation, (6) structured AI ethics education, and (7) stronger academia–industry ethical alignment.
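As a concrete illustration of the consensus criterion described above, the following minimal Python sketch scores one ethical statement from a round of expert ratings; the IQR ≤ 1 threshold is a common Delphi convention we assume here, not a figure reported by the study:

```python
from statistics import median, quantiles

# Minimal sketch of the Delphi consensus check described above.
# Assumptions: ratings are five-point Likert scores (1-5), one per expert,
# and IQR <= 1 is taken as the consensus threshold (a common convention).
def consensus(ratings: list[int], threshold: float = 1.0) -> tuple[float, float, bool]:
    """Return (median, IQR, consensus reached?) for one ethical statement."""
    q1, _, q3 = quantiles(ratings, n=4)  # quartiles of the expert ratings
    iqr = q3 - q1
    return median(ratings), iqr, iqr <= threshold

# Example: 12 experts rating one of the twenty statements.
print(consensus([5, 4, 5, 4, 4, 5, 4, 3, 5, 4, 4, 5]))  # high agreement
print(consensus([1, 5, 2, 4, 3, 5, 1, 4, 2, 5, 3, 4]))  # divergent views
```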
This presentation will argue that AI ethics in higher education cannot rely on post-hoc regulation. Instead, anticipatory and virtue-based ethical frameworks must be embedded in institutional culture to ensure integrity in the age of algorithmic authorship.
Towards an EDIA-based and AI-enabled pedagogy across the curriculum
Mirjam Hauck, Rachele deFelice, Clare Horackova, Deirdre Dunlevy, and Venetia Brown
The Open University, WELS/LAL, KMI
Abstract: This contribution is inspired by Tracie Farrell’s “Shifting Powers” project, which proposes that rather than asking whether AI is good or fair, we should look at how it “shifts power”. Power relationships, we are reminded, preserve inequality within our society in real and material terms. How will AI contribute to those inequalities? Is there any chance AI can help to foster new balances of power, and if so, what will this look like in practice? Taking these questions as a starting point, our work is a first attempt at mapping out an agenda for learning and teaching with GenAI guided by social justice and inclusion principles. It is underpinned by a critical approach to the use of GenAI and aims to equip learners, including teachers as learners, with the skills that enable them to work with GenAI in equitable and inclusive ways and thus contribute to shifting powers in education contexts.
Using the learning and teaching of languages and cultures as a case in point, we will present and discuss the tenets of educator training informed by Sharples’ (2023) proposal for an AI-enabled pedagogy across the curriculum, with an added focus on social justice and inclusion.
Our insights stem from our collaboration with two AL colleagues who, like many others, are new to GenAI and have been trialling the so-called “protégé effect”, whereby we learn best when we teach something to others. We will present the outline of the educator training, which will be available as a short course later this year in the OU’s Open Centre for Languages and Cultures. In doing so, we will pay particular attention to the tension experienced by educators who find themselves balancing, on the one hand, anxieties about GenAI’s shortcomings and a perceived lack of technological expertise and, on the other, expectations to harness GenAI’s innovative potential in inclusive and equitable ways.
Designing Pedagogical AI to Scaffold Peer Feedback: Insights from a Classroom-Based Pilot
Zexuan Chen, Bart Rienties, and Simon Cross
Abstract: As generative AI tools increasingly enter educational settings, understanding how students interact with pedagogical AI during peer assessment becomes critical for supporting meaningful learning. This study aims to enhance self-regulated peer feedback through dialogic interaction with a pedagogical AI agent. Grounded in self-regulated learning (SRL) theory and educational dialogue research, we proposed a conceptual model that outlines how AI scaffolding can support students across the four SRL phases: task definition, goal setting and planning, tactic enactment, and adaptation.
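To make the scaffolding idea concrete, here is a hypothetical sketch of phase-keyed prompts; the four phase names come from the conceptual model above, while the question wording and data structure are our illustrative assumptions, not Aiden’s actual design:

```python
# Hypothetical sketch: mapping the four SRL phases to guiding questions a
# pedagogical agent might pose during peer feedback. Phase names follow the
# conceptual model above; the question wording is illustrative only.
SRL_SCAFFOLDS = {
    "task definition": "What is this peer-review task asking you to evaluate?",
    "goal setting and planning": "Which aspects of the draft will you comment on, and in what order?",
    "tactic enactment": "Quote a specific sentence and explain why it works or how to improve it.",
    "adaptation": "Re-reading your comments, what would you revise before submitting?",
}

def next_prompt(phase: str) -> str:
    """Return the scaffolding question for the learner's current SRL phase."""
    return SRL_SCAFFOLDS[phase]

print(next_prompt("task definition"))
```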
To operationalize this model, we developed Aiden, a pedagogical AI agent embedded in a custom-built peer review platform called PeerGrader. A classroom-based pilot study was conducted with a second-year undergraduate English class at a university in southern China (N = 41). Students completed a peer feedback task on an EFL writing assignment using an early version of Aiden. Data sources included system interaction logs and a brief post-task survey to examine student engagement patterns, types of feedback behavior, and perceived usefulness and usability of the AI interface.
Preliminary findings suggest that students showed strong interest in interacting with Aiden, as evidenced by the chat logs, and that the tool was helpful in facilitating peer feedback, as reflected by an increase in feedback quantity. Students generally found the tool easy to use and supportive in guiding their thinking, though some expressed a desire for more flexible or personalized prompts. Teacher feedback also highlighted the potential of integrating such tools into classroom instruction to enhance student engagement and support learning processes.
Insights from this pilot informed the iterative refinement of Aiden’s scaffolding functions and the broader implementation of the conceptual model in a follow-up study. This work provides practical implications for the design and classroom integration of pedagogical AI in support of peer assessment and self-regulated learning.
AI-enhanced educational videos for online harm reduction: insights from the PRIME project
Elizabeth FitzGerald and Peter Devine
Abstract: The PRIME (Protecting Minority Ethnic Communities Online) project delivered innovative harm-reduction interventions, processes and technologies to transform online services and create safer spaces for Minority Ethnic (ME) communities. The project partners were Heriot-Watt University, the Open University, Cranfield University, and the universities of Glasgow and York. The project addressed racialised inequalities and discrimination in the design and delivery of digital health, housing and energy services, and finished in March 2025. It produced academic publications, a Code of Practice, policy briefs, a technical toolkit and design guidelines.
One of the OU’s deliverables was a set of three short videos, produced in nine languages and featuring speakers from diverse communities, identifying the main types of harm, protective actions and sources of assistance. These videos were developed within IET’s Learning Technologies team, leveraging AI-generated and AI-supported approaches to enhance efficiency in content production.
This presentation will outline the step-by-step process involved in video development, the role of AI in streamlining design and translation, and initial findings from focus group evaluations on engagement and effectiveness. Early indicators suggest the videos contribute to more inclusive and accessible digital education, reinforcing PRIME’s commitment to safer online spaces for minority ethnic communities.
SAGE-RAI: Smart Assessment and Guided Education with Responsible Artificial Intelligence
Joseph Kwarteng, Aisling Third, John Domingue, Alexander Mikroyannidis, David Tarrant, Gráinne O’Neil, Siobhan Donegan, Tom Pieroni, Kwaku Kuffour-Duah, Thomas Carey-Wilson, Stuart Coleman
Abstract: Can responsible Generative AI (GenAI) lead to improved student outcomes? The SAGE-RAI project investigates this critical question through a partnership between The Open University’s Knowledge Media Institute and the Open Data Institute, developing and evaluating AI Digital Assistants that enhance personalised learning at scale. Motivated by Bloom’s 1984 study demonstrating that students receiving one-to-one tutoring perform two standard deviations better than those taught through traditional classroom instruction, we explore how responsible GenAI can unlock cost-effective, scalable personalised education. Our project addresses the practical limitations tutors face when supporting large cohorts, investigating whether AI-enhanced tutoring can bridge the performance gap while maintaining educational quality and equity.
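To put that effect size in perspective: under a normality assumption (ours, for illustration, not the project’s), a two-standard-deviation advantage places the average tutored student at roughly the 98th percentile of the conventionally taught group. A minimal check in Python:

```python
from statistics import NormalDist

# Percentile equivalent of Bloom's two-sigma effect, assuming normally
# distributed attainment scores (an illustrative assumption).
percentile = NormalDist().cdf(2.0) * 100
print(f"{percentile:.1f}")  # -> 97.7
```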
Our AI Digital Assistant employs Retrieval-Augmented Generation (RAG) technology, combining Large Language Models with task-specific knowledge bases to ensure accurate and relevant responses. The system supports flexible deployment across local and cloud platforms, accommodating LLMs from multiple providers, including OpenAI and open-source models. Through user-centric design emphasising transparency, privacy, and ease of use, we democratise AI assistant creation for non-technical educators. The project has deployed AI Digital Assistants across the ODI’s learning environment, where the “ODI AI Assistant” supports both internal staff activities and personalised student learning by integrating with existing educational courses.
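The retrieve-then-generate pattern described above can be sketched minimally as follows; this is not the project’s implementation, and the model names, prompt wording, and helper functions are our assumptions:

```python
import math
from openai import OpenAI  # assumes the official OpenAI Python SDK is installed

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def embed(texts: list[str]) -> list[list[float]]:
    """Embed texts with a hosted embedding model (model name is an assumption)."""
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return [d.embedding for d in resp.data]

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def answer(question: str, knowledge_base: list[str], k: int = 3) -> str:
    """Retrieve the k passages most similar to the question, then answer from them."""
    doc_vecs = embed(knowledge_base)
    q_vec = embed([question])[0]
    ranked = sorted(zip(doc_vecs, knowledge_base),
                    key=lambda pair: cosine(q_vec, pair[0]), reverse=True)
    context = "\n\n".join(doc for _, doc in ranked[:k])
    chat = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; the project supports multiple providers
        messages=[
            {"role": "system",
             "content": "Answer using only the provided course material."},
            {"role": "user",
             "content": f"Material:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return chat.choices[0].message.content
```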
Central to our approach is responsible AI implementation, addressing critical challenges including misinformation, copyright concerns, and algorithmic bias. Our evaluation methodology measures pedagogical benefits and issues through real learner testing, comparing AI-guided approaches against traditional methods. The project delivers open-source reusable tools, contributes to popular RAG libraries such as “embed-js”, and produces ethical guidelines for responsible GenAI application in education, establishing best practices for scalable implementation.