AI for academic writing – to plagiarise or not to plagiarise?

Artificial Intelligence (AI), notably Chat GPT, as a language model, can potentially be misused for plagiarism due to its ability to generate coherent and contextually relevant text. While it’s a powerful tool for various legitimate purposes, there is a risk that unethical users may employ it to produce content without proper attribution or originality.

This was my belief last summer when I flagged several final-year scripts for potential plagiarism through unethical use of Chat GPT. Jonathan, in his recent blog, refers to my explanation that I suspected students’ use of AI because their work seemed too perfect.

By ‘perfect’, I meant, as I wrote in the paragraph above, ‘coherent and contextually relevant text’. Actually, I did not write that first paragraph (only!); rather, it was provided for me by Chat GPT. (I doubt I could have expressed so accurately the way I had felt about AI.)  So, have I committed plagiarism?

I become unsure when I turn for help to The Open University’s Plagiarism Policy: ‘Plagiarism is using, without acknowledgement, someone else’s words, ideas or work.’ How far can we reasonably describe a robot as ‘someone else’? Was it unethical of me to use an expression I had, in effect, commissioned? Would it have been less unethical if I had edited Chat GPT’s text, or acknowledged its use, or supplemented the text Chat GPT provided with appropriately referenced academic sources – as my students did?

Further, how much of our writing is totally original in any case? Bakhtin ([1952–3] 1994) tells us, “Each utterance is filled with the echoes and reverberations of other utterances” (page 291). We continually adapt and adopt snippets of text from elsewhere and present them as our own. It is acceptable to consult a dictionary, a thesaurus or a Google search to help us write that coherent and contextually relevant text. Jonathan, in his recent blog post, asks what all the fuss is about regarding AI, and I wonder, should we be making a fuss?

Jonathan cites “Can IT think?” by Philip Ball (2023), who argues that AI should be treated with great caution, and I have come across descriptions of widespread exploitation of AI with dubious results, such as the use of a chatbot as a therapist – but is employing AI to aid our academic writing unethical?

Returning to Chat GPT for inspiration, it went on to advise me (or, rather, followed my instruction to do so) about the existence of OpenAI, the research company behind it, which states its belief that “AI should be an extension of individual human wills” – an extension, then, not a replacement for human endeavour. This approach seems to resonate with Simpson (2023), a clinical teaching fellow, who advocates reframing the way (medical) students think about AI “not as an academic shortcut but as more of a companion”. I like the idea of “companion” – like a dictionary or thesaurus – and I also appreciate that the concept of a “shortcut” is a contested one.

Might we ask, in our potentially fraught, busy, complex lives, why we should not look for shortcuts in our academic life as well as in our everyday existence? And how much of the endeavour that AI shortcuts forms part of valued academic activity anyway? It saves thinking and editing time and provides that springboard to develop discussion, as it has for me above. Daher (2023), in Will Chat GPT be the disrupter academia needs?, seems cautiously to embrace AI as “the spark that will change education for the better”, a means to reframe what we value in academic writing and to turn our focus towards critically evaluating sources.

I do not understand that argument. Surely critical evaluation already forms a key part of being an academic. And I do value that time of thinking, crossing out, rewriting, checking and editing; it’s part of the process that makes writing my own. I’m not looking for shortcuts, and I don’t plan to continue to make significant use of AI in my own work. But I don’t now think using AI in academic writing is necessarily unethical, and how far it is plagiarism is a discussion we need to have.

Chat GPT finished the 200 words I’d requested with a bland reassurance: “Encouraging responsible AI use can help ensure that the technology benefits society positively without contributing to plagiarism issues.” (Chat GPT)

AI, then, is just another tool in our digital repertoire, and, as Jonathan asked, “What is all the fuss about?” I am still not sure…

by Jane Cobb

I have been an Associate Lecturer at the Open University since 2002, tutoring mainly English Language modules. I live in Stourbridge in the West Midlands with my husband, two adult children and three Romanian rescue (street) dogs. My recent EdD and my current research interests concern the multiple perspectives on feedback practices around assessed writing in HE. This is my first venture into blogging, and I am looking forward to this creative space, where colleagues can share, debate, and discuss issues arising around their research.

One Reply to “AI for academic writing – to plagiarise or not to plagiarise?”

  1. As the Jonathan mentioned in Jane’s blog, I thought I would respond. Hopefully, others will join in the debate.
    I was really struck by the way Jane used and then acknowledged the ‘AI writing’ from Chat GPT in her opening paragraph. I didn’t spot this at first, and it would be interesting to know whether other people notice that it is subtly different. Perhaps we have to think about how we read and whether we have, or can develop, sufficiently well-tuned antennae to spot when AI is present.
    So I wonder, if AI is going to be the shortcut that Jane mentions, whether we need to learn to read more carefully, alert to the possibility that AI is present.
