Help! A robot wrote my essay!: Reflections on webinars about ChatGPT

Illustration of three robots prodding a human brain to acquire knowledge.

Eleanor Moore ~ Learning Designer

 

With experts across the world sounding alarm bells about the impact of AI (disappearing jobs and widening wealth inequality, to name just two issues), how worried should we be about its potential impact on education?

When ChatGPT burst onto the scene in November 2022 with its human-sounding prose and extensive knowledge base, the worried frown in academia suddenly got a lot deeper. With the last bastion of academic excellence – the essay – under threat, learning technologists and academics across the globe are asking: what are we going to do? It will therefore come as no surprise that the recent University of Kent webinars on ChatGPT exceeded the maximum user limit in Teams, with 1,337 attendees.

In the first webinar, Michael Webb from JISC explored the impact of ChatGPT on education and whether content detection tools offer a reliable way of telling who, or what, has produced a piece of work. As Michael explained, for us to trust these systems we need to understand how they work: one tool, GPTZero, looks for patterns in the text, whereas OpenAI’s own tool is trained on sets of texts both written and not written by AI.

In the second webinar, ChatGPT: holding up the mirror, Philippe De Wilde from the University of Kent explained how ChatGPT uses randomness, or stochasticity, to respond to prompts: it repeats what it has learnt, parroting it back with a degree of randomness. Michael explained how he fed a text generated by ChatGPT into the OpenAI detection tool, which judged the text ‘very likely’ to be AI-generated. After a few simple tweaks in ChatGPT (e.g. asking it to use the word ‘the’ less often), the detection tool judged the same text ‘very unlikely’ to be AI-generated.
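To picture the ‘stochasticity’ Philippe described, here is a minimal sketch in Python of how a language model chooses its next word: rather than always taking the single most likely option, it samples from a probability distribution, so the same prompt can produce different continuations. The vocabulary and probabilities below are invented purely for illustration; real models work over tens of thousands of tokens and far richer context.

import random

# Toy next-word probabilities (invented for illustration, not real model output).
next_word_probs = {
    "essay": 0.40,
    "assignment": 0.25,
    "report": 0.20,
    "poem": 0.15,
}

def pick_next_word(probs):
    # Sample a word in proportion to its probability,
    # rather than always taking the single most likely one.
    words = list(probs.keys())
    weights = list(probs.values())
    return random.choices(words, weights=weights, k=1)[0]

# Running the same 'prompt' several times gives different continuations.
for _ in range(5):
    print("The student handed in the", pick_next_word(next_word_probs))

Run it a few times and the output varies: that small dose of built-in randomness is what makes ChatGPT’s responses feel less formulaic, and it is also part of what makes AI-generated text so hard to detect reliably.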

So, if simple tweaks can easily defeat the detection tools, and students are one step ahead, sharing detection-tool hacks on TikTok, then perhaps detection is not the answer? For the moment, Michael advises staff in HE institutions not to pass student work through AI detection tools, as these would need to be adopted as institutional tools with data permission issues fully addressed.

There is one thing for sure: unlike essay mills, which are now illegal, AI tools are not going to go away. As Philippe said, ChatGPT will become ubiquitous. Google has already announced Bard and, despite the chatbot embarrassingly answering a question incorrectly in its launch demo and wiping over $150bn off the market value of its parent company, be in no doubt that AI tools are being continuously refined. As Michael pointed out, ‘dealing with systems that output plausible but wrong information feels like a very new challenge’. However, we only need to think of tools such as Wikipedia to see how the rudimentary can evolve into an omnipresent giant.

How can we respond?

Let’s look at the positives, of which there are many! The University of Kent has set up a Padlet where you can discuss and see what others are saying about ChatGPT in education. There are many ideas, resources and links to explore, including what ChatGPT itself thinks are its advantages, plus ways to make life in academia easier, save time and enhance the student experience. For example, you could ask ChatGPT to suggest a range of essay titles, write the rubric, compose the essay, and give feedback. Could AI tools one day replace the lecturer? There’s another furrow in the worried brow.

For the moment, the big issue is how we deal with student use of these tools. As Philippe explained, ChatGPT mirrors exactly what is going on in academia at present: an increase in the volume of output and a decrease in its originality. When a student writes an essay, they bring together a mixture of sources, but where is the originality? The pressure on students and academics to produce at volume often comes at the cost of creativity and originality. Philippe wants to move towards revaluing originality, which could be achieved by reducing the volume of output from both students and academics. Could this signal an opportunity for a cultural shift?

In the third webinar, How AI has answered the UnGoogleable exam question and what to ask next?, David Smith from Sheffield Hallam University described a recent experiment: he fed exam questions into ChatGPT and asked colleagues, who didn’t know that the answers were AI-generated, to mark them. The answers were quite good, the information was largely correct, and most essays were marked at high 2:2s to low 2:1s. But the text was vague; there was a distinct lack of depth and understanding, and a fair bit of bias. David explored how ChatGPT fared with a variety of question types; the webinar is worth a look to explore the full range in more depth. He also gave helpful pointers to the tool’s potential: ask students to locate reference sources, verify facts, evaluate information, and use their personal perspective to arrive at a judgement. Ask them to document how their perceptions changed and how they ensured the information wasn’t biased. There is a real opportunity here to develop reflection and critiquing skills.

ChatGPT and other AI technologies are not going to disappear, so we need to work with them. We wouldn’t expect students not to use a calculator or a spellchecker, and AI already has its place in the academic toolkit. For now, what can you do? Michael’s advice is to delve into AI tools and explore how they respond to the types of questions you ask. Ask ChatGPT something you have detailed knowledge of and see how it responds. By using the tool, you’ll begin to understand its opportunities and limitations. The University of Kent Padlet is definitely a good starting point, and JISC have set up a group for people who want to work with them on the educational uses of AI. If you haven’t already, start conversations with your colleagues about AI. Set up working groups, read, share, and don’t be afraid. There’s a whole world of opportunity to embrace.

 


References

The Guardian (10 February 2023) ‘Google v Microsoft: who will win the AI chatbot race?’, https://www.theguardian.com/technology/2023/feb/10/google-v-microsoft-who-will-win-the-ai-chatbot-race-bard-chatgpt (Accessed 14 February 2023).

The Education Hub (28 April 2022) ‘Essay mills are now illegal – Skills Minister calls on internet service platforms to crack down on advertising’, https://educationhub.blog.gov.uk/2022/04/28/essay-mills-are-now-illegal-skills-minister-calls-on-internet-service-providers-to-crack-down-on-advertising/ (Accessed 15 February 2023).


Banner image: Shawndra Hayes-Budgen, via Canva