Professor Cath Ellis: ChatGPT And The Controversy Surrounding Student Cheating

By Alice Duthie
on 25 January 2023

ChatGPT, the cutting-edge language model developed by OpenAI, is revolutionising the way we interact with technology. But as the capabilities of AI continue to advance, so too does the potential for misuse. One such concern is the rise of contract cheating, where students use AI-generated essays to pass off as their own work. It’s a problem that can no longer be ignored, and one that requires the attention of educators, institutions, and developers alike. Here, we delve into the complex issue of contract cheating and the role of AI in academic integrity.

Did you catch that the introduction above was written by AI? All I had to type into ChatGPT was “write me a punchy magazine introduction about ChatGPT and contract cheating”, so imagine the implications this has for academic integrity!

Professor Cath Ellis of UNSW Sydney answers our questions about ChatGPT and the controversies associated with the AI tool. She is currently researching academic integrity, with a particular interest in contract cheating.

Professor Cath Ellis

Professor Cath Ellis, can you tell us more about your research?

Since 2015 I have been investigating academic integrity and, in particular, student cheating. I’ve been especially interested in a specific form of cheating: what we call ‘contract cheating’. This is when students outsource the work they need to do for their assessment and learning to another person. The cheating occurs when the student then submits that work as if it were their own and as if it were demonstrating their own learning, which of course it isn’t.

When contract cheating occurs there is, by definition, at least one other person involved. That provider can be acting for non-commercial reasons (e.g. a student doing a favour for another student, or a parent doing their child’s work out of a misplaced sense of love or care) or for commercial reasons (in exchange for money). I’ve been particularly interested in the commercial contract cheating industry: the business models that operate within it, its work processes, and the value propositions it presents to its student-clients. I’m also interested in how we can use that knowledge to improve both how we detect contract cheating and how we deter students from engaging in it.

ChatGPT has created a lot of controversy recently because of cheating concerns. How worried should we be?

It is certainly something we should attend to, but I’m not convinced we need to panic about it. My suggestion is that a ‘be alert but not alarmed’ response is the best one.

On the one hand, every teacher on the planet has to accept that much of what they ask students to do to demonstrate their learning can, in turn, be asked of an AI tool like ChatGPT. In many instances, these tools will do a good-enough or even a good job of it. The same is true of contract cheating: if you can ask a student to do something, in most instances they can ask someone else to do it for them. What’s important to remember is that if either of these things happens, whether a student outsources their work to another person or cognitively offloads it to a tool, genuine learning hasn’t occurred. That’s what we should be worried about. Doing nothing about either of these things is, therefore, not an option; it hasn’t been an option for some time, given the prevalence and affordability of commercial contract cheating services. If ChatGPT causes teachers to wake up to the fact that their assessment work is pretty easy to outsource to a tool and/or another person, then that’s a good thing.

On the other hand, these tools exist; they are already out there in the world. Many professions and professionals are already using them. Frankly, most of us make use of AI, and even generative AI, on a day-to-day basis, often without giving it much thought. Predictive text, facial recognition, spam filters and navigation systems all make use of AI, and I for one would not want to go back to a world without them. AI is starting to do things in areas like medicine and law that were previously unimaginable, and that’s a good thing. Teachers need to learn what AI tools are, how they work, what they can do, what they can’t yet do and what they will never be able to do. They also need to learn how to use them, because they have to help students learn how to use them too.

AI is only going to get better, and the value proposition these tools will present to many, if not most, industries is going to be very compelling indeed. Graduates need to know how to use them efficiently, effectively, ethically, morally and legally. This all necessarily requires higher-order skills like critical analysis and evaluative judgement. Here’s an example: we know that a lot of generative AI tools currently produce outputs of variable quality and accuracy. We need to ensure our graduates have the evaluative judgement skills to tell the difference between good, barely good enough and not yet good enough. We also need to make sure graduates know how to take something from not yet good enough to good enough, good or even great if they are going to add genuine value to the enterprises of their graduate employers.


Do you think that the rise of AI platforms like ChatGPT could contribute to a redesign of the way assessments are written? Will there be a greater shift to evaluating students’ critical thinking over memorisation?

I certainly hope it does. That shift to evaluating higher-order thinking is exactly what’s required, and fortunately it’s easier to secure assessment of higher-order thinking than of lower-order thinking. The problem is that students need to build up to those higher-order thinking levels, and they usually do this by mastering lower-order thinking first, which often requires memorisation. I’m talking about things people simply have to know and be able to remember; a good example is learning basic anatomy before moving on to other aspects of medicine. The tricky part is that assessment of these ‘lower order’ skills and knowledge is much harder and more expensive to secure. This will probably require more than just assessment redesign; it’s likely to require curriculum redesign. Our long-term aim needs to be to focus our energies on genuinely measuring genuine student learning, and to do this we need to find ways to empty the value of cheating from our courses.

What measures might education institutions put in place to prevent students from cheating with AI?

Wholesale curriculum redesign is hard to achieve quickly, so in the immediate short term there are a few things institutions and teachers can do. First, they can talk to students about it: taking the time to be clear about what, in each assessment context, is considered acceptable and unacceptable assistance is a great first step. This is a reminder that cheating is contextual: the exact same behaviour can be perfectly acceptable, even commendable, in one context and rightly considered cheating in another. I’ll give you an example: riding an eBike on a cycle commute to and from work is something any reasonable person would accept and even commend, but if a cyclist competing in the Tour de France did it, it would universally be considered cheating. Sometimes we need to value what someone can do on their own, without particular forms of assistance, and in those instances we need to make that clear to students. This won’t necessarily stop all students from making use of what’s deemed to be unacceptable assistance, but it clears away any confusion for students who may not be sure.

Another thing we can do is refocus our attention on the main game: student learning. Having conversations with students about the work they claim to have done themselves, to ensure that it actually demonstrates their genuine learning, is something we’re probably going to have to be prepared to do. Putting in ‘traps’ to catch students out is not a great way to go about things. We can’t expect students to act with integrity if we don’t, and setting traps is not what I would call acting with integrity.

As new AI models rapidly emerge, will it even be possible for plagiarism checking software to keep up?

Probably not. We often talk about detection as something of an arms race, and that is only ever going to end one way. But at the same time, we have to do what we can to secure our assessments in order to assure that genuine learning has occurred. One thing that distinguishes ChatGPT and other AI tools from contract cheating is that AI is not explicitly designed for cheating, whereas contract cheating, and particularly the commercial contract cheating industry, certainly is. That is why legislation outlawing the provision of contract cheating services is quite a different proposition to trying to ban AI.

Are there any ways that ChatGPT could positively impact the education system?

Oh absolutely, in many, many ways that we are only just beginning to consider and explore. In fact, it already is. Even something as simple as PowerPoint’s design assistance (which is a form of generative AI) helping teachers make lecture slides more engaging and visually rich is something I’m sure millions of students appreciate. Teachers are, on the whole, imaginative and innovative people, and we are only at the start of hearing about all the interesting and effective ways these tools will bring benefits to teachers and their students’ learning.
