Times Higher Education

Universities need to ‘redefine cheating’ in age of AI

Widespread use of new tools has ‘blurred lines’ between academic support and misconduct, study finds

June 27, 2025
[Image: Student struggling in an exam hall, with computer code overlaid on other students taking the exam, illustrating that artificial intelligence has “blurred the line” between academic support and misconduct.]
Source: Alamy/iStock montage

Artificial intelligence has “blurred the line” between what constitutes academic support and what should be seen as misconduct, necessitating a rethink on what is considered cheating, according to a new study.

A fifth (22 per cent) of students surveyed for the paper admitted using AI to cheat in their assessments in the past 12 months.

But how students said they had cheated with AI varied. While one simply said they had asked ChatGPT to “write my stuff lmao [laughing my ass off]”, others displayed more nuanced uses. One told the study that they had used it to “ask questions when I was stuck on a particular question, [as I] needed to be sure I was getting it right”.

Author Phil Newton, an academic integrity expert and neuroscientist at Swansea University’s medical school, writes in the paper that it is unclear whether this sort of behaviour “would constitute cheating under conventional interpretations of assessment security and integrity”.


The paper, in the journal Assessment and Evaluation in Higher Education, further says that there is a “profound disconnect between simple ideas of ‘cheating’ and the more complex, nuanced uses for GenAI” which “blur the line between using GenAI for academic support versus misconduct”.

AI tools can offer “enormous benefits” to students, and “therefore, from a policy perspective, it is now unclear what it means to ‘cheat’, and what is an acceptable use of these tools to support learning”. Many of the behaviours identified by students “could reasonably be interpreted as legitimate use of GenAI for academic means”, it says.


Newton questions, for example, whether an assessment draft outline generated by AI “represents the learning of the student” or whether it is cheating.

He told Times Higher Education that current university policies leave students in a “no-win” situation, as “either they use GenAI and risk being accused of cheating, or they don’t use it, knowing full well that many of their colleagues will, and so they will get a worse mark”.

Proofreading using AI could be viewed as cheating or not “depending on the rules”, he said, adding: “If we are testing their ability to write, then using ChatGPT as a proof-reader would be cheating. If we are testing their ability to demonstrate learning, then proof-reading might be something we want them to do, especially if they have some challenges with writing or language.”

The paper also finds that 91 per cent of students report that they are assessed through written coursework, and that 55 per cent of students had been assessed using unsupervised online examinations, with such methods “vulnerable to cheating”.


It concludes: “Most students are using GenAI, and so there are serious questions about the use of these assessment methods as valid ways to certify the learning of students. There is an urgent need for the sector to develop more appropriate summative assessments in the age of GenAI, and for appropriate policies to support the use of those assessments.”

juliette.rowsell@timeshighereducation.com


Reader's comments (9)

Does no one recall CliffsNotes, encyclopedias, even paper notes? None of this is unprecedented. Other, that is, than the uncoordinated, wild-eyed responses. Can we never learn from the history of the reception of one new technology after another?
Yes, but the students still had to read the CliffsNotes and York Notes and try to adapt the material to the specifics of the question asked or task posed. Whereas AI platforms do more than that: they can compose the response to order given an instruction, and the program sorts out the material and provides an answer. Also, if someone used the CliffsNotes etc. you could, with a little bit of trouble, locate the source materials (often in relatively undigested form) and see what the student had done with them. With AI you can't do that, as it's a bespoke response, if I understand this correctly, and it does not leave a trace in the way a printed or electronic text will. Then also a student may say, yes, I ran the question through an AI platform, but I also read the books and articles and just used the AI as you might a sample answer, in addition to my own research.
Yes, given the development of this technology, I think we really have to be very careful about accusations of 'cheating' or being a 'cheat'. AI assistance is becoming perfectly normalised in non-academic but also professional working environments. For example, why do you think Mr Rory Stewart (and his ilk) is so amazingly erudite and informed about geopolitics and history when he speaks on his podcasts? Well, he has ChatGPT open in front of him all the time. Is that cheating?
Cars have been commonplace for over half a century, yet we still encourage young people to do exercise.
Eh? I don't encourage my students to "do exercise". None of my business. I think I would get into trouble pdq if I suggested one of my overweight students might benefit from a bit of exercise!! What are your essay feedback sessions like? "Well Kelly, I think your essay was extremely good, well-researched and very relevant and incisive. But you could do with losing a few pounds if I am being honest". And yeah, they have cars and can drive to the gym if they want, and a lot of them do, which is a better analogy.
Well, this is the key point in my view. If AI has become so pervasive and accepted in our world generally, in social media etc. and in the world of work itself, it is then very hard to take it out of the academic life of the student completely, and if we allow its use it is very difficult to 'police' (if you like) legitimate and illegitimate use. Many of our students do not grasp the finer details of our lucubrations in this regard. Comparing AI to study aids like York or Coles Notes is a bit like comparing a MacBook Pro to the quill pen. Many students won't see these things as 'cheating', as they see their aim as achieving the best mark, so deploying the latest learning technology to this end, when they are paying high fees and working part-time jobs to fund their education, may not be as cut and dried as we academics with our PhDs, in our comfy offices, drinking tea and judgmentally chastising the errant, might think it to be. And indeed, in the creative subjects we know that a number of artists and writers now use AI to generate their compositions as part of their creative process, thus blurring our (nineteenth-century?) notions of originality, creativity and plagiarism. I think this is why some commentators have raised the issue of an 'existential' crisis for certain subjects (I guess chiefly the humanities). After franchising out the knowledge aspect of our disciplines some years ago (Dr Google and his Scholars etc.), maybe it was inevitable that the same thing would happen to the skills component.
King Cnut didn't try to stop the tide, if that's what you mean...
What?
Hm, seem to have replied to the wrong comment..
