A few weeks ago an educational journal published an article entitled "Chatting and Cheating: Ensuring Academic Integrity in the Era of ChatGPT," which was peer-reviewed by experts and read favorably by thousands of readers — none of whom realized that the text had in fact been produced by the ChatGPT AI, drawing on and plagiarizing researchers' findings, as reported by Anna Fazackerley in The Observer.
Ironically, the article warned that artificial intelligence tools "raise a number of challenges and concerns, particularly in relation to academic honesty and plagiarism" — when it had itself been written by the popular and controversial chatbot.
Debby Cotton, a professor at Plymouth Marjon University, lent her name to the article, posing as its author: "We wanted to show that ChatGPT writes at a very high level," the technology expert told The Observer, adding that the development of AI "is an arms race."
For Cotton, "technology advances very quickly and it will be difficult for universities to keep up with it" — as the controversial article made clear by slipping past all the official, theoretically plagiarism-proof filters of a prestigious education journal.
Cotton, along with two colleagues from Plymouth University who also claimed to be co-authors, later alerted the editors of the journal Innovations in Education and Teaching International; but the four academics who reviewed the piece had assumed it was written by the three of them, thanks to ChatGPT's convincing work.
After years of chasing so-called "essay factories" — which sell pre-written essays and other academic work to students trying to cheat the system — universities around the world now face the challenge of AI, which is even harder to track and detect.
Indeed, these very "factories," catering to students who do not want to spend their time producing new writing, could themselves be using ChatGPT for their fakes — and universities acknowledge they are rushing to catch up and to detect anyone passing off the popular chatbot's work as their own.
Thomas Lancaster, a computer scientist and expert in contract cheating at Imperial College London, interviewed by The Observer, explains that many universities are "panicking" over the proliferation of these 'made in AI' essays: "If all we have in front of us is a written document, it is incredibly difficult to prove it has been written by a machine, because the level of writing is usually good," he warns.
As if that were not enough, Lancaster adds that "the use of English and the quality of the grammar are usually better than that of a student" — and it is set to improve further, given the more "human" writing of the latest version of the AI model, GPT-4, released last week.
However, Lancaster believes AI is still too immature to handle work from later stages of a university degree, so "as the course becomes more specialized, it will be much more difficult to outsource the work to a machine," although "it would probably do a good job with the first few assignments."
The problem has already reached the University of Bristol, where vice-chancellor Kate Whittington sounds the alarm: "We are very clear that we will not accept cheating because we have to maintain standards." At this iconic British institution, repeat offenders caught using AI for their essays could face expulsion.
In Coventry, too, a whole protocol has been launched to combat plagiarism via ChatGPT. Irene Glendinning, head of academic integrity at Coventry University, explains her concerns: "We are redoubling our efforts to get the message to students that if they use these tools for cheating, they can be kicked out."
Faced with this situation, Glendinning's message is aimed at students, who she believes "are wasting their money and time if they do not use the university to learn." For this reason, she points out, assignments should be designed so that "the voice of the student" can be heard, steering away from those with "many facts and little critical analysis."