How are Durham students using ChatGPT?

By Samantha Webb, Elizabeth McBride, and Sophia Lieuw-Kie-Song 

In an anonymous survey conducted by Palatinate, many students said they had used ChatGPT last year.

While a few students admitted that they had asked the tool to write full assignments, most saw little value in using it this way.

One student described a particularly bad experience: “Before I realised it was generative, which meant it just completely lied and was totally unreliable, I thought I could use it like an advanced search engine because I was burnt out and stupid.

“I typed in ‘provide a quote from X on y with Harvard referencing’ and it did! The texts all existed and what it’d written checked out, so I put it in. Lo and behold, not even slightly real quotes. Complete fabrications. I checked all the references, and the pages they led to didn’t have the quotes on.” The same student claimed that the University hadn’t flagged the false references.

Durham has not issued a complete ban on ChatGPT use, but has said “inappropriate use of AI in producing assessed work could be considered as cheating and may constitute academic misconduct. Students unclear on the academic misconduct regulations may check with their academic departments.”

Many said they used ChatGPT as a free tool to check their grammar, while others generated essay plans or ideas for assignment titles. One student described ChatGPT as a way of “proofreading, similar to how word editors work.” Several said they had asked the tool to restructure their work and link paragraphs together better.

Another noted that ChatGPT could be used as a building block for assignments: “I only used it in desperation a couple of times, but I directly asked it a question I didn’t know where to start with and then used Google and my own knowledge to perform a sanity check on the answer before paraphrasing it.”

Others were more tech-savvy. One Physics student outlined how they’d inputted all assignment data into the programme “then asked ChatGPT which areas of physics are relevant, which formulas use these variables etc. basically just to save time instead of looking back on notes.”

Another described how they “fed ChatGPT the marking criteria and asked for feedback from it on areas of improvement. I also used it to assess the strength of my arguments and help with their structure.”

One student said, “Being an international student whose first language isn’t English, with the additional difficulty of having a learning disability, the tool aids me in gathering an overall plan that is less complicated. Simplifying the task at hand helps me figure out the overall assignment with less grandiose wording. I don’t make the tool complete my assignment, rather just assist me as a tutor.”

A graduate described how they had used ChatGPT to “copy-edit their dissertation introduction for readability and clarity,” after their supervisor recommended the tool for this purpose.

Some students expressed concern about the University’s ability to detect ChatGPT. On the whole, however, there was doubt that the tool could achieve high grades, with one noting that “assignments are too complex for ChatGPT to be able to do successfully, so it only helps with proofreading type tasks anyway”.

Another stated, “I don’t think English students are using ChatGPT—our degree relies on original thought and ideas, which ChatGPT can’t provide.”

“I’ve been in seminars with people that used ChatGPT to summarise readings, and it’s very clear they don’t know what they’re talking about or they’ll get the main point of the reading completely wrong. I wouldn’t be surprised if my peers use ChatGPT outright for summatives, but I feel better prepared in the long run (ie applying for Masters and PhD degrees) as I have the experience writing papers, and I feel like there’s only so far ChatGPT can take them.”

Yet another student expressed concern that the perceived popularity of the AI tool amongst the student body may “invalidate the process of being assessed on our ability to think critically”. 

While students unanimously agreed that using ChatGPT to make slight modifications or check punctuation was acceptable, many suggested that “to get whole paragraphs or essays written for you completely undermines the hard work that everyone else is doing and risks changing the university process”. 

One said, “You’d hope the university will be bringing in some sort of screening system whereby people can’t just hand in a full AI written essay.”

A Freedom of Information request revealed that Durham investigated zero cases of ChatGPT-related academic misconduct centrally in the 2022/2023 academic year. However, in a statement to Palatinate, Durham said, “The fact that the SDC has not yet considered or formally disciplined any students for potential misuse of ChatGPT for the year 2022-23 does not mean that we have not investigated or penalised students for inappropriate use of AI.”

“Inappropriate use of AI in producing assessed work could be considered as cheating and may constitute academic misconduct. Students unclear on the academic misconduct regulations may check with their academic departments.

“Durham has a two-stage academic misconduct process, depending on the severity of the matter.

“Cases are first investigated by academic departments who may apply penalties including reduced marks or awarding a mark of zero for an assessment if work is found not to be a student’s own.

“The second stage, reserved for the most serious of cases and potentially resulting in expulsion from the University, is undertaken by the Senate Discipline Committee (SDC).”

The teaching and learning handbook does not make clear what exactly the University defines as “inappropriate use”; Durham advises students to contact their departments if they want more information.

Durham has not adopted Turnitin’s AI detection tool, which was launched at the start of last year. The company claims that its new tool can determine whether a sentence was written by a human or by AI with 98% accuracy, and says the software can estimate the likelihood of AI input in individual sentences even if the author has edited the original AI-produced writing. However, none of the 17 universities to which Palatinate sent Freedom of Information requests had adopted Turnitin’s new tool.

AI detection varies across the sector, according to FOIs submitted by Palatinate. Durham, LSE, Queen’s University Belfast, Sheffield, and Edinburgh all investigated zero cases of ChatGPT-related misconduct centrally in the past year. Meanwhile, York said that it had investigated 45 cases of AI-related academic misconduct and applied penalties in 15. Glasgow said that it had sanctioned 12 students, while St Andrews said that staff had detected and reported 9 cases of ChatGPT use, 6 of which received a penalty mark of zero.

Exeter, Southampton, and Nottingham do not yet have a specific category to record AI-related misconduct; cases are categorised under “false authorship” or “plagiarism”.

Some universities, such as Edinburgh, have banned the use of ChatGPT. However, in an FOI response the institution said that although it has suspected cases of students using AI to complete assignments, it has not had enough evidence to proceed with an investigation despite the ban. As of the end of the last academic year, no students there had been disciplined for using the AI tool.

One thought on “How are Durham students using ChatGPT?”

  • Interesting survey responses. Can you make the full data available to those in Departments, DCAD, and elsewhere trying to create policy and practice to assist students in understanding AI use? It’s reassuring to see students recognising the often limited utility of current generative AI for assessment, but that, nevertheless, they also see where and how it can be helpful.

    You miss a crucial aspect of the reasons for not using Turnitin’s detection software, which is not the actual uses it misses, but the false positives it produces. There isn’t, so far as I know, independent verification of Turnitin’s claims about its accuracy, but there are significant reports about false positives as a problem with this and other AI detection software. Those reports suggest up to 4% of positive findings are false positives. If, as might be typical for Arts and Humanities and many Social Science students, you submit 25 pieces of assessed coursework, that rate would mean, on average, every student would face a false allegation of academic misconduct during their degree. Disproving that false positive could be very hard. With consequences ranging from reduced marks, failed assessments with capped resits, through to outright failure of modules or even expulsion, Durham University cannot use these systems.
