‘It’s a human issue’: will NTU case prompt rethink on AI use for students?

Saga involving three students accused of academic misconduct reveals gaps in universities’ dispute resolution processes, observers say

Students at The Hive learning hub at Singapore’s Nanyang Technological University. Three NTU students were accused of academic misconduct over the use of generative AI. Photo: Shutterstock

A case at Singapore’s Nanyang Technological University (NTU) involving three students accused of academic misconduct over the use of generative AI (Gen AI) has prompted observers to question if tertiary institutions need clearer processes to deal with disputes.

The students were told by their teacher in a briefing for a module on health, disease outbreaks and politics at the School of Social Sciences that the use of ChatGPT and AI tools was not allowed in the “development or generation” of their essay proposal and long essay, according to pictures of the slide that were published online.

One student’s appeal was being processed before a review panel that would include AI experts, while the other two scored zero for the assignment, local media reported. All three students had used online tools to organise their citations. The two who were punished said they had used ChatGPT in their research but not to write their essays, according to local media outlet CNA.

Fong Wei Li, a lawyer at Forward Legal who specialises in internet and social media law, told This Week in Asia that the saga showed the gaps in processes among universities in dealing with such disputes between teachers and students.

“Most universities acknowledge that Gen AI is part of our lives, but what universities don’t go further in doing is consistent framework about processes for grievances; if a faculty accuses a student of using Gen AI and the student disputes it,” Fong said.

“There has not been a critical mass for these kinds of disputes, but do we want to wait for something to happen like it did at NTU before there is a process if a student disputes?”

A survey last year found that 86 per cent of students used artificial intelligence in their studies. Photo: Shutterstock

Gen AI is gaining popularity among students worldwide. Last year a survey of 3,839 university students across 16 countries by the Digital Education Council, a global alliance of universities and education innovation representatives, found that 86 per cent of students used artificial intelligence in their studies.

In the recent NTU case, the student whose appeal is pending said last Thursday on Reddit that she had been accused of misusing Gen AI by Assistant Professor Sabrina Luk. The student claimed she had provided Luk with a time-lapse video of her writing process to prove it was her own work.

The student had used Study Crumb, an academic writing platform, to alphabetise her citations and was told she had committed academic fraud. She received a zero for the assignment, which was weighted at 45 per cent of the module, leaving her with a “D” grade.

An NTU spokesman told local media on Thursday that the school met two of the three students who were accused of academic misconduct for consultations this week.

CNA reported that the student whose appeal was rejected had used the citation generator Citation Machine and ChatGPT to organise her citations. As there were mistakes in her bibliography, her work was flagged as potentially AI-generated. She had also used ChatGPT for “minimal” background research and did not use it to write her essay.

A university student and her professor. Experts have urged educators to exercise nuance and improve their knowledge of AI tools. Photo: Shutterstock

The third student was accused of using fake and inaccurate citations after he said he leaned on AI tools to summarise information for his background research and to format his citations. The student said he did not pursue the matter, as he had already found a job and his top priority was to pass. He eventually scored a “D” for the module, according to CNA.

Both students who were penalised raised concerns about the severity of the punishments meted out, given that they had not used generative AI to write their essays.

Fong from Forward Legal argued that punishments should be made clear for various offences, noting that a student who used Gen AI to organise citations but personally wrote the essay should not face the same punishment – scoring a zero – as a student who used Gen AI to write the essay. “Punishments must be appropriate to the breach.”

Local universities told This Week in Asia that generative AI had transformed the higher education landscape and that the universities were incorporating it in students’ learning.

Karin Avnit, deputy director of the SIT Teaching & Learning Academy at the Singapore Institute of Technology, said that while assessment integrity remained a priority, the university’s long-term focus was on enabling deeper learning through AI, with the tools helping students produce initial drafts and ideas so they could spend more time on aspects such as creativity and real-world problem-solving.

“However, students are also reminded that, regardless of the tools they use, they are ultimately accountable for the originality, accuracy, and credibility of their work,” Avnit said.

A National University of Singapore sign is displayed at the campus entrance in Singapore. The university says students who submit AI-generated work without proper acknowledgement will be committing plagiarism. Photo: AFP

A National University of Singapore (NUS) spokesman said students who submitted AI-generated work as their own without proper acknowledgement of the AI tool used would be committing plagiarism and would be dealt with under the prevailing provisions for plagiarism.

NUS also prohibits the use of AI-writing detection tools to assess submitted work for plagiarism as the tools lack reliability and accuracy, and their use can undermine the trust between students and instructors.

Jonathan Sim, who teaches AI and philosophy at NUS, urged educators to exercise nuance and improve their knowledge of AI tools. “The main message is everyone needs to upskill. Educators are no exception.”

He also pointed out that generative AI was becoming increasingly commonplace and available on search engines such as Google.

Fong agreed and said: “The instructors and lecturers have an onus not to vilify AI … Creating a culture of ignorance when students use Gen AI in their daily life is like an ostrich sticking its head in the sand and ignoring what’s going on in the world.”

Sim echoed his university’s emphasis on trust between students and teachers in dealing with generative AI disputes, saying the education system urgently needed to rebuild trust between teachers, students and AI tools.

“Since the day ChatGPT came out, trust has been fractured between teachers and students. And we need that trust for teaching and learning to be effective. This is not an AI issue – it is a human issue.”
