The real-world dilemmas that some students and staff are already encountering with generative artificial intelligence (AI) came into sharp focus during Unisa’s 2024 Research and Innovation Week.
During the session on "The responsible use of AI", it became clear that the use of generative AI in higher education is not an abstract challenge to be faced in the future, but one unfolding daily in the lives of Unisa staff and students.
After listening to several ethics experts talk about the advantages and pitfalls of AI, a postgraduate student in the audience said during question time that she was feeling "scared" and uncertain.
She listed some pressing concerns, such as the perceived lack of opportunity for students to defend their work when AI detectors allocate a score indicating a high level of AI use. "Will I have the opportunity to justify my work?" she asked.
This was an important question, given the fallibility of AI detectors and the rapid development of new generative AI tools that make it increasingly difficult to distinguish AI-generated text from human-written text.
"New tools are transforming AI text to human-like text. The tools are evolving at the speed of wind," said Sidney Engelbrecht, who joined the Unisa session virtually from Saudi Arabia, where he is a senior research compliance specialist at King Abdullah University of Science and Technology.
The worried postgraduate student also asked about the ethics of using certain AI tools for editing and paraphrasing. "Is paraphrasing with AI unethical? I would also like to know if it is ethical for me to use the Grammarly editing tool for my dissertation to make it nice and readable?" she asked, noting that many students do not have English as their first language.
Engelbrecht said the critical point to remember is to be "mindful" when using AI. "It’s about applying one’s mind to ensure the end product remains your work."
Dr Retha Visagie, Manager of the Research Integrity Office at Unisa, agreed. "The important thing is how we use it," she said.
For example, paraphrasing is an academic writing skill that needs to be learnt and practised. "I would argue that it becomes less scholarly to use AI to paraphrase another author’s work. It takes us away from utilising our own critical thinking skills and our own resources and abilities," she said.
Dr Nicky Tjano, co-chair of Unisa’s AI Task Team, said that Unisa was busy preparing AI policies and guidelines to clarify the responsible use of AI at the university. In the meantime, he and the other speakers emphasised the importance of being transparent, accountable and honest when using AI. For example, it is of the utmost importance to disclose upfront whether AI tools have been used to produce academic work.
Failure to disclose the use of AI is landing a growing number of academics around the world in hot water. International publishing house Wiley closed down 19 journals and retracted some 11 000 articles over the authors’ lack of transparency on the use of AI, said Prof Moeketsi Letseka, holder of the UNESCO Chair on Open Distance Learning (ODL), who was sitting in the audience.
Another academic in the audience raised concerns over the use of AI tools among Unisa students.
Prof Gerhard Prinsloo of the College of Agriculture and Environmental Sciences expressed frustration over some of his students’ unquestioning use of AI and reluctance to acquire basic academic skills, such as how to write a literature study.
"AI can be a tool to improve a literature study, but if you are a fourth-year student who has never done a literature study before and do not have the basics, what will you be improving? We are talking about our thought leaders and intellectuals, who we are training now and trying to skill by skipping steps."
Prinsloo said he continually encountered pushback from students whose work AI detection tools had flagged as predominantly or entirely AI-generated.
The thinking seems to be that "AI is available, so why do I need to learn how to do something?" he said, adding: "Until we have governance, principles, policies in place to govern our use of AI, especially on the student level, I want to argue, please ban it."
While a ban on AI use is highly unlikely, clarity on when and how AI use is considered acceptable or unacceptable is greatly needed, the speakers agreed. "It will be very important to involve students in this discourse," Visagie said.
"Technology can make our lives easier," she said, "but we ultimately remain responsible and accountable for what we do with it."
* By Clairwyn Rapley, Directorate of Research Support
Publish date: 2024/05/30