AI is developing at an unprecedented speed, and its ability to generate human-like results makes cheating harder to identify. While the world enthusiastically embraces this technology, professors must change their assignments to maintain academic integrity.
Turnitin, the plagiarism detection tool embedded in Canvas, and other tech companies responded to concerns about cheating by introducing new AI-detection software, but this software is unreliable.
Detection software uses algorithms to identify differences in word probability between human and AI writing. Despite Turnitin’s claims of 98% accuracy, an experiment conducted by The Washington Post revealed that the system falsely identified original work as AI-generated more than 50% of the time.
Instead of risking false positives with faulty AI-detection software, educators should seek out alternative ways to determine originality. By redesigning their assignments, professors can ensure that students are not opting out of crucial learning opportunities.
“Software currently in use at the University to detect plagiarism is being evaluated before recommendations are made to faculty and students about using it for AI detection,” said Kathleen Harrison, assistant director of communications for the Office of the Executive Vice President and Provost, in a statement.
It is uncertain what these recommendations will look like, but previously published guidelines from UT’s Center for Teaching and Learning encourage professors to creatively redesign assignments to assess student learning by requiring students to reference their class materials and demonstrate their knowledge through in-person activities.
Creative student assessments are more practical: as language models become more sophisticated, it will be increasingly difficult for already flawed programs to detect AI writing based on a formula.
Frederick Luis Aldama, an English and radio-television-film professor, intends to radically incorporate AI into class activities and encourages students to embrace its possibilities while accepting its risks to academic integrity.
“Let’s run toward it and embrace it, and see how it can facilitate and make writing better rather than me spending my time trying to catch you. I’m just not interested in that,” Aldama said.
While a vocal advocate for AI’s classroom potential, Aldama recognizes that there may be instances where students attempt to claim AI writing as their own. To discourage this misuse, he said he will allow students to cite chatbots in their work.
Though it is a daunting task, making assignments inaccessible to AI gives professors the opportunity to sharpen their students’ critical thinking skills beyond the scope of AI software.
Whether professors choose to embrace or reject the technology, one thing is clear: AI is here to stay.
Chowdhury is an international relations and global studies senior from Spring, Texas.