UT community explores ethical concerns of AI integration in classrooms
February 24, 2023
Academic and ethical controversy surrounding the artificial intelligence program ChatGPT has spurred a conversation about how to best approach a tool that offers the potential to expand learning but also raises the risk of more fraudulent classwork.
Research company OpenAI released ChatGPT in November 2022, and within a few months, school districts across the nation banned the program. Universities such as UT, however, stand at a crossroads between ChatGPT's potential for innovation and concerns over its possible exploitation, prompting them to rethink the concept of plagiarism.
Art Markman, vice provost for academic affairs and psychology professor, said while ChatGPT may entice students to act in academically dishonest ways, classrooms can greatly benefit from these types of AI programs if instructors and students receive proper training and education.
“We’re grappling with the fact that we’ve got our first AI systems that can mimic human writing,” Markman said. “That’s had some interesting implications for how we think about assignments in high schools and colleges.”
Although academic dishonesty rates for this semester are currently unavailable, Markman said there has been no evidence to suggest an association between ChatGPT and increased reports of fraudulent work so far.
In addition to releasing an informative resource page about ChatGPT, Markman said the University hopes to provide training for professors that outlines productive ways to navigate the discussion and integration of AI in classrooms.
“We also want to provide ways of thinking about how ChatGPT might change the way that we teach some aspects of writing,” Markman said. “It’s a tool, and tools like this are going to be increasingly common.”
Computer science junior Aryan Parikh said while he sees the potential for academic dishonesty, the benefits of AI in classrooms far outweigh the drawbacks.
“One of the few things that GPT can do is you can ask it to rephrase information, you can ask it to rewrite information in a particular way,” Parikh said. “I think that the risk of (using ChatGPT for plagiarism) obviously, does exist and it has probably been used to that effect.”
Parikh said there will likely be a move to redefine terms like “plagiarism” in the coming years as programs like ChatGPT blur their meaning.
Government professor Benjamin Gregg said using AI ethically will require a precise definition of plagiarism, discussed openly in each class across campus. However, he said restricting the use of AI in his classroom would be unnecessary and counterproductive.
“I cannot look for 100% compliance — I am a scholar and a teacher, I am not a policeman,” Gregg said. “It would be unjust, I think, to penalize the majority of students who use computers and AI ethically because a few students perpetrate fraud.”
Markman said he looks forward to seeing where increased AI use takes the University.
“It absolutely poses some short term challenges for education, and is going to provide some short term temptations for a certain number of students to do something that they know is probably wrong, but in the long run I think it’s actually going to be a huge benefit to education,” Markman said.