Industry professionals discussed the ways artificial intelligence is falsely portrayed in the media and the ethics of becoming an AI-dependent society in a Feb. 21 roundtable.
Good Systems, UT’s Grand Challenge Initiative dedicated to creating ethical AI technologies, hosted the event and invited Andrew Augustin, an illustrator, game designer and assistant professor of practice; Rakeda L. Ervin, film producer and director of community education at the Austin Film Society; Geoff Marslett, animator, director, writer, producer and actor; and Suzanne Scott, associate chair of UT’s Radio-Television-Film department and a fan studies scholar. The discussion was moderated by Sam Baker, a founding member of the executive team for Good Systems.
Augustin, who created and conceptualized the characters in The Sims 3 Pets, said the prevalence of AI may cause job cuts. Augustin, a studio owner himself, said studios, including his own, feel they no longer need to hire human writers.
“I don’t use them anymore because I don’t have to,” Augustin said. “I think (the use of AI is) going to multiply.”
Ervin said she is more optimistic about the integration of technology into jobs.
“You don’t have to wait for that job to be created but to study and see what the needs are and become an expert and create a lane for yourself, that’s totally new,” Ervin said.
However, the panelists agreed that educating people about the niche roles related to AI is crucial for maintaining ethical usage.
“Education is important to help demystify new technology and AI and also to help make it accessible for a larger community of people and also for the workforce,” Ervin said.
Scott discussed the implications of AI for authenticity, specifically in fanfiction, because AI may fail to pick up on tropes created by the humans within a fanbase.
“There is a human factor in terms of literacy,” Scott said. “If (AI is trained) exclusively on a fanfiction archive, it might understand the sort of writing tropes and common themes and the ways that fans interpret particular narratives. If it’s just pulling from official media, it might be missing an entire level of fan-interpreted practices that are actually really fundamental to creating pleasure for people reading that piece.”
This opened up a conversation about copyright laws, as well, Scott said.
“Exploratory conversations have all been about ‘to what degree does human involvement need to be present in order for something to be copyrighted?’” Scott said. “And then the secondary layer of those questions is ‘should copyrighted materials be allowed to be used to generate new things without those operators’ permissions?’”
Media portrayals of AI may be frightening, but not in ways that reflect its realistic dangers, such as algorithmic bias, Scott said.
“We are often aligning and not representing the elements of AI that might be equally pernicious as killer robots, or maybe more harmful to society than killer robots, but are not particularly spectacular to watch,” Scott said. “They don’t make for good TV, right?”
In the panelists’ closing statements, Marslett highlighted his uncertainty about the future of AI.
“I think there’s value to things that are rare,” Marslett said. “AI is coming, and people are going to grow up using it. But what it’s going to do is make something else rarer and more special. I don’t know what that is, but it creates a new diamond, and I do think that’s human nature.”