With the rise of generative artificial intelligence, the ongoing battle against disinformation has become increasingly difficult. As UT students, we see this firsthand when scrolling through websites like X, formerly known as Twitter, or YouTube, where misleading claims run rampant.
These inaccuracies get amplified in relation to politics and can reach as far as presidential nominees. During the Sept. 10 presidential debate, for example, former President Donald Trump baselessly claimed that Haitian immigrants were “eating the dogs” and “eating the cats,” even though city officials have debunked this claim.
As UT students, we need to remain proactive and understand both why disinformation rises and how to analyze content critically before sharing it with others.
Josephine Lukito, an assistant professor at UT’s School of Journalism and Media, said that false information spreads more quickly than factual information online, especially when it has been “sensationalized very emotionally.”
According to a 2023 USC study, negative or politically charged information tends to attract more engagement, and because social media platforms algorithmically promote high-engagement posts, this content spreads even further.
The overall social media landscape has become even more complex with the rise of generative AI. False content circulated in human societies long before we went online, but generative AI has made it far easier to create and spread, according to Matthew Lease, a professor at UT’s School of Information.
“AI increases the speed and the scale (of misinformation),” Lease said. “One of the challenges is we are still reliant on having people in the loop to help detect these problems when they arise and respond to them.”
Given the influence of AI and bad actors, it becomes difficult for people to identify what is factual. Fortunately, there are tools and systems in place to combat disinformation, both at the policy and individual levels.
Although AI can spread misinformation, it can also be part of the solution.
“Artificial Intelligence is used for both reduction of misinformation and disinformation as well as the detection and removal of that content,” Lukito said.
Lease is also helping to solve this issue as part of the leadership team of Good Systems, a UT research initiative that works on creating ethical AI to “maximize the potential benefits of new technologies like this while minimizing or mitigating the risks.”
Additionally, Lease recommends that students check a website’s domain to verify that its information is accurate and rely on trusted sources, such as government websites and reputable news outlets, rather than anecdotal journalism.
“I think media literacy, and AI literacy in particular, are so essential for any citizen or any individual who’s trying to make sense of the election,” Lukito said.
Before students share information online, they should ensure it is accurate and verified by fact-checkers. If believed, false information can fundamentally change how people think about important policies.
What does the future look like for disinformation and AI? According to Lukito, it can be best described as a “cat and mouse game,” with AI becoming more believable while AI detection also becomes stronger.
“It’s definitely getting trickier and trickier, and one of the challenges is people really do have to make more of an effort to educate themselves and stay informed,” Lease said.
UT students are on the front line of this challenge. By staying informed, questioning the content we read and share, and advocating for stronger fact-checking and regulations, we can make the information ecosystem safer for everyone.
Gupta is a statistics and data science freshman from Frisco, Texas.