UT has declared 2024 its “Year of AI,” an initiative aimed at bringing the University to the forefront of AI innovation by promoting artificial intelligence research and developing future experts in the field. However, amid these hopes of advancement, UT must consider the ethical implications of AI development.
Computer science professor Swarat Chaudhuri explains how bias seeps into artificial intelligence systems.
“Even carefully collected data will often reflect problems or inequities in a society,” Chaudhuri said. “An AI algorithm is something that learns patterns in data and then acts in response to them.”
Matthew Lease, a professor in the School of Information, elaborated on this idea.
“The challenge, given that we collect data from the world and the world is a biased place, is how to evaluate how our models are reproducing that bias that exists in the world,” Lease said.
The data sets that feed AI algorithms are human-made, and therefore often skewed by systemic biases. Lease explains the ways this discrimination becomes built into AI systems.
“The way algorithms work is the more data you have to train the AI, the better they do. And so, just simply by lack of data, a group that is underrepresented in data will tend to have lower performance,” Lease said.
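A minimal sketch in Python, using made-up numbers rather than data from any real system, illustrates Lease’s point: with fewer examples to learn from, a model’s estimates for an underrepresented group are noisier, so performance suffers purely from lack of data.

```python
# Toy illustration (hypothetical numbers): the same underlying quantity
# holds for both groups, but the group with less data gets noisier
# estimates -- lower performance from sample size alone.
import numpy as np

rng = np.random.default_rng(0)
true_value = 0.7  # identical ground truth for both groups

for group, n in [("well-represented", 5000), ("underrepresented", 50)]:
    # Simulate 1,000 training runs and measure how far each run's
    # learned estimate lands from the truth.
    estimates = rng.binomial(n, true_value, size=1000) / n
    mean_error = np.abs(estimates - true_value).mean()
    print(f"{group} (n={n}): mean estimation error = {mean_error:.4f}")
```

Running this shows roughly a tenfold larger average error for the group with a hundredth of the data, even though nothing else distinguishes the two groups.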
Chaudhuri references one such example: a study of predictive policing in Oakland, California, in which police used an algorithm to predict the areas where drug crimes were most likely to occur based on previous arrests. The study found that the algorithm predicted higher rates of drug use in primarily low-income neighborhoods with large minority populations because of previous over-policing in those areas. Chaudhuri explains that this process creates a “feedback loop” in which human biases are embedded in the algorithm and then amplified.
“Applied naively, this algorithm will then send more cops to those areas, and then naturally, when you send police officers to an area, they also have the tendency to see more (crime),” Chaudhuri said.
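A short simulation, again with hypothetical numbers rather than figures from the Oakland study, shows how this feedback loop can run: two districts have the same true crime rate, but the one with a longer arrest record keeps getting flagged, and crime is only recorded where officers are sent.

```python
# Toy model of the feedback loop Chaudhuri describes (made-up numbers).
# Both districts share the SAME true crime rate; District A merely
# starts with more recorded arrests from past over-policing.
true_rate = 0.05                     # identical underlying rate in both districts
recorded = {"A": 120.0, "B": 80.0}   # skewed history from past over-policing
patrols = 100                        # officers dispatched each round

for round_num in range(1, 11):
    # The algorithm flags the district with the most recorded arrests...
    hot = max(recorded, key=recorded.get)
    # ...and crime is only recorded where patrols actually go, so the
    # flagged district's record grows while the other district's
    # identical crime goes unseen.
    recorded[hot] += patrols * true_rate
    share = recorded["A"] / sum(recorded.values())
    print(f"round {round_num}: District A share of recorded crime = {share:.3f}")
```

Even though the two districts behave identically in this model, the recorded data diverges further every round, which is exactly the self-confirming pattern the Oakland study described.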
Samantha Shorey, an assistant professor of communication studies, references another example: AI-based hiring tools, which can disproportionately screen out minority candidates.
“There’s already human bias built into hiring,” Shorey said. “When we seek to automate that process, our first thought is, maybe that’s a way of overcoming bias, when, in actuality, oftentimes what ends up happening is we build those biases into the system.”
To mitigate the biases embedded in these algorithms, the University must prioritize engagement with a wide range of communities and stakeholders to properly address the ethical concerns about AI.
“Having greater representation of the people that design and produce AI technologies can help create technologies that are better able to render the diversity of human experience,” Shorey said.
Good Systems is a University research project that studies the ethical implications of AI. By creating and sponsoring programs like Good Systems, UT can promote inclusivity and ethical AI practices.
“The big idea we have is that if we want to make ethical and responsible AI, you can’t do it in a vacuum. You have to think about a societal challenge you want to help solve,” Lease said.
UT’s “Year of AI” is not only a path toward technological innovation, but also a call to action to implement AI responsibly. UT houses a diverse student body. As the University expands its AI research, it must prioritize addressing the biases of these systems and their impact on all students.
Ava Saunders is a journalism and government freshman from Wheaton, Illinois.