UT-Austin School of Information professor receives $1 million grant for unbiased artificial intelligence research

Brooke Ontiveros

An assistant professor at the UT School of Information received more than $1 million in grant funding last month to research how to create more accurate and representative artificial intelligence applications, according to the School of Information’s press release from Feb. 5.

The National Science Foundation, in partnership with Amazon, awarded $1,037,000 to Min Kyung Lee to fund her project, Advancing Fairness in AI with Human-Algorithm Collaborations. Lee will work with researchers from Carnegie Mellon University and the University of Minnesota-Twin Cities to complete research by 2022, according to the School of Information’s press release.

“AI is growing, but the problem is it’s growing too fast without careful evaluation and examination of its consequences,” Lee said. “It’s the researcher’s role to make sure, in the early stages, (that) AI will be (fair).”

Lee said many companies use AI but do not always examine whether the AI is fair, and she hopes to create algorithms that address this. To accomplish this, the researchers will gather data on people’s perceptions of fairness, Lee said.

“We create pairwise comparisons of two decisions and ask people to choose a fair option among the two,” Lee said. “Then, we use this to train a fair algorithm, and we’ll adapt and extend this method.”
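The method Lee describes, training a model on people's choices between paired options, resembles preference learning from pairwise comparisons. A minimal sketch of that general idea is below; the feature names, data, and Bradley-Terry-style linear model here are illustrative assumptions, not details of Lee's actual project.

```python
import numpy as np

def train_pairwise(features_a, features_b, choices, lr=0.1, epochs=500):
    """Learn weights w so that sigmoid(w @ (a - b)) predicts the
    probability that option A is judged fairer than option B.

    choices[i] = 1 if participants picked option A in pair i, else 0.
    """
    diffs = features_a - features_b           # compare each pair by feature difference
    w = np.zeros(diffs.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-diffs @ w))  # predicted P(A judged fairer)
        # gradient ascent on the log-likelihood of the observed choices
        w += lr * diffs.T @ (choices - p) / len(choices)
    return w

# Toy data: each option described by two hypothetical features,
# e.g. (level of need, amount already received).
rng = np.random.default_rng(0)
A = rng.random((200, 2))
B = rng.random((200, 2))
true_w = np.array([3.0, -1.0])  # assumed ground-truth preference: need matters most
p_choose_a = 1.0 / (1.0 + np.exp(-(A - B) @ true_w))
y = (rng.random(200) < p_choose_a).astype(float)

w = train_pairwise(A, B, y)  # recovered weights reflect what people valued
```

Once trained, such a model can score new allocation decisions by how fair people would likely judge them, which is the role the human-feedback data plays in this kind of pipeline.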

Lee said one question researchers asked participants was which of two food kitchens had a greater need for resources.

Lee said biases could creep into AI during the training phase when potentially biased data is supplied to the AI. Lee said most biases in AI occur because the datasets or populations given to AI do not represent everyone and everything fairly. She said part of her research is to create fair human feedback to the AI that represents a diverse set of human values.
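The point about unrepresentative training data can be made concrete with a small illustration (my own example, not from Lee's research): when one group dominates the training set, a model fit to minimize overall error can serve the underrepresented group noticeably worse.

```python
import numpy as np

def best_threshold(x, y):
    """Pick the threshold t minimizing overall error for the rule
    'predict 1 if x >= t' -- a stand-in for training a model."""
    candidates = np.unique(x)
    errs = [np.mean((x >= t).astype(int) != y) for t in candidates]
    return candidates[int(np.argmin(errs))]

# Two groups whose true decision rules differ:
# group 0: label is 1 when x >= 0;  group 1: label is 1 when x >= 1.
rng = np.random.default_rng(1)
x0 = rng.uniform(-2, 2, 950); y0 = (x0 >= 0).astype(int)  # 95% of the data
x1 = rng.uniform(-2, 2, 50);  y1 = (x1 >= 1).astype(int)  # underrepresented

# Training on the combined data pulls the threshold toward group 0's rule...
t = best_threshold(np.concatenate([x0, x1]), np.concatenate([y0, y1]))

acc0 = np.mean((x0 >= t).astype(int) == y0)
acc1 = np.mean((x1 >= t).astype(int) == y1)
# ...so accuracy on the underrepresented group lags behind.
```

The gap closes only if the training data, or the feedback used to correct the model, represents both groups, which is the motivation for collecting diverse human fairness judgments.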

Bias in facial recognition AI can appear when the technology works better for some populations than others, Lee said. Bias can also be found in the recommendations of social media platforms such as YouTube and Twitter, whose algorithms determine search and recommendation results, Lee said.

“Biases in tailored information delivery (could cause) different people to get more information about opportunities,” Lee said. “For example, job ads or housing opportunities and higher-paying jobs could only be shown to the white male population.”

Lee said social media has a significant problem with AI bias. Because social media platforms are tailored to earn profit, their AI will promote content that keeps consumers on the platform longer, including fake news, Lee said.

Mathematics freshman Jasper King said he hopes that Lee’s research will lead to unbiased AI.

“If AI is taking over, which it kind of is, making it fair could lead to objectively better decisions made today without AI,” King said.

Computer science freshman Janssen Bozon said it is crucial to work on creating an unbiased AI.

“The fact (is) that you use AI in a lot of places already, and if the AI is biased you could be reducing access to (underrepresented) people,” Bozon said.

Editor's note: This story was corrected to fix a quote.