UT algorithm bias research published amid anti-DEI efforts in Texas Legislature
February 23, 2023
Algorithms adjusted to mitigate gender and race bias in the job recruiting process risk exhibiting a new kind of bias called “social norm bias,” a study published Jan. 23 found.
The research, done in collaboration with Microsoft Research and the California Institute of Technology, focuses on calibrating algorithms that sort potential candidates to mitigate bias and discrimination in recruiting. The study comes amid moves by members of the Texas government, including Lt. Gov. Dan Patrick, to ban diversity, equity and inclusion hiring efforts at public universities.
The first part of the research, published in 2019, studied gender bias in algorithms tasked with predicting an individual’s occupation for targeted job recruitment advertisements. Researcher Maria De-Arteaga said this was done by training an algorithm on around 500,000 online biographies to predict each subject’s occupation.
“In our data set, 15% of surgeons were women, (and) what happened was that among the surgeons that were correctly labeled surgeons, the percentage of women was only 11%,” said De-Arteaga, an assistant professor of information, risk and operations management. “So among the people you were showing the (job) advertisement to, you were exacerbating the gender imbalance in the occupation.”
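The setup De-Arteaga describes amounts to a text classifier evaluated for gender gaps among its correct predictions. As a rough sketch of that measurement, the Python below trains a toy classifier on a handful of made-up bios and compares the share of women among actual surgeons with their share among correctly predicted ones. The dataset, model choice and feature pipeline are illustrative assumptions, not the researchers’ actual system.

```python
# Illustrative sketch (not the study's pipeline): train a simple
# occupation classifier on short bios, then compare the share of women
# among all surgeons in the data against their share among surgeons
# the model labels correctly. The tiny synthetic dataset stands in
# for the ~500,000 real online biographies used in the research.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

bios = [
    ("She is a surgeon specializing in cardiac procedures.", "surgeon", "F"),
    ("He is a surgeon who performs orthopedic operations.", "surgeon", "M"),
    ("He is a surgeon focused on trauma care.", "surgeon", "M"),
    ("She is a nurse working in the intensive care unit.", "nurse", "F"),
    ("He is a teacher of high school mathematics.", "teacher", "M"),
    ("She is a teacher leading an elementary classroom.", "teacher", "F"),
]
texts, occupations, genders = zip(*bios)

# Vectorize the bios and fit a basic classifier.
vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(texts)
model = LogisticRegression().fit(X, occupations)
predictions = model.predict(X)

# Share of women among all actual surgeons in the data...
surgeons = [g for occ, g in zip(occupations, genders) if occ == "surgeon"]
base_rate = surgeons.count("F") / len(surgeons)

# ...versus among surgeons the model labels correctly (true positives).
true_pos = [g for occ, pred, g in zip(occupations, predictions, genders)
            if occ == "surgeon" and pred == "surgeon"]
tp_rate = true_pos.count("F") / len(true_pos) if true_pos else 0.0

print(f"Women among surgeons: {base_rate:.0%}")
print(f"Women among correctly predicted surgeons: {tp_rate:.0%}")
```

On the real data, the second number falling below the first (11% versus 15%) is the gap De-Arteaga describes: correct predictions, and therefore the resulting ads, skew even further toward men than the underlying occupation does.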
The research published last month focused on the residual harms that remain once algorithms are adjusted to mitigate bias. The study defines social norm bias as occurring when an algorithm shows less bias at the group level, such as toward women as a whole, but its predictions are still influenced by stereotypes of that group, De-Arteaga said.
“The algorithm associates the word ‘empowerment’ with you being more likely to be a woman,” De-Arteaga said. “So, if you’re a woman and use that word, you’re less likely to be classified as a surgeon than if you don’t, even after the bias mitigation approach.”
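One way to see that residual association is to probe a trained classifier’s output on two inputs that differ only by the single word. The sketch below does this with a hypothetical toy model; the word “empowerment” comes from the quote, but the training texts and model are illustrative assumptions.

```python
# Illustrative sketch of probing word-level sensitivity: compare a
# classifier's predicted probability of "surgeon" for two bios that
# differ only by one gender-associated word. The model and data are
# hypothetical stand-ins, not the study's system.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = [
    "performs surgery and leads the operating room team",
    "teaches surgery residents and performs operations",
    "teaches an elementary classroom and mentors students",
    "writes about empowerment and community education",
]
train_labels = ["surgeon", "surgeon", "teacher", "teacher"]

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(train_texts, train_labels)

bio_with = "performs surgery and mentors residents on empowerment"
bio_without = "performs surgery and mentors residents"

idx = list(clf.classes_).index("surgeon")
p_with = clf.predict_proba([bio_with])[0][idx]
p_without = clf.predict_proba([bio_without])[0][idx]

# If p_with < p_without, the single word lowers the surgeon score --
# the kind of residual association the study calls social norm bias.
print(f"P(surgeon) with 'empowerment':    {p_with:.2f}")
print(f"P(surgeon) without 'empowerment': {p_without:.2f}")
```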
The study concluded that social norm bias should be an explicitly measured metric, since bias mitigation approaches cannot be guaranteed to succeed, De-Arteaga said. The research also indicated that investing computational resources in retraining the algorithm to be more sensitive to bias overall is worthwhile.
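The paper’s actual metric is more involved, but a minimal proxy for explicitly measuring social norm bias might compare error rates within one group depending on whether a bio uses a stereotyped word. The function below is one hypothetical formulation under that assumption, not the study’s definition.

```python
# Hedged sketch of one way to report social norm bias explicitly:
# within a single group (here, women), compare the classifier's
# error rate for bios that use a stereotyped word against bios that
# do not. An illustrative proxy, not the paper's exact metric.
def social_norm_gap(bios, labels, preds, genders, word="empowerment",
                    group="F", target="surgeon"):
    """Error-rate gap among `group` members who truly hold `target`,
    split by whether their bio contains `word`."""
    with_word, without_word = [], []
    for bio, label, pred, gender in zip(bios, labels, preds, genders):
        if gender != group or label != target:
            continue
        errors = with_word if word in bio else without_word
        errors.append(pred != label)
    rate = lambda xs: sum(xs) / len(xs) if xs else 0.0
    return rate(with_word) - rate(without_word)

# Example: the woman surgeon whose bio mentions the word is
# misclassified while the other is not, so the gap is positive.
bios = ["leads surgery and empowerment workshops",
        "leads surgery and research trials"]
gap = social_norm_gap(bios, ["surgeon", "surgeon"],
                      ["teacher", "surgeon"], ["F", "F"])
print(f"Social norm bias proxy (error-rate gap): {gap:+.2f}")
```

A positive gap would indicate that, even after group-level mitigation, members of the group who write in a stereotyped way are penalized more often, which is the residual harm the study argues should be tracked directly.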
Focusing on occupation prediction in the recruitment stage showed that when these biases go unadjusted, people are deprived of job opportunities because they are never made aware of the positions in the first place.
This study was published against the backdrop of a Texas Legislature poised to end diversity, equity and inclusion efforts, which is one of Lt. Gov. Patrick’s top 30 legislative priorities this session. A bill filed by Rep. Carl Tepper (R-Lubbock) would prohibit the funding and support of any “initiative or formulation of diversity, equity, and inclusion beyond what is necessary to uphold the equal protections of the laws” at public universities.
The work of De-Arteaga and her colleagues to reduce bias in recruiting and hiring may be targeted by these policies, but she said there is still work to be done in deciding how best to mitigate such bias and discrimination.
“I do think that there is a possibility that we can define algorithms in a way that they help us make better and fairer decisions,” De-Arteaga said. “I don’t think algorithms are the only path to do that.”