Official newspaper of The University of Texas at Austin

The Daily Texan


October 4, 2022

Texas Career Engagement panel examines ethical considerations of AI in the workplace

Makenzie Long
Facilitator Professor Sherri Greenberg and panelists (from left to right) Will Griffin, Kenneth Fleischmann, and Steve Kramer spoke on artificial intelligence technology advances on Thursday in the UT FAC.

A Texas Career Engagement panel, cohosted with Good Systems, discussed the implications and integration of artificial intelligence in the workplace on Feb. 8.

Sherri Greenberg, assistant dean at the Lyndon B. Johnson (LBJ) School of Public Affairs, moderated the discussion called ‘AI and Ethics’ at the Peter T. Flawn Academic Center. The panel was a part of the Essential Skills Immersion Series and featured professionals from diverse backgrounds sharing their experiences with AI. 

At the beginning of the panel, Greenberg said artificial intelligence does not have a single standard definition. However, she said Stanford's One Hundred Year Study on Artificial Intelligence defines it as "a set of computational technologies that are inspired by, but typically operate quite differently from the ways people use their nervous systems and body to sense, learn, reason and take action."

The panelists included Will Griffin, the former chief of ethics at Hypergiant; Steve Kramer, chief scientist at KUNGFU.AI; and Kenneth Fleischmann, professor at the School of Information. They reflected on their experiences in their respective fields and on how technology has shaped their careers, and they advised students on how to use AI to advance their own.

When asked why ethics are important, Fleischmann said they foster trust between a company and its customers. He said people need to be able to rely on companies and technologies without fear of algorithmic biases that might be embedded in AI systems.

“The potential for that to go wrong and for the public, then to lose trust in not only that AI system but everything that AI system has touched (including) all of the people (and) organizations that were using that system,” Fleischmann said during the panel. “It’s quite dangerous.”

Kramer said he is concerned with AI safety, or the practice of preventing harmful real-life consequences of AI, and highlighted the impact of AI systems.

Some examples of real-life consequences include an incident in Arizona when scammers used AI to replicate a 15-year-old’s voice to demand ransom. The state introduced legislation earlier this month that would consider AI used in this way a tool for extortion, blackmail or harassment. 

Griffin said his advice for young people is to become well-versed in AI so they can make better ethical judgments about the technology. However, Fleischmann said that while using AI for inspiration and ideas is acceptable, a person still needs to be the one making the decisions.

All three panelists said AI’s use in the workplace is inevitable, so it’s important to plan ahead and understand its limitations. 

"We know what AI is and it will continue to be in the workplace," Greenberg said during the panel. "It is important that we approach the workplace and AI in the workplace in an ethical manner."

Editor’s Note: This story was corrected to fix the spelling of a source’s name and clarify the panel’s cohost.
