The Daily Texan
Official newspaper of The University of Texas at Austin

October 4, 2022

UT grad student awarded 2017 Google Ph.D. Fellowship

Courtesy of Yu-Chuan Su

Yu-Chuan Su, UT computer science graduate student, is being paid by Google to get his Ph.D.  

Su is one of 33 people from North America, Europe and the Middle East to receive a 2017 Google Ph.D. Fellowship.  He received the fellowship in the field of machine perception, which focuses on teaching computers how to interact with their surroundings, similar to how humans do.

“I really didn’t expect (the fellowship), actually,” Su said. “I had assumed most people who applied had already interned at Google, so I was quite surprised about receiving the fellowship since (I hadn’t).”


According to a statement from Michael Rennaker, program manager at Google, the Ph.D. Fellowship program was created in 2009 to recognize and support outstanding graduate students doing exceptional research in computer science.

Kristen Grauman, computer science associate professor and Su’s Ph.D. advisor, said Su was quite deserving of the recognition. 

“I have worked closely with Yu-Chuan over the last two and a half years, and I am continually impressed by his drive and technical skill,” she said.

Su’s current research focuses on computer vision, the process of enabling computers to gain a high-level understanding of digital images and video.

“Computer vision has mainly been about giving a computer system an image and wanting the system to return with analysis — for example, knowing how many people are in the photo,” Su said. “You can imagine, for example, if we equip a robot with a camera, the robot can not only capture the image, but also determine how to capture the image, which will give us better capabilities for analysis.”
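As a rough illustration of the kind of task Su describes, counting the people in a photo, the sketch below uses OpenCV's stock pedestrian detector. It is not part of Su's research and the image path is hypothetical; it simply shows the "image in, analysis out" pattern he is referring to.

```python
import cv2


def count_people(image_path):
    """Rough people count for a single photo using OpenCV's
    built-in HOG + linear-SVM pedestrian detector."""
    image = cv2.imread(image_path)
    if image is None:
        raise FileNotFoundError(image_path)

    hog = cv2.HOGDescriptor()
    hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

    # Each returned rectangle is one detected person; counts are approximate.
    rects, _weights = hog.detectMultiScale(image, winStride=(8, 8))
    return len(rects)


# Hypothetical example image.
print(count_people("crowd_photo.jpg"))
```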

In the past year, Su has been working to improve 360-degree videos, which are videos that record in every direction and allow viewers to control the direction of viewing. Su said many problems exist within the current interface of 360-degree videos. 

“When you’re focusing on just one direction, there might be something interesting happening in the opposite direction,” Su said. “If the audio doesn’t provide you with enough information, you sometimes fail to really see what’s happening around you and what the video creator really wants to show you.”

Instead, Su proposes making the video control process automatic, so the viewer doesn’t miss anything important.

“You’d give me a 360-degree video as input, which would then be analyzed,” Su said. “Then, (the results) would tell you which direction you should see at each moment, making the entire process automatic.”
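Conceptually, this kind of automatic control can be framed as picking, for each frame, the viewing direction a model scores as most interesting. The sketch below is purely illustrative and is not Su's actual method; the `score_direction` function is a hypothetical stand-in for whatever learned model would rate each candidate view, and real systems would also smooth the chosen path so the virtual camera does not jump abruptly.

```python
import math


def score_direction(frame, yaw, pitch):
    """Hypothetical stand-in for a learned model that rates how
    interesting the view at (yaw, pitch) is for this frame."""
    # Toy placeholder: favor views near the frame's annotated "action" direction.
    action_yaw, action_pitch = frame["action"]
    return -math.hypot(yaw - action_yaw, pitch - action_pitch)


def choose_viewing_directions(frames, candidate_yaws, candidate_pitches):
    """For each frame of a 360-degree video, return the candidate
    (yaw, pitch) direction with the highest score."""
    path = []
    for frame in frames:
        best = max(
            ((yaw, pitch) for yaw in candidate_yaws for pitch in candidate_pitches),
            key=lambda d: score_direction(frame, *d),
        )
        path.append(best)
    return path


# Toy example: three frames whose "action" drifts from 0 to 90 degrees of yaw.
frames = [{"action": (0, 0)}, {"action": (45, 0)}, {"action": (90, 0)}]
print(choose_viewing_directions(frames, candidate_yaws=range(0, 360, 45), candidate_pitches=[0]))
```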

Last week, Su presented his research in Lyon, France, at a conference focused on intelligent cinematography and editing, which involves using computer vision to automatically edit, re-light and re-frame live-action videos.

“It’s very similar to what we’re trying to do with 360-degree videos, but the goal was to make the movie editing process, in a sense, automatic,” Su said. 

Su is currently in his third year of graduate school and has already published five papers.

“He has definite momentum as he approaches the midpoint of his Ph.D.,” Grauman said. “His work is aimed at challenging problems in video analysis that, in my view, are central to the future agenda of computer vision.”
