The Daily Texan

Official newspaper of The University of Texas at Austin

October 4, 2022

Here’s how UT researchers are improving 360 technology

UT computer vision researchers are working to improve 360 technology. Photo by Steph Sonik.

Virtual reality headsets and 360 videos use 360 technology to let people view a space as if they were physically there. Because a 360 video records content in every direction, it is easy to miss what’s most important. Yu-Chuan Su, a computer science graduate student, is working to improve navigation in 360 videos.

“The YouTube interface allows you to drag your phone screen to look around in the video,” Su said. “Our experience is that this interface is kind of awkward, and it can be difficult to find the right thing to look at.”

Su’s research focuses on using algorithms to train computer systems to identify the most interesting direction to watch in a 360 video.

“Instead of letting users start with a random orientation in the video, we are trying to provide a viewpoint in the video that looks like something other people will be interested in,” Su said.

This algorithm can be used to help uploaders choose the starting viewpoint for their 360 videos, Su said.

“On Facebook, users can provide manual annotations about where viewers should watch during the 360 video, and our algorithm can provide a starting point for this,” Su said. “If the user doesn’t have time to provide the annotation, our algorithm can provide it automatically.”
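
The article doesn’t spell out the internals of Su’s algorithm, but the general idea of scoring candidate starting viewpoints can be sketched in a few lines of Python. The sketch below is an assumption-laden illustration, not Su’s method: interest_score is a hypothetical stand-in for a learned model, replaced here by a trivial brightness heuristic.

```python
import numpy as np

def interest_score(frame: np.ndarray, yaw: float) -> float:
    """Hypothetical stand-in for a learned scorer of how interesting
    the view centered at `yaw` (radians) is; here, just the mean
    brightness of a 90-degree horizontal crop of the panorama."""
    w = frame.shape[1]
    center = int((yaw % (2 * np.pi)) / (2 * np.pi) * w)
    half = w // 8  # a 90-degree crop is a quarter of the panorama width
    cols = [(center + dx) % w for dx in range(-half, half)]
    return float(frame[:, cols].mean())

def pick_starting_yaw(frame: np.ndarray, n_candidates: int = 36) -> float:
    """Score evenly spaced yaw angles and return the highest-scoring
    one as the suggested starting viewpoint for the 360 video."""
    candidates = np.linspace(0, 2 * np.pi, n_candidates, endpoint=False)
    return float(max(candidates, key=lambda yaw: interest_score(frame, yaw)))

# Usage on a stand-in (random) first frame of an equirectangular 360 video:
frame = np.random.rand(256, 512, 3)  # H x W x 3
print(f"suggested starting yaw: {pick_starting_yaw(frame):.2f} rad")
```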

Bo Xiong, a computer science graduate student, is trying to find better ways to visualize 360 images and videos with cubemapping, a common technique that projects a 360 image onto the six faces of a cube to create a flat 2D version of the image. However, distortions can occur during that process.

“If you project one part of an object onto one face of the cube and the other part onto another face, when you unfold the cubemap, you will see that one object is projected in different parts, and that will introduce some distortion,” Xiong said.

Xiong’s research seeks to automatically orient the cube so that the important content of a 360 video always lands on a single face and isn’t subject to distortion.

“Cubemap is a common way to visualize 360 videos and images, and our method provides an automatic way to visualize so that distortion is not on the important object but on the unimportant objects, like backgrounds,” Xiong said.
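
To make the cubemap idea concrete, the sketch below samples the “front” face of a cube from an equirectangular panorama; the yaw parameter illustrates how re-orienting the cube, as Xiong’s method does automatically, changes which content lands at the center of a face instead of across a seam. The function and test image are assumptions for illustration, not code from Xiong’s project.

```python
import numpy as np

def equirect_to_cube_face(pano: np.ndarray, size: int, yaw: float = 0.0) -> np.ndarray:
    """Sample the front face of a cubemap from an equirectangular
    panorama. Rotating `yaw` (radians) picks which direction the front
    face looks, moving the cube's seams off important content."""
    h, w, _ = pano.shape
    # Pixel grid on the front face of a unit cube (the z = 1 plane).
    u = np.linspace(-1, 1, size)
    x, y = np.meshgrid(u, -u)            # x points right, y points up
    z = np.ones_like(x)
    # Direction of each face pixel, in spherical coordinates.
    lon = np.arctan2(x, z) + yaw         # longitude, shifted by the yaw
    lat = np.arctan2(y, np.hypot(x, z))  # latitude
    # Spherical coordinates back to equirectangular pixel coordinates.
    px = (((lon / (2 * np.pi) + 0.5) % 1.0) * w).astype(int) % w
    py = ((0.5 - lat / np.pi) * h).astype(int).clip(0, h - 1)
    return pano[py, px]

# Usage: center a chosen viewing direction on the front face.
pano = np.random.rand(512, 1024, 3)      # stand-in equirectangular image
face = equirect_to_cube_face(pano, size=256, yaw=0.8)
print(face.shape)                        # (256, 256, 3)
```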

Santhosh Ramakrishnan, a computer science graduate student, is also using 360 technology to train robotic cameras to navigate and gather information about a 3D space.

“You can just think of this as a robot moving around in the real world, except it’s staying in one spot and able to look around,” Ramakrishnan said. “The question is, how should the robot actually look around in this 360 image so that it can gather useful information about the 360 capture?”

Ramakrishnan said using 360 technology is a useful way to train robots to identify the main parts of a 360 image.

“360 research is very interesting because it helps simulate how a robot can move around the real world without actually having a robot moving around in the real world,” Ramakrishnan said. “Given that you have data about the 360 scene around you, you can figure out how an agent would move around in this particular world of its own.”
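
The article doesn’t describe Ramakrishnan’s method in detail, but one minimal way to sketch the “look around from one spot” problem is a greedy glimpse policy: at each step, the agent points its limited field of view at the slice of the panorama it has observed least. Everything below, including the policy itself, is an illustrative assumption.

```python
import numpy as np

def choose_next_glimpse(coverage: np.ndarray, fov_cols: int) -> int:
    """Greedy look-around policy: return the yaw column whose field of
    view covers the least-observed slice of the panorama. `coverage`
    counts how often each panorama column has been seen so far."""
    w = coverage.size
    window_totals = np.array([
        coverage.take(range(c, c + fov_cols), mode="wrap").sum()
        for c in range(w)
    ])
    return int(window_totals.argmin())

# Simulate an agent taking five glimpses of a 360-degree scene.
w, fov = 360, 90              # one column per degree, 90-degree camera
coverage = np.zeros(w)
for _ in range(5):
    c = choose_next_glimpse(coverage, fov)
    coverage[np.arange(c, c + fov) % w] += 1  # mark the slice as seen
    print(f"glimpse at yaw {c} deg")
```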
