Random numbers may not improve your chances of winning the lottery, but they have the potential to improve cyber security.
David Zuckerman, a UT computer scientist, and Eshan Chattopadhyay, a computer science graduate student, have proved a theorem that lays the foundation for new ways of producing random numbers with less computational effort.
“From a theoretical perspective, our algorithm is a great advance from current ones,” Zuckerman said.
Zuckerman works on the mathematical side of computing, designing algorithms and proving theorems. Much of his research focuses specifically on randomness in algorithms.
Randomness applications work by creating sequences that do not follow an established pattern and thus cannot be predicted. Scientists use randomness to model complexity or create randomized algorithms. Examples include measuring statistical changes in the economy and electoral polling, or analyzing complex systems like climate change.
The study of randomness extraction focuses on obtaining nearly uniform numbers from sources that are only weakly random. Extractors are functions that take such weakly random input and output numbers that are nearly uniformly distributed.
Imagine you are with a group of friends rolling a die. If there are six people present, you have six chances to roll each of the numbers on the die. While the input (which friend rolls the die) is not random, the output (the numbers rolled) is. The more times you roll, the better the quality of randomness you can achieve.
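The dice analogy captures what an extractor does in spirit: take many imperfect samples and distill fewer, better ones. As a minimal illustration (the classic von Neumann trick for a biased coin, not the researchers' algorithm), the Python sketch below turns flips of a lopsided coin into unbiased bits; the more weak input you feed it, the more high-quality output you can distill.

```python
import random

def von_neumann_extract(bits):
    """Classic von Neumann extractor: turn flips of a biased coin
    into unbiased bits by keeping only non-matching pairs."""
    out = []
    for i in range(0, len(bits) - 1, 2):
        a, b = bits[i], bits[i + 1]
        if a != b:
            out.append(a)  # (0,1) and (1,0) are equally likely, so this bit is unbiased
    return out

# Example: a coin that lands heads (1) 80% of the time is a weak source.
biased = [1 if random.random() < 0.8 else 0 for _ in range(1000)]
unbiased = von_neumann_extract(biased)
print(len(unbiased), sum(unbiased) / len(unbiased))  # roughly 160 bits, mean near 0.5
```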
“High quality randomness is important because it helps keep online channels more secure,” Zuckerman said. “When you buy something from Amazon and even if you have never [bought from] Amazon before, you can create a secure channel, even if someone is eavesdropping on all the communication. You need randomness to do this.”
Confidential applications, like credit card transactions, use randomness to protect themselves from hackers, while scientists use randomness to test hypotheses. Zuckerman said that high-quality randomness means the numbers are completely unpredictable, while low-quality randomness includes numbers that exhibit some sort of pattern, meaning they could potentially be predicted.
“A lot of times hackers attack security by finding where people use low-quality randomness when they are supposed to be using high-quality randomness,” Zuckerman said.
Presently, most applications require truly random, uncorrelated numbers, but high-quality randomness is hard to achieve.
Previous versions of randomness extractors were less practical because they required that at least one of the source sequences be close to truly random.
Chattopadhyay and Zuckerman’s theorem allows the use of two sequences that are only weakly random. In their theorem, they mix a small amount of high-quality randomness with a large amount of low-quality randomness to produce a large amount of high-quality numbers.
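For a concrete picture of how two imperfect sources can be combined, the sketch below uses the textbook inner-product construction, which only works well when both sources are fairly strong; it is not Chattopadhyay and Zuckerman's construction, whose significance is precisely that it handles far weaker sources.

```python
import random

def inner_product_extract(x_bits, y_bits):
    """Textbook two-source extractor: the inner product (mod 2) of two
    independent weakly random bit vectors is a nearly unbiased bit,
    provided each source carries enough entropy."""
    return sum(a & b for a, b in zip(x_bits, y_bits)) % 2

# Two imperfect, independent sources (here: biased coins as stand-ins).
x = [1 if random.random() < 0.7 else 0 for _ in range(256)]
y = [1 if random.random() < 0.3 else 0 for _ in range(256)]
print(inner_product_extract(x, y))  # a single nearly unbiased output bit
```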
“Our algorithm is efficient in theory although it is not suitable for practical use yet,” Zuckerman said. “We would want to improve it from a theoretical perspective before we even began implementing it.”
Chattopadhyay said the theorem has the potential to be wide-reaching. The duo will present their research at the annual Symposium on Theory of Computing in June, where they plan to discuss the methods that led to this key discovery.