How Rolling Loaded Dice Will Change the Future of AI

Highlights

  • Researchers at the MIT Probabilistic Computing Project, funded by the Intel Probabilistic Computing Center, developed a computer algorithm that simulates the roll of “loaded” dice to produce random numbers.

  • Fast and memory-efficient, the algorithm may improve the accuracy and efficiency of artificial intelligence applications, including decision support.

MIT researchers have developed a computer algorithm that generates random numbers with speed, accuracy, and low memory requirements. The MIT team will present the Fast Loaded Dice Roller (FLDR) at the virtual International Conference on Artificial Intelligence and Statistics on August 26-28, 2020.

The MIT Probabilistic Computing Project is integrating probabilistic inference, generative models, and Monte Carlo methods into the building blocks of software and hardware. Intel's Probabilistic Computing Center is funding and collaborating with the MIT team to explore faster, more accurate inference techniques and artificial intelligence (AI) applications in decision support, common-sense data analytics, and robotics.

Machines normally use deterministic processes that have predictable, repeatable outcomes. By integrating probability and randomness into hardware and software, probabilistic computing will help AI systems comprehend and compute with the uncertainty inherent in natural data.

“Probabilistic methods enable us to make risk-informed decisions. However, modeling the uncertainty is typically subject to an accuracy-time trade-off,” said Javier Felip Leon, research scientist at Intel Labs and collaborator with the MIT research team. “The more time spent computing, the better the uncertainty quantification. One of the bottlenecks resides in sampling algorithms, which generate values that are randomly distributed in a specific way. FLDR can help by allowing more samples, increasing the estimation accuracy for the same time budget.”
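
To make that trade-off concrete, here is a minimal Python sketch of a Monte Carlo estimate; the die weights and the event being estimated are illustrative inventions, not taken from the research. The standard error of such an estimate shrinks roughly as one over the square root of the number of samples, so a faster sampler buys more accuracy within the same time budget.

```python
import random

def estimate_probability(sampler, event, n_samples):
    """Monte Carlo estimate of P(event). The standard error shrinks
    roughly as 1/sqrt(n_samples), so a sampler that fits more draws
    into the same time budget yields a tighter estimate."""
    hits = sum(1 for _ in range(n_samples) if event(sampler()))
    return hits / n_samples

# Illustrative die weighted toward face 6 (the weights are made up).
weights = [1, 1, 1, 1, 2, 4]
roll = lambda: random.choices(range(1, 7), weights=weights)[0]

for n in (100, 10_000, 1_000_000):  # more samples, less noisy estimate
    print(n, estimate_probability(roll, lambda x: x >= 5, n))
```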

The FLDR algorithm simulates the roll of “loaded,” or weighted, dice to produce random numbers. It addresses the memory storage problem that hampered the landmark Knuth-Yao approach to generating random numbers.
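
For context, the textbook way to roll a loaded die is to invert the cumulative distribution with one uniform random number per roll. The Python sketch below shows that baseline, not anything from the FLDR paper itself.

```python
import random

def naive_loaded_die(weights):
    """Roll a loaded die by inverting the cumulative distribution.
    Correct but costly: every roll consumes one full-precision uniform
    random number, overhead that entropy-efficient samplers avoid."""
    total = sum(weights)
    r = random.uniform(0, total)
    cumulative = 0.0
    for face, weight in enumerate(weights):
        cumulative += weight
        if r < cumulative:
            return face
    return len(weights) - 1  # guard against floating-point round-off

print(naive_loaded_die([3, 1, 2]))  # returns 0, 1, or 2 with odds 3:1:2
```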

In 1976, computer scientists Donald Knuth and Andrew Yao introduced an algorithm that simulates the roll of weighted dice in the most time-efficient manner possible. While fast, the Knuth-Yao algorithm has a major drawback: it requires too much computer memory to store the necessary information, making the method impractical for many real-world problems.
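
The Knuth-Yao method can be pictured as a walk down a tree encoded by the binary digits of each probability, consuming one fair coin flip per level. The Python sketch below illustrates that idea under the assumption that the digit rows are finite and the probabilities sum exactly to one; it is not the original published algorithm. The memory problem lives in the digit table: one row per outcome and one column per bit of precision, and representing probabilities exactly can demand enormous precision.

```python
import random

def knuth_yao_sample(bits):
    """Sample outcome i with the probability whose binary digits are bits[i].
    All rows have equal length and the probabilities must sum exactly to 1.
    One fair coin flip is consumed per tree level -- the source of the
    method's speed; the digit table is the source of its memory cost."""
    d = 0
    for j in range(len(bits[0])):
        d = 2 * d + random.getrandbits(1)  # one fair coin flip
        for i, row in enumerate(bits):
            d -= row[j]
            if d == -1:
                return i  # reached a leaf labeled with outcome i
    # unreachable when the rows encode an exact distribution

# Example: probabilities 3/4 = 0.11 and 1/4 = 0.01 in binary.
print(knuth_yao_sample([[1, 1], [0, 1]]))
```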

FLDR was designed to retain the efficiency of the Knuth-Yao algorithm at a fraction of the memory cost: it uses up to 10,000 times less storage than the Knuth-Yao method while taking no more than 1.5 times longer per operation.
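
According to the published FLDR paper, the trick is to work with integer weights: round their sum m up to a power of two 2^k, append a dummy “reject” face with weight 2^k - m, run a Knuth-Yao-style walk over the resulting digit table of only about (n + 1) x k bits, and re-roll whenever the dummy face comes up. The sketch below follows that construction but is our illustration, not the authors' reference implementation.

```python
import random

def fldr_preprocess(weights):
    """Build the k-bit digit table for integer weights plus a reject face.
    m = sum(weights) is rounded up to 2**k; the dummy face absorbs the gap."""
    m = sum(weights)
    k = max(1, (m - 1).bit_length())            # smallest k with 2**k >= m
    augmented = list(weights) + [(1 << k) - m]  # dummy "reject" face
    bits = [[(w >> (k - 1 - j)) & 1 for j in range(k)] for w in augmented]
    return bits, k, len(weights)

def fldr_sample(bits, k, n):
    """Knuth-Yao-style walk over the dyadic table; re-roll on the reject face.
    Each attempt terminates within k coin flips because the augmented
    weights sum to exactly 2**k."""
    while True:
        d, face = 0, None
        for j in range(k):
            d = 2 * d + random.getrandbits(1)  # one fair coin flip
            for i, row in enumerate(bits):
                d -= row[j]
                if d == -1:
                    face = i
                    break
            if face is not None:
                break
        if face < n:
            return face  # a real outcome
        # otherwise we hit the dummy reject face: try again

# Example: a three-sided loaded die with integer weights 1, 1, 2 (m = 4).
bits, k, n = fldr_preprocess([1, 1, 2])
print([fldr_sample(bits, k, n) for _ in range(10)])
```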

Developed by MIT graduate student Feras Saad, research scientist Cameron Freer, professor Martin Rinard, and principal research scientist Vikash Mansinghka, FLDR could make Monte Carlo simulations and Monte Carlo inference techniques more efficient by speeding up the underlying random number generation. Monte Carlo simulations rely on repeated random draws, such as dice rolls, to generate more complex patterns of random numbers.

“Monte Carlo inference can require hundreds of thousands of times more random numbers than Monte Carlo simulations,” Mansinghka told MIT News. “That’s one big bottleneck where FLDR could really help.”

Methods like FLDR could be used to simulate and model complex systems, including climate change, seismic activity, and financial markets. They could also improve the robustness and accuracy of automated common-sense reasoning in data analytics and robotics.