Calculate Faster With Artificial Intelligence


An adaptive computer program from the company DeepMind has found some of the most efficient algorithms yet for matrix multiplications. In the future, this could speed up some computing operations in the field of artificial intelligence (AI) and also computer science in general.

Processing images on a cell phone, graphics in computer games, speech recognition, and even weather simulations all rely on certain computational operations: matrix multiplications. Depending on the size of the matrices to be multiplied together, this task is more or less complex.
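To make the operation concrete: in the standard schoolbook method, each entry of the result is a sum of products, so multiplying two n × n matrices costs n³ scalar multiplications (8 for n = 2). A minimal sketch, using plain Python lists rather than any particular library:

```python
def matmul(A, B):
    """Naive matrix multiplication of A (n x m) and B (m x p).

    Each of the n*p result entries needs m scalar multiplications,
    so an n x n product costs n**3 multiplications in total.
    """
    n, m, p = len(A), len(B), len(B[0])
    assert all(len(row) == m for row in A), "inner dimensions must match"
    return [[sum(A[i][k] * B[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]
```

It is exactly this cubic count of scalar multiplications that faster algorithms try to reduce.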

To quickly arrive at a correct result for matrix multiplications, the most efficient possible calculation methods and algorithms are needed. For centuries there was little progress in this area, until the German mathematician Volker Strassen showed in 1969 that matrix multiplications can be carried out in fewer steps than had been assumed until then. His method became known as the Strassen algorithm.
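Strassen's insight was that the product of two 2 × 2 matrices can be assembled from 7 cleverly chosen products instead of the naive 8, and that applying this trick recursively to block matrices reduces the overall cost. The 2 × 2 base case, sketched with plain Python tuples:

```python
def strassen_2x2(A, B):
    """Multiply two 2x2 matrices with 7 scalar multiplications
    instead of the naive 8 (Strassen, 1969)."""
    (a11, a12), (a21, a22) = A
    (b11, b12), (b21, b22) = B
    # The seven Strassen products.
    m1 = (a11 + a22) * (b11 + b22)
    m2 = (a21 + a22) * b11
    m3 = a11 * (b12 - b22)
    m4 = a22 * (b21 - b11)
    m5 = (a11 + a12) * b22
    m6 = (a21 - a11) * (b11 + b12)
    m7 = (a12 - a22) * (b21 + b22)
    # Recombine them (additions only) into the result entries.
    return ((m1 + m4 - m5 + m7, m3 + m5),
            (m2 + m4, m1 + m3 - m2 + m6))
```

The extra additions are cheap compared to multiplications, which is why saving even a single multiplication per step pays off when the trick is applied recursively to large matrices.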

System learns efficiency
More than 50 years later, the adaptive AlphaTensor system from the British company DeepMind has now found even more efficient algorithms for comparatively small matrix multiplications that are relevant in practice.

Artificial intelligence with Deep Reinforcement Learning has achieved this in a kind of game that, according to the experts at DeepMind, has similarities with chess – with many more possible moves and a three-dimensional playing field. With each move or calculation step, the system analyzes whether it is closer to solving the task than before. In this way, it learns to gradually find the most efficient algorithms.

Exceeds current state of research
The AlphaTensor system was given no prior information about matrix multiplication algorithms at the start of the research. All of its algorithms were based on its own experience and on the calculations it had performed previously. AlphaTensor quickly produced algorithms that matched the current state of research, and soon afterwards the system even managed to surpass it.

The experts at DeepMind present the system in a new study in the scientific journal Nature. One matrix calculation that requires one hundred multiplications with traditional methods had, through great effort and human know-how, already been reduced to 80 multiplications; AlphaTensor succeeded with only 76.

Experience from chess and Go
DeepMind has already demonstrated several times that amazing things can be achieved with reinforcement learning. About five years ago, the program AlphaZero was able to teach itself chess within a few hours and then easily outperformed the best program for computer chess at the time.

DeepMind’s AI programs have also triumphed in the game of Go in the past, a game that had challenged computers for decades. For the matrix multiplications tackled by AlphaTensor, the number of potential possibilities and “moves” is more than 30 orders of magnitude larger (a factor of 10³⁰).

For more efficiency
In the future, the experts would like to use AlphaTensor to further explore how efficient the solution of certain matrix multiplications can become. However, according to them, the results obtained so far can already lead to “significantly greater efficiency and speed” in some computer programs and computational tasks.

AlphaTensor, according to the authors, can also find algorithms that are optimized for use with certain hardware and therefore work ten to twenty percent faster on that hardware – for example, special graphics cards.

In general, the authors hope that AlphaTensor will serve as a basis for finding the most efficient algorithms possible in the future, even for computational tasks and mathematical problems that have nothing to do with matrices.
