Huang’s law becomes reality
07:53, 05.10.2023
For many years the development of processors seemed to follow the logic of Gordon E. Moore’s prediction, according to which every couple of years the capacity of central processors doubles while their cost is halved. What we are witnessing now, however, looks more and more like “Huang’s law,” a prediction made by Jensen Huang in 2018. According to this newer law, graphics processors will improve much faster than central processing units, while the CPU industry approaches its limits: chips keep getting more expensive, and the semiconductor industry and its computing approaches will have to be rethought.
The trend toward Huang’s law was recently demonstrated by NVIDIA in a talk by its Chief Scientist, Bill Dally.
The work by Dally’s team has delivered a 1,000x increase in AI performance over the last 10 years.
Much of this progress came from new approaches to how computers perform calculations, in particular from simplifying the number representations they use for arithmetic.
The NVIDIA Hopper architecture, for instance, features a Transformer Engine that reaches its high performance through a dynamic mix of 8-bit and 16-bit floating-point and integer math.
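The basic idea behind such mixed precision can be illustrated with a minimal NumPy sketch: inputs are quantized to a low-precision format (float16 here stands in for the FP8/FP16 tensor-core formats), while the accumulation runs in higher precision. This is only an illustration of the principle, not NVIDIA’s Transformer Engine code.

```python
import numpy as np

def mixed_precision_matmul(a, b):
    """Sketch of mixed precision: store operands in a low-precision
    format, but accumulate the matrix product in float32."""
    a_low = a.astype(np.float16)   # lossy cast: fewer mantissa bits
    b_low = b.astype(np.float16)
    # upcast before multiplying so the accumulation happens in float32
    return a_low.astype(np.float32) @ b_low.astype(np.float32)

rng = np.random.default_rng(0)
a = rng.standard_normal((64, 128)).astype(np.float32)
b = rng.standard_normal((128, 32)).astype(np.float32)

full = a @ b
mixed = mixed_precision_matmul(a, b)
print("max abs error vs. float32:", np.abs(full - mixed).max())
```

The error stays small for well-scaled values, which is why the bulk of the arithmetic can be pushed into cheaper, narrower formats.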
Another factor is carefully designed instructions that tell the GPU how to deliver better performance with higher energy efficiency.
Structural sparsity, introduced with the NVIDIA Ampere architecture, increases AI performance by pruning weights from AI models without losing accuracy.
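NVIDIA describes this structural sparsity as a 2:4 pattern: in every group of four consecutive weights, at most two are nonzero, which the hardware can exploit for higher throughput. The sketch below shows one simple way to prune a weight matrix to that pattern; it is an illustration of the idea, not NVIDIA’s pruning tooling.

```python
import numpy as np

def prune_2_to_4(weights):
    """Sketch of 2:4 structured sparsity: in every group of four
    consecutive weights along the last axis, keep the two with the
    largest magnitude and zero out the other two."""
    w = weights.reshape(-1, 4)                       # groups of 4
    drop = np.argsort(np.abs(w), axis=1)[:, :2]      # two smallest in each group
    pruned = w.copy()
    np.put_along_axis(pruned, drop, 0.0, axis=1)
    return pruned.reshape(weights.shape)

rng = np.random.default_rng(0)
w = rng.standard_normal((4, 8)).astype(np.float32)
print(prune_2_to_4(w))   # exactly half the entries in each 4-wide group are zero
```

In practice the pruned model is then fine-tuned so that accuracy is preserved despite half the weights being removed.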
Dally is optimistic about the further development of graphics processors along the lines of Huang’s law, and believes its potential will not be exhausted in the years to come.