IBM’s Analog AI Cores Approach – Future of AI Computing?
In recent announcements, IBM claimed that one of its innovations holds great promise for the future of AI hardware platforms. Even more surprisingly, the innovation is based on analog memory devices, long dismissed as a thing of the past.
In a recent paper, the company's research team reported achieving the same accuracy as a Graphics Processing Unit (GPU)-based system.
Analog techniques, which use continuously variable signals rather than binary 0s and 1s, work with limited precision, and that is the main reason they never became widespread in modern computing platforms. AI researchers, however, eventually realized that deep neural network models can tolerate levels of precision too low for most other computations.
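This tolerance is easy to demonstrate with a toy sketch (not from the paper): heavily quantizing the weights and activations of a single neuron's dot product, as a very low-precision analog cell effectively would, still yields a result close to the full-precision one. The `quantize` helper and bit width here are illustrative assumptions, not IBM's scheme.

```python
import numpy as np

rng = np.random.default_rng(0)

# Full-precision weights and activations for one hypothetical neuron.
w = rng.normal(size=256).astype(np.float32)
x = rng.normal(size=256).astype(np.float32)

def quantize(v, bits):
    """Uniform symmetric quantization to the given bit width."""
    levels = 2 ** (bits - 1) - 1
    scale = np.max(np.abs(v)) / levels
    return np.round(v / scale) * scale

exact = float(w @ x)
approx = float(quantize(w, 4) @ quantize(x, 4))

# The 4-bit result tracks the full-precision result closely relative
# to the scale of the inputs -- the kind of error deep networks absorb.
print(exact, approx)
```

At 4 bits each value collapses to one of only 15 levels, yet the accumulated error stays small compared with the magnitudes involved, which is why reduced-precision hardware remains viable for neural networks.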
IBM has focused mainly on using analog non-volatile memories (NVM) to accelerate the “backpropagation” algorithm. These memories allow the “multiply-accumulate” operations that dominate the algorithm to be parallelized in the analog domain, at the location of the weight data, using the underlying physics. Many calculations can thus be performed simultaneously rather than one after the other, as in a digital approach. And instead of shuttling digital data between memory chips and processing chips, the computations are performed inside the analog memory chip itself.
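Conceptually, the trick works like a resistive crossbar: weights are stored as cell conductances, inputs are applied as voltages, Ohm's law multiplies them in each cell, and Kirchhoff's current law sums the currents along each output wire. The sketch below simulates that behavior numerically; the matrix sizes and values are made up for illustration, and real devices add noise and nonlinearity this model ignores.

```python
import numpy as np

rng = np.random.default_rng(1)

# A crossbar of analog memory cells stores a weight matrix as
# conductances G (hypothetical layer: 3 inputs, 4 outputs).
G = rng.uniform(0.1, 1.0, size=(4, 3))

# Input activations are applied as voltages on the crossbar rows.
V = np.array([0.2, 0.5, 0.3])

# Ohm's law gives per-cell currents I_ij = G_ij * V_j, and
# Kirchhoff's current law sums them along each output wire.
# Every multiply-accumulate happens at once, where the weights
# live, instead of one at a time in a digital ALU.
I = G @ V  # the physics of the array computes this product

print(I)
```

The whole matrix-vector product is a single physical settling step, which is the source of both the parallelism and the energy savings the article describes.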
Even in early designs, this technology allowed IBM to achieve an energy efficiency of 28,065 GOP/sec/W and a throughput-per-area of 3.6 TOP/sec/mm², outpacing modern GPUs by two orders of magnitude.