17 Nov NVIDIA Is Not Just Accelerating AI, It Aims To Reshape Computing
NVIDIA’s CEO, Jen-Hsun Huang, took the stage at the annual Supercomputing conference this week to share his vision with an enthusiastic crowd. The tireless cheerleader of deep learning and datacenter acceleration foresees a brave new world of computing, enabled by Artificial Intelligence (AI) and, of course, accelerated by the company’s GPUs. Armed with new partnerships, products and initiatives, Huang boldly proclaimed that the new model of accelerated computing, combined with the power of Deep Learning, would usher in the era of “exascale” computing far sooner than previously thought possible. If he is right, the potential impact on the world around us, from new medicines and products to manufacturing, will be profound.
NVIDIA’s new SATURN V is the most efficient supercomputer in the world. (Source: NVIDIA)
What has NVIDIA announced?
First, Huang announced that the company’s partnership with IBM had produced the first server optimized for the newest NVIDIA GPU technology, connecting four of the company’s Pascal P100 GPUs directly to the IBM POWER8+ processors over the speedy NVLink fabric. This approach can train neural networks for machine learning faster than traditional designs in which the GPUs are connected to CPUs over the aging and slow PCI-E interface used in x86 servers. IBM and NVIDIA also announced a new software suite called PowerAI, so that data scientists and AI researchers can focus on developing new AIs, not on the mundane work of setting up and configuring a Linux server and all the tools and frameworks needed just to get started.
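To give a rough sense of why the interconnect matters, here is a back-of-the-envelope sketch. The bandwidth figures are commonly cited approximate peaks (not NVIDIA’s numbers for this server), and the model size is invented purely for illustration:

```python
# Hedged, idealized comparison of GPU interconnect bandwidth.
# Approximate peak rates, per direction (assumptions, not measurements):
#   PCIe 3.0 x16: ~16 GB/s
#   NVLink on P100: 4 links x ~20 GB/s = ~80 GB/s
PCIE3_X16_GBPS = 16.0
NVLINK_P100_GBPS = 80.0

model_size_gb = 4.0  # hypothetical neural-network parameter set

def transfer_seconds(size_gb: float, bandwidth_gbps: float) -> float:
    """Idealized transfer time, ignoring latency and protocol overhead."""
    return size_gb / bandwidth_gbps

print(f"PCIe 3.0 x16:  {transfer_seconds(model_size_gb, PCIE3_X16_GBPS):.3f} s")
print(f"NVLink (P100): {transfer_seconds(model_size_gb, NVLINK_P100_GBPS):.3f} s")
```

Real training workloads move data constantly between CPU and GPU memory, so even idealized numbers like these hint at why a faster fabric can shorten training times.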
Second, NVIDIA announced a partnership with Microsoft to promote the use of NVIDIA’s latest Pascal GPUs with the Microsoft Cognitive Toolkit (formerly known as CNTK), both for on-premises execution and in the Microsoft Azure cloud. With these two partnerships in place, NVIDIA has aligned itself with two of the largest suppliers of enterprise IT technology and services to accelerate the adoption of AI in the enterprise and help take AI into the mainstream of global businesses.
Third, Huang announced that NVIDIA is teaming up with the National Cancer Institute and the US Department of Energy to accelerate cancer research with new AI software called CANDLE (the Cancer Distributed Learning Environment) in support of President Barack Obama’s cancer “Moonshot” initiative. The project will be supported by the company’s fourth announcement, the world’s 28th fastest supercomputer, dubbed SATURN V, which of course was the name of the rocket that propelled man to the moon. The self-built NVIDIA supercomputer was anointed as the world’s most efficient in the TOP500 organization’s annual ranking of the world’s fastest supercomputers. NVIDIA built the SATURN V using 124 NVIDIA DGX-1 computers, each of which supports 8 Pascal P100 GPUs and 2 Intel Xeon processors.
How Will These Announcements Help NVIDIA?
Interestingly, when I asked Huang how he planned to help enterprises embrace the AI revolution and thereby cross the chasm, he pointed to CANDLE and the NVIDIA Deep Learning Institute. The company plans to develop more tailored software and education to tackle other enterprise challenges ripe for Deep Learning. He also mentioned that his company’s own engineers are now using the SATURN V to develop next-generation software and hardware to accelerate and improve their own products.
While we have come to expect NVIDIA to continually push the envelope with new products and breakthroughs in AI, the bigger story that emerged as Huang spoke was the transformative impact Deep Learning may have on the world of High Performance Computing (HPC). Specifically, the combination of GPU acceleration and the ability of AIs to predict answers may significantly shorten the path to “exascale” computing.
Exascale is the next big goal of the supercomputing industry, traditionally defined as a computer capable of executing a billion billion calculations every second, or 10¹⁸ floating point operations per second. But Huang believes that instead of calculating a result with mathematical expressions of the laws of physics, as computers do today, Deep Learning could be used to short-circuit the math for a large class of problems, effectively modeling the answer instead of calculating it. For some problems this is a viable approach; for others it would be completely useless.
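To make the scale of that target concrete, here is a quick sketch of the arithmetic. The per-accelerator throughput is a round hypothetical figure of my own, not a claim about any specific product:

```python
# Exascale: 10**18 floating point operations per second.
EXAFLOPS = 10**18

# Hypothetical accelerator delivering 10 TFLOPS (10**13 FLOPS).
accelerator_flops = 10 * 10**12

# On paper, before any efficiency losses or interconnect overhead,
# reaching one exaFLOPS would take:
units_needed = EXAFLOPS // accelerator_flops
print(units_needed)  # 100000
```

That hundred-thousand-accelerator figure is exactly why shortcuts matter: if Deep Learning can model an answer rather than brute-force it, the effective capability arrives long before the raw hardware does.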
One example is modeling the function and size of the walls of the human heart in the diagnosis of heart disease. Calculating the size and movement of the left ventricle from ultrasound images would be a massive and almost intractable problem using traditional physics-based programming. But modeling it with Deep Learning is actually fairly simple: just mark 40 points on each frame of the video images, and let Deep Learning take it from there. Conversely, calculating the stress on a turbine engine fan blade requires precise calculations using well-known mathematical equations, and is not something where you want to accept the word “probably”, as in “the fan blade will probably not disintegrate over a 10-year life span”.
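The heart example boils down to learning a mapping from an image to 40 annotated points. As a minimal sketch of that formulation, the toy below fits a linear least-squares model in place of a deep network, using random stand-in data; the frame size, training-set size, and the linear model itself are illustrative assumptions, not a real clinical system:

```python
import numpy as np

# Sketch of the data-driven formulation: learn frame -> 40 (x, y) points.
# A real system would train a deep network on labeled ultrasound video;
# a linear fit on random data stands in here to show the problem's shape.
rng = np.random.default_rng(0)
NUM_POINTS = 40          # points marked per frame, per the article
FRAME_PIXELS = 16 * 16   # tiny hypothetical frame, flattened
N_FRAMES = 200           # hypothetical number of labeled training frames

X = rng.standard_normal((N_FRAMES, FRAME_PIXELS))    # training frames
Y = rng.standard_normal((N_FRAMES, NUM_POINTS * 2))  # annotated points

# "Training": fit the frame-to-keypoints mapping by least squares.
W, *_ = np.linalg.lstsq(X, Y, rcond=None)

# "Inference": predict the keypoints for a new frame.
new_frame = rng.standard_normal((1, FRAME_PIXELS))
pred = (new_frame @ W).reshape(NUM_POINTS, 2)
print(pred.shape)  # (40, 2)
```

The point is not the model but the workflow: annotate frames, fit a mapping, predict, with no equations of cardiac mechanics anywhere in the loop.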
That’s what Deep Learning does: It predicts outcomes that are likely to be correct…most of the time. Just as the thought processes in the human brain decide, “Now would probably be a good time to apply the brakes so you don’t run over that kid crossing the road.”
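That “probably” can be made concrete: a learned model emits a probability, and the system acts on a threshold rather than a certainty. The function and numbers below are invented purely to illustrate the idea:

```python
# Toy illustration of acting on a learned probability rather than a
# certainty. The probabilities and threshold are made-up values.
def should_brake(p_pedestrian: float, threshold: float = 0.5) -> bool:
    """Brake when the model believes a pedestrian is probably present."""
    return p_pedestrian >= threshold

print(should_brake(0.92))  # True
print(should_brake(0.10))  # False
```

For the heart-wall problem, "probably right" is acceptable; for the fan blade, it is not, which is why the dividing line between the two classes of problems matters.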
Given the massive number of calculations required to learn like the human brain, NVIDIA certainly hopes that this endeavor will reshape industries across the economy and fuel demand for more and ever faster GPUs. Other companies, including Intel, Advanced Micro Devices (AMD), and many startups, will try to add their own versions of accelerators to the mix. But they will need to muster a significant effort and nurture a large ecosystem of avid followers to approach what NVIDIA has built today, not to mention matching the enthusiasm and vision of NVIDIA’s vocal and passionate leader.