05 Jan Ten Predictions For AI Silicon In 2018

2017 was an exciting year for fans and adopters of AI. As we enter 2018, I wanted to take a look at what lies ahead. One thing is certain: we’ve barely begun this journey, and there will be great successes and monumental failures in the year to come. Before I dive into the dangerous waters of predictions, it might be helpful to set the stage with some of the highlights and lowlights of AI in 2017. A lot happened this past year, so I will try to keep this brief!

Ten events that shaped the year for AI chips in 2017

  1. NVIDIA continued to blow the doors off the wildest expectations for its Data Center business, churning out triple-digit growth to reach a ~$1.5B revenue run rate.
  2. NVIDIA surprised the market with the Volta V100 GPU and cloud services for machine learning, capable of achieving 125 trillion operations per second with Tensor Cores—6X the performance of its one-year-old Pascal predecessor.
  3. NVIDIA also surprised the market by announcing its own Deep Learning ASIC, to be included in the company’s next-generation DrivePX automotive platform. As promised, the company published the specs as open source technology in Q3.
  4. AMD launched its AI GPU and software, the Vega Frontier Edition. The company announced a few big deployment wins, including Baidu for GPUs and Microsoft Azure for its EPYC CPUs.
  5. Google announced its own ASIC for AI deep learning training, the Cloud Tensor Processing Unit (TPU), delivering 45 TeraOps per die and featuring a 4-die, 180-TeraOps card for use in its datacenters and cloud services. This announcement fueled much speculation regarding the threat that ASICs may present to NVIDIA’s dominance.
  6. Microsoft announced impressive results for its internal use of Intel Altera FPGAs for Machine Learning and other applications. This heightened the expectations for Xilinx in the datacenter. Speaking of which…
  7. Amazon.com’s AWS announced AWS Marketplace solutions for its Xilinx-powered F1 instances for application acceleration (video, genomics, analytics, and machine learning). Baidu, Huawei, and others also jumped on the Xilinx FPGA bandwagon.
  8. Intel missed milestones for the production release of the Nervana Engine, the AI chip from the startup the company acquired in 2016.
  9. Intel canceled the Knights Hill Xeon Phi chip, either because the standard Xeon processor was so good, and/or because the company plans to shift its AI efforts to Nervana. There’s no doubt in my mind that the significant savings in development expenses were the ultimate decision driver.
  10. Finally, the number of ASICs being developed for AI to challenge NVIDIA has grown dramatically, including a half-dozen Chinese startups (presumably with government backing), a half-dozen US VC-funded companies, and several other large companies (including Qualcomm, Huawei, and Toshiba).

Ten 2018 predictions for AI silicon

Now that we’re all caught up, let’s move on to the predictions. I will couch these in terms of High, Medium, and Low probability just to hedge my bets.

  1. Google will announce public availability of its TPU in the Google Compute Cloud, along with new API and tool services to better compete with Microsoft and Amazon for Machine Learning as a Service. (HIGH probability)
  2. Intel will finally bring out the Nervana Engine, probably in Q2 or perhaps Q3. The company simply cannot wait any longer to establish relevancy in this hyper-growth market, especially after the cancellation of Knights Hill. However, I doubt Intel will exploit the chip’s on-die fabric, since it wants to sell as many Xeons as possible—I sincerely hope, for Intel’s sake, that I am wrong on this latter point. (HIGH probability)
  3. NVIDIA will pre-announce the chip that follows Volta. Since Volta is so new and remains well ahead of any competing chip, look for this to be announced at SC’18 in November, instead of GTC in March. (MEDIUM probability)
  4. Xilinx will win at least one high-profile customer for AI inference, although I do not think it will be Microsoft. (HIGH probability)
  5. While 2017 was the year of AI in the Data Center, 2018 will see a surge of AI at the edge, with IoT and other edge applications building momentum. This will be critical for NVIDIA, as it needs to grow at the edge to maintain its leadership pace. (MEDIUM probability)
  6. Although Dell, HPE, and Lenovo have all brought forward new infrastructure to support AI, the adoption of AI in the enterprise will continue to lag until 2019 or later. (HIGH probability)
  7. Someone will buy at least one ASIC startup, such as Wave Computing, Cerebras, or Groq. Odds are higher that the acquirer will be Dell or Hewlett Packard Enterprise, since a systems business model aligns more naturally with the OEMs than with NVIDIA or Intel. (MEDIUM probability)
  8. NVIDIA will bring out a full-fledged ASIC product (not just DLA logic for open source) for Machine Learning. I would rate this as LOW probability for 2018, since I do not believe NVIDIA will feel threatened by ASICs like Google TPU until 2019. That being said, CEO Jensen Huang is not one to wait for threats to materialize before he acts.
  9. At least one of the large Chinese cloud providers (Baidu, Tencent, or Alibaba) will buy one of the many Chinese ASIC startups late in 2018. (MEDIUM probability)
  10. While AMD’s EPYC CPU will gain significant traction in the datacenter, the company will struggle to establish meaningful (double-digit) market share in GPUs for AI. The company’s high-end Vega GPU is still a generation behind NVIDIA’s Volta, and it takes time to establish an ecosystem. AMD will be very focused on getting its APUs to market in 2018. (HIGH probability)

Well, that wraps it up. Feel free to post your own thoughts, critiques, etc., on this site! Happy New Year!