29 Mar Arm Chooses NVIDIA Open-Source CNN AI Chip Technology

A few weeks ago, we covered Arm’s announcement that it would be delivering a suite of AI hardware IP for Deep Learning, called Project Trillium. Arm announced at the time that third-party IP could be integrated with the Trillium platform, and now Arm and NVIDIA have teamed up to do just that. Specifically, the […]

27 Mar NVIDIA VR + AI = Billions Of Miles Of Virtual Driving

NVIDIA announced today that it has combined its historical strength in high-performance 3D graphics with its Artificial Intelligence (AI) technologies to create a cloud-based platform for simulating and testing the driving operations of autonomous vehicles. By enabling developers to virtualize the closed-loop system of sensors, image processing, routing, and driving controls, the company’s new DRIVE […]

26 Mar Xilinx Everest: Enabling FPGA Acceleration With ACAP

Xilinx, the leading vendor of Field Programmable Gate Array (FPGA) chips, has announced its widely expected 7nm-generation architecture, called Everest, targeting datacenter acceleration applications as well as AI, IoT, and the company’s traditional markets. While most market observers expected Xilinx to beat Intel (Altera) to the 7nm milestone, the scope of the Everest program […]

02 Mar Qualcomm’s Smart AI Strategy: Scalable Software For Scalable Devices

Just before the Mobile World Congress (MWC) confab, where techies meet to enjoy tapas in Barcelona while learning about all things mobile, Qualcomm announced a new software suite. These new offerings seek to enable AI capabilities on existing Snapdragon mobile platforms. While most training of Deep Neural Networks is done in the cloud on NVIDIA […]

14 Feb Arm Throws Their Axe Into The AI Ocean With Project Trillium

Arm’s Object Detection (OD) and Machine Learning (ML) processors work together to find and identify faces. This blog was co-written by Patrick Moorhead, President and Principal Analyst, and Karl Freund, Sr. Analyst, Machine Learning and HPC, Moor Insights & Strategy. Machine learning is the hottest thing in tech right now; hotter than smartphones, hotter than virtual reality, even hotter than […]

13 Feb Google Announces Expensive Cloud TPU Availability

Google recently announced that the Google Cloud TPU (announced last May) is now available in limited quantities as a beta on the Google Cloud Platform (GCP) for running TensorFlow-based AI applications. Clearly, the search giant is excited about its shiny new device, which the company has claimed will substantially reduce its datacenter footprint and cost […]

17 Jan Google, Microsoft, And Amazon Place Bets On AI In The Enterprise

Google just announced significant enhancements to its machine-learning-as-a-service (MLaaS) offerings, attempting to close the significant competitive gap that Microsoft has enjoyed, in my opinion, for the last year or so. Not to be left out, Amazon.com’s AWS announced its own new MLaaS tools and services at AWS re:Invent last November, trying to court […]

05 Jan Ten Predictions For AI Silicon In 2018

2017 was an exciting year for fans and adopters of AI. As we enter 2018, I wanted to take a look at what lies ahead. One thing is certain: we’ve barely begun this journey, and there will be great successes and monumental failures in the year to come. Before I dive into the […]

14 Nov What’s Hot At SC17: The Synthesis Of Machine Learning & HPC

High Performance Computing (HPC) has historically depended on numerical analysis to solve physics equations, simulating the behavior of systems from the subatomic to galactic scale. Recently, however, scientists have begun experimenting with a completely different approach. It turns out that Machine Learning (ML) models can be far more efficient and even more accurate than the […]

27 Oct Amazon And NVIDIA Simplify Machine Learning

NVIDIA and Amazon.com have announced, respectively, new Machine Learning software stacks in the NVIDIA GPU Cloud (NGC) and a new eight-Volta-GPU EC2 instance, both immediately available. While this announcement was completely expected, it is an important milestone along the road to simplifying and lowering the costs of Machine Learning development and deployment for AI […]

17 Oct Startup Accelize Simplifies FPGA Acceleration

FPGAs have been the hot new topic in datacenter discussions ever since Microsoft shared its FPGA successes at the Hot Chips conference in August. Xilinx and Amazon Web Services continued to amp up the buzz by partnering with IP providers such as Edico Genome, Ryft Systems Inc., and NGCodec Inc. to deliver acceleration-as-a-service with Amazon […]

28 Sep NVIDIA Targets Next AI Frontiers: Inference And China

NVIDIA’s meteoric growth in the datacenter, where its business is now generating some $1.6B annually, has been largely driven by the demand to train deep neural networks for Machine Learning (ML) and Artificial Intelligence (AI)—an area where the computational requirements are simply mindboggling. Much of this business is coming from the largest datacenters in the […]

27 Sep Amazon And Xilinx Deliver New FPGA Solutions

Datacenter adoption of FPGAs for workload acceleration has been a hot topic of late, especially after Microsoft announced significant gains in its datacenter at the Hot Chips conference in August. However, due to the rare skillset these reconfigurable silicon platforms demand, widespread adoption has been slow to materialize outside of Microsoft. I believe that might […]

28 Aug Microsoft: FPGA Wins Versus Google TPUs For AI

The Microsoft Brainwave mezzanine card extends each server with an Intel Altera Stratix 10 FPGA accelerator, synthesized to act as a “Soft DNN Processing Unit,” or DPU, and a fabric interconnect that enables datacenter-scale persistent neural networks. At the recent Hot Chips conference, three of the world’s largest datacenter companies detailed projects that exploit Field […]

04 Aug Will ASIC Chips Become The Next Big Thing In AI?

When Google announced its second generation of ASICs to accelerate the company’s machine learning processing, my phone started ringing off the hook with questions about the potential impact on the semiconductor industry. Would the other members of the Super 7, the world’s largest datacenters, all rush to build their own chips for AI? How might […]

17 Jul Microsoft Finds Its AI Voice

Microsoft held an intimate analyst and press event in London this week, coincident with the 20th anniversary of the founding of the Cambridge Research Lab, the European hub led by Professor Christopher Bishop. Microsoft now employs over 7,000 Artificial Intelligence (AI) research scientists and development engineers around the world under Microsoft Research (MSR) Executive VP […]

10 Jul Baidu Adds Momentum To NVIDIA’s Lead In AI

It is not much of an exaggeration to say that Artificial Intelligence (AI), in its present form, would not be possible without NVIDIA GPUs. NVIDIA has enjoyed near-universal support from some of the largest adopters of Machine Learning technology, as NVIDIA’s speedy GPUs accelerate the training and use of the Deep Neural Networks that enable […]

26 Jun ISC17: HPC Embraces Diversity As AMD & ARM Up The Ante Vs. Intel, IBM And NVIDIA

Renewed Competition in HPC Will Enable Workload Optimization

Intel has enjoyed a dominant position in High Performance Computing (HPC) processors for nearly a decade now, with only IBM POWER offering a viable CPU alternative since Advanced Micro Devices effectively, albeit perhaps unintentionally, exited the market in 2010. Meanwhile, NVIDIA has kept things interesting by delivering GPUs as […]

22 May Google Cloud TPU: Strategic Implications For Google, NVIDIA And The Machine Learning Industry

Google announced the second generation of the company’s Tensor Processing Unit (TPU), now called the Cloud TPU, at the annual Google I/O event, wowing the industry with performance for Machine Learning that appeared to eclipse NVIDIA’s Tesla Volta GPU only one week after that chip was launched. (See below why I say “appeared”.) Unlike Google’s […]

16 May RESEARCH BRIEF: AMD Lays The Foundations For Machine Learning With Radeon Vega Frontier & Optimized Software

Advanced Micro Devices (AMD) has shared more details about its upcoming family of GPUs, code-named “Vega,” as well as information regarding the progress the company has made with its open-source ROCm software stack for HPC and Machine Learning (ML). Investors and AI practitioners alike would like to see AMD enter this market as a second […]