17 Dec Robotics And Smart Cities Just Got A Lot More Real With NVIDIA Jetson Xavier Edge Compute Module
Last week I attended NVIDIA’s Jetson Developer Meetup at the company’s headquarters in Santa Clara, CA. There may have been more cool devices per square foot at the event than I’ve seen in a long time—self-flying cameras, food delivery robots, driverless farming, mapping robots, non-invasive people and traffic checkers, and more. At the event, NVIDIA launched the latest addition to its Jetson family of products, the new NVIDIA Jetson AGX Xavier (you can read my coverage of the precursor Jetson TX2 here, and our senior AI analyst Karl Freund’s preview of Xavier here, if interested). Let’s take a closer look at the company’s newest edge compute offering.
I think it’s important to understand the dynamics at play in the autonomous machines market. This is the beginning of a revolution, and revolutions don’t happen overnight; they come in increments before the hockey-stick growth. When they hit, they hit big, much like machine and deep learning, which started in the datacenter and are now transforming everything.
The NVIDIA Jetson AGX Xavier will enable the construction of the next generation of autonomous machines, serving as the “brain” behind these bots. Robotics and smart-city platforms featuring Jetson AGX Xavier should be able to operate completely autonomously in the field, without the need for human intervention or constant cloud connectivity. While Jetson TX2 delivered enough performance for certain use cases, applications like delivery, smart garages, and aerial inspection require between 20 and 30 TOPS of performance at a 10-30W power draw. Enter Jetson AGX Xavier.
The Jetson AGX Xavier module boasts “workstation performance,” up to 32 TOPS, while requiring “clock-radio” levels of energy consumption, as low as 10 watts. NVIDIA is good at explaining and marketing, for sure. Jetson AGX Xavier includes a custom 8-core NVIDIA Carmel 64-bit ARMv8.2 CPU, a 512-core NVIDIA Volta GPU with 64 Tensor Cores, dual NVIDIA Deep Learning Accelerators (DLAs), and two 7-way VLIW Vision Accelerators. It features 16GB of 256-bit LPDDR4x memory and 32GB of eMMC 5.1 storage. One architectural element I found particularly notable is that the CPU and GPU share the same physical memory, so the GPU doesn’t have to cross the PCIe bus for memory copies. This translates to more performance at a lower power draw, as well as a lower BOM cost, since the GPU doesn’t require separate memory.
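To make the shared-memory point concrete, here is a minimal CUDA sketch (my own illustration, not NVIDIA sample code) using `cudaMallocManaged`. On a discrete GPU, the managed buffer is migrated across the PCIe bus behind the scenes; on a Jetson-style unified-memory system, the CPU and GPU touch the same physical DRAM, so no copy is needed either way—the source code is identical, only the cost changes.

```cuda
// Sketch: a single managed allocation visible to both CPU and GPU.
// On Jetson, this buffer lives in the shared LPDDR4x; there is no
// separate device memory and no PCIe transfer for the GPU to reach it.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void scale(float *data, int n, float k) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= k;
}

int main() {
    const int n = 1 << 20;
    float *buf = nullptr;

    // One allocation addressable by both processors; no explicit
    // cudaMemcpy host<->device calls appear anywhere in this program.
    cudaMallocManaged(&buf, n * sizeof(float));

    for (int i = 0; i < n; ++i) buf[i] = 1.0f;   // CPU writes

    scale<<<(n + 255) / 256, 256>>>(buf, n, 2.0f); // GPU reads/writes
    cudaDeviceSynchronize();  // make GPU results visible to the CPU

    printf("buf[0] = %.1f\n", buf[0]);           // CPU reads back
    cudaFree(buf);
    return 0;
}
```

The same binary pattern runs on a discrete-GPU workstation, but there the CUDA runtime silently pages the buffer over PCIe; on Xavier’s unified memory that traffic, and the second copy of the data, simply never exists.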
Jetson AGX Xavier’s 440Gbps of I/O makes it ideal for streaming sensors and high-speed peripherals such as cameras (very important for autonomous machines). It features 3 USB 3.1 ports, 4 USB 2.0 ports, 5 PCIe Gen 4 controllers (according to NVIDIA, Jetson AGX Xavier is one of the first embedded devices to support PCIe Gen 4), and more. All of this hardware fits inside a remarkably compact 100x87mm form factor.
As far as software goes, the module utilizes the tools and workflows of NVIDIA’s powerful AI platform. Additionally, Jetson AGX Xavier supports applications developed with both the JetPack and DeepStream SDKs. These software capabilities, coupled with Jetson AGX Xavier’s impressive hardware specs, will allow developers to deploy AI-powered machines and devices at scale: robotics, IoT edge devices, medical instruments, intelligent video analytics, and much, much more. One very pragmatic feature all Jetson modules support is backward compatibility of JetPack and DeepStream. In other words, the latest software will accelerate Jetson TX1, TX2, and AGX Xavier alike. While NVIDIA isn’t committing to a guaranteed software support window, the company acknowledged that it expects devices to have a ten-year lifespan.
At the Jetson Developer Meetup, I got a glimpse of some of what NVIDIA’s technology is enabling: LTE-enabled cameras on smart poles, building-mapping robots, delivery robots with vision and LIDAR, and concierge robots. This technology is the future, and I’m inclined to believe NVIDIA when it says that Jetson AGX Xavier is how we get there. I met with about ten companies at the event, and I was struck that the engineers at each one knew exactly what benefits Jetson TX2 provided over TX1 and what Jetson AGX Xavier provides over TX2, described in terms of the product features each generation could enable. I was also struck by the large companies that are actually using these products, not just trialing them: names you may know like JD.com (inventory robots), Komatsu (construction), and Denso and FANUC (industrial manufacturing).
One great demo NVIDIA showed illustrated the benefits of DeepStream by pitting Jetson AGX Xavier against a datacenter-class Tesla P4 GPU, each autonomously performing video analytics in real time. While Jetson AGX Xavier won’t be in self-driving cars, the point was to demonstrate its dramatic improvement in performance. Both demos processed 30 video channels, running 4 DNNs per channel to identify and recognize objects like cars, people, and curbs. The big difference? Jetson AGX Xavier operated at 30W at 1080p resolution, while the P4 operated at 75W at 720p. This represents huge improvements, 15X performance and 10X energy efficiency, in just a few years.
All in all, the Jetson AGX Xavier module looks to be another stellar offering from NVIDIA, a company whose name is practically synonymous with machine and deep learning at this point. The potential for fully autonomous machines is so vast that it’s impossible to fully wrap your mind around it. NVIDIA is right at the forefront of enabling the next generation of these machines, and I can’t wait to see what comes next. The new module is available now, globally.
Note: Moor Insights & Strategy writers and editors may have contributed to this article.