09 Jan Synaptics Announces ML-Accelerated, Far-Field Voice SoCs For The Smart Home Segment At CES 2019
CES 2019 kicks off this week in Las Vegas, and with it comes the yearly deluge of consumer electronics-related announcements. I wanted to lead off my coverage at the event talking about Synaptics—the “human interface solutions company” formerly known by many in the industry as “the touchpad company,” which continues to push further into mobile and IoT.
Last year, I wrote a piece on Synaptics’ evolution as a company. If “Synaptics 1.0” was PC touchpads, and “Synaptics 2.0” was smartphone touchscreens, displays, and fingerprint technology, then the company has now entered what CEO Rick Bergman refers to as “Synaptics 3.0”—the IoT phase.
In keeping with this new iteration of the company, today Synaptics announced its new AS3xx series of Smart Edge AudioSmart solutions. Let’s take a closer look.
AudioSmart at the smart home edge
The AudioSmart line is geared toward the next generation of voice-enabled smart home devices (think smart hubs, speakers, appliances, and the like). The market for these smart consumer devices is huge: just last week, Amazon announced it had sold 100 million of them. Add Apple, Google, and Chinese device makers like Lenovo, Huawei, and Xiaomi, and you are looking at billions of devices.
I believe the new AS3xx family is one of the first (if not the first) to feature integrated neural network acceleration, a wake word engine (with support for custom wake words), and advanced far-field voice processing. Performing these advanced functions at the edge (instead of in the cloud) should improve responsiveness, and it keeps personal data more secure to boot, since that data is never sent up to the cloud. It also means these smart functionalities won’t be as dependent on seamless network connectivity.
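To make the edge-first architecture above concrete, here is a minimal, purely illustrative Python sketch (not Synaptics code; the detector is a stand-in stub, and `WAKE_WORD` and the frame format are invented for the example) of how a device might gate any cloud upload behind a local wake word check, so audio captured before the wake word never leaves the device:

```python
from typing import Iterable, List

WAKE_WORD = "hey_device"  # hypothetical wake word token


def local_wake_word_detector(frame: str) -> bool:
    """Stand-in for an on-device neural wake word engine.

    A real engine would score raw audio frames with a neural network;
    here a 'frame' is just a string token for illustration.
    """
    return frame == WAKE_WORD


def process_stream(frames: Iterable[str]) -> List[str]:
    """Run entirely on-device until the wake word is heard.

    Only frames arriving after the wake word are collected as
    eligible for cloud upload; everything before it is discarded
    locally and never transmitted.
    """
    uploadable: List[str] = []
    awake = False
    for frame in frames:
        if not awake:
            awake = local_wake_word_detector(frame)  # local-only check
            continue  # pre-wake audio stays on the device
        uploadable.append(frame)  # post-wake audio may go to the cloud
    return uploadable
```

For example, `process_stream(["noise", "hey_device", "turn", "on"])` returns only `["turn", "on"]`, while a stream that never contains the wake word returns nothing at all.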
Synaptics is no stranger to the smart home—with its 2017 strategic acquisition of Conexant, it gained a good foothold in the far-field voice-enabled market. Its technology is already designed into Apple HomePod, as well as most Alexa-based smart devices. By manufacturing this new family of SoCs in-house, it should be able to further improve and optimize its far-field voice processing and wake word technology, all at a price point friendly to consumer electronics. The company also says that the integrated nature of the AS3xx line makes it ideal for product manufacturers in the smart home space who lack audio/voice expertise.
The first member of the family, the AS371, is sampling now. This SoC features a machine learning engine that utilizes Synaptics’ SyNAP technology (Synaptics Neural Network Acceleration and Processing). The company claims SyNAP enables complex functions such as user identification and behavioral prediction, making human-device interaction feel more intuitive. Sampling soon are the AS390, for voice-enabled devices with displays; the AS350, for voice-enabled low-power devices; and the AS320, for voice-enabled microcontroller-based devices (all projected for the first quarter of 2019).
The AS3xx family is exactly the sort of technology I expect out of “Synaptics 3.0.” The company continues to build on its legacy strengths and expand into new, next-generation human interface technology. By bringing machine learning to the smart home edge and perfecting far-field voice processing and wake word technology, Synaptics could make tomorrow’s voice-enabled smart home devices even more intuitive and powerful. I look forward to hearing more from the company at CES 2019 and seeing some of this technology in action.
Note: Moor Insights & Strategy writers and editors may have contributed to this article.