31 Jul Synaptics Timing With Edge Solutions Aligns With Google’s Increased Privacy Push
Google held its big annual developer conference, I/O, back in May. While the event has come and gone, its repercussions continue to be felt—particularly around privacy and how the company seeks to enable more of it. I think this conversation opens up an opportunity for vendors who are creating intelligent solutions for the edge—one of which happens to be Synaptics.
If you’ve been following Synaptics for a while, like I have, you probably originally heard about it in terms of its touchpad business. Indeed, the company still manufactures and has the dominant share in computer touchpads and fingerprint readers, but its business strategy pivoted in the last several years towards a much broader focus on mobile and IoT (see my write-up here). In fact, as of July 2018, Synaptics’ PC business accounted for only about 15% of its overall business, while mobile accounted for 62% and IoT made up 23%. This is strategic—Synaptics understands that the IoT market is about to explode. Let’s take a closer look at how all this fits together timing-wise with Google and its efforts to ramp up privacy.
At Google I/O there was a lot of discussion around inferencing at the edge. I/O saw the launch of several new devices, including the Pixel 3a, the Pixel 3a XL, and the Nest Hub Max, a new smart home hub (introduced alongside Google's rebranding of its Home Hub line under the Nest name). In an age where consumers are increasingly wary of their devices "listening in" on them and violating their privacy, there's a big question of how to allay those concerns and drive adoption of devices like these. Google highlighted several new privacy measures at I/O, such as easier-to-access privacy controls and the introduction of "incognito mode" to its Maps and Search functions. However, the real answer to the question of privacy lies in edge processing.
Google used the Pixel 3 to tell this story, promoting natural-language interaction with the device that requires users to memorize fewer keywords. Employing new, smaller machine learning models that can run entirely on the phone spares users the privacy risk of sending their data up to the cloud for processing. While Google demonstrated this on the Pixel phone at the event, these models are also coming to Google Home and Mini.
Key to this is the concept of federated learning, essentially a distributed machine learning approach that enables mobile phones in different locales to train a model together without transferring personal data. Phones can also personalize these models locally. Google already employs this capability in its smart digital keyboard, Gboard, to aid with next-word prediction. Gboard can now even pick up on trending neologisms in real time, such as the now nearly ubiquitous "YOLO." Google also says federated learning will enable the next generation of Google Assistant to run locally on devices, instead of in the cloud, and provide answers as much as 10 times faster.
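To make the idea concrete, here is a minimal sketch of federated averaging, the core pattern behind federated learning: each device takes a training step on its own private data, and only the resulting model weights (never the raw data) go back to the server, which averages them into a new global model. The linear model and toy data below are illustrative stand-ins, not Google's implementation.

```python
import numpy as np

def local_update(global_weights, local_data, lr=0.1):
    """One round of on-device training: a gradient step on local data only."""
    X, y = local_data
    preds = X @ global_weights          # simple linear model
    grad = X.T @ (preds - y) / len(y)   # mean-squared-error gradient
    return global_weights - lr * grad   # weights leave the device; X and y do not

def federated_round(global_weights, devices):
    """Server step: average the weights returned by each device."""
    updates = [local_update(global_weights, d) for d in devices]
    return np.mean(updates, axis=0)

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])

# Each "phone" holds a private shard of data that never leaves it.
devices = []
for _ in range(5):
    X = rng.normal(size=(20, 2))
    devices.append((X, X @ true_w))

w = np.zeros(2)
for _ in range(200):
    w = federated_round(w, devices)
print(np.round(w, 2))  # close to [2, -1]: the model is learned, the data stays put
```

The privacy property falls out of the data flow: the server only ever sees weight vectors, so the per-device examples remain on the device.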
I was impressed by the improvements to Google Assistant, but even with it running locally, I worry about the privacy exposure of an assistant that does not use a wake-word and is always listening. Even with the best of intentions, this is still risky, and unforeseen mistakes could be made.
So where does Synaptics fit in? As part of its new focus on IoT, the company recently made two strategic acquisitions: Conexant and the multimedia division of Marvell. The Marvell deal will prove important; the multimedia division focused on consumer video, including processors for OTT devices like Google Chromecast and Amazon Fire TV and for Android-based set-top boxes. Conexant is particularly important to this discussion, though, because it gave Synaptics its gateway into the far-field, voice-enabled market. Thanks to that acquisition, Synaptics SoCs are now designed into the Apple HomePod and many Alexa-based smart devices.
Earlier this year, Synaptics announced a new series of its Smart Edge AudioSmart SoCs, the AS3xx series, geared specifically towards the next generation of voice-enabled smart devices. This new family of chips features integrated neural network acceleration (one of the first, if not the first, to do so), a wake-word engine that supports custom wake-words, and advanced far-field voice processing. In other words, if you're looking to do inference at the edge, the AS3xx series looks to be just what the doctor ordered. These chips promise improved data privacy, better responsiveness, and less dependence on seamless network connectivity, all of which will be crucial for tomorrow's smart home hubs and voice-enabled devices.
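The privacy argument for an on-chip wake-word engine is worth spelling out. The sketch below is purely illustrative (it is not Synaptics' firmware, and the energy heuristic stands in for a small on-device neural network): audio is scored locally, and nothing is forwarded to any downstream speech pipeline until the wake-word fires, so pre-wake audio never leaves the chip.

```python
import numpy as np

WAKE_THRESHOLD = 0.8  # hypothetical confidence cutoff

def wake_word_score(frame):
    """Stand-in for a tiny on-device model scoring one audio frame for
    the wake-word; here, a toy energy heuristic for illustration."""
    return min(1.0, float(np.mean(frame ** 2)))

def process_stream(frames):
    """Drop audio on-chip until the wake-word is detected; only later
    frames would ever be passed on (to a local or cloud recognizer)."""
    forwarded = []
    awake = False
    for frame in frames:
        if not awake:
            awake = wake_word_score(frame) >= WAKE_THRESHOLD
            continue  # pre-wake audio is discarded, never transmitted
        forwarded.append(frame)
    return forwarded

quiet = np.zeros(160)  # silence scores 0.0
loud = np.ones(160)    # stand-in "wake-word" frame scores 1.0
stream = [quiet, quiet, loud, quiet, quiet]
print(len(process_stream(stream)))  # prints 2: only frames after the wake-word
```

This is the structural point behind the AS3xx pitch: the gating decision happens at the edge, so the default state is that audio goes nowhere.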
Synaptics’ shift towards IoT and the far-field voice-enabled market comes at a time when Google and other global OEMs are realizing the necessity of performing more inference at the edge. Its groundbreaking AS3xx series, in particular, seems well poised to take advantage of this shift. Synaptics is no longer just “the touchpad company,” and I believe we’re going to see its technology at the heart of many of tomorrow’s smart home solutions. I’ll continue to watch with interest.
Note: Moor Insights & Strategy writers and editors may have contributed to this article.