Should Technology Augmentation Allow Us To Be Better Than Our Normal Selves?

I have written before in Forbes about Wellsmith and how they are working with Cone Health to deliver a new digital care platform. In that piece, we learned how Wellsmith uses consumer-centric devices to support a consumer’s journey to a better quality of life. Of course, the quantified self and using devices like Fitbit to measure yourself is not new. What is new is how Wellsmith integrated these devices into the continuum of care used by both the consumer and their doctor to better manage conditions like Type 2 Diabetes. But what happens when consumer technology solutions go beyond simply helping people manage their illnesses or disabilities? Can these solutions provide augmentation that extends the capabilities of users beyond the abilities of a ‘normal’ person? Should they? As an example, let us look at hearing aids.

Three revolutions

At the Moor Insights and Strategy website, Yuri Teshler, our former healthcare lead and a founder at Wellsmith, authored a blog post about the latest hearing aids he uses. His experiences with the Signia Nx 13 with OVP (Own Voice Processing) offer an interesting set of insights into where augmentation may be going.

Yuri first outlines the three revolutions hearing aids have experienced over the last few years: form factor, sound quality, and connectivity. Having worked with Yuri for many years, I understand from him the value of the reduction in form factor and the improvement in comprehension his hearing aids provide. What was never clear to me, as someone lucky enough not to need hearing aids, was how complex connectivity has become.

Consider Bluetooth connectivity.

It’s frustrating for anyone to get their headphones to work initially on one device, and then to add and swap between multiple other devices. Now consider someone with a hearing aid. They may have a smartphone, tablet, TV, car radio/audio, laptop, PA system, or any number of other audio devices that they need to listen to and integrate into their hearing. With hearing aids, that’s pretty hard, if not impossible, to do. In many cases, unless you can connect quickly and easily, you either turn the existing speakers up really loudly or just miss out on the audio. In the blog, Yuri talks about how Apple’s MFi (Made for iPhone) program is helping to change and improve this for hearing aid wearers.

There does seem to be an interesting additional benefit that comes from all this connectivity. Not only does a hearing aid user no longer miss the sounds the rest of us take for granted, but they may also get to do digitally what none of us can.

Augmented hearing

Here are three audio solutions that are or will be available to consumers soon:

  • For $159 you can now buy Google Pixel Buds, which promise “real-time translation with Google Translate.”
  • Companies like Realtime Transcription are starting to offer consumers services that have previously only been available to companies or governmental bodies.
  • Bose announced a project it’s calling “Bose AR” at this year’s SXSW festival, and it showed off a pair of prototype glasses that demonstrate what sound-based AR might look and feel like.

Individually these are interesting, but it’s unlikely that anyone would want to use any of them all the time. In addition, the three services all come from different sources and use different devices. But what if you already had a device that was always listening for you, and these services could quickly and easily integrate with it?

Hearing aids with technology like MFi may provide the perfect base for something like this. A move in this direction might be The Dash Pro tailored by Starkey. While the Bragi Dash gets mixed reviews as headphones, its Kinetic User Interface is interesting. For instance, you can skip a song with a shake of your head, accept an incoming call by nodding, or decline it by shaking your head. If you could combine gestures like this with some of the services I talked about before, imagine how powerful it could be.
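To make that idea a little more concrete, here is a minimal sketch of how such a gesture interface might be wired up in software. Everything in it is hypothetical: the event names, contexts, and handler functions are invented for illustration and do not reflect any published Bragi or Starkey API. The point is simply that the same physical gesture can mean different things depending on what the device is doing at that moment.

```python
# Hypothetical sketch: routing head-gesture events from a hearable to
# media and call actions, in the spirit of a kinetic user interface.
# All names here are invented for illustration; no real device API is used.

from dataclasses import dataclass


@dataclass
class GestureEvent:
    kind: str      # the detected gesture, e.g. "nod" or "shake"
    context: str   # what the device is doing, e.g. "incoming_call", "playing_music"


def accept_call():
    print("Call accepted")


def decline_call():
    print("Call declined")


def skip_track():
    print("Skipping to the next track")


# The dispatch table is keyed on (context, gesture), because a head shake
# should decline a call in one moment and skip a song in another.
ACTIONS = {
    ("incoming_call", "nod"): accept_call,
    ("incoming_call", "shake"): decline_call,
    ("playing_music", "shake"): skip_track,
}


def handle_gesture(event: GestureEvent) -> None:
    action = ACTIONS.get((event.context, event.kind))
    if action:
        action()


# Example: a head shake while music is playing skips the track.
handle_gesture(GestureEvent(kind="shake", context="playing_music"))
```

Keying the dispatch on context is what keeps a small gesture vocabulary usable: two or three head movements can cover many functions without the wearer having to remember dozens of distinct motions.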

Imagine a user of an augmented hearing aid device: during a meeting in a foreign country, a small head gesture automatically translates everyone speaking into English. By looking at one person who speaks quietly, the user prompts the hearing aids to amplify just that voice, so it mixes well with the others in the room, including someone on a conference line. Another gesture starts recording the part of the meeting where the action items were discussed, then produces minutes or to-dos that are automatically sent to all attendees. All of this is done without touching a keyboard or interrupting the flow of the meeting.
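None of this exists as a product today, but the plumbing is easy to picture: one always-on audio stream feeding optional services that a gesture can switch on or off. Here is a rough sketch under that assumption; every service is a stub standing in for what would really be a vendor API for translation, beamforming, or transcription.

```python
# Hypothetical sketch of the meeting scenario: optional services chained
# over a single audio stream. All services below are stubs invented for
# illustration; real implementations would come from vendor APIs.

from typing import Callable


def translate_to_english(chunk: bytes) -> bytes:
    return chunk  # stub: would send audio to a translation service


def amplify_focused_speaker(chunk: bytes) -> bytes:
    return chunk  # stub: would boost the voice the wearer is looking at


class MeetingRecorder:
    """Collects audio so minutes and to-dos could be generated later."""

    def __init__(self):
        self.chunks: list[bytes] = []

    def __call__(self, chunk: bytes) -> bytes:
        self.chunks.append(chunk)  # stub: transcription would happen downstream
        return chunk


class AugmentedHearingPipeline:
    """Runs each enabled service, in order, over every audio chunk."""

    def __init__(self):
        self.stages: list[Callable[[bytes], bytes]] = []

    def enable(self, stage: Callable[[bytes], bytes]) -> None:
        self.stages.append(stage)

    def process(self, chunk: bytes) -> bytes:
        for stage in self.stages:
            chunk = stage(chunk)
        return chunk  # the result is what gets rendered into the wearer's ears


# A gesture mid-meeting might enable translation and recording:
pipeline = AugmentedHearingPipeline()
pipeline.enable(translate_to_english)
pipeline.enable(MeetingRecorder())
processed = pipeline.process(b"raw-audio-frame")
```

The gestures from the earlier sketch would simply call `enable` (or a matching disable) on a pipeline like this, which is what would let the services from three different vendors feel like one device.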

Bionic hearing

While this scenario may be some time away, the premise of all these services operating together, seamlessly integrated, is not hard to imagine. What is harder to imagine, if Google Glass is any example, is that most of us will want to use additional devices to aid our already ‘normal’ capabilities. Maybe this is a place where having hearing aids will be an advantage, allowing users to sprint easily past “typical” folks into the realm of technological superpowers. Check out the piece on the Moor Insights and Strategy website.

Note: Nigel Dessau contributed to this article.