Google I/O 2017: It’s Really About AI And Machine Learning, People
This week I’m attending Google’s annual developer conference, Google I/O, in Mountain View, CA. The conference comes on the heels of a number of other big tech events (Microsoft Build last week, and Facebook F8 last month), and I went into I/O hoping to see a lot of innovation and some real, standout differentiation from these other tech giants. Here’s my take on the biggest announcements coming out of Day 1, along with some AR/VR news from Day 2.
Looking at the world through Google Lens
CEO Sundar Pichai wasted no time during the Day 1 keynote, forgoing the typical big-picture digital transformation talk in favor of diving right into announcements (after a brief intro lauding Google’s history and recent accomplishments).
The first thing Pichai announced was a new technology called Google Lens. The concept behind Google Lens is using Google’s computer vision and AI technology to create a search engine of sorts for images—point the camera at a storefront, and it will pull up the name of the place, business listing information, customer ratings and more. Google Lens is Google Goggles for this decade, and it reminds me of Samsung’s Bixby Vision. Google Lens can also be integrated into Google Assistant, mining images for useful information to add to your calendar. An example was given of pointing the camera at a marquee for an upcoming concert—Google Lens extracted the date and time of the show, and Assistant dropped it right into the calendar. Lens can also be paired with Assistant for help with translations. I think Google Lens is a step in the right direction toward the Mixed Reality world the industry has been priming us for over the last several years. Needless to say, the more consumers use their cameras to capture information, the more Google knows about them and the more tailored the ads it can send.
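Google hasn’t said how Lens is built, but the marquee demo maps onto a familiar pipeline: run OCR on the image, then fuzzy-parse a date and time out of the recognized text. Here’s a minimal sketch of that idea, using Google’s public Cloud Vision API as a stand-in for the OCR step; everything here is an illustrative assumption, not Lens’s actual implementation:

```python
# Hypothetical sketch of the Lens marquee demo: OCR a photo, then pull a
# date/time out of the recognized text. This is NOT how Lens is implemented;
# it just illustrates the vision -> text -> structured-event pipeline.
from google.cloud import vision  # public Cloud Vision API, used as a stand-in
from dateutil import parser      # fuzzy date parsing

def event_from_photo(image_path: str):
    client = vision.ImageAnnotatorClient()
    with open(image_path, "rb") as f:
        image = vision.Image(content=f.read())
    # Run OCR over the marquee photo.
    response = client.text_detection(image=image)
    annotations = response.text_annotations
    marquee_text = annotations[0].description if annotations else ""
    # Fuzzy-parse a date/time out of whatever the OCR found
    # (raises ValueError if no date-like text is present).
    when = parser.parse(marquee_text, fuzzy=True)
    return marquee_text, when
```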
Introducing the next generation of TPUs, this time for ML training
Next, Pichai introduced Cloud TPUs, the second generation of Google’s Tensor Processing Units for Google Cloud, which Google says will “greatly accelerate machine learning workloads”, shifting from just inference uses to inference and training. Google says each TPU delivers as much as 180 teraflops of floating-point performance, with the ability to combine into pods of 64 TPUs—delivering an impressive 11.5 petaflops.
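The pod math checks out. A quick back-of-the-envelope verification, using only the per-chip and pod-size figures Google quoted:

```python
# Sanity check on Google's quoted Cloud TPU pod numbers.
TFLOPS_PER_TPU = 180   # peak per second-generation TPU, as quoted by Google
TPUS_PER_POD = 64      # TPUs per pod, as quoted by Google

pod_tflops = TFLOPS_PER_TPU * TPUS_PER_POD
print(f"Pod peak: {pod_tflops:,} teraflops = {pod_tflops / 1000:.1f} petaflops")
# Pod peak: 11,520 teraflops = 11.5 petaflops, matching Google's claim
```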
It’s unclear to me what Google is actually getting out of their TPU journey, beyond the positioning point that their AI is better because they offer a TPU. One thing to consider is that TPUs could be siphoning off resources from other AI projects. The Google TPU is an ASIC, hard-coded to a limited set of functions that can’t change once the chip ships. We find ASICs in audio and video decoders where standards are firm and don’t change, like H.264 video—very different from the rapidly changing machine learning landscape. ASICs also add 18 to 24 months to a development cycle, an eternity in machine learning.
My mind isn’t made up on this yet, and I need to do a lot more research. AI analyst Karl Freund will be digging deep into this.
Updates to Google Home and Assistant
Scott Huffman, VP of Engineering for Assistant, took the stage to tell us about some of the new updates to Google Assistant, touting Assistant’s improving ability to be conversational. Assistant will now sometimes give “proactive notifications”—that is, it, not you, will start the conversation. This sounds nice, but I have to wonder: how will it know when I’m in the room? In addition, users will now be able to type questions, for situations when they don’t feel comfortable talking to Assistant out loud. Another big announcement made on Day 1 was that, starting now, Assistant is available on iPhones for the first time.
As for Google Home, it was announced that users can now schedule appointments on the device, with options for follow-up reminders coming soon. Soon, users will also be able to make hands-free phone calls to anyone in the U.S. or Canada via Home—for free, without any setup or even a phone. Also coming later this year, users can get visual responses to inquiries, displayed on a TV via Chromecast. It’s a bit bizarre to think I need my smart speaker to find things on my TV, but when you consider just how accurate its voice recognition is, it just may work.
For consumers, Google’s AI strategy seems a lot more compelling than Facebook’s, but no less scary. For Google’s AI to work well, it needs loads and loads of personal information. That personal information will improve functionality for Home and Photos, and it will be used commercially to create denser user profiles. Right now, Microsoft has a superior AI model for work and productivity. The company’s prime business model isn’t profiles or advertising; it’s productivity software and services. Microsoft doesn’t require personal information to make it tick. I’ll be watching to see if Google attempts to make inroads there at some point in the future.
Anil Sabharwal, Vice President of Photos, took a moment on the stage to talk about how far Google Photos had come since its launch two years ago—around 500 million monthly active users worldwide, and around 1.2 billion photos and videos uploaded each day. He then went on to announce three key new features for Google Photos. I think Google Photos is one of the most distinctive services Google has. It’s great.
First, he announced Suggested Sharing—a feature that, using AI, suggests who you might send your photos to, based on who is in the photos. When you share your photos of a specific event, your friends and family will get a suggestion to add theirs too. The second new feature, called Shared Libraries, serves a similar function—it allows users to automatically share their photos with the very special people in their lives (spouses, partners, best friends). If users don’t want to automatically share all of their photos with specific people, they can opt to share only the photos of specific people (their kids, for example), or from a specific date forward.
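Conceptually, Suggested Sharing reduces to: figure out who appears in an event’s photos, then suggest those people as recipients. Here’s a toy sketch of that logic; face_ids_in() and contact_for() are hypothetical placeholders standing in for Google’s non-public face clustering and contact matching:

```python
# Toy illustration of Suggested Sharing: propose recipients based on who
# appears repeatedly in an event's photos. face_ids_in() and contact_for()
# are hypothetical stand-ins, not Google APIs.
from collections import Counter

def suggest_recipients(event_photos, face_ids_in, contact_for, min_appearances=3):
    """Return contacts for people who show up in at least min_appearances photos."""
    appearances = Counter()
    for photo in event_photos:
        for face_id in set(face_ids_in(photo)):  # count each person once per photo
            appearances[face_id] += 1
    return [contact_for(face_id)
            for face_id, count in appearances.most_common()
            if count >= min_appearances]
```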
The third new feature is called Photo Books, an option that seeks to bring back the magic of the traditional photo album by allowing users to design and order high-quality Photo Books through Google Photos. Google Photos uses its search technology to help users sort through their photos—picking the “best” shots by removing duplicates and low-quality photos. This is nice and all, but Apple is already doing essentially the same thing—it’s not going to be a major differentiator.
In my opinion, the sharing and suggestion features are the most useful. Consumers are changing the way they take photos, and these new features take that into account. For instance, we now take 10, maybe 20 pictures of the same object or person. AI helps Google pare those down. We also want to share photos easily with friends and family without having to remember everyone who is in the photos—these features are going to make that a whole lot less of a headache. There may be some privacy implications—accidentally sharing the wrong photos with the wrong people, and whatnot—but overall I think these features are going to be very useful and beneficial. From a personal standpoint, I think it’s going to make life easier with my family, who have to be technological contortionists to share photos now. You should see us after a party: we’re a wreck sitting on the couch finding ways to send pictures back and forth. One person AirDrops, another uses iMessage, another texts, and I share through Google Photos. Hilarious!
Introducing the much-anticipated Android O
Dave Burke, VP of Engineering for Android, took the stage to fill us in on Android O. He led off the segment by celebrating the fact that Android had reached a significant milestone—2 billion monthly active devices running the OS. Google says Android O is all about bringing a more “fluid experience” to devices, while simultaneously improving vital areas such as security and battery life.
One of the new “fluid experience” features is picture-in-picture, which allows you to do two tasks simultaneously—for example, taking a video call while checking your calendar. I love this feature on Apple’s iOS. Another neat feature is the new Smart Text Selection, which uses machine learning to recognize certain text groups on screen (an address, for example) and make the selection without having to fiddle with the annoying text handles—good news for people with big fingers! Android O also boasts a quicker startup time than its predecessors and incorporates various optimizations to help batteries last longer. I really appreciated the addition of TensorFlow Lite and support for DSPs like the Hexagon in the Qualcomm Snapdragon 835. TensorFlow Lite is a smaller, more compact runtime for running programs that leverage machine learning on-device.
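TensorFlow Lite itself wasn’t released at I/O, so details were thin, but the pitch is a compact interpreter that runs a pre-converted model directly on the handset. As a rough sketch of what invoking such a runtime looks like (this mirrors the tf.lite.Interpreter API that eventually shipped; treat it as illustrative rather than what was demoed):

```python
# Illustrative on-device inference with a TensorFlow Lite-style interpreter.
# Mirrors the tf.lite.Interpreter API that eventually shipped.
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="model.tflite")  # pre-converted, compact model
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Feed an input matching the model's expected shape and dtype.
frame = np.zeros(input_details[0]["shape"], dtype=input_details[0]["dtype"])
interpreter.set_tensor(input_details[0]["index"], frame)
interpreter.invoke()
prediction = interpreter.get_tensor(output_details[0]["index"])
print(prediction.shape)
```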
One of the biggest questions I had coming into I/O was how Google would address the next billion smartphone users, many of whom are on flip phones today. On Day 1, we also saw a preview of what Google is calling Android Go, designed to improve the experience on entry-level devices with 1GB or less of memory. Google says Android Go incorporates features designed for a demographic of people who have limited data connectivity and speak multiple languages. Google is optimizing Android O to run on these entry-level devices and is designing new apps (such as YouTube Go and Chrome) to use less memory, storage, and data. This is an area I would expect Google to win in, as neither Apple nor Microsoft is stepping up to the plate, and Ubuntu just pulled the plug on its mobile OS.
Forging ahead with AR and VR
The last portion of Day 1’s keynote session focused on AR and VR, which was elaborated on further during Day 2’s AR/VR keynote—for that reason, I’ll roll all AR/VR-related news into one section here. Last year, Google announced their Daydream View VR headset. It’s been relatively slow to take off, even with quite a number of Daydream-ready phones either already on the market or forthcoming (including the Samsung Galaxy S8). The big announcement from Day 1 in this area is that Google is launching new Daydream standalone headsets, taking all the benefits of smartphone VR and simplifying them by sticking them into a standalone headset—no phone or PC required. These new headsets will feature the newly announced WorldSense for positional tracking. Google says they worked with Qualcomm to come up with a reference design for these standalone headsets that partners can use as a blueprint—headsets are currently on the way from both HTC VIVE and Lenovo. This is another win for Qualcomm, continuing their dominance of mobile AR and VR. AR and VR analyst Anshel Sag wrote about this in detail here.
With Day 2 came several other VR announcements. Daydream’s Director of Product Management, Mike Jazayeri, announced the upcoming release of Daydream 2.0, which Google is calling Euphrates. One of the key aspects of Euphrates is that users will now be able to capture their VR experiences and share them with others, casting directly to a TV. It was also announced that users will soon be able to get together with their friends in VR and watch YouTube videos together—cat videos are about to become a lot more engaging.
Another key technology Google is making use of is Tango, which enables devices to track motion as well as understand distances and positions within the physical world. WorldSense, which I mentioned earlier, is one of the ways Google is using Tango for VR purposes. For AR purposes, Google is utilizing Tango to power its new Visual Positioning Service (VPS)—a tool that helps a device understand its location indoors, not unlike GPS. Early use cases include finding specific items within stores and museums. Google anticipates this technology could also be useful for helping the visually impaired navigate. Tango can also be used for smartphone AR applications—for example, creating more interactive classrooms or helping you visualize a piece of furniture in your home before you purchase it. This technology is still in its early stages, but I think there’s a lot of really interesting potential here. I was struck by the awesomeness of mapping the inside of buildings the way we do with Google Maps. I was also struck by how Google could monetize this data. With VPS, Google can more accurately link a search with consideration and purchase. Advertisers will love this.
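Google didn’t say how VPS works under the hood, but the standard recipe for visual positioning is well known: match the feature descriptors the camera currently sees against a pre-surveyed map of the building, then estimate position from where the matched landmarks are. A heavily simplified sketch, with the map format and matching threshold as assumptions (a real system would solve for full 6DoF pose rather than averaging landmark positions):

```python
# Heavily simplified visual-positioning sketch: match observed feature
# descriptors to a pre-surveyed indoor map and estimate position from the
# matched landmarks. Map format and threshold are illustrative assumptions.
import numpy as np

def locate(observed_desc: np.ndarray, map_desc: np.ndarray,
           map_positions: np.ndarray, max_dist: float = 0.7):
    """observed_desc: (n, d); map_desc: (m, d); map_positions: (m, 3)."""
    matched = []
    for desc in observed_desc:
        dists = np.linalg.norm(map_desc - desc, axis=1)  # distance to every landmark
        best = int(np.argmin(dists))
        if dists[best] < max_dist:          # keep only confident matches
            matched.append(map_positions[best])
    if not matched:
        return None                         # too few recognized landmarks
    return np.mean(matched, axis=0)         # crude position estimate
```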
None of what I saw at I/O on VR fundamentally changes the game. These are incremental improvements on the way to a hugely important future down the road.
I was impressed by some of the machine-learning capabilities I saw, especially in Google Photos, while feeling a little lukewarm on others. While many of the new features in Google Assistant, Photos, and Home add value, they also require the sharing of a lot of personal voice, photo, video, and location information. That’s a trade-off that I think we as a civilization are becoming more and more willing to make, but it certainly raises some privacy and security concerns. We have to always remember that Google’s primary business is advertising, and all roads lead to that. As long as consumers are aware of the trade of privacy for free services, I’m OK with it.
On Day 2, I was impressed by some of the AR/VR announcements and demonstrations I saw. Even though none of them are earth-shattering, I think Google is well-positioned to remain a visionary and leader in the space, and I appreciate that they are moving the puck up the ice, particularly with 6DoF tracking. Another interesting and informative Google I/O for the books.