Google I/O 2017. Boring?
Gone are the days of free smartphones, Chromebooks, and skydivers unveiling Google Glass. Google I/O 2017 opened with a rather lackluster keynote delivered by its CEO Sundar Pichai.
Compared to Satya Nadella’s impassioned keynote at Microsoft Build a week earlier, Pichai’s remarks, along with many product showcases throughout the event, were met with general disappointment by the tech press. They were quick to call Google “boring,” even going as far as to say that it’s “bad news for innovation.”
But if you look closer, Google made some remarkable progress in repositioning itself as the company of the future: one focused on AI beyond mobile. Revisiting Pichai’s opening speech:
“But computing is evolving again. We spoke last year about this important shift in computing, from a mobile-first, to an AI-first approach. Mobile made us re-imagine every product we were working on. We had to take into account that the user interaction model had fundamentally changed, with multitouch, location, identity, payments, and so on. Similarly, in an AI-first world, we are rethinking all our products and applying machine learning and AI to solve user problems, and we are doing this across every one of our products.”
Google is leveraging AI to remove points of friction in the world of computing. Reframing all the announcements made at the event makes it clear what Google intends to do.
Yes, these new products may force some changes in how users interact with the technology. But by virtue of being the best in the machine learning and artificial intelligence space, Google is positioning itself to be the epicenter of information: all of it.
Let’s take a deeper dive.
Google I/O 2017: Major Announcements
- Active monthly Android users surpass 2 billion.
- Google Assistant is now available on iOS.
- Android Go aims to connect the “next billion” users online.
Just as Google became synonymous with online search, Google is poised to be the default service provider on mobile. The ubiquity of Android allows Google to ship its products as the defaults installed on more than 2 billion phones.
By turning Google Assistant into an app, Google can now also reach iOS customers and compete with Apple’s default services by simply being better (much as Google Maps outclasses Apple Maps). Finally, with Android Go, a lightweight version of Android built for entry-level devices in countries with developing web infrastructure, Google is looking to capture even more of the market.
With a massive user-base in place, next comes the power of artificial intelligence. Here are some other major announcements from Google:
- Google Assistant SDK can be embedded in any device.
- Google Photos now provides Shared Libraries and printed photo books, as well as AI-powered photo search and editing capabilities.
- Google Lens merges its knowledge graph with computer vision and moves into the augmented reality space.
- Standalone Daydream VR headsets are underway, along with Daydream support for the Galaxy S8.
The Google Assistant SDK can now challenge the dominance of Alexa in the IoT space. At I/O, Google demonstrated how Google Home can now be used as a phone and a Bluetooth speaker, and can also deliver visual responses via a Chromecast-connected TV.
All of these point to a mission to connect all devices to its knowledge graph. It doesn’t matter how you interact with Google’s devices: voice, mobile, TV, or text.
Enhanced capabilities from Google Photos, such as suggested sharing, shared libraries, and photo books, help Google compile more photos, which in turn will lead to even better computer vision capabilities by serving as training data. Not only is Google creating a social network via shared photos, but it’s pushing to connect knowledge and vision. As stated by Pichai:
“As you can see, we are beginning to understand images and videos. All of Google was built because we started understanding text and web pages, so the fact that computers can understand images and videos has profound implications for our core mission.”
Essentially, Google is now collecting information from the physical world, not only through text and web searches, but through Google Lens and Google Photos. This connection has big implications in VR and AR, and Google promptly followed with more developments regarding its Daydream headsets.
Finally, with its new TPU chips optimized to train machine learning algorithms, Google is making the cloud smarter. By using neural networks to design and train other neural networks, it’s looking to automate, and thereby accelerate, the training process itself.
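The idea of “neural networks training neural networks” can be illustrated with a toy sketch. Google’s actual approach (later branded AutoML) uses a controller network trained with reinforcement learning to propose child-network architectures; the sketch below substitutes a simple random search over a hypothetical configuration space, with a hand-made scoring function standing in for training and validating each candidate model. All names and values here are illustrative assumptions, not Google’s API.

```python
import random

# Hypothetical search space over child-network configurations.
SEARCH_SPACE = {
    "layers": [2, 4, 8],
    "units": [32, 64, 128],
    "learning_rate": [1e-2, 1e-3, 1e-4],
}

def evaluate(config):
    """Stand-in for training a child network and returning its
    validation accuracy. This toy score simply rewards mid-sized
    models; a real system would train and measure each candidate."""
    penalty = abs(config["layers"] - 4) * 0.05
    penalty += abs(config["units"] - 64) / 640
    penalty += 0.1 if config["learning_rate"] == 1e-2 else 0.0
    return 1.0 - penalty

def search(trials=20, seed=0):
    """Controller loop: propose candidates, keep the best one.
    Random sampling stands in for a learned controller policy."""
    rng = random.Random(seed)
    best_config, best_score = None, float("-inf")
    for _ in range(trials):
        config = {k: rng.choice(v) for k, v in SEARCH_SPACE.items()}
        score = evaluate(config)
        if score > best_score:
            best_config, best_score = config, score
    return best_config, best_score

if __name__ == "__main__":
    config, score = search()
    print(config, round(score, 3))
```

The point of the sketch is the loop structure: an outer model proposes configurations, an inner evaluation scores them, and the feedback drives the next proposals. Swapping random sampling for a learned controller, and the toy score for real training runs on TPUs, gives the shape of the automated approach Google described.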
On the whole, Google is filling in more and more gaps in its knowledge graph.
Earlier this year, Facebook’s AI director, Yann LeCun, boldly claimed that in the near future, machines will be able to learn common sense just by observing the world. In other words, machines will be able to incorporate context, just as humans do, to understand and process information more intelligently.
Ben Thompson said it best in his article “Boring Google”:
“Make no mistake, none of these opportunities are directly analogous to Google search, particularly the openness of their respective markets or the path to monetization… All three apps, though, are leaning into Google’s strengths.”
- Google Assistant is focused on being available everywhere.
- Google Photos is winning by being better through superior data and machine learning.
- Google Lens is expanding Google’s utility into the physical world.
Google I/O may have been boring. But make no mistake: the tech giant is steadily transitioning from Google.com to Google.ai. The shift from mobile-first to AI-first is happening, and happening fast.