Google’s Machine Learning Backbone

Last year at Google I/O, the term AI was thrown around frequently. This year, Google used AI less during their opening keynote and the term machine learning much more. It was a subtle but important shift, one that speaks to how Google is orienting themselves around their mission to “organize the world’s information”.

Google calls their smart agent the Google Assistant but, judging from a number of their demos, this is simply front-end branding for a much smarter, more transparent assistant that will manifest itself in many ways throughout Google’s services. One of the main apps they updated with new features was Google Photos, and it is a great example of the smart assistant showing up in many ways while blending into the background and still adding value. The assistant recognizes people in your photos and suggests those people as ones you can share your photos with. Google Photos can create mini movies or photo albums, which you can have printed, and it intelligently selects the best photos from a group to insert into a photo book. All of this is powered by Google’s many years of work in machine learning, and the fruit of that labor is starting to show up in somewhat invisible yet useful ways.

Gmail is another place they quickly demonstrated this smart agent. In Gmail, users have the option to use Smart Reply: contextual suggestions of phrases or sentences to use when replying to an email. The machine learning model understands the context of the email’s text and can offer common responses, or ones learned from your own conversation style. As creepy as this sounds (and, in some ways, is), it is also very useful for the end user. While I do believe a great many consumers care about privacy, the historical evidence suggests convenience trumps privacy in many situations like this one.
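To make the idea of contextual reply suggestions concrete, here is a toy sketch. Google’s actual Smart Reply is built on neural sequence models, not keyword matching; everything below (the trigger phrases, the canned replies, the `suggest_replies` function) is invented for illustration only.

```python
from collections import Counter
import math

# Hypothetical canned reply sets keyed by example trigger phrases.
# Purely illustrative -- real Smart Reply learns responses from data
# rather than matching against a hand-written table like this.
CANNED = {
    "are you free to meet tomorrow": ["Yes, that works.", "Sorry, I can't make it.", "What time?"],
    "thanks for sending the report": ["You're welcome!", "Glad it helped.", "Anytime."],
    "can you review this document": ["Sure, I'll take a look.", "Will do.", "Can it wait until Friday?"],
}

def _vec(text):
    """Bag-of-words vector for a piece of text."""
    return Counter(text.lower().split())

def _cosine(a, b):
    """Cosine similarity between two word-count vectors."""
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def suggest_replies(email_text, k=3):
    """Return up to k reply suggestions from the closest trigger phrase."""
    best = max(CANNED, key=lambda t: _cosine(_vec(email_text), _vec(t)))
    return CANNED[best][:k]

print(suggest_replies("Hi, are you free to meet tomorrow afternoon?"))
```

Even this crude version shows why the feature feels personal: the suggestions are keyed to the content of the incoming message, and a learned system can additionally rank replies by how you tend to phrase things.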

What was most apparent to me during the day one keynote of Google I/O was how much machine learning/AI has become the backbone of virtually every service and app Google offers. It runs deep in Search, the Google Assistant app, YouTube, Google Photos, Gmail, Calendar, Contacts, Allo, Maps, and more, and it is becoming a staple of the ways Google is trying to add more value for users of their services. Again, things that save us time, are convenient, and truly useful can, in many cases, trump privacy.

Google Services Everywhere
As Carolina articulated well yesterday, the platform or consumer engagement battle has, in many cases, moved away from the core operating system to the software and services layers. This is why Google will continue to be as aggressive as they can about putting their services everywhere. Bringing the Google Assistant to iOS is evidence of this strategy, and it is one similar to Microsoft’s: battle for consumer engagement and, in some cases, try to steal Apple customers in the core areas where Apple wants to compete as well. Google Assistant and Google Photos are the two areas that come to mind where Apple cannot afford to lose iOS customers to Google but where Google wants to win customers from Apple.

This strategy has deeper implications in developing or emerging markets, where Android dominates the mobile landscape. Places like India, Indonesia, Southeast Asia, and many other regions are largely Android strongholds. The better these AI/smart assistant services become on Android, and the more deeply they are ingrained into the core OS, the more likely it is those customers will stick with Google services and Android, making it harder for Apple to acquire new iPhone customers. If my thesis is correct that things like Siri, or the full smart assistant experience from Apple, become a stickier glue, then the same is true for Google with Android customers.

This is likely why Google has re-launched their low-end, emerging-market Android play with Android Go, which has learned from the failings of Android One and will attempt to unify the low end of the Android device market.

The Cloud TPU
Lastly, Google is tying this machine learning backbone to continued advancements in specialty server hardware and custom chips whose sole purpose is to accelerate training networks on data and running inference (querying) against them to turn data into actionable intelligence.

This new Cloud TPU board furthers their work on the original Tensor Processing Unit, which was a custom-designed ASIC (not an FPGA, as many originally thought). That chip was designed only for inference, whereas this new board includes custom chips that also accelerate training networks on data. From the sound of it, I’d guess this board does include an FPGA, whether off the shelf or a custom design, but Google hasn’t confirmed this observation.
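The training/inference split the board addresses can be shown with a toy model: training repeatedly updates weights against data, which is the compute-heavy part the first-generation chip did not accelerate, while inference is a single cheap forward pass. This sketch is generic gradient descent in plain Python and has nothing to do with TPU internals; it only illustrates the distinction.

```python
# Toy illustration of training vs. inference. Training loops over the
# data many times adjusting weights; inference is one forward pass.

def forward(w, b, x):
    """Inference: a single pass through the (linear) model."""
    return w * x + b

def train(data, epochs=200, lr=0.05):
    """Training: many gradient-descent passes over the data."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in data:
            err = forward(w, b, x) - y   # prediction error
            w -= lr * err * x            # gradient step on weight
            b -= lr * err                # gradient step on bias
    return w, b

# Learn y = 2x + 1 from a few samples, then query the trained model.
samples = [(0.0, 1.0), (1.0, 3.0), (2.0, 5.0), (3.0, 7.0)]
w, b = train(samples)
print(round(forward(w, b, 10.0)))  # prints 21
```

The asymmetry is visible even here: `train` runs the model hundreds of times with extra arithmetic per step, while `forward` is one multiply-add. That is why a chip built only for inference, like the first TPU, leaves the hardest workload on other hardware.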

Google is, no doubt, doubling down on machine learning and all the backend network, software, training, inference, and anything else they need to be the leader in machine learning from beginning to end. All the while, their success depends on pulling their users, and getting new ones, deeper into their ecosystem so they can make sense of not just the world’s data but each individual’s as well.

Published by

Ben Bajarin

Ben Bajarin is a Principal Analyst and the head of primary research at Creative Strategies, Inc., an industry analysis, market intelligence, and research firm located in Silicon Valley. His primary focus is consumer technology and market trend research, and he is responsible for studying over 30 countries.
