Now that we’re several years into the AI revolution, people are starting to expect that the applications they use will become more intelligent. After all, that was the high-level promise of artificial intelligence: smarter, more contextually aware applications that could handle tasks automatically, or at least make them less tedious.
The problem is, that hasn’t really proven to be the case. Sure, we’ve seen a few reasonably intelligent features added to certain applications. However, you’ve often had to go out of your way to find them, and interacting with them has rarely been intuitive.
Thankfully, we’re finally starting to see the kind of easy-to-use intelligence that many expected when AI-enhanced applications were first introduced. Some of the latest additions to Google’s G Suite productivity applications, for example, bring tangible enhancements to the day-to-day tasks we all perform.
A new beta version of Google Docs now includes the Smart Compose feature (first introduced in Gmail last year), which makes automatic suggestions as you write. For longer-form documents created in Docs, Google’s AI-powered features can suggest entire sentences, not just individual words or phrases, which is likely to speed up the writing process.
In addition, Docs has neural network-powered technology that makes better grammar and spelling suggestions within your documents. A small but very useful example is its ability to recognize words or acronyms that may be unique to an industry or even a company (such as an internal project code name) and automatically add them to the dictionary. Once that’s done, the feature can recognize and correct mistakes made in those new words.
For Google Calendar, the company is enabling the use of Google Assistant and voice commands to manage your calendar, including creating meetings, updating the time and/or location, and more, all with spoken commands. It’s the kind of personal assistant technology that many people expected from the first generation of intelligent assistants, but didn’t get.
Similarly, the integration of Google Assistant into G Suite can now enable people to send quick email messages or dial into conference calls completely hands-free, thanks to voice commands and dictation. While these aren’t dramatic new features, they are the kind of simple yet practical things that AI-based intelligence is bringing to applications overall, and they’re indicative of what the technology can realistically do.
Finally, Google is integrating voice-based control of meeting hardware in conjunction with an Asus-built Hangouts Meet hardware device. Designed to integrate with a monitor and conference room cameras, the microphone- and speaker-equipped box can respond to requests to start and end meetings, make phone calls, and more. In addition, Google added voice support for accessibility features to the device, such as the ability to turn on spoken feedback for visually impaired users.
What’s interesting about many of these new G Suite additions is that they leverage technological capabilities that Google first created in more standalone forms and is now incorporating into broader applications. Google Assistant capabilities, for example, are certainly interesting on their own and from a search-focused perspective, but they’re equally, if differently, valuable as a true personal assistant feature for calendaring.
In fact, it seems Google is starting to take advantage of a variety of core advances it has developed, particularly in AI, analytics, and managing vast amounts of data, across many of its larger platforms, from G Suite to Google Cloud Platform (GCP) and beyond. Of course, this isn’t terribly surprising, but it’s certainly interesting to observe, and it highlights Google’s potential to disrupt the markets in which it remains a smaller player.