The Future of UI: Contextual Intelligence

June 24, 2014
Reading Time: 4 minutes

Despite all the tremendous developments in the world of mobile devices, one aspect has been essentially stagnant for quite some time: a user interface based on grids of application icons. Since the 2007 introduction of the iPhone, that visual representation, and variations on it, has been at the heart of virtually all the mobile devices and mobile operating systems we’ve used. Current versions of iOS, Android, Windows, Windows Phone, Chrome OS and even Firefox OS all basically subscribe to the app grid format and structure. It’s reached the point where mobile devices seem to be defined and, as I’ll argue shortly, confined by it.

To put it bluntly, it’s time for the icon grid to go.

Now, to be fair, the visual metaphor of the icon grid works on many levels. It’s relatively simple to understand and it served the very useful purpose of driving the creation of an enormous variety of applications: icons to fill the grid. In fact, I’d argue apps have become such a core part of our experience with mobile devices in no small part because of the central role they play in the icon grid representation delivered by modern mobile operating systems. App icons aren’t just the central part of the visual organization of the UI; they are the essential element of the OS and drive the experience of how the device is intended/expected to be used. Given the UI, what else would you do but find and launch apps?

In a world where there are over a million choices of apps/icons to fill many of the grids, however, the metaphor seems woefully inadequate. At a basic level, sorting through even tens of applications can be challenging, let alone hundreds or more. Even more importantly, we’re seeing an increasing emphasis on services that are only modestly tied to applications. While I’m not quite calling for the death of mobile apps, I do believe we are seeing a de-emphasis on them and a move towards services as people look for new means of interacting with their devices.

Through these more service-oriented apps, people are starting to see their devices acting a bit more intelligently. Instead of forcing the device user to initiate all the activities—typically by launching an app—these more service-driven applications start to perform activities on behalf of the user. Apps such as Assistant from Speaktoit, for example, show where these developments are headed.

The problem is, the icon grid metaphor doesn’t really work for these types of services/apps and provides little opportunity for the device to be “intelligent”. Instead, it basically forces you to think about and engage in one specific activity at a time. Moving forward, however, I believe users will increasingly expect/demand this type of intelligence, and that’s the primary reason it’s time for a completely different perspective on UI.

Interestingly, and perhaps controversially, I would argue Microsoft’s recent efforts with Windows Phone 8.1 are starting to move in this new direction. The UI is still primarily icon grid-based, but there are elements of it, including Live Tiles and Siri competitor Cortana’s more proactive assistance to the device user, that start to suggest the future I’m describing.

But there’s still a long way to go. Even something as simple as adjusting which applications or services appear on a device’s home screen at a given time is only now arriving: a division of the new, hardware-less Nokia has introduced it in the form of a smart launcher called Z Launcher (initially available only for Android). It’s a good idea, but there’s so much more that could be done by leveraging information the smartphone already has: location (based on GPS or even WiFi); speed of movement (in a car or plane, for example) inferred from the accelerometer and other common sensors; etc.
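
To make the idea concrete, here is a minimal sketch, in Kotlin, of how a launcher might rank home screen apps against the device’s current context. Everything in it (the context fields, the app tags, the scoring weights) is a hypothetical illustration of the technique, not Z Launcher’s actual approach:

```kotlin
// Hypothetical context signals a launcher could derive from common sensors.
enum class Place { HOME, WORK, IN_TRANSIT, UNKNOWN }
enum class Motion { STILL, WALKING, DRIVING, UNKNOWN }

data class DeviceContext(val place: Place, val motion: Motion, val hourOfDay: Int)
data class App(val name: String, val tags: Set<String>)

// Score each app against the current context; higher scores surface first.
// The weights are arbitrary stand-ins for whatever a real launcher would learn.
fun score(app: App, ctx: DeviceContext): Int {
    var s = 0
    if (ctx.motion == Motion.DRIVING && "navigation" in app.tags) s += 3
    if (ctx.place == Place.WORK && "productivity" in app.tags) s += 2
    if (ctx.hourOfDay in 7..9 && "news" in app.tags) s += 1
    return s
}

// The home screen becomes a function of context rather than a fixed grid.
fun homeScreen(apps: List<App>, ctx: DeviceContext): List<App> =
    apps.sortedByDescending { score(it, ctx) }

fun main() {
    val apps = listOf(
        App("Maps", setOf("navigation")),
        App("Mail", setOf("productivity")),
        App("Reader", setOf("news"))
    )
    val ctx = DeviceContext(Place.IN_TRANSIT, Motion.DRIVING, hourOfDay = 8)
    println(homeScreen(apps, ctx).map { it.name })  // [Maps, Reader, Mail]
}
```

The specific weights don’t matter; the point is that the grid stops being static and starts responding to where the user is and what they’re doing.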

More intelligent use of all this data could enable an entirely new type of UI, as well as a set of smarter services/experiences that initiate more activities on behalf of the device user. In addition to sensor data, simply logging the activities a user regularly engages in, analyzing that history (let’s call it “small data analytics”), and applying those simple lessons to changes in the UI could also be part of a UI overhaul.
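
As a toy illustration of that “small data analytics” idea, the sketch below (again in Kotlin, with a log format and suggestion rule I’ve assumed purely for clarity) records which app the user launches at which hour, then suggests the historically most frequent one for the current hour:

```kotlin
import kotlin.math.abs

// One logged launch: the hour it happened and the app involved.
data class Observation(val hourOfDay: Int, val appName: String)

class LaunchLog {
    private val log = mutableListOf<Observation>()

    fun record(hourOfDay: Int, appName: String) {
        log += Observation(hourOfDay, appName)
    }

    // Suggest the app launched most often within an hour of the current time
    // (midnight wraparound ignored for brevity).
    fun suggest(hourOfDay: Int): String? =
        log.filter { abs(it.hourOfDay - hourOfDay) <= 1 }
            .groupingBy { it.appName }
            .eachCount()
            .maxByOrNull { it.value }
            ?.key
}

fun main() {
    val log = LaunchLog()
    log.record(8, "News"); log.record(8, "News"); log.record(20, "Video")
    println(log.suggest(8))   // News
    println(log.suggest(21))  // Video
}
```

A real system would fold in location, motion and far more history, but even a simple frequency count like this captures the flavor of a UI that adapts to observed behavior.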

All of these things are part of understanding where, when and how the user is engaging with the device—its context, so to speak—and spending time developing more “contextual intelligence” is key to making devices that are already an important part of people’s lives even more essential.

Most of these new intelligent, service-like capabilities can/will leverage sensor data in the device. This is one of the reasons I expect new sensors, from altimeters and barometers to pulse oximeters and more, to be among the key hardware capabilities built into next-generation phones. It’s also the one opportunity that gives sensor-laden wearable devices a chance to survive as intelligent smartphone peripherals. Future OSs should be able to use a phone’s built-in sensors as well as any others made available by a nearby connected device.
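
One way to picture that last point: the OS could expose built-in and nearby wearable sensors through a single abstraction, so services never care where a reading physically comes from. The interface and stub readings below are speculative, not any real platform’s API:

```kotlin
// A uniform view of any sensor, whether on the phone or on a wearable.
interface SensorSource {
    val kind: String        // e.g., "barometer", "pulse-oximeter"
    fun read(): Double?     // null if the sensor is currently unavailable
}

class BuiltInBarometer : SensorSource {
    override val kind = "barometer"
    override fun read(): Double? = 1013.25  // stub reading, in hPa
}

class WearablePulseOximeter : SensorSource {
    override val kind = "pulse-oximeter"
    override fun read(): Double? = 0.98     // stub SpO2 fraction
}

// The OS-level registry: local and remote sensors look identical to services.
class SensorRegistry(private val sources: List<SensorSource>) {
    fun read(kind: String): Double? =
        sources.firstOrNull { it.kind == kind }?.read()
}

fun main() {
    val registry = SensorRegistry(listOf(BuiltInBarometer(), WearablePulseOximeter()))
    println(registry.read("pulse-oximeter"))  // 0.98, regardless of where it lives
}
```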

We already have specific applications that can leverage some of this sensor-based data, but to enable the leap forward I believe is necessary to improve interaction with a device, these kinds of services need to be embedded throughout the operating system. In addition, OS developers need to open the corresponding service APIs to others so they can further enhance the user experience with their own variations, extensions, etc. That’s the future for today’s app developers.
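
What might such an opened-up API look like? The sketch below imagines an OS-level context service that third-party code subscribes to, so apps react to context changes rather than waiting to be launched. It is entirely hypothetical, intended only to show the shape of the idea:

```kotlin
// A context change the OS detects via sensors or logged behavior.
data class ContextEvent(val kind: String, val detail: String)

// What a third-party extension implements to be notified.
fun interface ContextListener {
    fun onContextChanged(event: ContextEvent)
}

class ContextService {
    private val listeners = mutableMapOf<String, MutableList<ContextListener>>()

    // Third parties subscribe to the context kinds they can enhance.
    fun subscribe(kind: String, listener: ContextListener) {
        listeners.getOrPut(kind) { mutableListOf() } += listener
    }

    // The OS publishes events as its sensors and logs detect changes.
    fun publish(event: ContextEvent) {
        listeners[event.kind]?.forEach { it.onContextChanged(event) }
    }
}

fun main() {
    val service = ContextService()
    service.subscribe("motion") { e -> println("Travel app reacts to: ${e.detail}") }
    service.publish(ContextEvent("motion", "driving"))  // Travel app reacts to: driving
}
```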

Location-based services and other types of simple “contextual intelligence” have been talked about and even demonstrated for a while, but now’s the time to take things to the next level and really move our mobile devices into a more proactive, more intelligent future. Can’t wait to see where we end up.