The Future of UI: Contextual Intelligence

Despite all the tremendous developments in the world of mobile devices, there’s one aspect that’s been essentially stagnant for quite some time: a user interface based on grids of application icons. Since the 2007 introduction of the iPhone, that visual representation, and variations on it, have been at the heart of virtually all the mobile devices and mobile operating systems we’ve used. Current versions of iOS, Android, Windows, Windows Phone, Chrome OS and even Firefox OS all basically subscribe to the app grid format and structure. It’s reached the point where mobile devices seem to be defined and, as I’ll argue shortly, confined by it.

To put it bluntly, it’s time for the icon grid to go.

Now, to be fair, the visual metaphor of the icon grid works on many levels. It’s relatively simple to understand and it served the very useful purpose of driving the creation of an enormous variety of different applications—icons to fill the grid. I’d argue, in fact, that part of the reason “apps” have become such a core part of our experience with mobile devices is the central role they play in the icon grid representation delivered by modern mobile operating systems. The app icons aren’t just the central part of the visual organization of the UI; they are the essential element of the OS and drive the experience of how the device is intended/expected to be used. Given the UI, what else would you do but find and launch apps?

In a world where there are over a million apps/icons available to fill those grids, however, the metaphor seems woefully inadequate. At a basic level, sorting through even tens of applications can be challenging, let alone hundreds or more. Even more importantly, we’re seeing an increasing emphasis on services that are only loosely tied to individual applications. While I’m not quite calling for the death of mobile apps, I do believe we are seeing a de-emphasis of them and a move towards services as people look for new means of interacting with their devices.

Through these more service-oriented apps, people are starting to see their devices acting a bit more intelligently. Instead of forcing the device user to initiate all the activities—typically by launching an app—these more service-driven applications start to perform activities on behalf of the user. Apps such as Assistant from Speaktoit, for example, show where these developments are headed.

The problem is, the icon grid metaphor doesn’t really work for these types of services/apps and provides little opportunity for the device to be “intelligent”. Instead, it basically forces you to think about and engage in one specific activity at a time. Moving forward, however, I believe users are going to increasingly expect/demand this type of intelligence and that’s the primary reason why it’s time for a completely different perspective on UI.

Interestingly, and perhaps controversially, I would argue Microsoft’s recent efforts with Windows Phone 8.1 are starting to move in this new direction. The UI is still primarily icon grid-based, but there are elements of it, including Live Tiles and Siri competitor Cortana’s more proactive assistance to the device user, that start to suggest the future I’m describing.

But there’s still a long way to go. Even something as simple as adjusting which applications or services are available on a device’s home screen at a given time is a capability that a division of the new, hardware-less Nokia has only just introduced, in the form of a smart launcher called Z Launcher (initially available only for Android). It’s a good idea, but there’s so much more that could be done leveraging information the smartphone already has: location (based on GPS or even WiFi); speed of movement (in a car or plane, for example) based on the accelerometer, gyroscope and other common sensors; etc.
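
To make that concrete, here is a minimal sketch of how a launcher might re-rank home-screen apps from signals like these. Everything in it (the Motion states, the DeviceContext fields, the scoring rules) is a hypothetical illustration for this article, not any real launcher or platform API:

```kotlin
// Hypothetical context-aware launcher ranking: not a real platform API.
enum class Motion { STILL, WALKING, DRIVING }

data class DeviceContext(val atHome: Boolean, val atWork: Boolean, val motion: Motion)

data class AppEntry(val name: String, val baseScore: Double)

// Higher-scoring apps would surface first on the home screen for the current context.
fun rankApps(apps: List<AppEntry>, ctx: DeviceContext): List<AppEntry> =
    apps.sortedByDescending { app ->
        var score = app.baseScore
        if (ctx.motion == Motion.DRIVING && app.name == "Maps") score += 2.0
        if (ctx.atWork && app.name == "Calendar") score += 1.5
        if (ctx.atHome && app.name == "Remote") score += 1.0
        score
    }
```

In practice the rules would be learned rather than hard-coded, but even a handful of hand-written heuristics like these would make the home screen feel noticeably more aware of its surroundings.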

More intelligent use of all this data could enable an entirely new type of UI as well as a set of smarter services/experiences that initiate more activities on behalf of the device user. In addition to sensor data, simply logging the activities a user regularly engages in, analyzing that log (let’s call it “small data analytics”), and applying those simple learnings to changes in the UI could also be part of a UI overhaul.
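
A rough sketch of what that on-device logging could look like follows. The names (UsageLog, context “buckets” like "weekday-morning-home") are purely illustrative assumptions, and nothing here hooks into a real OS; it simply counts launches per bucket and surfaces the most frequent apps for the bucket the user is in right now:

```kotlin
// Sketch of on-device "small data analytics" (illustrative names, no real OS hooks).
class UsageLog {
    // (bucket, app) -> number of launches observed
    private val counts = mutableMapOf<Pair<String, String>, Int>()

    fun record(bucket: String, app: String) {
        val key = bucket to app
        counts[key] = (counts[key] ?: 0) + 1
    }

    // The n most frequently launched apps for the given context bucket.
    fun topAppsFor(bucket: String, n: Int = 4): List<String> =
        counts.entries
            .filter { it.key.first == bucket }
            .sortedByDescending { it.value }
            .take(n)
            .map { it.key.second }
}
```

Because all of this is just counting and sorting, it can comfortably run on the device itself, which matters for the trust questions raised in the comments below.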

All of these things are part of understanding where, when and how the user is engaging with the device—its context so to speak—and spending time developing more “contextual intelligence” is key to making devices that are already an important part of people’s lives even more essential.

Most of these new intelligent service-like capabilities can/will leverage sensor data in the device. This is one of the reasons why I expect we’ll see new sensors, from altimeters and barometers to pulse oximeters and more, as the key new hardware capabilities built into next generation phones. It’s also the one opportunity that gives sensor-laden wearable devices a chance to survive as intelligent smartphone peripherals. Future OSs should be able to use a phone’s built-in sensors as well as any others made available from a nearby connected device.
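
As a hedged illustration of that last point, the sketch below treats a phone’s own sensor and a wearable’s sensor as interchangeable sources behind one interface, so a context service can read whichever happens to be available. The interfaces are invented for this example and are not modeled on any shipping sensor framework:

```kotlin
// Hypothetical abstraction over phone-local and wearable-provided sensors.
interface SensorSource {
    val kind: String           // e.g. "barometer", "pulse-oximeter"
    fun read(): Double?        // null if the sensor is unavailable right now
}

class PhoneSensor(override val kind: String, private val sample: () -> Double?) : SensorSource {
    override fun read(): Double? = sample()
}

class WearableSensor(override val kind: String, private val sample: () -> Double?) : SensorSource {
    override fun read(): Double? = sample()
}

// A context service can ask for a reading without caring which device it lives on.
fun latestReading(sources: List<SensorSource>, kind: String): Double? =
    sources.firstOrNull { it.kind == kind }?.read()
```

A pulse oximeter on a watch and a barometer in the phone would then feed the same contextual engine, without that engine caring where the reading physically came from.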

We already have specific applications that can leverage some of this sensor-based data, but in order to enable the leap forward I believe is necessary to improve the interaction with a device, these kinds of services need to be embedded throughout the operating system. In addition, the OS developers need to open these kinds of service APIs to others so they can further enhance the user experience with their own variations, extensions, etc. That’s the future for today’s app developers.
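
One possible shape for that kind of openness, again purely hypothetical, is a context service that third-party apps register listeners with, so the OS can notify them whenever its model of the user’s situation changes:

```kotlin
// Hypothetical OS-level context API; no shipping OS exposes exactly this interface.
fun interface ContextListener {
    fun onContextChanged(bucket: String)
}

object ContextService {
    private val listeners = mutableListOf<ContextListener>()

    fun register(listener: ContextListener) {
        listeners += listener
    }

    // Invoked by the OS when its context model decides the user's situation has changed.
    fun publish(bucket: String) {
        listeners.forEach { it.onContextChanged(bucket) }
    }
}
```

App developers would then build their variations and extensions on top of the OS’s context model rather than each reconstructing it from raw sensor data.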

Location-based services and other types of simple “contextual intelligence” have been talked about and even demonstrated for a while, but now’s the time to take things to the next level and really move our mobile devices into a more proactive, more intelligent future. Can’t wait to see where we end up.

Published by

Bob O'Donnell

Bob O’Donnell is the president and chief analyst of TECHnalysis Research, LLC, a technology consulting and market research firm that provides strategic consulting and market research services to the technology industry and professional financial community. You can follow him on Twitter @bobodtech.

15 thoughts on “The Future of UI: Contextual Intelligence”

  1. What I think makes the icon grid so understandable is the same thing that made tables in HTML and spreadsheets so understandable. People just understand grids. It is how (planned) cities are laid out, it is how we (used to) segment maps. It is very symmetrical and mathematical. Obviously, the problem isn’t coming up with a new UI, it is coming up with one that people will understand without having to think about it. That may require new generations of devices. Once we get past the signifier/connotations of “smartphone” and have new signifieds that require new labeling, the slate will be clear for new thinking. I would think this would be easiest with more specific purpose devices than general purpose devices like most mobile devices.

    But the other problem, I think, will be one of trust, and this has been covered on TP previously. To get that kind of intelligence from a device requires a great deal of intimate data. How much of that data we are willing to surrender will depend on the motivations of the provider. I am more cool with Apple collecting that data because I am convinced that they are philosophically driven to collect that data principally for the user’s purposes. I am less inclined (understatement) to give up that kind of intimate detail to Google/Android because I know they are operating under a quid pro quo that they will use the data for themselves with little concern for the user.

    Also, as Brian Hall once wrote about, considering how much detail Amazon and Google collect on us now, I am amazed at how out of touch their targeted ads or suggested purchases are. Obviously new algorithms are needed, too.

    Joe

    1. Joe, fair points taken on the grid idea, but my concept is to go in a completely different direction. While that might require new devices, I really think it could happen with smartphones.
      Regarding the trust issue, that’s absolutely true, but I actually think some of this could be done on the device without needing to send anything to any OS provider. That was my concept of “little data analytics” versus the more common “big data analytics.” Intelligent algorithms on the device could track what it is that you do and then use that to adjust the UI accordingly.

      1. But do you really think Google would pass up on this data?

        This to me is one of those areas that would benefit and confuse people at the same time, kind of like the Cloud. Apple _could_ really excel here since they usually want to integrate things in a way where the user doesn’t have to do anything.

        All I can think of is the new movie _Sex Tape_:

        “It went up, it went up to the cloud”

        “You can’t get it down from the cloud?”

        “Nobody understands the cloud, it’s a ****ing mystery.”

        Joe

        1. I’m not saying that all of the data would stay private, but I’m suggesting it could. As I noted in a response above, there are two different options: charge for the services delivered, or offer them for free in exchange for your information.

          1. Oh, sure, your personal data could stay private, in theory. And you would hope that by paying for a service you would be guaranteed that. But it is still a matter of trust, and many companies haven’t earned that trust.

            BTW, it seems that Apple could do a lot of the “little data analytics” on the device (having 64-bit SoCs and all kinds of sensors optimized with the OS and power management, etc.). However, Google seems to be bent on pulling all value and processing up into their cloud, exacerbating the trust issues they already have.

  2. The question that your article brought to mind is this: if the data is to be personal and private, then what is the “self interest” of the maker of the software? The software game is not an altruistic one, methinks. As they say, “Show me the money!”

    1. They can charge for the services they provide, or they can say it’s free in exchange for your information. I believe both business models are viable…

  3. Not to throw out a bad pun, but our future is in the cards. I completely agree that the move from apps to services will dictate new ways in which we access them on smartphones… and I have been examining how this will extend to other devices such as smartwatches. Having the desired information or service of that moment available at a glance is crucial. However, as the flawed humans we are, our actions are not always predictable, nor do we tend to be patient enough for a learning system to understand our habits.

    Grid icons allow us a mental filing system and an ability to create habits around the location of that data or service. We tend to access those apps based on a version of mental muscle memory – scrolling and flipping to the location. Even once we create a system that allows us to more easily access information based on context, we must still have an intuitive way to find the outlier services we seek to access, be that a grid, a file system, visual mapping, or something else.

    An issue outlined below is privacy. Sad to say, that ship has sailed. Our phones already track our behaviors, locations, velocities, surfing habits, etc. I had to laugh at the number of people outraged that the NSA would track calls – who then publicly posted that outrage on Facebook, a service that openly seeks to know every detail about us. While the ability to opt out and remain private will be a required option, the maximum benefit will come from allowing our behaviors to be tracked and used for both good and potentially bad. That is a difficult balance.
