Is “Mobile Only” The Future?

Earlier this week, I attended ARM’s press event where the company laid out an impressive vision for how mobile devices using ARM cores (essentially 99% of all phones and a majority of tablets) will evolve throughout this year and next. The company’s new Cortex-A72 CPU, Mali-T880 GPU and CoreLink CCI-500 system interconnect, all of which are scheduled to appear in 2016 products, offer substantial improvements in performance, yet maintain the modest power requirements for which ARM-based devices are known.

The company made a point to talk about some fairly advanced applications running on smartphones, including content creation, 3D printing and more. ARM also highlighted how far smartphone performance has come since 2010, with demos of how much faster common activities are on phones from 2010, 2012 and 2014. In fact, the company’s press release claimed CPU performance from ARM cores had increased an amazing 50x over the last five years.

All of the company’s comments and demos raise an important question: how far can smartphone performance be taken, and can smartphones become the sole computing device many people need? It’s a fascinating question, and one that needs to be looked at on many different levels.

From a pure computational performance perspective, we’ve heard countless times that we all carry the power of supercomputers in our pockets. So, debating whether there is or is not enough computing capability in a smartphone has essentially become irrelevant. Yes, there are tremendous CPU and graphics capabilities on smartphones and, when you add in always-on connectivity thanks to cellular radios, it’s clearly a very capable computing platform.

However, there is more to a computing experience than raw computing: the input and output (I/O) capabilities are, arguably, equally important. Most obviously, the size of the screen on a computing device makes an enormous difference in the quality of the experience you have with it. Even within the smartphone category, the rapid shift away from 3.5”-4” screens (don’t they look like toys now?) toward larger, higher-resolution displays clearly shows what people want. But even a 6” phablet can’t compare to a 13” notebook screen or a 27” desktop monitor (let alone a 55” TV!).

I know there are plenty of reports of people using their large screen smartphones to do everything (particularly in parts of Asia), but is it because that’s all they really need and want? Or is it because that’s all they can afford or all they can easily access? Everywhere I’ve been in the world, I see lots and lots and lots of large screens and it seems to be basic human nature to want to see things (and work with things) on larger displays. Until we get to foldable screens, you simply can’t fit a large display in your pocket.

In addition to the display issue, there is the question of what kinds of input are possible (or not) on a small touchscreen device. Yes, you can do an incredible number of things on a smartphone, but there are plenty of applications, entertainment experiences, and information types that could benefit from other input methods. It’s not just keyboards (though I continue to contend they are one of our most underappreciated peripherals) but other types of I/O devices as well, including audio, pen, and other specialized offerings.

Smartphones give us the flexibility to bring a computing experience and information access tool with us at nearly all times, and there’s no question this convenience is incredibly valuable. However, that still doesn’t mean there isn’t a very real need for other kinds of computing devices and computing experiences.

What I will say about advancements like the ones ARM announced is that they raise, to a new level, the prospect of integrating wireless display options for connecting smartphone-sized devices to larger screens, along with other wireless I/O capabilities. I do believe something like the ill-fated Motorola Atrix (a smartphone from a few years back that offered connectivity to larger displays and other peripherals) could be a very viable option in the not-too-distant future.

However, there are still a number of very large hurdles to overcome, including broader agreement on wireless I/O standards, wider deployment of those standards in both devices and peripherals, and compatibility with the operating systems and applications that match the wide range of people’s needs. You could argue these concerns are easily overcome, but I think these very issues will prevent a “mobile only” computing world from becoming the mainstream choice for some time to come.

Published by

Bob O'Donnell

Bob O’Donnell is the president and chief analyst of TECHnalysis Research, LLC, a technology consulting and market research firm that provides strategic consulting and market research services to the technology industry and professional financial community. You can follow him on Twitter @bobodtech.

5 thoughts on “Is ‘Mobile Only’ The Future?”

  1. I think there are 2 aspects:

    1- some services seeing overwhelmingly more usage from phones. That’s probably going to become ever more so, but that doesn’t mean they should forget about desktop. I really hate it when one of my favorite mobile apps has no desktop equivalent, or a really lackluster one. Competitors who shine on both will have an edge.

    2- Using a smartphone’s CPU for a variety of scenarios (tablet, laptop, desktop). To me that doesn’t make a whole lot of sense:
    + I often use my phone/tablet and PC simultaneously (to look up docs, listen to music, watch a movie, call, chat…). I’m not even quite convinced by the current crop of laptopable tablets, because one of the situations where I use my tablet the most is as a second screen next to my laptop when I’m away from my dual-screen PC. I understand that’s far from the most common scenario and that I’m a bit spoiled, though.
    + wireless docking is fraught with issues, from price to compatibility to degraded performance and image quality. If I’ve got to purchase a host of docks every time I change phones, only to get a fuzzy picture and input+output lag… Samsung’s *wired* dock for the Galaxies is $100, as expensive as an Android (and now Windows) mini-PC, and not that much cheaper than an OK tablet ($150). And that dock is obsolete the moment I change my phone.
    + the cost advantages are minimal. Once you take out the phone dressing (battery, screen, radios…), a Galaxy S5’s innards (CPU, RAM, power circuitry…) cost about $70. Say a “receiver’s” innards cost $25 (you still need power, some CPU, some RAM… a Chromecast sells for $32); that’s a $45 saving in components, maybe $80 at retail after margins and taxes… I’d rather pay $80 extra and have standalone tablets and laptops and PCs. And for purely media (not computing) uses, pay $32 extra and have a Chromecast plugged in everywhere.

    1. Thanks for the comments. I think your point about owning and simultaneously using multiple devices is a particularly valid one (and something I’ve written about in the past). In that regard, I too am a big believer in services that expand across device sizes, platforms, etc.
      Just to play devil’s advocate to my own argument, however, the one question is whether we can ever really get the synchronization of these services across all our devices to work well. There is a part of me that says, if I get everything on a single device, then I never have to worry about that issue; and if that device has the capability to support multiple operating modes, multiple simultaneous display sizes, multiple peripherals, etc., wouldn’t that be a viable solution? I think some people do get overwhelmed with too many devices, and I think at some point the pendulum swings back from more devices per person to fewer.
      Certainly going to be interesting to watch.

      1. I think the ideal solution would be a user-centric cloud service, probably self-hosted, or at least with a local server component.
        Currently, a handful (or several handfuls) of disparate, mission-oriented cloud services make sure our data is available anywhere we’re online, and/or synced for a subset of that data, and… indexed for ad purposes along the way. I can’t help but think a single, user-centric service that does everything I need would be better, pulling together underlying cloud providers or providing equivalent services itself: sync of working documents, backups, sync of lightweight media libraries (music and books), remote availability of heavy media libraries (videos), email, even blogs…
        I’ve been trying to get close to that with a combination of Google Drive, Bittorrent Sync, and Plex (plus old-school backup tools for boot drive images). It’s working, with documents and listening/reading/watch lists syncing and caching on PC, laptop, tablet and phone, and document backups wholly automated; but it’s still nowhere near as transparent as it could/should be (there’s a whole lot of configuring of where individual reader programs should fetch/save their media and lists), and it’s definitely not ready for non-geek adoption.

        Edit: hey, that should be called meta-cloud… MS should work on it ^^

        1. Ha, yeah, exactly, or, as I like to call it, even a MetaOS…
          But yes, that would be a great solution. I also think the local server aspect is important, particularly for privacy-related issues. My guess is we will start to see companies try to put together these services in conjunction with authentication to create what I call a “portable digital identity” service.
