Where the innovation is: brains, inputs and outputs
Most of the devices we use have three essential components: processors, inputs and outputs. What’s interesting is that much of the innovation in the consumer technology space at present is happening across these three categories, but the major companies in the space are each innovating in different areas. Let’s look at a few of them: Apple, Facebook, Google and Microsoft.
Over the last several years, Apple has invested mostly in processing and inputs. It acquired Siri, then integrated it into iOS to allow voice commands to serve as an alternative to the touchscreen as an input technology. It has steadily refined Siri each year since, improving voice recognition, extending its functionality and making it smarter. At the same time, Apple has been bringing more of the brains of its devices in-house, innovating around processing with successive generations of its A-series chips. Of course, the brains behind Siri sit in the cloud rather than on the device, though beyond Siri, Apple still does little off-device processing in the cloud.
But we’ve also seen Apple start to invest in the more passive forms of input that sensors in devices can provide. The M7 coprocessor introduced with the iPhone 5S is a great example of this, while HealthKit and the Health app are clearly intended to make better use of the variety of information captured both by Apple’s own sensors and those of third parties. If Apple releases one or more wearable devices later this year, it’s likely that a major feature will be additional sensors serving as passive inputs.
The domain where Apple has arguably invested and innovated least is outputs, where glass displays remain the fundamental medium. Siri arguably has an output function as well as an input function, to the extent that it responds verbally to queries and commands, but it doesn’t go beyond that. Apple also pioneered high-resolution “Retina” displays on smartphones and has since extended them to tablets and computers, but these innovations are a matter of degree rather than a major paradigm shift. The other dynamic here is increasing variety in the glass displays Apple can use as outputs: Apple TV allowed other Apple devices to make use of larger displays in the home, while CarPlay will let them do the same in the car. CarPlay will also allow physical buttons to control smartphone-resident functions for the first time.
Facebook is interesting in that it doesn’t actually make devices itself. As such, its innovations have largely been around its apps and its website. Facebook obviously has a major emphasis on brains, in that it requires a huge amount of processing to run Facebook and its many algorithms. But these are all cloud-based, and Facebook provides essentially no processing within its endpoints (either its website or its mobile apps), which act merely as vehicles for output and input for content processed in the cloud.
All that makes its acquisition of Oculus VR particularly interesting. This is an investment primarily in outputs – new displays – and to some extent inputs – the way the wearer moves around when wearing the Oculus Rift or a similar device. This is a major departure from Facebook’s past strategy in some ways, but may also be a sign that, while it has failed to exert much influence over the input and output modalities for smartphones and tablets, it can yet have a role in the next generation of inputs and outputs. It’s also a big bet that, at least in some contexts, both inputs and outputs will be more immersive in future than they are at present.
Google is another company that has largely not made its own hardware, with the exception of its ownership of Motorola over the last couple of years. Google is all about the brains and, like Facebook, sees processing as something that happens mainly in the cloud. Ben Bajarin has written, “Google’s strategy is dumb glass + smart cloud. Apple’s strategy is smart glass + deep cloud integration / synchronization”. Because Google doesn’t control the devices, it’s largely focused on cloud processing, with relatively little happening at the device level.
But Google has experimented with interesting new inputs and outputs at the device level. Google Glass, though apparently still some way from commercialization, is a big experiment in both new outputs and inputs. It provides a display in the lens that’s persistently visible, rather than on a more distant piece of glass, and allows the wearer to provide input through voice and body movement. It complements Google Now, which uses both direct user input and passive contextual and algorithmic inputs and represents innovation of a different kind.
Google has also begun to extend Google Now’s functionality and its cloud processing to other outputs, including the car (Android Auto), the wrist (Android Wear) and the home (Android TV). With its acquisition of Nest, it’s getting into a whole different kind of sensor and the data that comes with it, though it’s apparently keeping that walled off from the rest of Google for now unless users opt in to sharing it.
Microsoft has been one of the most innovative companies on this list in terms of inputs, introducing voice and gesture input for the Xbox with Kinect in 2010. It’s arguably the only major consumer technology company really innovating around gesture control, although voice has become a much more broadly used input mechanism, embraced by Apple, Google and, most recently, Amazon.
Microsoft has also steadily expanded the range of devices it makes itself, from the Xbox to Surface tablets and Lumia smartphones. This gives it more control over inputs, such as the keyboards for the Surface, but it has done little so far with outputs. It has now introduced Cortana for voice input on Windows Phone, extending a core capability to that platform. Because it wants to take Windows Phone down-market, Microsoft can’t really enforce bigger or better brains in those devices at the platform level, but since it now controls the old Nokia hardware unit, it can at least build additional hardware capability into Windows Phones. It’s also working on platform- and device-agnostic cloud processing in the form of Azure.
The one big area Microsoft seems to be missing is sensors – Apple has HealthKit (and possible wearables coming soon), Google has Android Wear and Fit, but Microsoft has yet to stake a claim in the passive sensor-based input domain, a situation I wouldn’t expect to last long.
A key skill is mixing these three elements in the right way
Even as the rate of innovation seems to have slowed in smartphones, there’s plenty of room yet for innovation across all three of these areas: processing on-device, supported by cloud processing; new input methods including voice and gestures; new outputs including immersive experiences and new endpoints. Each of these four players and others is investing both organically and inorganically in new capabilities in these three domains, and there’s doubtless much more to come.
The biggest challenge in all this is balancing the three components – inputs, outputs and processing – in such a way as to achieve the optimal result. One of the problems with the current generation of smartwatches is that they’re arguably trying to do more than the technology is capable of, and they can’t squeeze the requisite inputs, outputs and processors into devices that fit comfortably and stylishly on the wrist. Samsung appears to be discovering that, when all three elements are too similar in two device categories, the devices tend to cannibalize each other. Setting devices apart by giving them truly differentiated combinations of processors, inputs and outputs is a key skill, and one in which Samsung’s scattergun approach to its device portfolio sometimes falls short.