As we move into an increasingly connected world, where the number of devices we own and use continues to rise and the range of activities we’re trying to track and control continues to expand, there’s at least one obvious challenge confronting both the industry and us. Where do we get shown the information and content we want to see?
While that may seem like a bit of an odd or even naïve question, I think it could become a very important one. Plus, I believe the answer to it will have a number of important implications for the development of new technologies and new devices, particularly in burgeoning areas like wearables and home automation.
Of course, the obvious answer to the question would be on the screen of whatever new device we purchase for that particular tracking or controlling application. After all, screen display technologies continue to improve and expand at a rapid rate. Plus, as many modern device categories evolve, it’s often the screens themselves that are both the main hardware attraction and the center of our attention.
But I’m starting to wonder how far we can really take that argument. Does every new device we purchase really need to have its own dedicated screen? I’m sure my old friends and colleagues in the display industry won’t be happy to hear this, but I think we could soon reach a point of diminishing returns when it comes to adding big, beautiful displays to all our new devices.
To put it more practically, do I need to put a great display on a wearable device or a home automation gateway or any of a number of other interesting Internet of Things (IOT)-related devices that I expect we’ll see introduced over the next several years? My take is, no, probably not.
Of course, some might argue there isn’t so much a limit on the number of screens we need as on the number of devices themselves. But as much as I would like to think there’ll be an increasing degree of device consolidation and a desire among consumers to reduce the number of devices they own and use, I see absolutely no evidence to suggest that possibility. In fact, the number of devices per person just continues to increase.
So, what does this mean? I believe it means we need, and will start to see, more developments that leverage the incredible set of screens we already have. Between big-screen HDTVs, higher-resolution PC monitors, notebooks, tablets and smartphones, a large percentage of people in the US already have access to a relatively wide choice of screen sizes, almost all of which have HD-level resolutions (if not higher).
The challenge is that you can’t easily connect to and/or “take over” those screens from other devices. Most people are unlikely to want to use cables from newer devices, especially ones that are likely to be small, so the only realistic option is a wireless connection. To do that, you need some kind of intelligence in both the sending and receiving devices, as well as agreed-upon standards.
For years, there were several competing wireless video standards, some from the CE industry and others from the PC industry, but most of them have now fallen by the wayside. Two of the last survivors, Intel’s WiDi technology and the Wi-Fi Alliance’s Miracast, have essentially merged into a single standard as of this time last year, allowing a wide range of devices, from Windows-based PCs to Android-based smartphones and tablets, to connect to a select set of Miracast or WiDi-enabled TVs. (Unfortunately, backwards compatibility with legacy devices, including early implementations of either Miracast or WiDi, isn’t always great.)
As with many areas, Apple has its own standard for wireless video connections called AirPlay. For iOS-based devices, in particular, AirPlay enables applications like sending video from an iPad to an AppleTV device plugged into a larger TV.
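To make the idea of “taking over” an existing screen a bit more concrete, here is a rough Swift sketch of how an iOS app can notice when a second screen becomes available (for example, a TV reached via AirPlay mirroring) and drive it with dedicated content rather than a straight copy of the phone or tablet display. The class name and placeholder view controller are invented for illustration; this is a sketch, not a complete app.

```swift
import UIKit

// Rough sketch: watch for a second UIScreen (e.g. a TV reached via AirPlay
// mirroring) and hang a dedicated window on it. Names are placeholders.
final class ExternalScreenObserver {
    private var externalWindow: UIWindow?
    private var observers: [NSObjectProtocol] = []

    init() {
        let center = NotificationCenter.default
        observers.append(center.addObserver(forName: UIScreen.didConnectNotification,
                                            object: nil, queue: .main) { [weak self] note in
            guard let screen = note.object as? UIScreen else { return }
            // Build a window tied to the external screen with its own view
            // controller, so the big display can show richer content than
            // a simple mirror of the small one.
            let window = UIWindow(frame: screen.bounds)
            window.screen = screen
            window.rootViewController = UIViewController() // placeholder content
            window.isHidden = false
            self?.externalWindow = window
        })
        observers.append(center.addObserver(forName: UIScreen.didDisconnectNotification,
                                            object: nil, queue: .main) { [weak self] _ in
            // Tear the window down when the external screen goes away.
            self?.externalWindow = nil
        })
    }

    deinit {
        observers.forEach { NotificationCenter.default.removeObserver($0) }
    }
}
```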
In the emerging worlds of wearables and other IOT-type applications, however, it’s not clear how connections between those devices and the likely screen targets of smartphones and tablets are going to work. Right now, many of the wearables and IOT devices function as dedicated accessories to host devices but, in several cases, the range of host OSes supported is very limited.[pullquote]If the Internet of Things is truly to take hold, the ability for screen-less devices to leverage existing displays will be a critical enabling technology.[/pullquote]
The problem vendors face is essentially a philosophical question about the nature of each device. If it includes a reasonably sized screen, it’s a standalone device; if it doesn’t, it’s essentially an accessory. While it’s tempting to think each device should be able to function as a standalone master device, I think consumers could tire of too many “masters.” Instead, accessorizing a few primary devices, particularly the smartphone, could prove to be a more fruitful path.
As vendors start to offer a wider range of devices and consumers try to integrate these new options into their existing device collections, the need for more and better adoption of screen-sharing technologies will quickly become evident. In fact, I’d argue we could see even faster adoption of new technologies if there were easier ways to share screens: doing so would make consumers feel the devices work with their existing equipment, and that, in turn, would encourage adoption.
The problem now is that few vendors are spending much time or effort on screen sharing. But if the Internet of Things is truly to take hold, the ability for screen-less devices to leverage existing displays will be a critical enabling technology. So, let’s hope we start to see more screen sharing soon.
Just about every connected device still needs some kind of minimal control/display, in order to accomplish two things: to indicate that it is working properly (given the flaky nature of networks and software, not having this sanity check on the device invites endless tech support problems) and (far more important) to provide a secure way for the owner to authorize an outside controller app (without physical on-the-device authorization, you’re opening a huge security hole where anyone on the network* can take over your connected dingus).
* the deplorable security of most routers and of most people’s wifi passwords means anyone with the inclination can get onto most home wifi networks.
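To make that second point concrete, here is a rough sketch (the class name, the single button, and the 30-second window are all invented for illustration) of how a physical-confirmation gate on a screen-less device might work:

```swift
import Foundation

// Rough sketch: a hypothetical connected device refuses to authorize a new
// controller app unless its one physical button was pressed very recently.
final class PairingGate {
    private var buttonPressedAt: Date?

    // Called from the device's single physical button (its "minimal control").
    func physicalButtonPressed() {
        buttonPressedAt = Date()
    }

    // Called when a controller app on the network asks to be authorized.
    func authorize(controllerID: String) -> Bool {
        guard let pressed = buttonPressedAt,
              Date().timeIntervalSince(pressed) < 30 else {
            // No recent physical confirmation: anyone on the (often poorly
            // secured) home network could be asking, so refuse.
            return false
        }
        buttonPressedAt = nil // one authorization per button press
        print("Controller \(controllerID) authorized")
        return true
    }
}
```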
Agreed that we’ll continue to see small displays and LED-style notification lights, etc., but I’m referring to fully graphical LCD-type screens here. That’s where I think we will start to see some of the limitations I describe.
From the article: “If the Internet of Things is truly to take hold, the ability for screen-less devices to leverage existing displays will be a critical enabling technology.”
Smartphones (“existing displays”) created the opportunity for the IoT. If all smartphones were eliminated in the blink of an eye, the IoT train would never have left the station.
For this reason, manufacturers of the wireless “things” don’t need to worry about display screens. The people who buy their “things” already carry displays in their pockets. At much lower cost than a display screen, a manufacturer can develop an app that provides far more information to the user — and the smartphone screen is mobile, which means constant access to service. A display screen in a fixed location is inconsistent with the philosophy driving the IoT.
The issue is that smartphones won’t necessarily have all the hardware capabilities of some of the new devices, like wearables and home automation-type products, so there will be a need for new hardware. The apps point is a good one, but my guess is some of these new devices will have needs beyond a simple app and will need more complete access to a full display–hence the column.
Apple is building most of the displays that these IoT products will use, per the appropriate context. There will be displays on a wearable, smartphone, tablet, computer, TV, and car. (As of right now, I don’t expect Apple to build its own TV or car display; they’ll continue to build the access to the TV display via improved AppleTVs and AirPlay, and access to the car display via the smartphone and CarPlay).
For all IoT (health, home, etc), I expect a wearable (like the rumored iWatch), with an interface that is always available (seen, accessed), to provide simple status and simple control (speech or touch). I expect the smartphone to provide more comprehensive status and finer, more complex, control via its larger, but not always available, display. As Apple moves to larger iPhones, people will find it less convenient to keep removing it from their pockets, purses, and bags, and the wearable will be far more useful and convenient for quick and simple jobs, especially when on the move or in the car. Even when in a house, a person may leave their iPhone in one room, but now have quick access to it from anywhere in the house via the wearable. (Apple showed this concept at WWDC for a Mac and iPhone, but I think the real point of it is the wearable and iPhone.)
I think those are relatively realistic scenarios, but this once again highlights the potential concern of Apple’s proprietary means for communicating between devices. Obviously, Apple’s driven a fantastic ecosystem with their devices and technologies, but if IOT is going to reach the kind of levels that I believe are going to be possible, devices are going to need to somehow communicate across multiple “host” platforms, and that’s where a challenge could come in….
Odds are the basic comm medium/protocol will be either Bluetooth or WiFi or both, depending on the IoT product’s mobility and power source. It’s above those layers that I expect to see the divergence that you’re concerned with.
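For example, on the Bluetooth side a screen-less sensor could simply advertise a service and let whatever screen the owner already carries do the displaying. A rough Swift sketch of the phone side follows; the service UUID and class name are invented, and this shows only discovery, not the higher protocol layers where the divergence would live:

```swift
import CoreBluetooth

// Rough sketch: the phone acts as the "display" for a screen-less BLE sensor
// by scanning for a (hypothetical) custom service and reporting what it finds.
final class SensorScanner: NSObject, CBCentralManagerDelegate {
    private let sensorServiceUUID = CBUUID(string: "FFE0") // invented UUID
    private var central: CBCentralManager!

    override init() {
        super.init()
        central = CBCentralManager(delegate: self, queue: nil)
    }

    func centralManagerDidUpdateState(_ central: CBCentralManager) {
        // Only scan once Bluetooth is powered on and available.
        guard central.state == .poweredOn else { return }
        central.scanForPeripherals(withServices: [sensorServiceUUID], options: nil)
    }

    func centralManager(_ central: CBCentralManager,
                        didDiscover peripheral: CBPeripheral,
                        advertisementData: [String: Any],
                        rssi RSSI: NSNumber) {
        // The accessory has no screen of its own; the phone (or a TV it is
        // mirrored to) renders whatever this data represents.
        print("Found sensor \(peripheral.name ?? "unknown") at RSSI \(RSSI)")
    }
}
```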
If history is a guide, Apple continues to define its own protocols for its ecosystem, including encryption and security. To handle this divergence, the IoT product developer can make one product that includes all the various protocols (hopefully, it’s just software), or make multiple product versions, each with a different protocol. The consumer will have to choose which version to buy (likely with different price points).
Beyond that, lots of good questions. Will all Android vendors go along with Google’s protocol choice? Will Google implement their choice in AOSP or in the Google Play layer? Will Samsung go with its own protocol? Will Apple eventually get outflanked, or will Apple’s more affluent, more apt-to-buy user base protect it for a good long time?
Indeed…lots of good questions and most without clear answers as of yet….
The Apple Network of Things will use the iPhone as the hub; I see it serving as a kind of engine. Interactive notifications seem like an interesting interface for wearables, but I expect a lot of sensor accessories that do things and communicate with the iPhone, and those sensors won’t need screens; they will just be delivering data.