Two weeks ago at Google I/O, Google thrust wearable computing into the mainstream public eye by performing one of the most incredible stunts I have ever seen on a technology stage. Wearing Google Glass and communicating via real-time voice and video, daredevils jumped out of a blimp, landed on the Moscone Center roof, rappelled down its side, and biked into Google I/O to throngs of cheering attendees. Is wearable computing great for “show” but nonsensical in reality, or is this technology truly the future of computing?
We first need to define wearable computing. This may appear simple, in that it’s “computing you can wear”, but it’s a bit more complicated than that, judging by some very confused news articles on the subject. Let’s start with what wearables aren’t. They are not computing devices implanted into the body. That may seem an odd distinction to even draw, but over the next ten years a great many compute devices will be implanted to help people with medical ailments involving sight, hearing, and drug dispensing. Likewise, wearables are not devices implanted into prosthetic limbs. Wearables are compute devices embedded into items attached to the body. Some are hidden, some are visible. Some are dedicated compute devices; others are embedded into things we view today as something totally different, like glasses, contact lenses, clothing, watches, and even jewelry.
Now that we know what wearable computers are, let’s look at the challenges that currently keep them a niche. For all wearables, input and output are the biggest issues standing between them and being useful and inexpensive. Keyboards and pointing devices (fingers included) are the most prevalent input methods for today’s computing devices. Useful voice control is new and gaining popularity, but isn’t nearly as common as keyboard, mouse, and touch. For wearable input to become popular, voice command, control, and dictation will need to be greatly improved. This has already begun with Apple’s Siri and Google Voice, but will need to improve by factors of ten before voice can serve as a primary input method. Ironically, improvements in wearable output will help with input. Let me explain. Picture the display built into Google Glass. Because the device knows where you are looking via pupil tracking, it knows what you are looking at, which adds context to your command. The final input method, still in research, is BCI, or the brain-computer interface. The medical field is investing heavily in this, primarily as a way to give quadriplegics and the brain-injured fuller lives.
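To make the gaze-plus-voice idea above concrete, here is a minimal, purely illustrative sketch of how a device might fuse a vague spoken command with the object the user is looking at. Every name in it (`resolve_command`, `gaze_target`) is a hypothetical stand-in, not any real Glass or Siri API:

```python
# Hypothetical sketch: gaze context disambiguating a voice command.
# resolve_command and gaze_target are illustrative names, not a real API.

def resolve_command(voice_command: str, gaze_target: str) -> str:
    """Combine a vague spoken command with the object the user is
    currently looking at (as reported by pupil tracking) to form a
    concrete, actionable instruction."""
    pronouns = {"this", "that", "it"}
    words = voice_command.lower().split()
    # Replace deictic pronouns ("that") with the gazed-at object.
    resolved = [gaze_target if w in pronouns else w for w in words]
    return " ".join(resolved)

# Saying "Share that" while looking at a photo becomes "share photo".
print(resolve_command("Share that", "photo"))
```

The point of the sketch is only that output-side sensing (knowing where the eyes point) can resolve ambiguity on the input side, which is why the two problems improve together.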
Output for wearables will be primarily auditory, but of course displays will also provide the information we want and need. Consumers will be able to pick the voice and personality as easily as they pick ringtones today. Music stars’ and even cartoon characters’ “agent voices” will be for sale and downloadable, like an app is today. Some users will opt for today’s style of earphones, but others will choose newer technology that fits directly into the ear canal like a miniature hearing aid, virtually unnoticeable to the naked eye.
Display output is the part of the wearable equation the industry is most confused about, because it is not thinking beyond today’s odd-looking Google Glass form factor. Over time, this display technology will be offered as an option in your favorite brand of designer glasses, and you will not be able to tell regular glasses from wearable ones. Contact lenses are being reworked as well: prototypes already exist for contact lenses that work as displays, and like all emerging tech, they will make it into the mainstream over time. The final, yet most important, aspect of the wearable display is that every display within viewing distance will be your display. Thanks to advancements in wireless display technology and the prevalence of displays everywhere, your wearable will show its data in your car, your office, the bathroom mirror, your TV, your refrigerator… pretty much everywhere.
So if the input-output problem is fixed, does that mean wearables will be an instant success? No, it does not. Here we need to talk about the incremental value they bring versus the alternatives, primarily smartphones. Smartphones may actually morph into wearables, but for the sake of this analysis it helps to keep them separate. On the plus side, wearables will be almost invisible, lighter, more durable, and even more accurate for input when combined with integrated visual output. New usage models will emerge, too, like driving assistance; imagine the future of turn-by-turn directions with a heads-up display integrated into your Ray-Ban Wayfarers. On the negative side, wearables will be more expensive, with less battery life, less compute performance, and less storage, and they will be almost unusable if a display isn’t always available. This is a very simplified view of the variables consumers will weigh as they make trade-offs, but these same elements will be researched in depth over many, many years.
So are wearables the next wave of computing? The short answer is yes; the more difficult questions are when, and how pervasive. I believe wearables will evolve more quickly than the 30 years it took cellphones to become pervasive. The new usage models for driving, sports, games, and video entertainment will dictate just how quickly. Because technology has fully “appified” and Moore’s Law will still be driving silicon densities, I believe wearables will be mainstream in 7-10 years.