The Challenge of Wearable Computing

I’d like to start with a question I have been asking myself: why does Google Glass need to be on my face? More importantly, to get the benefits of Google Glass (whatever one deems those to be), why must it come in a form factor that goes on my face? The answer is that it likely does not.

The same question will need to be answered by Apple’s rumored iWatch, or any smart watch for that matter. My favorite line from critics of the iWatch, or of smart watches in general, is that no one wears watches these days. My standard response is: and those who do don’t wear them to keep time.

I absolutely agree that the wrist is prime real estate, but I’d add that it is also highly valuable real estate. Therefore, for a consumer to put something on their wrist, their face, or any other part of their person, there must be a clear value proposition.

In Search of a Value Proposition

This is why, to date, the only real wearable success stories are devices like the Fitbit, Nike FuelBand, Jawbone Up, and others in the wearable health segment. The industry term for this segment is “Quantified Self.” These devices track our activity and give us insight into how many steps we have taken, calories burned, the quality and quantity of our sleep, and so on.

For many this is a clear value proposition and a compelling reason to place an additional object on their body. The value proposition is also a simple one: wear this object and it will give you details about your activity and general health, which for many people is valuable information. When a segment like wearable computing is in the early stages of adoption, as it is now, simple value propositions are key to getting initial consumer adoption.

Google Glass’s challenge lies in both the value proposition and the form factor. Google hopes to flesh out the value proposition through the public research and development happening with its early adopters. The form factor, however, is a larger question. While it’s true that many people wear sunglasses or eyeglasses, most would tell you they do not always want, or even enjoy, having glasses on their face. There is eye surgery for those who need glasses so that they no longer have to wear them. Given these behavioral observations around glasses, one must conclude that to keep an object on one’s face, there must be a good reason.

Whatever the longer-term benefits of something like Google Glass turn out to be, it is likely they will show up in other objects, not necessarily glasses: displays in our cars, more intelligent screens on our person like our phones, or perhaps even a smart watch.

Similarly, any smart watch will have to make its case for existence beyond the techno-geek crowd. Here we come back to my earlier point that those who wear a watch don’t do so to keep time. I wear a watch. I like my watch, and besides my wedding ring it’s the only piece of jewelry I wear. I intentionally selected this watch for a variety of reasons, and it is not on my wrist because I need it to keep time. It is a fashion accessory for me, and I’d argue the same is true for most watch wearers. This is exactly my point about why the wrist is valuable real estate: those who place something there do so for more than just its functionality.

Why Should I Wear This?

Objects we choose to put on our person and take out in public are highly personal and intentionally selected. The personal and intentional reasons we wear objects are things wearable computing devices don’t just need to overcome; they need to add to them as well.

A smart watch needs to add to the reasons I wear a watch. Smart glasses need to add to the reasons I put glasses on my face. Addressing these things is the challenge for those who aspire to create wearable computers worn by the masses. I am also confident it is where much innovation will happen over the next ten years.

We have ideas about how this shakes out: relevant, contextual information at a glance, or notifications, for example. But the exact value propositions of wearable computing are not yet fully known. Even with so much ambiguity around wearable computing, I am optimistic and look forward to the innovations that will create wearable computers that add value to our lives.

Are Wearables the Next Wave of Computing?

Two weeks ago at Google I/O, Google thrust wearable computing into the mainstream public eye by performing one of the most incredible stunts I have ever seen on the technology stage. Wearing Google Glass and communicating via real-time voice and video, daredevils jumped out of a blimp, landed on the Moscone Center roof, rappelled down its side, and biked into Google I/O to throngs of cheering participants. Is wearable computing great for “show” but senseless in reality, or is this technology truly the future of computing?

We first need to define wearable computing. This may appear simple, in that it’s “computing you can wear,” but it’s a bit more complicated than that, as I have seen some very confused news articles on the subject. Let’s start with what wearables aren’t. Wearables are not computing devices implanted into the body. This may seem an odd distinction to draw, but over the next ten years many computing devices will be implanted to address medical needs such as sight, hearing, and drug dispensing. Relatedly, wearables are not devices implanted into prosthetic limbs either. Wearables are computing devices embedded into items attached to the body. Some are hidden, some are visible. Some are dedicated computing devices; others are embedded into things we view today as something totally different, like glasses, contact lenses, clothing, watches, and even jewelry.

Now that we know what wearable computers are, let’s look at the current challenges keeping them a niche. For all wearables, input and output are the biggest issues holding them back from being useful and inexpensive. Keyboards and pointing devices (fingers included) are the most prevalent input methods for today’s computing devices. Useful voice control is new and gaining popularity, but isn’t nearly as popular as keyboard, mouse, and touch. For wearable input to become popular, voice command, control, and dictation will need to improve greatly. This has already begun with Apple’s Siri and Google Voice, but these will need to improve by factors of ten before voice can serve as a primary input method. Ironically, improvements in wearable output will help with input. Let me explain. Picture the Google Glass display built into the glasses. Because it knows where you are looking by tracking your pupils, it knows what you are looking at, which adds context to your command. The final input method, still in research, is BCI, or brain-computer interface. The medical field is investing heavily in this, primarily as a way to give quadriplegics and the brain-injured a fuller life.

Output for wearables will be primarily auditory, though displays will also provide the information we want and need. Consumers will be able to pick the voice and personality as easily as they pick ringtones. Music stars’ and even cartoon characters’ “agent voices” will be for sale and downloadable, like apps are today. Some users will opt for today’s style of earphones, but others will opt for newer technology that fits directly into the ear canal like a miniature hearing aid, virtually unnoticeable to the naked eye.

Display output is the part of the wearable equation the industry is most confused about, because it isn’t thinking beyond the odd-looking form factor of today’s Google Glass. Over time, this display technology will be integrated as an option into your favorite brand of designer glasses, and you will not be able to tell regular glasses from those used as wearables. Contact lenses are being reworked as well; prototypes already exist for contact lenses that work as displays, and like all emerging tech, they will make their way into the mainstream over time. The final, yet most important, aspect of the wearable display is that every display within viewing distance will serve as an output. Based on advancements in wireless display technology and the prevalence of displays everywhere, your wearable will display data in your car, your office, the bathroom mirror, your TV, your refrigerator… pretty much everywhere.

So if the input-output problem is fixed, does that mean wearables will be an instant success? No, it does not. Here we need to talk about the incremental value they bring versus the alternatives, primarily smartphones. Smartphones may actually morph into wearables, but for the sake of this analysis it helps to keep them separate. On the plus side, wearables will be almost invisible, lighter, more durable, and even more accurate for input when combined with integrated visual output. New usage models will emerge too, like driving assistance; imagine the future of turn-by-turn directions with a heads-up display integrated into your Ray-Ban Wayfarers. On the negative side, wearables will be more expensive, with less battery life, less compute performance and storage, and they will be almost unusable if a display isn’t always available. This is a very simplified view of the variables consumers will weigh as they make trade-offs, but these same elements will be researched in depth over many, many years.

So are wearables the next wave of computing? The short answer is yes, but the more difficult questions are when and how pervasive. I believe wearables will evolve more quickly than the thirty years it took cellphones to become pervasive. New usage models for driving, sports, games, and video entertainment will dictate just how quickly. Because technology has fully “appified” and Moore’s Law will still be driving silicon densities, I believe wearables will be mainstream in seven to ten years.