Google Glass and Segway: Early Adopter Lore

In 2001, the Segway hit the market. VCs like Kleiner Perkins’ John Doerr fawned over it pre-launch. Even Steve Jobs and Jeff Bezos were enthralled when they saw it. To its inventor, Dean Kamen, it represented the next breakthrough in personal transportation. His boldest claim came when he predicted in Time magazine that the Segway “will be to the car what the car was to the horse and buggy.”

Kamen is a true Renaissance man, and when he speaks, it is best to listen. He has had the ear of at least two presidents and is highly respected in the medical field for his invention of the all-terrain electric wheelchair. Perhaps he is best known for inventing the insulin pump.

I had the privilege of sitting next to Kamen at an event at the San Jose Tech Museum just before the Segway came out. I was already aware of his accomplishments and I was (and still am) in awe of him. Regis McKenna, the legendary PR veteran who handled communications for Apple and Steve Jobs until the mid-1990s, was also sitting with us. I clearly remember how McKenna, who is a type 1 diabetic and had used an insulin pump since it came on the market, took the opportunity to thank Kamen for creating this medical wonder and to explain how it had affected his life. It was a very touching moment, and, in turn, Kamen graciously thanked McKenna for his kind remarks. At that moment it really hit home that technology was not just something I work with but something that has the potential to improve lives.

The Segway, however, had a lot of problems from the start. To begin with, it cost more than $3,000 and had a short battery life. There was also serious pushback as local communities banned it from sidewalks, malls, and some streets. Many people were not pleased to share the roads and aisles with Segway riders. In 2009, Time magazine named it one of the 10 biggest tech failures of the decade.

Although the Segway was a bust at the consumer level, it has been embraced by vertical markets such as police departments, private security in malls and entertainment parks, and tour companies. This isn’t surprising given that most new technologies are proven out in vertical markets before ever getting cheap enough to find broader consumer demand (if they ever do).

Now that Google Glass has come onto the scene, I see some similarities between it and the Segway. The rhetoric, for one, is parallel. Google CEO Larry Page talks as if it will be the next big thing to revolutionize the world. After spending two weeks with the glasses, noted technology blogger Robert Scoble wrote, “I will never live a day of my life from now on without it (or a competitor). It’s that significant.” My company will be getting a pair of the glasses to test in the next month, and perhaps we will have the same reaction.
There’s certainly a lot of hype, but Google Glass isn’t even commercially available yet and the pushback has already started. People worry about invasion of privacy and distracted drivers. It is already being barred from movie theaters, casinos, strip clubs, and bars, and a recently introduced bill could ban use of the device while driving.

I have no doubt that early adopters will shell out the $1,500 at first, but some of my tech friends express concerns. How will they look in public while wearing them? Will others think they aren’t engaged in conversation but instead searching for things? I compare it to wearing a Bluetooth headset: when speaking with someone, I take it off lest they think I am listening not to them but to something coming through the headset.

Tech You Can Wear

It will be very interesting to see how the first generation of users evaluates its worth, given that only 8,000 testers will receive them. These early testers, however, should give us a good sense of whether the glasses have staying power. I suspect they might conclude that Google Glass is not yet ready for consumer primetime.
Like the Segway, it will likely get the most attention from vertical markets, where its real value can be exploited even at the current price, which is clearly not aimed at consumers anyway. The price must drop to around $300 before Glass gains any traction in the mass market. Even then, there will probably be a steep learning curve in functionality and social norms before it is accepted for everyday use.

I may be wrong about the Segway comparison since they are clearly two very different technologies. Still, I can’t help but see the likenesses between them. I fear that once the novelty wears off, unless there is a killer app, Google Glass could lose steam and potentially go the way of the Segway.
On a personal note, I strongly believe in the potential of wearable devices, regardless of the reception of Google Glass. It will go down in history as one of the products that helped define wearable computing. Wearable devices give us a digital sixth sense and we are just scratching the surface of how they can provide enhanced information that will impact all aspects of our personal and professional lives in the future.

An Overview of How Google Glass Works… A Curse or a Blessing?

I loved Tim Bajarin’s piece on Google Glass. Mine simply expands on some basic facts, adding value for those of us who aren’t following the very Iron Man-like creation of this latest Google project. We’re losing Google Reader but gaining hardware. Does anyone else see Apple’s “product” model being adopted here?

Yes, Alice, we’ve definitely gone through the looking glass. Google’s most recent project, Google Glass, delves far into the realm of science fiction, bringing Tony Stark, Iron Man-esque technology to the masses. The Google Glass project delivers a wearable computer system in the form of glasses, offering hands-free messaging, photography, and video recording. Straight out of 007, it offers the ability to share everything you see, live, in real time: directions, reminders, the web, all seen through the lens, right in front of your face.

The glasses have a display in the top right corner of the frame, making endless information available at all times, and will reportedly connect with either your Android phone or iPhone over Wi-Fi, 3G, and 4G. These revolutionary specs won’t just be a spectacular piece of hardware; Google is negotiating with Warby Parker, a company that specializes in selling trendy glasses, in an attempt to deliver infinite data while still looking fashionable.

The best part of Google’s Project Glass is that Google is currently giving ordinary consumers, not just developers, the opportunity to influence product development. Google declared, “We’re looking for bold, creative individuals who want to join us and be a part of shaping the future of Glass.” Applications are being accepted via Google+ and Twitter using the hashtag #ifihadglass.

While the idea of unlimited data being available even more easily than at your fingertips is revolutionary, it raises more than a few questions about privacy. The ability to record everything right in front of you, in real time, is a daunting thought, covering everything from being photographed at a cafe to being filmed in an airport. Beyond the questionable “Glass etiquette” that will certainly develop over time, the prospect that Google and the government will be able to access users’ data is unsettling.

If the Glass Project brings information right in front of your face, allowing you to communicate, to access the internet, contacts, etc., and share what you are seeing live, what will stop others from accessing your private information? Although a few decades late, Orwell’s 1984 has definitely caught up with us.

The issues that may arise from the mass production of Google Glass are met with equally impressive, revolutionary possibilities for social networking and sharing. Glass would be the apex of social sharing, allowing people to be in constant contact and literally letting individuals step into others’ shoes to view the world from a different point of view. You could be standing in New York’s Times Square and trade that experience with someone around the world exploring the streets of Venice or Sydney. Such universal sharing would truly redefine the human experience.

At its best, this could also affect areas as broad as human rights and poverty, but the cost remains to be seen. Only time will tell whether the Google Glass Project will be the vessel connecting mankind, a Pandora’s box, or something in the middle.

Live the Future Now

By nature of what I do for a living, I spend a lot of time thinking about the future. As part of that exercise, I like to employ a tactic I call living the future now. I’ll explain. Part of how I attempt to create a vision for the future and analyze the opportunities and weaknesses of solutions is to use existing technology to do things I believe we will do in the future. This is why I am currently using tablets in and around my house in ways that seem unorthodox, or “crazy” as some have told me: I’m trying to get a sense of how these devices may evolve. For example, I believe that someday a tablet computer will exist in every room. They may also be communal and thus mounted on walls, on refrigerators, in bathrooms, and so on. This is why I literally have 15 tablets in some use around my house (or perhaps that is simply how I justify it).

In the early 2000s, quite a bit of my research focus was the digital home. I spent a lot of time piecing together solutions in an attempt to stream HD video wirelessly to all the displays in my house (four at the time) because I knew wireless whole-home video would someday be a reality. I used any and all technologies I could get my hands on as I attempted to build the most connected and automated digital home possible; I basically used my own house as a lab. Interestingly, ten years later, we still aren’t close to mass-market commercialization of the digital home I envisioned and tried to create. It was a painful experience, and many man-hours were spent connecting DMAs (digital media adapters, as they were called), home theater PCs, 5GHz proprietary line-of-sight video links, beam antennas, and many more technologies.

This exercise was valuable, and it was all based on an attempt to live the future now so I could learn and observe the potential of certain experiences. The point was an attempt at a technological ethnography of the mass market of tomorrow.

Understanding the Mass Market of Tomorrow

One of the most critical things any company can do is seek to understand the needs, wants, and desires of its customers of tomorrow. This is generally why R&D labs exist. A key component of any R&D lab is individuals with a vision of how the mass market may use its innovations based on tomorrow’s customers’ needs, wants, and desires. Many technology companies do this very poorly.

Understanding what the current mass market needs is important for the short term. Understanding the mass market of tomorrow is important for the long term. This practice is at the core of what we do at Creative Strategies, and it is why I engage in the practice of attempting to live our technological future in the present as much as possible.

Different Approaches

There are two approaches a company can take to understand the mass market of tomorrow. One is to do it solely inside the company’s walls. Apple does this, for example, but so do Microsoft and many other technology companies. This model is traditional but, as I pointed out above, requires incredible insight and understanding of the future market in order to know what to commercialize and what to scrap. Apple is perhaps one of the only companies that has continually done this well. Some companies also test their products with large groups of employees in order to broaden their sample size. Palm used to do this, and I am sure many others do as well.

The other approach, and the one I find extremely interesting, is Google’s. Google does its R&D out in public. Chromebooks and Google Glass are two prime examples. These products may have mass-market potential or they may not, but a great way to find out is to test them with people, observe their behaviors, and translate that into learnings. Call it market research with the help of the broad public. Things the market likes, keep; things the market doesn’t like, don’t. Testing future products on actual future consumers and learning from observing them is an extremely interesting way to do future use-case research. I appreciate that Google does its R&D in public. I also applaud its ability to get people to pay for the privilege of doing its homework for it.

Most consumers don’t know what they want until they see it or experience it. It’s extremely hard in internal R&D labs to truly understand mass-market sentiment. This is why I think Google’s approach is so interesting. Competitors can learn from it and adapt, which is a risk, but I like the direction Google is taking. Regardless of your opinion of the products themselves, or of Google, I like the idea that Google is getting back to its roots.

Are Wearables the Next Wave of Computing?

Two weeks ago at Google I/O, Google thrust wearable computing into the mainstream public eye by performing one of the most incredible stunts I have ever seen on a technology stage. Wearing Google Glass and communicating via real-time voice and video, daredevils jumped out of a blimp, landed on the Moscone Center roof, rappelled down its side, and biked into Google I/O to throngs of cheering attendees. Is wearable computing great for “show” but senseless in reality, or is this technology truly the future of computing?

We first need to define wearable computing. This may appear simple in that it’s “computing you can wear,” but it’s a bit more complicated than that, as some very confused news articles attest. Let’s start with what wearables aren’t. They are not computing devices implanted into the body. This may seem an odd distinction to draw, but over the next ten years many compute devices will be implanted to help people with medical ailments involving sight, hearing, and drug dispensing. Similarly, wearables are not devices implanted into prosthetic limbs. Wearables are compute devices embedded into items attached to the body. Some are hidden, some are visible. Some are dedicated compute devices; others are embedded into things we view today as something totally different, like glasses, contact lenses, clothing, watches, and even jewelry.

Now that we know what wearable computers are, let’s look at the challenges that keep them a niche today. For all wearables, input and output are the biggest issues keeping them from being useful or inexpensive. Today, keyboards and pointing devices (fingers included) are the most prevalent input methods for computing devices. Useful voice control is new and gaining popularity but isn’t nearly as popular as keyboard, mouse, and touch. For wearable input to become popular, voice command, control, and dictation will need to be greatly improved. This has already begun with Apple’s Siri and Google Voice, but it will need to improve by factors of ten to serve as a primary input method. Ironically, improvements in wearable output will help with input. Let me explain. Picture the display built into Google Glass. Because it knows where you are looking by using pupil tracking, it knows what you are looking at, which adds context to your command. The final input method, in research right now, is BCI, or brain-computer interface. The medical field is investing heavily in this, primarily as a way to give quadriplegics and the brain-injured a fuller life.

Output for wearables will be primarily auditory, but displays will of course also provide the information we want and need. Consumers will be able to pick the voice and personality as easily as they pick ringtones. Music stars’ and even cartoon characters’ “agent voices” will be for sale and downloadable like an app is today. Some users will opt for today’s style of earphone, but others will opt for newer technology that fits directly into the ear canal like a mini hearing aid, virtually unnoticeable to the naked eye.

Display output is the most industry-confused part of the wearable equation. The industry is confused because it is not thinking beyond today’s odd-looking Google Glass form factor. Over time, this display technology will be offered as an option in your favorite brand of designer glasses; you will not be able to tell regular glasses from wearable ones. Contact lenses are being reworked as well. Prototypes already exist for contact lenses that work as displays, and, like all emerging tech, they will make it into the mainstream over time. The final, yet most important, aspect of the wearable display is that every display within viewing distance will become the wearable’s display. Thanks to advancements in wireless display technology and the prevalence of displays everywhere, your wearable will display data in your car, your office, the bathroom mirror, your TV, your refrigerator… pretty much everywhere.

So if the input-output problem is fixed, does that mean wearables will be an instant success? No, it does not. Here we need to talk about the incremental value they bring versus the alternatives, primarily smartphones. Smartphones may actually morph into wearables, but for the sake of this analysis it helps to keep them separate. On the plus side, wearables will be almost invisible, lighter, more durable, and even more accurate for input when combined with integrated visual output. New usage models will emerge, too, like driving assistance; imagine the future of turn-by-turn directions with a heads-up display integrated into your Ray-Ban Wayfarers. On the negative side, wearables will be more expensive, have less battery life, offer less compute performance and storage, and be almost unusable if a display isn’t always available. This is a very simplified view of the variables consumers will weigh as they make trade-offs, but these same elements will be researched in depth over many, many years.

So are wearables the next wave of computing? The short answer is yes, but the more difficult questions are when and how pervasive. I believe wearables will evolve more quickly than the 30 years it took cellphones to become pervasive. New usage models for driving, sports, games, and video entertainment will dictate just how quickly. Because technology has fully “appified” and Moore’s Law will still be driving silicon densities, I believe wearables will be mainstream in 7-10 years.