Apple’s Very Human Interface Guidelines

I am impressed with the speed of Apple’s foray into entirely new UIs. No, I am not talking about the re-jiggered version of iOS the company built for the Apple Watch. It merely reveals the way forward. Apple is clearly focused on transforming our bodies into the next great interface.

The devices this could enable are nearly limitless.

Starting with the launch of the Apple Watch, our voice and flesh, maybe soon our eyes, become common input methods — the mode by which we interface with data and interact with machines (or screens, clothes, wearables).

You already know about voice. The iPhone is now good enough to be used reliably for dictation: creating notes, tweets, and texts, setting reminders and appointments, even searching the web. The Apple Watch incorporates this voice capability, along with touch, right from the start. The Apple Watch also incorporates physical interactions — haptics. Apple brands this as “taptics.”

My recent article in Macworld examined this new UI:

Haptic technology—haptics—uses force upon the skin to deliver real-time tactile feedback. These physical sensations are created by tiny motors called actuators. Done right, haptics can mimic the feeling of a pin prick by a wearable that tracks your blood sugar, simulate the plucking of virtual guitar strings on a tablet screen, or re-create the physical recoil of a phaser from your favorite game controller.

To date, use of haptics has been limited in part by middling accuracy — how much and where exactly the force is applied. Apple appears to have uncovered the use cases and improved the accuracy enough to make haptics a core feature of its next big thing. As the company boldly states:

Because (Apple Watch) touches your skin, we were able to add a physical dimension to alerts and notifications — you’ll feel a gentle tap when you receive an incoming message. Apple Watch also allows you to connect with your favorite people in some new, spontaneous ways not possible with any other device.


Physical sensations — haptics — are core to the Apple Watch UI.

It’s called the Taptic Engine, a linear actuator inside Apple Watch that produces haptic feedback. In less technical terms, it taps you on the wrist. Whenever you receive an alert or notification, or perform a function like turning the Digital Crown or pressing down on the display, you feel a tactile sensation that’s recognizably different for each kind of interaction. Combined with subtle audio cues from the specially engineered speaker driver, the Taptic Engine creates a discreet, sophisticated, and nuanced experience by engaging more of your senses. (emphasis added)

Where might this lead us?

I won’t predict any specific devices. I will say that, by leveraging human voice, touch and sensation, entirely new forms of interaction become possible — with data, objects and people. Thus, while I confess I am not terribly interested in the Apple Watch per se, I am very excited by Apple’s deliberate if somewhat under-the-radar efforts at launching these human-centric UIs.

See Me, Feel Me, Touch Me

Sogeti Labs predicts a “personalization” revolution by 2025, a world filled with an amazing array of mobile devices, sensors, wearables, things, robots and semi-autonomous machines. In this brave new world, current input methods simply won’t work. No matter how advanced artificial intelligence or Big Data may be ten years from now, the world of “computing everywhere” will be severely limited if it cannot be instantly and reliably engaged by voice, touch, physical force and/or eyesight. Apple — with its pricey, jewelry-like watch — is showing us the way forward. Not with a failed beta like Google Glass but with a very real product soon available for sale around the world.

The potential for human UI is so great, in fact, that I suspect Apple’s appropriately named Human Interface Guidelines only barely scratch the surface of what will soon be possible. These are the early days of the human-computing interface, akin to when early PC makers touted the benefits of “storing your recipes.”

Here are some of the present ways the Apple Watch will leverage our body to interact with data (emphasis mine):


Siri. Dictate a message, ask to view your next event, find the nearest coffee shop, and more. Siri is closer than ever with the Apple Watch.


Phone. Use the built-in speaker and microphone for quick chats, or seamlessly transfer calls to your iPhone for longer conversations. You can also transfer calls from the Apple Watch to your car’s speakers or your Bluetooth headset. 


In addition to recognizing touch, the Apple Watch senses force, adding a new dimension to the user interface. Force Touch uses tiny electrodes around the flexible Retina display to distinguish between a light tap and a deep press, and trigger instant access to a range of contextually specific controls — such as an action menu in Messages, or a mode that allows you to select different watch faces — whenever you want. It’s the most significant new sensing capability since Multi‑Touch.


Heartbeat. When you press two fingers on the screen, the built-in heart rate sensor records and sends your heartbeat. It’s a simple and intimate way to tell someone how you feel.


To pay with Apple Watch, just double click the button next to the Digital Crown and hold your wrist up to the contactless reader. You’ll hear and feel a confirmation from the Apple Watch once your payment information is sent.



Since the Apple Watch sits on your wrist, your alerts aren’t just immediate. They’re intimate. With a gentle tap, notifications subtly let you know when and where your next meeting starts, what current traffic conditions are like, even when to leave so you’ll arrive on time.

According to Apple, “You won’t just see and respond to messages, calls, and notifications easily and intuitively. You’ll actually feel them.”


Confession: There are no eye-driven UI features in Apple Watch. I do wonder, however, if such a UI may be coming soon. Consider how “Looks” will work in Apple Watch 1.0:

A Short Look provides a discreet, minimal amount of information—preserving a degree of privacy. If the wearer lowers his or her wrist, the Short Look disappears. A Long Look appears when the wearer’s wrist remains raised or the user taps the short look interface. It provides more detailed information and more functionality—and it must be actively dismissed by the wearer.
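The Short Look / Long Look behavior described above is essentially a small state machine driven by wrist position and taps. Here is a minimal sketch of that flow; the class and method names are my own illustrative inventions, not Apple’s WatchKit API:

```python
# Illustrative sketch of the Short Look / Long Look notification flow.
# All names here are hypothetical, not Apple's actual API.

SHORT_LOOK, LONG_LOOK, DISMISSED = "short", "long", "dismissed"

class LookStateMachine:
    def __init__(self):
        # Raising the wrist shows a Short Look: minimal, private info.
        self.state = SHORT_LOOK

    def on_wrist_lowered(self):
        # Lowering the wrist dismisses a Short Look automatically.
        if self.state == SHORT_LOOK:
            self.state = DISMISSED

    def on_wrist_still_raised(self):
        # Keeping the wrist raised escalates to the detailed Long Look.
        if self.state == SHORT_LOOK:
            self.state = LONG_LOOK

    def on_tap(self):
        # Tapping the Short Look interface also escalates it.
        self.on_wrist_still_raised()

    def on_dismiss_button(self):
        # A Long Look must be actively dismissed by the wearer.
        if self.state == LONG_LOOK:
            self.state = DISMISSED
```

Note that in this model a Long Look survives a lowered wrist; only an explicit dismissal clears it, which is exactly the asymmetry Apple describes.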

I can absolutely envision an Apple Watch 2018 model, for example, that changes the information presented based on actual eye glances, not just wrist movements.

The overall design of the Apple Watch, its innovative computer on a chip, the clever Digital Crown input and other features and technologies are all laudable. That said, I think the most important aspect of the Apple Watch is what it portends: entirely new ways of interacting with data, machines and people all thanks to entirely new forms of human-centric interfaces.

Dear Smartwatch, thanks for the notification, now what?

Every now and then I try out a variety of smartwatches. The latest I am putting through its paces is one based on Google’s own Android Wear platform. I keep hoping I’ll crack the nut and discover something new that may explain why this particular product might be successful. I’m still skeptical, but perhaps I found some light at the end of the tunnel.

The value proposition of smartwatches, to date, is notifications. What stands out to me, each and every time I use a smartwatch, is that nearly all notifications pushed to my wrist are just duplicates of what is on my smartphone. Nothing different. Same thing, on both devices. A phone call comes in and my pocket and my wrist start making noise. A text message comes in and my phone and my wrist start making noise. An email, a Twitter message, a message saying my photos have been backed up to Google’s cloud, a message from one of the six messaging apps I use regularly to talk to people all over the globe — each renders a nice buzz and ring on both my smartphone and my smartwatch.

Where this really is amazing (/sarcasm) is when I’m on my PC, have my phone on my desk and my watch on my wrist. I get chimed, buzzed, and notified in several different ways all at the same time. With regard to the watch, these notifications are utterly useless in most situations. Given I am OCD about Twitter, email, and texts, I thought I would genuinely like getting notified of a text, email, Twitter message, or phone call on my wrist. The problem I’ve found is that the notifications I’m interested in always knowing about are only somewhat appealing when I’m not staring at some other digital screen in my life that also gets these notifications — like my tablet, PC, or smartphone. So when are these situations? When I’m walking through the city from meeting to meeting and my phone is in my pocket. When I am driving. When I am in a lunch meeting with a friend or colleague. Basically, whenever my smartwatch is the only personal screen I have in sight. The problem with this is, in many of those contexts, the notification is nice to know but un-actionable. Thanks for the notification, but now what? This is how I feel more often than not when I get dozens of notifications on my wrist.

So where do we go? Is there a “there there” with smartwatches? Perhaps an experience I had today gives us insight.

I was on my way back from a meeting in Palo Alto to have dinner with some colleagues in Santa Clara. I had two hours to kill before appointments so I went to Starbucks to catch up on email and Twitter. I’m accustomed to looking at my smartwatch with every little buzz, even though I am rarely rewarded for doing so. However, at about 5pm, I got a notification that was rewarding, relevant, and extremely useful. I was deep into email and Twitter and lost track of the time. I got a notification from Google Now, which said leave at 5:11pm to arrive on time. Pictured below.


Thanks to Google Now, my smartphone knew what my next appointment was, where it was, looked up local traffic data, and notified me when I had to leave to be there on time. Given I had lost track of time, that was useful. I’ve noted Google Now is the beginning of an anticipation engine, and for the first time that anticipation engine yielded value which appeared on my wrist.
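The “leave at 5:11pm” alert is simple arithmetic once the anticipation engine has the pieces: next appointment time and location, plus a traffic-adjusted travel estimate. A minimal sketch (the 34-minute travel estimate below is my own illustrative number, not Google’s):

```python
from datetime import datetime, timedelta

def departure_time(appointment: datetime, travel_minutes: int,
                   buffer_minutes: int = 0) -> datetime:
    """When to leave: appointment time minus the traffic-adjusted
    travel estimate, minus an optional safety buffer."""
    return appointment - timedelta(minutes=travel_minutes + buffer_minutes)

# Hypothetical example: a 5:45pm dinner with 34 minutes of driving in traffic.
leave = departure_time(datetime(2014, 11, 14, 17, 45), travel_minutes=34)
print(leave)  # → 2014-11-14 17:11:00
```

The hard part, of course, is not this subtraction; it is knowing the appointment, geocoding it, and keeping the travel estimate current — which is what made the notification feel like anticipation rather than noise.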

As I use these devices, what becomes apparent is most notifications pushed to the wrist today are useless. More importantly, they are notification overload. One of my concerns is this causes a “my watch cried wolf” kind of syndrome. When so many notifications come to my wrist, and 90% are not useful, I learn to ignore them and then subsequently miss the few that are actually useful.
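One way to imagine fixing the cried-wolf problem is a filter that only taps the wrist when a notification is genuinely important, or when the watch is the only screen in sight. This is a toy sketch of that idea — the priority scale and thresholds are invented for illustration, not any shipping platform’s logic:

```python
def should_buzz_wrist(notification: dict, other_screen_active: bool,
                      priority_threshold: int = 7) -> bool:
    """Toy notification filter. `notification["priority"]` is an invented
    0-10 usefulness score; thresholds are arbitrary illustrations."""
    # Always surface genuinely high-priority alerts.
    if notification["priority"] >= priority_threshold:
        return True
    # If the watch is the only screen in sight, lower the bar a bit.
    if not other_screen_active and notification["priority"] >= 4:
        return True
    # Otherwise stay silent: the phone, tablet, or PC already shows it.
    return False
```

Even a crude filter like this would cut the duplicate buzzes that train users to ignore their wrist — the real work is in scoring the priority well.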

All of this to say, and to agree with Tim’s column today, we still have a long way to go for this to be a viable mass market solution. Notifications must get smarter. Google Now telling me when to leave with a notification I don’t see on my PC, and one that can be lost in other notifications on my Android phone, was useful. It was contextual and relevant. These are the kinds of things that make sense. The problem is, I find them few and far between. Maybe Apple cracks the nut, but maybe we just aren’t ready yet. We will know soon enough.

Please Silicon Valley. Do Not Turn The Car Into Another Boring Box.

We stand at the intersection of the Internet of Things and the Connected Car. Soon, Cortana shall summon for us a driverless, fully autonomous vehicle, shared by the community, owned by no one, that will safely transport us to our chosen locale, as we tweet, stream, and tap away from the comfort of the back seat. Mostly, this is good. For most even, it will likely be very good. But I fear one of humanity’s greatest inventions, the car, will be reduced to yet another boring box, stuffed with computer chips, powered by lines of code, and possessing no soul.

Please Silicon Valley, do not kill my love for the car.


One Piece At A Time

A revolution is taking place within the automotive industry. It began not in Detroit, Germany or Tokyo, but as with all revolutions, from the outside. In this case, Silicon Valley. The spread of computing, connectivity and the cloud has at last reached our cars. Driving — and automobiles — will never be the same.

Per the glorious visions of venture capitalists, the new market dreams of old world automakers and the ceaseless, prosaic functions of the Internet of Things, this is our car’s very-near future: Sensors under the hood, inside the dash, within the tires, sensors embedded in the roads and placed above traffic lights, all pumping out streams of data in real time, sent via telemetry to nearby vehicles, transmitted to the web for processing and analysis, shared with the crowd, then acted upon by the many computer chips within our own increasingly self-aware vehicle, all part of a highly monetizable big data ecosystem.

I am not at all opposed to this. Such efforts will almost certainly lead to faster commutes, a greener planet, fewer accidents and many saved lives. The Silicon Valley vision for the car of tomorrow should be lauded.


I ask only that the very best aspects of the car be carried forward into the future and not de-constructed into little more than a cubicle on wheels.

As a native Detroiter, I know cars are more than just data generators. Cars are freedom, independence, liberty, aspiration, mobility. In so many ways, cars disconnect us from the world as they reconnect us with our primal emotions. Cars are beautiful, personal, powerful. I want this not to go away.

I am not at all convinced we can trust Silicon Valley to transform these glorious mechanical objects into anything other than another node in a data-fueled, globe-spanning web.

Let Me Ride

While driverless cars, as Google has promoted, are likely a decade away from practical use, semi-autonomous vehicles should be available in the developed world well before the end of this decade. The Internet of Things will enable these semi-autonomous, ‘situationally aware’ vehicles to keep us properly centered in the lane, to apply the brakes if we, the ‘driver,’ fail to spot the pedestrian in the crosswalk. They can ease off the throttle should they sense another vehicle is too close.

The car of 2020, and probably much sooner, will inform us when we are driving too fast given the current road conditions — and take corrective action should we fail to heed its informed advice.


These semi-autonomous vehicles will communicate with other cars, buses, navigation services and transit authorities as much as they communicate with us. This is good. As a proponent of mobile technologies, the cloud, wearables, sensors, Bluetooth, et al, I fully appreciate the value that comes from the open sharing of our data. If I am stuck in traffic, by all means let my car inform others of a better route. If a driver’s car wishes to inform those of us a few minutes behind that there’s a hidden police stop, good for us.

Above all however, the connected car will make for safer roads. Over 95% of all car accidents are caused by driver error. The Internet of Things will put a stop to this.

According to Intel, which is keen to put still more computing chips into our cars, with a mere one second warning, over 90% of all car accidents could be prevented. A half-second warning will prevent over 50% of all car accidents. Sensors and computer chips can act faster than us. They can also behave far more rationally. If we are being dumb, careless, foolish or simply unaware behind the wheel, our connected car can save us from ourselves — and save many others as well.

Over one million people die each year from car accidents. The benefits of integrating connectivity and computing inside our cars and within our road systems are significant.

And yet…

I still want the car to remain mostly mechanical, always beautiful, powerful, visceral — all those things that are never considered relevant in Silicon Valley.

Where I come from, it was absolutely no coincidence the boy whose father let him borrow the Camaro Z28 happened to be dating the prom queen.

No parallel to this exists for the young man with the biggest PC tower or the newest smartphone.

When it comes to our cars, whether for 2015 or 2025, let us not place clock speed above top speed, throughput over horsepower, or user interface above road handling. Nodes have primal desires, too.


No Particular Place To Go

While few things in life are as joyous as a fast car, top down, the open road beckoning, music blaring, such moments are rare. No matter how beautiful or powerful the car, the daily commute can be a grind. The connected car helps mitigate this, delivering all the comforts of our modern, fully connected world, accessible via a tap on the screen, or a command from our voice.

Stuck in traffic? No worries. The smartphone-like cars of post-2015 will offer:

  • streaming music, your favorite podcasts, even videos (for the kiddies)
  • news, weather, market data — read aloud, even personalized, as your new car, like a giant rolling Siri, knows your interests
  • geofenced notifications
  • Twitter and Facebook updates, voice driven, naturally
  • the fastest routes to everywhere you want to go
  • the nearest gas stations and restaurants
  • driving analysis, perhaps even a driver ‘Klout’ score based on your speed, how hard you brake, how close you were driving to other vehicles
  • engine diagnostics
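A driver “Klout” score like the one imagined above could be derived from a handful of telematics signals. Here is a toy sketch; the inputs, weights, and thresholds are all invented for illustration, not any real product’s formula:

```python
def driver_score(avg_speed_over_limit_mph: float,
                 hard_brakes_per_100mi: float,
                 avg_following_distance_s: float) -> int:
    """Toy 0-100 driver score from hypothetical telematics inputs.
    Weights are arbitrary illustrations of the idea, nothing more."""
    score = 100.0
    score -= 2.0 * max(0.0, avg_speed_over_limit_mph)        # chronic speeding
    score -= 1.5 * hard_brakes_per_100mi                     # harsh braking
    score -= 5.0 * max(0.0, 2.0 - avg_following_distance_s)  # tailgating
    return max(0, min(100, round(score)))
```

A gentle driver (no speeding, no hard brakes, three seconds of following distance) scores 100; an aggressive one loses points on every axis. Whether anyone wants their insurer to see this number is another question entirely.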

These are all good. Silicon Valley is actively seeking to disrupt our commute. I stand with them. As our cars become increasingly more connected, tapping more computing power, more crowd wisdom, more algorithmic analysis, our driving should improve, our commutes should become more enjoyable, and ultimately, personal productivity should increase. Quite possibly, stress levels will go down.

Again, my selfish concern is that these measurable goods will increasingly lead to an emphasis on “cars” that maximize efficiency, comfort, UIs, and that offer the best search, the most up-to-date data, the sharpest display.

A box.

Help Me, Apple. You’re My Only Hope

Is it possible to have the best of tomorrow with the best of yesterday?


I believe in the beneficent power of technology and innovation. I fully appreciate that Big Tech, Big VC, and Big Government want a lead role in the multi-trillion-dollar Internet of Things revolution. All are eager to remake our existing infrastructure, to place “intelligence” inside our cars, to link driver, car, road, and metro transit system into a cohesive, smartly flowing whole. I accept their work will alter not only driving but possibly even remake our towns and cities.

Why, then, does this make me a bit uneasy?

I do not fear my next car will experience a blue screen of death. Well, not much. Nor am I terribly worried hackers will access my car’s data, which will no doubt be linked to a payment system that lets me speed through electronic tollbooths.

I fear Silicon Valley will fail to divine the value in what makes cars glorious, and reduce the ultimate driving machine to just one more computing device.

Should I be disheartened or joyful that Apple SVP Eddy Cue joined the Ferrari board in 2012? Or that Apple SVP Phil Schiller sees fit to have a Racer X avatar on his Twitter profile?


Will these Apple executives help keep our cars from becoming just the latest personal computer box?  I can’t afford a Ferrari, although I can pretend I’m Racer X — or possibly his brother, Speed. The question is, how long can I maintain the dream?

Android is Eating the World


Benedict Evans has a must-read slide deck from his “mobile is eating the world” presentation. I’m going to piggyback on his title a little and tackle the narrative that Android is eating the world. It is a narrative that is hard to escape, and it would be a significant point if it were a unified version of Android that was eating the world. However, when you take a step back and view Android in the big picture, you learn it is in fact an extremely fragmented Android which is eating the world.

I’m fond of saying that Android in its purest form is not a platform. It is a technology which enables companies to create platforms. Samsung is using Android to create a platform. Amazon has used Android to create a platform. Nearly every major OEM in China is using Android to create a platform. ((There are 100 different app stores in China based on Android; 20 of them are major players, and each has its own billing and certification process.)) And Google is using Android to create a Google-specific platform. ((Consumer behavior, by way of app download trends and purchasing, varies greatly by app store.)) All of these companies and more are taking Android to create their own platforms and their own ecosystems. There is no single unified Android codebase which is dominating the world. There is no single Android app store, and there is no single Android ecosystem. What does exist is a vast array of different platforms and different ecosystems running on this underlying kernel called Android.

Where I think the confusion in the “Android is eating the world” narrative lies is in the assumption that Google = Android: that every bit of the “Android is winning” narrative benefits Google. This view represents a clear misunderstanding of what Android is and why it exists.

The Role of the Android Platform

There is only one company in the market right now that does not need platform assistance from a third party. That is Apple. Every other hardware company needs a third party to provide them with software to run on their hardware. Microsoft has been this company for most of the computing era. Google, with Android, has provided the Microsoft alternative to the mobile world. Hardware OEMs need this third-party software support because they need a company to provide a platform and standards support for a wide variety of technologies.

However, between the two, Android offers hardware OEMs what Microsoft does not: the ability to differentiate. Ship Windows or Windows Phone and your product, from a software standpoint, is no different from your competitors’. That means your basis to compete is limited largely to form and price. Android, on the other hand, allows hardware companies to take the platform Google supports with standards and driver support and customize it to offer some level of visual and feature differentiation at the software level. Microsoft is providing a standardized, unified platform. Google is providing a standardized platform from which to create other platforms and ecosystems. These solutions are very different and enable entirely different ecosystems.

The Multiple Android Markets

I wrote a few weeks ago about how Android is enabling appliance electronics to get more intelligent. In this regard, Android is very similar to embedded Linux. Android is likely poised to power refrigerators, thermostats, coffee pots, robots, you name it. Android as a platform in this regard is very interesting. But again, this is the embedded version of Android, not the one that powers smartphones, tablets, TVs, etc. That is a very different Android. This embedded version of Android is the most interesting to me.

The other Android market is the one for products like smartphones and tablets. This market is the one that garners the most attention. Yet when you look at Android’s smartphone and tablet market share, you see that the bulk of it is made up of devices in the mid-to-low range of price points. Android’s share of premium handsets is very small, less than 15% globally. The vast majority of Android’s market share rise over the past few quarters has come from the low end: devices costing less than $250 wholesale. ((Creative Strategies, Inc. estimates.))

The same is true in tablets where last quarter Android white box tablets costing less than $100 made up just over 30% of device shipments. ((IDC estimates.))

Looking at the share of devices at certain price points, and what OS they run, it is clear that Android owns the low end and Apple owns the high end. In many emerging markets there will be a battle for the mid-range between Apple and Android OEMs. Looking at Android in this light highlights its importance. Had Google not released Android, what platform would have risen to serve the low end? Android is in fact helping develop the developing parts of the world. From a technology standpoint, Android’s role in helping to develop emerging markets is a good thing.

So while it is true that Android is eating the world, it is doing so in a very non-unified way outside of driver and standards support. This adds a level of complexity to any analysis of Android. Android is eating the world, but what is interesting is that Google is not the only owner of Android. Android is owned by all, and it benefits all in entirely different ways.

When you take a step back, you realize that we have never had anything quite like Android before. While we may make assumptions about what Google may do with their version of Android, we can’t make the same assumptions about what other hardware companies will do with their versions of Android. To keep enabling this multiplicity of Android ecosystems, all Google has to do is keep up with driver and standards support. Perhaps this was the point of Android all along.

What remains unclear is how Google can benefit (which may not be the point, or even necessary) from the landscape Android is enabling. They have all but given up in China. iOS devices are worth more to them in every major developed market. They would of course love to see this change, but there is no evidence to suggest it will. So Android dominates the low end of the tablet and smartphone market and commodity connected electronics. Time will tell in what ways this benefits Google. But as I mentioned, it may not be the point of Android, or even necessary for Google.

Google does not equal Android. You understand this when you can see the forest for the trees.

How FitBit Changed My Life

When technologists and pundits weigh in on technology trends and products, we are always quick to point out that technology needs to meld with our lives as opposed to us changing to fit the device.  In most cases that is true… but not all.  My FitBit experience was different from many I’ve had in that I changed key items in my lifestyle as opposed to FitBit melding with me.  FitBit literally changed my life, and I think there are lessons that we can derive from that.

Let me give a primer on FitBit first for those unfamiliar.  FitBit develops a line of health and fitness wearables and a scale that connect to mobile, PC and cloud services. The wearables fall into three categories: one that’s worn on the wrist (Flex), one that’s designed to be clipped on (Zip), and one that fits hidden inside a pocket (One). These devices track steps, calories, distance, very active minutes, floors, hours slept, and times woken up from sleep. The scale, Aria, captures weight and BMI.  All of this information is then fed into your computer or phone, where you can view it in an easy-to-read phone or web dashboard.

I have the One wearable and Aria scale.  Every morning when I get up, I’ll read the One’s display to see how long I slept.  Then I’ll reset it by holding down the button for a few seconds to let it know I’m done sleeping.  After that I jump on the Aria scale, which, as anyone who’s trying to lose a few pounds like me knows, can be an “interesting” experience.  Aria tells me my weight and BMI and then syncs directly with the cloud.

After getting dressed, I’ll make sure I pop the One into my pocket.  I have goals set up for steps, so, like checking a smartphone for social media updates, I’m constantly checking the One’s tiny display to see where I am during the day with my number of steps and derived calories.  If I haven’t met my goals, I will literally change what I’m doing.  I’ll take the stairs more.  I’ll walk to something versus driving. I’ll take the long way instead of the shortcut.   At the airport, I’ll walk versus taking the people movers.

At night, I changed my behaviors, too.  I will literally sleep with the One.  It comes with a soft wristband: you slip the One inside and then wear it for the night.  I then hold the button down for a few seconds to tell it I’m “sleeping”.  When I first thought about wearing something at night, I had a visceral negative reaction.  I don’t even like to wear any jewelry to bed and certainly didn’t expect to get comfortable with a wrist strap. After one night, it became reflex.  It helped that my wife didn’t make any comments about the “geek” in me.

So why is any of this relevant to us high-tech folk?

I believe that the more changes one is willing to make in their life to fit in a tech device, the more important, meaningful and game-changing the device.  Typically, the underlying driver is something deeper than it appears on the surface.  Look at smartphones as an example.  We now place smartphones by our beds and take them everywhere we go, incessantly checking them hourly (by the minute in the case of my teenage girls).  This is driven by the strong need to communicate and be part of a community.

My willingness to have the FitBit on my side or on my wrist 24×7, and to change what and how I do things, is driven by an even stronger need: the need to survive and thrive.  I’ve made the personal connection between the device and my ability to live a healthier and longer life, so I’m willing to do many things differently that would be considered odd or anti-social.  Let me take this one step further, and a bit on the scary side.

If you’re like me, when you hear the word “implant” you think “artificial,” or maybe some pain.  If I’m so willing to make changes driven by the need to survive and thrive, and an implanted health device came along that could make me even healthier, what would I do?  It’s scary to think I’d even consider it, but now I would.  Let me step back a bit and close this out.

I believe that one of the best ways to evaluate the stickiness and success of a future consumer tech product is to look at the unique need it can fulfill.  Is it about love, community, communication, health, wealth?  Those products that uniquely fill those needs, or help you get there, will be the stickiest.  FitBit and the class of products like it are making that personal connection with a lot of people, and I’m bullish about their future.  This can also help us predict the tech future, as perilous as that is.  Want to know how Google Glass or the Xbox One will do?  Run it by the fulfillment test and see.